A Video Game Based on Optimal Control and Elementary Statistics

Affiliation(s)

Dipartimento di Scienze Sociali “D. Serrani”, Università Politecnica delle Marche, Ancona, Italy.

CERI-Centro di Ricerca Previsione Prevenzione e Controllo dei Rischi Geologici, Università di Roma “La Sapienza”, Roma, Italy.

CERI-Centro di Ricerca Previsione Prevenzione e Controllo dei Rischi Geologici, Università di Roma “La Sapienza”, Roma, Italy.

CERI-Centro di Ricerca Previsione Prevenzione e Controllo dei Rischi Geologici, Università di Roma “La Sapienza”, Roma, Italy.


ABSTRACT

The video game presented in this paper is a prey-predator game in which two preys (human players) must avoid three predators (automated players) and must reach a location in the game field (the computer screen) called the preys’ home. The game is a sequence of matches, and the human players (preys) must cooperate in order to achieve the best performance against their opponents (predators). The goal of the predators is to capture the preys, that is, to have a “rendez vous” with the preys while using a small amount of the “resources” available to them. The score of the game is assigned, following a set of rules, to the prey team, not to the individual prey. In some situations the rules imply that, to achieve the best score, it is convenient for the prey team to sacrifice one of its members. The video game pursues two main purposes. The first is to show how the closed loop solution of an optimal control problem and elementary statistics can be used to generate (game) actors whose movements satisfy the laws of classical mechanics and whose behaviour simulates a simple form of intelligence. The second is “educational”: in order to be successful in the game, the human players must understand the restrictions posed on their movements by the laws of classical mechanics and must cooperate with each other. The video game has been developed with players aged between five and thirteen years in mind. Playing the video game, these children acquire an intuitive understanding of the basic laws of classical mechanics (Newton’s dynamical principle) and enjoy cooperating with their teammate. The video game has been tested on a sample of a few dozen children. Children aged between five and eight years find the game amusing and, after playing a few matches, develop an intuitive understanding of the laws of classical mechanics.
They are able to cooperate in making fruitful decisions based on the positions of the preys (themselves) and of the predators (their opponents) and on the physical limitations on the movements of the game actors. Interest in the game decreases as the age of the players increases: the game is too simple to interest a teenager. The game engine consists in the solution of an assignment problem, in the closed loop solution of an optimal control problem and in the adaptive choice of some parameters. At the beginning of each match, and when necessary during a match, an assignment problem is solved: the game engine chooses how to assign to the predators the preys to chase. The resulting assignment implies some cooperation among the predators and defines the optimal control problem used to compute the strategies of the predators during the match that follows. These strategies are determined as the closed loop solution of the optimal control problem considered and can be regarded as a (first) form of artificial intelligence (AI) of the predators. In the optimal control problem the preys and the predators are represented as point masses moving according to Newton’s dynamical principle under the action of friction forces and of active forces. The equations of motion of these point masses are the constraints of the control problem and are expressed through differential equations. Formulating the decision process through optimal control and Newton’s dynamical principle allows us to develop a game where the effectiveness and the goals of the automated players can be changed during the game in an intuitive way, simply by modifying the values of some parameters (i.e. mass, friction coefficient, ...). In a sequence of game matches the predators (automated players) have “personalities” that try to simulate human behaviour.
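The point-mass dynamics described above can be sketched in a few lines. This is a minimal, illustrative implementation assuming a linear friction law and a semi-implicit Euler integrator; the abstract does not specify the force law or integration scheme actually used in the game, so both are assumptions here.

```python
def step(pos, vel, force, mass=1.0, k=0.5, dt=0.05):
    """Advance a point mass one time step under an active force and
    a linear friction force: m * a = F_active - k * v (assumed form)."""
    ax = (force[0] - k * vel[0]) / mass
    ay = (force[1] - k * vel[1]) / mass
    new_vel = (vel[0] + ax * dt, vel[1] + ay * dt)
    new_pos = (pos[0] + new_vel[0] * dt, pos[1] + new_vel[1] * dt)
    return new_pos, new_vel

# A heavier actor (larger mass) accelerates more slowly under the same
# active force; this is the lever used to tune predator effectiveness.
pos, vel = (0.0, 0.0), (0.0, 0.0)
for _ in range(200):
    pos, vel = step(pos, vel, force=(1.0, 0.0))
# With linear friction, the speed saturates near force/k (terminal velocity).
```

Integrating such equations is what forces the human players to plan ahead: an actor cannot stop or turn instantly, because its velocity can only change at a rate limited by the mass and the available active force.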
The predator personalities are determined by an elementary statistical analysis of the points scored by the preys in the matches already played, and consist in the adaptive choice of the value of a parameter (the mass) that appears in the differential equations defining the movements of the predators. The values taken by this parameter determine the behaviour of the predators and their effectiveness in chasing the preys. The predator personalities are a (second) form of AI, based on elementary statistics, that goes beyond the intelligence used to chase the preys within a single match. In a sequence of matches the predators, using this second form of AI, adapt their behaviour to the preys’ behaviour. The video game can be downloaded from the website: http://www.ceri.uniroma1.it/ceri/zirilli/w10/.
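The adaptive choice of the mass parameter from match statistics could be sketched, for instance, as follows. The specific statistic (the mean of recent prey scores), the threshold and the candidate mass values below are illustrative assumptions, not taken from the paper.

```python
from statistics import mean

def choose_predator_mass(recent_scores, light=0.8, heavy=1.5):
    """Re-choose the predators' mass parameter after each match from an
    elementary statistic of the preys' recent scores (assumed rule):
    a lower mass yields nimbler, harder-to-escape predators."""
    if not recent_scores:
        return 1.0  # default mass before any match has been played
    if mean(recent_scores) > 5.0:
        return light   # preys are doing well: make the predators nimbler
    return heavy       # preys are struggling: make the predators sluggish

print(choose_predator_mass([8, 9, 7]))  # preys scoring high -> light mass
print(choose_predator_mass([1, 2, 3]))  # preys scoring low  -> heavy mass
```

Because the mass enters the predators' equations of motion directly, this single adaptive parameter changes their agility, and hence the game's difficulty, between matches without altering the control strategy itself.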


Cite this paper

Giacinti, M., Mariani, F., Recchioni, M. and Zirilli, F. (2013) A Video Game Based on Optimal Control and Elementary Statistics. *Intelligent Information Management*, **5**, 103-116. doi: 10.4236/iim.2013.54011.

References

[1] A. K. Bay-Hinitz, R. F. Peterson and H. R. Quilitch, “Cooperative Games: A Way to Modify Aggressive and Cooperative Behaviours in Young Children,” Journal of Applied Behavior Analysis, Vol. 27, No. 3, 1994, pp. 435-446. doi:10.1901/jaba.1994.27-435

[2] M. V. Aponte, G. Levieux and S. Natkin, “Scaling the Level of Difficulty in Single Player Video Games,” Proceedings of the 8th International Conference on Entertainment Computing, Paris, 3-5 September 2009, S. Natkin and J. Dupire, Eds., ICEC 5709, 2009, pp. 24-35.

[3] R. Prada and A. Paiva, “Teaming up Humans with Autonomous Synthetic Characters,” Artificial Intelligence, Vol. 173, No. 1, 2009, pp. 80-103. doi:10.1016/j.artint.2008.08.006

[4] I. Millington, “Artificial Intelligence for Games,” Elsevier, Morgan Kaufmann Publications, 2006.

[5] L. Sha, S. He, J. Wang, J. Yang, Y. Gao, Y. Zhang and X. Yu, “Creating Appropriate Challenge Level Game Opponent by the Use of Dynamic Difficulty Adjustment,” 2010 6th International Conference on Natural Computation (ICNC 2010), IEEE Circuits and Systems Society, 2010, pp. 3897-3901.

[6] M. Giacinti, F. Mariani, M. C. Recchioni and F. Zirilli, “A Video Game Based on Elementary Equations,” Annals of Mathematics and Artificial Intelligence, 2012.

[7] N. Beume, H. Danielsiek, T. Hein, B. Naujoks, N. Piatkowski, R. Stüer, A. Thom and S. Wessing, “Towards Intelligent Team Composition and Maneuvering in Real-Time Strategy Games,” IEEE Transactions on Computational Intelligence and AI in Games, Vol. 2, No. 2, 2010, pp. 82-98. doi:10.1109/TCIAIG.2010.2047645

[8] D. B. Fogel, T. J. Hays and D. R. Johnson, “A Platform for Evolving Intelligently Interactive Adversaries,” Biosystems, Vol. 85, No. 1, 2006, pp. 72-83. doi:10.1016/j.biosystems.2006.02.010

[9] V. Kokkevis, “Practical Physics for Articulated Characters,” Game Developers Conference 2004, San Jose, 2004, pp. 1-16. http://www.red3d.com/cwr/games/#ai-papers

[10] I. Millington, “Game Physics Engine Developments,” Elsevier Inc., San Francisco, 2007.

[11] C. W. Reynolds, “Flocks, Herds, and Schools: A Distributed Behavioral Model,” Computer Graphics, Vol. 21, No. 4, 1987, pp. 25-34. doi:10.1145/37402.37406

[12] S. I. Nishimura and T. Ikegami, “Emergence of Collective Strategies in a Prey-Predator Game Model,” Artificial Life, Vol. 3, No. 4, 1997, pp. 243-260. doi:10.1162/artl.1997.3.4.243

[13] S. Kim, C. Hoffman and J. M. Lee, “An Experiment in Rule-Based Crowd Behaviour for Intelligent Games,” ICCIT, 2009 IEEE Fourth International Conference on Computer Sciences and Convergence Information Technologies, Seoul, 24-26 November 2009, pp. 410-415. doi:10.1109/ICCIT.2009.239

[14] K. Gopalsamy, “Exchange of Equilibria in Two Species Lotka-Volterra Competition Models,” The Journal of the Australian Mathematical Society (Series B), Vol. 24, No. 2, 1982, pp. 160-170. doi:10.1017/S0334270000003659

[15] B. Lahey, W. Burleson, C. N. Jensen, N. Freed, P. Lu and K. Muldner, “Integrating Video Games and Robotic Play in Physical Environments,” Proceedings of the 2008 ACM SIGGRAPH Symposium on Video Games, Los Angeles, ACM Publishing, New York, 2008, Vol. 1, pp. 153-170.

[16] B. Lahey, N. Freed, P. Lu, C. N. Jensen, K. Muldner and W. Burleson, “Human-Robot Interactions to Promote Play and Learning,” Proceedings of the 8th International Conference on Interaction Design and Children (IDC 2009), Como, ACM Publishing, New York, 2009, pp. 280-281.

[17] M. Athans, “On Optimal Allocation and Guidance Laws for Linear Interception and Rendez Vous Problems,” IEEE Transactions on Aerospace and Electronic Systems, Vol. AES-7, No. 5, 1971, pp. 843-853. doi:10.1109/TAES.1971.310324

[18] R. Isaacs, “Differential Games,” Dover Publications, New York, 1999.

[19] T. L. Friesz, “Dynamic Optimization and Differential Games,” International Series in Operations Research & Management Science, Vol. 135, Springer, New York, 2010, pp. 138-142.
