Game Solvers/Engines

Game solvers and engines have been a major source of interest for programmers throughout the history of computing. These programs are capable of either playing a game at a superhuman level or, where it is feasible, solving the game outright. Developers of this software use a variety of methods, including SAT solvers and AI models. These engines have also expanded from classic board games such as chess to modern video games such as DOTA II.

Neural Networks

A popular method of building systems that find the best move in a game is an AI powered by a neural network. These systems loosely model the human brain: large numbers of weighted nodes stand in for neurons, and the connections between them form a computational model that lets the computer estimate the most likely answer for a given input. More recently, this is paired with deep learning algorithms, which allow the network to be trained on large amounts of data and then produce results based on that training.
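As a rough illustration (a minimal sketch, not the architecture of any particular engine), the snippet below shows a tiny feedforward network that maps a board encoding to a single evaluation score. The layer sizes, the 64-feature board encoding, and the class name are all illustrative assumptions:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

class TinyValueNetwork:
    """A minimal feedforward network mapping a board encoding to a
    single evaluation score. All sizes here are illustrative only."""

    def __init__(self, n_inputs=64, n_hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        # The weighted connections play the role of the "neurons" above.
        self.w1 = rng.normal(0.0, 0.1, (n_inputs, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0.0, 0.1, (n_hidden, 1))
        self.b2 = np.zeros(1)

    def evaluate(self, board_vector):
        """Forward pass: input features -> hidden layer -> scalar score."""
        hidden = relu(board_vector @ self.w1 + self.b1)
        score = np.tanh(hidden @ self.w2 + self.b2)  # squashed into [-1, 1]
        return float(score[0])

# A hypothetical 8x8 board flattened into 64 input features.
net = TinyValueNetwork()
print(net.evaluate(np.zeros(64)))
```

In a real engine this forward pass would be deep (many layers) and the weights would be set by training rather than random initialization.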

Go

[Image: Game of Go being played by AlphaGo as white]

The board game Go was long considered intractable for computers because of its complexity: with roughly 250 legal moves per position over games lasting around 150 moves, the number of possible games is on the order of 250^150. Purely algorithmic systems therefore had difficulty finding statistically strong moves, and while computers could already beat the best chess players in the world, professional Go players still needed to give a machine a handicap before losing to it. AlphaGo was the first Go engine able to beat a professional player, and the version AlphaGo Master went 60–0 against professional players online[1]. The model was then further refined into AlphaZero, a more general AI used to play various board games. Instead of being trained on human games to learn optimal moves, AlphaZero trained itself entirely through self-play, with no access to human moves[2]. AlphaZero is now used as a training tool to help players find mistakes in, and optimize, their own play.
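The self-play idea can be illustrated on a far smaller scale than AlphaZero, which combines a deep network with tree search. The toy sketch below learns single-pile Nim purely from games it plays against itself; the choice of game, the hyperparameters, and the update rule are all illustrative assumptions:

```python
import random

def self_play_nim(pile=21, take_max=3, episodes=5000, epsilon=0.1, lr=0.5):
    """Toy self-play learner for single-pile Nim (take 1-3 stones,
    taking the last stone wins). No human games are ever consulted."""
    # value[s]: estimated chance that the player to move at pile size s wins
    value = {s: 0.5 for s in range(pile + 1)}
    value[0] = 0.0  # facing an empty pile means the opponent just won
    for _ in range(episodes):
        s, trajectory = pile, []
        while s > 0:
            moves = list(range(1, min(take_max, s) + 1))
            if random.random() < epsilon:
                take = random.choice(moves)                    # explore
            else:
                take = min(moves, key=lambda m: value[s - m])  # exploit
            trajectory.append((s, take))
            s -= take
        # Back up: a state is only as good as the state it leaves the opponent.
        for s, take in trajectory:
            target = 1.0 - value[s - take]
            value[s] += lr * (target - value[s])
    return value

values = self_play_nim()
# Multiples of 4 should converge toward 0 (losing for the player to move).
print({s: round(values[s], 2) for s in range(1, 9)})
```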

Chess

[Image: Garry Kasparov]

Chess has long been one of the most popular and historic board games of all time, which made it a prime target for engineers looking to create a computer better than the best human players. The World Computer Chess Championship has been held periodically since 1974 to showcase the best chess engines of the day. IBM's Deep Blue beat the grandmaster Garry Kasparov in 1997[3], signaling to the world that computers were beginning to overtake humans in the realm of chess. Today, chess engines are not only far ahead of any human player but have also been adapted to play at various Elo ratings, allowing developing players to learn from opponents of different skill levels[4]. Additionally, sites such as chess.com offer engines that imitate real-life figures, from chess champions such as Hikaru Nakamura to internet celebrities such as Ludwig Ahgren.[5][6]
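Strength-limited play of this kind can be sketched with the python-chess library and any UCI engine; the example below assumes a local Stockfish binary on the system PATH and uses Stockfish's UCI_LimitStrength and UCI_Elo options:

```python
import chess
import chess.engine

# Sketch: an engine throttled to a target rating playing out a game
# against itself. Assumes python-chess and a Stockfish binary on PATH.
engine = chess.engine.SimpleEngine.popen_uci("stockfish")
engine.configure({"UCI_LimitStrength": True, "UCI_Elo": 1400})

board = chess.Board()
while not board.is_game_over():
    result = engine.play(board, chess.engine.Limit(time=0.1))
    board.push(result.move)

print(board.result())
engine.quit()
```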

Controversy

Since machines can now beat human players at chess, the opportunity to cheat at the game is much more prevalent. Online matches in particular are scrutinized for players who play too perfectly, moving their pieces as if they were an engine. One very public accusation came after Hans Niemann upset world champion Magnus Carlsen. Carlsen believed that Niemann's moves were too close to the engine's for him to have found them during the match, and Niemann had previously been caught cheating in online games[7]. Extensive checks for external devices are now commonplace at over-the-board chess events to prevent players from receiving information from engines during a match.

SAT Solvers

When a game can be encoded in Boolean algebra, a SAT solver can be used to find mathematically correct solutions. These solvers are often accessed through Python libraries, and many can take advantage of the parallel processing available on modern computers. Since any problem that can be reduced to Boolean satisfiability can be handled by a SAT solver, this approach applies to a wide variety of games.
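As a minimal sketch of the workflow (assuming the python-sat package, one of several such libraries), clauses are lists of integer literals and the solver reports a satisfying assignment if one exists:

```python
# Minimal sketch using the python-sat package (pip install python-sat).
from pysat.solvers import Glucose3

# Variables are positive integers; a negative literal means logical NOT.
# Encode (x1 OR x2) AND (NOT x1 OR x2) AND (NOT x2 OR x3).
with Glucose3() as solver:
    solver.add_clause([1, 2])
    solver.add_clause([-1, 2])
    solver.add_clause([-2, 3])
    if solver.solve():
        print(solver.get_model())  # a satisfying assignment, e.g. [-1, 2, 3]
```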

Sudoku

[Image: Example Sudoku puzzle]

A common example of the utility of SAT solvers is the popular newspaper game Sudoku. Sudoku is played on a 9x9 grid divided into 3x3 boxes, which must be filled with the digits 1–9 so that no digit repeats within any row, column, or box. Each (cell, digit) pair can be represented by a Boolean variable, and a formula can be built from these constraints together with the starting digits provided by the puzzle. The SAT solver then finds the puzzle's solution by satisfying that formula[8].
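The standard encoding introduces one variable for every (row, column, digit) triple. The sketch below (again assuming the python-sat package) builds those clauses and reads a solved grid back out of the model:

```python
# Sketch of the standard Sudoku CNF encoding.
# Variable v(r, c, d) is true when cell (r, c) contains digit d+1.
from itertools import combinations
from pysat.solvers import Glucose3

def v(r, c, d):
    return 81 * r + 9 * c + d + 1  # r, c, d all in 0..8

def sudoku_clauses(givens):
    """givens: dict mapping (row, col) -> digit 1..9 from the puzzle."""
    clauses = []
    for r in range(9):
        for c in range(9):
            # Each cell holds at least one digit...
            clauses.append([v(r, c, d) for d in range(9)])
            # ...and at most one digit.
            for d1, d2 in combinations(range(9), 2):
                clauses.append([-v(r, c, d1), -v(r, c, d2)])
    for d in range(9):
        # Each digit appears in every row, every column, and every 3x3 box.
        for i in range(9):
            clauses.append([v(i, c, d) for c in range(9)])
            clauses.append([v(r, i, d) for r in range(9)])
        for br in range(0, 9, 3):
            for bc in range(0, 9, 3):
                clauses.append([v(br + r, bc + c, d)
                                for r in range(3) for c in range(3)])
    for (r, c), digit in givens.items():
        clauses.append([v(r, c, digit - 1)])  # unit clause for each given
    return clauses

with Glucose3(bootstrap_with=sudoku_clauses({(0, 0): 5})) as solver:
    if solver.solve():
        model = set(solver.get_model())
        grid = [[next(d + 1 for d in range(9) if v(r, c, d) in model)
                 for c in range(9)] for r in range(9)]
        print(grid[0])  # first row of a completed grid
```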

Queen Armies

Chess puzzles can also be attacked with SAT solvers. One common example extends the eight queens puzzle, in which eight queens must be placed on a chessboard so that no two of them attack each other. The queen armies puzzle instead divides the queens into two armies, no member of which may attack a queen of the other army. Each army can be given its own set of Boolean variables, and the board size can be encoded as well, allowing the SAT solver to search the space of placements far more efficiently than other algorithms[9].
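For concreteness, here is a sketch of the simpler base puzzle as SAT; the two-army variant adds a second set of variables plus clauses forbidding attacks between armies, but follows the same pattern:

```python
# Sketch: eight queens as SAT, using the same python-sat package as above.
from itertools import combinations
from pysat.solvers import Glucose3

N = 8

def q(r, c):
    return N * r + c + 1  # variable: a queen stands on square (r, c)

def attacks(r1, c1, r2, c2):
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

clauses = []
for r in range(N):
    clauses.append([q(r, c) for c in range(N)])  # at least one queen per row
squares = [(r, c) for r in range(N) for c in range(N)]
for (r1, c1), (r2, c2) in combinations(squares, 2):
    if attacks(r1, c1, r2, c2):
        clauses.append([-q(r1, c1), -q(r2, c2)])  # no pair may attack

with Glucose3(bootstrap_with=clauses) as solver:
    solver.solve()
    model = set(solver.get_model())
    print([(r, c) for (r, c) in squares if q(r, c) in model])
```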

IBM Watson

Another notable example of a game engine is IBM's Watson system, an early attempt at the kind of question answering that large language models (LLMs) perform today. It took large amounts of training data and used the DeepQA architecture to deconstruct clues and find correct answers. It relies on massive parallelism to consider many candidate answers at once, connecting them to many different sources of both shallow and deep knowledge and assigning each a confidence estimate. This allows the system to break down questions and build up solutions[10].
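The overall pattern, stripped of everything that made Watson hard, looks roughly like the sketch below. This is an illustrative simplification, not IBM's code: the candidate generator, the stand-in scorers, and the averaging rule are all assumptions (DeepQA learned weights for hundreds of scorers rather than averaging two).

```python
# Heavily simplified DeepQA-style pattern: generate candidates in
# parallel, score each against several evidence sources, and combine
# the scores into a confidence estimate. All names are illustrative.
from concurrent.futures import ThreadPoolExecutor

def candidate_answers(clue):
    # DeepQA drew candidates from many search and knowledge components.
    return ["candidate A", "candidate B", "candidate C"]

def evidence_scorers():
    # Stand-ins for shallow (keyword overlap) and deep (type checking,
    # relation matching) evidence scorers, each returning a score in [0, 1].
    return [lambda clue, ans: 0.4, lambda clue, ans: 0.7]

def confidence(clue, answer):
    scores = [score(clue, answer) for score in evidence_scorers()]
    return sum(scores) / len(scores)

def answer(clue):
    cands = candidate_answers(clue)
    with ThreadPoolExecutor() as pool:  # "massive parallelism", in miniature
        confs = list(pool.map(lambda a: confidence(clue, a), cands))
    # Return the highest-confidence candidate; a real system would only
    # buzz in when this confidence cleared a learned threshold.
    return max(zip(cands, confs), key=lambda pair: pair[1])

print(answer("This 9x9 logic puzzle appears in many newspapers"))
```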

Jeopardy

[Image: Replica of the stage from Watson's appearance on Jeopardy]

Similar to how chess served as the basis for earlier AI engines, the team behind Watson used Jeopardy as the inspiration and goal for their next development. The language processing was built around analyzing a Jeopardy "answer" clue and quickly finding the correct "question" to respond with. This was demonstrated in a televised match against two of the best players in Jeopardy history, Ken Jennings and Brad Rutter. Watson beat both human competitors and won the $1 million prize[11]. The demonstration, like the victory over Kasparov, marked another win for machines over humans in competition.

AI Video Game Bots

After board game engines could consistently beat human opponents, developers looked to new fields in which to push AI further, leading them to build systems that could play video games better than human opponents. Unlike board games, which have a relatively limited set of inputs and turn-based decisions, video games present a far larger space of inputs and demand decisions in real time. This creates a much greater challenge: the AI must not only think as well as a human player, but also as fast.

Quake III

The arena FPS Quake III was the target of DeepMind's research into a game engine capable of playing a first-person shooter. The capture-the-flag (CTF) game mode was used for training, since it presents the AI with a wide range of decisions, from pursuing kills against enemy agents to playing around the flag itself. Using the FTW ("For The Win") agent, the bots were able to outperform typical human players[12].

DOTA II

The popular MOBA DOTA II made headlines when AI first began beating humans at it. The match between the reigning world champions and OpenAI's team of computer-controlled players was another landmark in game engine development, with the AI coming out on top. DOTA II professionals now use the AI as a practice tool: testing themselves against an opponent that plays near-optimally helps them gain an edge over players who cannot match the computer's abilities.[12]

StarCraft II

Similar to how Go is considered one of the most difficult board games because of the huge number of options available to a player, RTS games such as StarCraft II are often regarded as the most difficult genre, owing both to the high actions-per-minute a player must sustain throughout a game and to the extreme strategic depth that comes from the large number of different units a player can create and control during a match. While StarCraft II AIs have not been able to consistently beat the top echelon of human competition, they have reached the higher ranks of online play, achieving the rank of Grandmaster, which placed them above 99.8% of ranked players.[12]

Future Work

As AI continues to evolve, developers will continue to seek out new problems to overcome, and the field will keep beating humans at games we have not even thought of yet. Robotics is one potential avenue, since it combines the knowledge-based skills these game engines have already conquered with the physical limits of what humans are capable of. Early examples already exist in tabletop games such as air hockey, and one day a televised robots-versus-humans match in something like ice hockey may be in our future.

  1. ^ "Humans Mourn Loss After Google Is Unmasked as China's Go Master". Wall Street Journal. 2017-01-05. ISSN 0099-9660. Retrieved 2023-06-28.
  2. ^ Silver, David; Hubert, Thomas; Schrittwieser, Julian; Antonoglou, Ioannis; Lai, Matthew; Guez, Arthur; Lanctot, Marc; Sifre, Laurent; Kumaran, Dharshan; Graepel, Thore; Lillicrap, Timothy; Simonyan, Karen; Hassabis, Demis (2018-12-07). "A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play". Science. 362 (6419): 1140–1144. doi:10.1126/science.aar6404. ISSN 0036-8075.
  3. ^ Chess.com (2018-10-12). "Kasparov vs. Deep Blue | The Match That Changed History". Chess.com. Retrieved 2023-06-28.
  4. ^ Zang, Hongyu; Yu, Zhiwei; Wan, Xiaojun (2019). "Automated Chess Commentator Powered by Neural Chess Engine". Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA, USA: Association for Computational Linguistics. doi:10.18653/v1/p19-1597.
  5. ^ Chess.com (News) (2021-01-29). "Play Chess Against PogChamps 3 Bots". Chess.com. Retrieved 2023-06-28.
  6. ^ "Hikaru-bot - Chess Profile". Chess.com. Retrieved 2023-06-28.
  7. ^ Chappell, Bill (September 27, 2022). "Chess world champion Magnus Carlsen accuses Hans Niemann of cheating". NPR.
  8. ^ Charalambidis, Angelos; Handjopoulos, Konstantinos; Rondogiannis, Panos; Wadge, William W. (2010), "Extensional Higher-Order Logic Programming", Logics in Artificial Intelligence, Berlin, Heidelberg: Springer Berlin Heidelberg, pp. 91–103, ISBN 978-3-642-15674-8, retrieved 2023-06-28
  9. ^ Wolfram, Christopher. "Chess Queen Armies With SAT Solvers". christopherwolfram.com. Retrieved 2023-06-28.
  10. ^ Ferrucci, David; Brown, Eric; Chu-Carroll, Jennifer; Fan, James; Gondek, David; Kalyanpur, Aditya A.; Lally, Adam; Murdock, J. William; Nyberg, Eric; Prager, John; Schlaefer, Nico; Welty, Chris (2010-07-28). "Building Watson: An Overview of the DeepQA Project". AI Magazine. 31 (3): 59–79. doi:10.1609/aimag.v31i3.2303. ISSN 2371-9621.
  11. ^ Best, Jo (2013-09-09). "IBM Watson: The inside story of how the Jeopardy-winning supercomputer was born, and what it wants to do next". TechRepublic. Retrieved 2023-06-28.
  12. ^ a b c Yin, Qi-Yue; Yang, Jun; Huang, Kai-Qi; Zhao, Mei-Jing; Ni, Wan-Cheng; Liang, Bin; Huang, Yan; Wu, Shu; Wang, Liang (2023-06-01). "AI in Human-computer Gaming: Techniques, Challenges and Opportunities". Machine Intelligence Research. 20 (3): 299–317. doi:10.1007/s11633-022-1384-6. ISSN 2731-5398.