Monday, June 25, 2018
AI Algorithms Crushed Human Players in Dota 2 Video Game
 
Researchers at OpenAI have created a bot that beats the world's top professionals at 1v1 matches of Dota 2 under standard tournament rules.

OpenAI, a nonprofit based in San Francisco, California, and backed by billionaire Elon Musk, developed software that can master complex games combining fast real-time action, longer-term strategy, imperfect information, and team play.

OpenAI's team of five neural networks, OpenAI Five, has started to defeat amateur human teams at Dota 2. The company aims to beat a team of top professionals at The International in August, albeit with a limited set of heroes.

Dota 2 is one of the most popular and complex esports games in the world, with creative and motivated professionals who train year-round to earn part of Dota's annual $40M prize pool (the largest of any esports game).

Dota 2 is played between two teams of five players, each player controlling a separate character called a "hero" with unique powers and weapons. Each team is assigned a base on opposing ends of a map whose details can only be learned through exploration, and each team must battle its way into the opposing team's territory to destroy a structure called an Ancient.

The Dota rules are very complex - the game has been actively developed for over a decade, with game logic implemented in hundreds of thousands of lines of code. This logic takes milliseconds per tick to execute, versus nanoseconds for chess or Go engines. The game also gets an update about once every two weeks, constantly changing the environment semantics.

A Dota-playing AI must master the following:

  • Long time horizons. Dota games run at 30 frames per second for an average of 45 minutes, resulting in roughly 80,000 ticks per game. Most actions (like ordering a hero to move to a location) have minor impact individually, but some individual actions, like town portal usage, can affect the game strategically; some strategies can play out over an entire game. OpenAI Five observes every fourth frame, yielding about 20,000 moves per game. Chess usually ends before 40 moves, Go before 150, with almost every move being strategic. (The arithmetic behind these figures is sketched in the code after this list.)
  • Partially-observed state. Units and buildings can only see the area around them. The rest of the map is covered in a fog hiding enemies and their strategies. Strong play requires making inferences based on incomplete data, as well as modeling what one's opponent might be up to. Both chess and Go are full-information games.
  • High-dimensional, continuous action space. In Dota, each hero can take dozens of actions, and many actions target either another unit or a position on the ground. OpenAI discretizes the space into 170,000 possible actions per hero (not all valid each tick, such as using a spell that is on cooldown); not counting the continuous parts, there are on average ~1,000 valid actions each tick. The average number of available actions in chess is 35; in Go, 250.
  • High-dimensional, continuous observation space. Dota is played on a large continuous map containing ten heroes, dozens of buildings, dozens of NPC units, and a long tail of game features such as runes, trees, and wards. OpenAI's model observes the state of a Dota game via Valve's Bot API as 20,000 (mostly floating-point) numbers representing all the information a human is allowed to access. A chess board is naturally represented as about 70 enumeration values (an 8x8 board of 6 piece types plus minor historical info); a Go board as about 400 enumeration values (a 19x19 board of 2 piece types plus Ko).
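
To make the scale figures above concrete, here is a back-of-the-envelope calculation in Python. It is a sketch built only from the averages quoted in this article; the constant names are illustrative, not actual engine parameters.

# Rough arithmetic behind the scale comparisons quoted above.
FPS = 30                       # Dota 2 runs at 30 frames (ticks) per second
GAME_MINUTES = 45              # average game length cited above

ticks_per_game = FPS * 60 * GAME_MINUTES
print(ticks_per_game)          # 81000 -> the "roughly 80,000 ticks" figure

OBSERVE_EVERY_N_FRAMES = 4     # OpenAI Five acts on every fourth frame
moves_per_game = ticks_per_game // OBSERVE_EVERY_N_FRAMES
print(moves_per_game)          # 20250 -> the "about 20,000 moves" figure

# Compare with the per-game decision counts quoted for board games:
for game, moves in {"Dota 2": moves_per_game, "Go": 150, "chess": 40}.items():
    print(f"{game}: about {moves} decisions per game")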

OpenAI's software learned solely through trial and error while playing against itself. The software plays 180 years' worth of games against itself every day, learning via self-play. It trains using a scaled-up version of Proximal Policy Optimization running on 256 GPUs and 128,000 CPU cores - a larger-scale version of the system OpenAI built to play the much simpler 1v1 variant of the game last year. Using a separate LSTM for each hero and no human data, it learns recognizable strategies. This suggests that reinforcement learning can yield long-term planning at large but achievable scale, without fundamental algorithmic advances.
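
For readers unfamiliar with Proximal Policy Optimization (PPO), the following is a minimal, self-contained sketch of its clipped surrogate objective paired with a small LSTM policy, written in Python with PyTorch. Everything here (network sizes, names, toy data) is an illustrative assumption; OpenAI Five's actual architecture, observation encoding, and scale are far larger and are not reproduced.

import torch
import torch.nn as nn

class LSTMPolicy(nn.Module):
    """Tiny recurrent policy: observation sequence -> action logits."""
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, obs, state=None):
        out, state = self.lstm(obs, state)
        return self.head(out), state

def ppo_clip_loss(new_logp, old_logp, advantages, clip_eps=0.2):
    # Probability ratio between the updated policy and the policy
    # that collected the self-play data.
    ratio = torch.exp(new_logp - old_logp)
    unclipped = ratio * advantages
    # Clipping keeps each update close to the data-collecting policy.
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    # PPO maximizes the pessimistic (min) term; negate for gradient descent.
    return -torch.min(unclipped, clipped).mean()

# Toy self-play batch: 4 trajectories, 16 timesteps, 32-dim observations.
policy = LSTMPolicy(obs_dim=32, n_actions=10)
obs = torch.randn(4, 16, 32)
actions = torch.randint(0, 10, (4, 16))
advantages = torch.randn(4, 16)   # in practice, estimated from game rewards
logits, _ = policy(obs)
dist = torch.distributions.Categorical(logits=logits)
new_logp = dist.log_prob(actions)
old_logp = new_logp.detach()      # stand-in for the behavior policy's log-probs
loss = ppo_clip_loss(new_logp, old_logp, advantages)
loss.backward()                   # gradients for one optimization step

At OpenAI Five's scale, this same objective is optimized across 256 GPUs on batches gathered from massively parallel self-play, with one such recurrent policy per hero.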

The ability to learn these kinds of video games at human or superhuman levels is important for the advancement of AI because such games approximate the uncertainties and complexity of the real world far more closely than board games such as chess or Go.

OpenAI said that in mid-June its software beat a semi-professional team ranked among the top 1 percent of Dota 2 players and an amateur team ranked in the top 10 percent, winning two games to one in a best-of-three series both times. Earlier in the month, OpenAI's bot crushed three amateur teams.

The sort of reinforcement learning OpenAI used could be promising for solving real-world problems, particularly those that can be couched as games. But bear in mind that the amount of data and computing power required to use the technique effectively has so far limited its applications.

 