AI Algorithms Crushed Human Players in Dota 2 Video Game

Enterprise & IT | Jun 25, 2018

Researchers at OpenAI have created a bot which beats the world's top professionals at 1v1 matches of Dota 2 under standard tournament rules.

OpenAI, a nonprofit based in San Francisco, California, and backed by billionaire Elon Musk, has developed software that can master complex games combining fast real-time action, longer-term strategy, imperfect information, and team play.

OpenAI's team of five neural networks, OpenAI Five, has started to defeat amateur human teams at Dota 2. The company aims to beat a team of top professionals at The International in August, albeit with a limited set of heroes.

Dota 2 is one of the most popular and complex esports games in the world, with creative and motivated professionals who train year-round to earn part of Dota's annual $40M prize pool (the largest of any esports game).

Dota 2 is played between two teams of five players, each player controlling a character called a "hero" with unique powers and weapons. The two teams are assigned bases at opposing ends of the map, and each must battle its way into the opposing team's territory to destroy a structure there called an Ancient.

Dota's rules are very complex - the game has been actively developed for over a decade, with game logic implemented in hundreds of thousands of lines of code. This logic takes milliseconds per tick to execute, versus nanoseconds for Chess or Go engines. The game also gets an update about once every two weeks, constantly changing the environment semantics.

A Dota-playing AI must master the following:

  • Long time horizons. Dota games run at 30 frames per second for an average of 45 minutes, resulting in roughly 80,000 ticks per game. Most actions (like ordering a hero to move to a location) have minor impact individually, but some individual actions like town portal usage can affect the game strategically; some strategies can play out over an entire game. OpenAI Five observes every fourth frame, yielding about 20,000 moves. Chess usually ends before 40 moves, Go before 150 moves, with almost every move being strategic.
  • Partially-observed state. Units and buildings can only see the area around them. The rest of the map is covered in a fog hiding enemies and their strategies. Strong play requires making inferences based on incomplete data, as well as modeling what one's opponent might be up to. Both chess and Go are full-information games.
  • High-dimensional, continuous action space. In Dota, each hero can take dozens of actions, and many actions target either another unit or a position on the ground. OpenAI discretizes the space into 170,000 possible actions per hero (not all valid each tick, such as using a spell on cooldown); not counting the continuous parts, there are an average of ~1,000 valid actions each tick. The average number of actions in chess is 35; in Go, 250.
  • High-dimensional, continuous observation space. Dota is played on a large continuous map containing ten heroes, dozens of buildings, dozens of NPC units, and a long tail of game features such as runes, trees, and wards. OpenAI's model observes the state of a Dota game via Valve's Bot API as 20,000 (mostly floating-point) numbers representing all information a human is allowed to access. A chess board is naturally represented as about 70 enumeration values (an 8x8 board of 6 piece types and minor historical info); a Go board as about 400 enumeration values (a 19x19 board of 2 piece types plus Ko).
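The scale figures in the first bullet follow from simple arithmetic on the numbers quoted above (frame rate, average match length, and the every-fourth-frame observation rate); a quick sketch:

```python
FPS = 30                # Dota 2 runs at 30 frames (ticks) per second
AVG_MINUTES = 45        # average match length cited above
FRAME_SKIP = 4          # OpenAI Five observes every fourth frame

ticks_per_game = FPS * 60 * AVG_MINUTES         # ~80,000 ticks per game
moves_per_game = ticks_per_game // FRAME_SKIP   # ~20,000 decisions per game

print(ticks_per_game, moves_per_game)
```

The exact products (81,000 ticks and 20,250 decisions) are rounded to 80,000 and 20,000 in the bullet above; either way, the horizon is two to three orders of magnitude longer than a chess or Go game.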

OpenAI's software learned solely through self-play, with no human data: it plays the equivalent of 180 years of games against itself every day. It trains using a scaled-up version of Proximal Policy Optimization running on 256 GPUs and 128,000 CPU cores - a larger-scale version of the system OpenAI built last year to play the much simpler solo variant of the game. Using a separate LSTM for each hero, it learns recognizable strategies. This indicates that reinforcement learning can yield long-term planning at large but achievable scale - without fundamental algorithmic advances.
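At the heart of Proximal Policy Optimization is a clipped surrogate objective that keeps each policy update close to the policy that collected the data. A minimal NumPy sketch of that loss (this is the standard PPO objective, not OpenAI Five's distributed implementation; the function name and 0.2 clip range are illustrative defaults):

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Clipped surrogate objective from Proximal Policy Optimization.

    ratio     -- pi_new(a|s) / pi_old(a|s) for each sampled action
    advantage -- advantage estimate for each sampled action
    eps       -- clip range; 0.2 is a common default
    """
    unclipped = ratio * advantage
    # Clipping the ratio removes the incentive to move the policy
    # more than eps away from the data-collecting policy.
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Take the pessimistic (lower) bound, then average over the batch.
    return np.minimum(unclipped, clipped).mean()
```

Maximizing this objective over many minibatches, on rollouts generated by massive self-play, is what the 256-GPU, 128,000-core system above scales up.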

The ability to learn such video games at human or super-human levels is important for the advancement of AI because they approximate the uncertainty and complexity of the real world far more closely than board games do.

OpenAI said its software in mid-June beat a semi-professional team ranked among the top 1 percent of Dota 2 players and an amateur team ranked in the top 10 percent, both times winning two games to one in a best-of-three series. Earlier in the month, OpenAI's bot crushed three amateur teams.

The sort of reinforcement learning OpenAI used could be promising for solving real-world problems, particularly those that can be framed as games. Bear in mind, however, that the amount of data and computing power the technique requires has so far limited its applications.

Tags: Artificial Intelligence, OpenAI