DeepMind, an artificial-intelligence company, has struck a deal with game-engine maker Unity Technologies to train AI in virtual worlds on a grand scale. The algorithms will be trained in physics-realistic simulations, an emerging trend in AI. Game engines such as Unreal and Unity support advanced techniques like reinforcement learning, a machine-learning approach in which an algorithm learns by trial and error.
“Games are in many, many ways . . . much closer to nature than people think,” says Danny Lange, Vice President of ML and AI at Unity. “You get the visual, the physics, the cognitive, and . . . the social aspect – the interaction.” All of this pressures algorithms much as evolutionary pressure acts on living things, he claims.
DeepMind and Unity are not revealing many details for now. Their collaboration follows a June deal between Google Cloud and Unity to offer platforms for online game developers.
Although the underlying technology is nothing new, Lange has a broad vision of what AI and reinforcement learning can do in the world of games. For example, virtual people could be used to test building designs for comfort. “You can actually test a thousand different designs on a thousand different virtual families living in that house,” he says.
Simulated physics could also power virtual chemistry experiments, in which software runs far more experiments with virtual chemicals than humans could with real ones, Lange claims. That could reduce the amount of hands-on, real-world testing required. Lange predicts that game-engine AI could make this possible within roughly five years. Incidentally, quantum-computing advocates have given the same timeline for simulating complex chemistry.
Lange showed a charming example of how AI models are trained in games: a virtual dog learning to fetch. The algorithm knew the dog was supposed to fetch the stick; whenever the dog moved correctly, it received numerical rewards, which reinforced the correct behavior. At first the dog didn’t even know how to move, but with continued training it eventually mastered the task.
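The fetch demo follows the standard reinforcement-learning loop: the agent acts, receives a numerical reward for moves that bring it closer to the goal, and gradually learns a policy from those rewards. As a minimal sketch of that idea (not DeepMind’s or Unity’s actual setup), here is tabular Q-learning in a toy one-dimensional “fetch” world; the world, reward values, and hyperparameters are all illustrative:

```python
import random

# Toy 1-D "fetch" world: the dog starts at position 0, the stick is at
# position 4. Actions: 0 = step left, 1 = step right. Reaching the stick
# yields a reward of +1; every other step costs a small penalty.
N_STATES = 5
STICK = 4
ACTIONS = [0, 1]  # left, right

def step(state, action):
    """Apply an action and return (next_state, reward, done)."""
    next_state = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    if next_state == STICK:
        return next_state, 1.0, True
    return next_state, -0.01, False

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning: learn action values by trial and error."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Explore occasionally; otherwise act greedily.
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = 0 if q[state][0] > q[state][1] else 1
            next_state, reward, done = step(state, action)
            # Q-learning update: nudge the value toward the reward plus
            # the discounted best value of the next state.
            target = reward + gamma * max(q[next_state])
            q[state][action] += alpha * (target - q[state][action])
            state = next_state
    return q

q = train()
# After training, the greedy policy at every state short of the stick
# should be "step right" (1), i.e. toward the stick.
policy = [0 if q[s][0] > q[s][1] else 1 for s in range(N_STATES - 1)]
print(policy)
```

Like the virtual dog, the agent starts out moving at random; the reward signal alone is enough for it to learn that stepping toward the stick is the correct behavior.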
Players of sprawling virtual-world games know these games simulate more than physics. Grand Theft Auto mimics the grip of tires on asphalt, but it also simulates interactions among lively virtual people. “It’s an emerging area,” says Lange of modeling social dynamics. “You simulate multiple agents and they interact with each other. They invent what they say.”
Such simulations could, for instance, yield deep insight into crowd behavior. As a potentially real-world but still hypothetical example, he described a model predicting stock prices from crowd chatter alone. “One guy says the stock is going to go up, another guy says this stock is going to go down,” says Lange. “How do they influence the crowd?”