There's something amazing about today's big-budget open-world games. Worlds like Los Santos in Grand Theft Auto 5, Medici in Just Cause 3, or the imaginatively named San Francisco in Watch Dogs 2. These worlds are massive and filled with interesting, detailed landmarks. Shadows and shaders bring the beautifully modelled and animated people to life - all at higher frame rates than a movie theatre - while being fully interactive and rendered in real time.
But even games that cost north of $200 million to develop have to deal with one basic problem: every game world has an edge. And the genre's free-roam mechanic makes the job of maintaining an immersive experience in a limited world that much harder.
Meanwhile, back in software development, the world is being changed by machine learning. Developers can harness the power of data and GPUs to make predictions, recognise images, process audio and even synthesise text. Now is truly an amazing time.
But we haven't created "general AI" yet. Google's DeepMind lab has managed to build bots that play Atari games, but that's a far cry from its next research subject - playing complex, open-world games such as GTA 5 - let alone full general AI.
So this raises the question: as developers and designers, not AI researchers, how can we deliver great user experiences using machine learning? General AI has been just around the corner for perpetuity, but it is not here yet. How can we hide machine learning's rough edges?
One common edge-hiding technique is water. Los Santos conveniently happens to be an island in the middle of the sea. Medici takes that a step further with an archipelago of islands.
But video games are even more limited: even the oceans must have edges. Or so you would think. This trick is the most important part of the strategy: creating an illusion of motion.
Having a truly unlimited ocean is harder for developers, but it is also terrible from a design perspective. If a player spends X minutes swimming to the "edge of the ocean", it's pretty boring for them to have to turn around and swim X minutes back again.
So the developers use a trick. The ocean does have an edge, but once you reach it - while the swimming animations continue - you don't actually move. Oceans have waves, so by changing the waves' parameters the developers can create the illusion of an infinite ocean.
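The edge trick above can be sketched in a few lines. This is a minimal illustration, not code from any real engine: the names (`WORLD_RADIUS`, `update_swimmer`, `wave_height`) and the circular world shape are assumptions made up for this example. The idea is simply that movement past the boundary is projected back onto it while the animation and waves carry on.

```python
import math
import random

WORLD_RADIUS = 1000.0  # edge of the playable ocean, in metres (illustrative)

def update_swimmer(x, y, dx, dy):
    """Apply a movement step, but clamp the result to the world edge."""
    nx, ny = x + dx, y + dy
    dist = math.hypot(nx, ny)
    if dist <= WORLD_RADIUS:
        return nx, ny, False  # normal movement
    # Past the edge: the swim animation keeps playing, but the position
    # is projected back onto the boundary, so the player never advances.
    scale = WORLD_RADIUS / dist
    return nx * scale, ny * scale, True

def wave_height(t, at_edge):
    """Vary wave amplitude near the edge to sell the illusion of motion."""
    amplitude = 1.5 if at_edge else 0.8
    return amplitude * math.sin(t) + random.uniform(-0.1, 0.1)
```

From the player's point of view nothing discontinuous happens: the camera bobs, the waves roll, the character swims - the position simply stops changing.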
The ocean technique is interesting: it hides the edge of the world in infinite, meaningless, boring content. Almost a "default behaviour", if you will.
![Siri loves to just search the web instead of answering questions](/static/images/6/siri.Hd6e1c6f06296195d01478325e528439f66b44acb7c1803fdd647554bf1f86ef4.jpg)
I think many vendors have already picked up on this technique. When Apple's Siri can't find an action matching your input, it just searches the web - a place not unlike the ocean.
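The same pattern shows up in any ML-backed interface. A hedged sketch, assuming a made-up intent classifier that returns confidence scores (the threshold, intent names and function names here are all hypothetical): when no intent scores highly enough, the system falls into an open-ended default rather than failing outright.

```python
CONFIDENCE_THRESHOLD = 0.6  # illustrative cut-off, not a real Siri parameter

def handle_query(query, intent_scores):
    """Pick the best-scoring intent, or fall back to a web search."""
    best_intent, best_score = max(intent_scores.items(), key=lambda kv: kv[1])
    if best_score >= CONFIDENCE_THRESHOLD:
        return f"run:{best_intent}"
    # The "edge of the world": hide the model's limits in a boundless default.
    return f"web_search:{query}"
```

Like the infinite ocean, the web search gives the user somewhere to go instead of a hard wall - the model's edge is still there, but it is disguised as more content.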
Not all games use the sea to hide the edge. Others use subtler approaches: dense forests with trees too tall for the player to climb, or dwindling resources so that the character cannot survive the journey to the edge.
These are some really interesting ideas. In the interest of the machine learning hype train's integrity, could we push the blame for failures onto other parts of the application - just like the forest-of-trees strategy? Or could applications withdraw data as the user approaches the edge of their functionality?
Video games have always been a source of inspiration when looking for practical ways to make AI a reality. Very early on, games created AIs to challenge players - many of them extremely limited, yet still immersive. How can this new generation of open-world games, with their techniques for "faking it", inspire better AI user experiences?
Comments, thoughts? Mail them to [email protected]. I would love to hear them!