The AI innovations of the last two decades in simple English

In 1997, a little game called Age of Empires came out. For those of you not familiar with it, it’s a real-time strategy game where you create and manage the different pieces of a civilization in order to advance through the ages and defeat your enemies! To start, you create villagers that collect resources. Later on, you use those resources to research technologies or create military and other units to support your goal of world domination.

You can play against other players, but you can also play against an AI player. A year later, Age of Empires II came out, with slightly improved gameplay and AI. The AI in both of these games highlights the limitations and design style of artificial intelligence systems of that era.

AI in the 90s

The original AI is marked by very predictable patterns at each difficulty level. The hardest difficulty notably cheats: it gets extra resources by default. Beyond that, the AI has a standard modus operandi: it builds an army out of one or two unit types, which most players find too uniform. On a forum discussing this topic, a player noted that a distinct feature of the original AI is that “you can start to predict what it’s going to do after 1-2 games.” While the AI reacts to its circumstances, it operates along pre-defined paths. For example, it is not smart enough to recognize that in certain battles, retreating is the better move. It simply goes from directive to directive: gain more resources, attack (and keep attacking no matter what), and so on. The computer pays no mind to variations in your play; it behaves the same way over and over.
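This directive-to-directive style can be sketched as a simple script. Everything here is a hypothetical illustration, not the actual Age of Empires code: the directive names, thresholds, and unit choices are invented to show the shape of the behavior.

```python
# A minimal sketch of a 90s-style scripted game AI, in the spirit described
# above. All names and numbers are hypothetical, not the real implementation.

class ScriptedAI:
    # Fixed sequence of directives; the AI never deviates from it.
    DIRECTIVES = ["gather_resources", "build_army", "attack"]

    def __init__(self):
        self.step = 0  # index into DIRECTIVES

    def next_action(self, game_state):
        directive = self.DIRECTIVES[self.step]
        if directive == "gather_resources":
            if game_state["resources"] >= 500:  # hard-coded threshold
                self.step += 1
            return "gather"
        if directive == "build_army":
            if game_state["army_size"] >= 20:   # always the same unit type
                self.step += 1
            return "train_knight"
        # Once attacking, keep attacking no matter what: no retreat logic.
        return "attack"
```

Because the same state always produces the same action, and the script never backs up a step, a player who has seen one or two games can predict the whole sequence.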

The very first AI couldn’t even build walls around settlements. This was fixed in 1999.

Over time, new installments of the game added features to the AI, such as messaging. The AI would pick from 30 or so canned messages to threaten you every so often, sometimes reacting to your behavior in the game. It was entertaining, but still fairly limited.

The best feature of the original AI was unit movement. The AI can always outrun you. Unit speed is based on a fairly involved calculation of multiple stacking bonuses. As a human player, you click around trying to run away as fast as possible, but the AI can move at the perfect angle, based on the unit’s calculated speed, to avoid an attack and quickly swing back to attack you. For this reason, beginner players still lose to the AI in small battles that depend heavily on the angles along which you move your units. This feature seems to be the same or only slightly better today, with no major changes.
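The math behind this advantage is small but tedious for a human to do in real time. Here is a rough sketch under assumed mechanics: bonuses stack multiplicatively (an assumption, the real formula is not public), and the AI retreats along the exact away-from-threat vector at full calculated speed. All values are invented for illustration.

```python
import math

def effective_speed(base_speed, bonuses):
    """Stack multiplicative speed bonuses, e.g. [0.10, 0.05] for +10% and +5%.
    Multiplicative stacking is an assumption, not the game's documented formula."""
    speed = base_speed
    for b in bonuses:
        speed *= (1.0 + b)
    return speed

def retreat_position(unit_pos, threat_pos, speed, dt):
    """Move exactly away from the threat at full speed for one tick.
    A human clicks an approximate spot; the AI computes the precise angle."""
    dx = unit_pos[0] - threat_pos[0]
    dy = unit_pos[1] - threat_pos[1]
    dist = math.hypot(dx, dy) or 1.0  # avoid division by zero when overlapping
    return (unit_pos[0] + dx / dist * speed * dt,
            unit_pos[1] + dy / dist * speed * dt)
```

Per tick the AI loses nothing to imprecision, which is why it wins the small angle-dependent skirmishes the paragraph above describes.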

So far, we have artificial intelligence with a couple of pre-defined actions it can take. During the game, it calculates benefits and losses in order to choose among them. Once it chooses an action, it does not stop (no retreats). Additionally, the AI is great at fast geometric calculations, making its unit movement far superior to what most humans are capable of.

AI Today

To truly explain the difference between then and now, we can look at the rebooted version of Age of Empires II: HD Edition.

The new AI calculates the benefit and loss of every single micro-decision. Instead of taking a bulk action like “Make a new mine and set 3 villagers on it,” it decides every individual action separately based on what is happening. Let’s say a mine is needed. It will begin building the mine. If something happens, say an attack on the other side of the map, it will cancel the mine if a more efficient action is available.

Let’s assume the mine is built. At this point, the AI assesses again: can I still afford to put a villager on this mine? If nothing has changed in the last few minutes, the answer is most likely yes. It decides how many villagers to put on the mine, and how quickly, based on what is happening across the entire map, both economically and militarily.

In certain cases, it might put 3 villagers on the mine exactly like the original AI would, but in all the edge cases that make or break games, it behaves quite differently.
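This per-decision reassessment is the classic shape of utility-based AI: score every candidate action against the current state, take the best one, and repeat, so an in-progress mine can be dropped the moment defense scores higher. The actions and scoring functions below are hypothetical, chosen only to mirror the mine-versus-attack scenario above.

```python
# A sketch of utility-based decision making. Action names and weights are
# invented for illustration; the real game's scoring is not public.

def choose_action(game_state, actions):
    """Pick the action with the highest utility for the current state."""
    return max(actions, key=lambda a: a["utility"](game_state))

actions = [
    {"name": "build_mine",
     # More valuable the lower the current gold income.
     "utility": lambda s: 10 - s["gold_income"]},
    {"name": "defend_base",
     # Dominates everything else while an attack is underway.
     "utility": lambda s: 50 if s["under_attack"] else 0},
]
```

In a peaceful economy the mine wins; the instant the base is under attack, defense outranks the half-built mine, which matches the cancel-the-mine behavior described above.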

Not only does the modern AI make smaller individual decisions, it also takes in many more data points. The goal was to mimic the decision-making process of real human players. In fact, players of this game describe the new AI as “strategizing like a human being.”

That’s cool, but not mind-blowing

Well, it’s not. The advances behind modern AI are rooted in hardware. The original developers of Age of Empires were certainly smart enough to mimic a human’s strategizing, but no consumer machine of the time could have done that much processing without slowing the game down.

While AI varies by application, I think this is a good sample of the last two decades of development. Software simply trails hardware. As hardware gets cheaper and faster, AI software will be able to collect much more data, process it much faster, and make decisions much more often, leading to higher-quality results. Modern AI is generally smarter than human players, if only because we can no longer easily predict its behavior in order to out-strategize it.

That being said, tons of people have managed to beat the hardest new AI. YouTube has plenty of examples of wins against it within 20 minutes (most human-vs-human games last 2+ hours). We’re still winning. And by “we,” I mean the very few humans who have mastered the game.

Does this surprise you?

Humans still have the ability to intuit, test theories, and deduce. The computer, by default, is aware of more data than we are; a human can’t possibly keep calculating the cost and opportunity cost of a bunch of units drawing on all the different resources. It’s too much math. Humans still generally play with pre-set strategies (like the original AI) that change depending on circumstances (like the new AI). On top of that, they have other skills and abilities we have not yet re-created. One example is experimenting and testing theories. It’s only a matter of time before these skills are also codified via machine learning and added to AI models, as long as we have hardware fast enough to support them in real-time gameplay.

Moving past experimentation, humans can also compose and design. These are higher-order types of thinking. We have modeled our emotional ranges fairly well, but haven’t yet figured out how to aid AI with emotion. My feeling is that emotion helps humans with decision-making much more than we think. What’s exciting about AI is that it forces us to think more and more about our own intelligence and mental abilities, figure out a way to model them, and then wait for the hardware that can run them.