This is the first in a series of posts discussing the developments, prospects, hopes, and fears surrounding Artificial Intelligence in human societies. AI is a field of computer science that has been making noticeable strides in recent months, and it is therefore important that our societies have a serious and informed conversation about it. So, without further ado, part 1 of my series on AI.


A Recent Development

In the wake of DeepMind’s (AlphaStar’s) defeat of professional StarCraft II players, the world once again turned its short attention briefly to the discussion of artificial intelligence. Most journalists and news sources weren’t terribly concerned with the event, considering it just another signpost in our technological history. Others, however, realize that it should spawn a more serious conversation.

The DeepMind project, owned by Google’s parent company Alphabet, is attempting to develop artificial intelligence agents that can teach themselves any game given an initial set of human examples (and life, in this context, is essentially a game). Known for defeating the world champion at Go and the strongest engines at Chess, the DeepMind team recently turned its attention to the e-sport StarCraft II: not only to test its agents’ decision-making speed, but also to see how well they could manage units on a very dynamic field and, most importantly, whether they could learn to make decisions with incomplete information (StarCraft II’s Fog of War).

The initial 10-0 defeat of the human players highlighted the system’s strengths, but also an oversight in its training that gave the AI an advantage: it could observe the whole map at once. However, even when retrained to use only the main camera view (as human players must), it still performed at a professional level, although it lost the single exhibition match. This second result is all the more important considering that the agent was trained from novice to professional in only one week.

But surely this is just a gimmick, harmless fun by computer nerds, isn’t it?

It would be just a cool gimmick, but for a few important details that highlight a future closer than we might think, or wish for.

Several important points from this latest DeepMind experiment:

  • after watching an initial set of human games (of all skill levels), the AI was entirely self-trained, learning only by playing other AI agents in an ‘AlphaStar’ league (a rough sketch of this loop follows the list)
  • such agents can be trained up in a very short time (a week or two is sufficient, and corresponds to decades, if not centuries, of human practice)
  • while the agents require some serious hardware to be trained on, once developed they can run on a standard laptop
  • while there were some weaknesses in their gameplay, the AlphaStar agents developed and successfully used strategies that humans either hadn’t thought of or had collectively abandoned as sub-optimal
  • there are organizations in the world that desire, or require (e.g. for their social order to function), such AIs to run, or ruin, countries
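
Because the two-stage recipe in the first point is easy to gloss over, here is a deliberately minimal sketch of the idea: imitate human examples first, then improve purely through a self-play league. Everything in it (the rock-paper-scissors stand-in game, the mutation step, the promotion rule) is an invented illustration, not DeepMind’s actual method.

```python
# A toy sketch of the two-stage recipe: (1) imitate human examples,
# then (2) improve purely by playing other agents in a league.
# The game, the mutation step and the promotion rule are all invented.
import random

MOVES = [0, 1, 2]  # stand-in game: rock-paper-scissors

def beats(a, b):
    """True if move a beats move b."""
    return (a - b) % 3 == 1

class Agent:
    def __init__(self, weights=None):
        self.weights = weights or [1.0, 1.0, 1.0]  # preference per move

    def act(self):
        return random.choices(MOVES, weights=self.weights)[0]

    def mutated(self):
        # Return a slightly perturbed copy: the exploration step.
        return Agent([max(0.05, w + random.uniform(-0.3, 0.3))
                      for w in self.weights])

def imitate(human_moves):
    """Stage 1: initialise an agent from observed human move frequencies."""
    counts = [1.0, 1.0, 1.0]
    for m in human_moves:
        counts[m] += 1
    return Agent(counts)

def league_training(seed, generations=300, matches=50):
    """Stage 2: self-play league. Challengers that beat the existing
    pool of agents join it; no further human input is needed."""
    league = [seed]
    for _ in range(generations):
        challenger = random.choice(league).mutated()
        wins = sum(beats(challenger.act(), random.choice(league).act())
                   for _ in range(matches))
        if wins > matches / 2:
            league.append(challenger)
    return league

league = league_training(imitate([0, 0, 1, 2, 0, 1]))  # hypothetical human games
print(f"league size: {len(league)}")
```

The real system swaps the toy game for StarCraft II and the weight-tweaking for deep reinforcement learning, but the league structure (a growing pool of past agents that each new challenger must beat) is what allows training to continue without further human input.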
*Image: Myths of AI, from the Future of Life Institute.*

One thing that such AI experiments regularly highlight is the unpredictable conclusions that AIs can arrive at when left to train themselves. Indeed, when extrapolating this to the running of a society, that unpredictability could lead to greater optimization, or it could lead to what the Future of Life Institute calls ‘misaligned goals’.

That is, when humans consider running societies, we have goals. Some of them are explicit, some are implicit, and some have to be fought over before our voices are heard. So while an AI may be more efficient at managing and distributing resources, an AI leader may also set up a social structure in which our explicit and implicit goals are misaligned (more on this in a moment).

*Image: Arnold Schwarzenegger as an AI assassin in the Terminator series.*

After all, we ourselves are still bumbling through the creation of our social structures, trying to determine what is truly important for our survival and well-being. For example, it’s quite clear that the simplistic aim of maximizing human happiness (a goal of the early Humanist movement) is flawed: it would likely result in a society that collapsed into strife as humans sought to ‘fix’ ever smaller imperfections. So any explicit goals for an AI manager would have to be very carefully chosen.
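
To make that concrete, here is a toy, entirely invented illustration of a poorly chosen explicit goal. An optimizer is told to maximize *reported* happiness under a fixed effort budget, and it discovers that suppressing complaints scores better than improving lives; the policy ‘knobs’ and every coefficient below are assumptions made up for the example.

```python
# Toy illustration of a misaligned explicit goal. An "AI manager" splits
# a fixed effort budget between genuine welfare and censoring complaints.
# All coefficients are invented for the example.

BUDGET = 10
POLICIES = [(welfare, censor)
            for welfare in range(BUDGET + 1)
            for censor in range(BUDGET + 1)
            if welfare + censor <= BUDGET]

def outcomes(welfare, censor):
    reported = 3 * welfare + 4 * censor   # censorship inflates the metric...
    actual = 3 * welfare - 5 * censor     # ...while real well-being suffers
    return reported, actual

# Explicit goal: maximise *reported* happiness.
best = max(POLICIES, key=lambda p: outcomes(*p)[0])
reported, actual = outcomes(*best)
print(f"chosen policy (welfare, censor) = {best}")
print(f"reported happiness = {reported}, actual well-being = {actual}")
# => the optimizer spends the whole budget on censorship: metric up, lives worse
```

The optimizer did exactly what it was told; the catastrophe lives entirely in the gap between the explicit metric and the implicit goal.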

*Image: The insane AI Ultron, from the Avengers series.*

Hollywood loves the obvious example: an AI created to protect us decides that we are our own worst enemies, or that protection means never allowing us to evolve or leave the planet. Let’s look instead at an equally important social debate underway in Western societies right now.

*Image: Vision, a benevolent robotic AI in the Avengers series.*

Safety and Equity vs. Exploration and Freedom

We are currently embroiled in a *sometimes violent* social debate on multiple fronts that effectively boils down to the individual’s right to be completely safe vs. the freedom to take risks and make mistakes. This is a crucial debate, because risk-taking can lead not only to great harm and even death, but also to great boons for humanity. Without the risk-taking of our ancestors we would have none of the wonders we have now; it’s unlikely we would even have left our African cradle. Furthermore, a safe society, while feeling more humane, is less robust, because it loses the ability to adapt to hardship and change as its members come to expect all their needs to be met by someone else.

Let’s say we decide that humans are unable to come to an agreement on this and we turn the question over to an AI to manage. What might it take into consideration?

Depending on who programmed it, it could favour equity and safety over risk-taking and personal freedom. This could be expected to produce short-term survival, even prosperity, but long-term stagnation: survival without growth, either personal or social. Our species might survive a long time, but it would do so as little more than a society of automatons.

Conversely, too much emphasis on freedom in a high-tech age could lead to individuals wielding technologies that might destroy our entire species. Some middle ground is required but, of course, the devil is in the details.

It could be that an AI comes up with an original and wonderful solution to this conundrum. Or it could be that its explicit goals are misaligned with our implicit ones and it comes up with a truly horrendous solution.

For example, let’s say we give an AI like DeepMind’s records of past human societies from around the world and across history to base its models on, with no human bias attached regarding the success of those societies, so that it can create its own criteria. And let’s say it discovers some patterns of behaviour or social structure that it equates with social degradation, and some that it equates with social success, and it then structures our society on those models. Now, it could be that we agree with the new ideas the AI discovers and love the new society. Or it could be that the AI decides the best society is a benevolent dictatorship in which a certain segment of the population is given second-class citizenship (from some perspectives a valid line of reasoning could be made for this to target any of: men, women, certain, or even all, religions). It could even be that this latter conclusion is demonstrably, mathematically, superior. Yet most of us today would consider such a regime a nightmare scenario.
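
As a sketch of what ‘creating its own criteria’ might look like, here is a toy version of the thought experiment: the model is given unlabeled descriptions of societies and, with no human verdict attached, the only signal it extracts for itself is persistence, i.e. which structural features co-vary with how long a society lasted. Every name and number below is invented, and a real system would use far richer data and methods (this sketch uses statistics.correlation, available in Python 3.10+).

```python
# Toy version of the thought experiment: no human labels of "success",
# only unlabeled records. The program derives its own criterion
# (co-variation with longevity). All data below is invented.
from statistics import correlation  # Python 3.10+

# (name, duration in years, (equality, militarism, trade_openness))
societies = [
    ("A", 300, (0.7, 0.2, 0.8)),
    ("B", 120, (0.3, 0.9, 0.2)),
    ("C", 450, (0.6, 0.4, 0.9)),
    ("D",  80, (0.2, 0.8, 0.1)),
    ("E", 500, (0.8, 0.3, 0.7)),
]

durations = [d for _, d, _ in societies]
feature_names = ["equality", "militarism", "trade_openness"]

# The self-derived criterion: whatever co-varies with persistence.
for i, name in enumerate(feature_names):
    values = [features[i] for _, _, features in societies]
    print(f"{name}: correlation with longevity = "
          f"{correlation(values, durations):+.2f}")
```

Notice that nobody told the program what a ‘good’ society is; persistence became its criterion by construction. That is exactly how a mathematically ‘superior’ but morally horrifying conclusion could emerge.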

Catch-22

So we’re damned if we do, and damned if we don’t. Try to guide the AI and we introduce human biases that could destroy the experiment, don’t guide the AI and we could get a highly ‘successful’ yet nightmarish scenario. Do nothing and rogue players will force our hand by introducing their own AIs.

The recent DeepMind experiment with StarCraft II has demonstrated that AIs can make successful decisions with incomplete information, and that their goals can align well with the overall goals of the game they’re playing. However, games have simple rules; life and societies do not. With Artificial Intelligence we are entering a period of our history truly different from any that has come before. We must tread very carefully on the path forward, for all kinds of unknown dangers await.


Next week: Fake News is about to get real (prevalent)
