7 Areas that Could See Massive Benefits from Artificial Intelligence

There’s a lot of discussion regarding the development of Artificial Intelligence (AI) and the consequences for human society. Such concerns hit the media in waves, generally around some new success by AI in the latest human competition. I’ve covered a few of these recently, including games, biochemical research, and writing. Most of the discussion centres around the negative implications because, let’s face it, humans don’t like change, as a general rule. Change means uncertainty, and uncertainty means things could get worse.

But what about the chance that AI makes the world better?

Here are seven areas where AI could greatly aid humans.

1. Personal Assistant

The artificial home personal assistant is already a huge growth area as devices like Amazon Echo, Google Home, and their many competitors surge into the marketplace to be devoured by consumers. It seems we’ve finally found the market for voice-activated devices. However, as amazing as these devices are, they are still very primitive relative to their potential.

The AI Jarvis, from Marvel’s Avengers: Age of Ultron.

Advanced artificial intelligence could soon be incorporated into personal assistants in such a way as to make them very much like Jarvis from the Iron Man movies. Not only would they store and retrieve the information we want, but they would learn the nuances of our individual speech patterns and daily schedules to anticipate our needs: everything from better understanding our accents to reminding us to buy groceries and prepare for meetings. Eventually, they could be good enough to prepare presentations and documents in our personal style from a few seeded keywords.

If this sounds creepy, consider that the wealthy and powerful already have human versions of just this type of assistant. The only difference now would be that everyone could have one.

2. Personal Educator

I’ve been saying for almost a decade now that universities are on the losing side of history. Not only are many departments (and entire universities) failing in their mandate to properly educate their students, but they’re actually introducing and even enforcing socially harmful, and factually incorrect, ideologies. Add to this the rise of online education and it seems reasonable to expect that the university as we know it, with the possible exception of STEM fields, is already obsolete as a place of education (research remains another matter). Filling this void in higher education to an increasing degree are online courses.

Currently, online classes come from one of three sources: professors from prestigious universities who have put their classes online; entrepreneurs who cover niche topics in enough detail for their students to become proficient enough to make money in that area; and providers of online child education for home-schooled children or those needing or wanting extra tutelage.

But all of these areas require the individual to discover the course, assess the best one, and then sit through all the material to get the bits relevant to them. In addition, they require the courses to be created, often at significant expense. But imagine what an AI tutor could do!

Using a regular series of assessments, ranging from tests to simple questionnaires, a personal AI tutor could identify the areas a student is weak in, or the topics an individual wishes to learn about. Having determined their level of competence, the AI could then pull together relevant information from the net and compile it into a training course personalized to that individual.
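As a rough illustration, the assess-then-assemble loop described above can be sketched in a few lines of code. Everything here is hypothetical (the topic names, scores, resource catalogue, and mastery threshold are all invented for the example):

```python
# Hypothetical sketch of an assessment-driven tutoring loop.
def plan_course(scores, catalogue, mastery=0.8, max_topics=3):
    """Pick the weakest topics and pull matching material into a lesson plan."""
    # Topics scoring below the mastery threshold, weakest first.
    weak = sorted((t for t, s in scores.items() if s < mastery),
                  key=lambda t: scores[t])[:max_topics]
    # Gather the available resources for each weak topic.
    return {t: list(catalogue.get(t, [])) for t in weak}

# Invented assessment results and resource catalogue.
scores = {"algebra": 0.9, "geometry": 0.55, "statistics": 0.4}
catalogue = {
    "geometry": ["intro video", "practice set A"],
    "statistics": ["probability primer", "practice set B"],
}

plan = plan_course(scores, catalogue)
# statistics (0.4) and geometry (0.55) fall below mastery, so both get material.
```

A real tutor would of course draw on far richer assessments and source material, but the core cycle of measure, rank, and assemble stays the same.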

My thoughts are that, maybe twenty to thirty years in the future, AI will be the dominant teaching method in advanced countries. Teaching by humans will then fall into two categories: research and the management of social development. If the majority of families still have both parents working by that time, schools will be sites that combine self-directed AI learning centres with team-building and social development activities moderated by human instructors.

3. Financial Management

Ask people in the West what their greatest fear is and, after climate change, they will most likely say financial collapse. Our well-being relies on the smooth functioning of a complex global financial system that not only appears fundamentally unstable but seems rife with manipulation, leading to regular periods of boom and bust (approximately every 18 years). So it’s no surprise that the majority of the public don’t have much confidence in the continued functioning of such a system.

But what if there is a better way? What if there is a technology that could actually weigh and act on all the variables to create a stable financial system? Ignoring the fact that there would be significant resistance to this from the financial sector itself, artificial intelligence would seem to be the perfect solution. Designed precisely to find patterns and connections in immensely complex data sets, a properly trained AI would be the ideal tool for such an endeavour. Furthermore, it would have the added bonuses of being immune to human manipulation and of being able to respond to potentially disastrous trends or changes very quickly.

4. Disease Management

Disease management is one area of scientific research where almost no one worries about an invasion of AI. The potential benefits to human health and welfare are so great that most objections are quickly swept away, and with good reason. Computers have been incredibly important aids in everything from drug development to data analysis and patient management for decades now. However, the standard ways of using computers are not meeting the needs of the next level of advancement in understanding complex diseases and disorders.

Disorders like Alzheimer’s or Parkinson’s, and even cancer, can be managed to some small degree with modern treatments. However, the complexity of these and many other disorders, which can result from the failure of any of dozens of biochemical pathways, and often from the interactions between those pathways, requires a level of analysis that is difficult if not impossible for the human mind to manage. A modern, heuristic AI, however, is designed to make sense of exactly such complexity. So, once the technology matures, we could see huge advances in the medical sciences due to AI, potentially even leading to an extension of the human life-span.

Interestingly, DeepMind recently announced that it is close to commercially releasing a device, built on its software, that can diagnose eye-related illnesses from a 30-second scan with a degree of accuracy rivalling the top specialists in the world.

5. Climate Change

Much like the complexity of the human body, concern over climate change is an example of humans trying to make sense of an incredibly complex system with only limited awareness of its full scale. We have numerous models being created and updated each year, which is evidence of just how little we actually know — if we understood the climate we wouldn’t have a need for more models!

Once again, if trained properly (no small challenge given the limited data), the complexity of climate is just the type of challenge that an AI could excel at understanding. Of course, an even bigger challenge would be to convince humans to act, or not, on the results (especially if the results go against the prevailing ideology).

One potential positive secondary consequence of using AI to study climate change could be in regards to developing models and technologies for the eventual terraforming of planets we are looking to colonize.

6. Government Management

I’m certain I’m not alone in my poor view of political options, both in my own country and around the world. Perhaps it’s just tunnel vision and the world has always been this way, or perhaps it’s the end of an era, but these days, regardless of where one looks, there seem to be no good options for national leaders. Perhaps this is a sign we’ve let the wrong people dominate power for too long, or maybe it’s a sign of the increasing complexity of the world. When I think of the lifestyle, and the vast number of decisions and considerations a national leader encounters in a given day, I’m amazed anyone would want the job – let alone that anyone might be remotely capable of actually doing it.

Scene from the game Civilization 6

This, then, seems to me another field ripe for experimentation with AI, and I’m not alone in this thought. Numerous groups have sprung up over the last few decades with AI at the centre of their social restructuring policies. Furthermore, given the DeepMind training strategy, which is to begin with human input and then train AI agents against each other in numerous rapid simulations, I could even see an AI United Nations existing. If Google ever encounters this article, I’d like to put forth a request: I’d love to see such an AI United Nations publicized as a reality show, using a system like the game Civilization as a backdrop to its decisions.

7. Interplanetary Exploration

Advanced engineering is a field that stands to gain tremendously from the application of artificial intelligence. Everything from driverless cars and hypersonic aircraft design to the guidance of nanite construction swarms and fully functioning robots can benefit from an infusion of AI. One area I’m particularly excited to see AI applied to is the development and construction of advanced propulsion systems and ship structures for interplanetary travel.

Artistic representation of Interplanetary Superhighways as calculated by NASA scientists.

The complexity of physics at both large and small scales, and the application of principles marrying those phenomena to real-world space travel, would seem an excellent direction to turn AI loose on, and the potential benefits could literally determine the future of our species.


There are a great many fields to which artificial intelligence could be applied for the benefit of humanity. An interesting idea to ponder is whether any of these applications are already being attempted behind closed doors. After all, our governments hardly tell us everything and, even more importantly, we’re not the only ones in the game.

Next –> AI: What If It’s Not Ours?


Artificial Intelligence: Wonder Tool or Human Replacement?

As artificial intelligence becomes ever more powerful the fear of whether it will be a tool for good or evil, or even whether it will replace us, becomes increasingly important.

The Issue

Every two years a group of structural biologists (those who study the structure of large biological molecules) hold a competition – the Critical Assessment of techniques for protein Structure Prediction, or CASP – to test the software they’ve developed. Their goal is to see who has developed the most accurate tool for predicting the fold of a protein from its amino acid sequence, and to share new developments and strategies with the wider community of researchers. At the last competition, in 2018, the DeepMind AI called AlphaFold defeated all other challengers, scoring better than the entire competition combined and far better than the second best.

The Importance of Knowing a Protein’s Structure

Proteins are biological molecules composed of amino acid subunits. They can range in size from moderately sized peptide chains (a dozen amino acids) to immense multi-subunit complexes of thousands of amino acids. They are crucial for everything from the structure of our body to the functioning of our life-sustaining chemical processes.

When the first protein structure was determined by Max Perutz and John Kendrew in the late 1950s, the scientific community was dismayed. In comparison to the elegant simplicity of the structure of DNA, determined a few years earlier, it seemed ugly and chaotic. Since then, however, scientists have discovered that protein structures have their own elegance hidden within the jumble of coils, sheets, and strands of amino acids.

False-coloured 3D structures of three proteins. A cartoon viewing mode is used to highlight structural features.

Knowing a protein’s 3-dimensional structure is important for understanding how it carries out its function, and for developing pharmaceuticals against biological problems that involve that protein. Determining a protein’s fold, and thereby gaining a more complete understanding of its function, is also an interesting endeavour in its own right.

What Happened?

At CASP 2018, challengers were given the amino acid sequences of 90 proteins whose structures were known but not yet published. For 43 of these proteins, only the amino acid sequence was known (in addition to the unpublished structure). That means things like function or protein classification could not be used to aid structure determination (certain folds are common motifs in certain classes of proteins). Of those 43 ‘unknown’ proteins, AlphaFold predicted the structure more accurately than any competitor 25 times. The next best challenger won in only 3 cases.
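The head-to-head count works like a per-target contest: whichever group models a given protein most accurately takes that target. A minimal sketch of the tallying, using invented predictor names and accuracy scores (real CASP rankings use GDT_TS and related metrics, not these numbers):

```python
# Toy tally of per-target wins in a CASP-style contest; all values invented.
from collections import Counter

def tally_wins(results):
    """Count how many targets each predictor modelled most accurately."""
    wins = Counter()
    for scores in results.values():
        wins[max(scores, key=scores.get)] += 1  # best score takes the target
    return wins

# target id -> {predictor: accuracy score}
results = {
    "T0951": {"AlphaFold": 72.1, "GroupB": 58.3},
    "T0953": {"AlphaFold": 66.0, "GroupB": 69.4},
    "T0955": {"AlphaFold": 81.5, "GroupB": 70.2},
}
wins = tally_wins(results)  # AlphaFold takes two targets, GroupB one
```

Scaled up to 43 targets, a 25-to-3 split of wins like the one reported is what a decisive lead looks like under this kind of scoring.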

For an insider’s look at the recent CASP challenge and detailed thoughts on the significance, I recommend a read of M. AlQuraishi’s blog article.

While AlphaFold used computational techniques that have been well known for some time, the massive hardware investment behind it meant its machine-learning algorithms could be trained on vastly larger data sets than ever before. This success has led some to question everything from whether scientists are obsolete to why big pharmaceutical companies, with similar money and far longer in the field than DeepMind’s two years, can’t be as successful as an IT company.

As in the increasing number of fields where AI is succeeding, the initial shock leads many specialists to re-evaluate what they thought they knew about themselves, their field, and human potential in this technological world (and also their job security). Eventually, however, those who are not too discouraged allow their thoughts to settle and come to the realisation that this is just another, albeit very amazing and powerful, tool we’ve developed. And, if harnessed well, it can allow us to do amazing new things.

In Conclusion

For now at least, it is still humans who make the decisions based on AI results. As long as that remains the case, artificial intelligence remains a tool and, like any tool, can be used for good or evil. We could see AI used for incredible advances that we had once thought centuries away. Disease cures, human longevity, understanding of the fundamental nature of reality, interstellar travel: we simply don’t know how powerful AI can be and what wonders we can discover and create with it. On the other hand, the potential for restructuring our societies and sowing political and social chaos is also immense. With each game-changing technology we create comes renewed fear, renewed hope, and ever greater responsibility, both for our leaders and for each and every individual.

Alternate Futures

Artificial Intelligence: A View into the New Future

AI and The World of Tomorrow?

A row of cars wait, orderly and patient, as we reach the school driveway. I head for our family Auto, distinguishable by the customized chromo-flex decal on the hood, currently set to cycle through red and black arrows pointing toward the front. It’s supposed to look dramatic, tough, and it might if dad didn’t keep setting the chromo-flex body skin to cycle through a pattern of pale purple and blue.

The two side doors open outward from a centre seam as I come within range of the RFID scanner and flip my hand up, palm facing the door.

Dad’s colours change to an electric blue tech pattern as the Auto says, “Hello, sir, how are you today?” The deep, soothing voice is an immediate comfort after the crazy day.

Will and I reprogrammed the vehicle’s AI for an alternative behaviour pattern when there are no adults about. For one thing, it mimics the Artificial Intelligence that was Manifold’s partner in The Destiny Justice Force, a movie series Will and I were obsessed with a few years ago. For another, mom and dad’s AI is a nanny-bot.

“Fine, Jacob. How are you?” I answer, climbing into the spacious interior and taking a place on the circular sofa.

“As well as always, sir. Master Will, Mistress Annie, will you be joining us today?”

“Sure will Jacob.”

“Yes, thank you.”

Will and Annie climb in, spreading out to get comfortable. Our teachers and parents regularly try to impress upon us what it was like before Autos— autodrive cars— revolutionized personal transportation, but it’s just unimaginable. Apparently, there was a lot less room inside the car, and parents had to pick their kids up from school— assuming they didn’t take the bus. Most of the population has Autos now, though, except those in small, rural communities.

“Would sir like the usual visual feeds for the drive home?”

“I won’t be going home just yet, Jacob. Override destination alpha-seven-niner-delta-four. Voice code ‘singularity’ ”. Of course, I couldn’t reprogram the AI without giving myself a backdoor.

“Very good, sir. Your parents have been notified that you and Master Will will be meeting at the library. Where would you like to go?”

“Seventy-three Newton Place. There’s a flash tournament I want in on. Oh, and would you pipe my usual training feeds, please.”

“Very good, sir. Arrival will be in thirteen minutes, twenty seven seconds from the time of departure. Will that be all, sir?”

“You have any snacks, Jacob?” Will asks as I feel the car pull away from the school lot.

The circle of screens surrounding the interior light up, displaying live streams from the top gaming sites, as well as recordings from the global Regional Championships.

“I have recently been fitted with a prototype organic printer. It makes a mean batch of chocolate chip cookies. They will take five minutes to print. Would you like some?”

“Of course! Bring ’em on.”

The above excerpt is (c) 2019 Edwin H Rydberg

The preceding is an excerpt from a work-in-progress of mine, a near-future world where AI is an all-pervasive technology that is simply accepted as commonplace. This is, of course, AI as we currently experience it, not self-aware or ‘General’ AI (the development of the latter drives some of the above story).

In previous posts (7 areas that could see massive benefits from artificial intelligence, AI – Wonder tool or human replacement?, Fake News is about to get real (prevalent), AI – Boon or Bane?) I’ve written about various developments with AI, so how might they all come together? What might the future be with artificial intelligence?

The Good

As you can see in the excerpt, I suggest humans are adaptable and I believe most humans will adapt to the convenience of pervasive artificial intelligence even at the cost of privacy and security.

As a side note, artificial intelligence is not synonymous with a loss of privacy or security. Those only follow it in our world because of the big companies (and governments) that are behind the development of it. In theory, one could develop their own AI that wouldn’t have such flaws.

AI Assistants

‘Assistant’ AI (advanced versions of Alexa, Siri, or Google Home) will be accessible from every room in a house (except ‘privacy rooms’) and in auto-drive cars, and will have entered the workplace by the time today’s toddlers reach adulthood. Combined with ever-improving speech-to-text algorithms, this will mean all jobs experience dramatic change as individuals become one-person departments.

AI for Research

But such simplistic AI barely touches on the true power of the technology. Google’s DeepMind project has grown in leaps and bounds over the last few years, tackling everything from strategy games with incomplete information, to the vast complexity of protein folding, to helping manage medical services. Projects I’d love to see them take on (with unbiased training sets) are climate change, global food and water management, government management, space vehicle propulsion, and the use of gravitational ‘highways’ for interplanetary transportation.

The Human Connection

However, back on Earth, once we get over the initial response of ‘massive job loss, computers can do everything better, and the future is no longer human’, we come to realize that it is still humans who act with purpose and direction, and that this technology will actually allow us to do far more than we ever could without it. After all, the universe is an incredibly complex place, and it’s possible our brains are simply not capable of processing that complexity without assistance. The downside is, of course, that we have entered a period in our history where we no longer understand how our own technology works.

The Bad

Pervasive artificial intelligence will, without a doubt, continue the trend toward increasing human laziness and an ever-growing disconnect with both nature and our own past as we move further from both.

Changing Job Market

However, even before that we will see massive changes in the job market. With individuals able to do more, and technologies like auto-drive cars dramatically impacting the transport of goods and people, the job market at all levels will change. Professionals like doctors and lawyers won’t be safe either, as both of their jobs have a large database-retrieval component, which, at best, would mean a reorientation toward the human-centred part of the job rather than the diagnostic portion. By this time we’ll be long past the point where ‘learn to code’ is useful advice.

Somewhat ironically, perhaps, a consideration of AI’s limitations suggests that jobs in the traditional trades (although upskilled for installing new tech) will be the safest and among the last replaced.

Fake News

Also, fake news, which is constantly discussed these days, will continue to increase in volume and believability as new technologies come online. Not only will AI be able to write amazingly realistic fake news stories, it will even be able to generate videos of famous people that appear authentic. Bots are already being used to generate fake static facial images for social media, and video is not far behind.

Increasing Vulnerability

Finally, I would be remiss if I didn’t address an ever-present concern in an increasingly connected world: malicious hackers. Being so dependent on a connected world and the Internet of Things (IoT) necessarily leaves us increasingly open to cyberbullying, cyber-attack, and having our devices used for malicious purposes. Let’s hope the developers consider their builds well, and that our governments don’t try to force them to put in backdoors.

The Strange

All of the above doesn’t even touch on up-and-coming technologies like Elon Musk’s Neuralink, which intends to build a direct interface between the human brain and technology and could be the first step toward a cybernetic future. Combined with his global internet constellation Starlink, the early stages of which are currently being placed in orbit, it would mean hands-free technology access anywhere on the planet. That would forever alter humanity.

And just imagine how AI could aid the new business space-race and the space tourism industry about to take off.

In Conclusion

I, for one, am cautiously optimistic about the future with artificial intelligence and excited about the possible advances we can make with it — at least until it becomes self-aware. Perhaps I am fortunate to be entering my 50s just as the wave of AI comes online; even so, it means somehow preparing our children for what’s coming when we can’t see it clearly ourselves.

We are truly at the event horizon of the technological singularity. As usual, we’ve predicted the existence of something without having any idea of what it actually means. But humans are explorers so let’s travel to this undiscovered country.


Artificial Intelligence: Fake News is about to get real…

With the coming technological advances and the dearth of investigative journalists in the modern world, fake news is about to explode.

The Issue

Among his many other projects, uber-wealthy entrepreneur Elon Musk sponsors OpenAI, a group whose goal is to guide the development of artificial intelligence so that it is ‘nice’ and representative of peoples and positive ideas from around the world. Certainly, this is a laudable goal for the near future, for that period of technological development where artificial intelligence means advanced heuristic systems that can pass the Turing Test but are still just software we can turn on and off. In other words, the period of technological development that we seem to have entered sometime in the last year.

Some of the recent advances in AI (detailed in other posts) include aiding or replacing human health-care workers in hospitals, defeating humans at games where incomplete information is a feature (DeepMind vs Starcraft II pros), and much greater success than human-designed algorithms on the protein folding problem… and now, a highly trained AI author that can mimic genres and styles and even respond to rudimentary questions.

Journalist Activists

Over the last five-to-eight years the Western world has seen a strong and notable shift in the focus of journalists. No longer is journalism impartial and investigative. No longer do modern journalists feel their job is to get as close to the truth as possible. Instead, most seem to feel they already know the truth and have taken it upon themselves to share their interpretation of any given story with their audience, in an attempt to convince us they are right.

We can argue the whys of how this shift happened, but what is difficult to argue is how it has changed the news landscape, providing the viewing and reading public with a surge in what is now termed fake news. What was once the domain of government propaganda and tabloid rags is now proudly splayed across even what used to be the most respected news franchises.

Modern mainstream journalists are no longer journalists in anything but job title. They’re now primarily activists.

Artificial Intelligence Journalism

Into this landscape, where the truth is already relative and amorphous, enters a project by OpenAI – an artificial intelligence writer. A bot so convincing in its ability to write that it’s virtually impossible to tell it apart from a human.

OpenAI has trained an artificial intelligence they call GPT-2 on a huge dataset of writing. The result is yet another AI to be afraid of (coming after the Microsoft Twitter debacle that created a hate-bot, and the Google AIs that developed their own language). GPT-2 can write so convincingly in different genres that its creators have decided not to release the code for fear of malicious use. Specifically, they fear its use in the generation of fake news.

While they should be commended for their sense of social responsibility, OpenAI researchers must realize that their actions are only delaying the inevitable. Currently, most of the visible AI research and milestones are coming from Western organizations. However, we would be naive to think that other nations don’t have their own projects. Russia and China come to mind as two countries who would have the resources to fund such projects and would be happy to use the results to destabilize the fragile West.

Furthermore, the recent results of the investigation into Russian collusion in the US election – which revealed no actual collusion with the victor but did suggest interference, likely on both sides, in an attempt to destabilize the nation – should have revealed to all of us that there are other nations actively working toward the demise of the West.

Given these facts it’s difficult to argue that some malicious power (or even an individual with a basement full of hardware) won’t soon reproduce the OpenAI success. When that happens, we will find a huge proliferation of AI writing bots as the software floods the internet. And then the entire landscape of news, among other things, will change. When this happens it will be more important than ever that each person has done their research and has gathered a network of respectable investigative journalists they feel they can trust to do the research and report honestly on stories. Because the others, the lazy journalists and the activists, are going to do nothing more than muddy the waters ever further in their race for fame and notoriety.

The power and scope of modern heuristic algorithms (AI) is a socio-political game-changer of a magnitude equivalent to fire, the wheel, gunpowder, the printing press, and nukes – combined. In a small world filled with the deafening cacophony of every voice on the planet, AI will drown them all out and will change everything we think we know within a matter of decades. And that’s without gaining self-awareness. Never before has human adaptiveness been so tested. The coming years will truly be ‘interesting times’, as Terry Pratchett might have described them.


Artificial Intelligence: boon or bane?

This is the first in a series of posts discussing the developments, prospects, hopes, and fears surrounding Artificial Intelligence in human societies. This field of computer science has been making noticeable strides in recent months, and it is therefore important that our societies have a serious and informed conversation about it. So, without further ado, part 1 of my series on AI.

A Recent Development

In the wake of DeepMind’s (AlphaStar) defeat of professional Starcraft II players, the world once again briefly turned its short attention to the discussion of artificial intelligence. Most journalists and news sources aren’t terribly concerned with the event, considering it just another signpost in our technological history. However, others realize that it should spawn more serious conversation.

The DeepMind project, owned by Google’s parent company, is attempting to develop artificial intelligence agents that can self-learn any game given an initial human example (and life is essentially a game in this context). Known for defeating world champions at Chess and Go, the DeepMind team recently turned its attention to the e-sport Starcraft II: not only to test the AI’s decision-making speed, but also to see how well it could manage units on a very dynamic field and, most importantly, to test whether it could learn to make decisions with incomplete information (Starcraft II’s Fog of War).

The initial 10-0 defeat of the human players highlighted the AI’s strengths, but also an oversight in the training that gave it an advantage. However, even when retrained to use only the main camera to make its moves (as humans are limited to), it still performed at a professional level, although it lost the rematch 1-0. This second example is all the more impressive considering that the agent was trained from novice to professional in only one week.

But surely this is just a gimmick, harmful fun by computer nerds, isn’t it?

It might be just a cool gimmick but for a few important details that highlight a future closer than we might think — or wish for.

Several important points in this latest Deepmind experiment are:

  • after watching the initial set of human games (of all skill levels), the AI was then entirely self-trained, learning only by playing other AI agents in an ‘Alphastar’ league
  • such agents can be trained up a very short time (a week or two is sufficient and corresponds to decades, if not centuries, of human training).
  • while the agents require some serious hardware to be trained on, once developed they can run on a standard laptop
  • while there were some weaknesses in gameplay, the Alphastar agents developed and successfully utilized strategies that humans either hadn’t thought of or had collectively abandoned as sub-optimal.
  • there are organizations in the world that desire, or require (e.g. for their social order to function), such AIs to run, or ruin, countries.
Myths of AI, from the Future of Life Institute.

One thing that such AI experiments regularly highlight is the unpredictable conclusions that AIs can arrive at when left to train themselves. Indeed, when extrapolating this to the running of a society, this unpredictability could lead to greater optimization, or it could lead to what the Future of Life Institute might consider AIs having ‘misaligned goals’.

That is, when humans consider running societies, we have goals. Some of them are explicit, some of them are implicit, and some of them have to be fought over by the majority in order to have our voices heard. However, while AIs may be more efficient in managing and distributing resources, an AI leader may set up a social structure that misaligns implicit and explicit human goals (more in a moment).

Arnold Schwarzenegger as an AI assassin in the Terminator series.

After all, we ourselves are still bumbling through the creation of our social structures, trying to determine what is truly important for our survival and well-being. For example, it’s quite clear that the simplistic aim of maximizing human happiness (a goal of the early Humanist movement) is flawed, and would likely result in a society that collapsed into strife as humans sought to ‘fix’ ever smaller imperfections. So any explicit goals for an AI manager would have to be very carefully chosen.

Insane AI Ultron from the Avengers series.

Hollywood loves to use the obvious example of having an AI created to protect us and then having it decide that we are our own worst enemies. Or that protection means never allowing us to evolve or leave the planet. Let’s look at the considerations of an equally important social discussion underway in Western societies right now.

Vision, benevolent robotic AI in the Avengers series.

Safety and Equity vs. Exploration and Freedom

We are currently embroiled in a *sometimes violent* social debate on multiple fronts that effectively boils down to the right of the individual to be completely safe vs the right to have the freedom to take risks and make mistakes. This is a crucial debate because risk-taking can lead not only to great harm and even death, but also to great boons for humanity. Without the risk-taking of our ancestors we would not have any of the wonders we have now, and it’s unlikely we would even have left our African cradle. Furthermore, a safe society, while feeling more humane, is less robust, because it loses the ability to adapt to hardship and change as its members come to expect all their needs to be met by someone else.

Let’s say we decide that humans are unable to come to an agreement on this and we turn it over to an AI for management. What considerations might it make?

Depending on who programmed it, it could favour equity and safety over risk-taking and personal freedom. This could be expected to result in short-term survival and even prosperity, but long-term stagnation (survival without growth, either personal or social). While our species might survive a long time, we would do so as little more than automatons ourselves.

Conversely, too much emphasis on freedom in a hi-tech age could lead to the use by individuals of technologies that might destroy our entire species. Some middle ground is required but, of course, the devil is in the details.

It could be that an AI comes up with an original and wonderful solution to this conundrum. Or it could be that its explicit goals are misaligned with our implicit ones and it comes up with a truly horrendous solution.

For example, let’s say we give an AI like Deepmind past human societies from around the world and across history to base its models on, and we give it no human bias regarding the success of those societies, so that it can create its own criteria. And let’s say it discovers some patterns of behaviour or social structures that it equates with social degradation, and some with social success. Then it structures our society based on those models. Now, it could be that we agree with the new ideas the AI discovers and love the new society. Or it could be that the AI decides the best society is a benevolent dictatorship in which a certain segment of the population is given second-class citizenship (from some perspectives, a valid line of reasoning could be made for this to target any of: men, women, certain – or all – religions). It could even be that this latter conclusion is demonstrably, mathematically, superior. Yet most of us today would feel such a regime to be a nightmare scenario.


So we’re damned if we do, and damned if we don’t. Try to guide the AI and we introduce human biases that could destroy the experiment, don’t guide the AI and we could get a highly ‘successful’ yet nightmarish scenario. Do nothing and rogue players will force our hand by introducing their own AIs.

The recent Deepmind experiment with Starcraft II has demonstrated that AIs can make successful decisions with incomplete information and that their goals can align well with the overall goals of the game they’re playing. However, games have simple rules; life and societies do not. With Artificial Intelligence we are entering a period of our history that is truly different from any that has come before. We must tread very carefully on the path forward; all kinds of unknown dangers await.

Next week: Fake News is about to get real (prevalent)


What Lessons Can We Learn from Deepmind vs the…

What Happened?

Recently (in December, although the results were only revealed on Jan 24), a team of Artificial Intelligence agents created by the group at Deepmind — the same group that created AIs to beat the Chess and Go world champions — challenged two Team Liquid Starcraft II pro players (TLO and MaNa) to 5-game matches playing as the Protoss race, and proceeded to defeat each player 5-0. A week later, MaNa defeated a new agent 1-0 for humanity’s only win.

What Are the Details?

Deepmind self-training strategy. Human examples to begin, then competition against other AI agents to improve.

Deepmind is an organization, financially supported by Google, that creates self-learning artificial intelligence programs. They’re best known for creating the AI agents that defeated the chess and go world champions.

Since then they’ve turned their attention to Starcraft II. Not only is the e-sport an incredibly fast-paced game with a large number of aspects for the player to consider at any given time but, more importantly, it features ‘imperfect information’ or, as it’s more commonly known, The Fog of War. In other words, while in chess and go the players can see the entire board and judge their next moves using a totality of information that includes the up-to-date position of the opponent pieces, in Starcraft II the only information a player has is what their pieces can ‘see’ on the map at any given moment.

Deepmind began by training an initial AI agent using human replays from all leagues. They used this agent to spawn new agents, which then trained further by playing each other in purely AI leagues. There they learned to develop and counter their own strategies.
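This two-step pipeline (imitation from human replays, then a self-play league) can be sketched in miniature. Everything below is an illustrative toy, not Deepmind’s actual code: an ‘agent’ is reduced to a single strength number so the league dynamics are easy to see.

```python
import random

class Agent:
    """Toy stand-in for a learned policy: a single 'strength' number."""
    def __init__(self, strength):
        self.strength = strength

def imitation_bootstrap(human_replays):
    # Step 1: imitate human play. Here that is just averaging replay quality.
    return Agent(sum(human_replays) / len(human_replays))

def play_match(a, b):
    # Toy match: the stronger agent wins more often.
    p_a_wins = a.strength / (a.strength + b.strength)
    return a if random.random() < p_a_wins else b

def self_play_league(seed_agent, generations=20, league_size=8):
    # Step 2: spawn variants of the imitation agent, pit them against
    # each other, and mutate the winners to form each new generation.
    league = [Agent(seed_agent.strength + random.uniform(-0.1, 0.1))
              for _ in range(league_size)]
    for _ in range(generations):
        winners = [play_match(random.choice(league), random.choice(league))
                   for _ in range(league_size)]
        league = [Agent(w.strength + random.uniform(0.01, 0.05))
                  for w in winners]
    return max(league, key=lambda a: a.strength)

random.seed(0)
seed = imitation_bootstrap([0.3, 0.5, 0.4])  # replays of mixed skill levels
best = self_play_league(seed)
# The surviving agent ends up stronger than the imitation-only seed.
```

The point of the league structure, in this sketch as in the real system, is that agents keep meeting opponents of their own calibre, so they are forced to develop and counter their own strategies rather than plateau at human-imitation level.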

Once ready, the Deepmind team reached out to Blizzard for a pro player to test their AI agents on, and TLO was the name given. The Little One is a German pro known for his ability to play all races, and for being an all-round nice guy. They chose a Protoss vs Protoss match-up to reduce the complexity for the AI agents. A 5-game match was agreed upon and played in early December. TLO lost 5-0.

However, it was only revealed after the match that TLO was playing 5 different agents, each with preferred strategies, meaning his standard method of adapting to an opponent wasn’t effective since he was effectively playing five different opponents.

Given that TLO does not play Protoss as his main (he’s a Zerg player), Deepmind then reached out to his teammate MaNa, who was also beaten 5-0, this time by a set of agents trained for a week longer. Some interesting revelations occurred during that match, which led Deepmind to train a new agent, which MaNa managed to defeat 1-0.

What Does it Mean?

To start with, it’s important to note that the single human win was, arguably, the fairest game of the challenge, as it used a new agent, trained in one week to act only on what was showing in the camera view of the screen, just like a human player would. It appears this was done because it came to light in the 9th game (the 4th game against MaNa) that the AI had perfect micro (individual unit control) while managing three squads of units in vastly different parts of the map, something no human player could do. Presumably, while the original AIs were handicapped to human actions-per-minute (APM), they primarily used the mini-map to manage units, giving them a massive advantage over humans, who cannot do that (at least partially because the mini-map sits in a tiny corner of the screen).
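An APM handicap like this is easy to picture as a sliding-window budget on actions. The sketch below is purely illustrative: the class name, the 10-ticks-per-second clock, and the 300-APM cap are my own assumptions, not Deepmind’s published settings.

```python
from collections import deque

class APMLimiter:
    """Illustrative sliding-window cap on actions-per-minute."""
    WINDOW_TICKS = 600  # one minute of game time at 10 ticks per second

    def __init__(self, max_apm):
        self.max_apm = max_apm
        self.action_ticks = deque()

    def allow(self, tick):
        # Forget actions that have slid out of the one-minute window.
        while self.action_ticks and tick - self.action_ticks[0] >= self.WINDOW_TICKS:
            self.action_ticks.popleft()
        if len(self.action_ticks) < self.max_apm:
            self.action_ticks.append(tick)
            return True
        return False  # budget exhausted; the agent must wait

limiter = APMLimiter(max_apm=300)
# An agent trying to act on every tick (600 APM) over two minutes of play
# gets only half of its attempted actions through: 600 of 1200.
allowed = sum(limiter.allow(tick) for tick in range(1200))
```

Note that a cap like this constrains the *rate* of actions, not where on the map they land, which is why the camera-view restriction was the more meaningful levelling of the playing field.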

The other point worthy of note is that the human players commented that they could not adjust to the AI’s tactics from match to match. It came to light that this was because there were, in fact, five different AI agents, each with preferred strategies, instead of one. This also put the human players at a vast disadvantage, because it was equivalent to playing a team of five pros while knowing nothing about their opponents’ tactics, something that would rarely happen in human pro matches.

On the positive side, there were some interesting strategies that the AIs used repeatedly. While the human-like AI walled off, many of the other AIs didn’t. All AIs, however, over-saturated their gas mining in the early game, choosing to expand later than their human opponents. It became clear that this small difference in income resulted in a stronger economy, something underestimated by the human community of Protoss players. Of course, these strategies have only been tested in one type of match-up, so it remains to be seen what other interesting strategies the AI will come up with.


This was a very interesting challenge. The main difference between Starcraft II and board games, and the reason the DeepMind team chose this game, was to see if their AIs could perform at human-like capability in an environment with imperfect information. Anyone who’s played Starcraft knows the Fog of War: segments of the map you can’t see because you don’t have units there. Entire battles in WWII have been won through creative use of the fog of war, so this is no idle challenge.

Interestingly, and perhaps a bit worrying, the Deepmind agents, once trained, can be installed, fully functional, on laptop computers. That’s right. While the training phase takes a vast amount of resources (equivalent to 50 GPUs for each agent), the finished agent can then function on a device that any person can buy. Scarier still, an agent can be trained from novice to pro in a week, amounting to the human equivalent of dozens, or even hundreds, of years of playing.

While the real-world consequences of this technology are obviously frightening, in game, I’m looking forward to the pros training with AIs to develop new and interesting strategies. Perhaps more feints or double feints, and other bits of misdirection. One strategy the last AI showed itself vulnerable to, and confused by, was oracle harassment of its mineral line. Humans would have dealt with that easily while the AI was confused time and time again. Presumably, future agents trained from this one will not be fooled but it will be interesting to see how man and machine relearn the game together. And I’m looking forward to seeing a single AI agent enter a pro tournament.


5 Technologies that are more advanced than you thought

Technological advancement is a strange thing. On one hand, it’s moving far faster than we’re ready for. On the other… where are the flying cars! Amiright?

One thing that seems certain with technology is that regardless of how fast or slow it progresses, we’re never ready for the changes it brings and we’re largely unable to anticipate how it, and we, will interact with secondary enabling technologies.

The mobile phone is an excellent example of this. Although the technology developed from the shoe-sized box to the pocket-sized smartphone over a few decades, we were still not ready for the social changes that happened when smartphones were married with social media.

So, with all kinds of technology quickly maturing, what other surprises might be in store for us? Well, here’s a countdown of five technologies that are far more developed than you probably realize. We can only imagine how they might change our world.

5. Robot Servants

Regardless of how old you are, you’ve grown up with some awareness and expectation of robots. Whether it’s Rosie the maid from The Jetsons, Robot B-9 from Lost in Space, the Terminators, Johnny Five, the Blade Runner replicants, or Wall-E, each generation since WWII has lived with the promise and fear of robots. And yet, we still don’t have them, except as moving hands in manufacturing industries. Where are the humanoid robots that can vacuum, do the dishes, and wash the car?

Well, they might not be as far away as they seem. Japan and Korea are well known to have been working on robots for some time, and with a respectable degree of success — we’ve all seen them on the news. But did you know that the US military has robot dogs that are quite amazing in their ability to navigate their environments, and that may be generally available within a few years for carrying objects and helping around the home?

Going one step further, we now have the first robot granted citizenship of a country. In October 2017, Sophia, a seventh-generation robot from Hanson Robotics, was given citizenship in Saudi Arabia, where it addressed the assembly, saying that it wants to work towards friendship and compassion between humans and robots.

During the Q&A it still seemed fairly robotic, and its language processor could take some advice from Amazon Alexa (something that is, in itself, interesting, as the voice recognition was developed by Google’s parent company, Alphabet Inc.).

Sophia may not be the paragon of burgeoning artificial intelligence that the creators are suggesting, but it is a good example of how close we’re getting to having humanoid robots walking among us.

Sophia does, however, raise the question: just how close are we to a true general artificial intelligence?

4. General Artificial Intelligence

For some, Artificial General Intelligence (AGI) cannot arrive fast enough. For others, if it ever arrives it will be too soon. So, how close are we?

Well, most experts suggest that if it’s possible, we’re still decades away. Although the truth is no one really knows, as no one fully understands what technical breakthroughs are required to achieve AGI. However, in recent years there have been some interesting experiments by the big tech companies that have given rise to at least a little concern among the opponents of AGI.

  • Microsoft’s attempt at training an AI using Twitter in 2016 went horribly wrong when it started spouting hateful bigotry within 24 hours of being turned loose on the internet — a warning for new users of this social media, perhaps? It’s also a great warning of what could go wrong for us if we mess this up.
  • Google has been busy with AI development, working on a general gaming AI that can learn any game just by watching. Furthermore, in 2017 they developed an AI that could make new AIs. When given the task of creating an object-recognition AI for visual systems, it produced one superior to any human attempt.
  • The Google AI project Deep Mind has also surpassed its own record at the game of Go. The newest Go AI beat the reigning world champion (the previous AI) 100 games to 0 through a system of iterative trial and error. This is a clear example of how AIs can dramatically improve from one generation to the next.
  • In 2017, a Facebook AI invented its own language as part of learning for another task. This sufficiently worried researchers that they shut it down.

So, these recent attempts to develop a general AI would seem to each come with a particular warning. Of course, being human, we will push ahead anyway. The real challenge, it seems, will be for us to figure out when an AI is truly self-aware. Hopefully, we will have built a positive rapport with them by that time, or we may still be sitting in committees when our obsolescence knocks on the door.

3. Genetic Manipulation

Since the advent of biotechnology in the early 1970s, the promise of genetic manipulation to correct and cure genetic problems such as inherited diseases and cancer has been left unfulfilled, despite amazing advances in the technology. Now, a new technique called CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats), discovered about ten years ago and developed into a practical application only five years ago, may finally make the dream real.

The tools and techniques of CRISPR have been released for use among all scientists, which has led to rapid refinement of the technique even in the short time since its initial development. Who knows, we might be coming to the end of genetic diseases.

2. Artificial Womb

We’re currently in the middle of the largest gender war in the history of our species. Moreover, technology of various kinds is changing our definitions of gender, and indeed, our definition of ourselves. But we’re not done yet because in order for:

  • women to be truly free of the household (if they want to be)
  • men to have reproductive freedom for the first time in history
  • humans to colonize space

we need to develop artificial wombs.

Estimates for the development of artificial wombs for humans place the technology 30-50 years away, depending on funding and social importance. However, researchers have taken important steps forward in being able to bring extreme-preterm lambs to full term in what could be called an artificial womb. Already, a second lamb has been born using this technique.

Admittedly, the artificial womb looks more like something you’d store your lunch in, but it is a huge step toward actual artificial wombs — and the next gender war.

1. Flying cars

Since the 1950s we’ve been waiting… and waiting… and waiting. But no flying cars.

Until now.

Finally, with the development of drone technology, increased computing power, and better electric batteries and power systems, it seems the dream may finally be just around the corner.

A wide variety of companies have flying cars in development. A few are even ready for sale this year, although these are more flying and less car. Still, some of the designs are proving to be very interesting, with everything from personal electric air transportation, to hybrid plane/cars, to drone-transportation services.



Now we just need the glacial pace of Western politics to speed up and get on this before they fall too far behind (in England they’re still planning to build a high-speed rail system by 2035! – by then, we won’t even need rail). What they’ll need to do is decide on regulations for piloting, safety, whether all such vehicles should be autodrive, and how to control the borders.

Let’s hope they can sort out the details quickly, because I, for one, am looking forward to a world of personal flying cars!


We’re entering a time of great advancement, and great change. I’ve listed five technologies that will change our world in the coming decades, but there are many more. If this was interesting to you, why not explore other alternate futures by signing up to my newsletter?

Insight and longevity,

Edwin H Rydberg