A Road to Artificial General Intelligence

Artificial General Intelligence (AGI). It’s the stuff of far future science fiction or modern nightmares. Or far future science fiction nightmares.

There are a great many people who tell us it can’t be done. At least not with the technology we currently have. And many suggest there is no path from where we are now to that distant place. But it seems to me we’re already there. We just have to put all the pieces together.

So, on the unlikely chance that no one else in the world has realized this, I’ll post my thoughts on the road to self-aware machine intelligence (so-called Artificial General Intelligence) here, because, almost certainly, the people who have realized it are already quietly working on it. If, on the other hand, the information presented here is the key piece that’s been missing, I suppose I’ll go down in history as the father of the robot apocalypse.

The dream and promise of artificial general intelligence (AGI) has been a pervasive one throughout my life. From the droids of Star Wars and Star Trek, to the rise of computers and the first AI to defeat a chess grandmaster, AI has captured our imaginations in both fiction and, increasingly, the real world.

But artificial intelligence of the kind that wins StarCraft games and predicts protein folds (narrow AI), while impressive, is still a long way from a true, self-aware, artificial general intelligence. A great many people wonder when we will achieve such a feat of creation. An equally large number believe it to be so far distant that the horizon is not yet visible. And an increasing number fear the day we ever do (not without reason, but more on that in a future article).

Recently, the rather well-known author and digital rights activist Cory Doctorow made the assertion that he could not see a path to AGI from the current level of artificial intelligence. And he’s hardly alone. Objections range from claims that certain biological structures are necessary for self-awareness, to the argument that we cannot understand or replicate whatever mystical aspect of life creates such awareness. Essentially, they claim there is just too much of a gap between present technology and what is needed to bring about that future birthing of new intelligence.

I beg to differ.

Generalizing the Structure of the Human Brain

I believe that many of the critics are correct when they suggest we should be looking to the structure of the brain for our inspiration with AGI. Their failure of vision, in my opinion, comes from looking too closely.

We’re not trying to recreate the organic brain; we’re trying to create a machine one with similar properties to the organic one. Why would we expect them to be precisely the same? Instead, they should share analogous elements that do similar things, implemented in whatever way is practical for each.

Very generally, that means AGIs need:

  • semi-autonomous subroutines for managing necessary and complex systems that don’t generally require conscious decision-making
  • short- and long-term foundational goals for the entity
  • training time for the various levels of AI to learn to work together

Getting a Little More Specific

The human brain is not a single entity that does everything it needs to do. That has become more and more clear over the course of this century. Rather, it’s a series of specialized, physically interconnected structures working together toward a single goal. For biological entities, that goal is survival of both the individual and the species: procuring food, escaping predators and harsh environmental conditions, and reproducing. For humans, we can also add the necessity of creating shelter and developing tools, as we have little in the way of natural climate defences or innate weaponry. While other animals also build shelters and use tools, humans do both far better than any other creature on the planet.

Much as with the human brain, there’s no reason to expect that an artificial, self-aware brain would not also develop as an emergent property of a combination of specialized artificial intelligences that all provide information toward the limited goals of the greater entity. At least, given enough environmental input to assimilate, and enough time to assimilate it. Keep in mind that it takes a human child several years of processing before their brain can make enough sense of the world to recognize themselves as a distinct entity.

So, if one were to break human intelligence, and especially the physical brain, down into conceptual components, we might characterize some of these heuristic components as managing the physical support structures for the following (a rough software sketch follows the list):

  1. Gas, liquid, and solid chemical sensing and interpretation
  2. Audio sensing and interpretation
  3. Visual sensing and interpretation
  4. Tactile sensing and interpretation
  5. Mobility and orientation
  6. Physical interaction with the environment
  7. Communication
  8. Energy management (consumption and generation)
  9. Waste management (gas, liquid, and solid)
  10. Reproduction
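
As a loose illustration only (every name below is invented for this post, not taken from any existing system), the components above might map onto a set of narrow AIs that share a common interface, each one responsible for sensing and interpreting a single subsystem:

    from dataclasses import dataclass
    from typing import Any, Protocol


    @dataclass
    class SubsystemReport:
        """What a first-level AI passes upward: interpreted, compressed signals."""
        source: str                # e.g. "vision", "audio", "energy"
        urgency: float             # 0.0 (ignorable) to 1.0 (act now)
        summary: dict[str, float]  # interpreted observations, keyed by concern


    class SubsystemAI(Protocol):
        """A narrow AI dedicated to one of the ten components listed above."""

        def sense(self) -> Any:
            """Read raw signals from the subsystem's sensors or hardware."""
            ...

        def interpret(self, raw: Any) -> SubsystemReport:
            """Turn raw signals into a report the higher levels can use."""
            ...

Each of the ten components (chemical sensing, audio, vision, touch, mobility, manipulation, communication, energy, waste, reproduction) would then be one implementation of this hypothetical interface.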

Our body contains the physical systems that do the work, and our brain has a corresponding autonomous or semi-autonomous interpretive system for each. Our brain manages each of these fundamental processes and interprets the physical signals in relation to its specific sub-system (some of which are further broken down for more specific management). Each interpretive system can be thought of as its own artificial intelligence, trained to manage one specific process very well.

Furthermore, each component, which can be localized or distributed throughout the physical brain, is intimately linked with its physical counterpart(s). As the individual components of the brain train and grow, separately and together, they evolve a higher-level neurological map of the entire entity: the brain eventually develops intimate knowledge of the body’s signal-transmission pathways on multiple levels.

As an aside, this may be one reason why some people feel phantom limb pain after an amputation. It also suggests why a brain or head transplant would face more challenges than ‘just’ connecting several thousand nerves: we would need some way to replace the brain’s neurological body map (or ‘residual self image’, as The Matrix described it). Otherwise, we could expect the patient to experience anything from massive confusion and disorientation to outright full-body pain.

Time is of the Essence

Of course, an entity built from the above components might function, but it would not be autonomous or self-aware, because it is still missing other necessary pieces.

An autonomous entity requires a system for processing and prioritizing the inputs from its components in relation to the entity’s goals. In other words, it needs a way to put the various inputs into a common frame of reference and act on them.

In practice, that means a second-level narrow AI to manage the inputs from the lower-level, AI-managed component systems and optimize them for the primary goals of the entity.
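
As a minimal sketch of what such a second-level manager might look like, reusing the hypothetical SubsystemReport from the earlier sketch and a simple weighted scoring rule as a stand-in for whatever learned prioritization a real system would use:

    def manage(reports: list[SubsystemReport], goals: dict[str, float]) -> str:
        """Second-level ('subconscious') manager: weigh subsystem reports
        against short-term goals and pick one immediate action.

        `goals` maps goal names (e.g. "maintain_energy", "avoid_damage") to
        weights; the scoring rule below is a placeholder, not a design claim.
        """
        best_action, best_score = "idle", 0.0
        for report in reports:
            for goal, weight in goals.items():
                # Relevance of this report to this goal, scaled by its urgency
                # and by how much the entity currently cares about the goal.
                score = report.urgency * weight * report.summary.get(goal, 0.0)
                if score > best_score:
                    best_action = f"respond:{report.source}:{goal}"
                    best_score = score
        return best_action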

Such a second-level manager would allow the creation of a fully autonomous entity. But it would not be self-aware.

It is fitting that some of the brain regions implicated in our self-awareness are called the temporal lobes: self-awareness requires an expanded awareness of time, so that the entity can understand its place in the world and generate goals beyond those that are immediately necessary.

It seems reasonable to speculate that self-awareness evolved as a defence mechanism in a creature with no natural defences. A broader perspective on time results in a greater ability to anticipate and react to future changes in the environment, but it requires a greater awareness of the entity’s place in the world to take advantage of that knowledge.

And these aspects of the temporal system feed back on each other, by necessity. It then seems reasonable to consider that consciousness arose from this system, together with the ability to supersede low-risk, short-term survival goals that may lead to poor long-term survival, in favour of riskier short-term goals with a higher chance of long-term survival. (As an aside, the great migration of roughly 50,000 years ago, and the subsequent spread of humans to every corner of the globe, could be considered the result of such a system.)

Putting it all Together

In the complete, developed, self-aware entity, these semi-autonomous systems are all integrated into a greater whole that can be considered to have three main levels. To put this together in the form of an artificial entity, we might describe it as follows.

The first and most basic level consists of the narrow AIs that manage the processes interacting directly with the environment. They receive and transmit information from and to the physical components of the body, make basic-level decisions, and pass their information on to the next level.

The second level is the first manager AI (also a narrow AI), or what we would call the subconscious. This AI interprets the information passed up from all the lower AIs and makes short-term, goal-oriented (e.g. immediate, survival-based) decisions.

The uppermost level is the second manager AI (still a narrow AI), and the one we would call the conscious mind, should it be trained long enough, and under the right conditions, to develop that state. It receives more selective information from the entity’s primary systems, as well as some of the interpretive information from the subconscious. It then makes longer-term, goal-oriented decisions and, eventually, learns to understand the world and its place in it.

All three levels are what we currently call narrow artificial intelligence: the kind DeepMind routinely develops to defeat StarCraft II pros in limited-information scenarios, or to upend the world of protein fold prediction. I propose, however, that Artificial General Intelligence is an emergent property that would arise from the correct structure and training of these three levels of narrow AI.
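
Purely as an illustrative sketch of how those three tiers might be composed (again, the class and method names are invented here, and the salience filter is a placeholder):

    class Entity:
        """Three tiers of narrow AI composed into one hypothetical agent."""

        def __init__(self, subsystems, subconscious, conscious):
            self.subsystems = subsystems      # tier 1: sensing/interpreting narrow AIs
            self.subconscious = subconscious  # tier 2: short-term (survival) manager
            self.conscious = conscious        # tier 3: long-term goal manager

        def step(self):
            # Tier 1: each subsystem senses and interprets its slice of the world.
            reports = [s.interpret(s.sense()) for s in self.subsystems]

            # Tier 2: the subconscious makes the immediate, survival-level decision.
            reflex = self.subconscious.decide(reports)

            # Tier 3: the conscious level receives only the more salient reports,
            # plus the subconscious interpretation, and may override it in favour
            # of longer-term goals.
            salient = [r for r in reports if r.urgency > 0.5]  # placeholder filter
            plan = self.conscious.decide(salient, reflex)

            return plan if plan is not None else reflex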

In Conclusion - There's Just One More Thing...

To summarize, I suggest that a true artificial general intelligence could be developed, even with today’s technology and nothing more than a three-tiered structure of narrow AIs, through the creation of systems analogous to the human brain and body, as described above.

Briefly, that means an AGI would require:

  • systems to interrogate and interact with the environment (whichever environment the entity exists in) and to manage life support
  • dedicated AI for each sub-system
  • a ‘governing AI’ to interpret and utilize the information from the dedicated AIs in pursuit of the short-term motivating goals of the entity
  • a ‘command AI’ for overriding and re-interpreting the ‘governing AI’s directives in relation to long-term motivating goals

From this, and with appropriate training of the integrated systems, I propose that an AGI would eventually arise, much as self-awareness eventually arises in a human child.

All levels of systems are necessary, as self-awareness is a term that is relative to the entity’s environment and its interpretation thereof. However, while an autonomous entity could arise from only the lower two tiers of processes, I propose the top level is absolutely crucial for the development of a self-aware entity.

However, even after bringing the component systems together, there remains one more important aspect to the development of a self-aware entity. All the heuristic components, i.e. the three levels of narrow AI, must be trained together so they can meld and evolve in a manner that is both useful and unique to each individual entity. That is, while we may be able to copy an AGI once the first one has been created (and even that is arguable: we may end up with machine psychopaths instead), the first entity will require time to grow and train, just like a human child, especially if it is to have any understanding and appreciation of the complexity of other lifeforms such as humans.
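
In code terms, and again only as a hedged sketch rather than a real training recipe, ‘training the three levels together’ might look like a single developmental loop in which every tier adapts to the same stream of experience, instead of being trained separately and bolted together afterwards. The environment object and the update methods below are stand-ins for whatever learning machinery the real tiers would use:

    def grow(entity: Entity, environment, lifetime_steps: int) -> None:
        """Joint 'developmental' training loop (illustrative only).

        Because every tier learns from the same shared experience, the higher
        levels come to model what the lower levels actually do, rather than an
        idealized version of them.
        """
        for _ in range(lifetime_steps):
            action = entity.step()                # all three tiers act, as sketched above
            feedback = environment.react(action)  # consequences of that action

            # Every tier updates from the same shared stream of experience.
            for subsystem in entity.subsystems:
                subsystem.update(feedback)
            entity.subconscious.update(feedback)
            entity.conscious.update(feedback)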

While the emergent property of self-aware intelligence cannot be assured with such a recipe, it does seem to be one that works for humans and, as of the time of writing, I don’t see why it wouldn’t stand a good chance of working for machines.

Your Thoughts?

What do you think? Does this seem plausible? Is artificial general intelligence potentially closer than many think? Stop by my Facebook page and let me know your thoughts.
