A Road to Artificial General Intelligence

Artificial General Intelligence (AGI). It’s the stuff of far future science fiction or modern nightmares. Or far future science fiction nightmares.

There are a great many people who tell us it can’t be done. At least not with the technology we currently have. And many suggest there is no path from where we are now to that distant place. But it seems to me we’re already there. We just have to put all the pieces together.

So, on the unlikely chance that no-one else in the world has realized this, I’ll post my thoughts on the road to self-aware artificial intelligence here, because, almost certainly, the people who have realized it are working on it in secret.

The dream and promise of artificial general intelligence has been a pervasive one through my life. From the droids of Star Wars and Star Trek to the rise of computers and the first AI to defeat a chess grandmaster, AI has captured our imaginations in both fiction and, increasingly, the real world.

But artificial intelligence of the kind that wins StarCraft games and predicts protein folds, while impressive, is still a long way from a true, self-aware, artificial general intelligence. A great many people wonder when we will achieve such a feat of creation. An equally large number believe it to be so far distant that the horizon is not yet visible.

Recently, the rather well-known author and digital rights activist Cory Doctorow made the assertion that he could not see a path to AGI from the current level of artificial intelligence. And he’s hardly alone. Objections range from the claim that self-awareness requires biological structure to the claim that we cannot understand, let alone replicate, whatever mystical aspect of life creates such awareness. Essentially, the critics say there is just too much of a gap between present technology and that future birthing of new life.

I beg to differ.


Generalizing the Structure of the Human Brain


I believe that many of the critics are correct when they suggest we should be looking to mimic the structure of the brain for our inspiration with AGI. Their failure of vision, in my opinion, comes from looking too closely.

We’re not trying to recreate the organic brain; we’re trying to create a machine brain with similar properties to the organic one. Why would we expect them to be precisely the same? Instead, they should have similar elements that do similar things, but in ways that differ in practice.

Very generally, that means AGIs need:

  • semi-autonomous subroutines for managing necessary and complex systems that don’t generally require conscious decision-making
  • short- and long-term foundational goals for the entity
  • training time

Getting a Little More Specific


The human brain is not a single entity that does everything it needs to do. That has become more and more clear over the course of this century. Rather, it’s a series of specialized structures, physically interconnected and working together toward a single goal. For biological entities, that goal is survival of both the individual and the species: procuring food, escaping predators and harsh environmental conditions, and reproducing. For humans, we can also add the necessity of creating shelter and developing tools, as we have little in the way of natural climate defences or innate weaponry. While other animals also build shelters and use tools, humans do both far better than any other creature on the planet.

Just as with the human brain, there’s no reason to expect that an artificial, self-aware brain would not also develop as an emergent property from the combination of specialized artificial intelligences that all feed information toward the single goal of the greater entity. At least, given enough environmental input to assimilate. Keep in mind that it takes a human child several years of processing before their brain is capable of making enough sense of the world to recognize themselves as a distinct entity.

So, if one were to break down ‘human intelligence’, and especially the physical brain, we might characterize some of the heuristic components as providing management for the physical support structures of the following:

  1. Gas, liquid, and solid chemical sensing and interpretation
  2. Audio sensing and interpretation
  3. Visual sensing and interpretation
  4. Tactile sensing and interpretation
  5. Mobility and orientation
  6. Physical interaction with the environment
  7. Communication
  8. Energy management (consumption and generation)
  9. Waste management (gas, liquid, and solid)
  10. Reproduction

We have components in our brains for managing each of these fundamental processes and interpreting the physical signals in relation to those specific sub-systems (and many of them are broken down further for management). Each interpretive system can be thought of as its own artificial intelligence, trained to manage a specific process very well.
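As a very loose illustration of that idea, here is a minimal sketch in Python. Everything in it is hypothetical (the Signal and SubsystemAI names are mine, not an existing library); it only shows the shape of ‘one specialized AI per subsystem, each behind a common interface’, not how any of them would actually be trained.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field


@dataclass
class Signal:
    """An interpreted reading passed up from a subsystem."""
    source: str            # e.g. "audio", "visual", "energy"
    payload: dict = field(default_factory=dict)  # raw readings from the body
    priority: float = 0.0  # how urgently this needs attention (0..1)


class SubsystemAI(ABC):
    """One specialized AI per process: sensing, mobility, energy, and so on."""

    @abstractmethod
    def interpret(self, raw: dict) -> Signal:
        """Turn raw physical readings into an interpreted Signal."""


class AudioAI(SubsystemAI):
    def interpret(self, raw: dict) -> Signal:
        # A real subsystem would run a trained model here; this placeholder
        # just flags loud events as higher priority.
        loudness = raw.get("loudness", 0.0)
        return Signal("audio", raw, priority=min(1.0, loudness / 100.0))
```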

Furthermore, each component, which can be localized or distributed throughout the physical structure of the brain, is intimately linked with its physical counterpart(s). As the individual components of the brain train and grow, separately and together, they evolve a higher-level neurological map of the entire entity (the brain eventually develops intimate knowledge of the signal transmission pathways of the body).

As an aside, this is one reason why some people feel phantom limb pain after an amputation. It’s also why a brain or head transplant would face more challenges than ‘just’ connecting several thousand nerves. We would need to find some way to replace the brain’s neurological body map. Otherwise, we could expect the patient to feel anything from massive confusion and disorientation to outright full-body pain.

Time is of the Essence


Of course, simply creating an entity with the above components might produce a functioning entity, but it would not be autonomous, because it is still missing another necessary piece. An autonomous entity requires a system for processing and prioritizing the inputs from its components in relation to the entity’s goals. Essentially, it requires a system for putting the various inputs into a frame of reference and acting on them.
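To make that concrete, a toy version of such a prioritizing step might look like the following. Again, this is a hypothetical sketch building on the Signal objects above; the goal weights are just illustrative numbers.

```python
def prioritize(signals, goal_weights):
    """Rank subsystem signals by urgency, scaled by how much the entity's
    current goals care about each source."""
    return sorted(
        signals,
        key=lambda s: s.priority * goal_weights.get(s.source, 0.1),
        reverse=True,
    )


# e.g. an entity running low on power weights energy signals above audio ones:
# prioritize(signals, {"energy": 0.9, "audio": 0.2})
```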

This would allow the creation of a fully autonomous entity. But it would not be self-aware.

There’s a reason we call the evolutionarily newest parts of the brain, the parts largely responsible for our self-awareness, the temporal lobes. Self-awareness requires an expanded awareness of time in order for the entity to understand its place in the world.

It seems reasonable to speculate that self-awareness evolved as the defence mechanism of a creature that had no natural defences. A broader perspective of time results in a greater ability to anticipate and react to future changes in the environment. But it requires a greater awareness of the entity’s place in the world in order to take advantage of that knowledge.

And these aspects of the temporal system feed back on each other, by necessity. Consciousness then arises from this system and from the necessary ability of the entity to supersede low-risk, short-term survival goals that may result in poor long-term survival, in favour of riskier short-term goals that have a higher chance of long-term survival. The great migration of 50,000 years ago could be considered a result of such a system.

Putting it all Together


In the complete, developed, self-aware entity, these semi-autonomous systems are all integrated into a greater whole that can be considered to have three main levels. To put this together in the form of an artificial entity, we might describe it as follows.

The first and most basic level consists of the AIs that manage the processes that interact directly with the environment. They receive and transmit information to and from the physical components of the body, make basic-level decisions, and pass their information on to the next level.

The second level is the first manager AI, or what we would call the subconscious. This AI interprets the information passed up from all the lower AIs and makes short-term, goal-oriented (e.g. survival) decisions.

The uppermost level is the second manager AI, and the one we would call the conscious mind, should it be trained for long enough to develop that state. It receives more selective information from the entity’s primary systems, as well as some of the interpretive information from the subconscious. It then makes longer-term, goal-oriented decisions and, eventually, learns to understand the world and its place in it.
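Continuing the hypothetical sketch from above, those two manager levels might be drawn like this. The thresholds and goal weights are arbitrary placeholders to show the flow of information, not a real design.

```python
class SubconsciousAI:
    """Second level: interprets all subsystem signals and makes short-term,
    survival-oriented decisions."""

    def decide(self, signals, goal_weights):
        ranked = prioritize(signals, goal_weights)
        actions = [f"handle:{s.source}" for s in ranked if s.priority > 0.5]
        # Only unusual or urgent signals are passed upward to the top level.
        escalated = [s for s in ranked if s.priority > 0.8]
        return actions, escalated


class ConsciousAI:
    """Top level: receives selected signals plus the subconscious's
    interpretations, and steers the long-term goals."""

    def reflect(self, escalated, history):
        history.extend(escalated)
        # A real system would build a model of itself and its world here; this
        # placeholder just nudges the long-term goal weights as experience grows.
        return {"energy": 0.5, "audio": 0.2 + 0.01 * len(history)}
```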

In Conclusion - There's Just One More Thing...


To summarize, I suggest that a true Artificial General Intelligence could be developed, even with today’s technology, by incorporating systems analogous to those of the human brain and body, as described above.

Briefly, that means an AGI would require:

  • systems to interrogate and interact with the environment (whichever environment the entity exists in) and to manage life support
  • dedicated AI for each sub-system
  • a ‘governing AI’ to interpret and utilize the information from the dedicated AIs in pursuit of the short-term motivating goals of the entity
  • a ‘command AI’ for overriding and re-interpreting the governing AI’s directives in relation to long-term motivating goals

From this, and with appropriate training of the integrated systems, I propose that an AGI would eventually arise, just as self-awareness eventually arises in a human child.

All levels of systems are necessary, as self-awareness is a term that is relative to the entity’s environment and its interpretation thereof. However, while an autonomous entity could arise from only the lower two tiers of processes, I propose the top level is absolutely crucial for the development of a self-aware entity.

However, even after bringing together the component systems, there remains one other important aspect to the development of a self-aware entity. All the heuristic components must be trained together so they can meld and evolve in a manner that will be both useful and unique to each individual entity. That is, while we may be able to copy an AGI once it is created the first time (and that’s arguable – we may end up with machine psychopaths), the first entity will require time and patience to grow and train. Just like a human child.
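As a final hypothetical sketch, tying the three levels together: the whole stack is ‘grown’ in a single loop over experience rather than each piece being trained in isolation. The environment object with sense() and act() methods is assumed here, not provided.

```python
def develop(subsystems, subconscious, conscious, environment, steps):
    """Run the integrated entity for many ticks of experience, letting the
    upper level gradually reshape the goals the lower levels act on."""
    goal_weights = {"energy": 0.5, "audio": 0.5}
    history = []
    for _ in range(steps):
        raw = environment.sense()                          # one tick of input
        signals = [ai.interpret(raw) for ai in subsystems]
        actions, escalated = subconscious.decide(signals, goal_weights)
        environment.act(actions)
        goal_weights.update(conscious.reflect(escalated, history))
    return history
```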

While the emergent property of self-aware intelligence cannot be assured with such a recipe, it does seem to be one that works for humans.


Your Thoughts?

What do you think? Does this seem plausible? Is artificial general intelligence potentially closer than many think? Stop by my Facebook page and let me know your thoughts.
