In this post, I talk about Peter Voss’s paper. I first read about Peter’s theory-of-mind approach in Chapter 1 of the Artificial General Intelligence textbook.
I also cover much of what I learned from this paper in this video.
In this paper, Peter discusses the essential properties of an AGI system and the best way to achieve them. The paper is a very interesting read. It starts with the definition of intelligence Peter works with, then covers the core requirements of general intelligence and its advantages over narrow AI. Some very interesting points from the paper:
a) Pattern Processing: Peter argues, and I agree, that pattern processing is a core requirement. Our own human intelligence starts with great pattern processing: we are good at finding patterns not just in our environment but also in our own thoughts and behavior. So underneath any AGI architecture, one will need a strong and robust pattern extractor.
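To make the idea of a "pattern extractor" concrete, here is a deliberately tiny sketch of my own (not a mechanism from the paper): finding the most frequently repeated fixed-length subsequences in a stream of observations.

```python
from collections import Counter

def frequent_patterns(seq, n=3, top=3):
    """Return the `top` most common length-`n` subsequences in `seq`."""
    grams = Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))
    return grams.most_common(top)

# 'abc' repeats three times in this stream, so it surfaces as the top pattern.
stream = "abcabcabxabc"
print(frequent_patterns(stream))
```

A real AGI-grade pattern extractor would of course handle noisy, hierarchical, and temporal patterns, but even this toy shows the basic move: turn raw sequences into ranked regularities.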
b) Adaptive Data Structures: Whatever knowledge an intelligent system observes or acquires from its surroundings has to be stored somewhere and worked upon. Classically we have worked with static databases and data structures, but a system capable of self-learning should be able to modify its own knowledge, which is only possible with adaptive data structures. In humans, the synaptic connections between neurons are constantly changing, so the brain itself is always being reshaped. Peter mentions that his team used the Growing Neural Gas method. Self-organizing data structures are another basic problem we need to work on alongside a strong pattern extractor.
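To give a feel for what an adaptive data structure looks like, here is a minimal sketch of Growing Neural Gas (after Fritzke, 1995). This is my own toy illustration, not the implementation Voss describes, and the parameter values are arbitrary: nodes drift toward the inputs, edges age and are pruned, and new nodes are inserted where accumulated error is highest, so the structure grows to fit the data.

```python
import math
import random

class GrowingNeuralGas:
    def __init__(self, eps_b=0.2, eps_n=0.006, age_max=50,
                 insert_every=100, alpha=0.5, decay=0.995):
        self.eps_b, self.eps_n = eps_b, eps_n          # winner / neighbor learning rates
        self.age_max, self.insert_every = age_max, insert_every
        self.alpha, self.decay = alpha, decay          # error reduction factors
        # start with two random units in the unit square
        self.pos = {0: [random.random(), random.random()],
                    1: [random.random(), random.random()]}
        self.err = {0: 0.0, 1: 0.0}
        self.edges = {}                                 # frozenset({a, b}) -> age
        self.next_id, self.step = 2, 0

    def _dist2(self, a, x):
        return sum((ai - xi) ** 2 for ai, xi in zip(a, x))

    def _neighbors(self, n):
        return [next(iter(e - {n})) for e in self.edges if n in e]

    def fit_one(self, x):
        self.step += 1
        # 1. find the nearest (s1) and second-nearest (s2) units
        s1, s2 = sorted(self.pos, key=lambda n: self._dist2(self.pos[n], x))[:2]
        # 2. accumulate error at the winner; move it (and its neighbors) toward x
        self.err[s1] += self._dist2(self.pos[s1], x)
        for i in range(len(x)):
            self.pos[s1][i] += self.eps_b * (x[i] - self.pos[s1][i])
        for n in self._neighbors(s1):
            for i in range(len(x)):
                self.pos[n][i] += self.eps_n * (x[i] - self.pos[n][i])
        # 3. age edges incident to s1, then refresh (or create) the s1-s2 edge
        for e in list(self.edges):
            if s1 in e:
                self.edges[e] += 1
        self.edges[frozenset((s1, s2))] = 0
        # 4. prune old edges and any nodes left isolated
        for e in [e for e, age in self.edges.items() if age > self.age_max]:
            del self.edges[e]
        for n in [n for n in list(self.pos) if not self._neighbors(n)]:
            if len(self.pos) > 2:
                del self.pos[n]; del self.err[n]
        # 5. periodically grow: insert a node between the highest-error
        #    node and its highest-error neighbor
        if self.step % self.insert_every == 0:
            q = max(self.err, key=self.err.get)
            nbrs = self._neighbors(q)
            if nbrs:
                f = max(nbrs, key=self.err.get)
                r = self.next_id; self.next_id += 1
                self.pos[r] = [(a + b) / 2 for a, b in zip(self.pos[q], self.pos[f])]
                del self.edges[frozenset((q, f))]
                self.edges[frozenset((q, r))] = 0
                self.edges[frozenset((r, f))] = 0
                self.err[q] *= self.alpha
                self.err[f] *= self.alpha
                self.err[r] = self.err[q]
        # 6. decay all accumulated errors
        for n in self.err:
            self.err[n] *= self.decay

# usage: feed samples from a ring; the network grows to trace its shape
random.seed(0)
gng = GrowingNeuralGas()
for _ in range(1000):
    a = random.uniform(0.0, 2.0 * math.pi)
    gng.fit_one([0.5 + 0.3 * math.cos(a), 0.5 + 0.3 * math.sin(a)])
```

The key contrast with a static data structure is that the topology itself is learned: nothing fixes the number of nodes or their connections in advance.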
c) Contextual and grounded concepts versus hard-coded symbolic concepts: If we are ever to understand an AGI, or even hold a conversation with one, it will only be possible if it understands the world the same way we do. We understand the world by observing objects and how they behave when we perform actions on them. Later, these behaviors of objects become concepts in our heads. E.g., when we were kids we dropped objects all the time, so we know that if we drop something in mid-air, it will hit the surface. This observation later becomes the concept ‘gravity’ in our minds, and as we grow we can learn more about it. I think this grounded learning is essential for understanding the world, and if we want an AGI that helps us, it will first have to understand our world.
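The "dropping objects" example can be sketched in code. This is my own toy illustration, not anything from the paper: instead of hard-coding a symbol like `gravity`, the learner accumulates (action, outcome) observations and its "concept" is just the regularity it has extracted from them.

```python
from collections import defaultdict

class GroundedLearner:
    """Forms expectations from observed (action, outcome) pairs
    rather than from hard-coded symbolic rules."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, action, outcome):
        self.counts[action][outcome] += 1

    def expect(self, action):
        # the learned "concept": the most frequently observed outcome
        outcomes = self.counts[action]
        return max(outcomes, key=outcomes.get) if outcomes else None

learner = GroundedLearner()
for _ in range(20):
    learner.observe("drop", "falls")     # dropping things as a kid
learner.observe("drop", "floats")        # the occasional balloon
print(learner.expect("drop"))
```

The point of the sketch is that the expectation "dropped things fall" is grounded in experience and remains revisable as new observations arrive, which is exactly what a hard-coded symbol cannot do.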
d) Foundational cognitive capabilities: Peter is right that these cognitive capabilities are sufficient for an intelligent system, but he doesn’t mention how they interact with each other. It’s like saying that to build a rocket you need fuel and metal. What is interesting, though, is that I can see some correlations between Peter’s fundamental cognitive functions and Numenta’s ‘A Thousand Brains Theory’, but I will write about those some other time. I cover the cognitive capabilities in detail in the video, so do watch it.