Monday, 7 May 2018
How to build an A.I. brain that can surpass human intelligence | Ben Goertzel
Artificial intelligence has the capability to far surpass our intelligence in a relatively short period of time. But AI expert Ben Goertzel knows that the foundation has to be strong for that artificial brain power to grow exponentially. Read more at BigThink.com: https://ift.tt/2JZmLiX

Follow Big Think here:
YouTube: http://goo.gl/CPTsV5
Facebook: https://ift.tt/1qJMX5g
Twitter: https://twitter.com/bigthink

If you think much about physics and cognition and intelligence, it's pretty obvious the human mind is not the smartest possible general intelligence, any more than humans are the highest jumpers or the fastest runners. We're not going to be the smartest thinkers.

If you are going to work toward AGI, rather than focusing on some narrow application, there are a number of different approaches you might take. I've spent some time surveying the AGI field as a whole and organizing an annual conference on AGI, and then I've spent a bunch more time on a specific AGI approach, which is based on OpenCog, an open-source software platform.

In the big picture, one way to approach AGI is to try to emulate the human brain at some level of precision. This is the approach I see Google DeepMind taking, for example. They've taken deep neural networks, which in their common form are mostly a model of visual and auditory processing in the human brain. And now, in their recent work such as the DNC, the differentiable neural computer, they're taking these deep networks that model visual or auditory processing and coupling them with a memory matrix, which models some aspect of what the hippocampus does (the part of the brain that deals with working memory and short-term memory, among other things). So this illustrates an approach where you take neural networks emulating different parts of the brain, and maybe more and more such networks emulating more parts of the human brain, and you try to get them all to work together, not necessarily doing computational neuroscience, but trying to emulate the way different parts of the brain do their processing and the way they talk to each other.

A totally different approach is being taken by a guy named Marcus Hutter at the Australian National University. He wrote a beautiful book on universal AI in which he showed how to write a superhuman, infinitely intelligent thinking machine in something like 50 lines of code. The problem is it would take more computing power than there is in the entire universe to run. So it's not practically useful, but they're now trying to scale down from this theoretical AGI to find something that will really work.

Now, the approach we're taking in the OpenCog project is different from either of those. We're attempting to emulate, at a very high level, the way the human mind seems to work as an embodied, social, generally intelligent agent, one which is coming to grips with hard problems in the context of coming to grips with itself and its life in the world. We're not trying to model the way the brain works at the level of neurons or neural networks. We're looking at the human mind more from a high-level cognitive point of view. What kinds of memory are there? Well, there's semantic memory of abstract knowledge or concrete facts. There's episodic memory of our autobiographical history. There's sensory-motor memory. There's associative memory of things that have been related to us in our lives. There's procedural memory of how to do things.
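To make the DNC coupling mentioned above a little more concrete: the core move is to let a neural network read from an external memory matrix by content rather than by index. The sketch below is a minimal NumPy illustration of that kind of content-based read, not DeepMind's actual DNC code; the function name `content_read`, the array shapes, and the sharpness parameter `beta` are assumptions made for this sketch.

```python
import numpy as np

def content_read(memory, key, beta):
    """Content-based read from an external memory matrix.

    memory: (N, W) array, N slots each holding a W-dimensional vector.
    key:    (W,) query vector emitted by the controller network.
    beta:   scalar sharpness; higher beta concentrates the read weights.

    A toy sketch of DNC-style addressing, not DeepMind's implementation.
    """
    # Cosine similarity between the key and every memory slot.
    sims = memory @ key / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    )
    # Softmax over sharpened similarities gives a differentiable
    # "soft" address: a weighting over all slots rather than one index.
    scores = beta * sims
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # The read vector is a weighted blend of the memory rows.
    return weights @ memory

# Example: 8 memory slots of width 4, queried with a random key.
memory = np.random.randn(8, 4)
read_vector = content_read(memory, key=np.random.randn(4), beta=5.0)
```

Because the addressing is a soft weighting, the whole read is differentiable, which is what lets the memory be trained end to end with the network that uses it.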
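As for Hutter's "50 lines of code": the agent defined in his book, known as AIXI, can be written, roughly, as a single expectimax equation over all programs consistent with the agent's history. The notation below follows the book's conventions as best I recall them, so treat it as a sketch:

$$
a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big( r_k + \cdots + r_m \big) \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
$$

Here $U$ is a universal Turing machine, $a$, $o$ and $r$ are actions, observations and rewards, and $\ell(q)$ is the length of program $q$. The inner sum ranges over all programs that could be generating the agent's experience, which is why no physically realizable computer can run it.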
We then look at the different kinds of learning and reasoning the human mind can do. We can do logical deduction, sometimes; we're not always good at it. We make emotional, intuitive leaps and strange creative combinations of things. We learn by trial and error and habit. We learn socially, by imitating, mirroring, emulating or opposing others. For each of these kinds of memory and learning, one can attempt to achieve it with a cutting-edge computer science algorithm, rather than trying to achieve each of those functions and structures in the way the brain does.

So what we have in OpenCog is a central knowledge repository, which is very dynamic and lives in RAM on a large network of computers, which we call the AtomSpace. For the mathematicians or computer scientists in the audience, the AtomSpace is what you'd call a weighted, labeled hypergraph. It has nodes and it has links, and a link can go between two nodes, or between three, four, five or 50 nodes. Different nodes and links have different types, and the nodes and links can have numbers attached to them. A node or link could have a weight indicating a probability or a confidence. It could have a weight indicating how important it is to the system right now, or how important it is in the long term, so that it should be kept around in the system's memory.
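A minimal sketch of the structure just described, assuming nothing about the real OpenCog AtomSpace API: the class layout and field names here (`strength`, `confidence`, `sti`, `lti`) are simplifications for illustration, though `ConceptNode` and `InheritanceLink` echo OpenCog's actual type names.

```python
from dataclasses import dataclass, field

@dataclass
class Atom:
    # Every node and link carries a type label plus attached numbers:
    # a truth value (a probability-like strength and a confidence) and
    # attention values (short-term and long-term importance).
    atom_type: str
    strength: float = 1.0    # probability-like weight
    confidence: float = 0.0  # how much evidence backs the strength
    sti: float = 0.0         # short-term importance: relevance right now
    lti: float = 0.0         # long-term importance: worth keeping in memory

@dataclass
class Node(Atom):
    name: str = ""

@dataclass
class Link(Atom):
    # A hypergraph link: it connects any number of atoms, not just two,
    # and its targets may themselves be links.
    targets: list = field(default_factory=list)

# Example: "cats are animals", believed with strength 0.9 and
# moderate confidence.
cat = Node("ConceptNode", name="cat")
animal = Node("ConceptNode", name="animal")
inherit = Link("InheritanceLink", strength=0.9, confidence=0.5,
               targets=[cat, animal])
```

Keeping the weights on the atoms themselves is what lets one graph serve several kinds of memory at once: the types label what an atom means, the truth values say how much to believe it, and the importance values decide what stays in the system's working memory.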
Labels: Big Think