Welcome to Artificial Intelligence week!
As you probably know, AI is not just a theoretical thing for me. I’m actually building an AI of my own. His name is Procyon, and he has a Lego body and a C++ brain. And although he doesn’t know much yet, he’s getting smarter by the day.
Of course, my design notebooks are way ahead of what I’ve actually built. So to kick off AI week, I thought I’d share my own personal principles of AI design.
These are in no particular order, and they range from fairly specific design issues up to general philosophy. The list below didn’t exist in this form until today; I just pulled it together based on miscellaneous insights I’ve had so far. If I’d spent more time on it, it would have more bullet points. Still, it’s an accurate and reasonably broad look at the way I view my work.
Without further ado:
Brian’s Principles of AI Design
- It is possible to build a human-level artificial intelligence. Kind of a no-brainer, right? I wouldn’t be trying if I didn’t think it was possible. But I’m surprised how many people seem to think it just can’t be done, for a variety of philosophical, logical, and religious reasons. I could write a whole post on why I think those reasons are bogus (and I might sometime) but for now I’ll just say this is a very firm belief of mine.
- I, personally, am capable of building a human-level artificial intelligence. This may sound egotistical, but I don’t think it is. I’m not claiming to be smarter than Marvin Minsky or any of the other giants of the AI field who have yet to succeed in achieving this dream. Rather, I believe this out of necessity – because if I don’t believe in myself, then what’s the point? This is closely related to the joy of hubris, which I’ve written about before.
- The brain is the mind. Or, to put it another way, the brain is the soul. Many people believe the mind is somehow a separate entity, related to the physical brain but with some extra spark of, I don’t know, thinking-ness. They can’t accept that the vast array of human experience – our transcendent joys, our unspeakable passions, our ability to see the color red – all comes from something as prosaic as a three-pound lump of ugly gray tissue, or could come from something like a computer. But I can accept it, and I do.
- The human brain is mechanical. By “mechanical” I mean that there’s nothing magical about how it works. The brain is made of cells, the cells are made of molecules, the molecules are made of atoms, and the atoms all follow the laws of physics in the usual way. The brain is the most marvelous machine we know of – but it’s still a machine. Another like it can be built.
- The human brain should be a guide, not a map, for the AI designer. I’ve learned an awful lot about designing a mind by studying the human mind. But I also think there’s more than one way to skin a cerebrum, and I don’t see a need to follow biological design slavishly. Where a departure from biology makes sense, I take it.
- Neural nets are a good idea, but too limiting. Neural nets are one of the classical AI constructs, and they’ve been very influential in the way I think about design. But they can only take you so far, at least in my experience. I think of neural nets as a signpost that helped point the way, rather than a destination.
- E pluribus unum. Out of many, one. Like Minsky, I believe that high-level intelligence comes from the interaction of thousands (millions? trillions?) of very low-level, unintelligent agents. It may seem counterintuitive that something smart can come from a bunch of dumb things working together, but that’s how our own brains work, so why not an AI?
- Trust in emergent behavior. My design does not have a language module, a navigation object, or a self-awareness function, yet I expect my robot to read and write, move around intelligently, and be self-aware. Why? Because I consider these high-level abilities to be emergent properties of the system.
- Scruffy, not neat. In the neat vs. scruffy AI debate, I’m scruffy all the way. I’ve already explained this more than once, so I won’t belabor the point again.
- Aim high. So many AI researchers today are focused on tiny subsets of the big AI problem. They work on specific issues like machine translation or facial recognition, and there’s a widespread feeling that a high-level AI can eventually be cobbled together from all these little pieces. I’m awfully skeptical of this idea, and I’m wary of solving easier versions of the Big Problem and working my way up. I prefer to start at the top. Maybe I’m being naive, but naivete is the prerogative of anybody under 30. 🙂
- The Singularity is real and it is coming, so design with that in mind. More on this tomorrow.
Finally, there’s one other design principle I follow, one that I discovered myself and have never read about anywhere else. It’s probably the greatest single insight I’ve had since starting this project. But I’m out of time this morning, and it probably deserves a whole post in itself, so it’ll have to wait for now.
As I mentioned, tomorrow’s topic is the Singularity. Don’t miss it! And remember Ben Trube is doing AI week on his blog too, so head on over and see what he’s up to. (He generally posts around 1:00 PM, Eastern time.)