AI Week, Day 1: Principles of Design

Welcome to Artificial Intelligence week!

As you probably know, AI is not just a theoretical thing for me. I’m actually building an AI of my own. His name is Procyon, and he has a Lego body and a C++ brain. And although he doesn’t know much yet, he’s getting smarter by the day.

Of course, my design notebooks are way ahead of what I’ve actually built. So to kick off AI week, I thought I’d share my own personal principles of AI design.

These are in no particular order, and they range from fairly specific design issues up to general philosophy. The list below didn’t exist in this form until today; I just pulled it together based on miscellaneous insights I’ve had so far. If I’d spent more time on it, it would have more bullet points. Still, it’s an accurate and reasonably broad look at the way I view my work.

Without further ado:

Brian’s Principles of AI Design

  • It is possible to build a human-level artificial intelligence. Kind of a no-brainer, right? I wouldn’t be trying if I didn’t think it was possible. But I’m surprised how many people seem to think it just can’t be done, for a variety of philosophical, logical, and religious reasons. I could write a whole post on why I think those reasons are bogus (and I might sometime) but for now I’ll just say this is a very firm belief of mine.
  • I, personally, am capable of building a human-level artificial intelligence. This may sound egotistical, but I don’t think it is. I’m not claiming to be smarter than Marvin Minsky or any of the other giants of the AI field who have yet to succeed in achieving this dream. Rather, I believe this out of necessity – because if I don’t believe in myself, then what’s the point? This is closely related to the joy of hubris, which I’ve written about before.
  • The brain is the mind. Or, to put it another way, the brain is the soul. Many people believe the mind is somehow a separate entity, related to the physical brain but with some extra spark of, I don’t know, thinking-ness. They can’t accept that the vast array of human experience – our transcendent joys, our unspeakable passions, our ability to see the color red – all comes from something as prosaic as a three-pound lump of ugly gray tissue, or could come from something like a computer. But I can accept it, and I do.
  • The human brain is mechanical. By “mechanical” I mean that there’s nothing magical about how it works. The brain is made of cells, the cells are made of molecules, the molecules are made of atoms, and the atoms all follow the laws of physics in the usual way. The brain is the most marvelous machine we know of – but it’s still a machine. Another like it can be built.
  • The human brain should be a guide, not a map, for the AI designer. I’ve learned an awful lot about designing a mind by studying the human mind. But I also think there’s more than one way to skin a cerebrum, and I don’t see a need to follow biological design slavishly. If it makes sense, I do it.
  • Neural nets are a good idea, but too limiting. Neural nets are one of the classical AI constructs, and they’ve been very influential in the way I think about design. But they can only take you so far, at least in my experience. I think of neural nets as a signpost that helped point the way, rather than a destination.
  • E pluribus unum. Out of many, one. Like Minsky, I believe that high-level intelligence comes from the interaction of thousands (millions? trillions?) of very low-level, unintelligent agents. It may seem counterintuitive that something smart can come from a bunch of dumb things working together, but that’s how our own brains work, so why not an AI?
  • Trust in emergent behavior. My design does not have a language module, a navigation object, or a self-awareness function, yet I expect my robot to read and write, move around intelligently, and be self-aware. Why? Because I consider these high-level abilities to be emergent properties of the system. (There's a small code sketch of what I mean just after this list.)
  • Scruffy, not neat. In the neat vs. scruffy AI debate, I’m scruffy all the way. I’ve already explained this more than once, so I won’t belabor the point again.
  • Aim high. So many AI researchers today are focused on tiny subsets of the big AI problem. They work on specific issues like machine translation or facial recognition, and there’s a widespread feeling that a high-level AI can eventually be cobbled together from all these little pieces. I’m awfully skeptical of this idea, and I’m wary of solving easier versions of the Big Problem and working my way up. I prefer to start at the top. Maybe I’m being naive, but naivete is the prerogative of anybody under 30. 🙂
  • The Singularity is real and it is coming, so design with that in mind. More on this tomorrow.
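
To make the “e pluribus unum” and emergent-behavior principles a little more concrete, here is a tiny C++ sketch of the kind of structure I have in mind. It’s only an illustration, not Procyon’s actual code; the names (Agent, the eye and motor labels) and the threshold rule are made up for this example.

```cpp
#include <iostream>
#include <vector>

// A stripped-down "agent": it reads the activations of the agents it is
// wired to, applies one dumb rule, and exposes its own activation.
// Nothing in here is intelligent on its own.
struct Agent {
    std::vector<const Agent*> inputs;  // agents this one listens to
    double activation = 0.0;

    void step() {
        double sum = 0.0;
        for (const Agent* a : inputs) sum += a->activation;
        // A trivial threshold rule; anything interesting has to emerge
        // from many such agents wired together, not from this line.
        activation = (sum > 0.5) ? 1.0 : 0.0;
    }
};

int main() {
    // Two "input agents" standing in for senses, one processing agent,
    // and one "output agent" standing in for a motor command.
    Agent left_eye, right_eye, processor, motor;
    processor.inputs = {&left_eye, &right_eye};
    motor.inputs = {&processor};

    left_eye.activation = 0.4;   // pretend sensor readings
    right_eye.activation = 0.3;

    processor.step();
    motor.step();

    std::cout << "motor activation: " << motor.activation << "\n";  // prints 1
}
```

The motor agent never “knows” anything about eyes or light; whatever coordination shows up lives entirely in the wiring between agents, which is the whole point.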

Finally, there’s one other design principle I follow, one that I discovered myself and have never read about anywhere else. It’s probably the greatest single insight I’ve had since starting this project. But I’m out of time this morning, and it probably deserves a whole post in itself, so it’ll have to wait for now.

As I mentioned, tomorrow’s topic is the Singularity. Don’t miss it! And remember Ben Trube is doing AI week on his blog too, so head on over and see what he’s up to. (He generally posts around 1:00 PM, Eastern time.)

Any questions?

8 responses to “AI Week, Day 1: Principles of Design”

  1. I wanted to *like* this, but the function isn’t working for me on some blogs and I haven’t figured out how to fix it. Wonderful post!

  2. I’ve been accused, probably rightly so, of attempting to create SkyNet on more than one occasion, and there are a few conclusions I’ve come to myself that are annoyingly limiting when it comes to my own progress and attempts at AI (mind you, I come from the computational and probabilistic theory side of things).

    First and foremost, there is a difference between the human brain and current computing models that is computationally hard to overcome. To use your own metaphor, the brain is a machine with loose gears and pulleys that can at random work in different combinations than intended, or even cause seemingly unrelated parts of the machine to operate unexpectedly. A computer, on the other hand, is 100% predictable; short of injecting quantum particle theory for randomness, even its “random” behavior is mathematically quantifiable.

    Second, I LOVE emergent behavior theory… so much so that if I could use bold and HTML header tags in my comment, I would 🙂 The problem with emergent behavior theory is that all emergent behavior, at least as far as the observable world has shown, is based on some preexisting knowledge or baser instinct. An overly simple example: an animal’s locomotive ability is a result of base knowledge of how to move its extremities. The problem I’ve had with AI is how to build in enough base knowledge to allow something like emergent behavior, while not pre-limiting unknown but possibly creative behavior because the knowledge given was too specific. With your robot, does this mean simply providing a path to the function calls that move the arms is enough (i.e., neural net access or machine learning), or do you have to create explicit functionality?

    While I could write about this for hours, I’ll end on this. The way an organic brain and a computer interpret things is completely different. While the basic premise is that everything is electrical impulses, computers require data to be formatted in certain ways, while the brain requires no such translation. The ultimate goal would be to simply feed raw data into the AI and have it process that data… but the problem is, there is no such thing as raw data in the computer world. It’s all formatted ones and zeros. A camera formats images into various compressions or bitmaps. Something as simple as a USB heat sensor has to negotiate and send data over the bus.

    If you haven’t already (though I suspect you’ve read about it more than I have), read up on fuzzy logic and inexact CPUs/memory. The basic premise is to let computers get away from formal logic and determinism.

    • RE: randomness vs. predictability, I’d say computers can actually be more random and unpredictable than humans. Ask some college students to sit down and recite 1,000 “random” digits from 0 to 9, and compare the results with a pseudorandom number generation algorithm. I guarantee the computer’s numbers will be closer to true randomness, statistically speaking. And if pseudorandomness doesn’t satisfy you, it’s easy to call a web service that will give you a truly random number based on natural phenomena (no quantum theory required).
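
      Just to illustrate what I mean by “closer to true randomness, statistically speaking,” here’s a small C++ sketch: it generates 1,000 digits with a standard pseudorandom generator and runs a simple chi-square check on the digit frequencies. The specific test and cutoff are only an example of the kind of comparison I have in mind, not a claim about any particular experiment.

      ```cpp
      #include <array>
      #include <iostream>
      #include <random>

      int main() {
          // The same task I'd give the college students: produce 1,000
          // "random" digits from 0 to 9 -- but with a standard PRNG.
          std::mt19937 gen(12345);  // fixed seed so the run is repeatable
          std::uniform_int_distribution<int> digit(0, 9);

          const int n = 1000;
          std::array<int, 10> counts{};
          for (int i = 0; i < n; ++i) counts[digit(gen)]++;

          // Chi-square goodness-of-fit against a uniform distribution:
          // sum over digits of (observed - expected)^2 / expected.
          const double expected = n / 10.0;
          double chi2 = 0.0;
          for (int c : counts) {
              const double d = c - expected;
              chi2 += d * d / expected;
          }

          std::cout << "chi-square statistic: " << chi2 << "\n";
          // With 9 degrees of freedom, values under roughly 16.9 are
          // consistent with uniformity at the 5% level; human-generated
          // digit lists tend to fail this kind of test (too few repeats,
          // too few long runs, and so on).
      }
      ```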

      More fundamentally, though, a human brain and a computer brain are both subject to the ironclad laws of physics, which means they’re both deterministic, barring minor quantum effects (and even there the degree of uncertainty is quantifiable). So I’d challenge your assertion that humans are more random than computers.

      RE: emergent behavior, the building blocks would be things I call agents, which make up the mind. Input agents correspond to the senses, output agents correspond to (for example) moving a muscle, and lots of processing agents in the middle do the heavy lifting of genuine thought. The simple individual interactions between these agents are the raw material from which our emergent properties, well, emerge.

      RE: data formatting, the human brain doesn’t deal with “raw data” any more than a computer does. Neurons in the eye will either fire, or not, based on whether the amount of light hitting a certain area of the eye surpasses a certain threshold. Even in biology, it’s binary!
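
      In code terms (purely illustrative; real retinal neurons are far messier than this), an input agent that thresholds a light reading is already doing its own kind of formatting:

      ```cpp
      #include <initializer_list>
      #include <iostream>

      // Hypothetical "retinal" input agent: a continuous light level comes
      // in, a binary spike (fire / don't fire) comes out. Downstream agents
      // never see the raw intensity, only the thresholded signal.
      bool retina_fires(double light_level, double threshold = 0.6) {
          return light_level > threshold;
      }

      int main() {
          for (double light : {0.2, 0.59, 0.61, 0.9}) {
              std::cout << "light " << light << " -> "
                        << (retina_fires(light) ? "fire" : "silent") << "\n";
          }
      }
      ```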

        • The problem with pseudorandom number generators is that they’re predictably unpredictable. Given enough information about the distribution and past numbers in the series, you can fairly easily predict the next number. Truly random numbers, which humans are arguably incapable of producing (at least past the first number), are IID (independent and identically distributed).

        The laws of physics don’t mean everything is 100% predictable. At the quantum level, particles only have a chance of being at a particular point at a particular time. In fact, a quantum particle has a probabilistic wave function that allows it to randomly exist at any point in the universe at any time. While the same is true for both the brain and the computer, the difference is that a computer verifies the math behind computations (logical constructs obviously unaffected by quantum mechanics), while our brain simply continues on after some sort of error event caused by bad inputs or possibly quantum mechanics. [Ironically, after my comment, I remembered a problem we recently had at work with alpha particles affecting the DRAM on one of our boxes in an interesting way… so… perhaps it’s not all THAT different after all.]

        As I said in my comment, I love the idea of emergent behavior, but in my particular attempts at AI I found that to get the AI to do what I wanted, I had to build in some contextual knowledge, which unfortunately pre-limited other behaviors. That was disappointing. I completely agree that given the right “base” instincts and enough learning modules, you could have a system that can learn anything. I was just unsuccessful in that endeavor, and when I tried to work backwards I found my entire framework was overly specific, so I gave up for a time.

      • The point about quantum effects in the human brain is interesting. I tend to think of neurons as too big to be affected much by quantum fluctuations. I wonder if I’m wrong about that?

  3. Pingback: AI Week, Day 1: Now you’re speaking my language | Ben Trube
