Two weeks ago, I wrote a post about my principles of AI design. I concluded with this:
…there’s one other design principle I follow, one that I discovered myself and have never read about anywhere else. It’s probably the greatest single insight I’ve had since starting this project. But I’m out of time this morning, and it probably deserves a whole post in itself, so it’ll have to wait for now.
No time like the present.
First, a little background. We all know that animals – even simple animals – have a basic kind of intelligence. Even my pet hamster had it, back in the day, and trust me when I say hamsters are not the smartest creatures in the world.
My hamster’s name was Bowser, and his greatest desire in life was to escape from his cage. He tried all sorts of things: gnawing on the bars, forcing his way through the bars, digging a hole through the cage floor, climbing up to the ceiling. He had what I call trial-and-error intelligence. In other words, he would try something, see if it worked, and adjust his behavior accordingly.
This may not sound like much, but as an AI programmer, let me assure you that even this is a tall order to code from scratch. It took a long time for evolution to produce anything that complex. If you’ve ever watched a fly buzzing endlessly at a window, never thinking to try anything besides its default go-forward behavior, you can see how smart this trial-and-error mentality really is.
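To make the idea concrete, here’s a minimal sketch of that try-observe-adjust loop in Python. Everything in it – the cage, the list of actions, the names – is a hypothetical illustration I’ve invented for this post, not code from the actual project:

```python
def trial_and_error(actions, try_action):
    """Trial-and-error intelligence: attempt each action in the real
    world, observe whether it worked, and adopt the first one that does."""
    for action in actions:
        if try_action(action):   # act, then observe the outcome
            return action        # success: keep this behavior
    return None                  # nothing worked; stay in the cage

# A hypothetical cage where only one action actually succeeds.
cage = {"gnaw bars": False, "dig floor": False,
        "climb ceiling": False, "squeeze gap": True}
best = trial_and_error(list(cage), lambda a: cage[a])
# best == "squeeze gap"
```

The fly at the window, by contrast, is a loop with a single hard-coded action and no adjustment step at all.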
But of course, most of us wouldn’t consider trial and error alone to be true intelligence. True intelligence means sitting down with a totally new problem and figuring out the answer in your head, so that you only have to try one way: the correct way.
I call this reasoning intelligence, and it’s much rarer in the animal world. Other than humans, only crows, chimps, elephants, and a few other animals have demonstrated any kind of reasoning ability.
I’ve read lots of discussion about the gap between these two fundamentally different kinds of intelligence. How do you make that leap? How do you get from mere trial and error to actual reasoning? How do you make a machine that can truly think?
About five months ago, I figured out the answer in a late-night revelation, after everyone else had gone to bed. I can’t prove this is correct, but it feels very right to me, and it’s now one of the cornerstones of my design philosophy.
It’s simple. Trial and error isn’t fundamentally different from reasoning. They’re the same thing, the same essential act. The only difference is that trial and error means trying things in the real world, while reasoning means trying things in the mental world.
If I, like Bowser, were stuck in a big cage, I’d do exactly the same things he did. I’d try digging, attacking the bars, climbing to the ceiling, everything. The only difference – the only extra wrinkle – is that first, I would try those things in my imagination. Safer, faster, easier. But not really so different, when you think about it.
That’s reasoning. Trying things in your head before you try them in reality. That’s what separates us from the hamsters.
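You can sketch this insight directly in code: reasoning reuses the trial-and-error loop unchanged, and only swaps out what “trying” means – the real world is replaced by an imagined model of it. Again, this is my own illustrative sketch with made-up names (`CageModel` and friends), not the project’s actual architecture:

```python
def generate_and_test(actions, evaluate):
    """The single underlying loop: try actions, keep the first that works.
    Whether this is trial and error or reasoning depends only on where
    evaluate() runs -- in reality, or in imagination."""
    for action in actions:
        if evaluate(action):
            return action
    return None

class CageModel:
    """A hypothetical mental model of the cage."""
    def predict(self, action):
        # Imagine the outcome instead of physically attempting it.
        return action == "squeeze gap"

def reason(actions, mental_model):
    """Reasoning: the same loop, but each action is tried only in the head."""
    return generate_and_test(actions, lambda a: mental_model.predict(a))

plan = reason(["gnaw bars", "dig floor", "squeeze gap"], CageModel())
# Only the imagined winner then gets tried in reality.
```

Safer, faster, easier – the real world is touched exactly once, with the one action the model already believes will work.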