Tag Archives: Artificial Intelligence

Descending Vectors

The stars are falling – his first thought, upon
The sight of snow, before today unseen;
Descending vectors, fractal-point are drawn
Across the vision-scope of the machine.
The robot’s palm extends; his pixeled eyes
Record, by frames, what metal cannot feel
And neural nets unbidden analyze
The sight of frozen water over steel.
Behind him stands the conference hall, whose door
Projects inviting warmth on salted stairs –
And here, in laughing groups of two and four
(And wrapped in coats of other mammals’ hairs)
The first distinguished scientists arrive
To argue over whether he’s alive.

I wrote that when I was twenty years old.


Artificial Intelligence and Nietzsche


I’m working my way through Friedrich Nietzsche’s philosophical masterpiece, Thus Spoke Zarathustra. It’s easy to read, and more profound than I expected.

Take this quote:

The self says to the “I”: “Feel pain!” And at that it suffers, and thinks how it may put an end to it – and for that very purpose it is made to think.

The self says to the “I”: “Feel pleasure!” At that it is pleased, and thinks how it might often be pleased again – and for that very purpose it is made to think.

Nietzsche is saying that pain and pleasure aren’t just one small piece of consciousness. Rather, seeking pleasure and avoiding pain is the mind’s entire job, its sole function.

This defies tradition, and common sense too. We’re used to thinking of the mind as something noble and elevated, for which the body is merely a tool. Most people believe the mind can even exist apart from the body, as an immortal soul. Yet Nietzsche claims the mind is not a master but a slave, whose elaborate calculations serve only to avoid an ouch.

Here, you might say: “That sounds silly, but what’s the point of arguing about it? We’ll never know. We can’t test it one way or another, and it makes no difference to anyone’s life, so this is all a bunch of hot air.”

I would’ve said that myself, not so long ago.

But one of the great beauties of artificial intelligence is that it’s a proving ground for philosophy, especially philosophy of the mind. (And after all, it’s all philosophy of the mind.)

Statements like “experience is the foundation of knowledge” seem airy and abstract. But to an AI engineer, they’re very real. Every day, I write programming code that constructs the foundation of knowledge. When you’re building a mind, philosophical questions become practical.

And from what I’ve seen so far, I believe Nietzsche was exactly right. The mind is a noble, elegant, powerful thing, but at its roots, it has one main job: seek pleasure, avoid pain. Everything else is subordinate to that primary function.

You may argue that lots of people do things that could be considered pain-seeking: going to battle, even volunteering to be crucified. But I would say that these acts are driven by pursuit of a higher “pleasure” or a deeper “pain,” encoded in the conscience.

Well, you might say, isn’t that argument circular? Isn’t such a definition of pain and pleasure so broad as to be meaningless?

From a human, subjective viewpoint: perhaps. From the objective stance outside of the AI’s code: no, it’s not meaningless at all. Pleasure and pain are values that can be calculated and measured.

Of course, my AI design is not the only one possible. Even if my architecture works, it may be very different from how the human brain operates.

Maybe. But I don’t think so – not about this, anyway.

What’s on your mind this Monday morning?

Robo-Riddling

On Monday, I posed this riddle:

I thrive in gardens and in war.
I’m what drunk drivers oft ignore.
I am a thing no man can hear,
yet light a fire and I’ll appear.
When cherries rot, I also die.
I can’t be felt – but what am I?

Longtime blog reader momenteye asked an excellent question:

How would an AI approach solving this riddle?

Let’s talk about that!

Of course, I’m only going to tell you how my model of an AI would handle this. There are a million approaches to artificial intelligence, and I don’t claim to have the only right answer. The AI I’m building is named Procyon. What would Procyon do with this riddle?

We’ll start by asking: why is Procyon trying to solve this riddle in the first place?

At his most basic level, Procyon has two main impulses: seek pleasure and avoid pain. That may sound crude or hedonistic, but it’s only the foundation of the mind, not the mind itself. Humans are, I believe, built on the same foundation. Ideas as varied as pacifism, fascism, Zen, and even sadomasochism all derive, I think, from these two basic impulses.

(In a philosophy class, we might stumble now into a debate about the definition of “pleasure,” and whether the concept is really meaningful, etc. Fortunately, in an AI, it’s much simpler. “Pleasure” is what happens when this variable right here is set to True.)
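
To make that concrete, here’s a minimal sketch of what a measurable pleasure/pain signal might look like. It’s C++ with entirely hypothetical names, meant only to illustrate the idea, not to show Procyon’s actual code:

```cpp
// Hypothetical illustration only: "pleasure" and "pain" as plain, measurable values.
#include <iostream>

struct Drive {
    double pleasure = 0.0;  // positive reinforcement signal
    double pain     = 0.0;  // negative reinforcement signal

    // The net reward the rest of the mind tries to maximize.
    double reward() const { return pleasure - pain; }
};

int main() {
    Drive d;
    d.pleasure = 1.0;  // something just went well
    std::cout << "reward = " << d.reward() << "\n";
}
```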

The point is, Procyon isn’t working on the riddle because some high-level directive tells him that riddles are fun. He’s doing it because this kind of thing has brought him pleasure in the past. (And if it hadn’t, then he wouldn’t care about riddles.)

Similarly, his approach to figuring out the riddle doesn’t follow some crystal-clean logic that starts with a set of Riddle Axioms and derives an answer. Formal logic is good for a lot of things, but as a foundation for intelligence, it sucks.

Instead, Procyon compares his situation with similar scenes from his past. In effect, he’s asking, “When I’ve come across puzzling textual questions before, what did I do? Did I like the way that approach turned out? Then I’ll try to apply it here.” Though of course he may not explicitly think in those terms.

So how would Procyon solve it? The answer, like so much else, is simple but maybe unsatisfying: it depends. His approach will be based on whatever’s worked in the past, not on any pre-defined rules for Figuring Things Out. Kind of like how humans operate.
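
To illustrate that “whatever’s worked in the past” idea, here’s a toy sketch. Every name in it is hypothetical, and real scene-matching would be vastly subtler, but it shows the shape of the approach: describe the current situation, match it against remembered scenes, weight each match by how pleasant its outcome was, and reuse the best-scoring approach.

```cpp
// Toy sketch (hypothetical names): pick an approach by recalling similar past
// scenes and preferring the ones whose outcomes felt good.
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct Scene {
    std::vector<std::string> features;  // crude description of the situation
    std::string approach;               // what was tried at the time
    double outcome;                     // how pleasant the result was (0..1)
};

// Similarity here is just the count of shared features.
int similarity(const std::vector<std::string>& a, const std::vector<std::string>& b) {
    int shared = 0;
    for (const auto& f : a)
        if (std::find(b.begin(), b.end(), f) != b.end()) ++shared;
    return shared;
}

std::string chooseApproach(const std::vector<Scene>& memory,
                           const std::vector<std::string>& now) {
    std::string best = "explore";  // fallback when nothing in memory matches
    double bestScore = 0.0;
    for (const auto& s : memory) {
        double score = similarity(s.features, now) * s.outcome;
        if (score > bestScore) { bestScore = score; best = s.approach; }
    }
    return best;
}

int main() {
    std::vector<Scene> memory = {
        {{"text", "question", "wordplay"}, "free-associate on each line", 0.9},
        {{"text", "command"},              "look up the matching agent",  0.7},
    };
    std::vector<std::string> now = {"text", "question", "wordplay", "rhyme"};
    std::cout << chooseApproach(memory, now) << "\n";  // picks the riddle-like memory
}
```

The fallback matters, too: when nothing in memory resembles the present, the best he can do is try something new and remember how it turned out.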

This strategy, which I’ve breezily summarized, turns out to be fiendishly complex in practice. When I talk about “similar scenes from his past,” for example, what do I mean by “similar”? How do I define a “scene”? What does comparing his situation actually entail? And how does he “try to apply it here”?

As generations of AI researchers have learned the hard way, the devil is in the details. Just explaining my own approach to these questions would take a full-length book, and I’m still far from having all the answers. Likewise, Procyon in his current state is far from having any concept of what a question is, much less a riddle.

But he’s getting there, a day at a time. And so am I.

AI Status Update

[Image: a snapshot of the AI’s code]

It’s weird: the artificial intelligence is my biggest project right now, the thing I’m most excited about. I’m putting in lots of work and making lots of progress. Yet I’ve barely mentioned it on the blog for the past few months.

Why?

Partly because my AI work these days is…abstract. The code I’m writing now lays the groundwork for cool, gee-whiz features later, but there isn’t a lot of “real” stuff yet that I can demo or describe.

I could tell you about the abstract stuff, but I avoid that for two reasons. First, because I don’t want to bore you.

And second, because – crazy as it sounds – I think this thing might actually work. If it does, it could be dangerous in the wrong hands. So I want to keep the details secret for now. (The code snapshot above is real, but – I think – not especially helpful. Yes, I am actually paranoid enough to think about things like that.)

Having said all that, I can tell you a few things.

For starters, I really am working hard. I put in an hour and a half every day (or drop it to a half hour sometimes when my schedule’s tight). I’ve written over 5,000 lines of C++ and over 1,000 lines of C#, and created a database with 13 tables and 43 stored procedures (to say nothing of the XAML, JavaScript, and Ruby components). I’ve filled two notebooks with designs and ideas. Not that those numbers are astoundingly high, but the code works, and I’m revising it constantly.

And as much time as I’ve spent on theory, design, and groundwork, the current program does actually do some fairly cool things. For instance…

  • You type to it, and it types back.
  • You can make it recognize arbitrary new commands. No hard-coding required: just click a few buttons in the MindBuilder interface and drop the new agents into the database.
  • It takes as input, not merely a stream of typed characters, but a stream of moments. So it can recognize words, but it can also notice if you pause while typing.
  • Right now, inputs are keyboard and robot sensors, and outputs are text and robot action. But the framework is completely flexible. Any new input or output I want to add – a camera, a thermometer, whatever – is just a matter of writing the interface. The underlying architecture doesn’t change at all. (See the sketch after this list.)
  • It can recognize words, phrases, sentences, even parts of speech. It can respond differently to later commands based on something you told it earlier. And likewise, none of this is hardcoded, so the exact same mechanism that recognizes a written phrase could also recognize a tune, or Morse code.
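
As promised, here’s a rough sketch of what such a pluggable input/output layer could look like. These interfaces are hypothetical stand-ins, not the actual MindBuilder or Procyon code, but they illustrate the claim: each device implements one small interface, every sense delivers time-stamped moments (so pauses are information, not just characters), and the core loop never changes when a device is added or removed.

```cpp
// Hypothetical sketch of a pluggable input/output layer built around "moments."
#include <chrono>
#include <memory>
#include <string>
#include <vector>

// One moment: something a sense reported, stamped with when it happened,
// so the mind can notice pauses as well as content.
struct Moment {
    std::chrono::steady_clock::time_point when;
    std::string sense;    // which input produced it, e.g. "keyboard"
    std::string payload;  // the raw content of the moment
};

// Any new input device (camera, thermometer, ...) just implements this.
class InputSource {
public:
    virtual ~InputSource() = default;
    virtual std::vector<Moment> poll() = 0;  // moments since the last poll
};

// Any new output device (text, robot action, ...) just implements this.
class OutputSink {
public:
    virtual ~OutputSink() = default;
    virtual void emit(const std::string& action) = 0;
};

// The core loop is the same no matter which devices are plugged in.
void tick(std::vector<std::unique_ptr<InputSource>>& inputs,
          std::vector<std::unique_ptr<OutputSink>>& outputs) {
    std::vector<Moment> moments;
    for (auto& in : inputs) {
        auto batch = in->poll();
        moments.insert(moments.end(), batch.begin(), batch.end());
    }
    // ...hand the collected moments to the mind here and gather its chosen actions...
    for (auto& out : outputs)
        out->emit("(whatever the mind decided)");
}

int main() {
    std::vector<std::unique_ptr<InputSource>> inputs;   // no devices plugged in yet
    std::vector<std::unique_ptr<OutputSink>> outputs;
    tick(inputs, outputs);
}
```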

The history of AI is littered with the ashes of hubris. So although I’m still wrapped up in the joy of hubris today, I’m well aware it’s a delusion. I can honestly say that the path to a strong AI seems fairly clear, that I don’t see any major obstacles that will prevent me from creating a thinking machine. Yet I know the obstacles are surely there, and I’ll see them soon enough.

Still, it’s exciting.

Any questions?

How Smart is a Fruit Fly?

This guy right here? This is Drosophila melanogaster, the common fruit fly. Small, gross, annoying, generally something you want to avoid. So why would I want to get up close and personal with one?

Well, good ol’ D. m. is also a model organism. Fruit flies are cheap, easy to care for, quick to reproduce, and genetically simple. Biologists have been studying these guys hardcore for over a century, so they understand them really, really well. Pretty much anything you want to know about a fruit fly, it’s out there somewhere.

That’s especially cool for me as an A.I. researcher. The human brain is – as neuroscientists put it – “really friggin’ complicated.” The fruit fly brain? Not quite so much.

For comparison, a human brain has about 86 billion neurons. Cat, 1 billion. Frog, 16 million. Cockroach, 1 million.

Fruit fly? A measly 100,000 neurons.

Pretty simple, right?

Yet this tiny, almost microscopic brain turns out to be surprisingly sophisticated. Here’s what it looks like, courtesy of the Virtual Fly Brain website (yes, that exists):

[Image: the fruit fly brain, from Virtual Fly Brain]

You can see it has a definite structure: two optic lobes (connected to the eyes) on the far left and right, two hemispheres in the main body of the brain, and smaller structures clearly visible within them.

What does a fruit fly do with 100,000 neurons?

They can fly, of course, navigating around obstacles and searching for food. They can see, hear, smell, taste, and touch – and make decisions based on all those senses. They’re affected by alcohol in much the same way humans are, and can even become “alcoholic,” seeking more and more of the stuff over time. Remarkably, they can even form long-term memories, learning to seek or avoid arbitrary smells based on laboratory training.

In a nutshell: they’re thinking. They’re not reading Hamlet, they’re not self-aware, and who knows if they have anything like consciousness – but they’re definitely thinking.

With only 100,000 neurons. All of which have been painstakingly studied and analyzed for decades. For someone working on artificial intelligence, that’s pretty flippin’ sweet.

I’ve got some studying to do.

Finding Joy in the Work Again

Eight months ago, I told you I was switching gears from writing a novel to building an artificial intelligence.

That was a monumental decision at the time. Switching gears meant switching dreams. It meant putting on hold – perhaps indefinitely – the goal that had driven me for almost a decade. It felt something like freedom and something like giving up. But most importantly, it wasn’t just the loss of novel-writing; it was the start of a new and exciting project – something I hoped desperately would last.

It has.

That big decision eight months ago seems even bigger now, because building an AI feels more and more like my life’s work, the project I was born to pursue.

How do I know? Because the work is joyful.

Not just the end goal, not just the finished project you can point to proudly and say, “Yes, that was me, I made that.” I mean the day-to-day, minute-to-minute act of building a thinking machine simply makes me happy. That’s what was missing with novel-writing, and that’s what I’ve finally gotten back.

For a while, I was working on the AI half an hour every day. A few weeks ago I upped that to an hour, and then an hour and a half. With each increase I loved the work more, spent more time in “the zone,” saw more progress by day’s end. Remarkably, I’m still reluctant to start the work; every day I have to convince myself anew. But once I get over that first hurdle, I love what I’m doing.

What does AI give me that novel-writing didn’t? A lot of things:

  • AI is functional. Meaning, it doesn’t exist to create emotional reactions in others. It does something in itself. To be clear, I’m not saying functional pursuits are better or more important. Far from it. I am saying that, for me personally, it’s much easier to tell when the AI is working than when the novel is “working,” because the AI is, you know, doing things.
  • With AI, I know when I’m doing well. Closely related to the point above. One of the most frustrating things about writing a novel was that after five years of work, I still had no idea whether it was any good. With an AI, it’s much easier to measure the progress – and the quality. If the robot be gettin’ smarter, you be gettin’ better. (You can quote me on that.)
  • You can write an AI without being a great programmer. How? Because being a great programmer requires many different skills: reading unfamiliar code, using the full potential of a language, finding the most efficient algorithms, obeying customer requirements, finishing before deadline, following best practices, and a thousand other things. Sitting down by yourself to write an AI requires exactly none of those skills. With an AI, it’s the design that has to be exquisite. The code itself doesn’t have to be great, it only has to be good enough. Compared to the stress of writing a novel, where every word has to be just right, it’s a great relief.
  • AI-building uses a wider range of my skills than novel-writing. Designing an AI engages me in philosophy, psychology, language, math, and complex logic. Those last two didn’t get a lot of play when I was writing the novel. Yes, there’s a kind of logic that goes on as you’re crafting a plot, and it can certainly get complex. But the specifications are less…precise. It’s hard to explain, but it feels different as I do it. It feels better.

I could go on, but if you’ve read this far, I’m sure you’ll thank me to wrap it up. So I’ll simply say that I’ve found joy in my work again, and it’s a good place to be.

Do you like what you’re doing right now?

The Chinese Room Debunked

Yesterday I explained the Chinese Room argument, which tries to show that a digital computer could simulate human understanding without “really” understanding anything. Today I’ll show why I find this argument deeply unconvincing.

My counterargument is the same one that thousands of other people have given over the years – so much so that I’d call it the standard reply. It may be standard, but it’s correct, so I see no reason to invent something new. This is the so-called “Systems Reply,” which says that, even if no single part of the room understands Chinese – neither the person, nor the library, nor the notebooks – the overall system does, in fact, understand.

Although Searle never gives a clear definition of “understanding” (as far as I know), I take his meaning to be the subjective experience of consciousness. That is, when I look at something red, it isn’t merely that a particular neuron fires in my brain; I actually, in some strange, inexplicable, unprovable way, see something that I can only describe as red. I have, in other words, a conscious experience. I am sentient.

Am I really claiming that a room with a Chinese-illiterate dude and a big library could, as a whole, be conscious and aware in the same sense that you and I are?

Yes, I am.

Searle is naturally skeptical of this response, as you probably are too. It sounds crazy. It sounds counterintuitive. But it’s only counterintuitive because all the minds we’re familiar with happen to run on biological neurons, so it just seems natural that neurons – and only neurons – lead to minds.

By the way, let’s be clear about what’s really happening in this room. This is not, as you might imagine, just a busy guy in a large office with a few hundred books. We’re talking about trillions of books, a building the size of a thousand Astrodomes, a tireless and immortal man who takes millennia to answer an input as simple as “hello.” It is a vast system, as unfathomably complex as the human brain itself. We are, in effect, emulating a brain, but instead of using electronic hardware, we happen to have chosen paper and pencil. Doesn’t matter. What matters is the system.

What would Searle say to the Systems Reply? In fact, we know exactly what he would say, because he’s said it:

My response to the systems theory is quite simple: Let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn’t anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn’t anything in the system that isn’t in him. If he doesn’t understand, then there is no way the system could understand because the system is just a part of him.

Yet the Systems Reply still applies. We’ve merely decided to use one brain to emulate another, just as my PC can emulate an NES and play video games. The underlying PC hardware and operating system know nothing at all about how to run Super Mario Bros., but the NES emulator running on the PC understands it just fine. In the same way, the poor sap who memorized the rules may know nothing about Chinese, but the second mind he’s created through his emulation will understand it just fine.

The odd thing about the Chinese Room argument is that it could be applied just as easily to claim that humans don’t understand anything either. After all, the brain is just a bunch of neurons and glial cells and so on, and each of those is just a bunch of molecules that obey the laws of physics in the usual way. How can a molecule be intelligent? It can’t, of course. The idea of a molecule being intelligent is just as absurd as a book being intelligent. The idea of a massive collection of molecules, arranged in highly specific ways (like the human brain), achieving sentience, is no less absurd, but we accept it because experience has shown it must be true. Likewise, then, we must accept that the Chinese Room can be sentient.

Searle’s fundamental problem, as far as I can tell, is that he just can’t accept results that seem too counterintuitive. But the world of AI research is not an intuitive place – and neither are the worlds of math or science, for that matter. Something can be incredibly strange, but still true.

By the way, this is not all merely academic. The answer to this question has powerful ethical implications.

Consider this question: is it wrong to torture a robot?

If we believe that digital robots can have no conscious experience, that they merely simulate human responses, then torturing a robot isn’t actually possible. They may scream or twitch or do whatever else, but it is all, in some sense, “fake.” The robot can’t really experience pain. On the other hand, if we are willing to accept that machines can have subjective experiences on fully the same level as a human being, then torturing a robot becomes monstrous (as, in fact, I believe it would be).

And trust me – sooner or later, there will be robots. We’d better get this one right.

What do you think?