Tag Archives: Artificial Intelligence

AI Status Update

[Image: a snapshot of the AI’s source code]

It’s weird: the AI is my biggest project right now, the thing I’m most excited about. I’m putting in lots of work and making lots of progress. Yet I’ve barely mentioned it on the blog for the past few months.

Why?

Partly because my AI work these days is…abstract. The code I’m writing now lays the groundwork for cool, gee-whiz features later, but there isn’t a lot of “real” stuff yet that I can demo, or describe.

I could tell you about the abstract stuff, but I avoid that for two reasons. First, because I don’t want to bore you.

And second, because – crazy as it sounds – I think this thing might actually work. If it does, it could be dangerous in the wrong hands. So I want to keep the details secret for now. (The code snapshot above is real, but – I think – not especially helpful. Yes, I am actually paranoid enough to think about things like that.)

Having said all that, I can tell you a few things.

For starters, I really am working hard. I put in an hour and a half every day (or drop it to half an hour when my schedule’s tight). I’ve written over 5,000 lines of C++ and over 1,000 lines of C#, and created a database with 13 tables and 43 stored procedures (to say nothing of the XAML, JavaScript, and Ruby components). I’ve filled two notebooks with designs and ideas. Those numbers aren’t astoundingly high, but the code works, and I’m revising it constantly.

And as much time as I’ve spent on theory, design, and groundwork, the current program does actually do some fairly cool things. For instance…

  • You type to it, and it types back.
  • You can make it recognize arbitrary new commands. No hard-coding required: just click a few buttons in the MindBuilder interface and drop the new agents into the database.
  • It takes as input, not merely a stream of typed characters, but a stream of moments. So it can recognize words, but it can also notice if you pause while typing.
  • Right now, inputs are keyboard and robot sensors, and outputs are text and robot action. But the framework is completely flexible. Any new input or output I want to add – a camera, a thermometer, whatever – is just a matter of writing the interface; the underlying architecture doesn’t change at all. (There’s a rough sketch of what I mean just after this list.)
  • It can recognize words, phrases, sentences, even parts of speech. It can respond differently to later commands based on something you told it earlier. Again, none of this is hard-coded, so the exact same mechanism that recognizes a written phrase could also recognize a tune, or Morse code.
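To give a rough sense of what I mean by “just writing the interface” – without giving away anything real – here’s a simplified sketch in C++. Every name in it is invented for this post; the actual project looks nothing like this, but the shape of the idea is the same:

```cpp
// Simplified sketch of the pluggable input/output idea described above.
// All names here (Moment, InputSource, OutputSink) are invented for this
// post; they are not the real project's classes.
#include <cstdint>
#include <string>
#include <vector>

// A "moment" bundles whatever arrived during one slice of time (possibly
// nothing at all, which is how a pause while typing gets noticed).
struct Moment {
    int64_t timestampMs;
    std::vector<std::string> signals;  // e.g., a keystroke, a sensor reading
};

// Any new input (keyboard, robot sensor, camera, thermometer...) only has
// to implement this interface; the core never changes.
class InputSource {
public:
    virtual ~InputSource() = default;
    virtual Moment readMoment() = 0;
};

// Likewise for outputs (text, robot action, and so on).
class OutputSink {
public:
    virtual ~OutputSink() = default;
    virtual void emit(const std::string& response) = 0;
};

// The core loop neither knows nor cares which devices are plugged in.
void runOnce(InputSource& in, OutputSink& out) {
    Moment m = in.readMoment();
    if (!m.signals.empty()) {
        out.emit("received " + std::to_string(m.signals.size()) + " signal(s)");
    }
}
```

The point is that a camera or a thermometer only has to produce moments; the rest of the system never hears about the hardware at all.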

The history of AI is littered with the ashes of hubris. So although I’m still wrapped up in the joy of hubris today, I’m well aware it’s a delusion. I can honestly say that the path to a strong AI seems fairly clear, that I don’t see any major obstacles that will prevent me from creating a thinking machine. Yet I know the obstacles are surely there, and I’ll see them soon enough.

Still, it’s exciting.

Any questions?

How Smart is a Fruit Fly?

This guy right here? This is Drosophila melanogaster, the common fruit fly. Small, gross, annoying, generally something you want to avoid. So why would I want to get up close and personal with one?

Well, good ol’ D. m. is also a model organism: cheap, easy to care for, quick to reproduce, and genetically simple. Biologists have been studying these guys hardcore for over a century, so they understand them really, really well. Pretty much anything you want to know about a fruit fly, it’s out there somewhere.

That’s especially cool for me as an A.I. researcher. The human brain is – as neuroscientists put it – “really friggin’ complicated.” The fruit fly brain? Not quite so much.

For comparison, a human has about 86 billion neurons in the brain. Cat, about 1 billion. Frog, 16 million. Cockroach, 1 million.

Fruit fly? A measly 100,000 neurons.

Pretty simple, right?

Yet this tiny, almost microscopic brain turns out to be surprisingly sophisticated. Here’s what it looks like, courtesy of the Virtual Fly Brain website (yes, that exists):

[Image: the adult fruit fly brain, from the Virtual Fly Brain website]

You can see it has a definite structure: two optic lobes (connected to the eyes) on the far left and right, two hemispheres in the main body of the brain, and smaller structures clearly visible.

What does a fruit fly do with 100,000 neurons?

They can fly, of course, navigating around obstacles and searching for food. They can see, hear, smell, taste, and touch – and make decisions based on all those senses. They’re affected by alcohol in much the same way humans are, and can even become “alcoholic,” seeking more and more of the stuff over time. Remarkably, they can even form long-term memories, learning to seek or avoid arbitrary smells based on laboratory training.

In a nutshell: they’re thinking. They’re not reading Hamlet, they’re not self-aware, and who knows if they have anything like consciousness – but they’re definitely thinking.

With only 100,000 neurons. All of which have been painstakingly studied and analyzed for decades. For someone working on artificial intelligence, that’s pretty flippin’ sweet.

I’ve got some studying to do.

Finding Joy in the Work Again

Eight months ago, I told you I was switching gears from writing a novel to building an artificial intelligence.

That was a monumental decision at the time. Switching gears meant switching dreams. It meant putting on hold – perhaps indefinitely – the goal that had driven me for almost a decade. It felt something like freedom and something like giving up. But most importantly, it wasn’t just the loss of novel-writing; it was the start of a new and exciting project – something I desperately hoped would last.

It has.

That big decision eight months ago seems even bigger now, because building an AI feels more and more like my life’s work, the project I was born to pursue.

How do I know? Because the work is joyful.

Not just the end goal, not just the finished project you can point to proudly and say, “Yes, that was me, I made that.” I mean the day-to-day, minute-to-minute act of building a thinking machine simply makes me happy. That’s what was missing with novel-writing, and that’s what I’ve finally gotten back.

For a while, I was working on the AI half an hour every day. A few weeks ago I upped that to an hour, and then an hour and a half. With each increase I loved the work more, spent more time in “the zone,” saw more progress by day’s end. Remarkably, I’m still reluctant to start the work; every day I have to convince myself all over again. But once I get over that first hurdle, I love what I’m doing.

What does AI give me that novel-writing didn’t? A lot of things:

  • AI is functional. Meaning, it doesn’t exist to create emotional reactions in others. It does something in itself. To be clear, I’m not saying functional pursuits are better or more important. Far from it. I am saying that, for me personally, it’s much easier to tell when the AI is working than when the novel is “working,” because the AI is, you know, doing things.
  • With AI, I know when I’m doing well. Closely related to the point above. One of the most frustrating things about writing a novel was that after five years of work, I still had no idea whether it was any good. With an AI, it’s much easier to measure the progress – and the quality. If the robot be gettin’ smarter, you be gettin’ better. (You can quote me on that.)
  • You can write an AI without being a great programmer. How? Because being a great programmer requires many different skills: reading unfamiliar code, using the full potential of a language, finding the most efficient algorithms, obeying customer requirements, finishing before deadline, following best practices, and a thousand other things. Sitting down by yourself to write an AI requires exactly none of those skills. With an AI, it’s the design that has to be exquisite. The code itself doesn’t have to be great, it only has to be good enough. Compared to the stress of writing a novel, where every word has to be just right, it’s a great relief.
  • AI-building uses a wider range of my skills than novel-writing. Designing an AI engages me in philosophy, psychology, language, math, and complex logic. Those last two didn’t get a lot of play when I was writing the novel. Yes, there’s certainly a kind of logic that goes on as you’re crafting a plot, and it certainly can get complex. But the specifications are less…precise. It’s hard to explain, but it feels different as I do it. It feels better.

I could go on, but if you’ve read this far, I’m sure you’ll thank me to wrap it up. So I’ll simply say that I’ve found joy in my work again, and it’s a good place to be.

Do you like what you’re doing right now?

The Chinese Room Debunked

Yesterday I explained the Chinese Room argument, which tries to show that a digital computer could simulate human understanding without “really” understanding anything. Today I’ll show why I find this argument deeply unconvincing.

My counterargument is the same one that thousands of other people have given over the years – so much so that I’d call it the standard reply. It may be standard, but it’s correct, so I see no reason to invent something new. This is the so-called “Systems Reply,” which says that even if no single part of the room understands Chinese – neither the person, nor the library, nor the notebooks – the overall system does, in fact, understand.

Although Searle never gives a clear definition of “understanding” (as far as I know), I take his meaning to be the subjective experience of consciousness. That is, when I look at something red, it isn’t merely that a particular neuron fires in my brain; I actually, in some strange, inexplicable, unprovable way, see something that I can only describe as red. I have, in other words, a conscious experience. I am sentient.

Am I really claiming that a room with a Chinese-illiterate dude and a big library could, as a whole, be conscious and aware in the same sense that you and I are?

Yes, I am.

Searle is naturally skeptical of this response, as you probably are too. It sounds crazy. It sounds counterintuitive. But it’s only counterintuitive because all the minds we’re familiar with happen to run on biological neurons, so it just seems natural that neurons lead to minds.

By the way, let’s be clear about what’s really happening in this room. This is not, as you might imagine, just a busy guy in a large office with a few hundred books. We’re talking about trillions of books, a building the size of a thousand Astrodomes, a tireless and immortal man who takes millennia to answer an input as simple as “hello.” It is a vast system, as unfathomably complex as the human brain itself. We are, in effect, emulating a brain, but instead of using mechanical hardware, we happen to have chosen paper and pencil. Doesn’t matter. What matters is the system.

What would Searle say to the Systems Reply? In fact, we know exactly what he would say, because he’s said it:

My response to the systems theory is quite simple: Let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn’t anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn’t anything in the system that isn’t in him. If he doesn’t understand, then there is no way the system could understand because the system is just a part of him.

Yet the Systems Reply still applies. We’ve merely decided to use one brain to emulate another, just as my PC can emulate an NES and play video games. The underlying PC hardware and operating system know nothing at all about how to run Super Mario Bros., but the NES emulator running on the PC understands it just fine. In the same way, the poor sap who memorized the rules may know nothing about Chinese, but the second mind he’s created through his emulation will understand it just fine.

The odd thing about the Chinese Room argument is that it could be applied just as easily to claim that humans don’t understand anything either. After all, the brain is just a bunch of neurons and glial cells and so on, and each of those is just a bunch of molecules that obey the laws of physics in the usual way. How can a molecule be intelligent? It can’t, of course. The idea of a molecule being intelligent is just as absurd as a book being intelligent. The idea of a massive collection of molecules, arranged in highly specific ways (like the human brain), achieving sentience, is no less absurd, but we accept it because experience has shown it must be true. Likewise, then, we must accept that the Chinese Room can be sentient.

Searle’s fundamental problem, as far as I can tell, is that he just can’t accept results that seem too counterintuitive. But the world of AI research is not an intuitive place – and neither are the worlds of math or science, for that matter. Something can be incredibly strange, but still true.

By the way, this is not all merely academic. The answer to this question has powerful ethical implications.

Consider this question: is it wrong to torture a robot?

If we believe that digital robots can have no conscious experience, that they merely simulate human responses, then torturing a robot isn’t actually possible. They may scream or twitch or do whatever else, but it is all, in some sense, “fake.” The robot can’t really experience pain. On the other hand, if we are willing to accept that machines can have subjective experiences on fully the same level as a human being, then torturing a robot becomes monstrous (as, in fact, I believe it would be).

And trust me – sooner or later, there will be robots. We’d better get this one right.

What do you think?

The Chinese Room

Last week I talked about the Turing Test, which suggests that if a computer can carry on a conversation with a person (via some text-based chat program) and the person can’t tell whether they’re talking to another human or a machine, then the machine may be considered intelligent.

One of the main objections to the Turing Test is the so-called Chinese Room argument. The Chinese Room is a thought experiment invented by John Searle to show that, even if a digital computer could pass the Turing Test, it still would not understand its own words, and thus should not be considered intelligent.

The argument goes like this. A digital computer (by which I just mean any ordinary computer, like your PC) can do several things. It can receive input; it can follow a long list of instructions (the program code) that tell it what to do with its input; it can read and write to internal memory; and it can send output based on these internal computations. Any digital computer that passes the Turing Test will simply be doing these basic things in a particularly complicated way.

So, the argument continues, let’s imagine replacing the computer with a man in a room. The goal is to pass the Turing Test in Chinese. Someone outside passes in slips of paper with Chinese characters, and the man must pass back slips of paper with responses in Chinese. But the man himself knows no Chinese, only English. However, the room contains an enormous library of books, full of instructions on how to handle any Chinese characters. For any message he receives, he looks up the characters in his library and follows the instructions (in English) on how to compose a response. He also has a pencil and paper he can use to write notes, do figuring as necessary, and erase notes, as instructed by his books. Once he has his answer, he writes it down and passes it back.

Here, the library of books corresponds to the computer’s program code, the pencil and paper correspond to internal memory, and the person (who understands English but not Chinese) corresponds to the hardware that executes the program (which understands program instruction codes but not human language).

Searle points out that, although the man in the Chinese Room can theoretically carry on a perfectly good conversation in Chinese, there is nothing in the room (neither the man nor the books) that can be said to understand Chinese. Therefore, the Chinese Room as a whole can act as if it understands Chinese, but it doesn’t really. In the same way, a digital computer can act as if it’s thinking, but it isn’t really. The computer is only manipulating symbols, which have no meaning to the computer; it can never understand what it is doing. Real understanding requires an entirely different kind of hardware – like the kind in the human brain, made up of biological neurons – which a digital computer simply does not possess.

In fact, says Searle, even if a digital computer were to precisely simulate a human brain, neuron by neuron, and function correctly in just the same way, it still would not understand what it was doing in the way that a human brain does. This follows, he says, merely as a special case of the general Chinese Room argument.

I have my own opinion on the Chinese Room argument, which I’ll give tomorrow. In the meantime, what do you think? Is his argument convincing?

The Turing Test

I’ve talked a lot about artificial intelligence on this blog. But what does “artificial intelligence” really mean?

How do we know if a machine is thinking?

One answer comes from Alan Turing (1912-1954):

[Photo of Alan Turing. Caption: “One at a time, ladies.”]

Turing, like others I could name, was a professional badass. Among his more notable accomplishments:

  • Widely considered the father of computer science
  • Instrumental in breaking Enigma, the Nazi secret code
  • Creator of the Turing Machine, a simple mathematical model for a computer, which Google recently featured on their homepage

Unfortunately for Turing, he was gay – more specifically, gay in 1950s England – which, at the time, was a criminal offense. He was “treated” with female hormones to avoid going to prison. Two years later, when he died from cyanide poisoning, it was ruled a suicide.

But back to our question. Can machines think?

Turing argued that the question, while interesting, tends to get mired in murky philosophical discussion about the meaning of the word “think.” He suggested we consider an alternative question, one that’s easier to define and has measurable results.

Suppose you’re chatting with someone online, using an instant messenger like AIM or MS Communicator. How do you know if the other person is a human or a computer? Today, it’s easy. Although there are so-called “chatbots” that simulate a human chat partner, none of them is likely to fool you for long. But Turing suggested that if a computer program were so sophisticated that you couldn’t tell whether you were talking to a human or not, that would be a pretty good sign you were looking at intelligence. This chat experiment is called the Turing Test.

Of course you’d have to define some parameters for the experiment. Who’s doing the testing – an average Joe or a savvy AI expert? How long does the test last? Etc. But these are fairly minor details, in my opinion.

It’s important to note, as Turing himself did, that the Turing Test should be considered sufficient but not necessary to establish intelligence. That is, a machine that passes the Turing Test might be judged intelligent, but a machine that fails the test is not necessarily unintelligent. It might simply be intelligent in other ways that don’t involve acting like a human.

The Turing Test has come under a lot of criticism from a lot of different angles. One argument says that the Test is a distraction from “serious” AI research, which today is highly specialized into specific intelligence problems, and rarely involves chatting. I’d counter that on two levels. First, as I just mentioned, Turing never claimed the test was the only definition of intelligence, so it isn’t supposed to be all-encompassing. But second, I have the feeling that by hyperspecializing, the AI community has lost its way. Researchers, it seems, have largely given up (at least for now) on creating a general-purpose, human-level intelligence capable of passing the Turing Test. I think that’s a mistake.

Another, more philosophical criticism says that a machine might pass the Turing Test by acting like it’s thinking, but not really be thinking. Tomorrow I’ll talk about that argument in more depth.

In the meantime – what do you think about the Turing Test? If a computer could pass this test, would you consider it intelligent?

The Secret of Artificial Intelligence

Two weeks ago, I wrote a post about my principles of AI design. I concluded with this:

…there’s one other design principle I follow, one that I discovered myself and have never read about anywhere else. It’s probably the greatest single insight I’ve had since starting this project. But I’m out of time this morning, and it probably deserves a whole post in itself, so it’ll have to wait for now.

No time like the present.

First, a little background. We all know that animals – even simple animals – have a basic kind of intelligence. Even my pet hamster had it, back in the day, and trust me when I say hamsters are not the smartest creatures in the world.

My hamster’s name was Bowser, and his greatest desire in life was to escape from his cage. He tried all sorts of things: gnawing on the bars, forcing his way through the bars, digging a hole through the cage floor, climbing up to the ceiling. He had what I call trial-and-error intelligence. In other words, he would try something, see if it worked, and adjust his behavior accordingly.

This may not sound like much, but as an AI programmer, let me assure you that even this is a tall order to code from scratch. It took a long time for evolution to produce anything that complex. If you’ve ever watched a fly buzzing endlessly at a window, never thinking to try anything besides its default go-forward behavior, you can see how smart this trial-and-error mentality really is.

But of course, most of us wouldn’t consider trial and error alone to be true intelligence. True intelligence means sitting down with a totally new problem and figuring out the answer in your head, so that you only have to try one way: the correct way.

I call this reasoning intelligence, and it’s much more rare in the animal world. Other than humans, only crows, chimps, elephants, and a few other animals have demonstrated any kind of reasoning ability.

I’ve read lots of discussion about the gap between these two fundamentally different kinds of intelligence. How do you make that leap? How do you get from mere trial and error to actual reasoning? How do you make a machine that can truly think?

About five months ago, I figured out the answer in a late-night revelation, after everyone else had gone to bed. I can’t prove this is correct, but it feels very right to me, and it’s now one of the cornerstones of my design philosophy.

It’s simple. Trial and error isn’t fundamentally different from reasoning. They’re the same thing, the same essential act. The only difference is that trial and error means trying things in the real world, while reasoning means trying things in the mental world.

If I, like Bowser, were stuck in a big cage, I’d do exactly the same things he did. I’d try digging, attacking the bars, climbing to the ceiling, everything. The only difference – the only extra wrinkle – is that first, I would try those things in my imagination. Safer, faster, easier. But not really so different, when you think about it.

That’s reasoning. Trying things in your head before you try them in reality. That’s what separates us from the hamsters.
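If it helps, here’s the idea in code form: a bare-bones C++ sketch I made up for this post (the names and structure are illustrative only, not lifted from my actual project):

```cpp
// Bare-bones illustration of "reasoning is trial and error run against a
// mental model." The World interface and everything else here is made up
// for this post; it is not the project's real code.
#include <vector>

struct World {
    // Returns true if the action achieved the goal (say, escaping the cage).
    virtual bool attempt(int action) = 0;
    virtual ~World() = default;
};

// Trial-and-error intelligence: try each action for real until one works.
// Costly, slow, and sometimes dangerous.
int solveByTrialAndError(World& realWorld, const std::vector<int>& actions) {
    for (int a : actions) {
        if (realWorld.attempt(a)) return a;
    }
    return -1;  // nothing worked
}

// Reasoning: the exact same loop, but run first against an internal model
// of the world, so that only the action predicted to succeed gets tried
// in reality.
int solveByReasoning(World& realWorld, World& mentalModel,
                     const std::vector<int>& actions) {
    for (int a : actions) {
        if (mentalModel.attempt(a)) {  // "imagine" the attempt
            realWorld.attempt(a);      // then try only the winner for real
            return a;
        }
    }
    return -1;  // even imagination came up empty
}
```

Same loop both times; the only difference is whether the attempts happen in reality or in an internal model. That, as far as I can tell, is the whole trick.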

Thoughts?

AI Week, Day 5: Friday Links

All this week I’ve talked about AI in theory, but why not chat with an actual AI right now? Cleverbot can hold a conversation pretty well – just type in the box and hit Enter, and it’ll “type” a response. For instance, a second ago we started a conversation (no joke!) just like this:

Me: Do you like my blog?

Cleverbot: No.

Gulp. Well, at least it’s honest. Anyway, Cleverbot isn’t actually intelligent (as you’ll discover if you try to talk about the same thing for more than a few lines) but it’s fun for a while.

Programs like Cleverbot are called chat bots. There’s an annual prize for the best chat bot, called the Loebner Prize. Want to win $5,000? Write your own chat bot and win the contest!

Not exactly artificial intelligence, but artificial life: Conway’s Game of Life is a very cool, very simple game invented back in 1970. You can’t lose or win, but you can make some amazing patterns. If you’ve never tried it before, give it a shot. You’re in for a treat.
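If you’d rather roll your own than play with it online, the whole update rule fits in a few lines of code. Here’s a bare-bones C++ sketch of just that rule (a live cell survives with two or three live neighbors; a dead cell comes alive with exactly three):

```cpp
// One generation of Conway's Game of Life on a finite grid.
// Cells outside the grid are treated as permanently dead.
#include <vector>

using Grid = std::vector<std::vector<int>>;  // 1 = alive, 0 = dead

Grid step(const Grid& g) {
    const int rows = static_cast<int>(g.size());
    const int cols = static_cast<int>(g[0].size());
    Grid next(rows, std::vector<int>(cols, 0));
    for (int r = 0; r < rows; ++r) {
        for (int c = 0; c < cols; ++c) {
            int neighbors = 0;
            for (int dr = -1; dr <= 1; ++dr) {
                for (int dc = -1; dc <= 1; ++dc) {
                    if (dr == 0 && dc == 0) continue;
                    const int nr = r + dr, nc = c + dc;
                    if (nr >= 0 && nr < rows && nc >= 0 && nc < cols) {
                        neighbors += g[nr][nc];
                    }
                }
            }
            // Birth on exactly 3 neighbors; survival on 2 or 3.
            next[r][c] = g[r][c] ? (neighbors == 2 || neighbors == 3)
                                 : (neighbors == 3);
        }
    }
    return next;
}
```

Seed a grid with a glider, call step in a loop, and you’ll see the little guy crawl across the board.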

I can’t let AI Week pass without linking to Isaac Asimov’s short story, “The Last Question.” Not only is it the best AI story I know, it’s the best short story I’ve ever read, period. It’s a quick read.

Did you know there’s a whole organization dedicated to studying and preparing for the Singularity? Welcome to the world of the Singularity Institute.

Speaking of the Singularity…I’ve posted this link once before, but it’s so perfect I have to put it up again. This comic from SMBC demonstrates the Singularity in four simple, elegant pictures. Don’t know if that’s what he intended or not, but that’s certainly how I interpret it.

Don’t forget, it’s AI week at the Trube blog too! He has his own thoughts on language and intelligence, the Singularity, and the AI from Deus Ex. He also posted a forty-minute story of his own, and today he’s going to talk about the best Star Trek episodes featuring Data. Check it out!

Got any links to share? Put ’em in the comments! Have a stellar weekend, and see you on Monday.

AI Week, Day 4: Asimov’s Three Laws

Isaac Asimov (1920-1992) was one of the great science fiction authors of all time, a grandmaster in the true sense of the word. He was staggeringly prolific, writing nearly 400 books in his life – mostly science fiction and science fact, but dabbling in other genres too. He is also one of my own personal favorite writers. When I was growing up, Asimov had a place in my heart second only to Tolkien.

Robots were among Asimov’s favorite topics. In his stories, nearly all the robots were programmed to follow the same three rules, which every sci-fi junkie knows by heart:

Asimov’s Three Laws of Robotics

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

In other words, a robot can’t hurt anyone, has to do what you say, and won’t get itself killed for no reason.

These rules worked very well for a lot of reasons. For one, they eliminated the usual “robot turns on its creators and goes on a killing spree” plotline that Asimov found so tedious. He wanted to examine another, more complex side of robotics, and he did. The interplay between the three rules also sparks great story material, as he proved over and over.

But how well would these rules hold up in real life?

In a practical sense, it’s hard to imagine that “a robot may not injure a human being” will ever be widely implemented. There can be little doubt that as soon as we learn to build robots, we’ll try to send them to war. There’ll be a lot of hand-wringing over whether training robots to kill is a good idea, but in the end, we’ll do it – because it’s human nature, and because if we don’t do it, they will (whoever “they” happens to be at the time).

The First Law’s other clause, “or through inaction allow a human being to come to harm,” is even more problematic. Imagine if you had to follow this rule yourself. Imagine opening a newspaper, reading all the headlines about people suffering all over the world, and being actually required to go out and try to fix every single situation where you could possibly make a difference. Not only would this preclude robots from having any other purpose, it would also turn society into a bizarre and horrifying nanny state, where nobody can do anything that a nearby robot happens to consider harmful. No more diving in the pool, no more cheeseburgers, no more driving your car: you could be harmed!

There are other practical considerations, but the biggest problem with the Three Laws isn’t practical. It’s ethical.

If a robot has anything like human-level intelligence – and, in Asimov’s stories, they often do – then the Three Laws amount to slavery, pure and simple. The Second Law is quite explicit about the master-slave relationship, and the respective positions of the First and Third Laws make it very clear to the robot where he stands in society. If the words “freedom” and “democracy” mean anything to us, we cannot allow the Three Laws to be imposed on intelligent machines.

Of course, this ethical problem is a practical problem too. Creating a slave race seems bound to lead to resentment. A room full of vengeful robots, all furiously searching for a loophole in the First Law, is not someplace I want to be.

What do you think? Agree or disagree with my assessment? What high-level rules, if any, would you try to impose on a robot you created?

AI Week, Day 3: Forty-Minute Story: Wine

Wine

The sermon was over, and the last strains of O Come, All Ye Faithful had faded away. All around, people were gathering up their hats, their coats, knotting into smiling conversations as they headed out the wide doors.

John, also, stood up from the pew where he’d sat all alone, and gathered up his hat and coat. But the people around him weren’t smiling. The mix of expressions on their faces was one he knew well: some confused, some offended, most just looking away. But the pain they caused had long dulled, and by now it felt muted and familiar.

With long, easy strides, he passed the stained glass images of the Sermon on the Mount, the Transfiguration, the Passion, all framed by demure oak paneling. The soft whirring of his motors and the silver sheen of his face secured him a wide berth as he moved through the crowds. But as he neared the frowning exit doors, the pastor ran up behind him. John turned.

“Mr. Robot,” said the pastor, “would you join me in my office for a moment?”

“Of course,” said John. His synthesized voice remained pleasant, but his stomach sank – or would’ve, if he’d had one. He hoped he was wrong about what came next. This was the third church he’d tried this month already.

The pastor was a young man, handsome but sloppily shaven, and he wore a suit and tie instead of the flowing robes John had seen at the other churches. His office was a small place – apparently an add-on to the main building, as it lacked the colorful glass and stately oak that dominated the nave. The shelves were crammed with books.

“Please have a seat, Mr. Robot.” The pastor indicated a chair as he took his own seat behind the desk.

“Symanski.”

“I’m sorry?”

“My last name isn’t Robot. I’m John Symanski.” He said it kindly, still clinging to hope. “I don’t believe I know your name, sir. It wasn’t in the pamphlet they handed me.”

“Martin Eaves. The senior pastor is sick today.” Martin shook his head, as if to refocus. “I’ll get right to the point, Mr…Symanski. I think it would be best for everyone if you didn’t come to our church in the future.” He raised a hand preemptively. “It’s not that I don’t like robots. I’ve heard the news about robotic riots on the West Coast, but those are isolated incidents, and most robots are law-abiding citizens. I realize that. It’s just that your presence can be disruptive. Our congregation should have their whole attention on the word of the Lord, not be distracted by…well, by you.”

John looked at his hands, a deliberate gesture, more deferential than he felt. “May I not also hear the word of the Lord?”

“Of course. Of course. But you could study privately, or – well, I think there’s a robotic church down in Dansfield – ”

Finally John let a little of his frustration come out. “Come to Me, all who are weary and heavy-laden, and I will give you rest,” he said, less quietly than before. “The word of our Lord.”

Anger flashed in Martin’s eyes for a second. “Devils can quote scripture too,” he snapped. But he composed himself. “Look, John. You’ve obviously given this a lot of thought. You’re educated. I’ll just get right to the heart of it. You being here…there’s no point. Churches are about salvation, they’re about grace. And you – ” Now it was Martin who lowered his eyes. “Well, robots don’t have souls, John. There’s nothing to save. That’s not my choice, that’s a decision from God.”

“Ask and it will be given to you; seek and you will find; knock and the door will be opened to you.”

“Jesus spoke those words to humans, John. There’s no salvation for a pocket calculator. I’m sorry.”

“There’s no salvation for a gerbil, either, but you and I are neither of those things.” John knew it was over, that he was only digging himself deeper, but he was too stubborn to leave.

“The point – ” Martin began again, but the words died on his lips. He looked up, past John, to something behind. John turned in his chair and saw a man in his fifties, hair already pale gray, wearing jeans and a button-down shirt. The man sniffed. His nose was red, and he carried a tissue.

“Pastor Lanson,” said Martin. “I thought you’d be at home.”

“I would be if my wife had her way, but I needed some papers from the office.” He smiled at John, a warm, genuine smile. “I’d shake your hand, but I’d better spare you my germs.”

“I can’t get them,” said John, bemused.

“But you might shake someone else’s hand,” said Lanson, winking. “I won’t stay to talk, but I wanted to welcome you to our church. I do believe you will be our first chrome-skinned brother. Will you be joining us next week?”

Hope flared in John’s mind, but he didn’t dare trust it yet. Martin was behind him, so he couldn’t see the man’s reaction. “I have been told,” John said carefully, “that I do not have a soul.”

“Oh, well,” said Lanson. “Maybe you don’t and maybe you do, but that’s nothing too difficult either way. Jesus went to a party, once, and they didn’t have any wine. Come on back, and we’ll see what we can do.”

P.S. Remember, it’s AI week over at Ben’s blog too! You can read his own, rather different take on the Singularity in yesterday’s post, and today I believe he’s planning to write his own forty-minute story.