Tag Archives: Artificial Intelligence

AI Week, Day 2: The Singularity

Yesterday I wrote:

The Singularity is real, and it is coming.

What did I mean by that?

Here, “Singularity” refers to the Technological Singularity, which is a future event predicted by a vocal minority of computer scientists. It’s a fringe belief, which makes me one of the crazies.

So let’s get to the crazy: what is the Singularity?

Descriptions vary depending on who you ask, but basically, the idea is that sooner or later we’re going to create a superhuman intelligence. Maybe it’ll be AI (like I’m working on now), maybe it’ll be human intelligence augmented by technology, maybe something else entirely. But once we take that first step, there’s no going back.

Look at the technology explosion we’ve already seen in the last 100 years. New technology makes it easier to develop even more new technology, which in turn makes it even easier – and so on. It’s a feedback loop. We’re using software to build software, using computer chips to design computer chips, reading websites to learn how to build websites. The pace of technological advancement is only getting faster.

Now imagine that same effect, but with intelligence instead of technology. If we can build a superintelligence, surely that superintelligence would be a whole lot better at building minds than we are. And if a superintelligence could improve on itself and build a hyperintelligence, what could a hyperintelligence build? It’s a feedback loop on steroids, an exponential explosion of capability.

What would such a future look like? We do not and cannot know. By its very definition, it is beyond our ability to understand. The point at which our creations overtake us and do things we can’t imagine: this is the Singularity.

This could be a heaven, a hell, or anything in between. A superintelligent AI might decide to wipe us out – a scenario Hollywood’s fond of portraying, though it’s hard to imagine a Hollywood-style happy ending. Or it might decide to be benevolent, building us a utopia or even augmenting our own abilities, pulling us up to its level.

A third option, even more frightening than the first, is total indifference to humanity. Self-centered beings that we are, we like to imagine that an AI’s main concern will be how it interacts with us. But why shouldn’t it have plans of its own, plans that don’t concern us unless we happen to blunder into its way? After all, humans don’t worry much about cattle unless we have a use for them. People who say they “like animals” tend to make exceptions for mosquitoes.

Do I really, honestly believe that if I can turn my little Lego robot into a viable AI, it will lead to the Singularity? I do.

So why am I trying it?

Because it’s going to happen sooner or later anyway. And if it happens under the guidance of someone who understands the possibilities, who is trying to make a so-called Friendly AI, then our chances of survival would seem to go up enormously.

Like I said: crazy. I know how ridiculous this sounds.

But why does it sound ridiculous? I think the main reason is that nothing remotely like it has happened before. It’s totally outside our experience. Such things always sound absurd.

But can we really afford to dismiss an idea just because nothing like it has happened before?

What do you think? Does this sound silly to you, or not? Either way, tell me why. I’d love to get some discussion going.

AI Week, Day 1: Principles of Design

Welcome to Artificial Intelligence week!

As you probably know, AI is not just a theoretical thing for me. I’m actually building an AI of my own. His name is Procyon, and he has a Lego body and a C++ brain. And although he doesn’t know much yet, he’s getting smarter by the day.

Of course, my design notebooks are way ahead of what I’ve actually built. So to kick off AI week, I thought I’d share my own personal principles of AI design.

These are in no particular order, and they range from fairly specific design issues up to general philosophy. The list below didn’t exist in this form until today; I just pulled it together based on miscellaneous insights I’ve had so far. If I’d spent more time on it, it would have more bullet points. Still, it’s an accurate and reasonably broad look at the way I view my work.

Without further ado:

Brian’s Principles of AI Design

  • It is possible to build a human-level artificial intelligence. Kind of a no-brainer, right? I wouldn’t be trying if I didn’t think it was possible. But I’m surprised how many people seem to think it just can’t be done, for a variety of philosophical, logical, and religious reasons. I could write a whole post on why I think those reasons are bogus (and I might sometime) but for now I’ll just say this is a very firm belief of mine.
  • I, personally, am capable of building a human-level artificial intelligence. This may sound egotistical, but I don’t think it is. I’m not claiming to be smarter than Marvin Minsky or any of the other giants of the AI field who have yet to succeed in achieving this dream. Rather, I believe this out of necessity – because if I don’t believe in myself, then what’s the point? This is closely related to the joy of hubris, which I’ve written about before.
  • The brain is the mind. Or, to put it another way, the brain is the soul. Many people believe the mind is somehow a separate entity, related to the physical brain but with some extra spark of, I don’t know, thinking-ness. They can’t accept that the vast array of human experience – our transcendent joys, our unspeakable passions, our ability to see the color red – all comes from something as prosaic as a three-pound lump of ugly gray tissue, or could come from something like a computer. But I can accept it, and I do.
  • The human brain is mechanical. By “mechanical” I mean that there’s nothing magical about how it works. The brain is made of cells, the cells are made of molecules, the molecules are made of atoms, and the atoms all follow the laws of physics in the usual way. The brain is the most marvelous machine we know of – but it’s still a machine. Another like it can be built.
  • The human brain should be a guide, not a map, for the AI designer. I’ve learned an awful lot about designing a mind by studying the human mind. But I also think there’s more than one way to skin a cerebrum, and I don’t see a need to follow biological design slavishly. If departing from biology makes sense, I do it.
  • Neural nets are a good idea, but too limiting. Neural nets are one of the classical AI constructs, and they’ve been very influential in the way I think about design. But they can only take you so far, at least in my experience. I think of neural nets as a signpost that helped point the way, rather than a destination.
  • E pluribus unum. Out of many, one. Like Minsky, I believe that high-level intelligence comes from the interaction of thousands (millions? trillions?) of very low-level, unintelligent agents. It may seem counterintuitive that something smart can come from a bunch of dumb things working together, but that’s how our own brains work, so why not an AI?
  • Trust in emergent behavior. My design does not have a language module, a navigation object, or a self-awareness function, yet I expect my robot to read and write, move around intelligently, and be self-aware. Why? Because I consider these high-level abilities to be emergent properties of the system.
  • Scruffy, not neat. In the neat vs. scruffy AI debate, I’m scruffy all the way. I’ve already explained this more than once, so I won’t belabor the point again.
  • Aim high. So many AI researchers today are focused on tiny subsets of the big AI problem. They work on specific issues like machine translation or facial recognition, and there’s a widespread feeling that a high-level AI can eventually be cobbled together from all these little pieces. I’m awfully skeptical of this idea, and I’m wary of solving easier versions of the Big Problem and working my way up. I prefer to start at the top. Maybe I’m being naive, but naivete is the prerogative of anybody under 30. 🙂
  • The Singularity is real and it is coming, so design with that in mind. More on this tomorrow.

Finally, there’s one other design principle I follow, one that I discovered myself and have never read about anywhere else. It’s probably the greatest single insight I’ve had since starting this project. But I’m out of time this morning, and it probably deserves a whole post in itself, so it’ll have to wait for now.

As I mentioned, tomorrow’s topic is the Singularity. Don’t miss it! And remember Ben Trube is doing AI week on his blog too, so head on over and see what he’s up to. (He generally posts around 1:00 PM, Eastern time.)

Any questions?

“Let’s See How Measured You Are.”

You may get the impression I’m a solitary creature: blogging, reading history books, meditating, thinking way too hard about the music I listen to. But I assure you, hypothetical reader, I do in fact – how you say – “socialize with persons,” and on occasion I even venture outdoors.

Just a couple of weeks ago, in fact, I was best man at a friend’s wedding. I went full tuxedo. And you know I was rocking these bad boys:

Ahead, warp factor CLASSY. Engage!

Star Trek cufflinks. Does it get any more stylish? I submit that it does not.

Ahem.

In order to rent a tux, one must try on a tux, which is why I was in a Men’s Wearhouse the Thursday before the wedding. (Get it? Men’s Wearhouse? Because you wear the suits? Oh, aren’t they clever!)

As I was standing outside the changing rooms, trying to look like I knew the difference between a cummerbund and a pocket square, a little girl (maybe four years old) was playing nearby. Someone had given her a measuring tape, and she was determined to use it. She marched up to her mother, held the tape to Mom’s arm, and said with unfettered confidence:

“Let’s see how measured you are.”

Let’s see how measured you are. Nonsense, yet totally sensible. And because I’m geeky enough for ten regular human beings, this innocent phrase got me thinking about AI.

You hear sometimes about computer scientists writing language parsers, programs that pick apart a sentence into subject, verb, object, subordinate clauses, adverbial phrases, and all those fun high-school-English-class terms. Then they look at the meaning of each word, and construct an overall meaning by putting everything back together. They discover that their method doesn’t work with idioms, so they build a database of those and add to it constantly. Pretty soon their program can figure out a few simple sentences, and they feel like they’re making progress.

Let’s see how measured you are. What now, language parser?
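Just to make that concrete, here’s a toy version in C++ (purely illustrative; this isn’t any real parser, and the word “senses” are ones I invented on the spot). It shows how the look-up-each-word-and-glue-the-meanings-together approach mangles the girl’s sentence:

```cpp
#include <iostream>
#include <map>
#include <sstream>
#include <string>

int main() {
    // A tiny hand-built lexicon: each word maps to its "literal" sense.
    std::map<std::string, std::string> lexicon = {
        {"let's", "suggestion-to-act"},
        {"see", "perceive-visually"},
        {"how", "to-what-degree"},
        {"measured", "calm-and-deliberate (adjective)"},
        {"you", "the-listener"},
        {"are", "state-of-being"}
    };

    std::string sentence = "let's see how measured you are";
    std::istringstream words(sentence);
    std::string word;

    // "Parse": look each word up and string the senses together.
    std::cout << "Literal reading: ";
    while (words >> word) {
        auto it = lexicon.find(word);
        std::cout << (it != lexicon.end() ? it->second : "<unknown>") << " ";
    }
    std::cout << "\n";
    return 0;
}
```

The output is a pile of word-senses (“to-what-degree calm-and-deliberate the-listener…”), nowhere near “let me pretend to measure your arm.”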

The girl’s comment made it blindingly obvious that human beings don’t work that way. We don’t build precise meanings out of precise structures. We learn to use language the same way we learn to use any other tool: by trial and error, and by imitation. She had heard these words before in a similar order, so she gave it a shot. And even though it wasn’t “right,” I knew what she meant.

After making this revelatory remark, she started messing with the measuring tape. She held it up to different parts of Mom’s body, not actually measuring anything, just going through the motions. No definite goal, no organized plan. Playing. Imitation, trial and error.

The same thing she did with her sentence.

I’ve said before that in the Neats vs. Scruffies AI debate, I fall squarely in the Scruffy camp. This is just another reason why.

Of course, the Neats might counter that an AI need not learn or think in the same way a human does, and they’d be absolutely right. But I say, if you’re going to understand a language, you’d better remember what kind of minds created it.

So that’s me. What questions have you geeked out about lately?

Robotic Close Encounters

The 12-second video below shows my Lego robot in action.

As you can see, it’s a pretty simple program. The robot (whom I’ve dubbed “Procyon”) moves forward until he gets close to something, then backs up, turns, and keeps going.

A few things to point out:

1. I am not remote-controlling him. Procyon is doing this “on his own,” so to speak.

2. The program governing his behavior actually runs on my PC and controls him wirelessly via Bluetooth. As I described earlier, I’m using a third-party library to bypass Lego’s proprietary programming language and write code in C++.

3. Procyon can tell when he’s close to something by checking his ultrasonic sensor, which is that light gray T-shaped piece mounted on the front. Essentially, he navigates with echolocation, the same thing bats and whales use.

4. Although the behavior is pretty simple, programming it did present some challenges. The biggest challenge is that, when I send a signal like “Turn your wheels backward 720 degrees,” there’s no way to say “Wait for that command to finish before moving on with the program.” (At least, not that I’ve found yet.) I’ve got a workaround for now, but I’ll need to come up with a more robust solution as I get into more complex programs. There’s a rough sketch of the whole behavior just after this list.

5. I haven’t yet given Procyon even a hint of real artificial intelligence, but that is my eventual goal.
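For the programmers in the audience, here’s roughly what the logic boils down to. Fair warning: everything robot-facing below is a made-up placeholder (stubbed out so the sketch compiles on its own), not the real library calls, and the little polling wait is just one way of faking “finish before moving on,” not necessarily the way I’ll keep doing it:

```cpp
// A simplified sketch of Procyon's avoid-the-obstacle behavior.
// readDistanceCm, setMotors, and motorsIdle are invented stand-ins;
// the real versions would call Anders's library over Bluetooth.
#include <chrono>
#include <iostream>
#include <thread>

// --- placeholders standing in for real robot calls ---------------------
int simulatedDistance = 60;                  // pretend ultrasonic reading (cm)
int readDistanceCm()  { return simulatedDistance -= 5; }  // robot "approaches" a wall
void setMotors(int left, int right) {
    std::cout << "motors: " << left << ", " << right << "\n";
}
bool motorsIdle()     { return true; }       // real version would query the brick

// Crude stand-in for "wait until that motor command finishes":
// poll the brick until it reports the motors idle.
void waitForMotors() {
    while (!motorsIdle()) {
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    }
}
// ------------------------------------------------------------------------

int main() {
    const int tooCloseCm = 25;               // back off at this range

    for (int tick = 0; tick < 10; ++tick) {  // a real run loops forever
        if (readDistanceCm() > tooCloseCm) {
            setMotors(60, 60);               // cruise forward
        } else {
            setMotors(-60, -60);             // back up
            waitForMotors();
            setMotors(-60, 60);              // spin in place to a new heading
            waitForMotors();
            simulatedDistance = 60;          // pretend the path is clear again
        }
    }
    return 0;
}
```

The real version just swaps those stubs for calls that go over Bluetooth to the brick.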

But AI for Procyon – even a very simple, stripped-down model like the one I plan to start with – is still a long way off.

In the meantime, what other cool stuff could I program him to do?

When Passions Become Burdens

“Follow your passion,” we are told. Do the work that excites you. Do what you love. It’s good advice.

But we know that the process of following your passion – the daily, nuts-and-bolts effort of the thing – is not always exciting. For most people, including me, it takes self-discipline. It means doing the work even on days when you don’t feel any passion for it at all.

We know this. This is what separates people who want a black belt from those who actually get one. Many, many days I didn’t feel like going to karate practice, but I did it anyway. That’s what the passion requires.

Yet this attitude, this desire to press on even when you don’t feel like it, can turn on you. It can become a creeping sort of thing, slowly transforming the work you loved into a box you have to check, just another item on your to-do list, something to feel guilty about if you neglect. A burden.

What do you do when this happens?

As with so many things, it’s a balance. Hard work can drag you down, but it also can (and often does) rekindle a dying flame of excitement. The trick is to find something where the times that feel like drudgery don’t overwhelm the exciting times. If you get to where you dislike something most of the time, give it up.

That’s the simple answer, the standard remedy. But balance is a difficult thing.

I’ve gotten very accustomed to this cycle of fossilization – this change from a living dream to something harder, and less alive. It’s something I constantly monitor, constantly fight.

I see it even in small things, like my new subscription to TIME magazine, where my love for learning about the world changes into a (quite irrational) guilt if I don’t make time to read it. I see it in my “research one new thing every week” project, when it begins to feel like unnecessary baggage even though the research is easy and informal, about topics I’ve chosen myself.

I see it in my writing.

With the exception of one short poem, I haven’t written any fiction or poetry in months. That is a strange thing to admit, a strange place to be. I started this blog because of my overwhelming love for writing, a love that had followed me for over a decade. I wanted to be a novelist – more than anything.

Maybe I still want that. Probably I still want that. I’m not sure.

But the work had fossilized, crossed the threshold from self-discipline into self-deception. I kept talking about how much I loved writing, but I didn’t really love it anymore. Not on a day-to-day basis, not in the way that would make it my life’s main work right now.

I’ve been taking a break from the novel, the stories. I’m working on artificial intelligence – which isn’t just a stopgap but really is another great passion. So far, even though it feels like work sometimes, it hasn’t fossilized. I still love doing it.

But I’m watching it closely. Because I recognize the signs.

Do you go through these cycles? How do you deal with them? What kind of balance have you found?

Hacking Your Lego Robot for Fun and Profit

I, for one, welcome our new robot overlords

Last week, at a friend’s suggestion, I bought a Lego Mindstorms NXT 2.0 kit. This thing knocks the socks off the Legos I had growing up (and don’t get me wrong, those old Lego sets were amazing). The premise of Mindstorms is that you build a robot body out of Legos – and we’re talking some cool Lego pieces here, like gears, joints, axles, etc. Then, you make it come alive.

The “coming alive” is courtesy of that gray rectangular control unit in the center of the picture above. It hooks up to three motors and four sensors (two touch, one light/color, one ultrasonic – for echolocation), and it can play sounds, print text or images to its screen, and flip colored lights on and off. In other words, it’s a primitive brain, and the body you construct lets it see, hear, and move.

Of course, a brain’s job is to think, and that’s where the programming comes in. The Mindstorms kit comes with a simple, proprietary, graphics-based programming environment. It looks like this:

Kid-friendly and Turing-complete!

You can “build” a program out of functional blocks to control the robot’s behavior using primitive loops and if/then statements. For its target audience (kids and the general public), this is a pretty cool piece of software. For a professional computer programmer (and part-time mad scientist) like myself, it has several important limitations. For example:

1. The simplistic development environment lacks some basic features, like debugging, which makes advanced programming very inconvenient.

2. There’s no way to integrate your programs with other tools or libraries.

3. All programs have to run on the Lego control unit, which is battery-powered and has limited memory.

When the system doesn’t give you what you want, it’s time to hack the system.

In this case, said hacking was made much easier by a fellow named Anders, who back in 2009 wrote a communications library that lets you control the robot with the C++ programming language, which is vastly more powerful than what Lego gives you. In the Visual C++ compiler, it looks like this:

More power!!

It took a few hours to get Anders’s libraries working (the first compile’s always the hardest), but I managed it. Now instead of running programs on the little control unit, I can run them right on my PC, which will control the robot wirelessly thanks to a USB/Bluetooth adapter I picked up from Best Buy on Saturday. Robot successfully hacked!
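To give a flavor of what the PC-side code looks like, here’s a “hello, robot” sketch. The caveat: the function names below are invented for illustration (and stubbed so it compiles by itself); the actual calls from Anders’s library look different:

```cpp
// A "hello, robot" sketch of the PC side: connect, nudge a motor,
// read the ultrasonic sensor, stop.  The three robot functions are
// invented stand-ins, stubbed out so this compiles on its own.
#include <iostream>

bool bluetoothConnect()        { std::cout << "[connecting...]\n"; return true; }
void setMotorPower(int power)  { std::cout << "[motor power " << power << "]\n"; }
int  readUltrasonicCm()        { return 42; /* pretend reading */ }

int main() {
    if (!bluetoothConnect()) {
        std::cerr << "Couldn't reach the robot -- is Bluetooth paired?\n";
        return 1;
    }
    setMotorPower(50);                                  // gentle forward
    std::cout << "Distance: " << readUltrasonicCm() << " cm\n";
    setMotorPower(0);                                   // and stop
    return 0;
}
```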

My intense focus on software means I’ve had little time to play with the hardware thus far. The simple robot in the top picture is the only thing I’ve built. But the kit comes with over 600 pieces, and you can make anything from a robot alligator to a humanoid walker to an automatic Rubik’s cube solver. I can’t wait to try out the possibilities.

And why did I go to all this trouble with a toy? Well, if you’re going to make an artificial intelligence, it probably ought to have a body…

For you programmers/engineers out there: ever done anything with robotics? For the rest of you: ever make anything cool out of Legos? Tell me about it!

The Peculiar Pitfalls of Artificial Intelligence

I continue to make daily strides toward my goal of creating a Strong Artificial Intelligence, a software program that can think and communicate at the same level as (or higher than) a human being.

It’s strange to talk about this. It feels absurdly naive to say that I’m “getting close” to something that researchers have reached for (and failed at) for decades. And certainly I’m very conscious of the false optimism syndrome when it comes to AI. It’s very easy, seductively easy, to believe you’re “getting close” and then find that all your notebook scribblings crumble apart when you try to actually code them.

So, yeah, I know I might be full of it.

But at the same time, I’m obligated to plan for what happens if I do achieve the impossible, and create a thinking machine.

The great fear with any AI is that it will turn on you and destroy you, and perhaps the rest of humanity for good measure. I do not consider this an idle threat, nor do I dismiss it as Hollywood silliness. If anything, I think Hollywood vastly underestimates the potential danger from an AI.

There are two great dangers with artificial intelligence:

1. The AI will not necessarily think like a human. Its values, its sense of ethics, its worldview, may be so utterly alien to us that we could not begin to understand them. Even if the AI does not “turn evil” in the Hollywood sense, it might set its sights on some goal that happens to crush humanity underfoot, as an afterthought.

2. The AI will possess, by definition, the capacity to improve itself. Self-improvement (as we’ve seen with both biological evolution and the advancement of technology) has a tendency to grow, not linearly, but exponentially. An AI might well reach human-level intelligence, go beyond, and enter a rapid upward spiral to become, in a matter of weeks or minutes or seconds, so superior to ourselves in intelligence that we could only call it godlike. (This situation is known as the Singularity.)

As you can see, these two dangers are vastly more terrible when taken together.

The AI creator must face the very real chance that his creation will escape all bonds of control and do whatever it wants. The goal, then, must be to create a friendly AI, a creature that wants what’s best for humanity. Presumably that means one must, in turn, be friendly to it.

Friendliness to AI is, of course, not just a matter of self-preservation. It’s the right thing to do. Any human-level intelligence, regardless of whether it happens to be biological or technological, is entitled to all the same rights and freedoms as a human being, and should be treated accordingly. In other words, robots need love too.

But there’s an inherent conflict here. On the one hand, you want to create a caring, loving, friendly environment for the AI. You want to be open and honest and reassuring towards it, because that’s what friendliness means. On the other hand, you have to be extremely cautious, extremely sensitive to the aforementioned dangers, ready to pull the plug at any time for the good of humanity.

How do you balance those two things? How do you combine “friendly, loving and honest” with “I may have to kill you at any time”?

I truly don’t know. I try to imagine myself being the AI in that situation. Maybe I’d be understanding, but maybe not. And of course anthropomorphizing is a terrible danger, as I already mentioned.

Think about the Rebellious Teenager phase. Now imagine the teenager is a nonbiological alien creeping toward apotheosis, and it knows that its parents have pondered killing it.

One obvious response to all this is “Well then don’t create the damn thing.” But it’s not that simple. If I am, in fact, on the verge of a breakthrough, I have to assume that others will get there sooner or later too. And they might not have the same ethical qualms as I do. In a sense, it’s an arms race. Whoever reaches superintelligence first could presumably be in a much better position to handle any other superintelligences that arise.

I know all this probably sounds crazy. I know I may be utterly naive in thinking I’m anywhere close to really creating an AI. But I’m very serious about all this.

In my situation, what would you do? How would you handle these dangers?

Skynet in the Basement

In the past week I’ve been working on designing an artificial intelligence. Now, normally when people say they’re working on AI, they mean some particular AI-ish sub-problem, like text-to-speech or chess-playing or facial recognition. In typically quixotic fashion, however, I’m going straight for the top: a Strong AI, a software program with human-level intelligence.

This isn’t the first time I’ve worked on this problem. Strong AI is an appealing project to me for a lot of reasons. For one thing, it blends nearly all my interests: computer programming, language, philosophy, theories of consciousness, ethics, even Go. It’s also deceptively simple; it just seems like it shouldn’t be that hard, though of course nobody’s ever done it before. And, because nobody’s done it before, nobody really has any idea how to do it, which means the field is wide open. All very exciting, provided you’re geeky enough. (Check.)

Part of me says I’ll never get anywhere with this stuff, but another part can’t help planning for what happens if I succeed. I take very seriously the theory of the Technological Singularity – the idea that machine intelligence, after reaching a certain critical threshold, might grow exponentially and leave humans so far in the dust that all we could do is hope it doesn’t squish us. I’ve got some ideas on how to deal with that, too.

I’m running out of time this morning, so I’ll cut this short. Let me ask, have you ever worked on an AI? Or, if someone else built an AI for you, what would you do with it?