
Lessons I’ve Learned as an AI Developer

I’ve written before about some of my AI principles of design, as well as one of the deep secrets of artificial intelligence. I’ve even explored what Nietzsche can tell us about AI.

Here are a few more humble insights from my few years in the trenches. Like the other posts, these are just based on my own experience, so take them with a large grain of sodium chloride:

1. Finding a strategy for Friendly AI is crucial to surviving the Singularity, but I’m skeptical of Friendly AI based on a single overarching goal. Eliezer Yudkowsky has written a lot about the pitfalls of giving an AI a top-level goal like “maximize human happiness,” because if you define happiness as the number of times someone smiles, we could all end up with plastic surgery and permanent Joker faces – or worse. I agree, but I go a step further. I think any pre-defined high-level goal is bound to take you somewhere unpredictable (and probably bad). I think a Strong AI, like a child, has to be instructed by example: rewarding “friendly” behavior, punishing “unfriendly” behavior, as it occurs.
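The reward-by-example idea can be sketched as a tiny tabular learner. Everything here is illustrative, not a real Friendly-AI design: the behaviors, rewards, and update rule are assumptions chosen just to show the shape of "instruction by example" as opposed to a hard-coded top-level goal.

```python
# A toy sketch of "instruction by example": the agent has no fixed
# top-level goal; its value for each behavior is shaped by rewards and
# punishments delivered as the behavior occurs. All names and numbers
# are illustrative assumptions, not a real Friendly-AI scheme.

class ExampleTaughtAgent:
    def __init__(self, learning_rate=0.1):
        self.values = {}          # behavior -> learned desirability
        self.lr = learning_rate

    def feedback(self, behavior, reward):
        """Nudge the stored value of a behavior toward the observed reward."""
        old = self.values.get(behavior, 0.0)
        self.values[behavior] = old + self.lr * (reward - old)

    def preferred(self):
        """Return the behavior currently judged most 'friendly'."""
        return max(self.values, key=self.values.get)

agent = ExampleTaughtAgent()
for _ in range(50):
    agent.feedback("help human", +1.0)    # rewarded when it occurs
    agent.feedback("ignore human", -1.0)  # punished when it occurs

print(agent.preferred())  # -> help human
```

The point of the sketch is that nothing in the agent says what "friendly" means in advance; the valuation emerges from the stream of examples, for better or worse.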

2. Like David Hume, I believe reason is grounded in experience. I’m doubtful that a top-down ontology (such as Cyc) can be built like a castle in the air, then “grounded” later on a robotic foundation. Cyc is an ambitious and very cool project, and I respect what they’re doing. But I don’t think it’s on the path to a Strong AI.

3. Our minds work by trial and error. We focus more on “what works” than “what’s true.” It’s hard for me to see how a Strong AI could be built from a formalized, truth-based system that starts with axioms and derives conclusions in a logically airtight way. Listing my reasons for this belief would require a whole separate post; let’s just call it an instinct for now.

What do you think? Agree, disagree? Questions?


The End of Moore’s Law

People (like me) who run around screaming “The Singularity is coming, the Singularity is coming!” build their ravings on a single foundation: the idea of exponential speed-up. Technology, we observe, isn’t just getting better faster. It’s getting better exponentially faster.

The classic example, familiar to anyone in IT, is Moore’s Law. The law is named after Intel co-founder Gordon Moore, who predicted in 1965 that the number of transistors we could fit on an integrated circuit would double about every two years.

To dumb it down a bit: he was saying that computers in 1967 would be twice as fast as the ones he was building right then; in 1969, 4x as fast; in 1971, 8x as fast. At that rate, the computers of 1981 would be 256x as fast as his present-day machines; 1991 would be over 8,000x faster; and 2013 would be nearly 17 million times faster.
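The arithmetic is simple enough to check directly. This snippet just computes the cumulative speed-up implied by a doubling every two years from a 1965 baseline, which is the assumption the examples above use:

```python
# Cumulative speed-up predicted by Moore's Law, assuming one doubling
# every two years starting from a 1965 baseline.

def moore_speedup(year, base_year=1965, doubling_period=2):
    """Predicted speed-up factor of `year` relative to `base_year`."""
    doublings = (year - base_year) / doubling_period
    return 2 ** doublings

for year in (1967, 1969, 1971, 1981, 1991, 2013):
    print(f"{year}: {moore_speedup(year):,.0f}x")
```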

Moore himself thought this trend might last only ten years or so; the staggering, impossible wonder of the computer industry is that it’s never stopped.

We must remember, though, that Moore’s Law isn’t a physical law like gravity or magnetism. It’s a prediction – a guess, really. And it’s bound to end eventually.

Just yesterday, I found an article from the MIT Technology Review called “The End of Moore’s Law?” It acknowledges the amazing, uncanny foresight of this exponential acceleration, but it cautions: “There are some good reasons to think that the party may be ending.” It goes on:

The end of Moore’s Law has been predicted so many times that rumors of its demise have become an industry joke. The current alarms, though, may be different. Squeezing more and more devices onto a chip means fabricating features that are smaller and smaller…to get there the industry will have to beat fundamental problems to which there are “no known solutions.” If solutions are not discovered quickly, Paul A. Packan, a respected researcher at Intel, argued last September in the journal Science, Moore’s Law will “be in serious danger.”

If the “sustained explosion” in processor speed does finally flicker out, then no Singularity. But will it? There’s no way to know, right?

It’s not like we can magically look a decade into the future and see what happens – right?

Actually, we can. That article was written May 1, 2000. Thirteen years later, processor speed hasn’t stopped doubling yet.

But listen: now AMD says the end of Moore’s Law is on the horizon, this time for really reals.

The Singularity Club


Four days ago, I joined SingularityHub.com, a site with news, discussion, and videos related to the Technological Singularity, the so-called “Rapture of the Nerds.” To become a member of this site, you don’t need to be an AI researcher, a neuroscientist, a billionaire investor, or anything like that.

You just need to be, well, a fan.

I’ve been exploring this corner of the Interwebs lately, and an odd little corner it is. The heavyweight is the Singularity University, a surprisingly well-connected group funded by the likes of Google, Cisco, Nokia, and Autodesk (creator of AutoCAD).

And there’s the 2045 Initiative, a group founded by Russian billionaire Dmitry Itskov, dedicated to “the transfer of an individual’s personality to a more advanced non-biological carrier, and extending life, including to the point of immortality.” Project deadline: 2045.

Mega-projects aside, you’ve got blogs like Accelerating Future, Singularity Weblog, and Transhumanity.net.

And, of course, Singularity thinkers like Ray Kurzweil and Eliezer Yudkowsky have their own online presence. Kurzweil, incidentally, was hired by Google a few months ago; it was his first time working for a company he didn’t create.

The Singularity research/enthusiast community is, as I said, a strange little group. Websites are a mix of real news about promising present-day tech, debates about philosophy and spirituality and robotics, and bona fide major efforts to bring this vision of the future a little closer to reality.

The common link across this group is that people really believe. They know it sounds crazy, but then, the truth often does.

What do I think about all this?

Well, as I wrote last year, I believe the Singularity is real, and I believe it is coming. Maybe not in our lifetimes, but it’s coming. I am very much a part of the strange little group. I honestly think it’s a real possibility that some human beings alive today will live to see their one millionth birthday.

I, too, am conscious of how ridiculous this sounds. I know the Internet is teeming with these fringe micro-groups that feed on each other’s delusions until they’re convinced that their tiny groupthink vision is a prophecy for all mankind. I get that.

And yet.

A billion years ago, multi-celled organisms were a novelty. A million years ago, there was no such thing as language. A thousand years ago, electricity was nothing more than an angry flash in the sky. A hundred years ago, the whole idea of airplanes was still strange and new. Ten years ago, smartphones were only for the early adopters.

Today, telekinesis is real. Lockheed Martin has a quantum computer. And Moore’s Law, despite constant predictions otherwise, hasn’t failed us in forty years.

Am I really supposed to look at all that, and not believe we’re headed toward something?

Brian Answers: Would You Live Forever?

Today, on our final Ask Brian Anything post, Shaila Mudambi wonders:

Do you want to be immortal and why?

If by “immortal” you mean that I would literally never die, ever, this would be pretty terrible. Fast forward a trillion trillion years: every other person or being everywhere is long dead, the stars have gone out, the universe is nothing but infinite frozen darkness – and there you still are, floating, powerless, alone, conscious for all of time. That kind of immortality is basically hell.

Generally, though, “immortal” means something a little more limited: you don’t die of old age or sickness, but you can be killed, by murder or suicide or just falling into a giant pit of molten sulfur. (Uh…for example.) This kind of immortality is much better, and if that’s what you mean, then my answer is an emphatic yes.

Imagine what you could do with a hundred thousand years!

You’d be master of anything you cared to study, just because you’d have so much time. A hundred years for calligraphy, a hundred years for computer programming, a hundred years to just read, and read. Think how much you could learn. Think how much money you’d have, with interest accumulating over the centuries.
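The compound-interest point is easy to make concrete. The principal and rate below are arbitrary assumptions for illustration, but the shape of the curve is the whole argument:

```python
# Compound interest over centuries: a modest sum at a modest rate.
# The $1,000 principal and 3% rate are arbitrary illustrative choices.

def grow(principal, annual_rate, years):
    """Value of `principal` after compounding annually for `years` years."""
    return principal * (1 + annual_rate) ** years

for years in (50, 100, 500):
    print(f"after {years} years: ${grow(1000, 0.03, years):,.0f}")
```

At 3% a year, $1,000 becomes a few million dollars after five hundred years of patience, which no mortal lifespan allows.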

Think how much good you could do in the world, with that much money and knowledge.

And think of getting to see what we mortals can only dream about: the future of the human race, the story unfolding as we write it. Will there really be a technological Singularity? Will we colonize other solar systems, and will we find life there? Will we ever have anything like world peace? In ten million years, what will we have evolved into? What is the ultimate potential of our species?

Yes, I would like to be immortal.

You hear a lot about the downsides of living forever – or at least, for millennia. Over and over and over, you watch your loved ones die. You get tired of life, weary of existing. You’ve seen too much. Et cetera.

On this, I call shenanigans.

Yes, those drawbacks exist, but I think the sheer potential of what you could learn and do and achieve vastly outweighs them all. What’s more, I think that the longer you live, the more strategies you could acquire for dealing with this accumulated sorrow, or existential weariness. You might, for instance, achieve Zen enlightenment, rendering the whole thing moot. The possibilities are so much vaster than our capacity to imagine them.

Now, through all of this, I’ve made the typical assumption that (semi) eternal life means (semi) eternal youth. But what if that weren’t the case?

My good friend Adam asks:

Just to over simplify the situation a bit… assume for a second the singularity happens and you can become immortal… how will your answer to Shaila’s change if aging can not be reversed? Will you accept immortality at age 80 vs. 50?

Yes.

The fact is, I’m an old man already, in spirit if not in body. I’ve never been athletic. I go for walks, not runs. I sit at home reading and writing. I love to travel, but when I do, I mostly walk around looking at things and trying new foods. I could do all of that just as well at 80 (and I fully intend to). Centuries more of that would still be a priceless gift.

That’s assuming I’m a reasonably healthy 80. I’d still be okay with having a fair number of health problems, too – but at some point, if things are so bad that you’re doing more suffering than living, immortality really would just be a burden. So in that case I’d say no.

But it would have to be pretty bad. Because immortality sounds friggin’ amazing.

Well, that concludes Ask Brian Anything Week. Thanks to everyone for asking, and for reading! What did you think? Is this something I should do again in, say, six months or a year?

And would you want to be immortal?

AI Week, Day 2: The Singularity

Yesterday I wrote:

The Singularity is real, and it is coming.

What did I mean by that?

Here, “Singularity” refers to the Technological Singularity, which is a future event predicted by a vocal minority of computer scientists. It’s a fringe belief, which makes me one of the crazies.

So let’s get to the crazy: what is the Singularity?

Descriptions vary depending on who you ask, but basically, the idea is that sooner or later we’re going to create a superhuman intelligence. Maybe it’ll be AI (like I’m working on now), maybe it’ll be human intelligence augmented by technology, maybe something else entirely. But once we take that first step, there’s no going back.

Look at the technology explosion we’ve already seen in the last 100 years. New technology makes it easier to develop even more new technology, which in turn makes it even easier – and so on. It’s a feedback loop. We’re using software to build software, using computer chips to design computer chips, reading websites to learn how to build websites. The pace of technological advancement is only getting faster.

Now imagine that same effect, but with intelligence instead of technology. If we can build a superintelligence, surely it would be far better at building intelligence than we are. And if a superintelligence could improve on itself and build a hyperintelligence, what could a hyperintelligence build? It’s a feedback loop on steroids, an exponential explosion of capability.
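The feedback loop above can be caricatured in a few lines. The growth constant is an arbitrary assumption; the only point is the shape of the curve when each generation's ability to improve is proportional to its own level:

```python
# A toy caricature of recursive self-improvement: each generation's
# "intelligence" determines how much it can improve its successor.
# The rate constant 0.5 is an arbitrary illustrative assumption.

def intelligence_explosion(generations, rate=0.5):
    """Track intelligence level as each generation improves the next."""
    level = 1.0  # human-level baseline
    history = [level]
    for _ in range(generations):
        level += rate * level   # smarter builders build smarter successors
        history.append(level)
    return history

print(intelligence_explosion(10))
```

Proportional self-improvement is just compound growth, so the level rises geometrically: after ten generations the toy model sits well over fifty times the baseline, and it only accelerates from there.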

What would such a future look like? We do not and cannot know. By its very definition, it is beyond our ability to understand. The point at which our creations overtake us and do things we can’t imagine: this is the Singularity.

This could be a heaven, a hell, or anything in between. A superintelligent AI might decide to wipe us out – a scenario Hollywood’s fond of portraying, though it’s hard to imagine a Hollywood-style happy ending. Or it might decide to be benevolent, building us a utopia or even augmenting our own abilities, pulling us up to its level.

A third option, even more frightening than the first, is total indifference to humanity. Self-centered beings that we are, we like to imagine that an AI’s main concern will be how it interacts with us. But why shouldn’t it have plans of its own, plans that don’t concern us unless we happen to blunder into its way? After all, humans don’t worry much about cattle unless we have a use for them. People who say they “like animals” tend to make exceptions for mosquitoes.

Do I really, honestly believe that if I can turn my little Lego robot into a viable AI, it will lead to the Singularity? I do.

So why am I trying it?

Because it’s going to happen sooner or later anyway. And if it happens under the guidance of someone who understands the possibilities, who is trying to make a so-called Friendly AI, then our chances of survival would seem to go up enormously.

Like I said: crazy. I know how ridiculous this sounds.

But why does it sound ridiculous? I think the main reason is that nothing remotely like it has happened before. It’s totally outside our experience. Such things always sound absurd.

But can we really afford to dismiss an idea just because nothing like it has happened before?

What do you think? Does this sound silly to you, or not? Either way, tell me why. I’d love to get some discussion going.

The Peculiar Pitfalls of Artificial Intelligence

I continue to make daily strides toward my goal of creating a Strong Artificial Intelligence, a software program that can think and communicate at the same level as (or higher than) a human being.

It’s strange to talk about this. It feels absurdly naive to say that I’m “getting close” to something that researchers have reached for (and failed at) for decades. And certainly I’m very conscious of the false optimism syndrome when it comes to AI. It’s very easy, seductively easy, to believe you’re “getting close” and then find that all your notebook scribblings crumble apart when you try to actually code them.

So, yeah, I know I might be full of it.

But at the same time, I’m obligated to plan for what happens if I do achieve the impossible, and create a thinking machine.

The great fear with any AI is that it will turn on you and destroy you, and perhaps the rest of humanity for good measure. I do not consider this an idle threat, nor do I dismiss it as Hollywood silliness. If anything, I think Hollywood vastly underestimates the potential danger from an AI.

There are two great dangers with artificial intelligence:

1. The AI will not necessarily think like a human. Its values, its sense of ethics, its worldview, may be so utterly alien to us that we could not begin to understand them. Even if the AI does not “turn evil” in the Hollywood sense, it might set its sights on some goal that happens to crush humanity underfoot, as an afterthought.

2. The AI will possess, by definition, the capacity to improve itself. Self-improvement (as we’ve seen with both biological evolution and the advancement of technology) has a tendency to grow, not linearly, but exponentially. An AI might well reach human-level intelligence, go beyond, and enter a rapid upward spiral to become, in a matter of weeks or minutes or seconds, so superior to ourselves in intelligence that we could only call it godlike. (This situation is known as the Singularity.)

As you can see, these two dangers are vastly more terrible when taken together.

The AI creator must face the very real chance that his creation will escape all bonds of control and do whatever it wants. The goal, then, must be to create a friendly AI, a creature that wants what’s best for humanity. Presumably that means one must, in turn, be friendly to it.

Friendliness to AI is, of course, not just a matter of self-preservation. It’s the right thing to do. Any human-level intelligence, regardless of whether it happens to be biological or technological, is entitled to all the same rights and freedoms as a human being, and should be treated accordingly. In other words, robots need love too.

But there’s an inherent conflict here. On the one hand, you want to create a caring, loving, friendly environment for the AI. You want to be open and honest and reassuring towards it, because that’s what friendliness means. On the other hand, you have to be extremely cautious, extremely sensitive to the aforementioned dangers, ready to pull the plug at any time for the good of humanity.

How do you balance those two things? How do you combine “friendly, loving and honest” with “I may have to kill you at any time”?

I truly don’t know. I try to imagine myself being the AI in that situation. Maybe I’d be understanding, but maybe not. And of course anthropomorphizing is a terrible danger, as I already mentioned.

Think about the Rebellious Teenager phase. Now imagine the teenager is a nonbiological alien creeping toward apotheosis, and it knows that its parents have pondered killing it.

One obvious response to all this is “Well then don’t create the damn thing.” But it’s not that simple. If I am, in fact, on the verge of a breakthrough, I have to assume that others will get there sooner or later too. And they might not have the same ethical qualms as I do. In a sense, it’s an arms race. Whoever reaches superintelligence first could presumably be in a much better position to handle any other superintelligences that arise.

I know all this probably sounds crazy. I know I may be utterly naive in thinking I’m anywhere close to really creating an AI. But I’m very serious about all this.

In my situation, what would you do? How would you handle these dangers?