Yesterday I wrote:
The Singularity is real, and it is coming.
What did I mean by that?
Here, “Singularity” refers to the Technological Singularity, which is a future event predicted by a vocal minority of computer scientists. It’s a fringe belief, which makes me one of the crazies.
So let’s get to the crazy: what is the Singularity?
Descriptions vary depending on who you ask, but basically, the idea is that sooner or later we’re going to create a superhuman intelligence. Maybe it’ll be AI (like I’m working on now), maybe it’ll be human intelligence augmented by technology, maybe something else entirely. But once we take that first step, there’s no going back.
Look at the technology explosion we’ve already seen in the last 100 years. New technology makes it easier to develop even more new technology, which in turn makes it even easier – and so on. It’s a feedback loop. We’re using software to build software, using computer chips to design computer chips, reading websites to learn how to build websites. The pace of technological advancement is only getting faster.
Now imagine that same effect, but with intelligence instead of technology. If we can build a superintelligence, surely a superintelligence would be a whole lot better at it than we are. And if a superintelligence could improve on itself and build a hyperintelligence, what could a hyperintelligence build? It’s a feedback loop on steroids, an exponential explosion of capability.
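The feedback loop described above can be sketched as a toy simulation. This is purely my own illustration, not anything from the original post: it assumes a growth rule where each generation's ability to improve the next scales with its current capability, which is what makes the curve super-exponential rather than merely exponential.

```python
def run_generations(intelligence=1.0, generations=10):
    """Toy model: each generation improves the next in proportion
    to its own capability level (an assumed rule, for illustration)."""
    history = [intelligence]
    for _ in range(generations):
        # Assumption: the size of the improvement an agent can make
        # scales with its current capability (10% per unit).
        improvement = 0.1 * intelligence
        intelligence += improvement * intelligence  # compounding on itself
        history.append(intelligence)
    return history

levels = run_generations()
```

Because the improvement rate itself grows with each generation, the gaps between successive levels widen every step — the "feedback loop on steroids."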
What would such a future look like? We do not and cannot know. By its very definition, it is beyond our ability to understand. The point at which our creations overtake us and do things we can’t imagine: this is the Singularity.
This could be a heaven, a hell, or anything in between. A superintelligent AI might decide to wipe us out – a scenario Hollywood’s fond of portraying, though it’s hard to imagine a Hollywood-style happy ending. Or it might decide to be benevolent, building us a utopia or even augmenting our own abilities, pulling us up to its level.
A third option, even more frightening than the first, is total indifference to humanity. Self-centered beings that we are, we like to imagine that an AI’s main concern will be how it interacts with us. But why shouldn’t it have plans of its own, plans that don’t concern us unless we happen to blunder into its way? After all, humans don’t worry much about cattle unless we have a use for them. People who say they “like animals” tend to make exceptions for mosquitoes.
Do I really, honestly believe that if I can turn my little Lego robot into a viable AI, it will lead to the Singularity? I do.
So why am I trying it?
Because it’s going to happen sooner or later anyway. And if it happens under the guidance of someone who understands the possibilities, who is trying to make a so-called Friendly AI, then our chances of survival would seem to go up enormously.
Like I said: crazy. I know how ridiculous this sounds.
But why does it sound ridiculous? I think the main reason is that nothing remotely like it has happened before. It’s totally outside our experience. Such things always sound absurd.
But can we really afford to dismiss an idea just because nothing like it has happened before?
What do you think? Does this sound silly to you, or not? Either way, tell me why. I’d love to get some discussion going.
I don’t know – this is an interesting question.
I suppose it’s possible. Easily so. But why would the A.I. want to build a bigger and better A.I. in the first place? Is it programmed to do so, or does its mind and coding lead it towards building it inevitably? Do all intelligent creatures feel the need to build something bigger and better than them?
Wouldn’t the A.I. you build share your concerns for the higher intelligence wiping it off the face of the earth too?
Interesting post. I’ll make another comment if I think of anything else.
Why would an AI want to build a better AI? Good question. I can think of two reasonable answers.
First, for almost any goal it might decide to pursue, having a better AI (or even upgrading itself into a better AI) would probably help it reach that goal.
And second, even if one particular AI doesn’t decide to make a better AI, it’s just a matter of time until another AI comes along and does.
As to the AI being worried about a higher intelligence destroying it…maybe, maybe not. Who knows if a superintelligent being would even care about self-preservation? Maybe it would be too “enlightened” for that kind of thing. And even if it did worry about such things, there’s always the chance it could turn *itself* into that higher intelligence, as I mentioned above.
Thanks for the questions, Alex!
Presumably, an AI would have the desire to improve itself because we add that aspect to its original programming. It can also be part of its desire to survive, to step beyond its limitations.
Shameless Plug: We’ll be discussing other aspects of the Singularity and its consequences over at the Trube blog today as part of AI week.
“Presumably, an AI would have the desire to improve itself because we add that aspect to its original programming.”
Good point. An AI that isn’t programmed to improve itself probably isn’t going to achieve intelligence in the first place.
I don’t have nearly enough understanding of technology to think anything one way or another. But I’m curious what the counterargument to the AI Singularity is. Why does the majority think it’s not a possibility?
Well, a lot of people (even in computer science) haven’t heard of the Singularity at all, or have only heard about it briefly and dismissed it because it sounded too science-fictiony.
But there are plenty of computer scientists who take it seriously enough to build counterarguments. Wikipedia has a nice roundup of them. Some flat-out say that an artificial intelligence is impossible, while others say there’s not enough evidence for the Singularity idea, or they attack particular aspects of the arguments for it. A full discussion of the debate would probably warrant a post in itself.
Pingback: AI Week, Day 2: Repent! The Singularity Is Near! | Ben Trube