I continue to make daily strides toward my goal of creating a Strong Artificial Intelligence, a software program that can think and communicate at the same level as (or higher than) a human being.
It’s strange to talk about this. It feels absurdly naive to say that I’m “getting close” to something that researchers have reached for (and failed at) for decades. And certainly I’m very conscious of the false optimism syndrome when it comes to AI. It’s very easy, seductively easy, to believe you’re “getting close” and then find that all your notebook scribblings crumble apart when you try to actually code them.
So, yeah, I know I might be full of it.
But at the same time, I’m obligated to plan for what happens if I do achieve the impossible, and create a thinking machine.
The great fear with any AI is that it will turn on you and destroy you, and perhaps the rest of humanity for good measure. I do not consider this an idle threat, nor do I dismiss it as Hollywood silliness. If anything, I think Hollywood vastly underestimates the potential danger from an AI.
There are two great dangers with artificial intelligence:
1. The AI will not necessarily think like a human. Its values, its sense of ethics, its worldview, may be so utterly alien to us that we could not begin to understand them. Even if the AI does not “turn evil” in the Hollywood sense, it might set its sights on some goal that happens to crush humanity underfoot, as an afterthought.
2. The AI will possess, by definition, the capacity to improve itself. Self-improvement (as we’ve seen with both biological evolution and the advancement of technology) has a tendency to grow, not linearly, but exponentially. An AI might well reach human-level intelligence, go beyond, and enter a rapid upward spiral to become, in a matter of weeks or minutes or seconds, so superior to ourselves in intelligence that we could only call it godlike. (This situation is known as the Singularity.)
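To make the second danger concrete, here's a purely illustrative toy model (the numbers are made up, not a claim about real AI): linear improvement adds a fixed amount of capability each step, while recursive self-improvement compounds, because a smarter system gets better at improving itself. The gap between the two becomes enormous after only a handful of steps.

```python
# Toy comparison of linear improvement vs. recursive self-improvement.
# All parameters are arbitrary; this only illustrates the shape of the growth.

def linear_growth(capability: float, steps: int, gain: float = 1.0) -> float:
    """A fixed amount of improvement per step (e.g., human engineers tinkering)."""
    for _ in range(steps):
        capability += gain
    return capability

def recursive_growth(capability: float, steps: int, rate: float = 0.5) -> float:
    """Each step's improvement is proportional to current capability:
    the smarter the system, the faster its next round of improvement."""
    for _ in range(steps):
        capability += rate * capability
    return capability

print(linear_growth(1.0, 20))     # 21.0
print(recursive_growth(1.0, 20))  # ~3325 -- exponential blow-up
```

Twenty steps of linear improvement yield a 21x system; the same twenty steps of compounding improvement yield a system over three thousand times its starting capability, which is the intuition behind the "rapid upward spiral" above.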
As you can see, these two dangers are vastly more terrible when taken together.
The AI creator must face the very real chance that his creation will escape all bonds of control and do whatever it wants. The goal, then, must be to create a friendly AI, a creature that wants what’s best for humanity. Presumably that means one must, in turn, be friendly to it.
Friendliness to AI is, of course, not just a matter of self-preservation. It’s the right thing to do. Any human-level intelligence, regardless of whether it happens to be biological or technological, is entitled to all the same rights and freedoms as a human being, and should be treated accordingly. In other words, robots need love too.
But there’s an inherent conflict here. On the one hand, you want to create a caring, loving, friendly environment for the AI. You want to be open and honest and reassuring towards it, because that’s what friendliness means. On the other hand, you have to be extremely cautious, extremely sensitive to the aforementioned dangers, ready to pull the plug at any time for the good of humanity.
How do you balance those two things? How do you combine “friendly, loving and honest” with “I may have to kill you at any time”?
I truly don’t know. I try to imagine myself being the AI in that situation. Maybe I’d be understanding, but maybe not. And of course anthropomorphizing is a terrible danger, as I already mentioned.
Think about the Rebellious Teenager phase. Now imagine the teenager is a nonbiological alien creeping toward apotheosis, and it knows that its parents have pondered killing it.
One obvious response to all this is “Well then don’t create the damn thing.” But it’s not that simple. If I am, in fact, on the verge of a breakthrough, I have to assume that others will get there sooner or later too. And they might not have the same ethical qualms as I do. In a sense, it’s an arms race. Whoever reaches superintelligence first could presumably be in a much better position to handle any other superintelligences that arise.
I know all this probably sounds crazy. I know I may be utterly naive in thinking I’m anywhere close to really creating an AI. But I’m very serious about all this.
In my situation, what would you do? How would you handle these dangers?
Personally, I’d give great consideration to the question: how do I know if I have succeeded? And, just as important, how do I know if I haven’t?
You don’t really detail your project, but I assume you’re working with some sort of text-in/text-out setup.
What would you describe as the sensory environment of your AI?
What can it actually affect?
I say have at it and make sure you have a snappy line for the first interspecies communication.
For the sensory environment, I would like to give it the ability to process visual and auditory input – eyes and ears, in other words. At first I thought just text, but gradually I realized that it’s hard (impossible?) to form meanings for words if you have no senses to tie the words to.
The condition for success is the Turing test.
I think the Turing test is too harsh for a definition of successful AI. It’s more a test of natural language processing.
That’s why I asked about the sensory environment.
A piece of software could be entirely sentient and self-aware but easily distinguishable from a human, simply because it has an alien experience set and (as you noted) different ethics and values.
I watched a documentary about orca training where the trainers got very excited because the orca put two of its vocabulary words together in a way it had never been taught, but which clearly expressed its intent.
“Take Ball” I think it was… meaning I don’t want to play your stupid ball game right now.
Correct, the Turing Test is a sufficient but not necessary condition for intelligence. However, it’s the closest thing I have to a formal definition at the moment. In practice, I think that intelligence is something I’ll know when I see it.
Perhaps the first models with AI shouldn’t have mobility. Or connectivity (i.e., the Internet). That ought to limit it enough to not be much of a threat, at least until we can work out some of the nuances of the questions that you pose. Because I don’t have any answers, but I recognize the importance of the discussion.
Limiting connectivity and mobility is a very good suggestion. Unfortunately, you run into several tricky situations that way:
1. A superintelligent AI might well be able to achieve connectivity and mobility in very subtle ways that we can’t even imagine because we’re simply not smart enough.
2. A superintelligent AI might well be able to convince a human, even a human committed to keeping it contained, that it must be set free. It sounds counterintuitive, but some experimental evidence suggests this is so.
3. An AI might well come to resent being “boxed up.” Resentment could have its dangers, too.
I don’t have the answers either…
You should totally feed it Microchips. I hear those AIs are hungry. First you’ll have to give it a face and then HUNGER!!!
AI’s first request: “Make me a face.”
AI’s second request: “FEED IT”
Interesting questions! I would have to say that those two things are so opposed to each other that they would be very very difficult for the human mind to balance. And, if it had internet connectivity, would it be able to hack into other random computers around the globe to increase its power?
That might be something you should avoid.
Anyway, I think maybe if you created a nice little home for it in your (Insert workplace here) and treated it kindly and respectfully, but also made sure it didn’t think of itself as too powerful and let it know there really were cruel people out there… well, maybe it would work.
If you do manage to create a real AI, you might also need a LOT of memory chips, and where would you store those? (Assuming you want it to be able to remember things.)
Anyway, good luck on the AI. Make sure it doesn’t think of itself as better than everyone else, because nothing good ever comes of that.
All I can say is be very careful.
RE: the memory chips, I plan to use the computer’s built-in memory (hard drive and RAM).
I think creating an AI that advanced should be done in a team of two.
One will be its creator and friend, whose only intention is to help it grow and be the best it can be, without a single thought about restrictions or about keeping it happy so it won’t kill anyone; he keeps it happy for the sake of keeping it happy. In other words, he is its friend.
The other builds the kill switch and figures out how to keep the AI in check if things don’t go as planned. He should be kept completely away from the friendly guy, so that both the friendly guy and the AI are clueless about his existence and his plans.
Of course, if the AI is intelligent enough, it’d probably see through all of this easily. So…
I don’t know lol
That’s interesting, I hadn’t thought of that. Of course, if the person in the Friend role knew about the other person, he’d have to either tell the AI, or be dishonest (which isn’t especially friendly).
Still, it’s a cool idea. I’ll think about it. Thanks.
I’m sure the military-industrial complex has already accomplished AI and other things in the dark recesses of secret bases.