In the past week I’ve been working on designing an artificial intelligence. Now, normally when people say they’re working on AI, they mean some particular AI-ish sub-problem, like text-to-speech or chess-playing or facial recognition. In typically quixotic fashion, however, I’m going straight for the top: a Strong AI, a software program with human-level intelligence.
This isn’t the first time I’ve worked on this problem. Strong AI is an appealing project to me for a lot of reasons. For one thing, it blends nearly all my interests: computer programming, language, philosophy, theories of consciousness, ethics, even Go. It’s also deceptively simple; it just seems like it shouldn’t be that hard, though of course nobody’s ever done it. And because nobody has, nobody really has any idea how to do it, which means the field is wide open. All very exciting, provided you’re geeky enough. (Check.)
Part of me says I’ll never get anywhere with this stuff, but another part can’t help planning for what happens if I succeed. I take very seriously the theory of the Technological Singularity – the idea that machine intelligence, after reaching a certain critical threshold, might grow exponentially and leave humans so far in the dust that all we could do would be to hope it doesn’t squish us. I’ve got some ideas on how to deal with that, too.
I’m running out of time this morning, so I’ll cut this short. Let me ask, have you ever worked on an AI? Or, if someone else built an AI for you, what would you do with it?