Four days ago, I joined SingularityHub.com, a site with news, discussion, and videos related to the Technological Singularity, the so-called “Rapture of the Nerds.” To become a member of this site, you don’t need to be an AI researcher, a neuroscientist, a billionaire investor, or anything like that.
You just need to be, well, a fan.
I’ve been exploring this corner of the Interwebs lately, and an odd little corner it is. The heavyweight is the Singularity University, a surprisingly well-connected group funded by the likes of Google, Cisco, Nokia, and Autodesk (creator of AutoCAD).
And there’s the 2045 Initiative, a group founded by Russian billionaire Dmitry Itskov, dedicated to “the transfer of an individual’s personality to a more advanced non-biological carrier, and extending life, including to the point of immortality.” Project deadline: 2045.
Mega-projects aside, you’ve got blogs like Accelerating Future, Singularity Weblog, and Transhumanity.net.
And, of course, Singularity thinkers like Ray Kurzweil and Eliezer Yudkowsky have their own online presence. Kurzweil, incidentally, was hired by Google a few months ago, his first time working for a company he didn’t create.
The Singularity research/enthusiast community is, as I said, a strange little group. Websites are a mix of real news about promising present-day tech, debates about philosophy and spirituality and robotics, and bona fide major efforts to bring this vision of the future a little closer to reality.
The common link across this group is that people really believe. They know it sounds crazy, but then, the truth often does.
What do I think about all this?
Well, as I wrote last year, I believe the Singularity is real, and I believe it is coming. Maybe not in our lifetimes, but it’s coming. I am very much a part of the strange little group. I honestly think it’s a real possibility that some human beings alive today will live to see their one millionth birthday.
I, too, am conscious of how ridiculous this sounds. I know the Internet is teeming with these fringe micro-groups that feed on each other’s delusions until they’re convinced that their tiny groupthink vision is a prophecy for all mankind. I get that.
A billion years ago, multi-celled organisms were a novelty. A million years ago, there was no such thing as language. A thousand years ago, electricity was nothing more than an angry flash in the sky. A hundred years ago, the whole idea of airplanes was still strange and new. Ten years ago, smartphones were only for the early adopters.
Today, telekinesis is real. Lockheed Martin has a quantum computer. And Moore’s Law, despite constant predictions otherwise, hasn’t failed us in forty years.
Am I really supposed to look at all that, and not believe we’re headed toward something?
You can build a car with as many features as you like; it won’t make it intelligent. You can build a robot with as many features as you like; it won’t make it intelligent (e.g., ASIMO). 99.9% of AI research is pointless, because it tries to enforce our construct of what intelligence is. The more we control, the further away we get. Monkeys at keyboards bashing out random opcodes have a greater chance of coding true AI, but we’d kill the program or it’d run out of memory before it learned to do anything resembling intelligence to us. Nobody can be “closer” to developing AI; you either have invented it or you haven’t… there’s no in between. Just my opinion.
PS. I consider myself a believer, which is why I’m continually baffled by big budget AI projects which recreate the same problems over and over again.
I agree that a lot of AI research is window dressing that doesn’t get to the heart of the question (Strong AI). I don’t agree, though, that a program is either intelligent or not, with no in between. There are certainly different degrees of intelligence. People are smarter than chimpanzees, which are smarter than rats, which are smarter than fruit flies.
Thanks for the comment!
Point taken. Whilst the intelligence of a chimpanzee is notable, it is not the sort of reflective, abstract thought process which is going to make a significant difference to the human race. In effect (and no offense intended to chimpanzees), animal intelligence is a flawed model of intelligence because it has no way to expand its intellectual capability. A useful AI model would be constantly improving upon its own intelligence.
There are various scenarios for how the Singularity will arrive.