Tag Archives: Artificial Intelligence

In about a decade

Having recently finished reading Other Minds (good book), I’ve embarked on a book called Superintelligence: Paths, Dangers, Strategies, by Nick Bostrom. As you’d guess from the title, it’s about what to do in the increasingly likely event that we create an AI that is much, much smarter than we are.

I’m still at the beginning, but so far it’s utterly fascinating. Usually when I talk to people about an AI exponential intelligence explosion, the whole conversation turns into a debate over whether that’s likely to happen at all. With this author, I feel like I’m finally talking (or rather, listening) to someone who really gets it, who’s gotten past the “Will it happen?” question and is ready to talk about “What happens when it (probably) does?”

One bit in particular struck me. He has a section on games — which ones AI can beat humans at, which ones they can’t. He notes that AI now performs at superhuman levels (beating even world champions) at checkers, chess, and Scrabble, among others. Regarding the game of Go, he says the best AI is currently “very strong amateur level” but advancing steadily, and makes this prediction:

If this rate of improvement continues, [AI] might beat the world champion in about a decade.

The book was published three years ago, in 2014.

A Go-playing AI beat the world champion four months ago.

Wherever AI is headed, it’s going to be a wild ride.


AI: Revelations


For a long time, my AI strategy has been:

  • First, figure out the AI’s knowledge structure – the way knowledge is stored inside its mind. You’d think this would be easy, but the problem of knowledge representation turns out to be nontrivial (much like the Pacific Ocean turns out to be non-dry).
  • Once I know how to represent knowledge, I will begin work on knowledge acquisition, or learning.

To me, this order made sense. A mind must have a framework for storing information before you can help it learn new information.


Well, for the past week, I’ve tackled the problem from the opposite direction. I’ve pushed aside my 5,000+ lines of old code (for the moment) and started from scratch, building an algorithm that’s focused on learning.

The result is a little program (less than 200 lines long) that reads in a text file and searches for patterns, with no preconceived notions about what constitutes a word, a punctuation mark, a consonant, or a vowel.


This AI-in-training makes short work of Hamlet, plowing through the Bard’s masterpiece in about ten seconds. The result is a meticulous, stats-based tree of patterns. I can examine any particular branch that starts with any letter or letters I like.

Here I’m looking at all the patterns it found that start with “t”:

[screenshot: Python output]

The full list is much longer, but already you can see it’s picked up some interesting patterns. It’s noticed “the” and “there”, and it’s noticed that both are often followed by a space. It’s even started picking out which letters most commonly start the next word. And it’s noticed a pattern of words ending in “ther”, presumably “mother”, “father”, “together”, “rather”, and their kin.
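The core idea is easy to sketch in Python. To be clear, this is not the actual program described above, just a minimal illustration of my own: count every short substring in the raw text, keep the frequent ones, and let patterns like “the” and “ther” emerge with no built-in notion of words or punctuation. The function name and parameters are invented for the example.

```python
from collections import Counter

def find_patterns(text, max_len=6, min_count=3):
    """Count every substring up to max_len characters long, keeping only
    those that recur at least min_count times. No notion of 'word',
    'vowel', or 'punctuation' is built in; patterns emerge from raw text."""
    counts = Counter(
        text[i:i + n]
        for n in range(1, max_len + 1)
        for i in range(len(text) - n + 1)
    )
    return {s: c for s, c in counts.items() if c >= min_count}

sample = "the mother and the father gathered there together"
patterns = find_patterns(sample, max_len=4)

# Examine the branch of patterns starting with "t" -- note that
# "the" and "ther" both show up, just as described above.
t_patterns = {s: c for s, c in patterns.items() if s.startswith("t")}
```

Run on a full text like Hamlet, the same counting scheme yields the kind of stats-based pattern tree shown in the screenshot, at the cost of a larger `max_len` and more memory.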

This algorithm is cool, but rather limited at the moment. It can notice correlations between letters and fairly simple strings, but it doesn’t do well with more complex patterns. I won’t bore you with the details, but rest assured I’m working on it.

In the meantime, AI is fun again. I mean, it was mostly fun before, but I was entering a dry spell where the work had started feeling like a chore. Every now and then, a fresh perspective helps get you excited again.

In this case, it also showed me that I had my strategy backwards. Just as you can’t build an ontology and weld on input/output later, it turns out likewise that you can’t build an ontology and weld on learning. Learning, it appears, must come first. The how determines the what.

And this new direction is fun for another reason, too.

Till recently, I’d been coding in C++. Now C++ is a white-bearded, venerable patriarch of a language: time-tested, powerful, respected by all. But it’s also a grumpy old man who complains mightily about syntax and insists you spend hours telling it exactly what you want to do.

This new stuff, on the other hand, I’m coding in Python. Python is a newer language, not as bare-bones efficient as C++ but a hell of a lot simpler from the programmer’s point of view, shiny and full of features and full of magic. And I’m new to Python myself, so I’m still in the honeymoon phase. I’m not saying one or the other is “better” overall, but right now, Python is a lot more fun.

And really, programming ought to be fun. Especially if you’re building a mind.

Lessons I’ve Learned as an AI Developer

I’ve written before about some of my AI principles of design, as well as one of the deep secrets of artificial intelligence. I’ve even explored what Nietzsche can tell us about AI.

Here are a few more humble insights from my few years in the trenches. Like the other posts, these are just based on my own experience, so take with a large grain of sodium chloride:

1. Finding a strategy for Friendly AI is crucial to surviving the Singularity, but I’m skeptical of Friendly AI based on a single overarching goal. Eliezer Yudkowsky has written a lot about the pitfalls of giving an AI a top-level goal like “maximize human happiness,” because if you define happiness as the number of times someone smiles, we could all end up with plastic surgery and permanent Joker faces – or worse. I agree, but I go a step further. I think any pre-defined high-level goal is bound to take you somewhere unpredictable (and probably bad). I think a Strong AI, like a child, has to be instructed by example: rewarding “friendly” behavior, punishing “unfriendly” behavior, as it occurs.

2. Like David Hume, I believe reason is grounded in experience. I’m doubtful that a top-down ontology (such as Cyc) can be built like a castle in the air, then “grounded” later on a robotic foundation. Cyc is an ambitious and very cool project, and I respect what they’re doing. But I don’t think it’s on the path to a Strong AI.

3. Our minds work by trial and error. We focus more on “what works” than “what’s true.” It’s hard for me to see how a Strong AI could be built from a formalized, truth-based system that starts with axioms and derives conclusions in a logically airtight way. Listing my reasons for this belief would require a whole separate post; let’s just call it an instinct for now.

What do you think? Agree, disagree? Questions?

Isomorphism and AI

In yesterday’s post I explained group isomorphism, which points out a deep symmetry between adding and multiplying. I also showed how the natural log function could be used to map between the two operations.

But the idea of isomorphism applies to lots of things beyond math. Think about language. After all, what is language but an isomorphism between concepts and words?

“The cat is black.” An AI could parse this sentence and decide there’s a noun-adjective relationship between “cat” and “black.” So instead of:

5 × 3

we have:

“cat” (noun-adjective relationship) “black”

To be meaningful, the words and their relationship must map to their corresponding concepts. So instead of:

ln(5) + ln(3)

we have:

cat (has-property relationship) black

And we also need a function to map from the words to the concepts. So instead of:

ln(5) = 1.609438

ln(3) = 1.098612

we have:

MeaningOf(“cat”) = cat

MeaningOf(“black”) = black
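The whole analogy fits in a few lines of runnable Python. The math half is exactly the ln mapping above; the language half is a toy of my own invention, where both the “parser” and the MeaningOf table are hard-coded placeholders, not a real NLP system.

```python
import math

# The mathematical isomorphism: ln maps multiplication onto addition.
assert math.isclose(math.log(5 * 3), math.log(5) + math.log(3))

# The linguistic analogue (toy version -- every name here is illustrative):
MEANING_OF = {"cat": "cat-concept", "black": "black-concept"}

def parse(sentence):
    # A hard-coded "parser" that only handles the pattern "The X is Y."
    _, noun, _, adjective = sentence.rstrip(".").split()
    return (noun, "noun-adjective", adjective)

def interpret(sentence):
    # Map word-level structure to concept-level structure, MeaningOf-style.
    noun, _, adjective = parse(sentence)
    return (MEANING_OF[noun], "has-property", MEANING_OF[adjective])

interpret("The cat is black.")
# → ("cat-concept", "has-property", "black-concept")
```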

All very nice and neat, in this example. But of course, if language were really that easy, we’d have built a strong AI decades ago. It turns out that conceptual isomorphism can be a hell of a lot more complicated than mathematical group isomorphism. For instance…

1. Mathematical group operations (like addition and multiplication) only take two inputs (the two numbers you’re adding or multiplying). But conceptual relationships can take any number of inputs. How many adjectives could we attach to the single noun “cat”?

2. In mathematical groups, there’s a clear distinction between elements (the numbers) and operations (addition, multiplication). But with conceptual relationships, the difference gets blurry. Let’s say cat has a likes relationship with milk, and a hates relationship with bath. But we also know that likes has an is opposite relationship with hates. So now we have relationships, not only between “things,” but between other relationships.

3. In our math example, our mapping function was the natural log, ln(x). Now ln(x) is a neat, precise, clearly-defined function, which takes exactly one input and gives exactly one output. Does language work that way? Ha! Imagine trying to evaluate MeaningOf(“run”). That can mean jogging, or a home run in a baseball game, or a tear in a stocking, or “run” as in “I’ve run out of milk,” or, or, or… What’s worse, these meanings aren’t independent, but have all sorts of relationships to each other; nor are they all equally likely; and the likelihood depends on the context of the word; and the way it depends on context can change over time; and the list of possible meanings can expand or shrink; and the mechanisms by which this occurs are not fully understood…
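That one-to-many messiness can be made concrete. In the sketch below, MeaningOf returns a probability distribution over senses rather than a single value, and context shifts the weights. The sense inventory and the numbers are pure invention for illustration; a real system would have to learn them.

```python
# Invented sense inventory and weights, purely for illustration.
SENSES_OF_RUN = {"jog": 0.50, "baseball run": 0.10,
                 "stocking tear": 0.05, "deplete": 0.35}

def meaning_of(word, context=()):
    """Return a distribution over senses, not a single meaning."""
    if word != "run":
        return {word: 1.0}
    weights = dict(SENSES_OF_RUN)
    if "out" in context:  # "run out of milk" points at the "deplete" sense
        weights["deplete"] += 1.0
    total = sum(weights.values())
    return {sense: w / total for sense, w in weights.items()}

dist = meaning_of("run", context=("out", "of", "milk"))
likely = max(dist, key=dist.get)  # "deplete"
```

Even this toy only captures two of the complications above; the way the weights themselves drift over time, and the way senses appear and vanish, is left as an exercise for the next few decades of research.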

So, yeah. It gets complicated. But then, that’s why it’s so much fun.

Now we know how conceptual isomorphism (in AI) is like group isomorphism (in math). We’ve even established – dare I say it? – an isomorphism between the two isomorphisms. And now I’m going to stop saying “isomorphism” for a while.


How the HexBug Comes Alive



This is the HexBug Nano. I picked it up Friday from Hobby Lobby on a whim.

Sure, it looks simple. But it was only ten bucks, and I’m a sucker for robots. So I bought one.

I’m glad I did.



As it turns out, the creature really is simple. No assembly, no setup, no way to control it. There’s an on/off switch. That’s it.

What’s more, the robot has no moving parts except a buzzing, vibrating motor in its belly.

That vibration is all it can do. The legs are just rubber attached to the body. There’s no mechanical control there. It doesn’t even have any sensors.



So what can a toy that simple possibly do?

Just watch:

Watch how it skitters across the floor with a slight back-and-forth motion, as if hunting for food. Watch how it seems to avoid walls. Watch how, when I flip it over, it thrashes around till it’s upright again.

Two things about this.

First, it’s an ingenious piece of engineering. It may look simple, but getting the precise shape of the legs to keep it moving forward – the angled head so it turns when it hits a wall – the shape of the back so it flips over when necessary – that represents hundreds of man-hours of design work.

Second, for all its clever craftsmanship, it’s still orders of magnitude less sophisticated than a real insect. A real insect can seek food, evade predators, adapt to its environment, mate, reproduce, and a thousand other things. The HexBug Nano does none of that. And yet, when I watch it in motion, my brain says: That’s a bug!


Our brains love imparting life and agency to everything we see. If it moves, it’s alive. If it moves unexpectedly, it’s thinking. That’s how we process the world around us.

This human tendency to bestow life on the unliving is a blessing and a curse for AI developers. A blessing, because it makes even simple features seem impressive, at least initially. A curse, because we can easily fool ourselves into thinking a piece of software – or even hardware – is smarter than it really is. We always have to be vigilant against that.

Fortunately, the HexBug isn’t SkyNet just yet.

AI Status Update

I was up till midnight working on it, so I’ll keep this brief:

  • The agent processing algorithm is much faster now.
  • I cleaned up the code, getting rid of a bunch of clutter and making the logic more elegant.
  • Sequence recognition now works to any depth. For instance: a word is a sequence of letters, a phrase is a sequence of words, a sentence is a sequence of phrases. Recognizing deeply “nested” sequences like this turns out to be a lot more complicated than you might expect. My previous algorithm was supposed to handle it, but it was convoluted and buggy. The new version is cleaner and (so far) works 100%.
  • The AI can recognize certain parts of speech (nouns, adjectives, verbs) and construct grammatical relationships between them (e.g. this adjective modifies this noun). It can then use other agents to map the grammar structure (adjective “black” modifies noun “dog”) to a semantic structure (entity “dog” has property “black”). So far this is still in its early stages, but the foundation for building it up is solid.
  • Text-to-speech is fully implemented by calling a free online text-to-speech API. Before, it could type; now, it can talk. Click here for a sample of what its voice sounds like. That’s not a static mp3 file. It’s generated dynamically based on the text you give it.
  • I gave it the ability to load websites on command. So I can type “google” and it can bring up the Google web page. Obviously that’s nothing special in itself, but it’s significant because website-loading is now fully integrated with its other capabilities. For the AI, loading a site is like flexing a muscle.
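For the curious, the core idea behind depth-agnostic sequence recognition can be sketched with a single grouping step applied at each level. This is my own minimal illustration of the “a word is a sequence of letters, a phrase is a sequence of words” idea, not the agent code described above:

```python
def group(units, delimiter):
    """Split a flat sequence of units into sub-sequences at each delimiter.
    The same function works at every depth: chars -> words, words -> phrases,
    and so on, which is what makes the recognition depth-agnostic."""
    groups, current = [], []
    for unit in units:
        if unit == delimiter:
            if current:
                groups.append(current)
            current = []
        else:
            current.append(unit)
    if current:
        groups.append(current)
    return groups

text = "the black dog runs. the cat sleeps."
sentences = group(list(text), ".")           # two character sequences
words = [group(s, " ") for s in sentences]   # words nested inside sentences
```

The real difficulty, of course, is that actual sequences don’t come with clean delimiters, which is where the statistical pattern-finding has to take over.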

Gotta run. If you have any questions, just ask!

The Singularity Club


Four days ago, I joined SingularityHub.com, a site with news, discussion, and videos related to the Technological Singularity, the so-called “Rapture of the Nerds.” To become a member of this site, you don’t need to be an AI researcher, a neuroscientist, a billionaire investor, anything like that.

You just need to be, well, a fan.

I’ve been exploring this corner of the Interwebs lately, and an odd little corner it is. The heavyweight is the Singularity University, a surprisingly well-connected group funded by the likes of Google, Cisco, Nokia, and Autodesk (creator of AutoCAD).

And there’s the 2045 Initiative, a group founded by Russian billionaire Dmitry Itskov, dedicated to “the transfer of an individual’s personality to a more advanced non-biological carrier, and extending life, including to the point of immortality.” Project deadline: 2045.

Mega-projects aside, you’ve got blogs like Accelerating Future, Singularity Weblog, and Transhumanity.net.

And, of course, Singularity thinkers like Ray Kurzweil and Eliezer Yudkowsky have their own online presence. Kurzweil, incidentally, was hired by Google a few months ago. His first time working for a company he didn’t create.

The Singularity research/enthusiast community is, as I said, a strange little group. Websites are a mix of real news about promising present-day tech, debates about philosophy and spirituality and robotics, and bona fide major efforts to bring this vision of the future a little closer to reality.

The common link across all these groups is that people really believe. They know it sounds crazy, but then, the truth often does.

What do I think about all this?

Well, as I wrote last year, I believe the Singularity is real, and I believe it is coming. Maybe not in our lifetimes, but it’s coming. I am very much a part of the strange little group. I honestly think it’s a real possibility that some human beings alive today will live to see their one millionth birthday.

I, too, am conscious of how ridiculous this sounds. I know the Internet is teeming with these fringe micro-groups that feed on each other’s delusions until they’re convinced that their tiny groupthink vision is a prophecy for all mankind. I get that.

And yet.

A billion years ago, multi-celled organisms were a novelty. A million years ago, there was no such thing as language. A thousand years ago, electricity was nothing more than an angry flash in the sky. A hundred years ago, the whole idea of airplanes was still strange and new. Ten years ago, smartphones were only for the early adopters.

Today, telekinesis is real. Lockheed Martin has a quantum computer. And Moore’s Law, despite constant predictions otherwise, hasn’t failed us in forty years.

Am I really supposed to look at all that, and not believe we’re headed toward something?