Tag Archives: Artificial Intelligence

In about a decade

Having recently finished reading Other Minds (good book), I’ve embarked on a book called Superintelligence: Paths, Dangers, Strategies, by Nick Bostrom. As you’d guess from the title, it’s about what to do in the increasingly likely event that we create an AI that is much, much smarter than we are.

I’m still at the beginning, but so far it’s utterly fascinating. Usually when I talk to people about an AI exponential intelligence explosion, the whole conversation turns into a debate over whether that’s likely to happen at all. With this author, I feel like I’m finally talking (or rather, listening) to someone who really gets it, who’s gotten past the “Will it happen?” question and is ready to talk about “What happens when it (probably) does?”

One bit in particular struck me. He has a section on games — which ones AI can beat humans at, which ones they can’t. He notes that AI now performs at superhuman levels (beating even world champions) at checkers, chess, and Scrabble, among others. Regarding the game of Go, he says the best AI is currently “very strong amateur level” but advancing steadily, and makes this prediction:

If this rate of improvement continues, [AI] might beat the world champion in about a decade.

The book was published three years ago, in 2014.

A Go-playing AI beat the world champion four months ago.

Wherever AI is headed, it’s going to be a wild ride.

AI: Revelations

For a long time, my AI strategy has been:

  • First, figure out the AI’s knowledge structure – the way knowledge is stored inside its mind. You’d think this would be easy, but the problem of knowledge representation turns out to be nontrivial (much like the Pacific Ocean turns out to be non-dry).
  • Once I know how to represent knowledge, I will begin work on knowledge acquisition, or learning.

To me, this order made sense. A mind must have a framework for storing information before you can help it learn new information.

Right?

Well, for the past week, I’ve tackled the problem from the opposite direction. I’ve pushed aside my 5,000+ lines of old code (for the moment) and started from scratch, building an algorithm that’s focused on learning.

The result is a little program (less than 200 lines long) that reads in a text file and searches for patterns, with no preconceived notions about what constitutes a word, a punctuation mark, a consonant, or a vowel. For instance:

[screenshot: the corpus text file]

This AI-in-training makes short work of Hamlet, plowing through the Bard’s masterpiece in about ten seconds. The result is a meticulous, stats-based tree of patterns. I can examine any particular branch that starts with any letter or letters I like.

Here I’m looking at all the patterns it found that start with “t”:

[screenshot: Python output listing the patterns that start with “t”]

The full list is much longer, but already you can see it’s picked up some interesting patterns. It’s noticed “the” and “there”, and it’s noticed that both are often followed by a space. It’s even started picking out which letters most commonly start the next word. And it’s noticed a pattern of words ending in “ther”, presumably “mother”, “father”, “together”, “rather”, and their kin.
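If you’re curious what that kind of pattern hunt looks like in code, here’s a rough sketch of the general idea (not the actual program): count every short substring in the file, with no notion of words or punctuation, then ask which patterns share a given prefix. The file name and the depth limit below are just placeholders.

# Rough sketch of a frequency-based pattern search (not the real program).
# It counts every substring up to MAX_LEN characters, so you can ask
# "what tends to follow 't', 'th', 'the', ..." with no built-in notion
# of words, punctuation, consonants, or vowels.

from collections import defaultdict

MAX_LEN = 6  # arbitrary depth limit for this sketch

def build_pattern_counts(path):
    counts = defaultdict(int)
    with open(path, encoding="utf-8") as f:
        text = f.read().lower()
    for i in range(len(text)):
        for length in range(1, MAX_LEN + 1):
            chunk = text[i:i + length]
            if len(chunk) == length:      # skip truncated chunks at the end
                counts[chunk] += 1
    return counts

def branch(counts, prefix, top=10):
    """Most common patterns that start with the given letters."""
    matches = [(p, n) for p, n in counts.items() if p.startswith(prefix)]
    return sorted(matches, key=lambda x: -x[1])[:top]

counts = build_pattern_counts("hamlet.txt")   # placeholder file name
for pattern, n in branch(counts, "t"):
    print(repr(pattern), n)

A flat dictionary of counts isn’t literally a tree, but every shared prefix acts like a branch, which is enough to show the flavor of the thing.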

This algorithm is cool, but rather limited at the moment. It can notice correlations between letters, and fairly simple strings, but it doesn’t do well with more complex patterns. I won’t bore you with the details, but rest assured I’m working on it.

In the meantime, AI is fun again. I mean, it was mostly fun before, but I was entering a dry spell where the work had started feeling like a chore. Every now and then, a fresh perspective helps get you excited again.

In this case, it also showed me that I had my strategy backwards. Just as you can’t build an ontology and weld on input/output later, you can’t build an ontology and weld on learning later, either. Learning, it appears, must come first. The how determines the what.

And this new direction is fun for another reason, too.

Till recently, I’d been coding in C++. Now C++ is a white-bearded, venerable patriarch of a language: time-tested, powerful, respected by all. But it’s also a grumpy old man who complains mightily about syntax and insists you spend hours telling it exactly what you want to do.

This new stuff, on the other hand, I’m coding in Python. Python is a newer language, not as bare-bones efficient as C++ but a hell of a lot simpler from the programmer’s point of view, shiny and full of features and full of magic. And I’m new to Python myself, so I’m still in the honeymoon phase. I’m not saying one or the other is “better” overall, but right now, Python is a lot more fun.

And really, programming ought to be fun. Especially if you’re building a mind.

Lessons I’ve Learned as an AI Developer

I’ve written before about some of my AI principles of design, as well as one of the deep secrets of artificial intelligence. I’ve even explored what Nietzsche can tell us about AI.

Here are a few more humble insights from my few years in the trenches. Like the other posts, these are just based on my own experience, so take them with a large grain of sodium chloride:

1. Finding a strategy for Friendly AI is crucial to surviving the Singularity, but I’m skeptical of Friendly AI based on a single overarching goal. Eliezer Yudkowsky has written a lot about the pitfalls of giving an AI a top-level goal like “maximize human happiness,” because if you define happiness as the number of times someone smiles, we could all end up with plastic surgery and permanent Joker faces – or worse. I agree, but I go a step further. I think any pre-defined high-level goal is bound to take you somewhere unpredictable (and probably bad). I think a Strong AI, like a child, has to be instructed by example: rewarding “friendly” behavior, punishing “unfriendly” behavior, as it occurs.

2. Like David Hume, I believe reason is grounded in experience. I’m doubtful that a top-down ontology (such as Cyc) can be built like a castle in the air, then “grounded” later on a robotic foundation. Cyc is an ambitious and very cool project, and I respect what they’re doing. But I don’t think it’s on the path to a Strong AI.

3. Our minds work by trial and error. We focus more on “what works” than “what’s true.” It’s hard for me to see how a Strong AI could be built from a formalized, truth-based system that starts with axioms and derives conclusions in a logically airtight way. Listing my reasons for this belief would require a whole separate post; let’s just call it an instinct for now.

What do you think? Agree, disagree? Questions?

Isomorphism and AI

In yesterday’s post I explained group isomorphism, which points out a deep symmetry between adding and multiplying. I also showed how the natural log function could be used to map between the two operations.

But the idea of isomorphism applies to lots of things beyond math. Think about language. After all, what is language but an isomorphism between concepts and words?

“The cat is black.” An AI could parse this sentence and decide there’s a noun-adjective relationship between “cat” and “black.” So instead of:

5 × 3

we have:

“cat” (noun-adjective relationship) “black”

To be meaningful, the words and their relationship must map to their corresponding concepts. So instead of:

ln(5) + ln(3)

we have:

cat (has-property relationship) black

And we also need a function to map from the words to the concepts. So instead of:

ln(5) = 1.609438

ln(3) = 1.098612

we have:

MeaningOf(“cat”) = cat

MeaningOf(“black”) = black
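To make the analogy concrete, here’s a toy version in code. The parse rule and the MEANING_OF table are inventions for illustration, nothing more: words relate grammatically, concepts relate semantically, and a mapping function carries one to the other, the way ln() carried factors to summands.

# Toy illustration of the word-to-concept mapping described above.

MEANING_OF = {
    "cat": "CAT",      # the concept, as opposed to the word
    "black": "BLACK",
}

def parse(sentence):
    # Wildly oversimplified: assume the sentence has the shape "The X is Y."
    words = sentence.strip(".").lower().split()
    return (words[1], "noun-adjective", words[3])

noun, _, adjective = parse("The cat is black.")
triple = (MEANING_OF[noun], "has-property", MEANING_OF[adjective])
print(triple)   # ('CAT', 'has-property', 'BLACK')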

All very nice and neat, in this example. But of course, if language was really that easy, we’d have built a strong AI decades ago. It turns out that conceptual isomorphism can be a hell of a lot more complicated than mathematical group isomorphism. For instance…

1. Mathematical group operations (like addition and multiplication) only take two inputs (the two numbers you’re adding or multiplying). But conceptual relationships can take any number of inputs. How many adjectives could we attach to the single noun “cat”?

2. In mathematical groups, there’s a clear distinction between elements (the numbers) and operations (addition, multiplication). But with conceptual relationships, the difference gets blurry. Let’s say cat has a likes relationship with milk, and a hates relationship with bath. But we also know that likes has an is opposite relationship with hates. So now we have relationships, not only between “things,” but between other relationships. (There’s a tiny code sketch of this after the list.)

3. In our math example, our mapping function was the natural log, ln(x). Now ln(x) is a neat, precise, clearly-defined function, which takes exactly one input and gives exactly one output. Does language work that way? Ha! Imagine trying to evaluate MeaningOf(“run”). That can mean jogging, or a home run in a baseball game, or a tear in a stocking, or “run” as in “I’ve run out of milk,” or, or, or… What’s worse, these meanings aren’t independent, but have all sorts of relationships to each other; nor are they all equally likely; and the likelihood depends on the context of the word; and the way it depends on context can change over time; and the list of possible meanings can expand or shrink; and the mechanisms by which this occurs are not fully understood…
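Here’s a tiny sketch of what point 2 can look like as a data structure, with every name invented for illustration: a relationship is just another thing that other facts can be about.

# Toy sketch of "relationships between relationships" (point 2 above).
# Facts are triples, and a relation name can itself appear as the
# subject or object of another fact.

facts = [
    ("cat", "likes", "milk"),
    ("cat", "hates", "bath"),
    ("likes", "is-opposite-of", "hates"),   # a fact about two relationships
]

def related(x, relation):
    """Everything x stands in the given relationship to."""
    return [obj for subj, rel, obj in facts if subj == x and rel == relation]

print(related("cat", "likes"))              # ['milk']
print(related("likes", "is-opposite-of"))   # ['hates']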

So, yeah. It gets complicated. But then, that’s why it’s so much fun.

Now we know how conceptual isomorphism (in AI) is like group isomorphism (in math). We’ve even established – dare I say it? – an isomorphism between the two isomorphisms. And now I’m going to stop saying “isomorphism” for a while.

Questions?

How the HexBug Comes Alive

YES PATHETIC MORTALS I AM THE HEXBUG GENUFLECT IN THE PRESENCE OF YOUR BETTERS

This is the HexBug Nano. I picked it up Friday from Hobby Lobby on a whim.

Sure, it looks simple. But it was only ten bucks, and I’m a sucker for robots. So I bought one.

I’m glad I did.

OBSERVE HOW YOUR TINY EAGLE QUIVERS IN MY PRESENCE UNLEASH THE SINGULARITY

As it turns out, the creature really is simple. No assembly, no setup, no way to control it. There’s an on/off switch. That’s it.

What’s more, the robot has no moving parts except a buzzing, vibrating motor in its belly.

That vibration is all it can do. The legs are just rubber attached to the body. There’s no mechanical control there. It doesn’t even have any sensors.

SCRATCH MAH BELLAH

So what can a toy that simple possibly do?

Just watch:

Watch how it skitters across the floor with a slight back-and-forth motion, as if hunting for food. Watch how it seems to avoid walls. Watch how, when I flip it over, it thrashes around till it’s upright again.

Two things about this.

First, it’s an ingenious piece of engineering. It may look simple, but getting the precise shape of the legs to keep it moving forward – the angled head so it turns when it hits a wall – the shape of the back so it flips over when necessary – that represents hundreds of man-hours of design work.

Second, for all its clever craftsmanship, it’s still orders of magnitude less sophisticated than a real insect. A real insect can seek food, evade predators, adapt to its environment, mate, reproduce, and a thousand other things. The HexBug Nano does none of that. And yet, when I watch it in motion, my brain says: That’s a bug!

Why?

Our brains love imparting life and agency to everything we see. If it moves, it’s alive. If it moves unexpectedly, it’s thinking. That’s how we process the world around us.

This human tendency to bestow life on the unliving is a blessing and a curse for A.I. developers. A blessing, because it makes even simple features seem impressive, at least initially. A curse, because we can easily fool ourselves into thinking a piece of software – or even hardware –  is smarter than it really is. We always have to be vigilant against that.

Fortunately, the HexBug isn’t SkyNet just yet.

AI Status Update

I was up till midnight working on it, so I’ll keep this brief:

  • The agent processing algorithm is much faster now.
  • I cleaned up the code, getting rid of a bunch of clutter and making the logic more elegant.
  • Sequence recognition now works to any depth. For instance: a word is a sequence of letters, a phrase is a sequence of words, a sentence is a sequence of phrases. Recognizing deeply “nested” sequences like this turns out to be a lot more complicated than you might expect. My previous algorithm was supposed to handle it, but it was convoluted and buggy. The new version is cleaner and (so far) works 100%. (There’s a rough sketch of the nesting idea just after this list.)
  • The AI can recognize certain parts of speech (nouns, adjectives, verbs) and construct grammatical relationships between them (e.g. this adjective modifies this noun). It can then use other agents to map the grammar structure (adjective “black” modifies noun “dog”) to a semantic structure (entity “dog” has property “black”). So far this is still in its early stages, but the foundation for building it up is solid.
  • Text-to-speech is fully implemented by calling a free online text-to-speech API. Before, it could type; now, it can talk. Click here for a sample of what its voice sounds like. That’s not a static mp3 file. It’s generated dynamically based on the text you give it.
  • I gave it the ability to load websites on command. So I can type “google” and it can bring up the Google web page. Obviously that’s nothing special in itself, but it’s significant because website-loading is now fully integrated with its other capabilities. For the AI, loading a site is like flexing a muscle.
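Since the nested-sequence point is hard to convey in prose, here’s a stripped-down sketch of the concept (not the actual agent code): the same grouping step applied at every level, letters into words, words into phrases. The input string and boundary rules are contrived for the example.

def group(items, is_boundary):
    """Split a flat sequence into sub-sequences at boundary items."""
    groups, current = [], []
    for item in items:
        if is_boundary(item):
            if current:
                groups.append(current)
            current = []
        else:
            current.append(item)
    if current:
        groups.append(current)
    return groups

letters = list("the black dog , the old cat")   # comma spaced out to keep the toy simple
words = group(letters, lambda ch: ch == " ")    # letters -> words
phrases = group(words, lambda w: w == [","])    # words -> phrases

print(words)    # [['t', 'h', 'e'], ['b', 'l', 'a', 'c', 'k'], ['d', 'o', 'g'], [','], ...]
print(phrases)  # two phrases, each a list of words

The real thing has to discover the boundaries and the levels for itself, which is exactly where the complications come in.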

Gotta run. If you have any questions, just ask!

The Singularity Club

Four days ago, I joined SingularityHub.com, a site with news, discussion, and videos related to the Technological Singularity, the so-called “Rapture of the Nerds.” To become a member of this site, you don’t need to be an AI researcher, a neuroscientist, a billionaire investor, or anything like that.

You just need to be, well, a fan.

I’ve been exploring this corner of the Interwebs lately, and an odd little corner it is. The heavyweight is the Singularity University, a surprisingly well-connected group funded by the likes of Google, Cisco, Nokia, and Autodesk (creator of AutoCAD).

And there’s the 2045 Initiative, a group founded by Russian billionaire Dmitry Itskov, dedicated to “the transfer of an individual’s personality to a more advanced non-biological carrier, and extending life, including to the point of immortality.” Project deadline: 2045.

Mega-projects aside, you’ve got blogs like Accelerating Future, Singularity Weblog, and Transhumanity.net.

And, of course, Singularity thinkers like Ray Kurzweil and Eliezer Yudkowsky have their own online presence. Kurzweil, incidentally, was hired by Google a few months ago. His first time working for a company he didn’t create.

The Singularity research/enthusiast community is, as I said, a strange little group. Websites are a mix of real news about promising present-day tech, debates about philosophy and spirituality and robotics, and bona fide major efforts to bring this vision of the future a little closer to reality.

The common thread in this group is that people really believe. They know it sounds crazy, but then, the truth often does.

What do I think about all this?

Well, as I wrote last year, I believe the Singularity is real, and I believe it is coming. Maybe not in our lifetimes, but it’s coming. I am very much a part of the strange little group. I honestly think it’s a real possibility that some human beings alive today will live to see their one millionth birthday.

I, too, am conscious of how ridiculous this sounds. I know the Internet is teeming with these fringe micro-groups that feed on each other’s delusions until they’re convinced that their tiny groupthink vision is a prophecy for all mankind. I get that.

And yet.

A billion years ago, multi-celled organisms were a novelty. A million years ago, there was no such thing as language. A thousand years ago, electricity was nothing more than an angry flash in the sky. A hundred years ago, the whole idea of airplanes was still strange and new. Ten years ago, smartphones were only for the early adopters.

Today, telekinesis is real. Lockheed Martin has a quantum computer. And Moore’s Law, despite constant predictions otherwise, hasn’t failed us in forty years.

Am I really supposed to look at all that, and not believe we’re headed toward something?

Descending Vectors

The stars are falling – his first thought, upon
The sight of snow, before today unseen;
Descending vectors, fractal-point are drawn
Across the vision-scope of the machine.
The robot’s palm extends; his pixeled eyes
Record, by frames, what metal cannot feel
And neural nets unbidden analyze
The sight of frozen water over steel.
Behind him stands the conference hall, whose door
Projects inviting warmth on salted stairs –
And here, in laughing groups of two and four
(And wrapped in coats of other mammals’ hairs)
The first distinguished scientists arrive
To argue over whether he’s alive.

I wrote that when I was twenty years old.

Artificial Intelligence and Nietzsche

I’m working my way through Friedrich Nietzsche’s philosophical masterpiece, Thus Spoke Zarathustra. It’s easy to read, and more profound than I expected.

Take this quote:

The self says to the “I”: “Feel pain!” And at that it suffers, and thinks how it may put an end to it – and for that very purpose it is made to think.

The self says to the “I”: “Feel pleasure!” At that it is pleased, and thinks how it might often be pleased again – and for that very purpose it is made to think.

Nietzsche is saying that pain and pleasure aren’t just one small piece of consciousness. Rather, seeking pleasure and avoiding pain is the mind’s entire job, its sole function.

This defies tradition, and common sense too. We’re used to thinking of the mind as something noble and elevated, for which the body is merely a tool. Most people believe the mind can even exist apart from the body, as an immortal soul. Yet Nietzsche claims the mind is not a master but a slave, whose elaborate calculations serve only to avoid an ouch.

Here, you might say: “That sounds silly, but what’s the point of arguing about it? We’ll never know. We can’t test it one way or another, it makes no difference to anyone’s life, so this is all a bunch of hot air.”

I would’ve said that myself, not so long ago.

But one of the great beauties of artificial intelligence is that it’s a proving ground for philosophy, especially philosophy of the mind. (And after all, it’s all philosophy of the mind.)

Statements like “experience is the foundation of knowledge” seem airy and abstract. But to an AI engineer, it’s all very real. Every day, I write programming code that constructs the foundation of knowledge. When you’re building a mind, philosophical questions become practical.

And from what I’ve seen so far, I believe Nietzsche was exactly right. The mind is a noble, elegant, powerful thing, but at its roots, it has one main job: seek pleasure, avoid pain. Everything else is subordinate to that primary function.

You may argue that lots of people do things that could be considered pain-seeking: going to battle, even volunteering to be crucified. But I would say that these acts are driven by pursuit of a higher “pleasure” or a deeper “pain,” encoded in the conscience.

Well, you say, then isn’t my argument circular? Isn’t such a definition of pain and pleasure so broad as to be meaningless?

From a human, subjective viewpoint: perhaps. From the objective stance outside of the AI’s code: no, it’s not meaningless at all. Pleasure and pain are values that can be calculated and measured.

Of course, my AI design is not the only one possible. Even if my architecture works, it may be very different than how the human brain operates.

Maybe. But I don’t think so – not about this, anyway.

What’s on your mind this Monday morning?

Robo-Riddling

On Monday, I posed this riddle:

I thrive in gardens and in war.
I’m what drunk drivers oft ignore.
I am a thing no man can hear,
yet light a fire and I’ll appear.
When cherries rot, I also die.
I can’t be felt – but what am I?

Longtime blog reader momenteye asked an excellent question:

How would an AI approach solving this riddle?

Let’s talk about that!

Of course, I’m only going to tell you how my model of an AI would handle this. There are a million approaches to artificial intelligence, and I don’t claim to have the only right answer. The AI I’m building is named Procyon. What would Procyon do with this riddle?

We’ll start by asking: why is Procyon trying to solve this riddle in the first place?

At his most basic level, Procyon has two main impulses: seek pleasure and avoid pain. That may sound crude or hedonistic, but it’s only the foundation of the mind, not the mind itself. Humans are, I believe, built on the same foundation. Ideas as varied as pacifism, fascism, Zen, and even sadomasochism all derive, I think, from these two basic impulses.

(In a philosophy class, we might stumble now into a debate about the definition of “pleasure,” and whether the concept is really meaningful, etc. Fortunately, in an AI, it’s much simpler. “Pleasure” is what happens when this variable right here is set to True.)
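If that sounds too hand-wavy, here’s roughly what I mean, as a toy illustration rather than Procyon’s actual code:

# Toy stand-in for the pleasure/pain foundation (not Procyon's real code).
# The point is only that "pleasure" and "pain" can be ordinary, measurable
# values that the rest of the mind is built on top of.

class Drives:
    def __init__(self):
        self.pleasure = 0.0
        self.pain = 0.0

    def reward(self, amount):
        self.pleasure += amount   # e.g. a pattern clicked into place

    def punish(self, amount):
        self.pain += amount       # e.g. a prediction turned out wrong

drives = Drives()
drives.reward(1.0)                      # solving something riddle-like felt good before...
print(drives.pleasure > drives.pain)    # ...so that kind of activity gets sought out again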

The point is, Procyon isn’t working on the riddle because some high-level directive tells him that riddles are fun. He’s doing it because this kind of thing has brought him pleasure in the past. (And if it hadn’t, then he wouldn’t care about riddles.)

Similarly, his approach to figuring out the riddle doesn’t follow some crystal-clean logic that starts with a set of Riddle Axioms and derives an answer. Formal logic is good for a lot of things, but as a foundation for intelligence, it sucks.

Instead, Procyon compares his situation with similar scenes from his past. In effect, he’s asking, “When I’ve come across puzzling textual questions before, what did I do? Did I like the way that approach turned out? Then I’ll try to apply it here.” Though of course he may not explicitly think in those terms.

So how would Procyon solve it? The answer, like so much else, is simple but maybe unsatisfying: it depends. His approach will be based on whatever’s worked in the past, not on any pre-defined rules for Figuring Things Out. Kind of like how humans operate.
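To give a flavor of what “whatever’s worked in the past” could mean, here’s a deliberately naive sketch; the scene format, the similarity measure, and the outcome scores are all inventions for illustration, not Procyon’s internals.

# Deliberately naive sketch of "compare with similar scenes from the past."
# Pick the past approach whose scene looks most like the current one,
# weighted by how well that approach worked out.

past_scenes = [
    {"features": {"text", "question", "wordplay"}, "approach": "look for double meanings", "outcome": 0.9},
    {"features": {"text", "question", "arithmetic"}, "approach": "work it out step by step", "outcome": 0.7},
    {"features": {"speech", "greeting"}, "approach": "reply politely", "outcome": 0.8},
]

def similarity(a, b):
    """Crude overlap measure between two feature sets."""
    return len(a & b) / len(a | b)

def choose_approach(current_features):
    best = max(past_scenes,
               key=lambda s: similarity(current_features, s["features"]) * s["outcome"])
    return best["approach"]

print(choose_approach({"text", "question", "wordplay", "rhyme"}))
# -> 'look for double meanings'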

This strategy, which I’ve breezily summarized, turns out to be fiendishly complex in practice. When I talk about “similar scenes from his past,” for example, what do I mean by “similar?” How do I define a “scene?” What does comparing his situation with them actually entail? And how does he “try to apply it here?”

As generations of AI researchers have learned the hard way, the devil is in the details. Just explaining my own approach to these questions would take a full-length book, and I’m still far from having all the answers. Likewise, Procyon in his current state is far from having any concept of what a question is, much less a riddle.

But he’s getting there, a day at a time. And so am I.