Friday Links

wiki

Possibly the most useful page on all of Wikipedia. A handy reference for those times when you get confused.

spock

In case you haven’t seen it: Spock vs. Spock. That is, classic Spock (Leonard Nimoy) vs. new Spock (Zachary Quinto). Spoiler alert, n00bs get pwned.

matternet

Problem: getting supplies to many remote parts of the world is difficult. Solution: a network of autonomous quadcopters that can deliver stuff anywhere. Internet, meet MatterNet.

google style

For my programmers in the audience: did you know Google has a coding style guide? Guides for C++, Python, and more.

teh moonz

This month, we saw the brightest meteoroid impact NASA’s ever detected on the moon.

sap

Tech behemoth SAP makes a very unusual announcement. They’re looking for a few good…autistic people.

pa

I can only assume Penny Arcade has cameras in my house, because this, right here, this is me. They even nailed the cantankerous grimace.

smbc

And last of all, a word about proper parenting technique from the Internet’s leading authority on the subject: SMBC.

Stop reading, it’s over!

Is Luxury Wrong?

I’m a lucky man in many, many ways. One way I’m lucky: I have smart friends, which means I get to have a lot of interesting conversations.

A couple weeks ago, I wrote a post called The Perils of Virtue, in which I suggested that buying luxury items (even small ones, like movie tickets) could be unethical, because the money is desperately needed elsewhere. In other words, the ethical opportunity cost of luxury is very high.

Several of my friends gave good responses, and I want to examine each in detail.

Zeev commented:

I’ll just use your example of buying a 20 dollar movie ticket instead of giving to Doctors Without Borders. If you buy a movie ticket you can say that you are paying someone’s salary, supporting the movie industry and the theater industry, and growing America’s economy. America having a strong economy is incredibly important since the USA has been a tremendous force for aid/charity to people around the world. If the US economy recovers/grows, the aid that it provides to the world will help a lot more than Doctors Without Borders ever could. So one can argue that spending 20 dollars on a movie ticket is just as virtuous as donating it to Doctors Without Borders.

I know that that example was a bit of a stretch, but you get the basic problem you can run into.

In other words, the world is very complex, and who’s to say where my money will do the most good in the long run?

Another of my friends made a similar argument (IRL!), pointing out that money given to charities could be misused by corrupt charity workers, or the people whose lives you save could turn out to be war criminals, or a million other possibilities. The basic argument is, I think, the same: we can’t see the future, so how can we know the most ethical way to spend our money?

My answer is that yes, the world is complicated, and no, we can’t predict the future. But not all possibilities have equal probability. Which sounds more likely: that Doctors Without Borders has massive systemic corruption so terrible that it renders donations worthless? Or that doctors are, in fact, using the money to practice real medicine in the field? As with all decisions, we can’t be certain, but we do the best we can.

Likewise, in Zeev’s example, it’s true that growing the US economy may be a net benefit to the world. But providing medical care in poor regions also helps the world, and $20 in (let’s say) Mozambique can go a lot farther than it can in the USA.

Let’s talk about that for a second. Here’s the world:

[map of the world]

And here’s the world, sized according to how rich we are:

You can be forgiven if you can’t find Mozambique on that map, even though its land area is twice that of Japan.

Looking at the second map, I’ll ask again: where do we really think $20 can do the most good?

And finally, it’s true that if we save someone’s life, they may go on to do terrible things, leading to a net ethical “loss.” They could also go on to become a great leader. We simply don’t know. But if we conclude from this that saving a life is ethically neutral, I’m forced to ask what we ever meant by “ethical” in the first place.

Meanwhile, David J. Higgins wrote an entire post responding to mine. He also argues that luxury isn’t necessarily unethical. I’ll pick out his key arguments, as I see them:

As well as producing the ability to purchase luxuries, your salary is a method of ascribing value to your actions. While there are many arguments against specific pairings of salary and job, salary is, in the Western world, the method most commonly used by people to measure their worth; if you work by the rule that you are not entitled to benefit from more than a basic life as long as there is someone in need, then you can strengthen the unconscious belief that you are not worth your extra salary. However flawed the salary system, the larger monetary value of a doctor relative to a barista is a clear sign of societal worth; is the doctor immoral for not valuing his work as only equal to the barista’s?

Remaining with the doctor, some of the salary is a recompense for past effort: is it not ethical to have some luxuries now to balance the extreme stress of his degree and vocational training? Will we get the most skilled people wanting to be doctors, airline pilots, or judges if it brings only the spiritual benefits of service?

In other words, shouldn’t highly skilled people be allowed to make lots of money and spend it on themselves?

The answer is a definite yes. Dave’s correct that we’ll get better people in skilled jobs if society allows them luxury. But, as I was careful to say in my original post, I’m not talking about what people should be allowed to do, I’m talking about what’s ethical to do. These are very different questions.

I believe a free society ultimately benefits all, and people should be allowed to spend their money how they want (within reason). Attempts to force a whole society to behave “ethically” have been uniformly disastrous (see: communism in Russia, China, and North Korea; the Taliban; American Prohibition). But as an individual, the situation’s very different. If I see a chance to help someone, and I knowingly pass it up, I’m as culpable for that decision as for any other.

Dave makes another important point:

…relaxation is of value. Time away from doing stressful and intensive work gives the worker the ability to achieve more, and better, work on their return. A surgeon who spends money on a great steak is getting more than sustenance; he is also undoing the damage that stress and fatigue do to his skills.

Beyond recharging, luxuries feed creativity. How often does the solution to a problem come when you are relaxing or in the shower? To say that one person deserves a month of clean water more than another person deserves a coffee is true in the abstract, but does one person benefit from a month of clean water more than they would benefit from the opportunity for creative thought that the coffee brought? In many cases, probably, but not always; unless we can decide in advance who will have worthwhile ideas, how can we deny anyone the right to have luxuries?

I believe this is a much stronger argument, and it’s one I actually agree with. Look at Google. Their headquarters is a playground; their employees are bathed in luxury. But that luxurious environment also helps draw the most brilliant minds in the world to work for them, creating products of enormous benefit to everyone. Relaxation does feed creativity, and mental health does have enormous value that’s hard to quantify.

So yes, luxury can be ethical – to the extent that it allows you to help others more effectively in the long run.

It may sound like I’m now using the same logic I condemned earlier, arguing that luxury is fine because the future is uncertain and nobody knows what’s best. But there’s an important difference. In my view, luxury is only ethical as long as it aids you in helping others more than you could be doing without it.

A night at the movies to unwind after a day of meaningful work; a week-long vacation to relax after months of stress; these are good things. But the ethical “price” for such luxuries is that we must funnel as much time and money as we can bear into efforts (such as charities) that do the most good in the world. Failure to do so is not merely a missed opportunity, it is wrong.

I want to emphasize again that this is a very high standard, and I certainly don’t claim that I’m meeting it. I go to the movies. I buy gadgets I don’t need. So please don’t think I’m holding myself up as some kind of perfect example here. I’m not. Nor do I want to preach to you; I’m simply stating conclusions that seem, to me, inescapable.

Gotta run. Tear me apart in the comments!

Lessons I’ve Learned as an AI Developer

I’ve written before about some of my AI principles of design, as well as one of the deep secrets of artificial intelligence. I’ve even explored what Nietzsche can tell us about AI.

Here are a few more humble insights from my few years in the trenches. Like the other posts, these are based on my own experience, so take them with a large grain of sodium chloride:

1. Finding a strategy for Friendly AI is crucial to surviving the Singularity, but I’m skeptical of Friendly AI based on a single overarching goal. Eliezer Yudkowsky has written a lot about the pitfalls of giving an AI a top-level goal like “maximize human happiness,” because if you define happiness as the number of times someone smiles, we could all end up with plastic surgery and permanent Joker faces – or worse. I agree, but I go a step further. I think any pre-defined high-level goal is bound to take you somewhere unpredictable (and probably bad). I think a Strong AI, like a child, has to be instructed by example: rewarding “friendly” behavior, punishing “unfriendly” behavior, as it occurs.
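To make that concrete, here’s a minimal sketch of what “instruction by example” might look like, assuming a toy agent whose preferences are shaped entirely by a trainer’s running reward signal rather than a pre-defined top-level goal (every name and number below is hypothetical, purely for illustration):

    import random
    from collections import defaultdict

    class ExampleTrainedAgent:
        """Toy agent with no hard-coded goal; its values emerge from feedback."""

        def __init__(self, actions, learning_rate=0.1):
            self.actions = actions
            self.preference = defaultdict(float)  # action -> learned preference
            self.lr = learning_rate

        def act(self):
            # Mostly exploit learned preferences, occasionally explore.
            if random.random() < 0.1:
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.preference[a])

        def feedback(self, action, reward):
            # The trainer rewards "friendly" behavior (+1) and punishes
            # "unfriendly" behavior (-1) as it occurs, instead of handing
            # the agent a single overarching goal up front.
            self.preference[action] += self.lr * reward

    agent = ExampleTrainedAgent(["share data", "hoard data"])
    for _ in range(100):
        action = agent.act()
        agent.feedback(action, +1 if action == "share data" else -1)

    print(dict(agent.preference))  # "share data" ends up strongly preferred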

2. Like David Hume, I believe reason is grounded in experience. I’m doubtful that a top-down ontology (such as Cyc) can be built like a castle in the air, then “grounded” later on a robotic foundation. Cyc is an ambitious and very cool project, and I respect what they’re doing. But I don’t think it’s on the path to a Strong AI.

3. Our minds work by trial and error. We focus more on “what works” than “what’s true.” It’s hard for me to see how a Strong AI could be built from a formalized, truth-based system that starts with axioms and derives conclusions in a logically airtight way. Listing my reasons for this belief would require a whole separate post; let’s just call it an instinct for now.

What do you think? Agree, disagree? Questions?

Postmortem: Star Trek Into Darkness (Spoiler-Free)

Rumor confirmed: Spock is hot.

I’m going to say a lot of good things about this movie, but let’s get one thing out of the way first. I don’t know who’s been picking titles for Star Trek movies lately, but they are bad at their job and they should feel sad.

The last one was just called Star Trek, but you can’t just call it Star Trek, because that’s already the name of the original show and the franchise. You can’t call it “the Star Trek movie,” because there are eleven other movies, one of which is Star Trek: The Motion Picture. You can’t call it Star Trek 11 because nobody knows what number it is, since none have been numbered since 6. And now you can’t even call it “the J.J. Abrams Star Trek movie” because there are two of them.

What the Star Trek Into Darkness title lacks in ambiguity, it makes up for in sheer banality. Darkness might actually be the single most overused, clichéd metaphor of all time. The title suggests that our heroes are in for a supreme struggle, which might be more exciting if it weren’t already the plot of every single story since Gilgamesh.

Ahem. Okay, wow. I didn’t realize I had that much title anger. Breathe, Brian.

Mmm…still has that new-starship smell.

Anyway: nomenclature aside, the new Trek is excellent. Same director, actors, and style as Star Trek 11: The Search for a Subtitle, but a general step up in both storytelling and adrenaline. If you liked the last one, you’ll like this one too.

I’ll admit, the trailer had me worried. Exciting music aside, it looked like a mess of action and CG with no discernible storyline. I’m happy to report that there is indeed a plot, and even if it isn’t the greatest script in the franchise, it gets the job done.

Besides, Star Trek has never been about the plot. It’s about the characters, and the hard decisions they make. And fortunately, the characters in Darkness are excellent.

Kirk is bold, brash, decisive, almost (but never quite) to the point of absurdity. Spock outshines even the captain in the titular Darkness: his struggle between Vulcan control and human passion is intense and utterly believable. Scotty is funny without being a joke, and even Sulu gets in some good scenes. Only Bones seemed disappointing, though I can’t really say why. All the crew felt new and old at once, in the best possible way.

And then there’s this guy:

"Into Darkness" is also the title of my semi-autobiographical indie goth rock album.

“Into Darkness” is also the title of my semi-autobiographical indie goth rock album.

As I’m staying spoiler-free, I won’t tell you who he really is, but it’s a cool moment when you find out. Regardless, the pale dark dude is eminently menacing, with a sweet deep voice that Christian Bale’s Batman can only dream about. A starship duel has never felt more like a knife fight than with this guy at the helm.

The movie is awfully heavy on CG and action. Personally I would’ve toned it down a little. Not that it made for a bad movie, but it didn’t feel as much like a Star Trek movie. But that may be just the grumpy old man in me. J.J. Abrams, get off my lawn!

Besides, if you had any doubt this was a Star Trek movie, there’s a scene near the end that will erase your worries. For newbie fans, it’s merely an excellent scene. For veteran Trekkies, it borders on sublime.

I liked Into Darkness a lot, and if its Rotten Tomatoes score is any indication, I’m not alone.

If you’re on the fence about whether to watch this one, my advice is: boldly go.

Udacity Goodness

I’m running quite late today, but in the ten minutes I’ve got, I want to tell you about Udacity.

See, as I dive deeper and deeper into the AI rabbit-hole, I find more and more that I need to understand statistics. Only problem is, I remember almost nothing from my two stats classes in college, and the textbook was so mind-numbingly dull that I don’t fancy reading it again on my own. And of course, I don’t want to spend the money to take another college-level class on it.

What’s a guy have to do to get a free college education around here?

One answer is Udacity. It’s a site that offers online video-based classes, put together by real professors. You still get college-level content, complete with quizzes and tests, but you also get some advantages:

  • No time spent commuting to class.
  • Go at your own pace. Fast-forward through the stuff you already know, take a little extra time if you’re struggling.
  • Every couple of minutes, Udacity stops and asks you a question to make sure you understand what’s going on. This constant engagement seems to work a lot better at keeping students focused than a once-a-week quiz or homework assignment.
  • Super-smart instructors, many of whom work on advanced projects at Google. (There’s a class that explains how the Google self-driving car works!)
  • Free.

The disadvantages:

  • No college credit or degree. You’re gaining knowledge, not job-hire-ability.
  • The course catalog is pretty limited right now, and what they do have is heavily skewed toward computer science. If you want to learn about Russian literature, you’re out of luck – for now. They’re growing all the time, though.

Of course, I’ve only just started my first Udacity class, so I still have a lot to learn about them. And about statistics…

Have you done much online learning? What was your experience like?

Friday Links

I spent a good 40 minutes putting together my usual Friday Links, complete with pictures. Then I hit “Preview” and discovered WordPress had decided to delete it, sans explanation.

So in lieu of that, here’s the short version:

  • Thingiverse is an online database of downloadable 3D printer designs. You can even print a new starship in case your old one gets busted up whilst flying Into Darkness.
  • Hacking the President’s DNA isn’t possible…yet. But how far off is a future like this?
  • Speaking of the future, how about a pill that knows you’ve swallowed it?
  • Somebody mailed a package with a tiny camera inside. This three-minute video takes you inside the journey of mail.
  • Carrie Fisher confirms she’s playing Leia once more in Star Wars: Episode VII. In spite of the rampant cynicism about yet another trilogy, I think the early signs are positive. It’s too early to get excited, but it’s also too early for prophecies of doom.
  • And finally, as always, SMBC delivers.

Have a culturally enlightened weekend – or at least drink some good beer! See you Monday.

These Are a Few of My Favorite Things

It’s been a pretty serious week so far on the Buckley blog, so I thought we’d lighten the mood a little. Here’s something I wrote this morning for no particular reason.

Sing to the tune of “My Favorite Things,” from The Sound of Music.

Vulcan and Hoth, Z’ha’dum and Arrakis
Louis C.K. and Zach Galifianakis
Hitchhiker’s Guide and The Lord of the Rings
These are a few of my favorite things.

Gödel and Escher and Bach and the Beatles
Thumbing my nose at my phobia of needles
Robots that hover on bumblebee wings
These are a few of my favorite things.

C++, Anki, Mozilla, and Blizzard
Gandalf, and Turing, and all other wizards
John Stuart Mill and the Mandelbrot Set
Vincent van Gogh (and I’m not finished yet!)

Braid and the Triforce and Geno and Moogles
Trying out new applications of Google’s
Browsing on Wiki and laughing at Bing
These are a few of my favorite things!

When the news sucks
When the code breaks
When I feel like shit
I simply remember my favorite things
And then I get ohhhh…ver it!

Isomorphism and AI

In yesterday’s post I explained group isomorphism, which points out a deep symmetry between adding and multiplying. I also showed how the natural log function could be used to map between the two operations.

But the idea of isomorphism applies to lots of things beyond math. Think about language. After all, what is language but an isomorphism between concepts and words?

“The cat is black.” An AI could parse this sentence and decide there’s a noun-adjective relationship between “cat” and “black.” So instead of:

5 × 3

we have:

“cat” (noun-adjective relationship) “black”

To be meaningful, the words and their relationship must map to their corresponding concepts. So instead of:

ln(5) + ln(3)

we have:

cat (has-property relationship) black

And we also need a function to map from the words to the concepts. So instead of:

ln(5) = 1.609438

ln(3) = 1.098612

we have:

MeaningOf(“cat”) = cat

MeaningOf(“black”) = black
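To see the analogy in one place, here it is as a toy sketch in Python, assuming a closed, unambiguous vocabulary where MeaningOf really can be a simple lookup table (everything here is invented for illustration):

    # Toy word-to-concept mapping. With a tiny, unambiguous vocabulary,
    # MeaningOf can be an ordinary dictionary lookup.
    CONCEPTS = {"cat": "CAT", "black": "BLACK"}

    def meaning_of(word):
        # The mapping function, playing the role that ln(x) played in math.
        return CONCEPTS[word]

    # Word level: noun-adjective relationship between "cat" and "black".
    words = ("cat", "black")

    # Concept level: map each word across, preserving the relationship,
    # which becomes has-property(cat, black).
    fact = ("has-property", meaning_of(words[0]), meaning_of(words[1]))
    print(fact)  # ('has-property', 'CAT', 'BLACK')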

All very nice and neat, in this example. But of course, if language was really that easy, we’d have built a strong AI decades ago. It turns out that conceptual isomorphism can be a hell of a lot more complicated than mathematical group isomorphism. For instance…

1. Mathematical group operations (like addition and multiplication) only take two inputs (the two numbers you’re adding or multiplying). But conceptual relationships can take any number of inputs. How many adjectives could we attach to the single noun “cat”?

2. In mathematical groups, there’s a clear distinction between elements (the numbers) and operations (addition, multiplication). But with conceptual relationships, the difference gets blurry. Let’s say cat has a likes relationship with milk, and a hates relationship with bath. But we also know that likes has an is opposite relationship with hates. So now we have relationships, not only between “things,” but between other relationships.

3. In our math example, our mapping function was the natural log, ln(x). Now ln(x) is a neat, precise, clearly-defined function, which takes exactly one input and gives exactly one output. Does language work that way? Ha! Imagine trying to evaluate MeaningOf(“run”). That can mean jogging, or a home run in a baseball game, or a tear in a stocking, or “run” as in “I’ve run out of milk,” or, or, or… What’s worse, these meanings aren’t independent, but have all sorts of relationships to each other; nor are they all equally likely; and the likelihood depends on the context of the word; and the way it depends on context can change over time; and the list of possible meanings can expand or shrink; and the mechanisms by which this occurs are not fully understood…
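To see how quickly the tidy sketch above breaks down, here’s what happens to that lookup table once a single word has many context-dependent senses (the senses and weights below are invented, purely to show the shape of the problem):

    # "run" has many senses; MeaningOf is no longer a neat one-input,
    # one-output function like ln(x). The weights here are made up.
    SENSES_OF_RUN = {
        "JOG": 0.40,             # "I run every morning"
        "BASEBALL_HOME_RUN": 0.10,
        "STOCKING_TEAR": 0.05,
        "DEPLETED": 0.25,        # "I've run out of milk"
        "OPERATE": 0.20,         # "run a program"
    }

    def meaning_of(word, context=None):
        # A real system would have to weigh senses by context; this toy
        # version ignores context and just picks the most likely sense,
        # which is exactly the kind of shortcut that fails in practice.
        return max(SENSES_OF_RUN, key=SENSES_OF_RUN.get)

    print(meaning_of("run"))  # 'JOG' -- right 40% of the time, at best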

So, yeah. It gets complicated. But then, that’s why it’s so much fun.

Now we know how conceptual isomorphism (in AI) is like group isomorphism (in math). We’ve even established – dare I say it? – an isomorphism between the two isomorphisms. And now I’m going to stop saying “isomorphism” for a while.

Questions?

Let’s Talk About Group Isomorphism

“Let’s talk about group isomorphism!” said no one ever.

Group isomorphism is obscure, complicated, and technical. It’s also one of the coolest ideas I’ve ever encountered, one of those real wow! moments that changed the way I think about math, the universe and everything. And, I believe, it’s very relevant to AI.

So here’s the deal: I’ll do my best to lead you through the swamp, and in return, you’ll get a new perspective on mathematics – and maybe even the nature of intelligence. Agreed?

We’ll start with something simple. Or rather, two somethings: addition and multiplication, + and ×. Our old arithmetic friends. Can’t get much simpler than these guys, right?

6 + 8 = 14

3 × 7 = 21

Have you ever thought about how similar addition and multiplication are? Both of them take two numbers, do something mathy to them, and spit out another number.

In fact, the similarities don’t stop there. Both operations are also associative, meaning we can throw in parentheses and it doesn’t change the answer:

(1 + 2) + 3 = 1 + (2 + 3) = 6

(2 × 3) × 5 = 2 × (3 × 5) = 30

That may not seem like much, but it’s a property subtraction doesn’t have.

Addition and multiplication also both have an identity element. For addition it’s zero; for multiplication it’s one. Either way, it leaves whatever you pair it with unchanged.

5 + 0 = 5

7 × 1 = 7

What’s more, addition and multiplication both have the idea of an inverse. For any number (zero excepted, in multiplication’s case), you can pair it with its inverse and get the identity element.

18 + (-18) = 0

37 × (1/37) = 1

Addition and multiplication really are a lot alike, aren’t they? So much alike, in fact, that you almost get the feeling that at some deeper level, they’re doing the same thing, just with different symbols. Like they’re two different languages, and all we need is a translator…

In fact, you can translate (or “map”) between multiplication and addition. The key is the natural log function, ln(x).

For those who aren’t familiar with it, the natural log function is a button on any scientific calculator, and a function in Excel too (just type “=LN(5)” for example). If you want more details about what it actually means, Google is your friend. For now, what we care about is that it will translate between multiplication and addition.

What do I mean by that? Let’s see an example.

3 × 5 = 15

Well, ln(3) = 1.098612, and ln(5) = 1.609438, and ln(15) = 2.70805. These numbers may look like random garbage, but they have a remarkable property:

1.098612 + 1.609438 = 2.70805

(Ignoring rounding errors, of course.)

In other words:

ln(3) + ln(5) = ln(15)

There’s our translation.

Say we wanted to calculate 6 × 7, but we’re allergic to multiplication. We just can’t do it, for whatever reason. But we can do addition, and that’s all we need.

All we have to do is “translate” our multiplication problem into the language of addition, using the natural log function:

ln(6) = 1.791759

ln(7) = 1.94591

Now we’re in the world of addition, and I can do addition!

1.791759 + 1.94591 = 3.737669

We’ve got our answer! We just need to “translate” it back into multiplication world.

We know that ln(answer) = 3.737669. What we need is the ln function in reverse, a sort of un-natural-log, to go back to multiplication world. In mathematical terms, we’re looking for the inverse function, the function that will undo ln(x). As it happens, the inverse function for ln(x) is e^x, so we take e^3.737669 and get…

(drum roll please)

42, the answer to our original multiplication problem, 6 × 7.
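If you’d like the whole round trip in one place, here it is as a few lines of Python (just a sketch of the example above, using the standard math library):

    import math

    # Multiply two numbers using only addition: "translate" into the
    # world of addition with ln(x), add, then translate back with e^x.
    def multiply_via_addition(a, b):
        translated = math.log(a) + math.log(b)  # ln(a) + ln(b) = ln(a*b)
        return math.exp(translated)             # e^(ln(a*b)) = a*b

    print(multiply_via_addition(6, 7))  # ~42, give or take rounding error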

Roughly speaking, addition (over the real numbers) and multiplication (over the positive reals, where ln(x) is defined) each form what mathematicians call a group. And because these two groups are fundamentally the same, they are called isomorphic (“iso” means same, “morph” means form). Group isomorphism.

To me, it’s fascinating that addition and multiplication – which seem totally different on the surface – are somehow the exact same thing “under the covers.” Any result you derive for one operation can be “translated” to the other. Our world hides deeper symmetries than we suspect.

Tomorrow, I’ll show how some of those deeper symmetries apply to AI.

Questions?

Two-thirds

I’m sick today, so here’s a poem I wrote in college.

Two-thirds
Of a knight
Sits unmoving under burnished steel;
His sword, or someone’s, extends vertically
From a nearby shadow, pitted brick-red,
Similarly lifeless.
There are others –
Just as, on first sighting two leaves in the forest
So too are there “others.”
But the leaves, early fallen
From a blood-red autumn,
Are scarcely discernible through the surge of crows
Ebbing and roiling, black on black on black
In the lengthening twilight.

The vision dims halfway to reality.
The prophet is yet new;
Her eyes, still white with shock,
Have not yet faded into numbness
From a hundred such visions.
Presently she looks forward,
Sees again the eager boy – the soldier,
Registers his repeated question:
“Will we have victory today?”
– Victory. She does not immediately know this word,
This “victory.”
Which portion of the massacre
Corresponds to his query?
– But eventually, dutifully,
She picks out the banner
That has not yet been trampled by horse hooves
And compares with the boy’s insignia
To see if they match.