I’ve written before about some of my principles of AI design, as well as one of the deep secrets of artificial intelligence. I’ve even explored what Nietzsche can tell us about AI.
Here are a few more humble insights from my few years in the trenches. Like the other posts, these are based only on my own experience, so take them with a large grain of sodium chloride:
1. Finding a strategy for Friendly AI is crucial to surviving the Singularity, but I’m skeptical of Friendly AI based on a single overarching goal. Eliezer Yudkowsky has written a lot about the pitfalls of giving an AI a top-level goal like “maximize human happiness,” because if you define happiness as the number of times someone smiles, we could all end up with plastic surgery and permanent Joker faces – or worse. I agree, but I go a step further. I think any pre-defined high-level goal is bound to take you somewhere unpredictable (and probably bad). I think a Strong AI, like a child, has to be instructed by example: rewarding “friendly” behavior, punishing “unfriendly” behavior, as it occurs (there’s a small sketch of what I mean just after this list).
2. Like David Hume, I believe reason is grounded in experience. I’m doubtful that a top-down ontology (such as Cyc) can be built like a castle in the air, then “grounded” later on a robotic foundation. Cyc is an ambitious and very cool project, and I respect what they’re doing. But I don’t think it’s on the path to a Strong AI.
3. Our minds work by trial and error. We focus more on “what works” than “what’s true.” It’s hard for me to see how a Strong AI could be built from a formalized, truth-based system that starts with axioms and derives conclusions in a logically airtight way. Listing my reasons for this belief would require a whole separate post; let’s just call it an instinct for now.
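To make item 1 a bit more concrete, here’s a minimal sketch of “instruction by example” – all names and numbers are hypothetical, and this is a toy, not a proposal. The point is that the agent’s preferences are shaped case by case by a trainer’s feedback, rather than derived from a pre-defined top-level goal:

```python
import random
from collections import defaultdict

# Toy "instruction by example" loop (hypothetical names throughout).
# The agent keeps a running preference score per behavior, nudged up or
# down by the trainer's feedback as behavior occurs.

class Agent:
    def __init__(self, behaviors):
        self.behaviors = behaviors
        self.preferences = defaultdict(float)  # learned, not pre-specified

    def act(self):
        # Mostly do what has been rewarded, but keep exploring a little.
        if random.random() < 0.1:
            return random.choice(self.behaviors)
        return max(self.behaviors, key=lambda b: self.preferences[b])

    def receive_feedback(self, behavior, reward, rate=0.2):
        # Reward "friendly" behavior, punish "unfriendly" behavior, as it occurs.
        self.preferences[behavior] += rate * (reward - self.preferences[behavior])

agent = Agent(["share_resources", "hoard_resources", "tell_truth", "deceive"])

def trainer_feedback(behavior):
    # The trainer's judgment, applied case by case rather than written as an axiom.
    return 1.0 if behavior in ("share_resources", "tell_truth") else -1.0

for _ in range(200):
    b = agent.act()
    agent.receive_feedback(b, trainer_feedback(b))

print(sorted(agent.preferences.items(), key=lambda kv: -kv[1]))
```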
What do you think? Agree, disagree? Questions?
To me this sounds like the wider issue of absolute vs. specific morality: whether it is possible to create even a single law that has no exceptions.
I was never comfortable with Kant’s thesis that we should judge our actions by whether we would have the whole world do the same thing, and I have become more and more of a fuzzy-boundary jurist over the years as I have learned more about the imperfection of human senses and the potential for tiny changes to produce large alterations in a system.
I have encountered people who argue that AI can resolve this by becoming Alexis de Tocqueville’s perfect ruler; however – while building perfect senses might be possible – I cannot believe that any observer could truly understand the thoughts of another, so there will always be an imperfection of knowledge.
I assume you’re talking about item #1. To be fair, plenty of possible top-level directives are compatible with imperfect knowledge. You can (for instance) define all your unknowns as numerical probabilities, and continue happily from there toward whatever your goal is.
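For what it’s worth, here’s a rough illustration of what I mean (the numbers and plan names are made up): uncertainty doesn’t stop a goal-directed system, it just reasons over expected values instead of certainties.

```python
# Toy example: a goal-directed chooser that treats its unknowns as
# probabilities and picks the action with the best expected value.

outcomes = {
    # action: list of (probability, value-toward-the-goal) pairs
    "plan_a": [(0.7, 10.0), (0.3, -5.0)],
    "plan_b": [(0.5, 8.0), (0.5, 2.0)],
}

def expected_value(branches):
    return sum(p * v for p, v in branches)

best = max(outcomes, key=lambda a: expected_value(outcomes[a]))
print(best, expected_value(outcomes[best]))  # plan_a 5.5
```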
Regardless, I think we humans are a lot better at ostensive definitions (i.e. “Do something like THAT”) than formal specifications. With the fate of our species on the line, let’s stick with what we’re good at, says I.
I was talking about #1.
I might be missing your point, but defining unknowns as probabilities makes them inaccurate, so the AI cannot be a perfect ruler.
It would allow the AI to be as moral as a human; however, since humans are unfriendly to humans, it might not turn out to be a friendly AI.
“defining unknowns as probabilities makes them inaccurate” – If you’re saying that an AI won’t have perfect knowledge, that’s certainly true. Nothing has perfect knowledge. But friendliness and knowledge have little to do with each other. Just ask a golden retriever.
Ah… I see where we might be speaking from different assumptions. I would not categorise a Retriever as intelligent, so would not consider its behaviour when considering what a friendly AI would be.
I wouldn’t call a golden retriever intelligent (compared to a human) either. That’s my point. Intelligence and friendliness are largely unrelated. So the question of whether an AI can have “perfect knowledge” may be interesting for its own sake, but it’s fairly irrelevant to the issue of Friendly AI.
I was trying (and failing) to say that to me there is a difference between friendliness in intelligent beings and friendliness in other things.
Intelligence grants the ability to analyse behaviour, so – whilst the symptoms may be the same – a friendly intelligence is acting differently from a friendly Retriever. So for me, intelligence (or lack thereof) and friendliness are linked.
Inherent quality/instinct vs. reasoned choice is possibly not a perfect distinction, but it serves me better than the alternative, which produces a toaster that is friendly because it does not burn my toast.