In yesterday’s post I explained group isomorphism, which reveals a deep symmetry between adding and multiplying. I also showed how the natural log function could be used to map between the two operations.
But the idea of isomorphism applies to lots of things beyond math. Think about language. After all, what is language but an isomorphism between concepts and words?
“The cat is black.” An AI could parse this sentence and decide there’s a noun-adjective relationship between “cat” and “black.” So instead of:
5 × 3
we have:
“cat” (noun-adjective relationship) “black”
To be meaningful, the words and their relationship must map to their corresponding concepts. So instead of:
ln(5) + ln(3)
we have:
cat (has-property relationship) black
And we also need a function to map from the words to the concepts. So instead of:
ln(5) = 1.609438
ln(3) = 1.098612
we have:
MeaningOf(“cat”) = cat
MeaningOf(“black”) = black
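To see the whole analogy in one place, here’s a toy sketch in Python. Everything in it is made up for illustration: the meaning_of function (a stand-in for MeaningOf) and the CONCEPT_* labels aren’t a real parser or knowledge base, just placeholders with the right shape.

import math

# The math side: ln maps multiplication onto addition.
# ln(5 * 3) == ln(5) + ln(3), up to floating-point rounding.
assert math.isclose(math.log(5 * 3), math.log(5) + math.log(3))

# The language side: a lookup table standing in for MeaningOf,
# mapping word strings to concept identifiers.
MEANING_OF = {
    "cat": "CONCEPT_CAT",
    "black": "CONCEPT_BLACK",
}

def meaning_of(word):
    """Map a word to its concept -- the analogue of ln(x)."""
    return MEANING_OF[word]

# "cat" (noun-adjective relationship) "black" ...
words = ("noun-adjective", "cat", "black")

# ...maps to: cat (has-property relationship) black
concepts = ("has-property", meaning_of(words[1]), meaning_of(words[2]))

print(concepts)  # ('has-property', 'CONCEPT_CAT', 'CONCEPT_BLACK')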
All very nice and neat, in this example. But of course, if language were really that easy, we’d have built a strong AI decades ago. It turns out that conceptual isomorphism can be a hell of a lot more complicated than mathematical group isomorphism. For instance (there’s a toy code sketch of these complications after the list)…
1. Mathematical group operations (like addition and multiplication) take exactly two inputs (the two numbers you’re adding or multiplying). But conceptual relationships can take any number of inputs. How many adjectives could we attach to the single noun “cat”?
2. In mathematical groups, there’s a clear distinction between elements (the numbers) and operations (addition, multiplication). But with conceptual relationships, the difference gets blurry. Let’s say cat has a likes relationship with milk, and a hates relationship with bath. But we also know that likes has an is-the-opposite-of relationship with hates. So now we have relationships not only between “things,” but between other relationships.
3. In our math example, the mapping function was the natural log, ln(x). Now ln(x) is a neat, precise, clearly defined function, which takes exactly one input and gives exactly one output. Does language work that way? Ha! Imagine trying to evaluate MeaningOf(“run”). That can mean jogging, or a home run in a baseball game, or a tear in a stocking, or “run” as in “I’ve run out of milk,” or, or, or… What’s worse, these meanings aren’t independent, but have all sorts of relationships to each other; nor are they all equally likely; and the likelihood depends on the context of the word; and the way it depends on context can change over time; and the list of possible meanings can expand or shrink; and the mechanisms by which all of this happens are not fully understood…
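Here’s the promised toy sketch of all three complications. Again, none of this is a real system; the tuples, the SENSES table, and the crude context scoring are placeholders I’ve invented to show the shape of the problem.

# 1. Conceptual relationships aren't limited to two inputs.
cat_description = ("has-properties", "cat", ["big", "black", "lazy"])

# 2. Relationships can relate other relationships. If each relationship
#    is just a tuple, it can appear as an argument to another one.
likes = ("likes", "cat", "milk")
hates = ("hates", "cat", "bath")
opposites = ("is-opposite", likes, hates)  # a relationship between relationships

# 3. MeaningOf is one-to-many and context-dependent. Each sense of
#    "run" gets a bag of cue words; overlap with the context picks a ranking.
SENSES = {
    "run": {
        "jog":              {"exercise", "legs", "park"},
        "home-run":         {"baseball", "game", "bat"},
        "tear-in-stocking": {"stocking", "fabric"},
        "run-out-of":       {"milk", "supply", "empty"},
    },
}

def meaning_of(word, context=()):
    """Return candidate senses of a word, ranked by crude context overlap."""
    senses = SENSES.get(word, {word: set()})
    def overlap(item):
        _, cues = item
        return len(cues & set(context))
    ranked = sorted(senses.items(), key=overlap, reverse=True)
    return [sense for sense, _ in ranked]

print(meaning_of("run", context=("baseball", "game")))
# ['home-run', 'jog', 'tear-in-stocking', 'run-out-of']

Even this tiny re-ranker hard-codes exactly the things the real problem makes hard: where the cue words come from, how the ranking shifts with context, and how it drifts over time.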
So, yeah. It gets complicated. But then, that’s why it’s so much fun.
Now we know how conceptual isomorphism (in AI) is like group isomorphism (in math). We’ve even established – dare I say it? – an isomorphism between the two isomorphisms. And now I’m going to stop saying “isomorphism” for a while.
Questions?