Mr. Spock prides himself on thinking logically. When someone is being “highly illogical,” he isn’t afraid to let them know. He’s half Vulcan — Vulcans being an entire race of logical thinkers.
Lots of humans try to do the same, and often feel they’ve succeeded. “Just think about this logically,” you’ll hear someone say, implying that they know how to do this, whereas others do not. A lot of people seem to believe that “thinking logically” is something fairly well-defined, that you can just decide to do, if you’re smart enough to know how.
I’m pretty sure it’s not that simple.
Don’t get me wrong — I’m a huge believer in logic. I firmly believe that logic (along with its children, math and science and engineering) is one of the greatest things we’ve ever come up with, as a species. It’s worthy of deep study and careful practice. Certainly you can think more logically or less logically, and you should generally shoot for the first one.
But logic has its limits. And if you believe that logical thinking is a sort of binary state that you can flip on or off, then you’re being, well, highly illogical. Logic is a strange, subtle, slippery creature, and even if you can get hold of it firmly (good luck, btw) there are some doors it simply can’t open.
Let’s look at some of the difficulties.
Logic is only as good as its inputs — and logic can’t choose your inputs for you.
In a comment a few days ago, blog reader Anthony Lee Collins pointed out that logic only works well if you use it “starting with things that are true.” In other words: Garbage in, garbage out. This is a subtler point than you might think, because logic takes various kinds of inputs. Some input is raw sensory data (e.g., something you saw with your own eyes). Other input isn’t direct from the senses, but consists of conclusions you’ve drawn from earlier thoughts (e.g., he’s sneezing, so he’s probably allergic to something). Still other input consists of definitions: some are spelled out fairly clearly in your brain (e.g., a kilogram is a thousand grams), while others are implicit, axiomatic (e.g., it’s possible to define the concept of “a number” formally, but almost nobody does, or should).
The description I’m giving is vastly oversimplified, but already we see how much of a mess our input is. To put it mildly, we can never 100% trust our senses, or our prior conclusions, or our definitions of terms, whether implicit or explicit. What’s more, we can’t even quantify the degree to which we trust these things, and making the attempt would take an enormous amount of time.
Beyond all that, we can’t even use logic to decide which inputs to use, at least not in an absolute sense. Yes, I can use some logical criteria to decide, say, which news reports or which people I trust more or less. But how did I decide those criteria? Did I use logic there, too? Then that logical process must have had inputs … and so on. You see the problem. If you go down far enough, sooner or later you’ll hit a gooey blob of intuition. This necessarily happens, to some extent, even in the most rigorous of formal mathematical proofs (because you can’t build a castle on air). How much more does it happen in our ordinary thinking?
All this difficulty, and we haven’t even gotten to the actual “logic” part yet.
Logic can’t choose your goals or values for you.
Logic and science deal in facts, in building up basic data into more complex and (hopefully) more useful data. Logic can tell you something like: “If you vaccinate these 1,000 people against this disease, there is a 99% chance that at least 975 of them will survive.” What it can’t tell you is whether or not to care about people living or dying. This is what philosopher David Hume famously called the is-ought problem: No matter how much “is” data you have (objective facts), there is no purely logical way to jump into the world of what you “should” do (subjective values). At some level, this must always rely on intuition.
People try to get around this in various ways. Christians can say “God commands us to do this.” Okay — but even if you believe in God, why obey him? Because he’s supremely good and wise? Okay — but why go along with what’s supremely good and wise? (If that sounds silly, that’s your intuition talking.) Conversely, atheists might conceivably say that, according to the theory of evolution, the only purpose of life is to reproduce and improve. But in fact the theory of evolution says no such thing. Like all scientific theories, it is purely descriptive; it says that, over time, trillions of organisms have reproduced, and this is the mechanism by which species “improve” (however you define that). It does not and cannot advise you on whether to pursue this goal yourself. All other value systems have the same difficulty.
What logic can do — and what it does very well, in fact — is define subgoals and lesser values as a consequence of your ultimate, foundational values. For instance, logic can’t tell you that you should value life, as a standalone statement. But it can tell you (more or less) that if you value life, then vaccination aligns with your values, based on the data we have.
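Just to make the “is” side concrete: the kind of statement logic handles well is a mechanical calculation like the one above. Here’s a minimal sketch — note that the 98.5% per-person survival rate is an invented number for illustration, not a figure from any real study:

```python
# A hypothetical calculation: if each of 1,000 vaccinated people
# independently has a 98.5% chance of surviving (an invented number),
# how likely is it that at least 975 of them survive?
from math import comb

def prob_at_least(n: int, k: int, p: float) -> float:
    """Exact binomial probability that at least k of n independent
    trials succeed, each with success probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(prob_at_least(1000, 975, 0.985))
```

Logic (here, plain probability theory) grinds out the number without complaint. What it cannot do is tell you whether the number should matter to you.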
And even if you have good input data and clearly defined goals …
Logic is expensive.
Thinking logically takes time and energy. It simply isn’t possible to think with precise logic about every decision you make. You have to intuit most of your life, in fact. Another complication: Some decisions (e.g., choosing a new car) may permit you to take weeks or months to think them over at your leisure, whereas other decisions (e.g., whether to brake for that brownish blur that’s suddenly racing across the road) may allow you less than a second.
Okay, you say, that’s not so bad: Come up with a logical system for when to spend more time on logic, and when to spend less (or none). Okay, but … what method do I use to create such a system? How do I evaluate its success? How do I apply it in everyday life? How and when do I make changes to the method? Again, intuition will soak into all these areas.
And finally …
Logic is really, really hard.
What if we could somehow live in an “ideal” world, with good inputs, clear goals, and infinite time for thinking? Logic is still incredibly difficult, especially if you happen to be human. Even the most basic of statements is fraught with possible errors. Take this example:
It is likely to rain, therefore I will take an umbrella.
If someone asked me, I’d say that statement is logical enough, by ordinary human standards. But if we’re really serious about thinking logically, there are a million things to consider before we’d pronounce it solid. Things like:
- How likely is it to rain?
- What probability of rain is the threshold for deciding to take an umbrella?
- When is the rain likely to come? (Is there a fixed time? A fixed window of time? A probability distribution? If so, how is it defined?)
- Am I also likely to be outside at the time it’s going to rain? How likely?
- Will the umbrella protect me from getting wet? How likely is it to do so?
- What level of wetness is acceptable? Is the umbrella likely to meet that standard?
- How difficult will it be to get the umbrella? Is it right nearby, or do I need to search for it? How difficult does umbrella-getting need to be before I decide I’d rather get wet? How do I measure difficulty?
- Will the umbrella work? If it doesn’t, what will I do?
And on, and on, and on. And really, even the objections above are generous and oversimplified, because the original statement doesn’t even pretend to follow any sort of truly formal logic.
Again, you might say my questions about the umbrella are silly — but again, that’s your intuition talking. Part of the job of intuition is to bypass the staggering complexity of rigorous logic and produce something that’s good enough. Intuition is necessary. Intuition is what gets us through the day.
There are even more limits and qualifications and things to consider when you talk about logical thinking. You could write hundreds of books on the subject, and people have. My point is simply that logic — even though it’s really, really great — is also woefully inadequate when standing on its own, just as intuition is woefully inadequate on its own. They’re complementary, and if we’re smart, we’ll recognize that, and we’ll understand that “thinking logically” just means “thinking somewhat more logically than usual” — and, for the most part, that’s a good thing.
Maybe, like Mr. Spock, we’re all half-human after all.
Very interesting. You raise a lot of interesting points. But I think, at least to some degree, you are over-thinking this. I think that some of what you are calling “intuition” is the process of your brain asking and answering a lot of those questions without you even realizing it. And some of the other questions are just plain irrelevant. Take your “looks like rain” scenario. You know from experience that no forecast is perfect, so it might rain, might not. You also know from past experience with umbrellas that they can help a lot but are not a perfect solution. Still, taking an umbrella is an easy and inexpensive precaution. The question of where the umbrella is, and how difficult it would be to actually take it, is asked and answered by the subconscious in most cases. Questions like how dry it will keep you, and what is an acceptable degree of wetness, are irrelevant to whether or not you take the umbrella. It may not rain at all, but you are prepared if it does. However, to be logical, a person who has never been in rain and has never used an umbrella may indeed need to answer those questions, and a lot more.
An android, programmed with AI, will use logic in everything it does. But as it learns, it will stop asking the kinds of questions you are suggesting, and make decisions based on more general questions like “what has worked well in the past” and “is there any information that would suggest that this time will be any different?”
The questions are irrelevant, sure, but it’s intuition (derived from experience) that tells you that. You don’t go through and deduce every little thing, because fortunately, you don’t have to. Intuition clears the path for logic to function — decluttering the brain, so to speak.
“An android, programmed with AI, will use logic in everything it does.” Any particular reason you say this? AI doesn’t do this currently, at least not in any meaningful sense.
I am getting far afield from my element, but to me, AI starts with programming, and programming is by definition based on logic. Conclusions drawn by AI may be faulty, but it is still logical. As you said, inputs can be wrong, resulting in faulty conclusions, but it is still logical.
Well … it’s a bit like saying that every .jpg image is based entirely on squares. It’s technically true, because they’re ultimately nothing but pixels. But in a more meaningful sense, .jpg images can be circles, spirals, trees, faces, anything. The square-ish aspect is just an implementation detail; it doesn’t tell you much about how the image really looks.