What’s so Great About the Truth Anyway?
“The general principle of antifragility, it is much better to do things you cannot explain than explain things you cannot do.” – Nassim Nicholas Taleb
Among the points Scott Adams makes in “How to Fail at Almost Everything and Still Win Big” (previously reviewed here) is that it may be helpful to believe in naive models of causality despite their naiveté. [1] The example he provides is his dog staring at him to convince him to play fetch. Adams suggests that his dog is unaware that Adams is the one who actually decides whether to play or not, but from the dog’s perspective, this doesn’t matter: eventually he gets to play fetch. [2]
Adams goes on to suggest that we ought to be more like his dog and embrace such ignorance when it works even if at an intellectual level we know the causal model is wrong. An example he provides is the old trick of identifying words in your resume you would be willing to get rid of in exchange for $100. No one is actually going to give you the money, but it is still useful if you want a good, parsimonious resume.
Given that I have outed myself as a strict consequentialist, I perhaps ought to embrace this philosophy. I find this difficult.
Clearly, some people are able to do this. I think there probably are some people who decided to go ahead and believe in G-d either because of Pascal’s wager or the empirical data that suggests religious belief leads to longer, happier lives. Personally, though, the idea of deciding to believe a proposition based on something other than the truth of that proposition strikes me as mental sacrilege.
Still, the potential benefits of that sacrilege cannot be ignored. Scott Alexander calmly folds and sets aside his drawers before taking a dump on cognitive behavioral therapy, whose underlying theory is the laughably simplistic “people have problems because they believe wrong things”.
Despite being the most widely practiced theory of psychology out there, it does about as well as the competing theory whose founding belief is “people have problems because they want to have sex with their mothers.” Even so, Alexander felt the need to amend his post with the following disclaimer:
Do not stop going to psychotherapy after reading this post! All psychotherapies, including placebo psychotherapies, are much better than nothing at all (kinda like how all psychiatric medications, including placebo medications, are much better than nothing at all).
Ordinarily, I don’t have a problem with accepting solutions without having a deep understanding of how they work. I can take a drug without knowing how it works or even expecting my doctor to know how it works. What matters is that it does actually work regardless of the mechanism.
Psychotherapy, though, requires you to embrace the theory of mind presented by your therapist. In contrast, aspirin ought to alleviate pain more than a placebo whether you believe in it or not.
So, why bother with truth at all? What’s so great about the truth that you should believe it over falsehood? As a scientist of sorts, I find it uncomfortable to even type such a question, which is an indicator of a problem and a hint at where it lies: Why science? What’s so great about science?
Phrased this way, I can attempt an answer. Science is the best way to build things that are likely to work. If you try non-scientific, non-truth-based ways to calculate the static forces in buildings, you will eventually kill or injure yourself or others. Truth gives you a stable building. Falsehood gives you death.
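To make “truth gives you a stable building” concrete, here is a minimal sketch of what the truth-based version of that calculation looks like: the standard bending check for a simply supported beam loaded at midspan. The beam dimensions and allowable stress below are hypothetical numbers chosen only for illustration.

```python
# Toy "truth-based" check: can a simply supported beam carry a point load at midspan?
# The formulas are the standard ones (M = P*L/4, sigma = M*c/I); the numbers are made up.

def beam_is_safe(load_n, span_m, width_m, height_m, allowable_stress_pa):
    """Return True if the peak bending stress stays under the allowable stress."""
    max_moment = load_n * span_m / 4           # midspan bending moment for a central point load
    i = width_m * height_m ** 3 / 12           # second moment of area, rectangular section
    c = height_m / 2                           # distance from neutral axis to the outer fiber
    max_stress = max_moment * c / i            # peak bending stress
    return max_stress <= allowable_stress_pa

# Hypothetical beam: 5 kN load, 4 m span, 0.1 m x 0.2 m section, 8 MPa allowable stress.
print(beam_is_safe(5_000, 4.0, 0.1, 0.2, 8e6))  # True: 7.5 MPa <= 8 MPa
```

Get the formula wrong, or simply wish the allowable stress were higher than it is, and the check passes while the building does not.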
But as just noted, this isn’t always true. Sure, buildings need to be built based on truth, but does that mean your theory of mind needs to be truth-based as well?
The psychology literature seems to answer “no”. If you have an issue, you are better off believing that your hypnotist will help you than not. Science seems to be telling us that the truth is that sometimes the truth ought to be ignored.
[1] My phrasing, obviously.
[2] I don’t believe Adams is making the correct interpretation in this example. It flatters human sensibilities to think that Adams is truly the one making the decision, but his dog is in fact causing Adams to play fetch with him. The dog purposefully performs an action that produces a result. That’s what deciding is.
So, Cargo Cult behavior is not always bad as long as it’s not trying to influence the physical world.Report
This seems to put Descartes before the horse.Report
10 points. well played.Report
If it makes him happy, there’s no real problem with Scott Adams’s belief that he’s smarter than his dog is.Report
There is an argument that while Just World theory might be wrong, it does seem to help people go through their lives. Believing in untrue things seems to serve as a layer of protection from unrelenting hard reality for many people. Exposure to the truth, variously defined, gets people depressed; it paralyzes them and prevents action. However, believing in unreal things comes with a cost. It can open people up to scammers, con artists, and snake oil merchants. It causes people to do stupid things. There is no good or easy solution.Report
“all psychiatric medications, including placebo medications, are much better than nothing at all”
I think I know what he was trying to say, but boy that statement is wrong as-is. It would be true if there were only one mental problem and one medication to treat it at only one proper dose, with no side effects, and the mental problem could be diminished by the placebo effect. As a long-time OCD’er, I can say that not one of those conditions conforms to reality.Report
The Nocebo Effect is the first thing that I thought of.Report
And there’s also the point that those medications, therapies, etc., cost money and time. If one’s symptoms are bad, but still “sub-clinical” and in some ways manageable, then those costs might not be worth it.Report
Au contraire.Report
Where comprehension and understanding are incomplete, deteriorating, or ossified, perhaps simple, workable models are best.
Nonetheless, it is in the nature of all things which exist to change.
Where the possibility of understanding being augmented exists, negligible margins of error are preferable.Report
I suppose the question is “what is your goal?”
If you want to get what you’ve always gotten, doing what you’ve always done is a pretty good plan.
If, however, what you’re doing isn’t particularly sustainable, there will be a point at which you’re going to have to change. Like, there will not be any choice available *BUT* to change. All the appeals to justice and how things ought to be will be worth exactly as much as they are right now, but it’ll be obvious instead of camouflaged by beautiful daydreams.
At that point, it will be *VERY USEFUL* to have people around who are interested in how stuff works and have an idea of how stuff works that measurably and repeatably seems to approximate how stuff works. You’re going to need those people to hammer out how stuff might work going forward.
But until you have to change? Eh. Might as well do some cargo culting while talking about how nice it would be if things worked the way we wished they could.Report
I have to say that the part of the post that deals with mental illness is shot through with the sort of prejudice and poor understanding that is typical of the broader culture.
CBT works. Dialectical Behavior Therapy, a refinement developed to treat people with Borderline Personality Disorder, is probably better. There is a big body of data that say these techniques help people.
Describing it as “people have problems because they believe wrong things” seems to drip with disdain. It seems to suggest that the idea is ridiculous on its face.
But in fact, one’s beliefs about why something happened, and what it means, have huge consequences for how one feels about it. How could they not?
For instance, your friend was late to the lunch date. Was this because they don’t care about you, or because there was bad traffic? This will have consequences for how you feel about it.
Interestingly enough, there is some endorsement for the thesis of the OP from one of the field’s foremost researchers, Martin Seligman. In one or another of his books (sorry, I don’t remember which), he frames depression as a kind of “learned pessimism” and states that there is also “learned optimism”. And this is what processes like CBT teach one. He also says that people who are slightly depressed are better at predicting outcomes than people who are more optimistic.
But “slightly depressed” is maybe not the worst thing in the world. There are emotional conditions that are much more of an issue. But perhaps this is why CBT seems to hit a ceiling – it makes things better up to a point, then stops helping.
Bringing in other ideas will help at this point. For instance, one can learn to just embrace a bad feeling, have it, and be done with it. But you won’t learn this from CBT, per se.Report
As someone with absolutely no expertise on the subject, I think I agree with what you’re saying about CBT. To add a point (again, admitting my lack of expertise), it strikes me that people tend to focus on the “cognitive” and eschew much consideration of the “behavioral” of CBT. It’s not “only” modifying what or how one believes things, it’s also about modifying specific behaviors.
When I say “people,” I probably mean laypersons like me who have read a self-help book or two, but don’t really know much. I do wonder, though, if many therapists also focus too much on the “cognitive.” (And it’s also possible I’m just mis-grokking CBT.)Report
I dunno. It seems to me some truths (or rather untruths) are more consequential:
“We did not use polluted canal water on this lettuce you are about to eat, and our restaurant washed the lettuce before feeding it to you”
“The airbags in your car do not generate shrapnel that could kill you if they deploy”
I suppose the real answer is that the regulations in place to protect us from things tend more to set up PENALTIES for people who lie than magically protect us from the bad things, though I suppose also the idea is that they serve as a deterrent to doing the cheap thing and lying that you’ve done the safe thing. I don’t know.
I want to believe that truth matters because otherwise I feel like the universe is chaos and nothing matters and so why even bother? But I also recognize that that’s an emotional response – I need to feel like the world is orderly in order to function (and yes, I kind of need the idea of a “just world” even as more and more I realize it’s an illusion). I need to believe that it’s not all random and that every good thing you enjoy can’t just be snatched from you suddenly without warning….which is kind of how some things have gone in my life the past two years, which is also maybe why I’ve become more anxious and rigid about things.
Though I will say on the dog thing: there do seem to be an awful lot of people who operate as if HOPING something will happen to benefit them will make it happen. I deal with this regularly from students who ask me to bend/break rules I have no intention of bending or breaking. (And they keep asking, that’s the thing that makes me crazy)
I’ve never tried CBT, but it always seemed to me more like it was aimed at “identifying thought patterns that lead to distress for the person, and helping the person to change the script” which is different than “you’re sad because you think wrong things”Report
We don’t know what we don’t know. Even if @pinky and @doctor-jay were wrong about the effectiveness of mental health interventions [1], that just means we don’t know of an approach to psychotherapy that’s better than a placebo. It doesn’t mean that we will never find one, and when we do, we want to understand whether it works.
And once we do find that thing, how will we know?
I also think there’s an additional subtle trap in giving yourself leave to believe pleasant lies, which is that you’re going to do that anyway. To quote Richard Feynman, “The easiest person to fool is yourself.”
You don’t want to give yourself more reasons to give in to a potentially extremely dangerous temptation.
[1] I believe they are correct, but I think it’s interesting to assume Adams is right.Report
I haven’t read the original research, but if the scenario revolves around those types of mental illness which are helped by talking things through, and the placebo is an attentive listener who is replying in ways that seem credible, then I’m perfectly comfortable with the apparent results.Report
That Taleb quote sounds like something The Sphynx from Mystery Men would say, which isn’t a good start.
More seriously, I work in a field where people who thought they could “do things they couldn’t explain” have managed to kill hundreds of millions of people. Sometimes it’s better to wait until you have understanding.
The problem I see is that the word “science” is doing two jobs here.
One of those jobs is describing a method of acquiring knowledge, a way of doing things. If I am making beer and I need to move my wort from my brewkettle to my fermenting vessel with minimal spillage, science offers me several options. A siphon, exploiting Bernoulli’s principle, allows me to move the liquid up and out of one vessel and down a hose into another, neatly exploiting gravity to pump the wort into a fermenting vessel. Science as method tells me what to do and why it will work. Properly executed, it yields better results than non-scientific beermaking would — there is less chance my beer will be contaminated if I understand germ theory and take steps to sanitize my equipment; the end product will taste more like my intention when I understand the acidity of my hops and control the temperature of the wort during fermentation.
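For the curious, the ideal-case version of that siphon calculation is short. This is just the Bernoulli/Torricelli estimate, which ignores friction in the hose and so overstates real flow; the drop height and hose diameter below are made-up numbers, not anything from the comment above.

```python
# Ideal siphon estimate via Bernoulli/Torricelli: outlet speed depends only on the
# height drop from the wort surface to the hose outlet. Ignores hose friction, so
# it overestimates real flow; the numbers below are hypothetical.
import math

G = 9.81  # gravitational acceleration, m/s^2

def siphon_flow_lpm(drop_m, hose_diameter_m):
    """Ideal flow rate in liters per minute for a given height drop and hose diameter."""
    v = math.sqrt(2 * G * drop_m)                # Torricelli: v = sqrt(2*g*h)
    area = math.pi * (hose_diameter_m / 2) ** 2  # hose cross-sectional area
    return v * area * 1000 * 60                  # m^3/s -> liters per minute

# Hypothetical setup: outlet 0.5 m below the wort surface, 9.5 mm (3/8") hose.
print(round(siphon_flow_lpm(0.5, 0.0095), 1))    # roughly 13 L/min in the ideal case
```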
But another job science does is cultural, mythological. In this sense, science as myth offers a lens through which I view and understand the world. I understand why the mash is boiled, why the yeast transforms my wort into delicious life-giving beer. That dovetails into the method but is a separate way of thinking about what’s going on. A non-scientific beermaker will understand the process of transforming water into mash into wort into beer differently; there will be other explanations for it, like the spirit of Dionysus responding to my prayer, or the gradual condensation of heavier humours away from lighter humours in my liquid. But the result makes sense to the non-scientific beermaker: you still do certain things, and then other things happen because of what you’ve done, and soon enough you’ve created beer.
This is a bit hard for some to appreciate because we in the industrialized West of the 21st century, and its cultural satellites elsewhere in the world, have all very thoroughly bought into the materialistic, rationalistic myth of science. You don’t have to understand science at any significant level to accept it as a mythology: cause and effect are impersonal, universal, knowable, and manipulable. The results of this world view are plain to us: we call these products of scientific process things like “technology” and “medicine.” And in that myth, “truth” has inherent value; it is in a way the telos of the scientific process.
It is hard for us to understand that our ancestors did not subscribe to this myth, and understood the world as a battleground between angels and demons, for instance, and were comfortable enough with accepting this as simply the cold, hard reality of the universe whether you liked it or not, just like we accept that the impersonal and sometimes non-obvious laws of physics govern everything we perceive, including the manner in which our bodies and minds engage in the act of perception. And in a non-scientific world view, “truth” may not mean the same thing as it does to the modern scientific-myth thinker. Nor is it necessarily the telos of that world view; one might value the salvation of one’s eternal soul or attaining proximity to the Gods higher than one values understanding why it is that the apple falls down from the tree towards the ground.
It may be the case that various kinds of non-scientific mythological outlooks on the world yield, on net, greater happiness than a scientific mythology. Hard to say; it’s hard to shed the intellectual superstructure of one’s own mythology.Report
I’d also say that “science” is often used to refer to a community (or communities) of experts who supposedly come to a consensus on certain matters and through those instances of consensus speak on what is “true” and what isn’t. Or, as the members of that community might put it, speak on what is attested and what isn’t, or what is disproved and what isn’t, or which theories better explain known phenomena better than other theories. In that sense, “science” is used as “authority.”Report
This is a trippy post. Good thinking and pondering makes me giddy inside. Love it!
Science as a descriptor of the physical world assumes it is itself a sort of “unified field theory” – able to describe and know everything. But following a rigorous intellectual life that precludes any other imagination of the world does not lead to more fulfillment. It will not necessarily make the world a better place. Science, after all, can cure cancer, but it can also destroy life as we know it.
Shall we abandon Truth then? We might also ask, “Shall we abandon love?” Folks firmly answering no to the first might get a little queasy answering no to the second. To allow our imaginations to run free, we cannot cling too tenaciously to a rigid, clinical definition of truth. I love science and spend half my day buried in quantitative studies. But there is more to life than is dreamt of in your philosophies, dear your’ratio.Report
There was an essay at Slate Star Codex a few months back that I revisited a couple days ago that this essay is having me chew on again.
It’s the two different ways to deal with disagreements: Conflict vs. Mistake. While the essay is discussing things in reference to Marxism, it makes a pretty interesting distinction between Conflict Theory and Mistake Theory.
When it comes to the issue of “what’s so great about the truth?”, it comes down to the issue of what you’re shooting for. I’m someone who thinks that we can asymptotically approach figuring out how stuff works. Whether “stuff” refers to really repeatable stuff like physics or less repeatable stuff like people, it’s important to get the truth because the better we get at this stuff, the better we *CAN* get at this stuff.
We might figure out something that works but completely misunderstand why in the short term. Imagine what we’d be able to do if we had, at our fingertips right now, the understanding that they’ll be teaching to their kids. I sigh in envy.
For conflict theory, though, sometimes science needs to be whipped into shape a bit before we have the knowledge that we’ll eventually have. Sometimes science needs to catch up to the stuff we know socially. Sometimes we have to shame or shun (or worse) the people who use science as a rhetorical weapon. And when they make appeals to their limited truth, we have to make appeals to greater truths.
To get to the point of Marxism, for example, there are a number of examples of Marxist Governments that ended up doing some pretty awful things for reasons that don’t really make a lot of sense if you assume mistake theory. If you assume conflict theory, though, they make perfect sense.
If you don’t hit your targets for grain production, it must be hoarders or wreckers (or both) who are undercutting the program. It’s the people who aren’t helping enough. We know that this plan will work, we can see the Capitalists doing big grain numbers… why aren’t we doing big numbers? It must be because of saboteurs. Throw them into the gulag and make them work up there. Look at the USSR. Look at China. Hell, look at Venezuela. Why isn’t this working? It’s because people aren’t buying in. They’re not helping. They’re hoarding. They’re wrecking. If things get bad enough that stuff collapses and the country has to remake itself with capitalism this time?
Well, we know that that old incarnation wasn’t *REAL* socialism, it wasn’t *REAL* Marxism.
The real version has people buying in. It doesn’t have wreckers.
Mistake theory is useful for explaining why socialism that looks real enough in 2012 will end up being not the real kind in hindsight in 2018.
But the things that come out of a Mistake Theorist’s mouth are also the things that a saboteur would say…
So what’s the use of the truth?
Well, what’s your goal? What’s your theory of why someone might say something that you know, deep in your heart, isn’t true?Report
I have long had a similar thought about the 10,000 Hours hypothesis.
The data clearly proves the 10,000 Hours hypothesis is faulty. However, I have no doubt in my mind that those who embrace the myth and so take time and effort to practice being better at X are more likely to be successful in X than those who mock them for such beliefs.Report
One of the examples he used was Mozart. I’m pretty sure 99.9999% of humanity could study music their entire lives and never write anything as good as Eine Kleine Nachtmusik.
https://www.youtube.com/watch?v=Qb_jQBgzU-IReport