Truth vs. Fact
How do we distinguish between truth and facts? Why does it matter?
Hello all, welcome to the first edition of my newsletter! If you like what you read, please consider subscribing!
When the term “fake news” burst into popular consciousness in 2017 it was used to describe everything from malicious fabrications to stories with a partisan tilt to conspiracy theories to “analysis I disagree with.” Media literacy practitioners have teased out the different types of misleading information; I believe we also need to tease out the different things we mean when we say a story is true. We tend to use the terms “facts” and “truth” interchangeably, and while there is a fair amount of overlap in that Venn diagram it’s worth making a distinction when we are engaging in discussions where emotions can run high.
What is truth?
I’m not going to wade into the philosophical debates about the nature of truth (as tempting as it may be); I’ll be using the definitions I’ve found most useful when thinking and teaching about these skills.
Facts are objectively verifiable and proven through evidence, while truth speaks to a larger belief system that shapes how we interpret and respond to facts and opinions. Something we hold to be true can't necessarily be proven or disproven, but it cannot be counterfactual.
Fact: 2023 was the hottest year on record (according to data and records going back to 1850).
Truth: Humans need to take significant action to stop and reverse the impact of climate change.
Easy enough, right? If only. We have more access to more facts than we can handle, and there are enough facts that most folks can find at least a few that support their truth. That can lead to interesting, insightful conversations in which people define and defend their own worldviews. It can also lead to frustration and shouting matches. And when we consider that a lot of the facts we encounter have been stripped of context, things get really messy.
In an article in Educational Psychologist, "A review of educational responses to the 'post-truth' condition: Four lenses on 'post-truth' problems," Sarit Barzilai and Clark Chinn identify four educational lenses through which we can look as we grapple with this problem:
Not knowing how to know - Gaps in the skills and knowledge needed to analyze information
Fallible ways of knowing - Cognitive biases, lazy (or motivated) reasoning, and/or misjudging our ability to evaluate information
Not caring about truth (enough) - Being beholden to personal goals and values, and prioritizing point of view over facts; accuracy is not the primary goal
Disagreeing about how to know - Disagreements about how to assess accuracy and who is an authority (I’ll be talking about how we determine authority in a future newsletter)
All four are important to contend with, but for now I’ll be focusing on the first two.
How do we figure out what’s true?
There are concrete things we can do to build the skills to analyze information, and strategies for countering fallible ways of knowing. For many years, educators relied on acronyms and checklists for evaluating sources (you may be familiar with RADCAB, CRAAP, or another similar acronym) but these checklists come up short in a few ways:
They rely on signals of reliability that can be easily gamed
They ignore the influence our emotions, our prior knowledge and experience, and the things we hold to be true have on how we interpret information
As Mike Caulfield explains in A Short History of CRAAP, they are built on evaluation criteria that pre-date the internet - which means they don't take advantage of the interconnected nature of online information
They do not prompt us to engage in the type of critical thinking that is necessary to distinguish facts from truths
Caulfield developed the SIFT heuristics as an alternative to checklist-driven source evaluation.
SIFT focuses less on a good/bad binary and more on establishing the context in which the information was created, along with what Caulfield and Sam Wineburg call "the context of you" (what expertise do you have or need, what are you interested in, what makes the source or claim compelling to you).
We could spend a long time on each step of the SIFT process, but for now I'd like to focus on the first step: Stop. When we stop, we ask ourselves two questions: Do I know what I'm looking at? And what response am I having to it?
Do I know what I’m looking at?
I often hear students say that a source “looks authoritative” or that it’s good “because it has a lot of information” or some other surface-level evaluation. Students struggle to identify what they’re looking at because everything is open in a browser tab - websites, news stories, magazine articles, academic journals, personal narratives, etc. Students don’t have the same clues about what type of source they’re looking at as they would with print sources, which makes it even more challenging for them to develop source literacy. We need to know what we’re looking at in order to figure out if it meets our information needs - if we’re looking for background knowledge, a breaking news story won’t work.
This is also important to figure out because different types of sources require different types of evaluation. The qualities that make an academic journal reliable are different from the qualities we would look for in a personal narrative (I’ll be talking more about this in a future newsletter).
What response am I having to it?
Truth is emotional. You can't separate emotion and bias from the evaluation process, and you probably shouldn't try. Your response to both the source itself and the information you find while fact-checking gives you useful clues about what to pay attention to. What you find compelling or surprising points you toward what deserves attention when you dig deeper, and where to spend your time. Seeing a headline that the wildly popular Stanley cups contain lead probably provokes an emotional response - especially if you use one. Doing a search and discovering that the lead is hidden in a disk that does not come into contact with whatever is in the cup is probably surprising; you may still want to learn more, but the sense of urgency has likely decreased.
The more closely held a truth is, the more likely we will have trouble integrating facts that are (or seem to be) in conflict with that truth. To adapt an example from Matthew Inman’s excellent comic about the backfire effect, our truth about George Washington may be that he was a great man who played a foundational role in the founding of America. And one of the facts we know about Washington is that he had false teeth that were made from ivory, gold, lead and real teeth - from both animals and enslaved humans. That fact is hard to reconcile with what we believe to be true about Washington, but the answer is not to ignore facts or abandon our truth. The answer is to acknowledge those truths (and our own biases) and how they influence what we pay attention to and our interpretation of the facts.
How good are we at fact-checking?
It would be nice if the way to counteract misinformation was as simple as telling people to evaluate news stories themselves before believing and sharing them. If you read that and thought "oh, I've encountered plenty of people who 'did their own research' and it sent them deeper into conspiracy thinking," I hear you. It's a concern shared by researchers. A paper published in the journal Nature found that telling people to search online to evaluate the accuracy of a news story actually increases the chance that they will believe false or misleading articles. Joshua Benton at Nieman Lab has a great write-up of the study and its findings.
There are a couple of possibilities for why asking people to fact-check a news story might result in their becoming more certain a false/misleading story is true.
First, the news stories participants in this study were asked to evaluate were new (24-48 hours old), which means participants were unlikely to come across articles debunking the ones they were checking. Politifact, FactCheck.org, Snopes, and other fact-checking sites play an important role in stopping the spread of misinformation and in evaluating larger trends in misinformation, but they cannot keep up with the volume of information produced every day.
People need the skills to evaluate news stories before fact-checkers can get to them - and before they spread misinformation. This brings us to the second reason people struggle to determine the accuracy of a story (and the one I see playing the most significant role for student researchers): they use bad search terms.
The study in Nature referenced above found that when people used the headline, lede, or URL from the source of misinformation as the basis of their search, at least one unreliable source appeared among the top ten results 77% of the time. Which is alarming! For anyone who teaches search strategies, however, it's probably not that surprising. The search terms we use tend to be influenced both by our own confirmation bias and by anchoring bias (relying too much on the first piece of information we find). One of the news stories used in the Nature study was headlined "U.S. faces engineered famine as COVID lockdowns and vax mandates could lead to widespread hunger, unrest this winter." If you search [u.s. famine covid lockdown] you will get several results that confirm that story. If instead you search [u.s. covid food security] you get a much different set of results.
Search engines return results based on the search terms you use, not based on what facts or truth lie at the heart of your search. When I work with students I tell them to think of (and search with) keywords that are likely to be in their answer, not the words from their question. It’s a small but powerful switch that makes a significant difference in the likelihood that we’ll find multiple perspectives, not just sources that confirm our pre-existing beliefs (or the information in the source we’re looking at).
What does this mean for those challenging conversations?
The most contentious questions we engage with tend to have both lots of facts and lots of truths. We get frustrated when people talk about truth without acknowledging facts, or talk about facts without acknowledging truths. There are strategies we can use to evaluate factual accuracy, but truth requires synthesis, and there's no way to speed-run that process. In addition to establishing facts, we need to make room for the beliefs, values, and experiences that inform our understanding of the truth.
Want to work with me to help your community develop the skills they need to be savvy consumers and creators of information? Get in touch!