Weaponizing the scientific method

We’re experiencing an unprecedented level of lying in American politics right now, mostly courtesy of our “so-called” president. What is there to do about this? Right now it seems like no amount of calling out bullshit stops Trump from baldly lying, or his supporters from accepting the lies, or congressional Republicans from spinelessly hiding in a corner pretending it’s not happening. Trump is doubling down on his lies and lashing out at the media for exposing him, which threatens our free press and our ability to fight back against his wannabe-authoritarian regime. I’ve talked before about how hopeless this makes everything seem, but also about how we don’t have much of a choice except to fight back. We are still scrambling to figure out the best way to do that.

As a scientist and someone who prefers to rely on logic to make decisions, I believe that one way to do this is by teaching people to get comfortable using the scientific method all the time. It’s not really that hard once you train yourself to think that way and apply it to all kinds of situations, even those that occur outside of the lab. So what is the scientific method? In a nutshell, it’s a set of principles used to conduct evidence-based inquiry. It’s founded on the idea that you can make an observation (“X phenomenon occurs”), formulate a hypothesis to explain the observation (“Y is acting on Z to cause X phenomenon”), and then do tests to find out whether your hypothesis is correct (“If I remove Y, does X phenomenon still occur?”). A good scientist will form multiple possible hypotheses, including null hypotheses (“Y is not acting on Z to cause X” or “Y and Z have nothing to do with X”), and set up tests that will generate observations to address each hypothesis. At the root of the scientific method is thinking of multiple possible scenarios and applying skepticism to all of them, as opposed to just accepting the first one you think of without questioning it further. Oftentimes you come up with a hypothesis that makes logical sense and is the simplest explanation for a phenomenon (i.e. it’s parsimonious), but when you investigate further you find that it isn’t correct at all.
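For readers who like to see the loop in action, the observe–hypothesize–test cycle above can be sketched as a toy simulation. This is a minimal, purely hypothetical illustration (made-up measurements, a simple permutation test), not a prescription for real experimental design: it asks how often relabeling the data at random would produce an effect as large as the one observed, which is exactly a test of the null hypothesis that Y does nothing.

```python
import random

random.seed(0)

# Toy data: measurements of phenomenon X with factor Y present vs. removed.
# (Hypothetical numbers, chosen only to illustrate the method.)
with_y    = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2]
without_y = [3.9, 4.1, 3.8, 4.2, 4.0, 3.7]

observed_diff = sum(with_y) / len(with_y) - sum(without_y) / len(without_y)

# Null hypothesis: Y has no effect, so the group labels are interchangeable.
# Shuffle the labels many times and count how often chance alone yields a
# difference at least as large as the one we actually observed.
pooled = with_y + without_y
n = len(with_y)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:n]) / n - sum(pooled[n:]) / n
    if diff >= observed_diff:
        extreme += 1

p_value = extreme / trials
print(f"observed difference: {observed_diff:.2f}, p = {p_value:.4f}")
```

If the shuffled labels almost never reproduce the observed gap, the null hypothesis looks implausible and the “Y causes X” hypothesis survives the test; if they reproduce it easily, the pattern was probably noise.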

Let’s apply this to politics today. Here’s a basic example: Trump lost the popular vote by nearly 2.9 million votes, but he keeps claiming that this is due to massive voter fraud where millions of people voted illegally. You could just take his claim at face value and not question it. After all, he is the president, so he’s privy to lots of information the general public doesn’t have, and as the leader of the U.S. he should be acting with integrity, right? You could use this logic to form the hypothesis that he is correct and there is evidence of massive voter fraud. Or you could apply a bit of skepticism and formulate an alternative hypothesis: Trump’s claim is false, and there’s no evidence of the massive voter fraud that he cites. How to test this? You can look for the evidence that he claims exists just by Googling… and you won’t actually find any. What you will find are instances of his surrogates claiming that the voter fraud is an established fact, various reputable news agencies and fact checkers debunking the claim (even right-leaning Fox News admits there is no evidence for the claim), and a complete dearth of any official reports that massive voter fraud occurred (which I would link to, but I can’t link to something that doesn’t exist). In support of the claim you will find right-wing conspiracy websites like InfoWars that don’t cite any actual evidence. So based on that inquiry, you could conclude that Trump’s claim is false. It didn’t take much skepticism or effort to address the question, just enough to ask “Can I easily find any solid evidence of this?”

One problem with this strategy: we aren’t doing a great job of teaching people how to think rationally and critically. On a large scale, we would do this by refocusing our education standards around critical thinking (part of what Common Core and the Next Generation Science Standards aim to do, albeit with mixed results in how they teach the scientific method and critical thought). There has been a lot of pushback to this, particularly in conservative areas. Why? One explanation could be general distrust of government and antipathy towards regulations/standards demanded by the federal government. Another could be over-reliance on religion, which fundamentally demands faith in the absence of evidence. I’m not going to wade fully into this murky debate right now; I’ll just say that not all religion is bad, and some religions do teach their followers to think critically and analyze evidence. But some religious groups don’t teach these principles, and they rely much more on encouraging their followers to just believe what they’re told by their pastor (or whatever religious leader), or face moral doom. This dangerously reinforces the idea that it’s okay to just blindly follow certain authority figures without question. It extends past the church doors, to the school teacher slut-shaming high school girls instead of providing them with comprehensive sex education, to the public official who expresses skepticism over climate change despite an abundance of evidence that it’s happening and caused by humans. People believe what these authority figures say. We’re seeing it now, as more than half of Republicans accept Trump’s claim that he really won the popular vote, with the percentage being higher among Republicans with less education.
One of the main reasons Trump supporters give for voting for him is his tough stance on immigration, and they seem to be happy with his controversial travel ban, even though the Department of Homeland Security recently found that people from the countries targeted by the ban pose no extraordinary threat compared to people from other Muslim-majority countries (Trump rejected this report, even though it was ordered by the White House). This underscores the importance of providing people with an opportunity to learn and practice critical thought. Rather than putting in a tiny bit of effort to look for evidence that those claims are true, or considering whether their news source is biased, they blindly trust what Trump says.

Fixing the education system to provide more training in critical thought and use of the scientific method is absolutely necessary long-term, but right now we need a strategy to deal with people who don’t have that training. So if you have conservative friends or family and you’re brave enough to talk politics with them, ask them why they believe things to be true. If they cite something as evidence that isn’t rigorous, ask them why they trust that source. I think it’s possible to do this without talking down to them; most people are probably capable of applying logic in their thinking even if they haven’t been trained to do so. Instead of trying to argue that someone is wrong and back it up by saying “here’s a fact to support my argument and you should believe it because it’s true” (even if that’s accurate), ask them why they think your fact isn’t true, and respectfully lead them to your evidence to back up your argument. Perhaps it won’t work all the time, but if the seed of skepticism can be planted in at least some people, they may be more careful in their voting decisions in the future. If you rely on logic and rational skepticism to make your decisions, you have an obligation to help other people do the same. It’s worth a try.

 

Featured image: Barbara Lee’s town hall, February 18th, Oakland.

Open access and peer review, in a nutshell

Let’s talk about something scientific.

One of the key underpinnings of the scientific process is the ability to share research results with others. Before we share our results with the wider world, we share them with each other to get feedback on our work and suggestions on what other experiments we can do to provide more solid evidence for our claims*. This is a fundamental part of the publishing process and is known as peer review, which is basically just scientists checking each other’s work. Who is more qualified to do this than other scientists in the field? If you’ve looked at an article published in a scientific journal recently, you might notice that 1) it’s very dense, and 2) the techniques used, and often the questions asked, are pretty complex. It would likely be hard for someone with no scientific training, or even a scientist from a different field, to provide useful critique, or to spot things the authors may have overlooked. So when we submit our manuscripts to journals for review, we try to have them reviewed by other scientists in our sub-field who are most familiar with the techniques and questions discussed in the manuscript (and thus, the benefits and pitfalls of what we discuss).

If scientists didn’t rely on peer review, we’d be able to publish just about anything and claim it to be fact, and then it would be up to the general public to critique it and spread the word about whether or not the results are valid. That would just be inefficient, and highly unlikely to succeed. Peer review acts as both a filter and a stamp of approval**.

After a manuscript is peer-reviewed and published in a journal, that information is theoretically available to the public and part of the established scientific knowledge base. But scientific research isn’t truly available to the public unless it’s actually accessible. Many journals are behind what’s known as a paywall, where you have to subscribe in order to access the content beyond the abstract (a summary of what an article is about). This is similar to how the New York Times charges $2.75/week for digital access. The difference is that the subscription costs of many journals are exceedingly high, such that most individual people can’t afford a subscription, let alone multiple subscriptions to different journals. Scientists can usually access these articles because they work for a university or company that shoulders the cost of subscriptions to many journals, but depending on how well-funded your employer is, the cost may still be prohibitive.

Why is this a problem? There’s the obvious issue of forcing published science into a black box that remains mysterious to the general public, which helps to feed the perception that scientists’ work is beyond the reach of “normal people” and blocks public interest in all but the sexiest or weirdest stories. There’s also the fact that a majority of scientific research is paid for by the government, which uses taxpayer money to fund grants. So taxpayer funds are going to facilitate scientific research, but then most taxpayers can’t actually read about the research they paid for. If the research isn’t even made available to all scientists, it hinders future scientific progress (what do scientists have to build on if they don’t know the current state of the field?). This is where the idea of open access comes in.

Some publications are open access, like the PLOS journals and eLife, and these publications do not require a reader to pay to view their articles. Other publications, like Nature and Science, charge a subscription fee. Nature’s fee is $3.90/issue. Perhaps that sounds on par with subscriptions to non-scientific magazines and newspapers, but keep in mind one fundamental difference: you can get the news from multiple sources, so if something important happens, several news agencies will report on the same story, and you don’t necessarily need to pay for it. With scientific publications, the research article will only be published in one journal, so to access all research as it comes out, you’d have to pay for a subscription to many different journals. It adds up quickly, and effectively leads to people paying twice for scientific research (assuming they already pay taxes).
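To make “it adds up quickly” concrete, here is a back-of-the-envelope sketch. The per-issue price is the Nature figure quoted above; the roughly weekly publication schedule and the number of journals someone might follow are assumptions for illustration only.

```python
# Back-of-the-envelope journal subscription costs (illustrative assumptions).
nature_per_issue = 3.90   # Nature's per-issue price quoted above
issues_per_year = 51      # assumes a roughly weekly publication schedule

one_journal = nature_per_issue * issues_per_year
ten_journals = 10 * one_journal  # hypothetical: following ten similarly priced journals

print(f"one journal:  ${one_journal:.2f}/year")
print(f"ten journals: ${ten_journals:.2f}/year")
```

Even at a seemingly modest per-issue price, keeping up with a handful of journals quickly runs into thousands of dollars a year, which is the point: no individual reader realistically pays that, so the research stays behind the paywall.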

Together, peer review and open access are fundamental to scientists’ ability to share our work with the public, demonstrate convincingly that our findings are accurate, and allow non-scientists to engage in scientific research. Attempts to limit this are wholly detrimental to the scientific process and public understanding of science. Last month, the Trump administration ordered a media blackout on several government agencies including the EPA, and also indicated that research from EPA scientists would need to be approved by the Trump administration so as to “reflect the new administration”. The Trump administration is not run by scientists, and it’s unclear who in the administration would be reviewing scientific results. This amounts to unqualified, politically motivated people deciding based on their agenda what science gets published—clearly problematic and fundamentally counter to widely-held standards of scientific integrity.

Regardless of who is in office, scientists should be working to improve peer review and the general public’s access to scientific research. On top of that, we should work to help non-scientists understand the process of doing research and the lengths we go to in order to demonstrate that, to the best of our knowledge, our findings are accurate. Without this line of communication, we will be forever holed up in our ivory towers, piddling away on experiments that will never make as great of an impact as they should, because people either cannot hear or cannot understand us.

~~

*Or to find out if our claims really are accurate; sometimes you do another experiment and it demonstrates that what you thought was an interesting phenomenon is actually just noise, or that it’s less significant than you thought.

**It’s not a perfect system, of course. Peer-reviewed results do get published that are eventually shown to be false upon further testing, or sometimes after it’s discovered that data was fudged. An ideal system would have scientists acting with integrity 100% of the time, but as in every other field, people are sometimes deceptive and do things that undermine the system when it looks like doing so will benefit them. Sometimes we publish results that we think are correct, but later advances in the field or attempts to repeat an experiment show that those results are not. This is an ongoing struggle, and peer review is one of the tools that combats it.

Featured image: Berkeley neighborhood flower.