The Nature of the Scientific Method

Photo by Louis Reed on Unsplash

I have vivid memories of being taught the scientific method back in grade school. At the time, I planned on becoming a scientist, a dream that only ended after a semester of graduate school in astrophysics.

I seem to remember five steps, but like the early universe, the scientific method appears to have undergone some degree of inflation. From what I can tell, there are now eight steps:

Observe → Ask a Question → Gather Information → Form a Hypothesis → Test the Hypothesis → Make Conclusions → Report → Evaluate

I also remember being told this was an iterative process. After you finish the last step, you go back to the beginning and start again. There is no end point to science. Your results give you new questions to ask, new issues to consider. You never expect to get it completely right.

Newton’s Principia was initially published in 1687. Widely regarded as one of the most important works in the history of science, the Principia laid the foundations for classical mechanics, including the laws of gravitation and planetary motion. It remained the definitive explanation for hundreds of years, and even today NASA uses calculations based on Newton’s laws to guide spacecraft through the solar system.

But, of course, Newton’s laws are wrong. On most scales that matter to us, they are extraordinarily good approximations, so much so that we usually can’t even observe an error. But where gravity is powerful, or when things move at a substantial fraction of the speed of light, Newton’s laws break down. Einstein’s genius in the early twentieth century was to find new mathematical models that fixed the errors and laid the foundations for a deeper understanding of the universe.
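
To get a feel for how good the approximation is, here is a minimal Python sketch comparing the Newtonian and relativistic formulas for kinetic energy; the 1 kg mass and the sample speeds are arbitrary illustrative choices. At everyday speeds the two formulas agree to roughly a part in a trillion, and the disagreement only becomes large as the speed approaches that of light.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def newtonian_ke(mass_kg, speed_ms):
    """Classical kinetic energy, (1/2) m v^2."""
    return 0.5 * mass_kg * speed_ms ** 2

def relativistic_ke(mass_kg, speed_ms):
    """Relativistic kinetic energy, (gamma - 1) m c^2."""
    gamma = 1.0 / math.sqrt(1.0 - (speed_ms / C) ** 2)
    return (gamma - 1.0) * mass_kg * C ** 2

# Compare the two predictions for a 1 kg object at a range of speeds.
for fraction_of_c in (1e-6, 0.01, 0.1, 0.5, 0.9):
    v = fraction_of_c * C
    newton = newtonian_ke(1.0, v)
    einstein = relativistic_ke(1.0, v)
    relative_error = abs(einstein - newton) / einstein
    print(f"v/c = {fraction_of_c:g}: Newtonian answer is off by {relative_error:.2e}")
```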

But Einstein’s theories of relativity don’t appear to be the final answer either. On celestial scales, they work amazingly well, and so far no one has been able to find evidence of error. But on extremely small scales, where quantum effects come into play, relativity fails. Scientists have been hunting for generations for a “theory of everything” that can explain events on both the tiniest and most enormous of scales, so far without success.

So does that mean that Newton was a liar? That Einstein was trying to hide the truth from us? That they were both sloppy scientists who couldn’t get anything right?

Well, no. Of course not.

That’s just how science works. You hypothesize, you test, you evaluate, and you repeat. You don’t expect to get it right the first time, maybe not ever. But you keep getting closer and closer, your understanding growing, your theories improving.

If you want to know where Mars will be on any given night, you can use Newton’s laws and know precisely where Mars will be (or as precisely as you need for pretty much any purpose whatsoever). With Einstein’s laws, you can calculate the strength and wavelength of gravitational waves arising from colliding black holes or neutron stars millions of light years away.
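
For a sense of how simple such a Newtonian calculation can be, here is a toy sketch that propagates Mars along an idealized two-body Kepler orbit. The orbital elements are approximate published values, and the model ignores the pull of the other planets, so it is an illustration rather than an ephemeris.

```python
import math

# Approximate orbital elements for Mars (two-body idealization).
SEMI_MAJOR_AXIS_AU = 1.5237   # a
ECCENTRICITY = 0.0934         # e
ORBITAL_PERIOD_DAYS = 686.98  # T

def mars_position(days_since_perihelion):
    """Heliocentric distance (AU) and true anomaly (degrees) of Mars,
    found by solving Kepler's equation M = E - e*sin(E) with Newton's method."""
    mean_anomaly = (2.0 * math.pi *
                    (days_since_perihelion % ORBITAL_PERIOD_DAYS) / ORBITAL_PERIOD_DAYS)
    # Newton-Raphson iteration for the eccentric anomaly E.
    E = mean_anomaly
    for _ in range(10):
        E -= (E - ECCENTRICITY * math.sin(E) - mean_anomaly) / (1.0 - ECCENTRICITY * math.cos(E))
    # Convert the eccentric anomaly to true anomaly and heliocentric distance.
    true_anomaly = 2.0 * math.atan2(
        math.sqrt(1 + ECCENTRICITY) * math.sin(E / 2),
        math.sqrt(1 - ECCENTRICITY) * math.cos(E / 2),
    )
    radius_au = SEMI_MAJOR_AXIS_AU * (1.0 - ECCENTRICITY * math.cos(E))
    return radius_au, math.degrees(true_anomaly)

r, nu = mars_position(200.0)  # 200 days after a perihelion passage
print(f"Mars is {r:.3f} AU from the Sun, {nu:.1f} degrees past perihelion")
```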

Both Newton’s laws and Einstein’s laws break down in certain circumstances. But we can use them with complete confidence where those circumstances are not in play. We don’t just throw up our hands and say, until we have it perfectly right, these theories are useless. They are both enormously useful, and used by scientists all the time.

Usually, the scientific method plays out in laboratories or the halls of universities and technology companies. Far more often than not, the public never sees the process. Studies get published, and out of the public view, one study begets another, which begets another, each building on what has been discovered, as our understanding grows.

Unfortunately, when a study does make it into the general public sphere, the results are far too often portrayed as “the answer”. Scientific studies always have margins of error, along with limitations that are usually called out in the text but are misunderstood (or ignored). Complexity and nuance are disregarded, in favor of a dramatic headline.

Then, when a new study emerges that calls into question the prior one, or qualifies its conclusion, or when that conclusion is not borne out by future events, people may be confused or dismayed.

How can it be that these scientists got it wrong?

If you understand the scientific method, of course, such an event isn’t surprising. In fact, you expect it. But if not, then you may conclude all of science is untrustworthy, or ascribe ulterior motives or incompetence to the scientists.

We’ve seen this most recently in connection with the models used to predict infection and death rates of COVID-19. Early modeling predicted millions of potential deaths in the US alone. Later models have downgraded that number, some initially to the range of 50,000 (which we have now exceeded), others between 100,000 and 250,000.

Does this mean, as some have suggested, that the scientists who produced and published those early results were incompetent, or intended to mislead the public and policymakers for their own ends?

While I suppose that could be the case, little if any evidence has been produced, other than the fact that millions have not yet died in the US from the virus.

Does this mean that the enormous economic suffering caused by policies taken by state governments based on those models was unnecessary? If the early models were flawed, should the results of further modeling be ignored?

I think these questions again evidence a lack of understanding of the scientific method.

Epidemiological modeling is extremely difficult. The characteristics of the SARS-CoV-2 virus are still being determined, and remain subject to much uncertainty. The factors that may enhance or inhibit the spread of the virus are complex, involving the movements of millions of individual people, societal behaviors, the demographics of various populations, the presence or absence of co-morbidities, and other factors that we may not even know of yet. Moreover, the models are only as good as the values input into them. The sources we rely on for data are by nature incomplete, inconsistent and uncertain.
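
To see how sensitive such modeling is to its inputs, here is a toy SIR (susceptible-infected-recovered) sketch, one of the simplest textbook epidemic models and nothing like the far more detailed models used for COVID-19; the population, transmission rate, and recovery rate below are made-up illustrative numbers.

```python
def run_sir(population, beta, gamma, initially_infected=10, days=300):
    """Discrete-time SIR model: beta is the transmission rate per day,
    gamma the recovery rate per day. Returns the peak number infected at once."""
    s = population - initially_infected
    i = initially_infected
    r = 0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

POPULATION = 1_000_000  # illustrative only

# Small changes in the assumed transmission rate move the projection a lot.
for beta in (0.20, 0.25, 0.30):
    peak = run_sir(POPULATION, beta=beta, gamma=0.10)
    print(f"beta = {beta:.2f}: projected peak of ~{peak:,.0f} simultaneous infections")
```

Even in this toy model, a modest change in the assumed transmission rate roughly doubles the projected peak, which is why refining the inputs matters so much.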

Over time, better and more complete data will be available, and the models will improve. Just like with all other scientific investigations, predictions for COVID-19 will always be subject to a margin of error, but they should get better over time.

Science isn’t supposed to be perfect. And science does not dictate policy. That is for our elected officials and the policymakers they engage. But policy should be driven by the best science we have to date, wherever it may lead. Always imperfect, yes. But to refuse to act when the analysis may not be perfect would be to never act.
