The Enlightenment and the scientific method

A while ago I wrote about how, when DARPA funded the research that led to distributed computing, and our effort to land a man on the moon drove the search for ever-more-powerful, energy-efficient computing, no one could possibly have envisioned the internet, cell phones, and iPads of today. The message, of course, is that it’s impossible to predict where science will take us. It was an appeal for understanding why basic research is a good thing; most of the time we don’t have an inkling of what will come of it. Frequently, of course, it’s nothing practical. But every so often it’s a cell phone or a personal computer or LED headlights. And it’s the scientific method that gets us there.

Basically, the scientific method is nothing more than a process for finding stuff out in reliably reproducible ways. There are a few basic steps that make up its core:

  • Ask a question
  • Come up with a possible answer to your question (create a hypothesis)
  • Figure out a way to test your hypothesis (conduct an experiment)
  • Evaluate what happened
  • Draw a conclusion
  • Tell others (publish your results)

If done correctly, each cycle of these steps (the last step should be followed by either you or someone else starting over by asking another question) gradually adds to our knowledge of how things work. Each of these steps is important, but to my mind “telling others” is probably the most critical, for what it implies: telling others exactly what you did and how you did it gives them the opportunity to try the same thing for themselves.

Not all that long ago, mankind had no idea what caused earthquakes, volcanoes, or tornadoes, or what your dog is thinking as he watches you get dressed. With the exception of the last, we’ve now got a pretty good idea and can explain them pretty thoroughly (after the fact); accurate prediction is getting closer but still a ways off. But that’s the point: we believe that there IS a way to predict earthquakes, because they behave by rules; we just don’t yet have a clear enough understanding of the rules (or of how to measure them), but it’s coming. Back when we didn’t understand the physics involved, “God caused it” was the go-to explanation.

The Enlightenment was a period in (predominantly) European history when “God caused it” was no longer considered the best answer for phenomena we couldn’t explain. It lasted roughly from the mid-17th century through the end of the 18th (about 1650–1800), and during that time the great thinkers of the era developed the conviction that the universe and everything in it behaved according to rules; it wasn’t just “God’s will.” The realization that there was an explanation for everything was incredibly important and freeing. Once you knew what those rules were, it was possible to figure out what makes things do what they do and, more importantly, what they are going to do next. For the first time in all of history, man was no longer at the mercy of the gods but could begin to predict what was coming. All because, as it turns out, the universe plays by rules.

As we get better and better at understanding how these rules apply, “God did it” or “it’s a miracle” can be used less and less. Assigning events we can’t explain to divine intervention is called the “God of the gaps” argument. It’s considered a logical fallacy, and for good reason: the Enlightenment and an understanding of the scientific method have taught us that we don’t need to fall back on divine intervention; we can figure it out. It would be more accurate to say “we can’t yet explain it” and leave it at that.

This is obviously a threat to many people’s view of the Divine.

About BigBill

Stats: Married male boomer. Hobbies: Hiking, woodworking, reading, philosophy, good conversation.
This entry was posted in Religion and philosophy, Science.
