The Knowing Problem

Image from salon.com

A few days ago, I posted something about what I called the consensus problem: the idea many people have that scientists must completely agree on something in order for it to be accepted.  There is, however, an even deeper issue involved, and that is what it even means to “know” something is true.  When a scientist or group of scientists reports that, indeed, something is true, what does that mean?

I think this is an important thing to discuss because I have often encountered the statement, “Well, you can’t be 100% sure, now can you?”  I find this statement completely ridiculous; can anyone ever be 100% sure of anything?  But, as metaphysical a question as that is, it’s important to understand what “knowing” means in science.

The Knowing Problem

Science generally comes in two flavors: experimental and theoretical.  Most often, some physical phenomenon is observed in the world (by experimentalists), the scientific community struggles to explain it, and a formal framework is developed (by theorists) that can be used to make further predictions.  Take, for example, the Danish scientist Hans Christian Oersted.  During an experiment in the early 1800’s, he happened to have a magnetic compass sitting on a table near a wire.  Completely by accident, he noticed that when the battery connected to that wire was switched on and off, the needle of the compass was deflected from True North (I find the concept of True North amusing, especially considering what I’m talking about).  Turns out, he had serendipitously discovered that an electric current creates a magnetic field.  This hitherto unknown connection between electricity and magnetism led to a revolution in the way physics was done and eventually, nearly a century later, overturned the behemoth of Newtonian mechanics by establishing that the speed of light is the universal speed limit.
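
For what it’s worth, the effect Oersted stumbled onto has a tidy textbook formula (this is the standard magnetostatics result, not anything he wrote down at the time): the magnetic field a distance r from a long, straight wire carrying a current I is

```latex
B = \frac{\mu_0 I}{2 \pi r}
```

With a few amps in the wire and the compass sitting a centimeter or two away, that works out to tens of microtesla, in the same ballpark as Earth’s own field, which is why the needle visibly swings.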

On rarer occasion, someone has a stroke of brilliance and comes up with a theory that can then be established experimentally.  In the early 1900’s, Albert Einstein had the brilliant idea that gravity was a warping of space-time in response to the presence of matter.  This was, needless to say, a revolutionary idea…with no common experience to back it up.  It was determined that, since light travels through space and gravity supposedly altered space, we should expect light to be altered by gravity as well.  A massive object such as the Sun should “bend” the path of light from distant sources as it passes through the Sun’s realm of gravitational control, making those sources appear elsewhere in the sky like a great cosmic mirage.  In 1919, an experiment was performed where the positions of stars near the Sun were observed during a solar eclipse and then compared to their positions without the Sun present.  Indeed, as predicted, they were off by just the right amount to show that their light had bent around the Sun, exactly as Einstein’s General Relativity said it should.  So here we have theory as a precursor to experiment.
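
The prediction in question even boils down to one number.  The standard General Relativity result for how much a light ray is bent when it passes a mass M with closest approach b is

```latex
\theta = \frac{4 G M}{c^{2} b}
```

For light grazing the edge of the Sun that comes out to about 1.75 arc-seconds (roughly twice what a naive Newtonian calculation gives), and that is the “just the right amount” the 1919 eclipse measurements were checking.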

So, in whatever way, a model is presented to explain a particular phenomenon.  The next step is to verify that the model is correct, that it fits Nature.  This is where the “100%” argument comes into play.  We now have to measure something and see if the results fit our predictions.  Measurement, however, is messy.  It is imprecise.  In fact, it is absolutely impossible to measure something to infinite accuracy; that is, it is impossible to know a measured value 100%.

Say, for example, I want to measure the width of the laptop computer I’m writing this on.  How do I do it?  I could estimate it; it’s about as wide as the length of my forearm from my elbow to my wrist.  Not very convincing, nor very precise, since your arm probably isn’t the same length as mine.  So, I rummage around and find a ruler (which, surprisingly, took way longer than expected)…14.125 inches.  Well, the edge was somewhere in between 1/8 and 3/16, but closer to 1/8, so…let’s call it 1/8.  But is that any better than saying it’s about the length of my forearm?  I could get a better ruler, one with divisions down to 1/32 of an inch, but I’d still have the same problem.  Hell, I could take the computer to a lab and use an atomic force microscope to literally count how many atoms across the laptop is.  Would that be any better?  Maybe if I measure at one point, I count 1 billion atoms (fyi, it would be waaaaaaaaaaaay more than a billion), but if I measure somewhere else it’s 1 billion and 5 atoms.  Which is correct?  Maybe I should take the measurement a thousand times and average the values?  What is the width of an atom, anyway?
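
If you actually did take the measurement a thousand times, the standard move is to report the average along with its spread.  Here’s a quick sketch of what that looks like; the numbers are made up for illustration:

```python
import random
import statistics

# Simulate 1000 noisy measurements of the laptop's width (a made-up "true"
# value of 14.125 inches, with a ruler only good to about 1/16 of an inch).
true_width = 14.125
measurements = [random.gauss(true_width, 1 / 16) for _ in range(1000)]

mean = statistics.mean(measurements)
stdev = statistics.stdev(measurements)
std_error = stdev / len(measurements) ** 0.5  # uncertainty of the average

print(f"width = {mean:.4f} +/- {std_error:.4f} inches")
```

The point isn’t the code; it’s that taking more measurements shrinks the uncertainty of the average, but never drives it to zero.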

At some point, this exercise becomes ridiculous; the phrase “good enough for government work” comes to mind.  But, it illustrates my point, there is a difference between the concept in our mind and the actuality of it in the world.  Back in the day, Plato referred to this as a Form.  A Form is the idealization of a thing in the mind that can not be realized in the material world.  It is, Plato thought, the highest form of reality.  We can easily think of a triangle but, in reality, anything that we attempt to construct, however precise, is not as perfect as the triangle we imagine.  Maybe we create a triangle by setting individual atoms down on a surface in straight lines.  If one atom is out of place, the side is “kinked” and we no longer have a triangle.  We can think of the number 4, but can we ever truly have 4 things?  If I say I have 4 cookies (delicious, delicious cookies), what am I counting?  What if one cookie is bigger than the rest, is it more than one cookie?  Maybe I have 4.2 cookies.

This all seems pretty silly, right?  But, in science, it’s important.  A theoretical model is a product of the mind; it is one of Plato’s Forms, if you will.  So, if we measure something in the real world, we have to accept that it will not perfectly fit.  And, like the width of the laptop, we will have to choose the level of accuracy we require for the measurement to be “correct”.  Every piece of equipment scientists use to measure anything has some amount of error: the ruler only has so many divisions, the voltmeter only measures to 3 decimal places, the telescope can only resolve an image to 3 arc-seconds, and so on.  When a measurement is made in science, every data point has what is known as an error bar.  Unlike on most graphs, a data point is not really a point, it is a region; the more precisely we can measure a value, the smaller this region is.  It is never a true point, however, no matter how precisely we measure; a point has, by definition, zero dimension…it is also one of Plato’s Forms.  If we measure 1000 data “points” and the pattern predicted by the theory passes through the regions of, or fits, say, only 100 of the “points”, then the model probably isn’t very good.  If, however, it fits 950 of them, then it’s accurate to say that the model is “correct”.
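
That “how many error bars does the curve pass through” idea is easy to make concrete.  Here’s a toy version, with invented data and an invented model; a real analysis would fold the same idea into a chi-squared goodness-of-fit statistic:

```python
import random

# Toy "theory": y = 2x + 1.
def model(x):
    return 2 * x + 1

# Fake 1000 measurements scattered around the theory curve.  The scatter has a
# standard deviation of 0.5 and each point carries an error bar of +/- 1.0
# (two standard deviations), so the model should land inside roughly 950 of
# the 1000 error bars.
data = []
for i in range(1000):
    x = i / 100
    y = model(x) + random.gauss(0, 0.5)
    data.append((x, y, 1.0))  # (x, measured y, error bar)

hits = sum(1 for x, y, err in data if abs(y - model(x)) <= err)
print(f"model passes through {hits} of {len(data)} error bars")
```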

Good scientists will spend a lot of time minimizing error and accounting for anomalies so that the results can be said to be “true” or “false” to high levels of reliability.  There are many measurements in science that are ridiculously difficult to make and, thus, have a large window of uncertainty.  The mass of the electron (rest mass, before an internet tough guy gives me any shit) is known to be 9.10938215(45)×10⁻³¹ kg.  The (45) at the end is the uncertainty in the last two digits, the part we don’t precisely know.  In other words, we know the mass of the electron to stupid accuracy.  The age of the known universe is (13.798 ± 0.037)×10⁹ years, still pretty good.  But, we know the mass of the electron to about five millionths of a percent (!), around 50 parts per billion, while we only know the age of the universe to about 0.3%…roughly fifty thousand times less precise.  In systems such as the Earth’s climate, values may only be known to 1% or 10% due to the complexity of the system and variables that are hidden from view.  The more we know, the smaller we can make the error.
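
If you want to check that comparison yourself, it’s just two divisions, using the same numbers quoted above:

```python
# Relative (fractional) uncertainties, from the values quoted above.
electron_mass = 9.10938215e-31          # kg
electron_mass_err = 0.00000045e-31      # kg (the "(45)" in the last digits)

universe_age = 13.798e9                 # years
universe_age_err = 0.037e9              # years

electron_rel = electron_mass_err / electron_mass  # ~5e-8, a few millionths of a percent
universe_rel = universe_age_err / universe_age    # ~2.7e-3, about 0.3%

print(f"electron mass known to {electron_rel:.1e} (fractional)")
print(f"universe age known to {universe_rel:.1e} (fractional)")
print(f"ratio: {universe_rel / electron_rel:,.0f}x")  # roughly 50,000
```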

What’s the point?  Telling me that I don’t know something 100% is ridiculous because a) neither do you, b) neither does anyone else, and c) no one ever can.  We have to choose the level of precision at which we say we know something to be correct.  The higher the level of precision, the more accurate and more trusted the value is.  In addition, that measurement is repeated by others; the more measurements that yield the same result, the better.  If a result is reported at what scientists call “5-sigma” significance, the bar particle physicists use to claim a discovery, the odds that it is just a statistical fluke are about 1 in 3.5 million, a few hundred-thousandths of a percent.  And that is, indeed, good enough for government work.
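
For the curious, that 1-in-3.5-million figure isn’t magic; it’s just the one-tailed tail probability of a normal (Gaussian) distribution at 5 standard deviations:

```python
import math

# One-tailed probability of a fluctuation at least 5 standard deviations
# above the mean, assuming Gaussian statistics.
sigma = 5.0
p = 0.5 * math.erfc(sigma / math.sqrt(2))

print(f"p = {p:.2e}")               # ~2.9e-07
print(f"about 1 in {1 / p:,.0f}")   # ~1 in 3,500,000
```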

Comments

  1. My mind is on fire.

  2. Jen

    I think my brain melted… But I can’t be 100% sure! 🙂

    1. kdavenport

      LOL

  3. Ted

    How imprecise of you.

  4. Devan

    I agree or at least I agree with you 99.99999999% which is good enough I suppose.

  5. Pingback: “Gigantic multiplied by colossal multiplied by staggeringly huge…” | gravitonbomb.com
