Heavy Metal


As you may have seen, scientists have now confirmed the existence of element 115, temporarily (and amusingly) named ununpentium.  If you haven’t heard, which wouldn’t be surprising since it doesn’t involve twerking, socialism, gun control/violence, or chemical weapons, then behold this article to find out more.

It’s not at all surprising that 115 exists.  After all, the atomic Tetris game that is the Periodic Table, created by Dmitri Mendeleev in 1869 (a Russian…coincidence?), pretty much guarantees that an element would be found there.  Indeed, 114 and 116 had already been discovered, long enough ago that they have “real” names: flerovium and livermorium.  Personally, I liked ununquadium and ununhexium, but that’s just me.

So, the question is: why do this?  Why spend the time to find something that is most likely already there and extremely unstable?  After all, these experiments are costly and time-consuming.  The original experiment that discovered 115 involved bombarding an americium target (LOL element names) with energized calcium nuclei for a solid month!  During this process, they produced 4 atoms of 115, a fact they only knew from the radiation given off as the atoms decayed, since their lifetimes are measured in tens of milliseconds.

The simple answer: because…SCIENCE!

You’ve no doubt heard George Mallory’s famous answer when asked why he wanted to climb Everest: “Because it’s there.”  Finding 115 is kind of like that.

There you are, some mad scientist in an underground bunker, looking at your periodic table.  Your OCD keeps you fixated on the missing square between 114 and 116.  You can’t stop staring.  You MUST fill in the hole.

Now, the actual discovery wasn’t that ridiculous, but it highlights the essence of science: it is a process for answering questions about Nature.  Science can pretty much be boiled down to the following: “I wonder if X?”, experiment, “Yes/No”.  Simply asking the question “Does element 115 exist?” begs science to answer.

The complex answer: because discovering an element is the ultimate form of creativity

Think about it.  Human beings have an innate creative impulse.  We developed our brains over the eons so that we could build tools and structures of ever-increasing complexity.  Well, nuclear physics is the ultimate Erector Set.

Everything in the universe is constructed from 100-some-odd elements on the Periodic Table.  Take eleven protons, mix them with a few neutrons, and you get sodium, a light, silver-colored metal.  Take seventeen protons instead, throw in a few more neutrons, and you get chlorine, a wispy, corrosive, green gas.  But, take those two Lego bricks and snap them together and you get table salt, sodium chloride.  Snap sodium together with fluorine instead, the element just above chlorine on the table, and you get sodium fluoride, the cavity-fighting ingredient in toothpaste.  Remove a neutron from that fluorine to make a different isotope, fluorine-18, and now you have the radioactive tracer they use in a PET scan.  You can build anything with the right combination of atoms.  So, who wouldn’t want to add a new piece to the toybox?  It would be like a painter coming up with a hitherto unknown color and painting with it.  Which brings me to the most popular reason…

The capitalist answer: with a new element, we could make new things

Most of the super-heavy elements, the ones well past uranium on the table, are unstable and disintegrate in a matter of hours, if not seconds.  Needless to say, a material that turns into something else in a few seconds isn’t very useful.  These heavy atoms are unstable because the electric repulsion between their many protons overwhelms the short-range nuclear force holding the nucleus together.  They are simply too large to hold themselves together, and they break apart spontaneously, or sometimes due to collisions, into smaller, lighter elements.

There is, however, a theoretical expectation of something called the “island of stability”, a part of the periodic table where super-heavy nuclei have especially favorable, “closed-shell” arrangements of protons and neutrons that let them last for days, even years, rather than seconds.  This “island” is expected to appear around element 120, unbinilium (I love these names).  So, scientists keep pushing the envelope to reach this stable region.

Who cares?  So what if you can make a bar of unbinilium that lasts longer than the Sun?  The reason to care is that we have no idea what kind of fantastic material properties compounds of these new elements could have.  Take, for example, the so-called “noble gases”; they are in the column on the far right of the table, things like neon and argon.  For a long time, they were thought to be completely inert (the term “noble gas” comes from the idea that they were too aloof to hang out with the other elements) and to form compounds with nothing.  However, thanks to the relentless process of science, compounds involving them were eventually formed, and some are very useful.  Xenic acid, for instance, is a dissolved compound of the noble gas xenon that is a fantastic oxidizing agent (essentially, a very powerful cleaner and disinfectant).  It has the benefit that, when it reacts with a material, it doesn’t contaminate the sample, since xenon itself is non-reactive.  This makes it ideal for situations like fabricating high-end electronics, where contamination would ruin the device.

If we could synthesize compounds of new super-heavy elements, we may be able to create new super-strong materials to build with, new materials for medical imaging and research, new fuels to use in the reactors of the future, new types of material for the next generation of permanent magnets to power electric vehicles.  We really have no idea.  No one could have predicted how the discovery of the properties of silicon would change humanity; who’s to say there isn’t a better, more amazing version of silicon out there?  (Maybe 117, since it sits in a position on the table to be a semi-metal, like silicon…)  Who knows!

That’s why discovering element 115 is important: discovering new things and learning how to harness them (for better or worse) is what we humans do.  Plus, it’s just awesome…

The Knowing Problem


A few days ago, I posted something about what I called the consensus problem: the idea many people have that scientists must completely agree on something in order for it to be accepted.  There is, however, an even deeper issue involved, and that is what it even means to “know” something is true.  When a scientist or group of scientists reports that, indeed, something is true, what does that mean?

I think this is an important thing to discuss because I have often encountered the statement, “Well, you can’t be 100% sure, now can you?” in discussion.  I find this statement completely ridiculous; can anyone ever be 100% sure of anything?  But, as metaphysical a question as that is, it’s important to understand what “knowing” means in science.

The Knowing Problem

Science generally comes in two flavors: experimental and theoretical.  Most often, some physical phenomenon is observed in the world (by experimentalists), and then the scientific community struggles to explain it until a formal framework is developed (by theorists) that can be used to make further predictions.  Take, for example, the Danish scientist Hans Christian Oersted.  During an experiment in the early 1800’s, he happened to have a magnetic compass sitting on a table near a wire.  Completely by accident, he noticed that when the battery connected to that wire was switched on and off, the needle of the compass was deflected from True North (I find the concept of True North amusing, especially considering what I’m talking about).  Turns out, he had serendipitously discovered that an electric current creates a magnetic field.  This hitherto unknown connection between electricity and magnetism led to a revolution in the way physics was treated and eventually, roughly 100 years later, overturned the behemoth of Newtonian mechanics by establishing that the speed of light is the universal speed limit.
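For the curious, the modern textbook result that grew out of discoveries like Oersted’s (a standard formula, not something from his original paper) gives the strength of the magnetic field a distance r from a long, straight wire carrying a current I:

\[ B = \frac{\mu_0 I}{2\pi r} \]

Here \mu_0 is a constant of nature.  The bigger the current and the closer the compass, the bigger the nudge on the needle.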

On rarer occasions, someone has a stroke of brilliance and comes up with a theory that is then established experimentally.  In the early 1900’s, Albert Einstein had the brilliant idea that gravity was a warping of space-time in response to the presence of matter.  This was, needless to say, a revolutionary idea…with no grounding in common experience.  It was determined that, since light travels through space and gravity supposedly altered space, we would expect light to be altered by gravity as well.  A massive object such as the Sun should “bend” the path of light from distant sources as it entered the Sun’s realm of gravitational control, making those sources appear elsewhere in the sky, like a great cosmic mirage.  In 1919, an experiment was performed in which the positions of stars near the Sun were observed during a solar eclipse and then compared to their positions without the Sun present.  Indeed, as predicted, they were off by just the right amount to show that their light had bent around the Sun, exactly as Einstein’s General Relativity demanded.  So here we have theory as a precursor to experiment.
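The predicted effect is tiny but precise.  For starlight just grazing the edge of the Sun, General Relativity gives a deflection angle of

\[ \alpha = \frac{4 G M_\odot}{c^2 R_\odot} \approx 1.75'' \]

about 1.75 arcseconds, twice what a naive Newtonian calculation suggests, and it was this doubled value that the 1919 eclipse measurements confirmed.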

So, in whatever way, a model is presented to explain a particular phenomenon.  The next step is to verify that the model is correct, that it fits Nature.  This is where the “100%” argument comes into play.  We now have to measure something and see if the results fit our predictions.  Measurement, however, is messy.  It is imprecise.  In fact, it is absolutely impossible to measure something to infinite accuracy; that is, it is impossible to know a measured value 100%.

Say, for example, I want to measure the width of the laptop computer I’m writing this on.  How do I do it?  I could estimate it; it’s about as wide as the length of my forearm from my elbow to my wrist.  Not very convincing, nor precise, since your arm probably isn’t the same length as mine.  So, I rummage around and find a ruler (which, surprisingly, took way longer than expected)…14.125 inches.  Well, the edge was somewhere in between 1/8 and 3/16, but closer to 1/8, so…let’s call it 1/8.  But is that any better than saying it’s about the length of my forearm?  I could get a better ruler, one that has divisions down to 1/32 of an inch, but I’d still have the same problem.  Hell, I could take the computer to a lab and use an atomic force microscope to literally count how many atoms across the laptop is.  Would that be any better?  Maybe if I measure at one point, I count 1 billion atoms (fyi, it would be waaaaaaaaaaaay more than a billion), but if I measure somewhere else, it’s 1 billion and 5 atoms.  Which is correct?  Maybe I should take the measurement a thousand times and average the values?  What is the width of an atom, anyway?
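That “measure a thousand times and average” idea is exactly what scientists actually do, and it’s worth seeing why it helps.  Here is a minimal sketch in Python, with every number invented for illustration (including the assumption that a single ruler reading scatters by about 1/16 of an inch):

    import random

    # A minimal sketch, with invented numbers: "measure" the same laptop 1000
    # times with a ruler that's only good to about 1/16 of an inch, then average.
    random.seed(1)
    true_width = 14.125            # the "true" width we can never know exactly
    ruler_sigma = 1.0 / 16.0       # assumed scatter of a single reading, in inches

    readings = [random.gauss(true_width, ruler_sigma) for _ in range(1000)]

    mean = sum(readings) / len(readings)
    var = sum((r - mean) ** 2 for r in readings) / (len(readings) - 1)
    std_error = (var / len(readings)) ** 0.5    # uncertainty of the average

    print(f"width = {mean:.4f} +/- {std_error:.4f} inches")
    # Averaging N readings shrinks the uncertainty by sqrt(N)...but never to zero.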

At some point, this exercise becomes ridiculous; the phrase “good enough for government work” comes to mind.  But it illustrates my point: there is a difference between the concept in our mind and the actuality of it in the world.  Back in the day, Plato referred to this as a Form.  A Form is the idealization of a thing in the mind that cannot be realized in the material world.  It is, Plato thought, the highest form of reality.  We can easily think of a triangle, but, in reality, anything we attempt to construct, however precise, is not as perfect as the triangle we imagine.  Maybe we create a triangle by setting individual atoms down on a surface in straight lines.  If one atom is out of place, the side is “kinked” and we no longer have a true triangle.  We can think of the number 4, but can we ever truly have 4 things?  If I say I have 4 cookies (delicious, delicious cookies), what am I counting?  What if one cookie is bigger than the rest; is it more than one cookie?  Maybe I have 4.2 cookies.

This all seems pretty silly, right?  But, in science, it’s important.  A theoretical model is a product of the mind; it is one of Plato’s Forms, if you will.  So, if we measure something in the real world, we have to accept that it will not perfectly fit.  And, as with the width of the laptop, we have to choose the level of accuracy we require for the measurement to be “correct”.  Every piece of equipment scientists use to measure anything has some amount of error: the ruler only has so many divisions, the voltmeter only measures to 3 decimal places, the telescope can only resolve an image to 3 arc-seconds, and so on.  When a measurement is made in science, every data point has what is known as an error bar.  Unlike on most graphs, a data point is not really a point, it is a region; the more precisely we can measure a value, the smaller this region is.  It is never a true point, however, no matter how precisely we measure; a point has, by definition, zero dimension…it is also one of Plato’s Forms.  If we measure 1000 data “points” and the pattern predicted by the theory passes through, or fits, say, only 100 of those regions, then the model probably isn’t very good.  If, however, it fits 950 of them, then it’s fair to say that the model is “correct”.
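If you like, you can watch this play out in a few lines of Python.  This is a toy sketch, not real data: a made-up straight-line “theory” and 1000 simulated measurements, each quoted with a 2-sigma error bar:

    import random

    # A minimal sketch (all numbers invented): each data "point" is really a
    # region, value +/- an error bar. Count how many regions the model's
    # prediction actually passes through.
    def model(x):
        return 2.0 * x + 1.0          # the theoretical prediction, a Platonic Form

    random.seed(1)
    points = []
    for _ in range(1000):
        x = random.uniform(0.0, 10.0)
        y = random.gauss(model(x), 0.5)   # measurement scattered around the model
        points.append((x, y, 1.0))        # quote a 2-sigma error bar of +/- 1.0

    hits = sum(1 for x, y, err in points if abs(y - model(x)) <= err)
    print(f"model fits {hits} of {len(points)} data regions")
    # With honest 2-sigma error bars, ~95% of regions should contain the model,
    # i.e. roughly 950 out of 1000, just like the example above.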

Good scientists will spend a lot of time minimizing error and accounting for anomalies so that the results can be said to be “true” or “false” to high levels of reliability.  There are many measurements in science that are ridiculously difficult to make and, thus, have a large window of uncertainty.  The mass of the electron (rest mass, before an internet tough guy gives me any shit) is known to be 9.10938215(45)×10^−31 kg.  The (45) at the end is the uncertainty in the last two digits.  In other words, we know the mass of the electron to stupid accuracy.  The age of the known universe is (13.798±0.037)×10^9 years, still pretty good.  But, we know the mass of the electron to about five millionths of a percent (!), while we only know the age of the universe to about 0.3%…that’s a factor of more than fifty thousand times more imprecise.  In systems such as the Earth’s climate, values may only be known to 1% or 10% due to the complexity of the system and the variables that are hidden from view.  The more we know, the smaller we can make the error.
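You can check that arithmetic yourself; here’s a quick sketch using the two quoted values:

    # A minimal sketch of the arithmetic above, using the quoted values.
    electron_mass = 9.10938215e-31    # kg
    electron_unc  = 0.00000045e-31    # the (45) in the last two digits

    universe_age = 13.798e9           # years
    universe_unc = 0.037e9

    rel_e = electron_unc / electron_mass    # ~4.9e-8, five millionths of a percent
    rel_u = universe_unc / universe_age     # ~2.7e-3, about 0.3%

    print(f"electron mass: fractional uncertainty {rel_e:.1e}")
    print(f"universe age:  fractional uncertainty {rel_u:.1e}")
    print(f"the age is {rel_u / rel_e:,.0f} times more imprecise")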

What’s the point?  Telling me that I don’t know something 100% is ridiculous because a) neither do you, b) neither does anyone else, and c) no one ever can.  We have to choose a level of precision when we say we know something to be correct.  The higher the level of precision, the more accurate and more trusted the value is.  In addition, that measurement is repeated by others; the more measurements that yield the same result, the better it is.  If a result is then reported at what scientists call “5-sigma” significance, the usual gold standard for a discovery in physics, the odds that it’s just a statistical fluke are about 1 in 3.5 million.  And that is, indeed, good enough for government work.
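For the statistically curious, the sigma-to-probability conversion is a one-liner (this uses the one-sided Gaussian tail, a common convention, though not the only one):

    from math import erfc, sqrt

    # A minimal sketch: the chance that a Gaussian fluctuation alone produces a
    # result at least this many sigma from expectation (one-sided tail).
    def fluke_probability(sigma):
        return 0.5 * erfc(sigma / sqrt(2.0))

    for sigma in (1, 2, 3, 5):
        p = fluke_probability(sigma)
        print(f"{sigma}-sigma: p = {p:.2e} (about 1 in {1 / p:,.0f})")
    # 5-sigma comes out to roughly 1 in 3.5 million.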

Science is not consensus


Writing about the nature of belief in reference to science the other day started me thinking about other specific issues I have encountered in discussing science with non-scientists.  Coffee also helped this endeavor.

Something I hear in discussion a lot, especially about topics such as climate change and evolution, is the following: “Yeah, well, I read an article by this one guy who says that X is/isn’t true.  I thought you were all in agreement?  I guess you really don’t know, then, do you?”  You’ve probably encountered this argument before as well.  Really, it can be broken into two pieces that can be addressed individually: the consensus problem and the knowing problem.  I’ll address the first today and the second in a later post.

The Consensus Problem

The whole “I thought you were all in agreement” statement falls apart once you understand that science is not a consensus.  In order for a scientific theory to be considered “true”, or, more accurately, for it to be considered the working model of a phenomenon, consensus is not required.  What is required is that the presented theory fits the observed facts.  Can there be more than one model?  Sure, it happens all the time.  However, as more and more data is collected, the observations give more credence to one of the models at the expense of the others.

Take the example of the theory of plate tectonics, the idea that the Earth’s crust is fractured into many smaller plates that float around on the warm, chewy nougat of the mantle due to the convection of heat between the hot core and the cooler surface.  When the theory was first presented at the beginning of the 20th century, no one really took it seriously.  How could the continents drift?  That’s absurd!  In the beginning, the “motor” of convection wasn’t known.  But similarities in fossils and a variety of geologic features seemed to point to the idea that at some time in the past, the continents were all mashed together and then somehow broke apart and moved to their present locations.  Over the years, more and more data was presented to support the idea: the discovery of the mid-ocean ridges, the matching magnetization of rock samples separated by thousands of miles, the obvious jigsaw-like coastlines of the continents themselves.  Eventually, a majority of the scientific community could no longer deny that plate tectonics was the preferred model, and every aspect of earth science changed.

Now, did the entire scientific community just up and decide that the theory was correct in a magical moment when every single earth scientist said, “Yes. Plate tectonics is the way”?  No.  Indeed, there was, particularly in the 1950’s and 1960’s, fierce debate over its validity.  There are probably a few outliers today who still do not accept the theory.  But science chose the model that fit Nature.

Another possible outcome here is that one of the theories turns out to be a special case of something greater.  When Einstein presented the General Theory of Relativity, a new take on the force of gravity and the nature of space and time, the established framework of Newtonian mechanics became a subset of that theory.  Newtonian mechanics made all sorts of assumptions that, it turned out, were false.  In our everyday experience, we would never notice these errors; Newtonian mechanics is a fantastic description of everyday motion.  However, go to the scale of interstellar space and it just isn’t enough to describe what we see.  Indeed, General Relativity was born in part from the inability of Newtonian mechanics to explain how Mercury orbits the Sun.  Again, it’s all about whether or not the theory explains the observed details, not whether every single person agrees with it.  Needless to say, Einstein’s overthrow of 300 years of theory from the great Isaac Newton did not go over well at first.  But, as with plate tectonics, scientists eventually acquiesced: General Relativity was the better model; it more closely fit Nature.
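To put a number on that Mercury problem: General Relativity predicts an extra rotation of Mercury’s orbit, beyond what Newtonian gravity plus the tugs of the other planets can account for, of

\[ \Delta\phi = \frac{6\pi G M_\odot}{c^2\, a\, (1 - e^2)} \]

per orbit, where a and e describe the size and shape of the orbit.  For Mercury, that adds up to about 43 arcseconds per century, precisely the leftover precession astronomers had been unable to explain.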

My point here is this: science does not require a consensus.  It doesn’t need to fit the belief structure of those observing it.  It only needs to fit the observed data.  Say you are teaching a science lab at a high school.  You give each of your 40 students an identical cube of metal and ask them to find out what it is by calculating its density.  Thirty-nine of them tell you it’s iron but one says it’s silver.  What conclusion should we draw from this?  That the concept of density is somehow flawed?  Hardly…
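Since density is just mass divided by volume, the whole lab fits in a few lines of Python.  A minimal sketch, with an invented cube size and masses; the densities (in g/cm^3) are standard reference values:

    # A minimal sketch of the lab exercise: identify a metal cube from its
    # measured mass and volume by matching the nearest reference density.
    DENSITIES = {"aluminum": 2.70, "iron": 7.87, "copper": 8.96, "silver": 10.49}

    def identify(mass_g, volume_cm3):
        density = mass_g / volume_cm3
        return min(DENSITIES, key=lambda m: abs(DENSITIES[m] - density))

    # A 2 cm cube (8 cm^3) of iron weighs about 63 g...
    print(identify(63.0, 8.0))    # -> iron
    # ...but misread the scale as 84 g and you'd swear it was silver.
    print(identify(84.0, 8.0))    # -> silver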

In a later post, I will discuss the knowing problem, an issue with deeper philosophical roots, I suppose.  Until then, I’ll brew some more coffee.