Numbers…How Do They Work!?

As usual, I was sitting here drinking coffee and thinking about things this morning…this morning oatmeal was also involved.  When I just sit and stare, I find all sorts of strange things go on in my mind, noise in the internal dialog.  These things take many forms, but they are usually predictable.  One thing that often happens is that I begin to tap out the drum cadence from when I was in marching band in high school; to any of my friends who read this that were also in the band, I assume you do as well.  Sometimes, I find myself staring at objects and internally “highlighting” geometric patterns, like finding Tetris blocks in floor tiles and such.  Other times, I find myself comparing background sounds or smells to completely unrelated things, like how this bird next door makes a two-pitch call that is the exact same sound that repeats over and over again in “Feel Good, Inc.” by The Gorillaz or how the eggs that my wife just cooked smell like a combination of anise seed and vinyl gloves.  It always strikes me as interesting what the brain does when you stop paying attention, the seeming randomness that “bubbles up”, as it were.

Today, I found myself counting.  Counting in French, no less.  This happens a lot, for some reason, like I’m unconsciously reviewing for a test.  I’ll be vacuuming the front room and then I just start counting internally…un, deux, trois, quatre, cinq, six, sept, huit, neuf, dix…no idea why.  It’s never in English; it’s always in French, or whatever other language I can think of numbers in…eins, zwei, drei, vier, fünf, sechs, sieben, acht, neun, zehn.

Any time the French counting ensues, and I become aware of it, I find myself going through a series of questions in my head.  First, why are the “numbers” in French different than in English?  Next, why is arithmetic apparently built into French and not into English?  Next, what does this say about how our brains work?  Finally, what does this say about how the universe works to make our brains function as such?  Now that I have a nifty blog, I decided to actually write this series of events down.

Why are the numbers different?

A lot can be said about different numbering systems and why we have them.  Of course, we now count in base 10; that is, each digit in a number that we write down represents a multiple of some power of 10.  Why do we do this?  Probably because we have ten fingers and we use them to count.  A lot of languages have unique words for one through ten and then riff on those or combine them in some way to make the rest of the numbers, at least up to 100.  There are some interesting questions along the way.  For instance, in English, what makes eleven and twelve so cool that they get their own words, unlike 13 through 19?  In French, we have onze (11), douze (12), treize (13), quatorze (14), quinze (15), and seize (16) before we get to the “teens” like 17, 18, and 19…dix-sept, dix-huit, and dix-neuf.  Wat?
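
Just to make the “each digit is a power of 10” idea concrete, here’s a minimal sketch in Python (the number 1984 is an arbitrary example of my own):

```python
# A written number is just its digits multiplied by descending powers of ten.
n = 1984
digits = [int(d) for d in str(n)]                            # [1, 9, 8, 4]
powers = [10 ** i for i in range(len(digits) - 1, -1, -1)]   # [1000, 100, 10, 1]

rebuilt = sum(d * p for d, p in zip(digits, powers))
print(rebuilt == n)  # True: 1*1000 + 9*100 + 8*10 + 4*1 == 1984
```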

Base 10, of course, isn’t the only option for counting.  If we go with the idea that we use base 10 because we have ten fingers, why not use our toes as well, right?  Let’s count in groups of twenty instead of ten, something called the vigesimal system (rather than the decimal system).  Indeed, some cultures have developed that system, notably the Maya, the Aztec, and several African cultures.  The Mayan calendar is based on blocks of 20 and multiples thereof.
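
Out of curiosity, here’s a toy Python function (my own, nothing standard) that rewrites a number in an arbitrary base, just to see what counting in twenties looks like:

```python
def to_base(n, base):
    """Return the digits of n in the given base, most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        n, remainder = divmod(n, base)
        digits.append(remainder)
    return digits[::-1]

print(to_base(80, 10))   # [8, 0]   -> "eighty"
print(to_base(80, 20))   # [4, 0]   -> four twenties, i.e. quatre-vingts
print(to_base(399, 20))  # [19, 19] -> the largest two-"digit" number in base 20
```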

Another one that pops up is the sexagesimal system, base 60.  It originated way back in the days of ancient Sumer and Babylonia, apparently motivated by economic trade.  The (perhaps apocryphal) story goes that Babylon chose 60 as the base unit of its currency because all of the surrounding city-states had currency in units that could be easily divided into 60 (which is divisible by 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, and 60).  The Egyptians used it a lot as well.  There’s not much left of that system today, except that the Babylonian calendar was situated on a circle (upon which the Sun travelled) that was divided up into 360 days, which is why there are 360° in a circle.  In addition, we still measure time in the sexagesimal system, with 60 seconds in a minute and 60 minutes in an hour.
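
In fact, the one place we all still do base-60 arithmetic without thinking about it is the clock.  A minimal sketch, using nothing fancier than Python’s divmod:

```python
def to_hms(total_seconds):
    """Split seconds into hours, minutes, seconds -- plain base-60 arithmetic."""
    minutes, seconds = divmod(total_seconds, 60)
    hours, minutes = divmod(minutes, 60)
    return hours, minutes, seconds

print(to_hms(7265))  # (2, 1, 5): 7265 seconds is 2 hours, 1 minute, 5 seconds
```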

Something that linguists have pointed out is that a lot of European languages have a vigesimal system above a certain point in counting.  One theory is that the Basque culture, which used a vigesimal system, imprinted its counting technique on Europe, and it was then passed around by the Normans.  In French, for instance, the number 20 is vingt, and the number 80 is quatre-vingts, literally “four twenties”.  In Danish, the vigesimal system is used for numbers between 50 and 99; tresindstyve is the Danish word for 60, which means “three times twenty”.  This brings me to my second point…

Why do some languages have arithmetic built in?

So, why “four twenties” or “three times twenty”?  Of course, the English “eighty” implies some sort of arithmetical difference from “eight”, but it is not explicit in the language.  In French, 21 is given by “vingt et un”, which is “twenty and one”.  In Danish, you have “enogtyve”, which is “one and twenty”.  In English, we say “twenty one”, but in French and Danish, the “and” is explicit.  Here, the word “and” is synonymous with “plus”.  In fact, the plus sign, +, is derived from a bastardization of the Latin “et”, which means “and”, probably from errors in transcription in the days before printing presses.  So, French and Danish have arithmetic built into their numbers in a way that English does not.  Also, it’s interesting that English is Germanic, as is Danish, but the order of twenty and one that we use is the same as the French.  Those damn Normans and their invasion.

As I mentioned before, the French word for 80, “quatre-vingts”, implies the process of multiplication.  Danish is even more ridiculous than French in that their language also has fractions built in.  The Danish number for 50, halvtredsindstyve, means “one-half third times twenty”.  Here, the “3rd one-half” is 2½ (the “1st” is ½ and the “2nd” is 1½).  And, indeed, 2½ times 20 is 50.  How is that convenient?
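
Just to convince myself the arithmetic really is baked in, here’s a toy sketch (a deliberate simplification of my own; real French and Danish number words have plenty of exceptions I’m ignoring) that decomposes numbers the way those languages do:

```python
def french_style(n):
    """Decompose n the way French does for 80-99: scores plus a remainder."""
    twenties, rest = divmod(n, 20)
    return f"{n} = {twenties} x 20 + {rest}"

def danish_style(n):
    """Decompose a Danish-style ten (50-90) as a (possibly half) count of twenties."""
    return f"{n} = {n / 20} x 20"

print(french_style(80))  # 80 = 4 x 20 + 0   ("quatre-vingts")
print(french_style(91))  # 91 = 4 x 20 + 11  ("quatre-vingt-onze")
print(danish_style(50))  # 50 = 2.5 x 20     (halvtredsindstyve, "half-third times twenty")
print(danish_style(60))  # 60 = 3.0 x 20     (tresindstyve, "three times twenty")
```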

So, what is the point of this so far?  Cardinality, the mental concept of number, is universal; the “fiveness” of something is understood by everyone.  Numerality, the way we express cardinality using language, is most definitely NOT universal and is, apparently, completely ridiculous.  Hell, even in English, a wise man once said “Four score and seven years ago…” instead of “eighty seven”.  Also, when do we hyphenate the number and when do we not?  Language is just stupid…

What can we learn about the brain from the way language expresses numbers?

So, we all count differently.  Who cares?  What I find interesting is what that difference implies about the way we learn things, in this case mathematics, and how we internalize data.

Language is a manifestation of the physiology of the brain and is restricted by the way the brain can interpret and understand input.  The psychologist, cognitive scientist, and linguist Steven Pinker, in a book called “The Stuff of Thought” (which is amazing and should be read by all), gives an interesting example involving the way that children learn certain verbs.  He points out a certain class of verbs, known as object-locative and container-locative verbs, and how we use them.

Say you have a process which involves putting an object into another object, such as water into a glass.  There are two ways you can structure this sentence and get the same point across.  You can say “I pour water into the glass”, in which case the word “pour” is object-locative because it acts on the object, water, which is being put into a container, the glass.  We can, however, also say “I fill the glass with water”, where “fill” is container-locative, as it acts on “the glass”, into which “water” is being put.

However, take a similar process, loading a truck with boxes.  One could say “I load the truck with boxes”, but one can also say “I load boxes into the truck”.  In this case, the word “load” can be used both ways.  It is both container and object-locative.  However, one would never say “I fill water into the glass” or “I pour the glass with water”.  It seems that “pour” and “fill” are one-way.

What’s more interesting, as Pinker points out, is that a child learning language will never make that mistake.  A toddler will never say, “Daddy, pour my glass with water!”  They screw up words, for sure, but that structure is always preserved.  Why?

Well, the process of loading and unloading a truck is reversible, presumably due to the nature of the objects being loaded and unloaded: they are solid.  Pouring and filling, however, are usually reserved for operations involving a liquid.  There is a certain implied irreversibility in this process that is not present in the load/unload sense.  The fact that I can even say load/unload implies that the word is different from pour and fill; what sense can we make of fill/unfill or pour/unpour?  Whatever pouring and filling are, you can’t “un” them; you have to do something else.  You fill/pour and then empty.

In addition, this idea applies to any language.  In French, “I load books into the truck” is “Je charge des livres dans le camion”, but “I load the truck with books” is “Je charge le camion avec des livres”.  Indeed, “to load” is “charger” and to unload is “décharger”.  There is not a word for unfill or unpour.

This all implies that the words pour, fill, and load are tied to physical processes in the world that we describe with language that is restricted by the way the process works and the way we gather the information.  Neat.

Now, when we consider the way we use language to describe number, what does that say about the way we work?  Is there something universal in the world about the idea of counting?  Number itself is universal, but is counting?  It doesn’t look like it, given the ridiculous variations in the way we count.  But does the way we count say something about our brains?  Are some people predisposed to, say, mathematics, simply because they have a language that more deeply ingrains the concept than others?  If I had learned French as my native tongue instead of English, would I be better at arithmetic now because I would have basically had to learn arithmetic to count?  Who knows?  I’ve looked around and read some interesting papers from linguistics, mathematics, and neuroscience, but I can’t really find anything substantial.

Humans seem to have an easy time counting up to about 4.  Once we get past that, however, we have difficulty remembering individual pieces of information.  We start to do a process referred to as “chunking”: we assemble the pieces of information into bigger chunks and then remember those.  Since our number systems have roots in practical application, perhaps some of the crazy features come from collecting things together to make them easier to remember?  Maybe we say “four twenties” in French simply because we don’t have to remember another word for 80?  In Swiss French, however, the word “huitante” is used for 80, so who knows.

Like I said, language is stupid…

Heavy Metal

As you may have seen, scientists have now confirmed the existence of element 115, temporarily (and amusingly) named ununpentium.  If you haven’t heard, which wouldn’t be surprising since it doesn’t involve twerking, socialism, gun control/violence, or chemical weapons, then behold this article to find out more.

It’s not at all surprising that 115 exists.  After all, the atomic Tetris game that is the Periodic Table, created by Dmitri Mendeleev in 1869 (a Russian…coincidence?), pretty much guarantees that an element would be found there.  Indeed, 114 and 116 had already been discovered, long enough ago that they have “real” names: flerovium and livermorium.  Personally, I liked ununquadrium and ununhexium, but that’s just me.

So, the question is: why do this?  Why spend the time to find something that is most likely already there and extremely unstable?  After all, these experiments are costly and time-consuming.  The original experiment that was performed to discover 115 in the first place involved bombarding an americium target (LOL element names) with energized calcium nuclei for a solid month!  During this process, they discovered 4 atoms of 115, a fact they only knew from the radiation given off as those atoms decayed, since their lifetime is measured in tens of milliseconds.
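
To get a feel for how fleeting those four atoms were, here’s a back-of-the-envelope sketch of exponential decay.  The half-life here is an assumed, illustrative value (just “tens of milliseconds”), not the measured one:

```python
half_life = 0.050  # seconds -- assumed for illustration, not the measured value
atoms = 4          # the four atoms of element 115 reported in the experiment

def expected_survivors(n0, t, t_half):
    """Expected number of atoms remaining after time t under exponential decay."""
    return n0 * 2 ** (-t / t_half)

for t in (0.05, 0.1, 0.5, 1.0):
    print(f"after {t:4.2f} s: {expected_survivors(atoms, t, half_life):.6f} atoms expected")
```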

The simple answer: because…SCIENCE!

You’ve no doubt heard George Mallory’s famous reply when asked why he wanted to climb Everest: “Because it’s there.”  Finding 115 is kind of like that.

There you are, some mad scientist in an underground bunker, looking at your periodic table.  Your OCD keeps you fixated on the missing square between 114 and 116.  You can’t stop staring.  You MUST fill in the hole.

Now, the actual discovery wasn’t that ridiculous, but it highlights the essence of science, that it is a process to answer questions about Nature.  Science can pretty much be boiled down to the following: “I wonder if X?”, experiment, “Yes/No”.  Simply asking the question “Does element 115 exist?” begs science to answer.

The complex answer: because discovering an element is the ultimate form of creativity

Think about it.  Human beings have an innate creative impulse.  We developed our brains over the eons so that we could build tools and structures of ever-increasing complexity.  Well, nuclear physics is the ultimate Erector Set.

Everything in the universe is constructed from 100-some-odd elements on the Periodic Table.  Take eleven protons, mix them with a few neutrons, and you get sodium, a light, silver-colored metal.  Take 17 protons instead, throw in a few more neutrons, and you get chlorine, a wispy, corrosive, green gas.  But take those two Lego bricks and snap them together and you get table salt, sodium chloride.  Snap sodium together with fluorine instead, the element just above chlorine on the table, and you get sodium fluoride, a key ingredient in toothpaste.  Take a neutron away from that fluorine to make a different isotope, fluorine-18, and now you have the radioactive tracer they use in a PET scan.  You can build anything with the right combination of atoms.  So, who wouldn’t want to add a new piece to the toybox?  It would be like a painter coming up with a hitherto unknown color and painting with it.  Which brings me to the most popular reason…

The capitalist answer: with a new element, we could make new things

Most of the super-heavy elements, everything past uranium on the table, are unstable and disintegrate in a matter of hours, if not seconds.  Needless to say, a material that turns into something else in a few seconds isn’t very useful.  These heavy atoms are unstable because the electrical repulsion between all of those protons starts to overwhelm the nuclear force holding the nucleus together.  They are simply too large to hold themselves together, and they break apart spontaneously, or sometimes due to collisions, into smaller, lighter elements.

There is, however, a theoretical expectation of something called the “island of stability”, a region of the periodic table where the protons and neutrons of super-heavy elements are predicted to settle into closed, especially stable arrangements, letting those elements last for days, even years, rather than seconds.  This “island” is expected to appear around element 120, unbinilium (I love these names).  So, scientists keep pushing the envelope to reach this stable region.

Who cares?  So what if you can make a bar of unbinilium that lasts longer than the Sun?  The reason to care is that we have no idea what kind of fantastic material properties compounds of these new elements could have.  Take, for example, the so-called “noble gases”; they are in the column on the far right of the table, things like neon and argon.  For a long time, they were thought to be completely inert (the term noble gas comes from the idea that they were too aloof to hang out with the other elements) and didn’t form compounds with anything.  However, thanks to the relentless process of science, compounds involving them were eventually formed, and some are very useful.  Xenic acid, for instance, is a dissolved compound of the noble gas xenon that is a fantastic oxidizing agent (essentially, a very powerful cleaner and disinfectant).  It has the benefit that, when it reacts with a material, it doesn’t contaminate the sample, since xenon itself is non-reactive.  This makes it ideal for situations like creating high-end electronics, where contamination would ruin the device.

If we could synthesize compounds of new super-heavy elements, we may be able to create new super-strong materials to build with, new materials for medical imaging and research, new fuels to use in the reactors of the future, or new materials for the next generation of permanent magnets to power electric vehicles.  We really have no idea.  No one could have predicted how the discovery of the properties of silicon would change humanity, so who’s to say there isn’t a better, more amazing version of silicon out there?  (Maybe 117, since it is in a position to be a semi-metal, like silicon…)  Who knows!

That’s why discovering element 115 is important, because discovering new things and learning how to harness them (for better or worse) is what we humans do.  Plus, it’s just awesome…

The Knowing Problem

A few days ago, I posted something about what I called the consensus problem, the idea that many people have that scientists must completely agree on something in order for the idea to be accepted.  There is, however, an even deeper issue involved, and that is what it even means to “know” something is true.  When a scientist or group of scientists reports that, indeed, something is true, what does that mean?

I think that this is an important thing to discuss because I have often encountered the statement, “Well, you can’t be 100% sure, now can you?” in discussions.  I find this statement completely ridiculous; can anyone ever be 100% sure of anything?  But, as metaphysical a question as that is, it’s important to understand what “knowing” means in science.

The Knowing Problem

Science generally comes in two flavors: experimental and theoretical.  Most often, some physical phenomenon is observed in the world (by experimentalists), and then the scientific community struggles to explain it, and a formal framework is developed (by theorists) that can be used to make further predictions and such.  Take, for example, the Danish scientist Hans Christian Oersted.  During an experiment in the early 1800s, he happened to have a magnetic compass sitting on a table near a wire.  Completely by accident, he noticed that when the battery connected to that wire was switched on and off, the needle of the compass was deflected from True North (I find the concept of True North amusing, especially considering what I’m talking about).  It turns out he had serendipitously discovered that an electric current creates a magnetic field.  This hitherto unknown connection between electricity and magnetism led to a revolution in the way that physics was treated and eventually, 100 years later, overturned the behemoth of Newtonian mechanics by establishing that the speed of light is the universal speed limit.

On rarer occasions, someone has a stroke of brilliance and comes up with a theory that can then be established experimentally.  In the early 1900s, Albert Einstein had the brilliant idea that gravity is a warping of space-time in response to the presence of matter.  This was, needless to say, a revolutionary idea…with no common experience to back it up.  It was determined that, since light travels through space and gravity supposedly alters space, we would expect light to be altered by gravity as well.  A massive object such as the Sun should “bend” the path of light from distant sources as that light enters its realm of gravitational control, making those sources appear elsewhere in the sky like a great cosmic mirage.  In 1919, an experiment was performed in which the positions of stars near the Sun were observed during a solar eclipse and then compared to their positions without the Sun present.  Indeed, they were off by just the right amount to show that their light had bent around the Sun, exactly as Einstein’s theory of General Relativity predicted.  So here we have theory as a precursor to experiment.

So, in whatever way, a model is presented to explain a particular phenomenon.  The next step is to verify that the model is correct, that it fits Nature.  This is where the “100%” argument comes into play.  We now have to measure something and see if the results fit our predictions.  Measurement, however, is messy.  It is imprecise.  In fact, it is absolutely impossible to measure something to infinite accuracy; that is, it is impossible to know a measured value 100%.

Say, for example, I want to measure the width of the laptop computer I’m writing this on.  How do I do it?  I could estimate it; it’s about as wide as the length of my forearm from my elbow to my wrist.  Not very convincing, nor precise, since your forearm probably isn’t the same length as mine.  So, I rummage around and find a ruler (which, surprisingly, took way longer than expected)…14.125 inches.  Well, the edge was somewhere in between 1/8 and 3/16, but closer to 1/8, so…let’s call it 1/8.  But is that any better than saying it’s about the length of my forearm?  I could get a better ruler, one that has divisions down to 1/32 of an inch, but I’d still have the same problem.  Hell, I could take the computer to a lab and use an atomic force microscope to literally count how many atoms across the laptop is.  Would that be any better?  Maybe if I measure at one point, I count 1 billion atoms (fyi, it would be waaaaaaaaaaaay more than a billion), but if I measure somewhere else it’s 1 billion and 5 atoms.  Which is correct?  Maybe I should take the measurement a thousand times and average the values?  What is the width of an atom anyway?
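
Taking the measurement many times and averaging is, in fact, basically what scientists do: report the mean along with how much it wobbles.  A minimal sketch with made-up laptop measurements (the numbers are invented for illustration):

```python
import statistics

# Ten pretend measurements of the laptop's width, in inches
widths = [14.125, 14.13, 14.12, 14.14, 14.11, 14.125, 14.13, 14.12, 14.135, 14.125]

mean = statistics.mean(widths)
spread = statistics.stdev(widths)        # scatter of the individual measurements
std_error = spread / len(widths) ** 0.5  # uncertainty of the mean itself

print(f"width = {mean:.3f} +/- {std_error:.3f} inches")
```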

At some point, this exercise becomes ridiculous; the phrase “good enough for government work” comes to mind.  But it illustrates my point: there is a difference between the concept in our mind and the actuality of it in the world.  Back in the day, Plato referred to this as a Form.  A Form is the idealization of a thing in the mind that cannot be realized in the material world.  It is, Plato thought, the highest form of reality.  We can easily think of a triangle but, in reality, anything that we attempt to construct, however precise, is not as perfect as the triangle we imagine.  Maybe we create a triangle by setting individual atoms down on a surface in straight lines.  If one atom is out of place, the side is “kinked” and we no longer have a triangle.  We can think of the number 4, but can we ever truly have 4 things?  If I say I have 4 cookies (delicious, delicious cookies), what am I counting?  What if one cookie is bigger than the rest, is it more than one cookie?  Maybe I have 4.2 cookies.

This all seems pretty silly, right?  But, in science, it’s important.  A theoretical model is a product of the mind; it is one of Plato’s Forms, if you will.  So, if we measure something in the real world, we have to accept that it will not perfectly fit.  And, like the width of the laptop, we will have to choose the level of accuracy we require for the measurement to be “correct”.  Every piece of equipment scientists use to measure anything has some amount of error: the ruler only has so many divisions, the voltmeter only measures to 3 decimal places, the telescope can only resolve an image to 3 arc-seconds, and so on.  When a measurement is made in science, every data point has what is known as an error bar.  Unlike most graphs, a data point is not really a point; it is a region, and the more precisely we can measure a value, the smaller this region is.  It is never a true point, however, no matter how precisely we measure; a point has, by definition, zero dimension…it is also one of Plato’s Forms.  If we measure 1000 data “points” and the pattern predicted by the theory passes through the region, or fits, say, 100 of the “points”, then the model probably isn’t very good.  If, however, it fits 950 of them, then it’s accurate to say that the model is “correct”.
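
Here’s a toy version of that “how many error bars does the theory pass through” check (invented data; a real analysis would use a proper chi-squared fit, but the idea is the same):

```python
def fraction_consistent(data, model):
    """Fraction of data points whose error bars overlap the model prediction."""
    hits = sum(1 for (x, y, err) in data if abs(y - model(x)) <= err)
    return hits / len(data)

# Pretend measurements: (x, measured y, size of the error bar)
data = [(0, 0.1, 0.2), (1, 1.9, 0.3), (2, 4.2, 0.3), (3, 6.1, 0.4), (4, 8.5, 0.4)]

model = lambda x: 2 * x  # the theory says y = 2x
print(fraction_consistent(data, model))  # 0.8 -- four of the five error bars touch the line
```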

Good scientists will spend a lot of time minimizing error and accounting for anomalies so that the results can be said to be “true” or “false” to high levels of reliability.  There are many measurements in science that are ridiculously difficult to make and, thus, have a large window of uncertainty.  The mass of the electron (rest mass, before an internet tough guy gives me any shit) is known to be 9.10938215(45)×10⁻³¹ kg.  The (45) at the end is the uncertainty in the last two digits, the part that is not precisely known.  In other words, we know the mass of the electron to stupid accuracy.  The age of the known universe is 13.798±0.037×10⁹ years, still pretty good.  But, we know the mass of the electron to about five millionths of a percent (!), while we only know the age of the universe to about 0.3%…that’s a factor of roughly fifty thousand times more imprecise.  In systems such as the Earth’s climate, values may only be known to 1% or 10% due to the complexity of the system and variables that are hidden from view.  The more we know, the smaller we can make the error.
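
The electron-versus-universe comparison is just a ratio of relative uncertainties; here’s the arithmetic spelled out, using the two values quoted above:

```python
# Electron rest mass: 9.10938215(45) x 10^-31 kg -> the 45 is the uncertainty
# in the last two digits, i.e. +/- 0.00000045 on 9.10938215.
electron_rel = 0.00000045 / 9.10938215   # ~4.9e-8, about five millionths of a percent

# Age of the universe: 13.798 +/- 0.037 billion years
universe_rel = 0.037 / 13.798            # ~2.7e-3, roughly a quarter of a percent

print(f"electron mass : {electron_rel:.1e} relative uncertainty")
print(f"universe age  : {universe_rel:.1e} relative uncertainty")
print(f"ratio         : about {universe_rel / electron_rel:,.0f} times less precise")
```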

What’s the point?  Telling me that I don’t know something 100% is ridiculous because a) neither do you, b) neither does anyone else, and c) no one ever can.  We have to choose the level of precision at which we say we know something to be correct.  The higher the level of precision, the more accurate and more trusted the value is.  In addition, that measurement is repeated by others; the more measurements that yield the same result, the better.  If a result is then reported at what scientists call “5-sigma” significance, a typical standard for claiming a discovery, the odds that it is just a statistical fluke are about 1 in 3.5 million.  And that is, indeed, good enough for government work.
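
For reference, that “5-sigma” threshold corresponds to a tail probability you can compute directly; here’s a sketch using only Python’s standard library (and assuming well-behaved Gaussian statistics):

```python
import math

def one_sided_p(sigma):
    """Probability of a Gaussian fluctuation at least this many sigma above the mean."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

p = one_sided_p(5)
print(f"p = {p:.2e}")               # ~2.9e-07
print(f"about 1 in {1 / p:,.0f}")   # roughly 1 in 3.5 million
```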