Sir Isaac Newton: “I Can Calculate the Motions of the Planets, but I Cannot Calculate the Madness of Men”

Isaac Newton, the most incisive mind in the history of science, reportedly uttered that sentiment about human nature. Why would he express such negativity about his fellow humans? Newton’s scientific greatness stemmed from his ability to see well beyond human horizons. His brilliance was amply demonstrated in his great book, Philosophiae Naturalis Principia Mathematica, in which he logically constructed his “system of the world” using mathematics. The book’s title translates from Latin as Mathematical Principles of Natural Philosophy, often shortened to “the Principia” for convenience.

The Principia is the greatest scientific book ever published. Its enduring fame reflects Newton’s ground-breaking application of mathematics, including aspects of his then-fledgling calculus, to the seemingly insurmountable difficulties of explaining motion physics. An overwhelming challenge for the best mathematicians and “natural philosophers” (scientists) in the year 1684 was to demonstrate mathematically that the planets in our solar system should revolve around the sun in elliptically shaped orbits as opposed to circles or some other geometric path. The fact that they do move in elliptical paths was carefully observed by Johannes Kepler and noted in his 1609 masterwork, Astronomia Nova.

In 1687, Newton’s Principia was published after three intense years of effort by the young, relatively unknown Cambridge professor of mathematics. Using mathematics and his revolutionary new concept of universal gravitation, Newton provided precise justification of Kepler’s laws of planetary motion in the Principia. In the process, he revolutionized motion physics and our understanding of how and why bodies of mass, big and small (planets, cannonballs, etc.), move the way they do. Newton did, indeed, as he stated, show us in the Principia how to calculate the motion of heavenly bodies.

In his personal relationships, Newton found dealing with people and human nature to be even more challenging than the formidable problems of motion physics. As one might suspect, Newton did not easily tolerate fools and pretenders in the fields of science and mathematics – “little smatterers in mathematicks,” he called them. Nor did he tolerate much of basic human nature and its shortcomings.

 In the Year 1720, Newton Came Face-to-Face with
His Own Human Vulnerability… in the “Stock Market!”

 In 1720, Newton’s own human fallibility was clearly laid bare as he invested foolishly and lost a small fortune in one of investing’s all-time market collapses. Within our own recent history, we have suffered through the stock market crash of 1929 and the housing market bubble of 2008/2009. In these more recent “adventures,” society and government had allowed human nature and its propensity for greed to over-inflate Wall Street to a ridiculous extent, so much so that a collapse was quite inevitable to any sensible person…and still it continued.

Have you ever heard of the great South Sea Bubble in England? Investing in the South Sea Company – a government-backed trading and finance venture based in London – became a favorite pastime of influential Londoners in the early eighteenth century. Can you guess who found himself caught up in the glitter of potential investment returns only to end up losing a very large sum? Yes, Isaac Newton was that individual, along with thousands of others.

It was this experience that occasioned the remark about his own inability to calculate the madness of men (including himself)!

Indeed, he should have known better than to re-enter the government-sponsored South Sea enterprise after initially making a tidy profit from an earlier investment in the stock. Newton re-invested – heavily – in the South Sea offering for the second time as the bubble neared its peak and just prior to its complete collapse. Newton lost 20,000 English pounds (roughly three million dollars in today’s valuations) when the bubble suddenly burst.

Clearly, Newton’s comment, which is the theme of this post, reflects his view that human nature is vulnerable to fits of emotion (like greed, envy, ambition) which in turn provoke foolish, illogical behaviors. When Newton looked in the mirror after his ill-advised financial misadventure, he saw staring back at him the very madness of men which he then proceeded to rail against! Knowing Newton through the many accounts of his life that I have studied, I can well imagine that his financial fiasco must have been a very tough pill for him to swallow. Many are the times in his life that Newton “railed” to vent his anger against something or someone; his comment concerning the “madness of men” is typical of his outbursts. Certainly, he could disapprove of his fellow man for fueling such an obvious investment bubble. In the end, and most painful for him, was his realization that he had paid a stiff price for foolishly ignoring the bloody obvious. For anyone who has risked and lost on the stock market, the mix of feelings is well understood. Even the great Newton had his human vulnerabilities – in spades, and greed was one of them. One might suspect that Newton, the absorbed scientist, was merely naïve when it came to money matters.

That would be a very erroneous assumption. Sir Isaac Newton held the top-level government position of Master of the Mint in England during those later years of his scientific retirement – in charge of the entire coinage of the realm!

 

For more on Isaac Newton and the birth of the Principia click on the link: https://reasonandreflection.wordpress.com/2013/10/27/the-most-important-scientific-book-ever-written-conceived-in-a-london-coffee-house/

Sir Humphry Davy: Pioneer Chemist and His Invention of the Coal Miner’s “Safe Lamp” at London’s Royal Institution – 1815

Among the many examples to be cited of science serving the cause of humanity, one story stands out as exemplary. That narrative profiles a young, pioneering “professional” chemist and his invention which saved the lives of thousands of coal miners while enabling the industrial revolution in nineteenth-century England. The young man was Humphry Davy, who quickly rose to become the most famous chemist/scientist in all of England and Europe by the year 1813. His personal history and the effects of his invention on the growth of “professionalism” in science are a fascinating story.

The year was 1799, and a significant event had occurred. The place: London, England. The setting: The dawning of the industrial revolution, shortly to engulf England and most of Europe. The significant event of which I speak: The chartering of a new, pioneering entity located in the fashionable Mayfair district of London. In 1800, the Royal Institution of Great Britain began operation in a large building at 21 Albemarle Street. Its pioneering mission: To further the cause of scientific research/discovery, particularly as it serves commerce and humanity.


The original staff of the Royal Institution was tiny, headed by its founder, the notable scientist and bon vivant Benjamin Thompson, also known as Count Rumford. Quickly, by 1802, a few key members of the founding staff, including Rumford, were gone, and the fledgling organization found itself in disarray and close to closing its doors. Just one year earlier, in 1801, two staff additions had materialized, men who were destined to make their scientific marks in physics and chemistry while righting the foundering ship of the R.I. by virtue of their brilliance – Thomas Young and the object of this post, a young, relatively unknown, pioneering chemist from Penzance, Cornwall: Humphry Davy.

By the year 1800, the industrial revolution was gaining momentum in England and Europe. Science and commerce had already begun to harness the forces of nature required to drive industrial progress rapidly forward. James Watt had perfected the steam engine, whose motive horsepower was bridled and serving the cause by the year 1800. The looming industrial electrical age was to dawn two decades later, spearheaded by Michael Faraday, the most illustrious staff member of the Royal Institution, ever, and one of the greatest physicists in the history of science.

In the most unlikely of scenarios at the Royal Institution, Humphry Davy interviewed and hired the very young Faraday as a lab assistant (essentially lab “gofer”) in 1813. By that time, Davy’s star had risen as the premier chemist in England and Europe; little did he know that the young Faraday, who had less than a grade-school education and who worked previously as a bookbinder, would, in twenty short years, ascend to the pinnacle of physics and chemistry and proceed to father the industrial electrical age. The brightness of Faraday’s scientific star soon eclipsed even that of Davy, his illustrious benefactor and supervisor.

For more on that story click on this link to my previous post on Michael Faraday: https://reasonandreflection.wordpress.com/2013/08/04/the-electrical-age-born-at-this-place-and-fathered-by-this-great-man/

Wanted: Ever More Coal from England’s Mines 
at the Expense of Thousands Lost in Mine Explosions

Within two short years of obtaining his position at the Royal Institution in 1813, young Faraday found himself working with his idol/mentor Davy on an urgent research project – a chemical examination of the properties of methane gas, or “fire damp,” as it was known by the “colliers,” or coal miners.

The need for increasing amounts of coal to fuel the burgeoning boilers and machinery of the industrial revolution had forced miners deeper and deeper underground in search of rich coal veins. Along with the coal they sought far below the surface, the miners encountered larger pockets of methane gas which, when exposed to the open flame of their miners’ lamps, resulted in a growing series of larger and more deadly mine explosions. The situation escalated to a national crisis in England, prompting numerous appeals for help from the colliers and from national figures.

By 1815, Humphry Davy at the Royal Institution had received several petitions for help, one of which came from a Reverend Dr. Gray from Sunderland, England, who served as a spokesman/activist for the colliers of that region.

Davy and the Miner’s Safe Lamp:
Science Serving the “Cause of Humanity”

Working feverishly from August into October, 1815, Davy and Faraday produced what was to become known as the “miner’s safe lamp,” an open-flame lamp designed so that its flame would not ignite the pockets of methane gas found deep underground. The first announcement of Davy’s progress and success in his work came in this historic letter to the Reverend Gray dated October 30, 1815.


The announcement heralds one of the earliest, concrete examples of chemistry (and science) put to work to provide a better life for humanity.

Royal Institution
Albermarle St.
Oct 30

 My Dear Sir

                               As it was in consequence of your invitation that I endeavored to investigate the nature of the fire damp I owe to you the first notice of the progress of my experiments.

 My results have been successful far beyond my expectations. I shall inclose a little sketch of my views on the subject & I hope in a few days to be able to send a paper with the apparatus for the Committee.

 I trust the safe lamp will answer all the objects of the collier.

 I consider this at present as a private communication. I wish you to examine the lamps I had constructed before you give any account of my labours to the committee. I have never received so much pleasure from the results of my chemical labours, for I trust the cause of humanity will gain something by it. I beg of you to present my best respects to Mrs. Gray & to remember me to your son.

 I am my dear Sir with many thanks for your hospitality & kindness when I was at Sunderland.

                                                              Your….

                                                                             H. Davy

This letter is clearly Davy’s initial announcement of a scientifically based invention which ultimately had a pronounced real and symbolic effect on the nascent idea of “better living through chemistry” – a phrase I recall from early television ads run by a large industrial company like DuPont or Monsanto.


In 1818, Davy published his book on the urgent, but thorough scientific researches he and Faraday conducted in 1815 on the nature of the fire damp (methane gas) and its flammability.


Davy’s coal miner’s safety lamp was the subject of papers presented by Davy before the Royal Society of London in 1816. The Royal Society was, for centuries since its royal charter from King Charles II in 1662, the foremost scientific body in the world. Sir Isaac Newton, the greatest scientific mind in history, presided as its president from 1703 until his death in 1727. The Society’s presence and considerable influence are still felt today.

Davy’s safe lamp had an immediate effect on mine explosions and miner safety, although there were problems which required refinements to the design. The first models featured a wire gauze cylinder surrounding the flame chamber; the gauze conducted heat away from the flame, keeping the surrounding air/methane mixture below its ignition temperature. This approach took advantage of the flammability characteristics of methane gas which had been studied so carefully by Davy and his recently hired assistant, Michael Faraday. Ultimately, the principles of the Davy lamp were refined sufficiently to allow the deep-shaft mining of coal to continue in relative safety, literally fueling the industrial revolution.

Humphry Davy was a most unusual individual – as much poet and philosopher as scientist, for all his considerable scientific talents. He was close friends with and a kindred spirit to the poets Coleridge, Southey, and Wordsworth. He relished rhetorical flourish and exhibited a personal idealism in his earlier years, a trait on open display in the letter to the Reverend Gray, shown above, regarding his initial success with the miner’s safe lamp.

“I have never received so much pleasure from the results of my chemical labours, for I trust the cause of humanity will gain something by it.”

As proof of the sincerity of this sentiment, Davy refused to patent his valuable contribution to the safety of thousands of coal miners!

Davy has many scientific “firsts” to his credit:

-Experimented with the physiological effects of the gas nitrous oxide (commonly known as “laughing gas”) and first proposed it as a possible medical/dental anesthetic – which it indeed became decades later, in the 1840s.

-Pioneered the new science of electrochemistry using the largest voltaic pile (battery) in the world, constructed for Davy in the basement of the R.I. Alessandro Volta first demonstrated the principles of the electric pile in 1800, and within two years, Davy was using his pile to perfect electrolysis techniques for separating and identifying “new” fundamental elements from common chemical compounds.

-Separated/identified the elements potassium and sodium in 1807, soon followed by others such as calcium and magnesium.

-In his famous, award-winning Bakerian Lecture of 1806, On Some Chemical Agencies of Electricity, Davy shed light on the entire question concerning the constituents of matter and their chemical properties.

-Demonstrated the “first electric light” in the form of an electric arc-lamp which gave off brilliant light.

-Wrote several books including Elements of Chemical Philosophy in 1812.

In addition to his pioneering scientific work, Davy’s heritage still resonates today for other, more general reasons:

-He pioneered the notion of “professional scientist,” working, as he did, as paid staff in one of the world’s first organized/chartered bodies for the promulgation of science and technology, the Royal Institution of Great Britain.

-As previously noted, Davy is properly regarded as the savior of the Royal Institution. Without him, its doors surely would have closed after only two years. His public lectures in the Institution’s lecture theatre quickly became THE rage of established society in and around London. Davy’s charismatic and informative presentations brought the excitement of the “new sciences” like chemistry and electricity front and center to both ladies and gentlemen. Ladies were notably and fashionably present at his lectures, swept up by Davy’s personal charisma and seduced by the thrill of their newly acquired knowledge… and enlightenment!


The famous 1802 engraving/cartoon by satirist/cartoonist James Gillray
Scientific Researches!….New Discoveries on Pneumaticks!…or…An
Experimental Lecture on the Power of Air!

This very famous hand-colored engraving from 1802 satirically portrays an early public demonstration in the lecture hall of the Royal Institution of the powers of the gas, nitrous oxide (laughing gas). Humphry Davy is shown manning the gas-filled bellows! Note the well-heeled gentry in the audience including many ladies of London. Davy’s scientific reputation led to his eventual English title of Baronet and the honor of Knighthood, thus making him Sir Humphry Davy.

The lecture tradition at the R.I. was begun by Davy in 1801 and continued on for many years thereafter by the young, uneducated man hired by Davy himself in 1813 as lab assistant. Michael Faraday was to become, in only eight short years, the long-tenured shining star of the Royal Institution and a physicist whose contributions to science surpassed those of Davy and were but one rank below the legacies of Galileo, Newton, Einstein, and Maxwell. Faraday’s lectures at the R.I. were brilliantly conceived and presented – a must for young scientific minds, both professional and public – and the Royal Institution in London remained a focal point of science for more than three decades under Faraday’s reign, there.


The charter and by-laws of the R.I. published in 1800 and an admission ticket to Michael Faraday’s R.I. lecture on electricity written and signed by him: “Miss Miles or a friend / May 1833”

Although once again facing economic hard times, the Royal Institution exists today – in the same original quarters at 21 Albemarle Street. Its fabulous legacy of promulgating science for over 217 years would not exist were it not for Humphry Davy and Michael Faraday. It was Davy himself who ultimately offered that the greatest of all his discoveries was …Michael Faraday.

Charles Darwin’s Journey on the Beagle: History’s Most Significant Adventure

In 1831, a young, unknown, amateur English naturalist boarded the tiny ship HMS Beagle and embarked, as naturalist and companion to her captain, on a perilous, five-year journey around the world. His observations and the detailed journal he kept of his various experiences in strange, far-off lands would soon revolutionize man’s concept of himself and his place on planet earth. Darwin’s revelations came in the form of his theory of natural selection – popularly referred to as “evolution.”

[Image: HMS Beagle in the Galapagos – painting by John Chancellor]

Since the publication of his book On the Origin of Species in 1859, which revealed to the scientific community the startling conclusions about all living things he had drawn from his voyage journal, Darwin has rightfully been ranked in the top tier of great scientists. In my estimation, he is the most important and influential natural scientist of all time, and I would rank him right behind Isaac Newton and Albert Einstein among the most significant scientific figures of modern times.

Young Charles Darwin enrolled at the University of Edinburgh in 1825 to pursue a career in medicine. His father, a wealthy, prominent physician, had attended Edinburgh and, indeed, exerted considerable influence on young Charles to follow him in a medical career. At Edinburgh, the sixteen-year-old Darwin quickly found the study of anatomy with its dissecting theatre an odious experience. More than once, he had to flee the theatre to vomit outside after witnessing the dissection process. The senior Darwin, although disappointed in his son’s unsuitability for medicine, soon arranged for Charles to enroll at Cambridge University to study for the clergy. In Darwin’s own words: “He [the father] was very properly vehement against my turning an idle sporting man, which seemed my probable destination.”

Darwin graduated tenth in his class of 168 with a B.A. and very little interest in the clergy! During his tenure at Cambridge, most of young Darwin’s spare time was spent indulging his true and developing passion: collecting insects, with a special emphasis on beetles. Along the way, he became good friends with John Stevens Henslow, professor of botany, ardent naturalist, and kindred spirit to the young Charles.

Wanted: A Naturalist to Sail On-Board the Beagle

On 24 August, 1831, in one of history’s most prescient communiques, Professor Henslow wrote his young friend and protégé: “I have been asked by [George] Peacock…to recommend him a naturalist as companion to Capt. Fitzroy employed by Government to survey the S. extremity of America [the coasts of South America]. I have stated that I considered you to be the best qualified person I know of who is likely to undertake such a situation. I state this not on the supposition of ye being a finished naturalist, but as amply qualified for collecting, observing, & noting any thing worthy to be noted in natural history.” Seldom in history has one man “read” another so well in terms of future potential as did Henslow in that letter to young Darwin!

Charles’ father expressed his opposition to the voyage, in part, on the following grounds as summarized by young Darwin:

-That such an adventure could prove “disreputable to my [young Darwin’s] character as a Clergyman hereafter.”

-That it seems “a wild scheme.”

-That the position of naturalist “not being [previously] accepted there must be some serious objection to the vessel or expedition.”

-That [Darwin] “should never settle down to a steady life hereafter.”

-That “it would be a useless undertaking.”

The young man appealed to his uncle Josiah Wedgwood [of pottery family fame] whose judgement he valued. Scientific history hung in the balance as Uncle Josiah promptly weighed in with the senior Darwin, offering convincing arguments in favor of the voyage. In rebuttal to the objection from Darwin’s father that “it would be a useless undertaking,” the Uncle reasoned: “The undertaking would be useless as regards his profession [future clergyman], but looking upon him as a man of enlarged curiosity, it affords him the opportunity of seeing men and things as happens to few.” Enlarged curiosity, indeed! How true that proved to be. The senior Darwin then made his decision in the face of Uncle Josiah’s clear vision and counsel: Despite lingering reservations, he gave his permission for Charles to embark on the historic sea voyage, one which, more than any other, changed mankind’s sense of self. Had the decision been otherwise, Darwin’s abiding respect for his father’s opinion and authority would have bequeathed the world yet another clergyman while greatly impeding the chronicle of man and all living things on this planet.

On 27 December, 1831, HMS Beagle with Darwin aboard put out to sea, beginning an adventure that would circle the globe and take almost five years. Right from the start, young Charles became violently seasick, often confined to his swaying hammock hanging in the cramped quarters of the ship. Seasickness dogged young Darwin throughout the voyage. I marvel at the fortitude displayed by this young, recently graduated “gentleman from Cambridge” as he undertook such a daunting voyage. Given that the voyage would entail many months at sea, under sail, Capt. Fitzroy and Darwin had agreed from the start that Charles would spend most of his time on land, in ports of call, while the Beagle would busy itself surveying the local coastline per its original government charter. While on land, Darwin’s mission was to observe and record what he saw and experienced concentrating, of course, on the flora, fauna, and geology of the various diverse regions he would visit.

St. Jago, in the Cape Verde Islands off the west coast of Africa, was the Beagle’s first stop, on 16 January, 1832. It was here he made one of his first significant observations. Quoting from his journal: “The geology of this island is the most interesting part of its natural history. On entering the harbour, a perfectly horizontal white band in the face of the sea cliff, may be seen running for some miles along the coast, and at the height of about forty-five feet above the water. Upon examination, this white stratum is found to consist of calcareous [calcium] matter, with numerous shells embedded, most or all of which now exist on the neighboring coast.”

Darwin goes on to conclude that a stratum of sea-shells very much higher than the current water line speaks to ancient, massive upheavals of the earth in the region. From the simple, focused collector of beetles in his Cambridge days, Darwin had now become obsessed with the bigger picture of nature, a view which embraced the importance of geology/environment as key to decoding nature’s secrets.

In a fascinating section of his journal, Darwin describes his astonishment at the primitive state of the native inhabitants of Tierra del Fuego, at the southern tip of South America. From the journal entry of 17 December, 1832: “In the morning, the Captain sent a party to communicate with the Fuegians. When we came within hail, one of the four natives who were present advanced to receive us, and began to shout most vehemently, wishing to direct us where to land. When we were on shore the party looked rather alarmed, but continued talking and making gestures with great rapidity. It was without exception the most curious and interesting spectacle I ever beheld: I could not have believed how wide was the difference between savage and civilized man; it is greater than between a wild and domesticated animal, inasmuch as in man there is a greater power of improvement.” A separate reference I recall reading about Darwin’s encounter with the Fuegians stated that he could scarcely believe that the naked, dirty, and primitive savages before his eyes were of the same species as the sherry-sipping professors back at Cambridge University – so vividly stated.

On 2 October, 1836, the Beagle arrived at Falmouth, Cornwall, her nearly five-year journey circumnavigating the globe complete. Throughout the trip, Darwin recorded life on the high seas and, most importantly, his myriad observations on the geology of the many regions visited on foot and horseback as well as the plant and animal life.

I often invoke the mantra to which I ardently subscribe: that fact is always stranger than fiction…and so much more interesting and important. Picturing Darwin, the elite Englishman and budding naturalist, riding horseback amidst the rough-hewn vaqueros [cowboys] of Chile speaks to the improbability of the entire venture. When studying Darwin, it quickly becomes clear to the reader that his equable nature and noble intents were obvious to those whose approval and cooperation were vital for the success of his venture. That was particularly true of the Beagle’s crew of seamen and of Capt. Fitzroy, whose private cabin on the ship Darwin shared. Fortunately, Fitzroy was a man of considerable ability and efficiency in captaining the Beagle. He was, at heart, a man sensible of the power and importance of scientific knowledge, and that made his less admirable qualities bearable to Darwin. The crew made good-natured fun of the intellectual, newbie naturalist in their midst, but spared no effort in helping Darwin pack his considerable array of collected natural specimens, large and small, in boxes and barrels for shipment back to Professor Henslow at Cambridge. Some of these never arrived, but most did make their way “home.”

When Darwin returned to Cambridge after the Beagle’s arrival at Falmouth, he was surprised to learn that Professor Henslow had spread news among his friends at Cambridge of the Beagle’s whereabouts in addition to sharing, with his university colleagues, the specimens sent home by his young protégé. Darwin had embarked on the Beagle’s voyage as an amateur collector of insects. Now, to his great surprise, he had become a naturalist with a reputation and a following within the elite circles at Cambridge, thanks to Professor Henslow.

Once home, Charles Darwin wasted little time tackling the immense task of studying and categorizing the many specimens he had sent back during the voyage. By 1838, the outlines of natural selection had begun to materialize in his mind. One situation of particular note that he recorded in the Galapagos Islands fueled his speculations. There, he noted that closely related birds indigenous to several of the islands in the archipelago seemed to have distinctive beaks depending upon which island they inhabited. In virtually all other aspects, the birds closely resembled one another – clearly close kin. Darwin noticed that the beaks in each case seemed most ideally suited to the particular size and shape of the seeds most plentiful on that particular island. Darwin took great pains to document these finches of the Galapagos, suspecting that they harbored important clues to nature’s ways. Darwin reasoned that somehow the birds seemed to be well-adapted to their environment/food source in the various islands. Clues such as this shaped his thought processes as he carefully distilled the notes entered in his journal during the voyage. By 1844, Charles Darwin had formulated the framework for his explanation of animal/plant adaptation to the environment. Except for one or two close and trusted colleagues, Darwin kept his budding theory to himself for years to come, for important reasons which I discuss shortly.

 

Darwin published his book, Journal of Researches, in 1839. The book was taken from his copious journal entries during the voyage; within its pages resides the seed-stock from which would germinate Darwin’s ultimate ideas and his theory of natural selection. This book remained, to Darwin’s dying day, closer to his affections and satisfaction than any other including On the Origin of Species.

 

 

What Is the Essence of Natural Selection?

Darwin’s theory of natural selection proposed that species are not immutable across time and large numbers of individuals. Random variations appear in this or that characteristic in particular individuals within a large population. Such variations, beginning with one individual, could be passed along to future generations through its immediate offspring. In the case of a singular Galapagos finch born with a significantly longer and narrower beak than that of a typical bird in the species, that specimen and its offspring which might inherit the tendency will inevitably be subjected to “trial by nature.” If the longer, narrower beak makes it easier for these new birds to obtain and eat the seeds and insects present in their environment, these birds will thrive and go on, over time, to greatly out-reproduce others of their species who do not share the “genetic advantage.” Eventually that new characteristic, in this example the longer, narrower beak, will predominate within the population in that environment. This notion is the essence of Darwin’s theory of natural selection. If the random variation at hand proves to be disadvantageous, future generations possessing it will be less likely to survive than those individuals without it.
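
To make the logic concrete, here is a minimal toy simulation in Python – my own illustrative sketch, not anything Darwin wrote, with all names and numbers invented – in which a “long beak” variant carrying a small reproductive advantage is tracked through successive generations of a fixed-size population:

```python
import random

def simulate(generations=200, pop_size=1000, start_freq=0.01, advantage=0.05):
    """Toy model: a 'long beak' variant confers a small reproductive advantage.

    Each generation, parents are drawn in proportion to fitness (1.0 for the
    common form, 1.0 + advantage for the variant), so the variant's frequency
    tends to rise over time -- natural selection in miniature.  An illustrative
    sketch, not a population-genetics research tool.
    """
    freq = start_freq
    history = [freq]
    for _ in range(generations):
        # Probability that a randomly drawn parent carries the variant,
        # weighted by its fitness advantage.
        weighted = freq * (1.0 + advantage)
        p_variant = weighted / (weighted + (1.0 - freq))
        # Build the next generation and recompute the variant's frequency.
        carriers = sum(random.random() < p_variant for _ in range(pop_size))
        freq = carriers / pop_size
        history.append(freq)
    return history

if __name__ == "__main__":
    h = simulate()
    for gen in (0, 50, 100, 150, 200):
        print(f"generation {gen:3d}: variant frequency = {h[gen]:.2f}")
```

Run repeatedly, the variant usually spreads toward dominance; set the advantage to a negative value and it usually dwindles away – the two outcomes of “trial by nature” described above.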

Note that this description, natural selection, is far more scientifically specific than the oft-used/misused phrase applied to Darwin’s work: theory of evolution. To illustrate: “theory of evolution” is a very general phrase admitting even the possibility that giraffes have long necks because they have continually stretched them over many generations reaching for food on the higher tree canopies. That is precisely the thinking of one of the early supporters of evolution theory, the Frenchman, Lamarck, as expressed in his 1809 publication on the subject. Darwin’s “natural selection” explains the specific mechanism by which evolution occurs – except for one vital, missing piece… which we now understand.

Genetics, Heredity, and the DNA Double Helix:
 Random Mutations – the Key to Natural Selection!

Darwin did not know – could not know – the source of the random significant variations in species which were vital to his theory of natural selection. He came to believe that there was some internal genetic blueprint in living things that governed the species at hand while transmitting obvious “familial traits” to offspring. Darwin used the name “gemmules” referring to these presumed discrete building blocks, but he could go no further in explaining their true nature or behavior given the limited scientific knowledge of the time.

James Watson and Francis Crick (together with Maurice Wilkins) won the 1962 Nobel Prize in Physiology or Medicine for the 1953 discovery of the DNA double helix which carries the genetic information of all living things. The specific arrangement of chemical base-pair connections, or rungs, along the double helix ladder is precisely the genetic blueprint which Darwin suspected. The human genome has been decoded within the last twenty years, yielding tremendous knowledge about nature’s life-processes. We know, for instance, that one particular – just one – hereditary error along the double helix can result in a devastating medical condition called Tay-Sachs, wherein the initially healthy brains of newborns are destroyed in just a few years due to the body’s inability to produce a necessary enzyme. Literally every characteristic of all living things is dictated by the genetic sequence of four different chemical building blocks called bases which straddle the DNA double helix. The random variations necessary for the viability of Darwin’s theory of natural selection are precisely those which stem from random base-pair mutations, or variations, along the helix. These can occur spontaneously during DNA replication, or they can result from something as esoteric as a particle of cosmic radiation hitting a cell nucleus and altering its DNA. The end result of the sub-microscopic change might be trivial, beneficial, or catastrophic in some way to the individual.
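
As a toy illustration of the copying errors just described – a sketch only, using an invented sequence, and saying nothing about whether a real mutation would be trivial, beneficial, or catastrophic – a random single-base substitution can be mimicked in a few lines of Python:

```python
import random

BASES = "ACGT"

def point_mutation(sequence):
    """Return a copy of `sequence` with one randomly chosen base substituted.

    A toy model of the random copying errors described above; it says
    nothing about the biological consequence of the change.
    """
    position = random.randrange(len(sequence))
    original = sequence[position]
    replacement = random.choice([b for b in BASES if b != original])
    return sequence[:position] + replacement + sequence[position + 1:], position

if __name__ == "__main__":
    gene = "ATGGTGCACCTGACTCCTGAG"   # an invented stretch of DNA, for illustration only
    mutant, where = point_mutation(gene)
    print("original:", gene)
    print("mutant:  ", mutant)
    print("single base changed at position", where)
```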

Gregor Mendel: The Father of Genetics…Unknown to Darwin

In 1865, a sequestered Austrian monk presented an obscure scientific paper to the natural history society of Brünn, which printed it in its little-read regional proceedings. Like the young Darwin, Mendel had no formal scientific qualifications, only a strong curiosity and interest in the pea plants he tended in the monastery garden. He had wondered about the predominant colors of the peas from those plants, green and yellow, and pondered the possible mechanisms which could determine the color produced by a particular plant. He concocted a series of breeding experiments to find out more. After exhaustive trials using pea color, size of plant, and five other distinguishing characteristics of pea plants, Mendel found that the statistics of inheritance involved distinct numerical ratios, as for example, a “one-in-four chance” for a specific breeding outcome. The round numbers present in Mendel’s experimental results suggested the existence of distinct, discrete genetic mechanisms at work – what Darwin vaguely had termed “gemmules.” Mendel’s 1865 paper describing his findings, and the work behind it, cements Mendel’s modern reputation as the “Father of Genetics.” Incredibly and unfortunately, virtually no one took serious notice of his paper until it was re-discovered in 1900, thirty-five years after its presentation, by the botanists Hugo de Vries, Carl Correns, and Erich von Tschermak, and vigorously championed in England by the geneticist William Bateson!
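
Mendel’s “one-in-four chance” is easy to reproduce with a short simulation. The sketch below is hypothetical Python of my own, using the conventional Y/y notation for a dominant yellow-pea allele and a recessive green-pea allele; it crosses two hybrid plants many times and counts the outcomes:

```python
import random
from collections import Counter

def cross(parent1="Yy", parent2="Yy", trials=10_000):
    """Simulate many offspring of a cross between two hybrid pea plants.

    Each parent passes one allele at random.  'Y' (yellow) is dominant and
    'y' (green) is recessive, so only 'yy' offspring show the green pea --
    the 'one-in-four chance' Mendel's counts pointed to.
    """
    genotypes = Counter()
    for _ in range(trials):
        child = "".join(sorted(random.choice(parent1) + random.choice(parent2)))
        genotypes[child] += 1
    green_fraction = genotypes["yy"] / trials
    return genotypes, green_fraction

if __name__ == "__main__":
    counts, green = cross()
    print("genotype counts:", dict(counts))            # roughly 1 : 2 : 1
    print(f"fraction of green-pea offspring: {green:.2f}")  # about 0.25
```

The genotype counts come out roughly 1 : 2 : 1, and about one offspring in four shows the recessive green pea – exactly the kind of clean, discrete ratio that pointed Mendel toward particulate inheritance.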

Original offprints (limited initial printings for the author) of Mendel’s paper are among the rarest and most desirable of historical works in the history of science, selling for hundreds of thousands of dollars on the rare book/manuscript market. We know that only forty were printed and scarcely half of these have been accounted for. Question: Did Mendel send an offprint of his pea plant experiments to Charles Darwin in 1865, well after the publication of Darwin’s groundbreaking On the Origin of Species in 1859? An uncut [meaning unopened, thus unread] offprint was presumably found among Darwin’s papers after his death, according to one Mendel reference source. Certainly, no mention of it was ever made by Charles Darwin.

 It is an intriguing thought that the key, missing component of Darwin’s natural selection theory as espoused in his Origin of Species possibly resided unread and unnoticed on Darwin’s bookshelf! And is it not a shame that Mendel lived out his life in the abbey essentially unknown and without due credit for his monumental work in the new science of genetics, a specialty which he founded?

Darwin’s Reluctance to Publish His Theory Nearly Cost Him His Due Credit

Darwin finally revealed his theory of natural selection to the public and the scientific community at large in 1859 with the book publication of On the Origin of Species. In fact, the central tenets of the book had congealed in Darwin’s mind long before, by 1844. He had held the framework of his theory close to the vest for all that time! Why? Because to espouse evolutionary ideas in the middle of the nineteenth century was to invite scorn and condemnation from creationists within many religions. And no one was more averse to a more secular universe – one with a less personal creator, who did not create man and the animals in more or less final form (despite their obvious diversity) – than Emma Wedgwood Darwin, Darwin’s very religious wife. She believed in an afterlife in which she and her beloved husband would be joined together for eternity. Charles was becoming less and less certain of this religious ideal as the years went by and nature continued to reveal herself to the ever-inquiring self-made naturalist who had set out to probe her ways.

To espouse a natural world which, once its fundamental constituents were brought together, would henceforth change and regulate itself without further involvement by the Creator would be a painful repudiation of Emma’s fundamental beliefs in a personal God. For this very personal reason and because of the professional risk of being ostracized by the community of naturalists for promulgating radical, anti-religious ideas, Darwin put off publication of his grand book, the book which would ensure him priority and credit for one of the greatest of all scientific conclusions.

After stalling publication for years and with his manuscript only half completed, Darwin was shocked into feverish activity on his proposed book by a paper he received on 18 June, 1858. It was from a fellow naturalist of Darwin’s acquaintance, one Alfred Russel Wallace. In his paper, Wallace outlined his version of natural selection which eerily resembled the very theory Darwin was planning to eventually publish to secure his priority. There was no doubt that Wallace had arrived independently at the same conclusions that Darwin had reached many years earlier. Wallace’s paper presented an extremely difficult problem for Darwin in that Wallace had requested that Darwin pass his [Wallace’s] paper on to their mutual friend, the pathfinding geologist, Charles Lyell.

Darwin in a Corner: Academic Priority at Stake
Over One of the Great Scientific Breakthroughs

Now Darwin felt completely cornered. If he passed Wallace’s paper on to Lyell as requested, essentially making it public, the academic community would naturally steer credit for the theory of natural selection to Wallace. On the other hand, having just received Wallace’s paper on the subject, how would it look if he, Darwin, suddenly announced publicly that he had already deciphered nature and her ways – well before Wallace had? That course of action could inspire suspicions of plagiary on Darwin’s part.

The priority stakes were as high as any since the time of Isaac Newton, when he and the mathematician Gottfried Leibniz locked horns in a bitter battle over credit for development of the calculus. It had been years since Darwin’s voyage on the Beagle which began the long gestation of his ideas on natural selection. He had been sitting on his conclusions since 1844 for fear of publishing, and now he was truly cornered, “forestalled,” as he called it. Darwin, drawing on the better angels of his morose feelings, quickly proposed to Wallace that he [Darwin] would see to it that his [Wallace’s] paper be published in any journal of Wallace’s choosing. In what became a frenzied period in his life, he reached out to two of his closest colleagues and trusted confidants, Charles Lyell and Joseph Hooker, for advice. The two had been entrusted with the knowledge of Darwin’s work on natural selection for a long time; they well understood Darwin’s priority in the matter, and he needed them now. The two friends came up with a proposal: publish both Wallace’s paper and a synopsis by Darwin outlining his own long-standing efforts and results. The joint papers were read before the Linnean Society on 1 July, 1858, and published in the Society’s journal. Fortunately for Darwin, Alfred Russel Wallace was of a conciliatory nature regarding the potential impasse over priority, tacitly acknowledging that his colleague had, indeed, been first to formulate his views on natural selection.

Nonetheless, for Darwin, the cat was out of the bag, and the task ahead was to work full-steam to complete the large book that would contain all the details of natural selection and ensure his priority. He worked feverishly on his book, On the Origin of Species, right up to its publication by John Murray. The book went on sale on 22 November, 1859, and all 1,250 copies sold quickly. This was an excruciating period of Darwin’s life. He was not only under unrelenting pressure to complete one of the greatest scientific books of all time, he was intermittently very ill throughout the process, presumably from a systemic problem contracted during his early travels associated with the Beagle voyage. Yes, the expected controversy was to come immediately after publication of the book, but Darwin and his contentions have long weathered the storm. Few of his conclusions have not stood the test of time and modern scrutiny.

The Origin was his great book, but the book that was the origin of the Origin, his 1839 Journal of Researches, always remained his favorite. Certainly, the Journal was written at a much happier time in Darwin’s life, a time flush with excitement over his prospects as a newly full-fledged naturalist. For me, the Journal brims with the excitement of travel and scientific discovery/fact-finding – the seed-corn of scientific knowledge (and new technologies). The Origin represents the resultant harvest from that germinated seed-corn.

“Endless Forms Most Beautiful” –
Natural Selection in Darwin’s Own Words

In his Introduction to the Origin, Darwin describes the essence of natural selection:

“In the next chapter, the struggle for existence amongst all organic beings throughout the world, which inevitably follows from their high geometrical powers of increase, will be treated of. This is the doctrine of Malthus, applied to the whole animal and vegetable kingdoms. As many more individuals of each species are born than can possibly survive; and as, consequently, there is a frequently occurring struggle for existence, it follows that any being, if it vary however slightly in any manner profitable to itself, under the complex and sometimes varying conditions of life, will have a better chance of surviving, and thus be naturally selected. From the strong principle of inheritance, any selected variety will tend to propagate its new and modified form.”

Darwin and Religion

Charles Darwin, educated for the clergy at Cambridge, increasingly drifted away from orthodox religious views as his window on nature and her ways became more transparent to him over the decades. Never an atheist, he grew increasingly agnostic as he embraced the results of his lifelong study of the natural world. The Creator in whom Darwin believed was not, to him, the involved shepherd of all living things in this world. Rather, he seemed more like the watchmaker who, after his watch was first assembled, wound it up and let it run on its own while retreating to the background.

 Another viewpoint, which I tend to favor and which may apply to Darwin: God, whom we cannot fully know in this life, created not only all living things at the beginning, but also the entire structure of natural law (science) which dictates not only the motion of the planets, but the future course of life forms. Natural selection, and hence evolution as well, is a central tenet of that complete structure of natural law. The laws of nature, which permanently bear the fingerprints of the creator and his creation, thus enable the self-powered, self-regulating behaviors of the physical and natural world – without contradiction.

 Charles Darwin: Humble Man and Scientific Titan


Writing this post and re-acquainting myself with Darwin has brought great joy. Some years after initially reading the biographies and perusing his works, I have re-discovered a life and legacy so important to science. His body of work includes several other very important books besides his Journal and Origin. Beyond his scientific importance and the science itself lies the man himself – a man of very high character and superb intellect. Darwin was gifted with intense curiosity, that magical motor that drives great accomplishment in science. Passion and curiosity: Isaac Newton had them in great abundance, and so, too, did Albert Einstein. Yet, Charles Darwin was different in several respects from those two great scientists: First, he was fortunate enough to have been born to privilege and was thus comfortably able to devote his working life to science from the beginning. Second, Darwin was a very happily married man who fathered ten children, each of whom he loved and doted upon. Third, Darwin’s character was impeccable in all respects. His personality was stiffened a bit by the English societal conventions prevalent then, but his humanity shows through in so many ways. His struggle with religion is one most of us can relate to.

Reading Darwin’s works is a joy both because he was an articulate, educated Englishman and because the contents of his books like the Journal and Origin are easily digestible compared to the major works of Newton and Einstein. Like Darwin himself, my favorite book of his is The Journal of Researches, sometimes referred to as the Voyage of the Beagle. What an adventure.


The “sandwalk” path around the extended property of his long-held estate, Down House. Darwin frequently traversed this closed path on solitary walks around the estate while he gathered his thoughts about matters both big and small.

Marking the Passage of Time: The Elusive Nature of the Concept

Nature presents us with few mysteries more tantalizing than the concept of “time.” Youngsters, today, might not think the subject worthy of much rumination: After all, one’s personal iPhone can conveniently provide the exact time at any location on our planet.


Human beings have long struggled with two fundamental questions regarding time:

  1. What are the fundamental units in nature used to express time? More simply, what constitutes one second of time? How is one second determined?
  2. How can we “accurately” measure time using the units chosen to express it?

The simple answers for those so inclined might be: We measure time in units of seconds, minutes, hours, and days, etc., and we have designed carefully constructed and calibrated clocks to measure time! That was easy, wasn’t it?

The bad news: Dealing with the concept of time is not quite that simple.
The good news: The fascinating surprises and insights gained from taking a closer, yet still cursory, look at “time” are well worth the effort to do so. To do the subject justice requires far more than a simple blog post – scholarly books, in fact – but my intent, here, is to illustrate how fascinating the concept of time truly is.

Webster’s dictionary defines time as “a period or interval…the period between two events or during which ‘something’ exists, happens, or acts.”

For us humans, the rising and setting of the sun – the cycle of day and night – is a “something” that happens, repeats itself, and profoundly affects our existence. It is that very cycle which formed our first concept of time. The time required for the earth to make one full rotation on its axis is but one of many repeating natural phenomena, and it was, from the beginning of man’s existence, uniquely qualified to serve as the arbitrary definition of time measurement. Other repeatable natural phenomena could have anchored our definition of time: for instance, the almost constant period of the earth’s revolution around the sun (our year) or certain electron-jump vibrations at the atomic level could have been chosen, except that such technology was unknown and unthinkable to ancient man. In fact, today’s universally accepted time standard utilizes a second defined by the extraordinarily stable and repeatable electron jumps within cesium 133 atoms – the so-called atomic clock, which has replaced the daily rotation of the earth as the prime determinant of the second.

Why use atomic clocks instead of the earth’s rotation to define the second? Because the length of the solar day varies from month to month, thanks to the tilt of the earth’s axis and the elliptical shape of our planet’s orbit around the sun, and because, over many centuries, the rotation itself gradually slows as tidal friction brakes the spinning earth. By contrast, atomic clocks are extremely regular in their behavior.

Timekeepers on My Desk: From Drizzling Sand to Atomic Clocks!

I have on my desk two time-keepers which illustrate the startling improvement in time-keeping over the centuries. One is the venerable hour-glass: tip it over and the sand takes roughly thirty minutes (in mine) to drizzle from top chamber to bottom. The other timekeeper is one of the first radio-controlled clocks readily available – the German-built Junghans Mega, which I purchased in 1999. It features an analog display (clock-hands, not a digital display) based on a very accurate internal quartz electronic heartbeat: the oscillations of its tiny quartz-crystal resonator. Even the quartz oscillator may stray from absolute accuracy by as much as 0.3 seconds per day, in contrast to the incredible regularity of the cesium atomic clocks which now define the international second as 9,192,631,770 atomic “vibrations” of cesium 133 atoms – an incredibly stable natural phenomenon. The Junghans Mega uses its internal radio capability to automatically tune in every evening at 11 pm to the atomic clocks operating in Fort Collins, Colorado. Precise time-sync signals broadcast from there are utilized to “reset” the Mega to the precise time each evening at eleven.
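
A back-of-envelope comparison shows what those numbers mean over a year. The quartz figure in the sketch below is the worst-case 0.3 seconds per day quoted above; the cesium figure is an assumed, order-of-magnitude fractional stability of roughly one part in 10^13 for a laboratory standard, not a specification of the Fort Collins clocks:

```python
# Back-of-envelope drift comparison.  The quartz figure is the worst-case
# 0.3 s/day quoted above; the cesium figure assumes a fractional stability
# of roughly 1 part in 10**13 (an order-of-magnitude illustration, not a
# specification of the Fort Collins standards).

SECONDS_PER_DAY = 86_400
CESIUM_HZ = 9_192_631_770          # cesium-133 "vibrations" that define one second

quartz_drift_per_day = 0.3                          # seconds per day
cesium_drift_per_day = SECONDS_PER_DAY * 1e-13      # seconds per day, assumed

print(f"quartz drift after one year : about {quartz_drift_per_day * 365:.0f} seconds")
print(f"cesium drift after one year : about {cesium_drift_per_day * 365 * 1e6:.1f} microseconds")
print(f"one second = {CESIUM_HZ:,} cesium vibrations")
```

Left alone, the quartz movement could wander by roughly two minutes in a year; the cesium standard it synchronizes to would wander by only a few millionths of a second under the assumed stability – which is why the nightly radio “reset” keeps the Mega essentially exact.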

I love this beautifully rendered German clock which operates all year on one tiny AA battery and requires almost nothing from the operator in return for continuously accurate time and date information. Change the battery once each year and its hands will spin to 12:00 and sit there until the next radio query to Colorado. At that point, the hands will spin to the exact second of time for your world time zone, and off it goes….so beautiful!

Is Having Accurate Time So Important?
You Bet Your Life…and Many Did!

Yes, keeping accurate time is far more important than not arriving late for your doctor’s appointment! The fleets of navies and the world of seagoing commerce require accurate time…on so many different levels. In 1714, the British Parliament, through its Longitude Act, offered the then-huge sum of 20,000 pounds to anyone who could concoct a practical way to measure longitude at sea. The act was inspired by a great national tragedy involving the Royal Navy. On October 22, 1707, a fleet of ships was returning home after a sojourn at sea. Despite intense fog, the flagship’s navigators assured Admiral Sir Cloudesley Shovell that the fleet was well clear of the treacherous Scilly Islands, some twenty miles off the southwest coast of England. Such was not the case, however, and the admiral’s flagship, Association, struck the shoals first, quickly sinking, followed by three other vessels. Two thousand lives were lost in the churning waters that day. Of those who went down, only two managed to wash ashore alive. One was Sir Cloudesley Shovell. As an interesting aside, the story has it that a woman combing the beach happened across the barely alive admiral, noticed the huge emerald ring on his finger, and promptly lifted it, finishing him off in the process. She confessed the deed some thirty years later, offering the ring as proof.

The inability of seafarers to navigate safely by determining their exact location at sea was of great concern to sea powers like England, which had a great investment in both its fleet of fighting ships and its commercial shipping. A ship’s latitude could be quite accurately determined on clear days by “shooting” the height of the sun above the horizon using a sextant, but its longitude position was only an educated guess. The solution to the problem of determining longitude-at-sea materialized in the form of an extremely accurate timepiece carried aboard ship and commonly known ever since as a “chronometer.” By comparing local noon – the moment the sun reaches its highest point – with the home-port time kept by the chronometer, a navigator could calculate longitude: the earth turns 360 degrees in 24 hours, so every hour of difference corresponds to 15 degrees of longitude east or west.
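
The arithmetic is simple enough to sketch. The little Python function below is my own illustration with hypothetical readings; it ignores the equation of time and the other corrections a real navigator would apply, and simply converts the gap between observed local noon and the chronometer’s home-port time into degrees of longitude:

```python
def longitude_from_time(local_noon_hms, chronometer_hms):
    """Estimate longitude from the gap between local noon and home-port time.

    The earth turns 360 degrees in 24 hours, i.e. 15 degrees per hour.
    A positive result means the ship is west of the reference meridian.
    Simplified: the equation of time and other navigational corrections
    are ignored.
    """
    def to_hours(hms):
        hours, minutes, seconds = hms
        return hours + minutes / 60 + seconds / 3600

    hours_behind = to_hours(chronometer_hms) - to_hours(local_noon_hms)
    return 15.0 * hours_behind

if __name__ == "__main__":
    # Hypothetical reading: the sun peaks locally at 12:00:00 while the
    # chronometer, still keeping the home port's time, reads 16:42:30.
    print(f"estimated longitude: {longitude_from_time((12, 0, 0), (16, 42, 30)):.2f} degrees west")
```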

For the details, I recommend Dava Sobel’s book titled “Longitude.” The later, well-illustrated version is the one to read. In her book, the author relates the wonderfully improbable story of an English country carpenter who parlayed his initial efforts building large wooden clocks into developing the world’s first chronometer timepieces accurate enough to solve the “longitude problem.” After frustrating decades of dedicated effort pursuing both the technical challenge and the still-to-be-claimed prize money, John Harrison was finally able to collect the 20,000 pound longitude award.

Why Mention Cuckoo Clocks? Enter Galileo and Huygens

Although the traditional cuckoo clock from the Black Forest of Germany does not quite qualify as a maritime chronometer, its pendulum principle plays an historical role in the overall story of time and time-keeping. With a cuckoo clock or any pendulum clock, the ticking rate is dependent only on the effective length of the pendulum, and not its weight or construction. If a cuckoo clock runs too fast, one must lower the typical wood-carved leaf cluster on the pendulum shaft to increase the pendulum period and slow the clock-rate.
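
The “length only” rule comes from the small-swing pendulum formula T = 2π√(L/g), in which the bob’s mass never appears. The sketch below is illustrative Python with made-up dimensions, not a horological design tool; it shows how lowering the carved leaf – lengthening the pendulum by a single millimeter – already costs a short cuckoo-clock pendulum a few minutes per day:

```python
import math

def pendulum_period(length_m, g=9.81):
    """Small-swing period of a simple pendulum: T = 2*pi*sqrt(L/g).

    Mass never appears -- only the effective length (and local gravity)
    matter, which is why sliding the carved leaf down the shaft slows a
    cuckoo clock.  Illustrative numbers, not a horological design tool.
    """
    return 2 * math.pi * math.sqrt(length_m / g)

if __name__ == "__main__":
    before = pendulum_period(0.2480)   # a short, cuckoo-style pendulum (period ~1 s)
    after = pendulum_period(0.2490)    # the carved leaf lowered by 1 mm
    print(f"period before: {before:.4f} s   period after: {after:.4f} s")
    # A longer period means fewer ticks per day, so the clock runs slower.
    print(f"clock now loses about {86_400 * (1 - before / after):.0f} seconds per day")
```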

No less illustrious a name than Galileo Galilei was the first to propose the possibilities of the pendulum clock in the early 1600’s. Indeed, Galileo was the first to understand pendulum motion and, with an assistant late in life, produced a sketch of a possible pendulum clock. A few decades later, the great Dutch scientist Christiaan Huygens constructed the first working pendulum clock, in 1656, and described it in a short tract, Horologium, in 1658. In 1673, Huygens published his milestone book of science and mathematics, Horologium Oscillatorium, in which he presented a detailed mathematical treatment of pendulum motion-physics.


In 1669, a very notable scientific paper appeared in the seminal English journal of science, The Philosophical Transactions of the Royal Society. That paper was the first English translation of a treatise originally published by Christiaan Huygens in 1665. In his paper, Huygens presents “Instructions concerning the use of pendulum-watches for finding the longitude at sea, together with a journal of a method for such watches.” The paper outlines a timekeeping method using the “equation of time” (which quantifies the seasonal difference between sundial time and uniform clock time) and capitalizes on the potential accuracy of his proposed pendulum timekeeper. The year 1669, in which Huygens’ paper on finding the longitude-at-sea appeared in The Philosophical Transactions, preceded by thirty-eight years the disastrous navigational tragedy of the British fleet and Sir Cloudesley Shovell in 1707.

As mentioned earlier, John Harrison was the first to design and construct marine chronometers with the accuracy necessary to determine longitude at sea. After many years spent on bulky early sea clocks built around large, linked balances (in place of pendulums, which are useless on a rolling ship), Harrison’s ultimate success came decades later in the form of a large “watch” design which utilized the oscillating balance-wheel mechanism, so familiar today. The chronometer taxed Harrison’s considerable ingenuity and perseverance to the limit: the device had to keep accurate time at sea under the worst conditions imaginable, from temperature and humidity extremes to the rolling, heaving motion of a ship underway.

The Longitude Act of 1714 demanded, for the full prize, a longitude determination to within one-half degree of true longitude (about thirty-five miles at the equator), which translates into a timekeeper that gains or loses less than two minutes over a six-week sea voyage. Lost time, revenue, and human lives were the price to be paid for excessive timekeeper inaccuracies.
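
Those figures hang together: two minutes of clock error is half a degree of longitude, and half a degree at the equator is roughly thirty-five miles. Here is a quick check (my sketch, using a nominal equatorial circumference):

```python
EARTH_CIRCUMFERENCE_MILES = 24_901  # nominal equatorial circumference

clock_error_minutes = 2.0
# The earth rotates 360 degrees in 24 * 60 minutes, i.e. 0.25 degrees per minute of time.
longitude_error_deg = clock_error_minutes * 360.0 / (24 * 60)
miles_at_equator = longitude_error_deg / 360.0 * EARTH_CIRCUMFERENCE_MILES

print(longitude_error_deg)              # 0.5 degrees
print(round(miles_at_equator))          # ~35 miles
# Spread over a six-week (42-day) voyage, two minutes of error is under three seconds per day:
print(round(2 * 60 / 42, 1))            # ~2.9 seconds per day
```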

Einstein and Special Relativity: Speeding Clocks that Run Slow

Albert Einstein revolutionized physics in 1905 with his special theory of relativity. Contrary to the assumptions of Isaac Newton, relativity dictates that there is no absolute flow of time in the universe – no master clock, as it were. An experiment will demonstrate what this implies. Two identical cesium-133 atomic clocks (the time standard which defines the “second”) will run in virtual synchronization when sitting side by side in a lab; we would expect that to be true. If we launch one of the two in an orbital space vehicle which then circles the earth at 18,000 miles per hour, then from our vantage point on earth we would observe that the orbiting clock runs slightly slower than its identical twin still residing in our lab. Indeed, upon returning to earth after some period in orbit, the elapsed time registered by the traveling clock will be less than that of its twin which stayed put, even though its run-rate once again matches its stationary twin. In case you are wondering, this experiment has been tried many times, and the results unerringly support Einstein’s contention that clocks moving with respect to an observer “at rest” will always run slower, as recorded by that observer, than they would if they were not moving relative to the observer.

Because the speed of light is a constant 186,000 miles per second according to the dictates of relativity, the tiny time dilation produced by an orbital speed of 18,000 miles per hour can only be observed using an incredibly stable, high-resolution time source such as an atomic clock. If two identical clocks passed each other traveling at three-tenths the speed of light, each would see the “other” clock running slow by about 4.6 percent; at one-tenth the speed of light, the slowdown is only about 0.5 percent. This phenomenon of slowing clocks applies to any timekeeper, from atomic clocks to hourglasses. Accordingly, the effect has nothing to do with how a timekeeper is constructed; it follows from the finite, constant speed of light dictated by relativity and from what observers in relative motion can actually measure.
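
For the curious, the percentages above come from the Lorentz factor of special relativity: a clock moving at speed v appears to run slow by the factor √(1 − v²/c²). A short sketch (my own, for illustration):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def clock_rate_factor(speed_m_s: float) -> float:
    """Fraction of its normal rate at which a moving clock appears to run: sqrt(1 - v^2/c^2)."""
    beta = speed_m_s / C
    return math.sqrt(1.0 - beta * beta)

MPH_TO_M_S = 0.44704
for label, v in [("orbital speed, 18,000 mph", 18_000 * MPH_TO_M_S),
                 ("one-tenth the speed of light", 0.1 * C),
                 ("three-tenths the speed of light", 0.3 * C)]:
    slow_percent = (1.0 - clock_rate_factor(v)) * 100.0
    print(f"{label}: runs slow by {slow_percent:.10f} percent")

# The orbital case amounts to a few parts in ten billion, detectable only with atomic clocks;
# one-tenth of c gives about 0.5 percent and three-tenths of c about 4.6 percent, as quoted above.
```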

For most practical systems that we deal with here on earth, relative velocities between systems are peanuts compared to the speed of light, and the relativistic effects, although always present, are so small as to be insignificant – usually undetectable. There are important exceptions, however, and one of the most important involves the GPS (Global Positioning System); another involves the particle accelerators used by physicists. GPS relies on earth-orbiting satellites traveling at only a tiny fraction of the speed of light relative to the earth’s surface, yet, in a curious case of mathematical déjà vu recalling the problem of finding the longitude at sea, even tiny errors in the timing signals sent between the satellites and earth can throw our position information here on earth off by many miles. With such precise timing requirements, the relativistic time dilation of the orbiting clocks – we are talking tiny fractions of a second! – would be enough to cause position errors of many miles. For this reason, relativity is, and must be, taken into account for the GPS system to be of any practical use whatsoever.
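
To see why such tiny clock errors matter, remember that GPS positions come from the light-travel time of radio signals, so every microsecond of uncorrected clock error is roughly 300 meters of range error. The sketch below uses the commonly cited figure of roughly 38 microseconds per day of net relativistic clock offset for GPS satellites; that number is my addition for illustration, not something stated in the post.

```python
C = 299_792_458.0        # speed of light, m/s
METERS_PER_MILE = 1609.344

def range_error_m(clock_error_s: float) -> float:
    """Pseudorange error produced by an uncorrected clock error (distance = c * time)."""
    return C * clock_error_s

# A single microsecond of uncorrected clock error is already ~300 meters of range error:
print(round(range_error_m(1e-6)), "meters per microsecond")

# Roughly 38 microseconds per day of net relativistic offset is often quoted for GPS satellites.
daily_offset_s = 38e-6
print(round(range_error_m(daily_offset_s) / METERS_PER_MILE, 1), "miles of error per day if uncorrected")
```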

Is it not ironic that, as in the longitude-at-sea problem three centuries ago, accurate time plays such a crucial role in today’s satellite-based GPS location systems?

I hope this post has succeeded in conveying to you, the reader, the wonderful mysteries and the importance of that elusive notion we call time.

Finally, as we have all experienced throughout our lives, time is short and….

TIME AND TIDE WAIT FOR NO MAN

 

Relativity and the Birth of Quantum Physics: Two Major Problems for Physics in the Year 1900

In the year 1900, two critical questions haunted physicists, and both involved that elusive entity, light. The ultimate answers to these troublesome questions materialized during the dawn of the twentieth century and produced the most recent two of the four major upheavals in the history of physics. Albert Einstein was responsible for the third of those upheavals, his theory of special relativity, published in 1905 in response to one of the two critical questions facing physics in 1900. A German scientist named Max Planck addressed the second question and, in doing so, ignited the fourth great upheaval. Planck began his Nobel Prize-winning investigation into the nature of heat/light radiation in 1894; his discovery of the quantized nature of such radiation gave birth to the new realm of quantum physics which, in turn, led to a new picture of the atom and its behavior. The aftermath of his findings changed physics, and man’s view of physical reality, forever.

What were the two nagging problems in physics in 1900?

The nature of light and its behavior had long challenged the best minds in physics. For example: is light composed of “particles,” or does it manifest itself as “waves” travelling through space? By the start of the eighteenth century, two of science’s greatest names had voiced their opinions. Isaac Newton held that light is “particle” in nature; his brilliant Dutch contemporary, Christian Huygens, claimed that light is comprised of “waves.”

[Portraits: Isaac Newton and Christian Huygens]

By 1865, the great Scottish physicist James Clerk Maxwell had deduced that light, indeed, behaves as an electromagnetic wave traveling at a speed of roughly 186,000 miles per second! Maxwell’s groundbreaking, all-encompassing electromagnetic theory represents the second of the four major historical revolutions in physics of which we speak. Ironically, this second great advance, with its theoretically established speed of light, led directly to the first of the two nagging issues facing physics in 1900. To understand that dilemma, a bit of easily digestible background is in order!

Maxwell began by determining that visible light is merely a small slice of the much greater electromagnetic frequency spectrum which, today, includes radio waves at the low-frequency end and x-rays at the high-frequency end. Although the speed of light (and thus of all electromagnetic waves) had been measured fairly accurately by others prior to 1865, Maxwell’s ability to predict that speed theoretically, using the mathematics of his new science of electrodynamics, was a tribute to his supreme command of physics and mathematics. The existence of Maxwell’s purely theoretical (at that time) electromagnetic waves was verified in 1887 in a laboratory experiment conducted by the German scientist Heinrich Hertz.
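
Maxwell’s theoretical prediction boils down to a remarkably compact result: the wave speed that falls out of his equations is c = 1/√(μ₀ε₀), built entirely from two measured electrical constants. A quick sketch of that calculation (mine, using modern values of the constants):

```python
import math

MU_0 = 4 * math.pi * 1e-7      # vacuum permeability, H/m (the historically defined value)
EPSILON_0 = 8.8541878128e-12   # vacuum permittivity, F/m

c = 1.0 / math.sqrt(MU_0 * EPSILON_0)
print(round(c), "m/s")                   # ~299,792,458 m/s
print(round(c / 1609.344), "miles/s")    # ~186,000 miles per second, as Maxwell concluded
```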

The first of the two quandaries on physicists’ minds in 1900 had been brewing during the latter part of the nineteenth century as physicists struggled to define the “medium” through which Maxwell’s electromagnetic waves of light propagated across seemingly empty space. Visualize a small pebble dropped into a still pond: its entry into the water causes waves, or ripples, to propagate outward in circles from the point of disturbance. These waves represent mechanical energy being carried across the water. Light is also a wave, but it propagates through space and carries electromagnetic energy.

Here is the key question which arose from Maxwell’s work and so roiled physics: What is the nature of the medium in presumably “empty space” which supports electromagnetic wave propagation…and can we detect it? Water is the obvious medium for transmitting the mechanical energy waves created by a pebble dropped into it. Air is the medium which is necessary to propagate mechanical sound-pressure waves to our ears – no air, no sound! Yet light waves travel readily through “empty space” and vacuums!

Lacking any evidence concerning the nature of a medium suitable for electromagnetic wave propagation, physicists nevertheless came up with a name for it, the “ether,” and pressed on to learn more about its presumed reality. Clever but futile attempts were made to detect the “ether sea” through which light appeared to propagate; the famous Michelson-Morley experiments of 1881 and 1887 conclusively failed to detect its existence. Science was forced to conclude that there is no detectable, describable medium. Rather, the cross-coupled electric and magnetic fields which comprise light (and all electromagnetic waves) “condition” the empty space of a perfect vacuum in such a manner as to allow the waves to propagate through that space. As for the seeming lack of an identifiable transmission medium, the best advice to physicists seemed to be: “It is what it is…deal with it!”

“Dealing with it” was easier said than done, because one huge problem remained. Maxwell’s four famous equations, which form the framework for all electromagnetic phenomena, yield one and only ONE value for the speed of light – the same value everywhere, for all observers in the universe. A single value for the speed of light would have made sense measured relative to an “ether sea,” but there is no detectable ether sea!

The Great “Ether Conundrum” – Addressed by Einstein’s Relativity

In the absence of an ether sea against which to measure the speed of light derived by Maxwell, here is the problem which results, illustrated by two distant observers, A and B, who are rapidly traveling toward each other at half the speed of light: how can a single, consistent value for the speed of light apply both to the light measured by observer A as it leaves his flashlight (pointed directly at observer B) and to the very same light beam as measured by observer B when he receives it? Maxwell’s equations imply that each observer must measure the same beam of light at 186,000 miles per second relative to himself and his surroundings – no matter what the relative speed between the two observers. This made no sense, and it represented a very big problem for physicists!
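
Einstein’s resolution can be compressed into a single formula: velocities do not simply add. For speeds u and v along the same line, they combine as (u + v) / (1 + uv/c²), and anything traveling at c comes out at exactly c for every observer. A small sketch (my own) of the flashlight scenario above:

```python
C = 299_792_458.0  # speed of light, m/s

def relativistic_sum(u: float, v: float) -> float:
    """Einstein velocity addition: (u + v) / (1 + u*v / c^2)."""
    return (u + v) / (1.0 + u * v / (C * C))

# Observer B approaches observer A at half the speed of light.
# Naively, A's light beam should reach B at 1.5c; relativistically it is still exactly c:
print(relativistic_sum(0.5 * C, C) / C)   # 1.0

# Everyday speeds combine almost exactly as Newton assumed:
print(relativistic_sum(30.0, 30.0))       # ~60 m/s for two vehicles at highway speed
```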

The Solution and Third Great Revolution in Physics:
 Einstein’s Relativity Theories

As already mentioned, the solution to this “ether dilemma” involving the speed of light was provided by Albert Einstein in his 1905 theory of special relativity – the third great revolution in physics. Special relativity completely revamped the widely accepted but untenable notions of absolute space and absolute time – holdovers from Newtonian physics – and time and space are the underpinnings of any definition of “speed.” Einstein showed that a strange universe of slowing clocks and shrinking yardsticks is required to accommodate a speed of light that is the same for all observers regardless of their motion relative to one another. Einstein declared that constant speed of light to be a new, inviolable law of physics, and he showed, further, that no material object or signal can travel faster than light.

Two notions anchor special relativity and all its startling ramifications: the constant speed of light for all observers, and Einstein’s insistence that there is no way to measure one’s absolute position or velocity through empty space.

 The Year is 1900: Enter Max Planck and Quantum Physics –
The Fourth Great Revolution in Physics

The second nagging question facing the physics community in 1900 involved the spectral nature of the radiation emanating from a so-called black-body radiator as it is heated to higher and higher temperatures. Objects made increasingly hot emit light whose color shifts from predominantly red to orange to white to bluish as the temperature rises. The big problem in 1900 was this: experiments showed no sign of the large amounts of ultraviolet radiation that the physics of the day predicted should be produced at high temperatures. Classical physics, in fact, predicted a so-called “ultraviolet catastrophe,” with heated bodies generating huge levels of ultraviolet radiation – enough to damage the eyes with any significant exposure. The absence of any such radiation was, in itself, a catastrophe for physics, because it called into serious question our knowledge and assumptions about the atomic/molecular realm.

The German physicist, Max Planck, began tackling the so-called “ultraviolet catastrophe” disconnect as early as 1894. Using the experimental data available to him, Planck attempted to discern a new theory of spectral radiation for heated bodies which would match the observed results. Planck worked diligently on the problem but could not find a solution by working along conventional lines.

Finally, he explored an extremely radical approach – a technique which reflected his desperation. The resulting new theory matched the empirical results perfectly!

When Planck had completed formulation of his new theory in 1900, he called his son into his study and stated that he had just made a discovery which would change science forever – a rather startling proclamation for a conservative, methodical scientist. Planck’s new theory ultimately proved as revolutionary to physics as was Einstein’s theory of relativity which would come a mere five years later.

Max Planck declared that the radiation energy emanating from heated bodies is not continuous in nature; rather, the energy radiates in “bundles,” which he referred to as “quanta.” Furthermore, Planck specified the precise energy of these bundles through his famous equation:

E = h × f

where h is his newly declared “Planck’s constant” and f is the frequency of the radiation being considered. Here is a helpful analogy: the radiation energy from heated bodies had always been considered continuous – like water flowing through a garden hose. Planck’s new assertion maintained that radiation comes in bundles whose “size” is proportional to the frequency of the radiation being considered. Visualize water emanating from a garden hose in distinct spurts rather than as a continuous flow! Planck’s new theory of the energy quanta was the only way he saw to resolve the conflict between theory and experiment.
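
Planck’s relation is easy to put numbers to. Here is a small sketch (mine) using the modern value of h; the green-light and ultraviolet frequencies are illustrative choices.

```python
H = 6.62607015e-34    # Planck's constant, J*s
EV = 1.602176634e-19  # joules per electron-volt

def quantum_energy_joules(frequency_hz: float) -> float:
    """Planck's relation: E = h * f."""
    return H * frequency_hz

# Green light near the middle of the visible spectrum (~540 THz):
E = quantum_energy_joules(5.4e14)
print(E, "J  =", round(E / EV, 2), "eV per quantum")   # ~3.6e-19 J, about 2.2 eV

# Ultraviolet at ~1500 THz carries nearly three times the energy per quantum:
print(round(quantum_energy_joules(1.5e15) / EV, 2), "eV per quantum")
```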

The following chart shows the empirical spectral nature of black-body radiation at different temperatures. Included is a curve illustrating the “ultraviolet catastrophe” at 5000 kelvins (K) predicted by the classical physics of 1900: off-the-chart values of radiation in the ultraviolet range of short wavelengths (high frequencies).

[Chart: measured black-body radiation curves at three temperatures, with the classical prediction at 5000 K shown for comparison]

The chart plots radiated energy (vertical axis) versus radiation wavelength (horizontal axis) for each of three temperatures in kelvins (K). The wavelength of radiation is inversely proportional to its frequency, so higher-frequency ultraviolet radiation (beyond the violet end of the visible spectrum) appears at the left side of the graph, at the shorter wavelengths.

Note the part of the radiation spectrum consisting of frequencies in the visible-light range. The purple curve for 5000 K has its peak in the middle of the visible spectrum and falls toward zero at higher frequencies (shorter wavelengths). This experimental curve is consistent with Planck’s new theory and is drastically different from the black curve on the plot, which shows the radiation predicted at 5000 K by the theories in place before Planck’s revolutionary findings of 1900. Clearly, the high-frequency (short-wavelength) portion of that classical curve heads toward unbounded radiation energy in the ultraviolet range – a physically implausible result. Planck’s simple but revolutionary new radiation law, expressed by E = h × f, brought theory and experiment into perfect agreement.
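
Readers who want to reproduce a chart like this can do so directly from the two radiation laws: the classical (Rayleigh-Jeans) formula, which blows up at short wavelengths, and Planck’s law, which rolls over and falls toward zero. A minimal sketch using the standard textbook forms of both laws (my own illustration):

```python
import math

H = 6.62607015e-34   # Planck's constant, J*s
C = 299_792_458.0    # speed of light, m/s
K_B = 1.380649e-23   # Boltzmann constant, J/K

def planck(wavelength_m: float, temp_k: float) -> float:
    """Planck spectral radiance per unit wavelength, W / (m^2 * sr * m)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return a / (math.exp(H * C / (wavelength_m * K_B * temp_k)) - 1.0)

def rayleigh_jeans(wavelength_m: float, temp_k: float) -> float:
    """Classical, pre-1900 prediction: 2*c*k*T / wavelength^4 (diverges in the ultraviolet)."""
    return 2.0 * C * K_B * temp_k / wavelength_m**4

T = 5000.0  # kelvins
for nm in (100, 300, 550, 1000, 2000):   # ultraviolet through the visible into the infrared
    wl = nm * 1e-9
    print(f"{nm:5d} nm   Planck: {planck(wl, T):10.3e}   classical: {rayleigh_jeans(wl, T):10.3e}")

# At 100 nm the classical value is enormous while Planck's curve has already collapsed:
# the "ultraviolet catastrophe" that never shows up in the laboratory.
```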

Why Max Planck Won the 1918 Nobel Prize
in Physics for His Discovery of the Energy Quanta

One might be tempted to ask why the work of Max Planck is rated so highly relative to Einstein’s theories of relativity, which restructured no less than all of our assumptions regarding space and time! Here is the reason in a nutshell: Planck’s discovery led quickly to the subsequent work of Niels Bohr, Rutherford, de Broglie, Schrödinger, Pauli, Heisenberg, Dirac, and others who followed the clues inherent in Planck’s most unusual discovery and built the superstructure of atomic physics as we know it today. Our knowledge of the atom and its constituent particles stems directly from that subsequent work, which was born of Planck and his discovery. The puzzling absence of the “ultraviolet catastrophe” predicted by pre-1900 physics was ultimately explained by the realization that the atom itself radiates only in discrete amounts, which suppresses the enormous ultraviolet output that the old, classical theories of physics had predicted for heated bodies.

Albert Einstein in 1905: The Photoelectric Effect –
Light and its Particle Nature

Published in the same 1905 volume of the German scientific journal Annalen der Physik as Einstein’s revolutionary theory of special relativity was his paper on the photoelectric effect. In that paper, Einstein described light’s seeming particle behavior: electrons were knocked free of their atoms in metal targets by bombarding the targets with light in the form of discrete energy bundles called “photons,” which Einstein took to represent light energy at its most basic level. The revolutionary observation was that the intensity of the light (the number of photons) striking the metal target was not what determined whether electrons were knocked free: the frequency of the light was the governing factor. Below a threshold frequency, no electrons were liberated no matter how intense the light; above that threshold, raising the frequency raised the energy of the ejected electrons, while raising the intensity merely increased their number. Einstein showed that these photons, these bullets of light energy that displace electrons from their metal targets, carry discrete energies whose values depend only on the frequency of the light itself: the higher the frequency, the greater the energy of each photon. As with Planck’s characterization of radiation from heated bodies, photon energies involve Planck’s constant and frequency. Einstein’s findings went beyond Planck’s conceptualization of energy quanta by establishing the physical reality of light photons; Planck had interpreted his quanta as a feature of how atoms emit radiation rather than as discrete, free-standing realities. These findings earned Einstein the 1921 Nobel Prize in physics – for his paper on the photoelectric effect, and not for his work on relativity!
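
Einstein’s photoelectric relation fits in one line: the maximum kinetic energy of an ejected electron is hf minus the metal’s “work function,” the minimum energy needed to free an electron; below the threshold frequency nothing comes out at all, however bright the light. A sketch for illustration (the 2.3 eV work function used here is a commonly quoted approximate value for sodium, my choice of example):

```python
H = 6.62607015e-34    # Planck's constant, J*s
EV = 1.602176634e-19  # joules per electron-volt

def max_kinetic_energy_ev(frequency_hz: float, work_function_ev: float) -> float:
    """Einstein's photoelectric equation: KE_max = h*f - work function (zero below threshold)."""
    ke = H * frequency_hz / EV - work_function_ev
    return max(ke, 0.0)

WORK_FUNCTION_SODIUM_EV = 2.3  # approximate, commonly quoted value

# Red light (~450 THz, about 1.9 eV per photon) ejects nothing from sodium, however intense;
# violet light (~750 THz, about 3.1 eV) ejects electrons carrying roughly 0.8 eV:
print(max_kinetic_energy_ev(4.5e14, WORK_FUNCTION_SODIUM_EV))            # 0.0
print(round(max_kinetic_energy_ev(7.5e14, WORK_FUNCTION_SODIUM_EV), 2))  # ~0.8
```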

Déjà Vu All Over Again: Is Light a Particle or a Wave?

Along with Planck, Einstein is considered a father of quantum physics. The subsequent development by others of quantum mechanics (the working machinery of quantum physics) left Einstein sharply skeptical. For one thing, quantum physics and its principle of particle/wave duality dictate that light behaves both as particle and as wave, depending on the experiment conducted. That, in itself, would trouble a physicist like Einstein, for whom deterministic (cause-and-effect) physics was paramount, but other startling ramifications of quantum mechanics repelled him even more. The notion that events in the sub-atomic world could be statistical in nature rather than strictly cause-and-effect left Einstein cold. “God does not play dice with the universe,” was his verdict. Others, like Niels Bohr, the father of the modern model of the atom, believed the evidence undeniable that nature is governed at some level by chance.

In one of the great ironies of physics, Einstein, one of the two fathers of quantum physics, felt compelled to abandon his brain-child because of philosophical/scientific conflicts within his own psyche. He never completely came to terms with the new science of quantum physics – a situation which left him somewhat outside the greater mainstream of physics in his later years.

Like Einstein’s relativity theories, quantum physics has stood the test of time. Quantum mechanics works, and no experiment has ever contradicted its predictions. Despite the truly mysterious realm of the energy quanta, the science works beautifully. Perhaps Einstein was right: quantum mechanics, as currently formulated, may work just fine, yet still not be the final, complete picture of the sub-atomic world. No one could appreciate that possibility more than Einstein; after all, it was his general theory of relativity of 1916 which replaced Isaac Newton’s long-held and supremely useful force-at-a-distance theory of gravity with the more complete and definitive concept of four-dimensional, curved space-time.

By the way, and in conclusion, it is Newton’s mathematics-based science of dynamics (the science of force and motion) that defines the very first major upheaval in the history of physics – as recorded in his masterwork book from 1687, the Principia – the greatest scientific book ever written. Stay tuned.

Back-to-School Time: Have You Nurtured Your Student’s Curiosity Lately?

Yes, it is back-to-school time for many of the world’s youngsters. In America, late August and early September are when students return to school to meet the new teachers whom parents entrust to help educate their children.

Have you, as parents, guardians, or mentors nurtured your student’s curiosity this summer? My book on education, learning, and mentoring suggests that successful learning and top student performance stem from a healthy curiosity – the desire to know and understand the world around us. Such a “learning attitude” (or lack thereof) is influenced primarily by the home environment and the adults at home – not by the students’ school and teachers. Equipped with a good “learning attitude” acquired in the home, students prosper at school; without a proper attitude, many disinterested youngsters flounder in class while being easily distracted by social media and the associated electronic connectedness so prevalent today.

Sadly, many of these children will, in the course of their schooling, waste the most precious opportunity that society will ever offer them – a good education and a pathway to lifelong learning. It need not be that way, however.

My book is a hands-on, how-to manual for parenting/mentoring with the end goal of ensuring school success for students – especially in science and mathematics.

Nurturing Curiosity and Success in Science, Math, and Learning is available from Amazon for $14.95. This link will take you directly to Amazon and the book.

www.amazon.com/

The Monterey Bay Aquarium: A Wondrous, Humbling Experience!

It was every bit akin to visiting another universe – our experience at the fabulous Monterey Bay Aquarium in Monterey, California. The notion of fanciful space-aliens has nothing over the truly bizarre denizens of the world’s oceans! Evolution’s underwater handiwork with its seemingly endless variety is staggering.

[Photo: A Sea Nettle at the Aquarium]

Our granddaughters were visiting from Southern California this past week, and we decided to treat them to their first visit to the world-famous Monterey Bay Aquarium. It had been some years since Linda and I had visited, so we all eagerly anticipated the outing.

At seventy-four years of age, I have seen and done a lot, but I was not prepared for the personal impact of this, my most recent visit to the aquarium. I came away with two overwhelming reactions: first, the verification that fact is, indeed, stranger than fiction; second, that the scope and grandeur of nature easily swamp the significance of us individual humans and our little personal problems.

I have always taken some comfort in the implication that we, as individuals, have little significance in nature’s vast scheme. This may not be welcome news for many who believe in a “personal” creator, but it works for me as the conclusive evidence of a higher power! These emotions were especially stirred by the exhibits featuring swarming schools of seemingly identical silver anchovies and Pacific sardines swimming relentlessly in circles around the perimeters of large cylindrical tanks. Their behaviors seem somehow so symbolic of the human condition whereby we swim for all we are worth, all the while oblivious to the greater ocean of truth which surrounds us.

The smaller, gleaming silver anchovies exhibit the following interesting behavior: At any given moment, ten percent of them have their mouths (very) wide open for several seconds as they siphon up tiny micro-organisms in the water – for food. In human parlance, that is called “eating on the run!”

[Photo: northern anchovies]

Compared to the startling diversity in the oceans, our land-based animal life seems rather quaint and limited – even when comparing elephants to leopards. The pictures illustrate the point, but they are no substitute for a visit to the aquarium to see for yourself.


Sea Otters: More Fun than Any Other Creature

We witnessed the sea otter feeding/training demonstration. What fun those animals have in the water, and how agile they are! We humans look positively clumsy on land compared to otters in their natural habitat – water. Using the shellfish the otters crave, the handlers put their assigned otter through various exercises – like voluntarily entering a transport crate when given a hand signal, a behavior that comes in very handy whenever an otter must be moved within the facility. Given the chance to return to this world as an animal in some distant future, I would find the life of a sea otter most attractive; I cannot think of any creature that seemingly has more continuous fun in its environment than the ever-playful sea otter.

[Photo: Otter Lunchtime!]

The Monterey Bay Aquarium is a world-class facility for oceanographic research and wildlife study. For even the most casual visitor, it becomes quickly obvious just how much high-level expertise it takes to staff and operate a facility like this. Thanks to such places, our knowledge of the oceans and the life they contain grows steadily with each passing year.
