J. Robert Oppenheimer and the Atomic Bomb: Triumph and Tragedy

J. Robert Oppenheimer: Along with Albert Einstein, one of the most interesting and important figures in modern history. Although very different in worldview and personality, the names of these two men are both linked to arguably the most significant human endeavor and resultant “success” in recorded history. The effort in question was the monumental task of the United States government to harness the energy of the atom in a new and devastating weapon of war, the atomic bomb. The super-secret Manhattan Project was a crash program formally authorized by President Franklin Roosevelt on December 6, 1941. The program’s goal: In a time-frame of less than four years and against all odds, to capitalize on very recent scientific discoveries and rapidly develop an operational military weapon of staggering destructive power.

Albert Einstein and the Atomic Bomb

Albert Einstein, whose scientific resume ranks just behind that of Isaac Newton, had virtually no role in this weapons program save for two notable exceptions. First and foremost, it was Einstein’s follow-up paper to his milestone theory of special relativity in 1905 which showed that, contrary to long-standing belief, mass and energy are one and the same, theoretically convertible from one to the other. That relationship is expressed by the most famous equation in science, E = mc², where E is the energy inherent in mass, m is the mass in question, and c is the constant speed of light. One careful look at this relationship reveals its profundity. Since the speed of light is a very large number (300 million meters per second), a tiny bit of mass (material) converted into its energy equivalent yields a phenomenal amount of energy. Note that Einstein had proposed a theoretical, nonetheless real, relationship in his equation. The big question: Would it ever be possible to produce that predicted yield of energy in practice? In 1938, two chemists in Hitler’s Germany, Otto Hahn and Fritz Strassmann, demonstrated nuclear fission in the laboratory, on a tiny scale. That news spread quickly throughout the world physics community – like ripples on a giant pond. It now appeared feasible to harness the nuclear power inherent in the atom as expressed by Einstein’s equation.
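To get a feel for the scale of the relationship, here is a rough back-of-the-envelope calculation of my own (illustrative arithmetic only, not a figure from the historical record) for converting a single gram of matter entirely into energy:

```latex
% Energy from converting one gram (10^-3 kg) of mass entirely into energy
E = mc^2 = (10^{-3}\,\mathrm{kg}) \times (3 \times 10^{8}\,\mathrm{m/s})^2
         = 9 \times 10^{13}\,\mathrm{J}
% One ton of TNT releases roughly 4.184 x 10^9 joules, so
\frac{9 \times 10^{13}\,\mathrm{J}}{4.184 \times 10^{9}\,\mathrm{J/ton}}
         \approx 21{,}500\ \text{tons of TNT}
```

In other words, one gram of fully converted matter yields roughly the explosive energy of the bomb that destroyed Nagasaki – which is why fission, once demonstrated, looked at once so tantalizing and so terrifying.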

In August of 1939, alarmed by the recent news from Germany, Hungarian physicist Leo Szilard asked his colleague, Albert Einstein, to affix his signature to a letter addressed to President Roosevelt. The letter warned of recent German scientific advances and Germany’s sudden interest in uranium deposits in the Belgian Congo of Africa. Einstein, a German Jew who had fled his homeland in 1932 for fear of Hitler’s growing influence, dutifully but reluctantly signed his name to the letter. Einstein’s imprimatur on the letter was Szilard’s best hope of fixing Roosevelt’s attention on the growing feasibility of an atomic bomb. Einstein and many other European scientists were, from personal experience, justifiably terrified at the prospect of Hitler’s Germany acquiring such a weapon, and the Germans had first-class scientific talent available to tackle such a challenge.

Einstein, one of history’s great pacifists, was thus ironically tied to the atomic bomb program, but his involvement went no further. Einstein never worked on the project and, after the war when Germany was shown to have made no real progress toward a weapon, he stated: “Had I known that the Germans would not succeed in producing an atomic bomb, I never would have lifted a finger.”

Stranger Than Fiction: The High Desert of Los Alamos, New Mexico

By early 1943, peculiar “invitations” from Washington were being received by many of this country’s finest scientific/engineering minds. A significant number of these ranked among the world’s top physicists including Nobel Prize winners who had emigrated from Europe. These shadowy “requests” from the government called for the best and the brightest to head (with their families in many cases) to the wide-open high desert country of New Mexico. Upon arrival, they would be further informed (to a limited extent) of the very important, secret work to be undertaken there. I have always believed that fact is stranger than fiction, and much more interesting and applicable. What transpired at Los Alamos over the next three years under the direction of J. Robert Oppenheimer and Army General Leslie Groves is scarcely believable, and yet it truly happened, and it has changed our lives unalterably.

One of my favorite narratives from Jon Else’s wonderful documentary film on the atomic bomb, The Day After Trinity, beautifully describes the ludicrous situation: “Oppenheimer had brought scientists and their families fresh from distinguished campuses all over the country – ivied halls, soaring campaniles, vaulted chapels. Los Alamos was a boom town – hastily constructed wooden buildings, dirt streets, coal stoves, and [at one point] only five bathtubs / There were no sidewalks. The streets were all dirt. The water situation was always bad / It was not at all unusual to open your faucet and have worms come out.” Los Alamos was like a California gold-rush boom town, constructed in a jiffy yet housing the greatest assemblage of world-class scientific talent ever gathered in one location. General Groves once quipped, with humor and perhaps some frustration, that Los Alamos had the greatest assemblage of “crack-pots” the world has ever known.

As improbable as the situation and the task at hand appeared – even given an open checkbook from Roosevelt and Congress – Groves and Oppenheimer made it happen. I cannot think of any human endeavor in history so complex, so unlikely…and so “successful.” The triumph of NASA in space comes a close second, but even realizing JFK’s promise of a man on the moon by 1969 cannot top the extraordinary scenario which unfolded at Los Alamos, New Mexico – all largely shielded from view.

The initial (and only) test of the atomic bomb took place on July 16, 1945, on the wide expanse of the New Mexico desert near Alamogordo, some two hundred miles south of Los Alamos. The test was code-named “Trinity.” The accompanying picture shows Oppenheimer and General Groves at ground zero of the blast, the site of the steel tower from which the bomb was detonated. Evidence of desert sand fused into glass by the intense heat abounds. The test was a complete technical success – vindication for the huge government outlay and the dedication on the part of so many who put their lives on hold by moving to the high desert of New Mexico and literally “willing” their work to success for fear of the Germans. By July of 1945, however, Germany had been vanquished without having made any real progress toward an atomic bomb.

The World Would Never Be the Same

That first nuclear detonation signaled a necessary reset for much of human thought and behavior. Events that quickly followed demonstrated the power of that statement. Of immediate impact was the abrupt termination of World War II, brought about by two atomic bombs dropped on Japan just weeks after the first and only test of the device (Hiroshima, August 6, 1945; Nagasaki, August 9, 1945). The resulting destruction of these two cities accomplished what many thousands of invading U.S. troops might have taken months to complete – with terrible losses. The horrific effect of these two bombs on the people of Japan has been well documented since 1945. Many, including a significant number of those who worked on the development of these weapons, protested that such weapons should never be used again. Once the initial flush of “success” passed, the man most responsible for converting scientific theory into a practical weapon of mass destruction quickly realized that the “nuclear genie” was irretrievably out of the bottle, never to be predictably and reliably restrained. Indeed, Russia shocked the world by detonating its first atomic bomb in 1949. The inevitable arms race that Oppenheimer foresaw had already begun… the day after Trinity.

The Matter of J. Robert Oppenheimer, the Man

J. Robert Oppenheimer had been under tremendous pressure as technical leader of the super-secret Manhattan Project since being appointed by the military man in charge of the entire project, Army General Leslie Groves. Groves was a military man through and through, accustomed to the disciplined hierarchy of the service, yet he hand-picked as technical lead for the whole program the brilliant physicist and mercurial liberal intellectual, J. Robert Oppenheimer – the most unlikely of candidates. The past communist ties of Oppenheimer’s wife and brother prompted the FBI to vigorously protest the choice. Groves got his way, however.

Groves’ choice of J. Robert Oppenheimer for the challenging and consuming task of technical leader on the project proved to be a stroke of genius on his part; virtually everyone who worked on the Manhattan Project agreed there was no one but Oppenheimer who could have made it happen as it did.

“Oppie,” as he was known and referred to by many on the Manhattan Project, directed the efforts of hundreds of the finest scientific and engineering minds on the planet. Foreign-born Nobel Prize winners in physics were very much in evidence at Los Alamos. Despite the formidable scientific credentials of such luminaries as Hans Bethe, I.I. Rabi, Edward Teller, and Enrico Fermi, Oppenheimer proved to be their intellectual equal. Oppenheimer either already knew and understood the nuclear physics, the chemistry, and the metallurgy involved at Los Alamos, or he very quickly learned it from the others. His intellect was lightning-quick and very deep. His interests extended well beyond physics, as evidenced by his great interest in French metaphysical poetry and his multilingual capability. Almost more incredible than his technical grasp of all the work underway at Los Alamos was his unanticipated ability to manage all aspects of this, the most daring, ambitious, and important scientific/engineering endeavor ever undertaken. People who knew well his scientific brilliance from earlier years were amazed at the overnight evolution of “Oppie, the brilliant physicist and academic” into “Oppie, the effective, efficient manager” and co-leader of the project with General Groves.

Indelibly imprinted upon my mind is the interview scene with famed Nobel laureate Hans Bethe conducted by Jon Else, producer of The Day After Trinity. Bethe was Oppie’s pick to head the theoretical physics division on the project. The following comments of Bethe, himself a giant in theoretical physics, cast a penetrating light on the intellectual brilliance of J. Robert Oppenheimer and his successful role in this, the most daring and difficult scientific project ever attempted:

– “He was a tremendous intellect. I don’t believe I have known another person who was quite so quick in comprehending both scientific and general knowledge.”
– “He knew and understood everything that went on in the laboratory, whether it was chemistry, theoretical physics, or machine-shop. He could keep it all in his head and coordinate it. It was clear also at Los Alamos, that he was intellectually superior to us.”

The work was long, hard, and often late into the night at Los Alamos for its two thousand residents, but there was a social life at Los Alamos, and, according to reports, Robert Oppenheimer was invariably the center of attention. He could and often did lead discussions given his wide-ranging knowledge …on most everything! Dorothy McKibben (seated on Oppenheimer’s right in the following picture) was the “Gatekeeper of Los Alamos” according to all who (necessarily) passed through her tiny Manhattan Project office at 109 East Palace Avenue, Santa Fe, New Mexico. There, they checked in and collected the credentials and maps required to reach the highly secured desert site of Los Alamos. Ms. McKibben was effusive in her praise of Oppenheimer: “If you were in a large hall, and you saw several groups of people, the largest groups would be hovering around Oppenheimer. He was great at a party, and women simply loved him and still do.”

The Nuclear Weapons Advantage Proves to be Short-Lived

What was believed in 1945 to represent a long-term, decided military advantage for the United States turned out to be an illusion, much as Oppenheimer likely suspected. With the help of spies Klaus Fuchs at Los Alamos, Julius Rosenberg, and others, Russia detonated its first atomic bomb only four years later.

Oppenheimer knew better, because he understood the physics involved and that, once demonstrated, nuclear weapons would rapidly pose a problem for the world community. When interviewed years later at Princeton, where he was director of the Institute for Advanced Study (and Albert Einstein’s “boss”), he is shown in The Day After Trinity responding to the question, “[Can you tell us] what your thoughts are about the proposal of Senator Robert Kennedy that President Johnson initiate talks with the view to halt the spread of nuclear weapons?” Oppenheimer replied rather impatiently, “It’s twenty years too late. It should have been done the day after Trinity.”

J. Robert Oppenheimer fully appreciated, on July 16, 1945, the dangers inherent in the nuclear genie let loose from the bottle. His fears were well founded. Within a few years after Los Alamos, talk surfaced of a new, more powerful bomb based on nuclear fusion rather than fission, nevertheless still in accordance with E = mc². This became popularly known as the “hydrogen bomb.” Physicist Edward Teller now stepped forward to promote its development in opposition to Oppenheimer’s stated wish to curtail the further use and development of nuclear weapons.

Arguments raged over the “Super” bomb, as it was designated, and Teller prevailed. The first device was detonated by the U.S. in 1952. A complex and toxic cocktail of Oppenheimer’s reluctance toward development of the Super combined with the past communist leanings of his wife, brother Frank, and other friends led to the Atomic Energy Commission, under President Eisenhower, revoking Oppenheimer’s security clearance in 1954. That action ended any opportunity for Oppenheimer even to continue advising Washington on nuclear weapons policy. The Oppenheimer file was thick, and the ultimate security hearings were dramatic and difficult for all involved. As for the effect on J. Robert Oppenheimer, we have the observations of Hans Bethe and I.I. Rabi, both participants at Los Alamos and Nobel Prize winners in physics:

– I.I. Rabi: “I think to a certain extent it actually almost killed him, spiritually, yes. It achieved just what his opponents wanted to achieve. It destroyed him.”
– Hans Bethe: “He had very much the feeling that he was giving the best to the United States in the years during the war and after the war. In my opinion, he did. But others did not agree. And in 1954, he was hauled before a tribunal and accused of being a security risk – a risk to the United States. A risk to betray secrets.”

Later, in 1963, attitudes softened and Edward Teller nominated Oppenheimer for the prestigious Enrico Fermi Award, which was presented by President Johnson. As I.I. Rabi observed, however, the preceding events had, for all intents and purposes, already destroyed him. Oppenheimer was a conflicted man with a brilliant, wide-ranging intellect. While one might readily agree with Hans Bethe’s assessment that Oppenheimer felt he was “giving the best to the United States in the years during and after the war,” there is perhaps more to the story than a significantly patriotic motivation. Oppenheimer was a supremely competent and confident individual whose impatient nature was tinged with a palpable arrogance. These characteristics often worked to his disadvantage with adversaries and co-workers.
Then there was the suggestion that, in addition to his patriotic motives, Oppenheimer was seized by “the glitter and the power of nuclear weapons” and by the unprecedented opportunity to do physics on a grand scale at Los Alamos – major motivations in their own right. Other colleagues on the project later confessed to feeling the glitter and power of nuclear weapons themselves. A brilliant man of many contradictions was Oppenheimer – that much is certain. Nearly as certain is that the man was haunted afterward by misgivings concerning his pivotal role, whatever his motivations, in letting loose the nuclear genie. The sadness in his eyes late in life practically confirms the suspicion. That is the tragedy of J. Robert Oppenheimer. Triumph has a way of exacting its penalty, its pound of flesh. I can think of no better example than Oppenheimer.

Immediately upon hearing of the bombing of Hiroshima, Hans Bethe recalled, “The first reaction which we had was one of fulfillment. Now it has been done. Now the work which we have been engaged in has contributed to the war. The second reaction, of course, was one of shock and horror. What have we done? What have we done? And the third reaction: It shouldn’t be done again.”

Nuclear Weapons: The Current State and Future Outlook

In the headlines of today’s news broadcasts as I write this is the looming threat of North Korean nuclear-tipped intercontinental ballistic missiles. The North Koreans have developed and tested nuclear warheads and are currently test-launching long-range missiles which could reach the U.S. mainland, as far east as Chicago. Likewise, Iran is close to having both nuclear weapons and targetable intermediate-range missiles. Nuclear proliferation is alive and well on this earth.

To illustrate the present situation, consider one staple of the U.S. nuclear arsenal – the one-megaton thermonuclear, or hydrogen, bomb with the explosive equivalent of just over one million tons of TNT. That explosive energy is roughly forty-five times that of the plutonium fission bomb which destroyed the city of Nagasaki, Japan (twenty-two thousand tons of TNT). The number of such powerful weapons in today’s U.S. and Russian nuclear stockpiles is truly staggering, especially when one considers that a single one-megaton weapon could essentially flatten and incinerate the core of Manhattan, New York. Such a threat is no longer limited to a device dropped from an aircraft. Nuclear-tipped ICBMs present an even more ominous threat.
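The comparison quoted above is simple arithmetic (my own check of the stated figures):

```latex
% One-megaton thermonuclear weapon versus the ~22-kiloton Nagasaki bomb
\frac{1{,}000{,}000\ \text{tons TNT}}{22{,}000\ \text{tons TNT}} \approx 45
```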

The surprise success of the first Russian earth-orbiting satellite, “Sputnik,” in 1957 had far more significance than the loss of prestige in space for the United States. Accordingly, the second monumental and historic U.S. government program – on the very heels of the Manhattan Project – was heralded by the creation of NASA in 1958 and its role in the race to the moon. President John F. Kennedy issued his audacious challenge in 1961 for NASA to regain lost technical ground in rocketry by being first to put a man on the moon …in the decade of the sixties – in less than nine years! Many in the technical community thought the challenge was simply “nuts” given the state of U.S. rocket technology in 1961. As with the then very-recent, incredibly difficult and urgent program to build an atomic bomb, the nation once again accomplished the near-impossible by landing Armstrong and Aldrin on the moon on July 20, 1969 – well ahead of the Russians. And it was important that we surpassed Russia in rocket technology, for our ICBMs, the key delivery vehicles for nuclear weapons and thus crucial to U.S. strategic defense, were developed hand-in-hand with this country’s efforts in space.

“Fat Man,” the bomb used on Nagasaki – 22 kilotons of TNT

Photo: Paul Shambroom

B83 1 megaton hydrogen bombs…compact and deadly

The above picture of a man casually sweeping the warehouse floor in front of nearly ten megatons of explosive, destructive power – enough to level the ten largest cities in America – gives one pause to reflect. On our visit to Los Alamos in 2003, I recall the uneasy emotions I felt merely standing next to a dummy casing of this bomb in the visitor’s center and reflecting on the awesome power of the “live” device. Minus their huge development and high “delivery” costs, such bombs are, in fact, very “cheap” weapons from a military point of view.

One conclusion: Unlike the man with the broom in the above picture, we must never casually accept the presence of these weapons in our midst. One mistake, one miscalculation, and nuclear Armageddon may be upon us. The collective angels of man’s better nature had better soon decide on a way to render such weapons unnecessary on this planet. Albert Einstein expressed the situation elegantly and succinctly:

“The unleashing of [the] power of the atom has changed everything but our modes of thinking and thus we drift toward unparalleled catastrophes.”

Under a brilliant New Mexico sky on October 16, 1945, the residents of the Los Alamos mesa gathered for a ceremony on J. Robert Oppenheimer’s last day as director of the laboratory. The occasion: The receipt of a certificate of appreciation from the Secretary of War honoring the contributions of Oppenheimer and Los Alamos.

In his remarks, Oppenheimer stated: “It is our hope that in years to come we may look at this scroll, and all that it signifies, with pride. Today, that pride must be tempered with a profound concern. If atomic bombs are to be added as new weapons to the arsenals of a warring world, or to the arsenals of nations preparing for war, then the time will come when mankind will curse the names of Los Alamos and Hiroshima. The peoples of the world must unite, or they will perish.”

In today’s world, each step along the path of nuclear proliferation brings humanity ever closer to the ultimate fear shared by J. Robert Oppenheimer and Albert Einstein. The world had best heed their warnings.

Sir Isaac Newton: “I Can Calculate the Motions of the Planets, but I Cannot Calculate the Madness of Men”

Isaac Newton, the most incisive mind in the history of science, reportedly uttered that sentiment about human nature. Why would he express such negativity about his fellow humans? Newton’s scientific greatness stemmed from his ability to see well beyond human horizons. His brilliance was amply demonstrated in his great book, Philosophiae Naturalis Principia Mathematica, in which he logically constructed his “system of the world” using mathematics. The book’s title translates from Latin as Mathematical Principles of Natural Philosophy, often shortened to “the Principia” for convenience.

The Principia is the greatest scientific book ever published. Its enduring fame reflects Newton’s ground-breaking application of mathematics, including aspects of his then-fledgling calculus, to the seemingly insurmountable difficulties of explaining motion physics. An overwhelming challenge for the best mathematicians and “natural philosophers” (scientists) in the year 1684 was to demonstrate mathematically that the planets in our solar system should revolve around the sun in elliptically shaped orbits as opposed to circles or some other geometric path. The fact that they do move in elliptical paths was carefully observed by Johannes Kepler and noted in his 1609 masterwork, Astronomia Nova.

In 1687, Newton’s Principia was published after three intense years of effort by the young, relatively unknown Cambridge professor of mathematics. Using mathematics and his revolutionary new concept of universal gravitation, Newton provided precise justification of Kepler’s laws of planetary motion in the Principia. In the process, he revolutionized motion physics and our understanding of how and why bodies of mass, big and small (planets, cannonballs, etc.), move the way they do. Newton did, indeed, as he stated, show us in the Principia how to calculate the motion of heavenly bodies.

In his personal relationships, Newton found dealing with people and human nature to be even more challenging than the formidable problems of motion physics. As one might suspect, Newton did not easily tolerate fools and pretenders in the fields of science and mathematics – “little smatterers in mathematicks,” he called them. Nor did he tolerate much of basic human nature and its shortcomings.

In the Year 1720, Newton Came Face-to-Face with
His Own Human Vulnerability… in the “Stock Market!”

In 1720, Newton’s own human fallibility was clearly laid bare as he invested foolishly and lost a small fortune in one of investing’s all-time market collapses. Within our own recent history, we have suffered through the stock market crash of 1929 and the housing market bubble of 2008/2009. In these more recent “adventures,” society and government had allowed human nature and its propensity for greed to over-inflate Wall Street to a ridiculous extent, so much so that a collapse seemed quite inevitable to any sensible person…and still the speculation continued.

Have you ever heard of the great South Sea Bubble in England? Investing in the South Sea Company – a government-sponsored trading and finance venture out of London – became a favorite pastime of influential Londoners in the early eighteenth century. Can you guess who found himself caught up in the glitter of potential investment returns only to end up losing a very large sum? Yes, Isaac Newton was that individual, along with thousands of others.

It was this experience that occasioned the remark about his own inability to calculate the madness of men (including himself)!

Indeed, he should have known better than to re-enter the government-sponsored South Sea enterprise after initially making a tidy profit from an earlier investment in the stock. As can be seen from the graph below, Newton re-invested heavily in the South Sea offering for the second time as the bubble neared its peak and just prior to its complete collapse. Newton lost 20,000 English pounds (three million dollars in today’s valuations) when the bubble suddenly burst.

Clearly, Newton’s comment, which is the theme of this post, reflects his view that human nature is vulnerable to fits of emotion (like greed, envy, and ambition) which in turn provoke foolish, illogical behaviors. When Newton looked in the mirror after his ill-advised financial misadventure, he saw staring back at him the very madness of men against which he then proceeded to rail! Knowing Newton through the many accounts of his life that I have studied, I can well imagine that his financial fiasco must have been a very tough pill for him to swallow. Many are the times in his life that Newton “railed” to vent his anger against something or someone; his comment concerning the “madness of men” is typical of his outbursts. Certainly, he could disapprove of his fellow man for fueling such an obvious investment bubble. In the end, and most painful for him, was his realization that he had paid a stiff price for foolishly ignoring the bloody obvious. For anyone who has risked and lost on Wall Street, the mix of feelings is well understood. Even the great Newton had his human vulnerabilities – in spades, and greed was one of them. One might suspect that Newton, the absorbed scientist, was merely naïve when it came to money matters.

That would be a very erroneous assumption. Sir Isaac Newton held the top-level government position of Master of the Mint in England during those later years of his scientific retirement – in charge of the entire coinage of the realm!


For more on Isaac Newton and the birth of the Principia click on the link: https://reasonandreflection.wordpress.com/2013/10/27/the-most-important-scientific-book-ever-written-conceived-in-a-london-coffee-house/

Sir Humphry Davy: Pioneer Chemist and His Invention of the Coal Miner’s “Safe Lamp” at London’s Royal Institution – 1815

Among the many examples to be cited of science serving the cause of humanity, one story stands out as exemplary. That narrative profiles a young, pioneering “professional” chemist and his invention which saved the lives of thousands of coal miners while enabling the industrial revolution in nineteenth-century England. The young man was Humphry Davy, who quickly rose to become the most famous chemist/scientist in all of England and Europe by the year 1813. His personal history and the effects of his invention on the growth of “professionalism” in science are a fascinating story.

The year was 1799, and a significant event had occurred. The place: London, England. The setting: The dawning of the industrial revolution, shortly to engulf England and most of Europe. The significant event of which I speak: The chartering of a new, pioneering entity located in the fashionable Mayfair district of London. In 1800, the Royal Institution of Great Britain began operation in a large building at 21 Albemarle Street. Its pioneering mission: To further the cause of scientific research/discovery, particularly as it serves commerce and humanity.

21 Albemarle Street, as it was then

The original staff of the Royal Institution was tiny, headed by its founder, the notable scientist and bon vivant, Benjamin Thompson, also known as Count Rumford. Quickly, by 1802, a few key members of the founding staff, including Rumford, were gone, and the fledgling organization found itself in disarray and close to closing its doors. Just one year earlier, in 1801, two staff additions had materialized, men who were destined to make their scientific marks in physics and chemistry while righting the foundering ship of the R.I. by virtue of their brilliance – Thomas Young and the object of this post, a young, relatively unknown, pioneering chemist from Penzance, Cornwall: Humphry Davy.

By the year 1800, the industrial revolution was gaining momentum in England and Europe. Science and commerce had already begun to harness the forces of nature required to drive industrial progress rapidly forward. James Watt had perfected the steam engine, whose motive horsepower was bridled and serving the cause by the year 1800. The looming industrial electrical age was to dawn two decades later, spearheaded by Michael Faraday, the most illustrious staff member the Royal Institution would ever have and one of the greatest physicists in the history of science.

In the most unlikely of scenarios at the Royal Institution, Humphry Davy interviewed and hired the very young Faraday as a lab assistant (essentially a lab “gofer”) in 1813. By that time, Davy’s star had risen as the premier chemist in England and Europe; little did he know that the young Faraday, who had less than a grade-school education and who had worked previously as a bookbinder, would, in twenty short years, ascend to the pinnacle of physics and chemistry and proceed to father the industrial electrical age. The brightness of Faraday’s scientific star soon eclipsed even that of Davy, his illustrious benefactor and supervisor.

For more on that story click on this link to my previous post on Michael Faraday: https://reasonandreflection.wordpress.com/2013/08/04/the-electrical-age-born-at-this-place-and-fathered-by-this-great-man/

Wanted: Ever More Coal from England’s Mines 
at the Expense of Thousands Lost in Mine Explosions

Within two short years of obtaining his position at the Royal Institution in 1813, young Faraday found himself working with his idol/mentor Davy on an urgent research project – a chemical examination of the properties of methane gas, or “fire damp,” as it was known by the “colliers,” or coal miners.

The need for increasing amounts of coal to fuel the burgeoning boilers and machinery of the industrial revolution had forced miners deeper and deeper underground in search of rich coal veins. Along with the coal they sought far below the surface, the miners encountered larger pockets of methane gas which, when exposed to the open flame of their miner’s lamp, resulted in a growing series of larger and more deadly mine explosions. The situation escalated to a national crisis in England and resulted in numerous appeals for help from the colliers and from national figures.

By 1815, Humphry Davy at the Royal Institution had received several petitions for help, one of which came from a Reverend Dr. Gray from Sunderland, England, who served as a spokesman/activist for the colliers of that region.

Davy and the Miner’s Safe Lamp:
Science Serving the “Cause of Humanity”

Working feverishly from August into October of 1815, Davy and Faraday produced what was to become known as the “miner’s safe lamp,” an open-flame lamp designed not to ignite the pockets of methane gas found deep underground. The first announcement of Davy’s progress and success in his work came in this historic letter to the Reverend Gray dated October 30, 1815.


The announcement heralds one of the earliest, concrete examples of chemistry (and science) put to work to provide a better life for humanity.

Royal Institution
Albermarle St.
Oct 30

 My Dear Sir

                               As it was in consequence of your invitation that I endeavored to investigate the nature of the fire damp I owe to you the first notice of the progress of my experiments.

 My results have been successful far beyond my expectations. I shall inclose a little sketch of my views on the subject & I hope in a few days to be able to send a paper with the apparatus for the Committee.

 I trust the safe lamp will answer all the objects of the collier.

 I consider this at present as a private communication. I wish you to examine the lamps I had constructed before you give any account of my labours to the committee. I have never received so much pleasure from the results of my chemical labours, for I trust the cause of humanity will gain something by it. I beg of you to present my best respects to Mrs. Gray & to remember me to your son.

 I am my dear Sir with many thanks for your hospitality & kindness when I was at Sunderland.

                                                              Your….

                                                                             H. Davy

This letter is clearly Davy’s initial announcement of a scientifically based invention which ultimately had a pronounced real and symbolic effect on the nascent idea of “better living through chemistry” – a phrase I recall from DuPont’s early television advertising.


In 1818, Davy published his book on the urgent but thorough scientific researches he and Faraday conducted in 1815 on the nature of the fire damp (methane gas) and its flammability.


Davy’s coal miner’s safety lamp was the subject of papers presented by Davy before the Royal Society of London in 1816. The Royal Society was, for centuries after receiving its royal charter from King Charles II in 1662, the foremost scientific body in the world. Sir Isaac Newton, the greatest scientific mind in history, presided as its president from 1703 until his death in 1727. The Society’s presence and considerable influence are still felt today.

Davy’s safe lamp had an immediate effect on mine explosions and miner safety, although there were problems which required refinements to the design. The first models featured a wire gauze cylinder surrounding the flame chamber; the gauze conducted heat away from the flame, keeping the surrounding air/methane mixture below its ignition temperature. This approach took advantage of the flammability characteristics of methane gas which had been studied so carefully by Davy and his recently hired assistant, Michael Faraday. Ultimately, the principles of the Davy lamp were refined sufficiently to allow the deep-shaft mining of coal to continue in relative safety, literally fueling the industrial revolution.

Humphry Davy was a most unusual individual, as much poet and philosopher as scientist. He was close friends with, and a kindred spirit to, the poets Coleridge, Southey, and Wordsworth. He relished rhetorical flourish and exhibited a personal idealism in his earlier years, a trait on open display in the letter to the Reverend Gray, shown above, regarding his initial success with the miner’s safe lamp.

“I have never received so much pleasure from the results of my chemical labours, for I trust the cause of humanity will gain something by it.”

As proof of the sincerity of this sentiment, Davy refused to patent his valuable contribution to the safety of thousands of coal miners!

Davy has many scientific “firsts” to his credit:

-Experimented with the physiological effects of the gas nitrous oxide (commonly known as “laughing gas”) and first proposed it as a possible medical/dental anesthetic – which it indeed became decades later, in the 1840s.

-Pioneered the new science of electrochemistry using the largest voltaic pile (battery) in the world, constructed for Davy in the basement of the R.I. Alessandro Volta first demonstrated the principles of the electric pile in 1800, and within two years, Davy was using his pile to perfect electrolysis techniques for separating and identifying “new” fundamental elements from common chemical compounds.

-Separated/identified the elements potassium and sodium in 1807, soon followed by others such as calcium and magnesium.

-In his famous, award-winning Bakerian Lecture of 1806, On Some Chemical Agencies of Electricity, Davy shed light on the entire question concerning the constituents of matter and their chemical properties.

-Demonstrated the “first electric light” in the form of an electric arc-lamp which gave off brilliant light.

-Wrote several books including Elements of Chemical Philosophy in 1812.

In addition to his pioneering scientific work, Davy’s heritage still resonates today for other, more general reasons:

-He pioneered the notion of “professional scientist,” working, as he did, as paid staff in one of the world’s first organized/chartered bodies for the promulgation of science and technology, the Royal Institution of Great Britain.

-As previously noted, Davy is properly regarded as the savior of the Royal Institution. Without him, its doors surely would have closed after only two years. His public lectures in the Institution’s lecture theatre quickly became THE rage of established society in and around London. Davy’s charismatic and informative presentations brought the excitement of the “new sciences” like chemistry and electricity front and center to both ladies and gentlemen. Ladies were notably and fashionably present at his lectures, swept up by Davy’s personal charisma and seduced by the thrill of their newly acquired knowledge… and enlightenment!


The famous 1802 engraving/cartoon by satirist/cartoonist James Gillray
Scientific Researches!….New Discoveries on Pneumaticks!…or…An
Experimental Lecture on the Power of Air!

This very famous hand-colored engraving from 1802 satirically portrays an early public demonstration, in the lecture hall of the Royal Institution, of the powers of nitrous oxide (laughing gas). Humphry Davy is shown manning the gas-filled bellows! Note the well-heeled gentry in the audience, including many ladies of London. Davy’s scientific reputation led to the honor of knighthood and, later, the English title of Baronet, thus making him Sir Humphry Davy.

The lecture tradition at the R.I. was begun by Davy in 1801 and continued for many years thereafter by the young, largely uneducated man Davy himself had hired as lab assistant in 1813. Michael Faraday was to become, in only eight short years, the long-tenured shining star of the Royal Institution and a physicist whose contributions to science surpassed those of Davy and rank but one level below the legacies of Galileo, Newton, Einstein, and Maxwell. Faraday’s lectures at the R.I. were brilliantly conceived and presented – a must for young scientific minds, both professional and public – and the Royal Institution in London remained a focal point of science for more than three decades under Faraday’s reign there.


The charter and by-laws of the R.I. published in 1800 and an admission ticket to Michael Faraday’s R.I. lecture on electricity written and signed by him: “Miss Miles or a friend / May 1833”

Although once again facing economic hard times, the Royal Institution exists today – in the same original quarters at 21 Albemarle Street. Its fabulous legacy of promulgating science for over 217 years would not exist were it not for Humphry Davy and Michael Faraday. It was Davy himself who ultimately offered that the greatest of all his discoveries was …Michael Faraday.

Charles Darwin’s Journey on the Beagle: History’s Most Significant Adventure

In 1831, a young, unknown, amateur English naturalist boarded the tiny ship, HMS Beagle, and embarked, as crew member, on a perilous, five-year journey around the world. His observations and the detailed journal he kept of his various experiences in strange, far-off lands would soon revolutionize man’s concept of himself and his place on planet earth. Darwin’s revelations came in the form of his theory of natural selection – popularly referred to as “evolution.”

HMS Beagle in the Galapagos – painting by John Chancellor

Since the publication of his book, On the Origin of Species in 1859, which revealed to the scientific community his startling conclusions about all living things based on his voyage journal, Darwin has rightfully been ranked in the top tier of great scientists. In my estimation, he is the most important and influential natural scientist of all time, and I would rank him right behind Isaac Newton and Albert Einstein as the most significant and influential scientific figures of modern times.

Young Charles Darwin enrolled at the University of Edinburgh in 1825 to pursue a career in medicine. His father, a wealthy, prominent physician, had attended Edinburgh and, indeed, exerted considerable influence on young Charles to follow him into a medical career. At Edinburgh, the sixteen-year-old Darwin quickly found the study of anatomy with its dissecting theatre an odious experience. More than once, he had to flee the theatre to vomit outside after witnessing the dissection process. The senior Darwin, although disappointed in his son’s unsuitability for medicine, soon arranged for Charles to enroll at Cambridge University to study for the clergy. In Darwin’s own words: “He [the father] was very properly vehement against my turning an idle sporting man, which seemed my probable destination.”

Darwin graduated tenth in his class of 168 with a B.A. and very little interest in the clergy! During his tenure at Cambridge, most of young Darwin’s spare time was spent indulging his true and developing passion: collecting insects, with a special emphasis on beetles. Along the way, he became good friends with John Stevens Henslow, professor of botany, ardent naturalist, and kindred spirit to the young Charles.

Wanted: A Naturalist to Sail On-Board the Beagle

On 24 August, 1831, in one of history’s most prescient communiques, Professor Henslow wrote his young friend and protégé: “I have been asked by [George] Peacock…to recommend him a naturalist as companion to Capt. Fitzroy employed by Government to survey the S. extremity of America [the coasts of South America]. I have stated that I considered you to be the best qualified person I know of who is likely to undertake such a situation. I state this not on the supposition of ye being a finished naturalist, but as amply qualified for collecting, observing, & noting any thing worthy to be noted in natural history.” Seldom in history has one man “read” another so well in terms of future potential as did Henslow in that letter to young Darwin!

Charles’ father expressed his opposition to the voyage, in part, on the following grounds as summarized by young Darwin:

-That such an adventure could prove “disreputable to my [young Darwin’s] character as a Clergyman hereafter.”

-That it seems “a wild scheme.”

-That the position of naturalist “not being [previously] accepted there must be some serious objection to the vessel or expedition.”

-That [Darwin] “should never settle down to a steady life hereafter.”

-That “it would be a useless undertaking.”

The young man appealed to his uncle Josiah Wedgwood [of pottery family fame], whose judgement he valued. Scientific history hung in the balance as Uncle Josiah promptly weighed in with the senior Darwin, offering convincing arguments in favor of the voyage. In rebuttal to the objection from Darwin’s father that “it would be a useless undertaking,” the uncle reasoned: “The undertaking would be useless as regards his profession [future clergyman], but looking upon him as a man of enlarged curiosity, it affords him the opportunity of seeing men and things as happens to few.” Enlarged curiosity, indeed! How true that proved to be. The senior Darwin then made his decision in the face of Uncle Josiah’s clear vision and counsel: Despite lingering reservations, he gave his permission for Charles to embark on the historic sea voyage, one which, more than any other, changed mankind’s sense of self. Had the decision been otherwise, Darwin’s abiding respect for his father’s opinion and authority would have bequeathed the world yet another clergyman while greatly impeding the chronicle of man and all living things on this planet.

On 27 December, 1831, HMS Beagle with Darwin aboard put out to sea, beginning an adventure that would circle the globe and take almost five years. Right from the start, young Charles became violently seasick, often confined to his swaying hammock hanging in the cramped quarters of the ship. Seasickness dogged young Darwin throughout the voyage. I marvel at the fortitude displayed by this young, recently graduated “gentleman from Cambridge” as he undertook such a daunting voyage. Given that the voyage would entail many months at sea, under sail, Capt. Fitzroy and Darwin had agreed from the start that Charles would spend most of his time on land, in ports of call, while the Beagle would busy itself surveying the local coastline per its original government charter. While on land, Darwin’s mission was to observe and record what he saw and experienced, concentrating, of course, on the flora, fauna, and geology of the various diverse regions he would visit.

St. Jago, largest of the Cape Verde Islands off the west coast of Africa, was the Beagle’s first stop on 16 January, 1832. It was here that Darwin made one of his first significant observations. Quoting from his journal: “The geology of this island is the most interesting part of its natural history. On entering the harbour, a perfectly horizontal white band in the face of the sea cliff, may be seen running for some miles along the coast, and at the height of about forty-five feet above the water. Upon examination, this white stratum is found to consist of calcareous [calcium] matter, with numerous shells embedded, most or all of which now exist on the neighboring coast.”

Darwin goes on to conclude that a stratum of sea-shells very much higher than the current water line speaks to ancient, massive upheavals of the earth in the region. From the simple, focused collector of beetles in his Cambridge days, Darwin had now become obsessed with the bigger picture of nature, a view which embraced the importance of geology/environment as key to decoding nature’s secrets.

In a fascinating section of his journal, Darwin describes his astonishment at the primitive state of the native inhabitants of Tierra Del Fuego, at the southern tip of South America. From the journal entry of 17 December, 1832: “In the morning, the Captain sent a party to communicate with the Fuegians. When we came within hail, one of the four natives who were present advanced to receive us, and began to shout most vehemently, wishing to direct us where to land. When we were on shore the party looked rather alarmed, but continued talking and making gestures with great rapidity. It was without exception the most curious and interesting spectacle I ever beheld: I could not have believed how wide was the difference between savage and civilized man; it is greater than between a wild and domesticated animal, inasmuch as in man there is a greater power of improvement.” A separate reference I recall reading referring to Darwin’s encounter with the Fuegians stated that he could scarcely believe that the naked, dirty, and primitive savages before his eyes were of the same species as the sherry-sipping professors back at Cambridge University – so vividly stated.

On 2 October, 1836, the Beagle arrived at Falmouth, Cornwall, her nearly five-year journey circumnavigating the globe complete. Throughout the trip, Darwin recorded life on the high seas and, most importantly, his myriad observations on the geology of the many regions visited on foot and horseback as well as the plant and animal life.

I often invoke the mantra to which I ardently subscribe: that fact is always stranger than fiction…and so much more interesting and important. Picturing Darwin, the elite Englishman and budding naturalist, riding horseback amidst the rough-hewn vaqueros [cowboys] of Chile speaks to the improbability of the entire venture. When studying Darwin, it quickly becomes clear to the reader that his equable nature and noble intents were obvious to those whose approval and cooperation were vital for the success of his venture. That was particularly true of the seaman crew of the Beagle and of Capt. Fitzroy, whose private cabin on the ship Darwin shared. Fortunately, Fitzroy was a man of considerable ability and efficiency in captaining the Beagle. He was, at heart, a man sensible of the power and importance of scientific knowledge, and that made his less admirable qualities bearable to Darwin. The crew made good-natured fun of the intellectual, newbie naturalist in their midst, but spared no effort in helping Darwin pack his considerable array of collected natural specimens, large and small, in boxes and barrels for shipment back to Professor Henslow at Cambridge. Some of these never arrived, but most did make their way “home.”

When Darwin returned to Cambridge after landing at Falmouth, he was surprised to learn that Professor Henslow had spread news among his friends at Cambridge of the Beagle’s whereabouts in addition to sharing, with his university colleagues, the specimens sent home by his young protégé. Darwin had embarked on the Beagle’s voyage as an amateur collector of insects. Now, to his great surprise, he had become a naturalist with a reputation and a following within the elite circles at Cambridge, thanks to Professor Henslow.

Once home, Charles Darwin wasted little time tackling the immense task of studying and categorizing the many specimens he had sent back during the voyage. By 1838, the outlines of natural selection had begun to materialize in his mind. One situation of particular note that he recorded in the Galapagos Islands fueled his speculations. There, he noted that a species of bird indigenous to several of the islands in the archipelago seemed to have unique beaks depending upon which island they inhabited. In virtually all other aspects, the birds closely resembled one another – all members of a single species. Darwin noticed that the beaks in each case seemed most ideally suited to the particular size and shape of the seeds most plentiful on that particular island. Darwin took great pains to document these finches of the Galapagos, suspecting that they harbored important clues to nature’s ways. Darwin reasoned that somehow the birds seemed to be well-adapted to their environment/food source in the various islands. Clues such as this shaped his thought processes as he carefully distilled the notes entered in his journal during the voyage. By 1844, Charles Darwin had formulated the framework for his explanation of animal/plant adaptation to the environment. Except for one or two close and trusted colleagues, Darwin kept his budding theory to himself for years to come, for important reasons which I discuss shortly.


Darwin published his book, Journal of Researches, in 1839. The book was taken from his copious journal entries during the voyage; within its pages resides the seed-stock from which would germinate Darwin’s ultimate ideas and his theory of natural selection. This book remained, to Darwin’s dying day, closer to his affections and satisfaction than any other, including On the Origin of Species.


What Is the Essence of Natural Selection?

Darwin’s theory of natural selection proposed that species are not immutable across time and large numbers of individuals. Random variations appear in this or that characteristic of particular individuals within a large population. Such variations, beginning with one individual, can be passed along to future generations through its immediate offspring. In the case of a singular Galapagos finch born with a significantly longer and narrower beak than that of a typical bird in the species, that specimen and any offspring which inherit the tendency will inevitably be subjected to “trial by nature.” If the longer, narrower beak makes it easier for these new birds to obtain and eat the seeds and insects present in their environment, these birds will thrive and go on, over time, to greatly out-reproduce others of their species who do not share the “genetic advantage.” Eventually that new characteristic, in this example the longer, narrower beak, will predominate within the population in that environment. This notion is the essence of Darwin’s theory of natural selection. If the random variation at hand proves to be disadvantageous, future generations possessing it will be less likely to survive than those individuals without it.
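For readers who think in concrete terms, the logic of that paragraph can be captured in a few lines of code. What follows is a minimal toy simulation of my own devising (the numbers and the fitness rule are purely illustrative assumptions, not anything from Darwin): birds whose beaks better match the local seeds leave more offspring, and over generations the average beak drifts toward the better-adapted form.

```python
# A toy model of natural selection (illustrative only; all numbers are
# assumptions, not data). Birds whose beak length is closer to the local
# optimum -- set by the seeds available -- leave more offspring on average.
import random

OPTIMAL_BEAK_MM = 14.0   # hypothetical ideal beak length for the local seeds
POP_SIZE = 500
GENERATIONS = 50

def fitness(beak_mm: float) -> float:
    """Reproductive weight: highest when the beak matches the seeds."""
    return 1.0 / (1.0 + abs(beak_mm - OPTIMAL_BEAK_MM))

# Start the population with beaks averaging 10 mm, short of the optimum.
population = [random.gauss(10.0, 1.0) for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # "Trial by nature": parents are drawn in proportion to fitness.
    weights = [fitness(beak) for beak in population]
    parents = random.choices(population, weights=weights, k=POP_SIZE)
    # Offspring inherit the parent's beak length plus a small random
    # variation -- the raw material upon which selection acts.
    population = [p + random.gauss(0.0, 0.2) for p in parents]

mean_beak = sum(population) / POP_SIZE
print(f"Mean beak length after {GENERATIONS} generations: {mean_beak:.2f} mm")
```

Run repeatedly, the mean beak length climbs from 10 mm toward the assumed 14 mm optimum – selection acting on nothing more than random variation and differential reproduction.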

Note that this description, natural selection, is far more scientifically specific than the oft-used/misused phrase applied to Darwin’s work: theory of evolution. To illustrate: “theory of evolution” is a very general phrase admitting even the possibility that giraffes have long necks because they have continually stretched them over many generations reaching for food on the higher tree canopies. That is precisely the thinking of one of the early supporters of evolution theory, the Frenchman, Lamarck, as expressed in his 1809 publication on the subject. Darwin’s “natural selection” explains the specific mechanism by which evolution occurs – except for one vital, missing piece… which we now understand.

Genetics, Heredity, and the DNA Double Helix:
Random Mutations – the Key to Natural Selection!

Darwin did not know – could not know – the source of the random significant variations in species which were vital to his theory of natural selection. He came to believe that there was some internal genetic blueprint in living things that governed the species at hand while transmitting obvious “familial traits” to offspring. Darwin used the name “gemmules” to refer to these presumed discrete building blocks, but he could go no further in explaining their true nature or behavior given the limited scientific knowledge of the time.

James Watson and Francis Crick won the 1962 Nobel Prize in Physiology or Medicine for their discovery in 1953 of the DNA double helix, which carries the genetic information of all living things. The specific arrangement of chemical base-pair connections, or rungs, along the double-helix ladder is precisely the genetic blueprint which Darwin suspected. The human genome has been decoded within the last twenty years, yielding tremendous knowledge about nature’s life-processes. We know, for instance, that one particular – just one – hereditary base-pair error along the double helix can result in a devastating medical condition called Tay-Sachs, wherein the initially healthy brains of newborns are destroyed in just a few years due to the body’s inability to produce a necessary protein. Literally every characteristic of all living things is dictated by the genetic sequence of four different chemical building blocks, called bases, which straddle the DNA double helix. The random variations necessary for the viability of Darwin’s theory of natural selection are precisely those which stem from random base-pair mutations along the helix. These can occur spontaneously during DNA replication, or they can result from something as esoteric as the alpha particles of cosmic radiation hitting a cell nucleus and altering its DNA. The end result of the sub-microscopic change might be trivial, beneficial, or catastrophic in some way to the individual.
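As a purely schematic illustration of that copying-error idea (my own toy example – real genomes, mutation rates, and repair machinery are vastly more complicated), here is how a random point mutation during DNA replication can be pictured:

```python
# A schematic picture of a random point mutation during DNA "replication"
# (purely illustrative; the mutation rate here is wildly exaggerated so
# that a short run visibly shows a change).
import random

BASES = "ACGT"

def replicate(dna: str, mutation_rate: float = 0.05) -> str:
    """Copy a DNA string; each base has a small chance of being miscopied."""
    copy = []
    for base in dna:
        if random.random() < mutation_rate:
            # Miscopy: substitute one of the three other bases at random.
            copy.append(random.choice([b for b in BASES if b != base]))
        else:
            copy.append(base)
    return "".join(copy)

parent = "".join(random.choice(BASES) for _ in range(60))
child = replicate(parent)
changed = [i for i, (a, b) in enumerate(zip(parent, child)) if a != b]
print(parent)
print(child)
print("Mutated positions:", changed)
```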

Gregor Mendel: The Father of Genetics…Unknown to Darwin

In 1865, a sequestered Austrian monk published a paper in the obscure proceedings of the Natural History Society of Brünn. Like Darwin, originally, Mendel had no formal scientific qualifications, only a strong curiosity and interest in the pea plants he tended in the monastery garden. He had wondered about the predominant colors of the peas from those plants, green and yellow, and pondered the possible mechanisms which could determine the color produced by a particular plant. He concocted a series of in-breeding experiments to find out more. After exhaustive trials using pea color, size of plant, and five other distinguishing characteristics of pea plants, Mendel found that the statistics of inheritance involved distinct numerical ratios – for example, a “one-in-four chance” for a specific in-breeding outcome. The round numbers present in Mendel’s experimental results suggested the existence of distinct, discrete genetic mechanisms at work – what Darwin vaguely had termed “gemmules.” Mendel’s 1865 paper describing his findings and the work behind it cements his modern reputation as the “Father of Genetics.” Incredibly and unfortunately, virtually no one took serious notice of his paper until the work was re-discovered around 1900 – thirty-five years after its publication – by continental botanists, and then vigorously championed in England by the geneticist William Bateson!
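The arithmetic behind those "one-in-four" ratios is simple enough to enumerate directly. Here is a minimal sketch in modern genetic notation (my illustration; Mendel himself tallied pea traits such as seed color over thousands of plants to arrive at the same ratios):

```python
# The arithmetic behind Mendel's "one-in-four" ratios: cross two Aa parents
# and enumerate the four equally likely allele pairings.
from collections import Counter
from itertools import product

parent1 = ("A", "a")   # "A" = dominant allele, "a" = recessive
parent2 = ("A", "a")

# Each parent contributes one allele at random; enumerate all combinations.
genotypes = Counter("".join(sorted(pair)) for pair in product(parent1, parent2))
total = sum(genotypes.values())
for genotype, count in sorted(genotypes.items()):
    print(f"{genotype}: {count} in {total}")
# Prints AA: 1 in 4, Aa: 2 in 4, aa: 1 in 4 -- so the recessive trait (aa)
# appears, on average, in one of every four offspring of such a cross.
```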

Original offprints (limited initial printings for the author) of Mendel’s paper are among the rarest and most desirable of historical works in the history of science, selling for hundreds of thousands of dollars on the rare book/manuscript market. We know that only forty were printed and scarcely half of these have been accounted for. Question: Did Mendel send an offprint of his pea plant experiments to Charles Darwin in 1865, well after the publication of Darwin’s groundbreaking On the Origin of Species in 1859? An uncut [meaning unopened, thus unread] offprint was reportedly found among Darwin’s papers after his death, according to one Mendel reference source. Certainly, no mention of it was ever made by Charles Darwin.

 It is an intriguing thought that the key, missing component of Darwin’s natural selection theory as espoused in his Origin of Species possibly resided unread and unnoticed on Darwin’s bookshelf! And is it not a shame that Mendel lived out his life in the abbey essentially unknown and without due credit for his monumental work in the new science of genetics, a specialty which he founded?

Darwin’s Reluctance to Publish His Theory Nearly Cost Him His Due Credit

Darwin finally revealed his theory of natural selection to the public and the scientific community at large in 1859 with the book publication of On the Origin of Species. In fact, the central tenets of the book had congealed in Darwin’s mind long before, by 1844. He had held the framework of his theory close to the vest for all that time! Why? Because to espouse evolutionary ideas in the middle of the nineteenth century was to invite scorn and condemnation from creationists within many religions. And no one was more averse to a more secular universe – one with a less personal creator who did not create man and animals in more or less final form (despite obvious diversity) – than Emma Wedgwood Darwin, Darwin’s very religious wife. She believed in an afterlife in which she and her beloved husband would be joined together for eternity. Charles was becoming less and less certain of this religious ideal as the years went by and nature continued to reveal herself to the ever-inquiring self-made naturalist who had set out to probe her ways.

To espouse a natural world which, once its fundamental constituents were brought together, would henceforth change and regulate itself without further involvement by the Creator would be a painful repudiation of Emma’s fundamental beliefs in a personal God. For this very personal reason, and because of the professional risk of being ostracized by the community of naturalists for promulgating radical, anti-religious ideas, Darwin put off publication of his grand book, the book which would ensure him priority and credit for one of the greatest of all scientific conclusions.

After stalling publication for years and with his manuscript only half completed, Darwin was shocked into feverish activity on his proposed book by a paper he received on 18 June, 1858. It was from a fellow naturalist of Darwin’s acquaintance, one Alfred Russel Wallace. In his paper, Wallace outlined his version of natural selection which eerily resembled the very theory Darwin was planning to eventually publish to secure his priority. There was no doubt that Wallace had arrived independently at the same conclusions that Darwin had reached many years earlier. Wallace’s paper presented an extremely difficult problem for Darwin in that Wallace had requested that Darwin pass his [Wallace’s] paper on to their mutual friend, the pathfinding geologist, Charles Lyell.

Darwin in a Corner: Academic Priority at Stake
Over One of the Great Scientific Breakthroughs

Now Darwin felt completely cornered. If he passed Wallace’s paper on to Lyell as requested, essentially making it public, the academic community would naturally steer credit for the theory of natural selection to Wallace. On the other hand, having just received Wallace’s paper on the subject, how would it look if he, Darwin, suddenly announced publicly that he had already deciphered nature and her ways – well before Wallace had? That course of action could inspire suspicions of plagiarism on Darwin’s part.

The priority stakes were as high as any since the time of Isaac Newton, when he and the mathematician Gottfried Leibniz locked horns in a bitter battle over credit for development of the calculus. It had been years since Darwin’s voyage on the Beagle which began the long gestation of his ideas on natural selection. He had been sitting on his conclusions since 1844 for fear of publishing, and now he was truly cornered, “forestalled,” as he called it. Darwin, drawing on the better angels of his morose feelings, quickly proposed to Wallace that he [Darwin] would see to it that his [Wallace’s] paper be published in any journal of Wallace’s choosing. In what became a frenzied period in his life, he reached out to two of his closest colleagues and trusted confidants, Charles Lyell and Joseph Hooker, for advice. The two had been entrusted with the knowledge of Darwin’s work on natural selection for a long time; they well understood Darwin’s priority in the matter, and he needed them now. The two friends came up with a proposal: publish both Wallace’s paper and a synopsis by Darwin outlining his own long-standing efforts and results. The Linnean Society presented the joint papers at its meeting of 1 July, 1858, and published them in its journal. Fortunately for Darwin, Alfred Russel Wallace was of a conciliatory nature regarding the potential impasse over priority, tacitly acknowledging that his colleague had, indeed, been first to formulate his conclusions on natural selection.

Nonetheless, for Darwin, the cat was out of the bag, and the task ahead was to work full-steam to complete the large book that would contain all the details of natural selection and ensure his priority. He worked feverishly on his book, On the Origin of Species, right up to its publication by John Murray. The book went on sale on 22 November, 1859, and all 1250 copies sold quickly. This was an excruciating period of Darwin’s life. He was not only under unrelenting pressure to complete one of the greatest scientific books of all time, he was intermittently very ill throughout the process, presumably from a systemic problem contracted during his early travels associated with the Beagle voyage. Yes, the expected controversy was to come immediately after publication of the book, but Darwin and his contentions have long weathered the storm. Few of his conclusions have failed the test of time and modern scrutiny.

The Origin was his great book, but the book that was the origin of the Origin, his 1839 Journal of Researches, always remained his favorite. Certainly, the Journal was written at a much happier time in Darwin’s life, a time flush with excitement over his prospects as a newly full-fledged naturalist. For me, the Journal brims with the excitement of travel and scientific discovery/fact-finding – the seed-corn of scientific knowledge (and new technologies). The Origin represents the resultant harvest from that germinated seed-corn.

“Endless Forms Most Beautiful” –
Natural Selection in Darwin’s Own Words

In his Introduction to the Origin, Darwin describes the essence of natural selection:

“In the next chapter, the struggle for existence amongst all organic beings throughout the world, which inevitably follows from their high geometrical powers of increase, will be treated of. This is the doctrine of Malthus, applied to the whole animal and vegetable kingdoms. As many more individuals of each species are born than can possibly survive; and as, consequently, there is a frequently recurring struggle for existence, it follows that any being, if it vary however slightly in any manner profitable to itself, under the complex and sometimes varying conditions of life, will have a better chance of surviving, and thus be naturally selected. From the strong principle of inheritance, any selected variety will tend to propagate its new and modified form.”

Darwin and Religion

Charles Darwin, educated for the clergy at Cambridge, increasingly drifted away from orthodox religious views as his window on nature and her ways became more transparent to him over the decades. Never an atheist, he grew steadily more agnostic as he embraced the results of his lifelong study of the natural world. The Creator, in whom Darwin believed, was not, to him, the involved shepherd of all living things in this world. Rather, he seemed more like the watchmaker who, after his watch was first assembled, wound it up and let it run on its own while retreating to the background.

Another viewpoint, which I tend to favor and which may apply to Darwin: God, whom we cannot fully know in this life, created not only all living things at the beginning, but also the entire structure of natural law (science) which dictates not only the motion of the planets, but the future course of life forms. Natural selection, and hence evolution as well, are central tenets of that complete structure of natural law. The laws of nature, which permanently bear the fingerprints of the creator and his creation, thus enable the self-powered, self-regulating behaviors of the physical and natural world – without contradiction.

 Charles Darwin: Humble Man and Scientific Titan


In writing this post, I have found great joy in my re-acquaintance with Darwin. Some years after initially reading the biographies and perusing his works, I re-discover a life and legacy of immense importance to science. His body of work includes several other very important books besides his Journal and Origin. Beyond his scientific importance and the science itself lies the man himself – a man of very high character and superb intellect. Darwin was gifted with intense curiosity, that magical motor that drives great accomplishment in science. Passion and curiosity: Isaac Newton had them in great abundance, and so, too, did Albert Einstein. Yet, Charles Darwin was different in several respects from those two great scientists: First, he was fortunate enough to have been born to privilege and was thus comfortably able to devote his working life to science from the beginning. Second, Darwin was a very happily married man who fathered ten children, each of whom he loved and doted upon. Third, Darwin’s character was impeccable in all respects. His personality was stiffened a bit by the English societal conventions prevalent then, but his humanity shows through in so many ways. His struggle with religion is one most of us can relate to.

Reading Darwin’s works is a joy both because he was an articulate, educated Englishman and because the contents of books like the Journal and Origin are easily digestible compared to the major works of Newton and Einstein. Like Darwin himself, I count The Journal of Researches, sometimes referred to as the Voyage of the Beagle, as my favorite among his books. What an adventure.


The “sandwalk” path around the extended property of his long-held estate, Down House. Darwin frequently traversed this closed path on solitary walks around the estate while he gathered his thoughts about matters both big and small.

Marking the Passage of Time: The Elusive Nature of the Concept

Nature presents us with few mysteries more tantalizing than the concept of “time.” Youngsters, today, might not think the subject worthy of much rumination: After all, one’s personal iPhone can conveniently provide the exact time at any location on our planet.


Human beings have long struggled with two fundamental questions regarding time:

  1. What are the fundamental units in nature used to express time? More simply, what constitutes one second of time? How is one second determined?
  2. How can we “accurately” measure time using the units chosen to express it?

The simple answers for those so inclined might be: We measure time in units of seconds, minutes, hours, and days, etc., and we have designed carefully constructed and calibrated clocks to measure time! That was easy, wasn’t it?

The bad news: Dealing with the concept of time is not quite that simple.
The good news: The fascinating surprises and insights gained from taking a closer, yet still cursory, look at “time” are well worth the effort to do so. To do the subject justice requires far more than a simple blog post – scholarly books, in fact – but my intent, here, is to illustrate how fascinating the concept of time truly is.

Webster’s dictionary defines time as “a period or interval…the period between two events or during which ‘something’ exists, happens, or acts.”

For us humans, the rising and setting of the sun – the cycle of day and night – is a “something” that happens, repeats itself, and profoundly affects our existence. It is that very cycle which formed our first concept of time. The time required for the earth to make one full rotation on its axis is but one of many repeating natural phenomena, and it was, from the beginning of man’s existence, uniquely qualified to serve as the arbitrary basis of time measurement. Other repeatable natural phenomena could have anchored our definition of time: for instance, the almost constant period of the earth’s revolution around the sun (our year) or certain electron-jump vibrations at the atomic level – except that such technology was unknown and unthinkable to ancient man. In fact, today’s universally accepted time standard utilizes a second defined by the extraordinarily stable and repeatable electron jumps within cesium 133 atoms – the so-called atomic clock which has replaced the daily rotation of the earth as the prime determinant of the second.

Why use atomic clocks instead of the earth’s rotation period to define the second? Because the length of the day varies slightly from month to month due to the elliptical shape of our planet’s orbit around the sun and the tilt of its axis. The rotational period also changes over many centuries as the earth’s axis “precesses” (a slowly rotating change of direction) relative to the starry firmament all around. By contrast, atomic clocks are extremely regular in their behavior.

Timekeepers on My Desk: From Drizzling Sand to Atomic Clocks!

I have on my desk two time-keepers which illustrate the startling improvement in time-keeping over the centuries. One is the venerable hour-glass: Tip it over and the sand takes roughly thirty minutes (in mine) to drizzle from top chamber to bottom. The other timekeeper is one of the first radio-controlled clocks readily available – the German-built Junghans Mega which I purchased in 1999. It features an analog display (clock-hands, not a digital display) based on a very accurate internal quartz electronic heartbeat: the oscillations of its tiny quartz-crystal resonator. Even the quartz oscillator may stray from absolute accuracy by as much as 0.3 seconds per day, in contrast to the incredible regularity of the cesium atomic clocks which now define the international second as 9,192,631,770 atomic “vibrations” of cesium 133 atoms – an incredibly stable natural phenomenon. The Junghans Mega uses its internal radio capability to automatically tune in every evening at 11 pm to the atomic clocks operating in Fort Collins, Colorado. Precise time-sync signals broadcast from there are utilized to “reset” the Mega to the precise time each evening at eleven.
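A quick back-of-the-envelope comparison in Python makes the contrast vivid (the 0.3 second figure is the worst-case quartz drift quoted above):

```python
quartz_drift_per_day = 0.3                  # seconds, the worst-case figure quoted above
print(f"quartz drift in a year: ~{quartz_drift_per_day * 365:.0f} seconds")

cesium_cycles_per_second = 9_192_631_770    # the SI definition of the second
print(f"cesium cycles counted per day: {cesium_cycles_per_second * 86_400:,}")
```

A quartz watch left uncorrected could wander nearly two minutes a year; the cesium standard counts over 794 trillion cycles a day without losing its place.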

I love this beautifully rendered German clock which operates all year on one tiny AA battery and requires almost nothing from the operator in return for continuously accurate time and date information. Change the battery once each year and its hands will spin to 12:00 and sit there until the next radio query to Colorado. At that point, the hands will spin to the exact second of time for your world time zone, and off it goes….so beautiful!

Is Having Accurate Time So Important?
You Bet Your Life…and Many Did!

Yes, keeping accurate time is far more important than not arriving late for your doctor’s appointment! The fleets of navies and the world of seagoing commerce require accurate time…on so many different levels. In 1714, the British Admiralty offered the then-huge sum of 20,000 pounds to anyone who could concoct a practical way to measure longitude at sea. That so-called Longitude Act was inspired by a great national tragedy involving the Royal Navy. On October 22, 1707, a fleet of ships was returning home after a sojourn at sea. Despite intense fog, the flagship’s navigators assured Admiral Sir Cloudesley Shovell that the fleet was well clear of the treacherous Scilly Islands, some twenty miles off the southwest coast of England. Such was not the case, however, and the admiral’s flagship, Association, struck the shoals first, quickly sinking, followed by three other vessels. Two thousand lives were lost in the churning waters that day. Of those who went down, only two managed to wash ashore alive. One was Sir Cloudesley Shovell. As an interesting aside, the story has it that a woman combing the beach happened across the barely alive admiral, noticed the huge emerald ring on his finger, and promptly lifted it, finishing him off in the process. She confessed the deed some thirty years later, offering the ring as proof.

The inability of seafarers to navigate safely by determining their exact location at sea was of great concern to sea powers like England, which had a great investment in both their fleet of fighting ships and their commerce shipping. A ship’s latitude could be quite accurately determined on clear days by “shooting” the height of the sun above the horizon using a sextant, but its longitude position was only an educated guess. The solution to the problem of determining longitude-at-sea materialized in the form of an extremely accurate timepiece carried aboard ship and commonly known ever since as a “chronometer.” The principle is simple: the earth turns through fifteen degrees of longitude every hour, so a navigator who compares local solar time (noon, when the sun stands highest) against a chronometer still keeping home-port time can convert the difference directly into degrees of longitude – see the sketch below.
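Here is that conversion in a few lines of Python – my own illustration of the principle, not any navigator’s actual procedure:

```python
def longitude_degrees(local_solar_hour: float, home_port_hour: float) -> float:
    """Longitude west of the home port: the earth turns 15 degrees per hour."""
    return (home_port_hour - local_solar_hour) * 15.0

# The sun stands at its highest (local noon) while the chronometer,
# still keeping home-port time, reads 3:00 in the afternoon:
print(longitude_degrees(12.0, 15.0), "degrees west of home port")   # 45.0
```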

For the details, I recommend Dava Sobel’s book titled “Longitude.” The later, well-illustrated version is the one to read. In her book, the author relates the wonderfully improbable story of an English country carpenter who parlayed his initial efforts building large wooden clocks into developing the world’s first chronometer timepiece accurate enough to solve the “longitude problem.” After frustrating decades of dedicated effort pursuing both the technical challenge and the still-to-be-claimed prize money, John Harrison was finally able to collect the 20,000 pound admiralty award.

Why Mention Cuckoo Clocks? Enter Galileo and Huygens

Although the traditional cuckoo clock from the Black Forest of Germany does not quite qualify as a maritime chronometer, its pendulum principle plays an historical role in the overall story of time and time-keeping. With a cuckoo clock or any pendulum clock, the ticking rate is dependent only on the effective length of the pendulum, and not its weight or construction. If a cuckoo clock runs too fast, one must lower the typical wood-carved leaf cluster on the pendulum shaft to increase the pendulum period and slow the clock-rate.
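The dependence on length alone follows from the standard small-swing pendulum formula, T = 2π√(L/g). A minimal Python sketch (the lengths are arbitrary illustrative values):

```python
import math

def pendulum_period(length_m: float, g: float = 9.81) -> float:
    """Small-swing period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

print(f"{pendulum_period(0.25):.3f} s per swing")   # ~1.003 s
print(f"{pendulum_period(0.26):.3f} s per swing")   # lower the bob, lengthen the pendulum,
                                                    # slow the clock
```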

No less illustrious a name than Galileo Galilei was the first to propose the possibilities of the pendulum clock in the early 1600’s. Indeed, Galileo was the first to understand pendulum motion and, with an assistant late in life, produced a sketch of a possible pendulum clock. A few decades later, the great Dutch scientist, Christian Huygens, constructed the first working pendulum clock in 1657. In 1673, he published his milestone book of science and mathematics, Horologium Oscillatorium, in which he presented a detailed mathematical treatment of pendulum motion-physics.


In 1669, a very notable scientific paper appeared in the seminal English journal of science, The Philosophical Transactions of the Royal Society. That paper was the first English translation of a treatise originally published by Christian Huygens in 1665. In his paper, Huygens presents “Instructions concerning the use of pendulum-watches for finding the longitude at sea, together with a journal of a method for such watches.” The paper outlines a timekeeping method using the “equation of time” (which quantifies the monthly variations of the earth’s rotational period) and capitalizes on the potential accuracy of his proposed pendulum timekeeper. The appearance of Huygens’ longitude paper in The Philosophical Transactions in 1669 thus preceded the disastrous wreck of the British fleet under Sir Cloudesley Shovell in 1707 by thirty-eight years.

As mentioned earlier, John Harrison was the first to design and construct marine chronometers having the accuracy necessary to determine the longitude-at-sea. After many years of building bulky sea-clocks regulated by large, linked balance mechanisms, Harrison’s ultimate success came decades later in the form of a large “watch” design which utilized the oscillating balance-wheel mechanism, so familiar today, rather than the pendulum principle. Harrison’s chronometer taxed his considerable ingenuity and perseverance to the max. The device had to keep accurate time at sea – under the worst conditions imaginable, ranging from temperature and humidity extremes to the rolling, heaving motion of a ship underway.

The Longitude Act of 1714 demanded a longitude determination to within one-half degree of true longitude (35 miles at the equator) – equivalent to a timekeeper deviating less than two minutes from true time over a six-week sea voyage. Lost time, revenue, and human lives were the price to be paid for excessive timekeeper inaccuracies.
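The arithmetic behind those numbers is easy to check in Python (taking the earth’s equatorial circumference as roughly 24,901 miles):

```python
# The earth turns 360 degrees in 24 hours: 0.25 degrees per minute of time.
time_error_minutes = 2.0
longitude_error_deg = time_error_minutes * 360 / (24 * 60)

equator_miles_per_degree = 24_901 / 360     # equatorial circumference / 360
print(longitude_error_deg, "degrees")                               # 0.5
print(longitude_error_deg * equator_miles_per_degree, "miles")      # ~34.6
```

Two minutes of clock error really does translate into half a degree of longitude, or roughly 35 miles of position error at the equator.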

Einstein and Special Relativity: Speeding Clocks that Run Slow

Albert Einstein revolutionized physics in 1905 with his special theory of relativity. Contrary to the assumptions of Isaac Newton, relativity dictates that there is no absolute flow of time in the universe – no master clock, as it were. An experiment will demonstrate what this implies: Two identical cesium 133 atomic clocks (the time-standard which defines the “second”) will run in virtual synchronization when sitting side by side in a lab. We would expect that to be true. If we take one of the two and launch it in an orbital space vehicle which then circles the earth at 18,000 miles per hour, from our vantage point on earth, we would observe that the orbiting clock now runs slightly slower than its identical twin still residing in our lab, here on earth. Indeed, upon returning to earth and the lab after some period of time spent in orbit, the elapsed time registered by the returning clock will be less than that of its twin which stayed put on earth, even though its run-rate again matches its stationary twin! In case you are wondering, this experiment has indeed been tried many times. Unerringly, the results of such tests support Einstein’s contention that clocks moving with respect to an observer “at rest” will always run slower (as recorded by the observer) than they would were they not moving relative to the observer. Since the speed of light is a constant 186,000 miles per second, the tiny time dilation which an orbital speed of 18,000 miles per hour produces could only be observed using such an incredibly stable, high-resolution time-source as an atomic clock. If two identical clocks passed each other traveling at three-tenths the speed of light, the “other” clock would seem to have slowed by 4.6%. At one-tenth the speed of light, the “other” clock slows by only 0.5%. This phenomenon of slowing clocks applies to any timekeeper – from atomic clocks to hourglasses. Accordingly, the effect is not related to any construction aspects of timekeepers, only to the nature of time itself as dictated by relativity and the finite, constant speed of light.
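Those percentages come straight from the Lorentz factor of special relativity: a moving clock runs at √(1 − v²/c²) of the stationary rate. Here is a small Python check of the figures quoted above:

```python
import math

def slowdown_percent(beta: float) -> float:
    """Percent by which a clock moving at v = beta*c appears slowed to a fixed observer."""
    return (1.0 - math.sqrt(1.0 - beta**2)) * 100.0

print(f"{slowdown_percent(0.3):.1f}%")      # three-tenths of c -> ~4.6%
print(f"{slowdown_percent(0.1):.1f}%")      # one-tenth of c    -> ~0.5%

beta_orbit = (18_000 / 3_600) / 186_000     # 18,000 mph expressed as a fraction of c
print(f"{slowdown_percent(beta_orbit):.2e}%")   # astonishingly tiny -- hence atomic clocks
```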

For most practical systems that we deal with here on earth, relative velocities between systems are peanuts compared to the speed of light, and the relativistic effects, although always present, are so small as to be insignificant, usually undetectable. There are important exceptions, however, and one of the most important involves the GPS (Global Positioning System). Another exception involves the particle accelerators used by physicists. The GPS system uses earth-orbiting satellites traveling at a tiny fraction of the speed of light relative to the earth’s surface. In a curious demonstration of mathematical déjà vu when recalling the problem of finding the longitude-at-sea, even tiny variations in the timing signals sent between the satellites and earth can cause our position information here on earth to be off by many miles. With such precise GPS timing requirements, the relativistic effect of time dilation on orbiting clocks – we are talking tiny fractions of a second! – would be enough to cause position location errors of many miles. (For GPS, the gravitational effect on clock rates predicted by general relativity is larger still than the speed effect.) For this reason, relativity IS and must be taken into account in order for the GPS system to be of any practical use whatsoever!
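Because the radio signals travel at the speed of light, a clock error translates directly into a ranging error. A rough Python illustration follows; the figure of roughly 38 microseconds per day of uncorrected relativistic drift is a commonly quoted estimate, not something derived in this post:

```python
c_miles_per_second = 186_000.0

def position_error_miles(clock_error_seconds: float) -> float:
    """Ranging error = clock error x signal speed."""
    return clock_error_seconds * c_miles_per_second

print(position_error_miles(1e-6))     # one microsecond of clock error -> ~0.19 miles
print(position_error_miles(38e-6))    # ~38 microseconds (roughly one day's uncorrected
                                      # relativistic drift) -> ~7 miles
```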

Is it not ironic that, as in the longitude-at-sea problem three centuries ago, accurate time plays such a crucial role in today’s satellite-based GPS location systems?

I hope this post has succeeded in my attempt to convey to you, the reader, the wonderful mysteries and importance of that elusive notion that we call time.

Finally, as we have all experienced throughout our lives, time is short and….

TIME AND TIDE WAIT FOR NO MAN

 

Relativity and the Birth of Quantum Physics: Two Major Problems for Physics in the Year 1900

In the year 1900, two critical questions haunted physicists, and both involved that elusive entity, light. The ultimate answers to these troublesome questions materialized during the dawn of the twentieth century and resulted in the most recent two of the four major upheavals in the history of physics. Albert Einstein was responsible for the third of those four upheavals in the form of his theory of special relativity which he published in 1905. Einstein’s revolutionary theory was his response to one of those two critical questions facing physics in the year 1900. A German scientist named Max Planck addressed the second critical question while igniting the fourth great upheaval in the history of physics. Max Planck began his Nobel Prize-winning investigation into the nature of heat/light radiation in the year 1894. His later discovery of the quantized nature of such radiation gave birth to the new realm of quantum physics which, in turn, led to a new picture of the atom and its behavior. Planck’s work directly addressed the second critical question nagging science in 1900. The aftermath of his findings ultimately changed physics and man’s view of physical reality, forever.

What were the two nagging problems in physics in 1900?

The nature of light and its behavior had long challenged the best minds in physics. For example: Is light composed of “particles,” or does it manifest itself as “waves” travelling through space? By the eighteenth century, two of science’s greatest names had voiced their opinions. Isaac Newton said that light is “particle” in nature. His brilliant Dutch contemporary, Christian Huygens, claimed that light is comprised of “waves.”


Isaac Newton and Christian Huygens

By 1865, the great Scottish physicist, James Clerk Maxwell, had deduced that light, indeed, acted as an electromagnetic wave traveling at a speed of roughly 186,000 miles per second! Maxwell’s groundbreaking establishment of an all-encompassing electromagnetic theory represents the second of the four major historical revolutions in physics of which we speak. Ironically, this second great advance in the history of physics with its theoretically established speed of light led directly to the first of the two nagging issues facing physics in 1900. To understand that dilemma, a bit of easily digestible background is in order!

Maxwell began by determining that visible light is merely a small slice of the greater electromagnetic wave frequency spectrum which, today, includes radio waves at the low frequency end and x-rays at the high frequency end. Although the speed of light (thus all electromagnetic waves) had been determined fairly accurately by experiments made by others prior to 1865, Maxwell’s ability to theoretically predict the speed of light through space using the mathematics of his new science of electrodynamics was a tribute to his supreme command of physics and mathematics. The existence of Maxwell’s purely theoretical (at that time) electromagnetic waves was verified in 1887 via laboratory experiment conducted by the German scientist, Heinrich Hertz.

The first of the two quandaries on physicist’s minds in 1900 had been brewing during the latter part of the nineteenth century as physicists struggled to define the “medium” through which Maxwell’s electromagnetic waves of light propagated across seemingly empty space. Visualize a small pebble dropped into a still pond: Its entry into the water causes waves, or ripples, to propagate circularly from the point of disturbance. These “waves” of water represent mechanical energy being propagated across the water. Light is also a wave, but it propagates through space and carries electromagnetic energy.

Here is the key question which arose from Maxwell’s work and so roiled physics: What is the nature of the medium in presumably “empty space” which supports electromagnetic wave propagation…and can we detect it? Water is the obvious medium for transmitting the mechanical energy waves created by a pebble dropped into it. Air is the medium which is necessary to propagate mechanical sound-pressure waves to our ears – no air, no sound! Yet light waves travel readily through “empty space” and vacuums!

Lacking any evidence concerning the nature of a medium suitable for electromagnetic wave propagation, physicists nevertheless came up with a name for it….the “ether,” and pressed on to learn more about its presumed reality. Clever but futile attempts were made to detect the “ether sea” through which light appears to propagate. The famous Michelson-Morley experiments of 1881 and 1887 conclusively failed to detect ether’s existence. Science was forced to conclude that there is no detectable/describable medium! Rather, the cross-coupled waves of Maxwell’s electric and magnetic fields which comprise light (and all electromagnetic waves) “condition” the empty space of a perfect vacuum in such a manner as to allow the waves to propagate through that space. In expressing the seeming lack of an identifiable transmission medium and what to do about it, the best advice to physicists seemed: “It is what it is….deal with it!”

“Dealing with it” was easier said than done, because one huge problem remained. Maxwell and his four famous “Maxwell’s equations” which form the framework for all electromagnetic phenomena calculate one and only ONE value for the speed of light – everywhere, for all observers in the universe. One single value for the speed of light would have worked for describing its propagation speed relative to an “ether sea,” but there is no detectable ether sea!
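In fact, that single value falls right out of two measured constants of electricity and magnetism, via the relation c = 1/√(μ₀ε₀) that emerges from Maxwell’s equations. A quick Python verification of the claim, using the standard SI values:

```python
import math

mu0 = 4 * math.pi * 1e-7       # vacuum permeability (SI units)
eps0 = 8.8541878128e-12        # vacuum permittivity (SI units)

c = 1 / math.sqrt(mu0 * eps0)  # the one and only wave speed Maxwell's theory allows
print(f"{c:,.0f} meters/second = {c / 1609.344:,.0f} miles/second")
```

The printout lands on the familiar 186,000-miles-per-second figure – with no reference whatsoever to who is doing the measuring, which is precisely the problem described next.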

The Great “Ether Conundrum” – Addressed by Einstein’s Relativity

In the absence of an ether sea through which to measure the speed of light as derived by Maxwell, here is the problem which results, as illustrated by two distant observers, A and B, who are rapidly traveling toward each other at half the speed of light: How can a single, consistent value for the speed of light apply both to the light measured by observer A as it leaves his flashlight (pointed directly at observer B) and observer B who will measure the incoming speed of the very same light beam as he receives it? Maxwell’s equations imply that each observer must measure the same beam of light at 186,000 miles per second, measured with respect to themselves and their surroundings – no matter what the relative speed between the two observers. This made no sense and represented a very big problem for physicists!

The Solution and Third Great Revolution in Physics:
 Einstein’s Relativity Theories

As already mentioned, the solution to this “ether dilemma” involving the speed of light was provided by Albert Einstein in his 1905 theory of special relativity – the third great revolution in physics. Special relativity completely revamped the widely accepted but untenable notions of absolute space and absolute time – holdovers from Newtonian physics – and time and space are the underpinnings of any notion/definition of “speed.” Einstein showed that a strange universe of slowing clocks and shrinking yardsticks is required to accommodate the constant speed of light for all observers regardless of their relative motion to each other. Einstein declared the constant speed of light for all observers to be a new, inviolable law of physics. Furthermore, he proved that nothing can travel faster than the speed of light.

The constant speed of light for all observers coupled with Einstein’s insistence that there is no way to measure one’s position or speed/velocity through empty space are the two notions which anchor special relativity and all its startling ramifications.
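One startling ramification can be sketched numerically. Under special relativity, speeds do not simply add; they combine through the relativistic velocity-addition formula (u + v)/(1 + uv/c²). A minimal Python illustration, with speeds written as fractions of c:

```python
def combine_speeds(u: float, v: float) -> float:
    """Relativistic velocity addition; speeds expressed as fractions of c."""
    return (u + v) / (1 + u * v)

print(combine_speeds(0.5, 0.5))   # 0.8  -- not 1.0, as everyday intuition expects
print(combine_speeds(0.5, 1.0))   # 1.0  -- a light beam measures c for every observer
```

The second line resolves the flashlight puzzle posed above: no matter how fast observers A and B rush toward each other, each measures the same beam at exactly the speed of light.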

 The Year is 1900: Enter Max Planck and Quantum Physics –
The Fourth Great Revolution in Physics

The second nagging question facing the physics community in 1900 involved the spectral nature of radiation emanating from a so-called black-body radiator as it is heated to higher and higher temperatures. Objects that are made increasingly hotter emanate light whose colors change from predominantly red to orange to white to a bluish color as the temperature rises. The big problem in 1900 was this: there was no experimental evidence of the large levels of ultraviolet radiation which theory predicted at high temperatures. Physics at that time predicted a so-called “ultraviolet catastrophe” at high temperatures, generating huge levels of ultraviolet radiation – enough to damage the eyes with any significant exposure. The fact that there was no evidence of such levels of ultraviolet radiation was, in itself, a catastrophe for physics because it called into serious question our knowledge and assumptions of the atomic/molecular realm.

The German physicist, Max Planck, began tackling the so-called “ultraviolet catastrophe” disconnect as early as 1894. Using the experimental data available to him, Planck attempted to discern a new theory of spectral radiation for heated bodies which would match the observed results. Planck worked diligently on the problem but could not find a solution by working along conventional lines.

Finally, he explored an extremely radical approach – a technique which reflected his desperation. The resulting new theory matched the empirical results perfectly!

When Planck had completed formulation of his new theory in 1900, he called his son into his study and stated that he had just made a discovery which would change science forever – a rather startling proclamation for a conservative, methodical scientist. Planck’s new theory ultimately proved as revolutionary to physics as was Einstein’s theory of relativity which would come a mere five years later.

Max Planck declared that the radiation energy emanating from heated bodies is not continuous in nature; that is, the energy radiates in “bundles” which he referred to as “quanta.” Furthermore, Planck formulated the precise numerical values of these bundles through his famous equation which states:

 E = h × f

where h is his newly-declared “Planck’s constant” and f is the spectral frequency of the radiation being considered. Here is a helpful analogy: The radiation energy from heated bodies was always considered to be continuous – like water flowing through a garden hose. Planck’s new assertion maintained that radiation comes in bundles whose “size” is proportional to the frequency of radiation being considered. Visualize water emanating from a garden hose in distinct bursts rather than a continuous flow! Planck’s new theory of the energy “quanta” was the only way he saw fit to resolve the existing dilemma between theory and experiment.
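To put numbers to Planck’s relation, here is a tiny Python calculation; the frequencies are round illustrative values for green and ultraviolet light:

```python
h = 6.62607015e-34                    # Planck's constant, joule-seconds

def photon_energy_joules(frequency_hz: float) -> float:
    """Planck's relation: E = h x f."""
    return h * frequency_hz

print(photon_energy_joules(5.45e14))  # green visible light -> ~3.6e-19 joules
print(photon_energy_joules(1.5e15))   # ultraviolet         -> ~9.9e-19 joules
```

The higher the frequency, the bigger the bundle – the heart of Planck’s discovery.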

The following chart reveals the empirical spectral nature of black-body radiation at different temperatures. Included is a curve which illustrates the “ultraviolet catastrophe” at 5000 degrees Kelvin as predicted by the classical physics of 1900. The catastrophe is represented by off-the-chart values of radiation in the “UV” range of short wavelength (high frequency).

[Chart: black-body radiation curves at three temperatures, including the classical 5000-degree-Kelvin prediction diverging in the ultraviolet]

This chart plots radiated energy (vertical axis) versus radiation wavelength (horizontal axis) for each of three temperatures in degrees K (degrees Kelvin). The wavelength of radiation is inversely proportional to the frequency of radiation. Higher frequency ultraviolet radiation (beyond the purple side of the visible spectrum) is thus portrayed at the left side of the graph (shorter wavelengths).

Note the part of the radiation spectrum which consists of frequencies in the visible light range. The purple curve for 5000 degrees Kelvin has a peak radiation “value” in the middle of the visible spectrum and proceeds to zero at higher frequencies (shorter wavelengths). This experimental purple curve is consistent with Planck’s new theory and is drastically different from the black curve on the plot which shows the predicted radiation at 5000 degrees Kelvin using the scientific theories in place prior to 1900 and Planck’s revolutionary findings. Clearly, the high frequency (short wavelength) portion of that curve heads toward infinite radiation energy in the ultraviolet range – a physically implausible result. Planck’s simple but revolutionary new radiation law, expressed by E = h × f, served to perfectly match theory with experiment.
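Anyone curious can reproduce the divergence between the two predictions with a few lines of Python. This is my own sketch comparing Planck’s radiation law against the classical (Rayleigh-Jeans) formula at 5000 degrees Kelvin; the blow-up at short wavelengths is exactly the “ultraviolet catastrophe”:

```python
import math

h = 6.626e-34   # Planck's constant (J*s)
c = 2.998e8     # speed of light (m/s)
k = 1.381e-23   # Boltzmann's constant (J/K)

def planck(wavelength_m: float, T: float) -> float:
    """Planck's radiation law (spectral radiance)."""
    return (2 * h * c**2 / wavelength_m**5) / math.expm1(h * c / (wavelength_m * k * T))

def classical(wavelength_m: float, T: float) -> float:
    """The pre-1900 (Rayleigh-Jeans) prediction, which explodes at short wavelengths."""
    return 2 * c * k * T / wavelength_m**4

for nm in (2000, 1000, 500, 200, 100):
    wl = nm * 1e-9
    print(f"{nm:>5} nm   Planck: {planck(wl, 5000):.2e}   classical: {classical(wl, 5000):.2e}")
```

At 2000 nm the two formulas roughly agree; by 100 nm (deep ultraviolet) the classical value has run away toward infinity while Planck’s curve falls back toward zero, just as the measured purple curve does.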

Why Max Planck Won the 1918 Nobel Prize
in Physics for His Discovery of the Energy Quanta

One might be tempted to ask why the work of Max Planck is rated so highly relative to Einstein’s theories of relativity, which restructured no less than all of our assumptions regarding space and time! Here is the reason in a nutshell: Planck’s discovery led quickly to the subsequent work of Niels Bohr, Rutherford, De Broglie, Schrodinger, Pauli, Heisenberg, Dirac, and others who followed the clues inherent in Planck’s most unusual discovery and built the superstructure of atomic physics as we know it today. Our knowledge of the atom and its constituent particles stems directly from that subsequent work which was born of Planck and his discovery. The puzzling absence of the “ultraviolet catastrophe” predicted by pre-1900 physics was duly answered by the ultimate disclosure that the atom itself radiates energy only in discrete amounts, thus preventing the high ultraviolet content of heated-body radiation predicted by the old, classical theories of physics.

Albert Einstein in 1905: The Photoelectric Effect –
Light and its Particle Nature

Published in the same 1905 volume of the German scientific journal, Annalen der Physik, as Einstein’s revolutionary theory of special relativity was his paper on the photoelectric effect. In that paper, Einstein described light’s seeming particle behavior. Electrons were knocked free of their atoms in metal targets by bombarding the targets with light in the form of energy bundles called “photons.” These photons were determined by Einstein to represent light energy at its most basic level – as discrete bundles of light energy. The governing effect which proved revolutionary was the fact that the intensity of light (the number of photons) impinging on the metal target was not the determining factor in whether electrons could be knocked free of the target: the frequency of the light source was the governing factor. Below a threshold frequency, increasing the intensity of the light had no effect on the liberation of electrons from their metal atoms; raising the frequency of the light source had a direct and obvious effect. Einstein proved that these photons, these bundles of light energy which acted like bullets for displacing electrons from their metal targets, have discrete energies whose values depend only on the frequency of the light itself. The higher the frequency of the light, the greater is the energy of the photons. As with Planck’s characterization of radiation from heated bodies, photon energies involve Planck’s constant and frequency. Einstein’s findings went beyond the quanta energy conceptualizations of Planck by establishing the physical reality of light photons. Planck interpreted his findings on energy quanta as atomic reactions to stimulation as opposed to discrete realities. Einstein’s findings earned him the 1921 Nobel Prize in physics for his paper on the photoelectric effect….and not for his work on relativity!
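Einstein’s photoelectric relation is easily sketched in Python. The work-function value below is an assumption for illustration only (roughly that of sodium metal); the point is that a dim ultraviolet beam frees electrons while an intense red beam frees none:

```python
h_ev = 4.1357e-15      # Planck's constant in electron-volt seconds

def ejected_energy_ev(frequency_hz: float, work_function_ev: float):
    """Einstein's photoelectric relation: KE = h*f - W (None below threshold)."""
    ke = h_ev * frequency_hz - work_function_ev
    return ke if ke > 0 else None

W = 2.3                # assumed work function (eV), roughly that of sodium metal
print(ejected_energy_ev(4.0e14, W))   # red light: None -- no electrons, however intense
print(ejected_energy_ev(1.0e15, W))   # ultraviolet: ~1.8 eV electrons
```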

Deja Vu All Over Again: Is Light a Particle or a Wave?

Along with Planck, Einstein is considered to be “the father of quantum physics.” The subsequent development by others of quantum mechanics (the methods of dealing with quantum physics) left Einstein sharply skeptical. For one, quantum physics and its principle of particle/wave duality dictates that light behaves both as particle and wave – depending on the experiment conducted. That, in itself, would trouble a physicist like Einstein for whom deterministic (cause and effect) physics was paramount, but there were other, startling ramifications of quantum mechanics which repulsed Einstein. The notion that events in the sub-atomic world could be statistical in nature rather than cause-and-effect left Einstein cold. “God does not play dice with the universe,” was Einstein’s opinion. Others, like the father of atomic theory, Niels Bohr, believed the evidence undeniable that nature is governed at some level by chance.

In one of the great ironies of physics, Einstein, one of the two fathers of quantum physics, felt compelled to abandon his brain-child because of philosophical/scientific conflicts within his own psyche. He never completely came to terms with the new science of quantum physics – a situation which left him somewhat outside the greater mainstream of physics in his later years.

Like Einstein’s relativity theories, quantum physics has stood the test of time. Quantum mechanics works, and no experiments have ever been conducted to prove the method wrong. Despite the truly mysterious realm of the energy quanta and quantum physics, the science works beautifully. Perhaps Einstein was right: Quantum mechanics, as currently formulated, may work just fine, but it is not the final, complete picture of the sub-atomic world. No one could appreciate that possibility in the pursuit of physics more than Einstein. After all, it was his general theory of relativity in 1916 which replaced Isaac Newton’s long-held and supremely useful force-at-a-distance theory of gravity with the more complete and definitive concept of four-dimensional, curved space-time.

By the way, and in conclusion, it is Newton’s mathematics-based science of dynamics (the science of force and motion) that defines the very first major upheaval in the history of physics – as recorded in his masterwork book from 1687, the Principia – the greatest scientific book ever written. Stay tuned.

A Greater Light for Mariners! Fresnel and His Life-Saving Lighthouse Lens

A recent drive north of San Francisco to Point Reyes National Seashore with its famous Point Reyes lighthouse was enough to stir many emotions. California’s rocky and picturesque northern coastline is reason enough to make the trip, but the lure of its famous lighthouse proves irresistible.

Point Reyes Lighthouse

The Point Reyes lighthouse is perched on a high, notoriously treacherous point of land that extends well into the Pacific Ocean from the main coastline. Many a ship found its final resting place on these rocky shores, going back to the time when sailing vessels and their intrepid sailors first plied the waters here. The first on the scene was likely Sir Francis Drake, who is believed to have safely landed immediately south of here in 1579 at what today is known as “Drake’s Bay.”

The Point Reyes lighthouse first lit its first-order Fresnel (pronounced fray-nel) light source on December 1, 1870. The oil-lamp used was nestled at the focal point of the 6,000 pound rotating Fresnel lens assembly, and its focused light could be seen all the way to the horizon on clear nights – roughly twenty-four miles out in the ocean. The weight-driven, precision clockwork mechanism which rotates the huge lens assembly once every two minutes sweeps a beam of light past a given point every five seconds, a beam that can be seen three or four times farther out to sea than previous lights – thanks to the revolutionary lens design of the French engineer/scientist Augustin-Jean Fresnel. Prior to Fresnel’s published treatise on light diffraction in 1818 and the subsequent appearance of his revolutionary lens design in 1823, lighthouses relied on conventional, inefficient and heavy glass lenses and mirrors to focus light. Fresnel lenses were soon universally adopted for lighthouses based on their superior performance. The 6,000 pound first-order Fresnel lens assembly and clockwork drive installed at Point Reyes in 1870 was purchased by the U.S. Government at the great Paris Exposition in 1867.

Looking up into the Fresnel lens assembly and pedestal

Fresnel lenses are ranked in order of their size (focal distance from internal light source to lens), and range from first-order at approximately 36 inches to just under 6 inches for a sixth-order lens. Point Reyes is renowned as the windiest location on the Pacific Coast and the second foggiest in all of North America. Given those credentials and the treacherous rocky point on which it sits, the Point Reyes lighthouse certainly merited the biggest Fresnel lens obtainable!

Augustin-Jean Fresnel


Eadweard Muybridge photo – 1888
The domed Fresnel lens is clearly visible inside.

Linda and I were at Point Reyes celebrating our 49th wedding anniversary. Upon arriving at the lighthouse after 22 miles of driving from “town,” we were greeted with the warning that the path down to the lighthouse consists of 308 steps (equivalent to a 13-story building) and that the “faint-of-heart” should not attempt the trip. We looked at each other, smiled, shrugged, and off we went. Though narrow, the cement steps are solid and shallow, so the trip back up was not bad!

Folks with fear-of-heights issues are NOT going to enjoy the stairs, however, as the light itself is perched high above the ocean on a treacherous ridge. In the old days, before there were stairs, the light-keeper occasionally had to get down on hands and knees on the rocky trail to complete the trip in howling winds and dense fog. Winds have been clocked higher than 130 mph at Point Reyes! After seeing the site, first-hand, it is easy to imagine just how difficult the light-keeper’s job was in the old days – keeping the light lit and the weight-driven clockwork running 24/7. The gravity-powered mechanism required “rewinding” every 2 ½ hours!

On the way back up!

Heading for the Shoals!

The terror of being “off course” in wild seas along a rugged coastline must have been overwhelming to seafarers. Lighthouses played a significant role in reducing the incidence of shipwreck for more than a century, but today’s GPS satellite navigational aids have all but rendered them superfluous. Among lighthouses that continue to operate today, the light source is a high-tech electric bulb within the lens, not an oil lamp. Many of yesterday’s Fresnel lens assemblies are relegated to static displays in a museum building adjoining the lighthouse in which they served. Point Reyes’ light remains in operating condition, still in its original position. The last of its resident “keepers” left Point Reyes in 1985. The lighthouse is now under the jurisdiction of the National Park Service.

As for Augustin-Jean Fresnel, the French hero of this scientific/seafaring drama: He died young in 1827 at age 39. Although honored in his day with membership in the prestigious Royal (scientific) Society of London and by its award of the Rumford Medal in 1824, his name is little known today outside of science. Anyone who visits lighthouses is bound to learn of him and his famous lenses, however, and of the importance of his work to both scientific-optics and seafaring. His name is engraved on the Eiffel Tower in Paris together with a long list of other illustrious Frenchmen.

Two Fine Resources:

A Short Bright Flash


I recommend the recently published book by Theresa Levitt on the history of the modern lighthouse and Augustin-Jean Fresnel whose pioneering work on scientific optics and subsequent lens design influenced both science and seafaring.

The other book specifically on the Point Reyes Lighthouse is a beautifully rendered historical and photographic treatment of the subject by Richard Blair and Kathleen Goodwin. I was delighted to find this fine book when we were in the town bookstore. I purchased two copies at a very reasonable price!


An example of the beautiful photography in my copy of The Point Reyes Lighthouse by Richard Blair and Kathleen Goodwin: The photo shows the interior of the Point Reyes first-order Fresnel lens with the modern electric light source(s) clearly visible. This book is published by Color & Light Editions which specializes in Point Reyes literature and art.