Sir Isaac Newton: “I Can Calculate the Motions of the Planets, but I Cannot Calculate the Madness of Men”

Isaac Newton, the most incisive mind in the history of science, reportedly uttered that sentiment about human nature. Why such a dim view of his fellow humans? Newton’s scientific greatness stemmed from his ability to see well beyond human horizons. His brilliance was amply demonstrated in his great book, Philosophiae Naturalis Principia Mathematica, in which he logically constructed his “system of the world” using mathematics. The book’s title translates from Latin as Mathematical Principles of Natural Philosophy, often shortened to “the Principia” for convenience.

The Principia is the greatest scientific book ever published. Its enduring fame reflects Newton’s ground-breaking application of mathematics, including aspects of his then-fledgling calculus, to the seemingly insurmountable difficulties of motion physics. An overwhelming challenge for the best mathematicians and “natural philosophers” (scientists) in the year 1684 was to demonstrate mathematically that the planets in our solar system should revolve around the sun in elliptically shaped orbits, as opposed to circles or some other geometric path. The fact that they do move in elliptical paths was carefully observed by Johannes Kepler and noted in his 1609 masterwork, Astronomia Nova.

In 1687, Newton’s Principia was published after three intense years of effort by the young, relatively unknown Cambridge professor of mathematics. Using mathematics and his revolutionary new concept of universal gravitation, Newton provided precise justification of Kepler’s laws of planetary motion in the Principia. In the process, he revolutionized motion physics and our understanding of how and why bodies of mass, big and small (planets, cannonballs, etc.), move the way they do. Newton did, indeed, as he stated, show us in the Principia how to calculate the motion of heavenly bodies.

In his personal relationships, Newton found dealing with people and human nature to be even more challenging than the formidable problems of motion physics. As one might suspect, Newton did not easily tolerate fools and pretenders in the fields of science and mathematics – “little smatterers in mathematicks,” he called them. Nor did he tolerate much of basic human nature and its shortcomings.

In the Year 1720, Newton Came Face-to-Face with
His Own Human Vulnerability… in the “Stock Market!”

In 1720, Newton’s own human fallibility was clearly laid bare as he invested foolishly and lost a small fortune in one of investing’s all-time market collapses. Within our own recent history, we have suffered through the stock market crash of 1929 and the housing market bubble of 2008/2009. In these more recent “adventures,” society and government allowed human nature and its propensity for greed to over-inflate Wall Street to a ridiculous extent, so much so that a collapse was inevitable to any sensible person…and still it continued.

Have you ever heard of the great South Sea Bubble in England? Investing in the South Sea Trading Company – a government-sponsored banking endeavor out of London – became a favorite pastime of influential Londoners in the early eighteenth century. Can you guess who found himself caught up in the glitter of potential investment returns only to end up losing a very large sum? Yes, Isaac Newton was that individual, along with thousands of others.

It was this experience that occasioned the remark about his own inability to calculate the madness of men (including himself)!

Indeed, he should have known better than to re-enter the government-sponsored South Sea enterprise after initially making a tidy profit from an earlier investment in the stock. As can be seen from the graph below, Newton re-invested heavily in the South Sea offering a second time as the bubble neared its peak, just prior to its complete collapse. Newton lost 20,000 English pounds (three million dollars in today’s valuations) when the bubble suddenly burst.

Clearly, Newton’s comment, which is the theme of this post, reflects his view that human nature is vulnerable to fits of emotion (like greed, envy, ambition) which in turn provoke foolish, illogical behaviors. When Newton looked in the mirror after his ill-advised financial misadventure, he saw staring back at him the very madness of men he then proceeded to rail against! Knowing Newton through the many accounts of his life that I have studied, I can well imagine that his financial fiasco was a very tough pill for him to swallow. Many are the times in his life that Newton “railed” to vent his anger against something or someone; his comment concerning the “madness of men” is typical of his outbursts. Certainly, he could disapprove of his fellow man for fueling such an obvious investment bubble. In the end, most painful for him was the realization that he had paid a stiff price for foolishly ignoring the bloody obvious. For anyone who has risked and lost on Wall Street, the mix of feelings is well understood. Even the great Newton had his human vulnerabilities – in spades – and greed was one of them. One might suspect that Newton, the absorbed scientist, was merely naïve when it came to money matters.

That would be a very erroneous assumption. Sir Isaac Newton held the top-level government position of Master of the Mint in England during those later years of his scientific retirement – in charge of the entire coinage of the realm!


For more on Isaac Newton and the birth of the Principia click on the link:

Sir Humphry Davy: Pioneer Chemist and His Invention of the Coal Miner’s “Safe Lamp” at London’s Royal Institution – 1815

Among the many examples to be cited of science serving the cause of humanity, one story stands out as exemplary. That narrative profiles a young, pioneering “professional” chemist and his invention, which saved the lives of thousands of coal miners while enabling the industrial revolution in nineteenth-century England. The young man was Humphry Davy, who quickly rose to become the most famous chemist/scientist in all of England and Europe by the year 1813. His personal history and the effects of his invention on the growth of “professionalism” in science make a fascinating story.

The year was 1799, and a significant event had occurred. The place: London, England. The setting: The dawning of the industrial revolution, shortly to engulf England and most of Europe. The significant event of which I speak: The chartering of a new, pioneering entity located in the fashionable Mayfair district of London. In 1800, the Royal Institution of Great Britain began operation in a large building at 21 Albemarle Street. Its pioneering mission: To further the cause of scientific research/discovery, particularly as it serves commerce and humanity.


The original staff of the Royal Institution was tiny, headed by its founder, the notable scientist and bon vivant Benjamin Thompson, also known as Count Rumford. By 1802, a few key members of the founding staff, including Rumford, were gone, and the fledgling organization found itself in disarray and close to closing its doors. Just one year earlier, in 1801, two staff additions had materialized, men who were destined to make their scientific marks in physics and chemistry while righting the floundering ship of the R.I. by virtue of their brilliance – Thomas Young and the subject of this post, a young, relatively unknown, pioneering chemist from Penzance, Cornwall: Humphry Davy.

By the year 1800, the industrial revolution was gaining momentum in England and Europe. Science and commerce had already begun to harness the forces of nature required to drive industrial progress rapidly forward. James Watt had perfected the steam engine, whose motive horsepower was bridled and serving the cause by the year 1800. The looming industrial electrical age was to dawn two decades later, spearheaded by Michael Faraday, the most illustrious staff member the Royal Institution ever had, and one of the greatest physicists in the history of science.

In the most unlikely of scenarios at the Royal Institution, Humphry Davy interviewed and hired the very young Faraday as a lab assistant (essentially lab “gofer”) in 1813. By that time, Davy’s star had risen as the premier chemist in England and Europe; little did he know that the young Faraday, who had less than a grade-school education and who worked previously as a bookbinder, would, in twenty short years, ascend to the pinnacle of physics and chemistry and proceed to father the industrial electrical age. The brightness of Faraday’s scientific star soon eclipsed even that of Davy’s, his illustrious benefactor and supervisor.

For more on that story click on this link to my previous post on Michael Faraday:

Wanted: Ever More Coal from England’s Mines 
at the Expense of Thousands Lost in Mine Explosions

Within two short years of obtaining his position at the Royal Institution in 1813, young Faraday found himself working with his idol/mentor Davy on an urgent research project – a chemical examination of the properties of methane gas, or “fire damp,” as it was known by the “colliers,” or coal miners.

The need for increasing amounts of coal to fuel the burgeoning boilers and machinery of the industrial revolution had forced miners deeper and deeper underground in search of rich coal veins. Along with the coal they sought far below the surface, the miners encountered larger pockets of methane gas which, when exposed to the open flame of their miner’s lamp, resulted in a growing series of larger and more deadly mine explosions. The situation escalated to a national crisis in England and resulted in numerous appeals for help from the colliers and from national figures.

By 1815, Humphry Davy at the Royal Institution had received several petitions for help, one of which came from a Reverend Dr. Gray from Sunderland, England, who served as a spokesman/activist for the colliers of that region.

Davy and the Miner’s Safe Lamp:
Science Serving the “Cause of Humanity”

Working feverishly from August into October, 1815, Davy and Faraday produced what was to become known as the “miner’s safe lamp,” an open-flame lamp designed not to ignite the pockets of methane gas found deep underground. The first announcement of Davy’s progress and success came in this historic letter to the Reverend Gray dated October 30, 1815.


The announcement heralds one of the earliest, concrete examples of chemistry (and science) put to work to provide a better life for humanity.

Royal Institution
Albermarle St.
Oct 30

 My Dear Sir

                               As it was in consequence of your invitation that I endeavored to investigate the nature of the fire damp I owe to you the first notice of the progress of my experiments.

 My results have been successful far beyond my expectations. I shall inclose a little sketch of my views on the subject & I hope in a few days to be able to send a paper with the apparatus for the Committee.

 I trust the safe lamp will answer all the objects of the collier.

 I consider this at present as a private communication. I wish you to examine the lamps I had constructed before you give any account of my labours to the committee. I have never received so much pleasure from the results of my chemical labours, for I trust the cause of humanity will gain something by it. I beg of you to present my best respects to Mrs. Gray & to remember me to your son.

 I am my dear Sir with many thanks for your hospitality & kindness when I was at Sunderland.


                                                                             H. Davy

This letter is clearly Davy’s initial announcement of a scientifically based invention which ultimately had a pronounced real and symbolic effect on the nascent idea of “better living through chemistry” – a phrase I recall from early television ads run by a large industrial company like DuPont or Monsanto.


In 1818, Davy published his book on the urgent, but thorough scientific researches he and Faraday conducted in 1815 on the nature of the fire damp (methane gas) and its flammability.


Davy’s coal miner’s safety lamp was the subject of papers presented by Davy before the Royal Society of London in 1816. The Royal Society had been, since its founding by King Charles II in 1662, the foremost scientific body in the world. Sir Isaac Newton, the greatest scientific mind in history, presided as its president from 1703 until his death in 1727. The Society’s presence and considerable influence are still felt today.

Davy’s safe lamp had an immediate effect on mine explosions and miner safety, although there were problems which required refinements to the design. The first models featured a wire gauze cylinder surrounding the flame chamber which affected the temperature of the air/methane mixture in the vicinity of the flame. This approach took advantage of the flammability characteristics of methane gas which had been studied so carefully by Davy and his recently hired assistant, Michael Faraday. Ultimately, the principles of the Davy lamp were refined sufficiently to allow the deep-shaft mining of coal to continue in relative safety, literally fueling the industrial revolution.

Humphry Davy was a most unusual individual, as much poet and philosopher as scientist. He was a close friend and kindred spirit to the poets Coleridge, Southey, and Wordsworth. He relished rhetorical flourish and exhibited a personal idealism in his earlier years, a trait on open display in the letter to the Reverend Gray, shown above, regarding his initial success with the miner’s safe lamp.

“I have never received so much pleasure from the results of my chemical labours, for I trust the cause of humanity will gain something by it.”

As proof of the sincerity of this sentiment, Davy refused to patent his valuable contribution to the safety of thousands of coal miners!

Davy has many scientific “firsts” to his credit:

-Experimented with the physiological effects of the gas nitrous oxide (commonly known as “laughing gas”) and first proposed it as a possible medical/dental anesthetic – which it indeed became decades later.

-Pioneered the new science of electrochemistry using the largest voltaic pile (battery) in the world, constructed for Davy in the basement of the R.I. Alessandro Volta first demonstrated the principles of the electric pile in 1800, and within two years, Davy was using his pile to perfect electrolysis techniques for separating and identifying “new” fundamental elements from common chemical compounds.

-Separated/identified the elements potassium and sodium in 1807, soon followed by others such as calcium and magnesium.

-In his famous, award-winning Bakerian Lecture of 1806, On Some Chemical Agencies of Electricity, Davy shed light on the entire question concerning the constituents of matter and their chemical properties.

-Demonstrated the “first electric light” in the form of an electric arc-lamp which gave off brilliant light.

-Wrote several books including Elements of Chemical Philosophy in 1812.

In addition to his pioneering scientific work, Davy’s heritage still resonates today for other, more general reasons:

-He pioneered the notion of “professional scientist,” working, as he did, as paid staff in one of the world’s first organized/chartered bodies for the promulgation of science and technology, the Royal Institution of Great Britain.

-As previously noted, Davy is properly regarded as the savior of the Royal Institution. Without him, its doors surely would have closed after only two years. His public lectures in the Institution’s lecture theatre quickly became THE rage of established society in and around London. Davy’s charismatic and informative presentations brought the excitement of the “new sciences” like chemistry and electricity front and center to both ladies and gentlemen. Ladies were notably and fashionably present at his lectures, swept up by Davy’s personal charisma and seduced by the thrill of their newly acquired knowledge… and enlightenment!


The famous 1802 engraving/cartoon by satirist/cartoonist James Gillray
Scientific Researches!….New Discoveries on Pneumaticks!…or…An
Experimental Lecture on the Power of Air!

This very famous hand-colored engraving from 1802 satirically portrays an early public demonstration, in the lecture hall of the Royal Institution, of the powers of the gas nitrous oxide (laughing gas). Humphry Davy is shown manning the gas-filled bellows! Note the well-heeled gentry in the audience, including many ladies of London. Davy’s scientific reputation led to the honor of knighthood and, later, the English title of baronet, thus making him Sir Humphry Davy.

The lecture tradition at the R.I. was begun by Davy in 1801 and continued for many years thereafter by the young, uneducated man hired by Davy himself in 1813 as lab assistant. Michael Faraday was to become, in only eight short years, the long-tenured shining star of the Royal Institution and a physicist whose contributions to science surpassed those of Davy and rank but one level below the legacies of Galileo, Newton, Einstein, and Maxwell. Faraday’s lectures at the R.I. were brilliantly conceived and presented – a must for young scientific minds, both professional and public – and the Royal Institution in London remained a focal point of science for more than three decades under Faraday’s reign there.


The charter and by-laws of the R.I. published in 1800 and an admission ticket to Michael Faraday’s R.I. lecture on electricity written and signed by him: “Miss Miles or a friend / May 1833”

Although once again facing economic hard times, the Royal Institution exists today – in the same original quarters at 21 Albemarle Street. Its fabulous legacy of promulgating science for more than two centuries would not exist were it not for Humphry Davy and Michael Faraday. It was Davy himself who ultimately offered that the greatest of all his discoveries was…Michael Faraday.

Charles Darwin’s Journey on the Beagle: History’s Most Significant Adventure

In 1831, a young, unknown, amateur English naturalist boarded the tiny ship, HMS Beagle, and embarked, as crew member, on a perilous, five-year journey around the world. His observations and the detailed journal he kept of his various experiences in strange, far-off lands would soon revolutionize man’s concept of himself and his place on planet earth. Darwin’s revelations came in the form of his theory of natural selection – popularly referred to as “evolution.”

H.M.S. Beagle_Galapagos_John Chancellor

Since the publication of his book, On the Origin of Species, in 1859, which revealed to the scientific community his startling conclusions about all living things based on his voyage journal, Darwin has rightfully been ranked in the top tier of great scientists. In my estimation, he is the most important and influential natural scientist of all time, and I would rank him right behind Isaac Newton and Albert Einstein among the most significant scientific figures of modern times.

Young Charles Darwin enrolled at the University of Edinburgh in 1825 to pursue a career in medicine. His father, a wealthy, prominent physician, had attended Edinburgh and, indeed, exerted considerable influence on young Charles to follow him into a medical career. At Edinburgh, the sixteen-year-old Darwin quickly found the study of anatomy, with its dissecting theatre, an odious experience. More than once, he had to flee the theatre to vomit outside after witnessing the dissection process. The senior Darwin, although disappointed in his son’s unsuitability for medicine, soon arranged for Charles to enroll at Cambridge University to study for the clergy. In Darwin’s own words: “He [the father] was very properly vehement against my turning an idle sporting man, which seemed my probable destination.”

Darwin graduated tenth in his class of 168 with a B.A. and very little interest in the clergy! During his tenure at Cambridge, most of young Darwin’s spare time was spent indulging his true and developing passion: collecting insects, with a special emphasis on beetles. Along the way, he became good friends with John Stevens Henslow, professor of botany, ardent naturalist, and kindred spirit to the young Charles.

Wanted: A Naturalist to Sail On-Board the Beagle

On 24 August, 1831, in one of history’s most prescient communiques, Professor Henslow wrote his young friend and protégé: “I have been asked by [George] Peacock…to recommend him a naturalist as companion to Capt. Fitzroy employed by Government to survey the S. extremity of America [the coasts of South America]. I have stated that I considered you to be the best qualified person I know of who is likely to undertake such a situation. I state this not on the supposition of ye being a finished naturalist, but as amply qualified for collecting, observing, & noting any thing worthy to be noted in natural history.” Seldom in history has one man “read” another so well in terms of future potential as did Henslow in that letter to young Darwin!

Charles’ father expressed his opposition to the voyage, in part, on the following grounds as summarized by young Darwin:

-That such an adventure could prove “disreputable to my [young Darwin’s] character as a Clergyman hereafter.”

-That it seems “a wild scheme.”

-That the position of naturalist “not being [previously] accepted there must be some serious objection to the vessel or expedition.”

-That [Darwin] “should never settle down to a steady life hereafter.”

-That “it would be a useless undertaking.”

The young man appealed to his uncle, Josiah Wedgwood [of pottery-family fame], whose judgement he valued. Scientific history hung in the balance as Uncle Josiah promptly weighed in with the senior Darwin, offering convincing arguments in favor of the voyage. In rebuttal to the objection from Darwin’s father that “it would be a useless undertaking,” the uncle reasoned: “The undertaking would be useless as regards his profession [future clergyman], but looking upon him as a man of enlarged curiosity, it affords him the opportunity of seeing men and things as happens to few.” Enlarged curiosity, indeed! How true that proved to be. The senior Darwin then made his decision in the face of Uncle Josiah’s clear vision and counsel: despite lingering reservations, he gave his permission for Charles to embark on the historic sea voyage, one which, more than any other, changed mankind’s sense of self. Had the decision been otherwise, Darwin’s abiding respect for his father’s opinion and authority would have bequeathed the world yet another clergyman while greatly impeding the chronicle of man and all living things on this planet.

On 27 December, 1831, HMS Beagle with Darwin aboard put out to sea, beginning an adventure that would circle the globe and take almost five years. Right from the start, young Charles became violently seasick, often confined to his swaying hammock hanging in the cramped quarters of the ship. Seasickness dogged young Darwin throughout the voyage. I marvel at the fortitude displayed by this young, recently graduated “gentleman from Cambridge” as he undertook such a daunting voyage. Given that the voyage would entail many months at sea, under sail, Capt. Fitzroy and Darwin had agreed from the start that Charles would spend most of his time on land, in ports of call, while the Beagle would busy itself surveying the local coastline per its original government charter. While on land, Darwin’s mission was to observe and record what he saw and experienced concentrating, of course, on the flora, fauna, and geology of the various diverse regions he would visit.

St. Jago, an island in the Cape Verde archipelago off the west coast of Africa, was the Beagle’s first stop on 16 January, 1832. It was here that Darwin made one of his first significant observations. Quoting from his journal: “The geology of this island is the most interesting part of its natural history. On entering the harbour, a perfectly horizontal white band in the face of the sea cliff, may be seen running for some miles along the coast, and at the height of about forty-five feet above the water. Upon examination, this white stratum is found to consist of calcareous [calcium] matter, with numerous shells embedded, most or all of which now exist on the neighboring coast.”

Darwin goes on to conclude that a stratum of sea-shells very much higher than the current water line speaks to ancient, massive upheavals of the earth in the region. From the simple, focused collector of beetles in his Cambridge days, Darwin had now become obsessed with the bigger picture of nature, a view which embraced the importance of geology/environment as key to decoding nature’s secrets.

In a fascinating section of his journal, Darwin describes his astonishment at the primitive state of the native inhabitants of Tierra Del Fuego, at the southern tip of South America. From the journal entry of 17 December, 1832: “In the morning, the Captain sent a party to communicate with the Fuegians. When we came within hail, one of the four natives who were present advanced to receive us, and began to shout most vehemently, wishing to direct us where to land. When we were on shore the party looked rather alarmed, but continued talking and making gestures with great rapidity. It was without exception the most curious and interesting spectacle I ever beheld: I could not have believed how wide was the difference between savage and civilized man; it is greater than between a wild and domesticated animal, inasmuch as in man there is a greater power of improvement.” A separate reference I recall reading referring to Darwin’s encounter with the Fuegians stated that he could scarcely believe that the naked, dirty, and primitive savages before his eyes were of the same species as the sherry-sipping professors back at Cambridge University – so vividly stated.

On 2 October, 1836, the Beagle arrived at Falmouth, Cornwall, her nearly five-year journey circumnavigating the globe complete. Throughout the trip, Darwin recorded life on the high seas and, most importantly, his myriad observations on the geology of the many regions visited on foot and horseback as well as the plant and animal life.

I often invoke the mantra to which I ardently subscribe: that fact is always stranger than fiction…and so much more interesting and important. Picturing Darwin, the elite Englishman and budding naturalist, riding horseback amidst the rough-hewn vaqueros [cowboys] of Chile speaks to the improbability of the entire venture. When studying Darwin, it quickly becomes clear that his equable nature and noble intents were obvious to those whose approval and cooperation were vital to the success of his venture. That was particularly true of the Beagle’s crew and of Capt. Fitzroy, whose private cabin Darwin shared. Fortunately, Fitzroy was a man of considerable ability and efficiency in captaining the Beagle. He was, at heart, a man sensible of the power and importance of scientific knowledge, and that made his less admirable qualities bearable to Darwin. The crew made good-natured fun of the intellectual, newbie naturalist in their midst, but spared no effort in helping Darwin pack his considerable array of collected natural specimens, large and small, in boxes and barrels for shipment back to Professor Henslow at Cambridge. Some of these never arrived, but most did make their way “home.”

When Darwin returned to Cambridge after arriving back home at Cornwall, he was surprised to learn that Professor Henslow had spread news among his friends at Cambridge of the Beagle’s whereabouts in addition to sharing, with his university colleagues, the specimens sent home by his young protégé. Darwin had embarked on the Beagle’s voyage as an amateur collector of insects. Now, to his great surprise, he had become a naturalist with a reputation and a following within the elite circles at Cambridge, thanks to Professor Henslow.

Once home, Charles Darwin wasted little time tackling the immense task of studying and categorizing the many specimens he had sent back during the voyage. By 1838, the outlines of natural selection had begun to materialize in his mind. One situation of particular note that he recorded in the Galapagos Islands fueled his speculations. There, he noted that a species of bird indigenous to several of the islands in the archipelago seemed to have unique beaks depending upon which island they inhabited. In virtually all other aspects, the birds closely resembled one another – all members of a single species. Darwin noticed that the beaks in each case seemed most ideally suited to the particular size and shape of the seeds most plentiful on that particular island. Darwin took great pains to document these finches of the Galapagos, suspecting that they harbored important clues to nature’s ways. Darwin reasoned that somehow the birds seemed to be well-adapted to their environment/food source in the various islands. Clues such as this shaped his thought processes as he carefully distilled the notes entered in his journal during the voyage. By 1844, Charles Darwin had formulated the framework for his explanation of animal/plant adaptation to the environment. Except for one or two close and trusted colleagues, Darwin kept his budding theory to himself for years to come for important reasons which I discuss shortly.


Darwin published his book, Journal of Researches, in 1839. The book was taken from his copious journal entries during the voyage; within its pages resides the seed-stock from which would germinate Darwin’s ultimate ideas and his theory of natural selection. This book remained, to Darwin’s dying day, closer to his affections and satisfaction than any other including On the Origin of Species.



What Is the Essence of Natural Selection?

Darwin’s theory of natural selection proposed that species are not immutable across time and large populations. Random variations appear in this or that characteristic of particular individuals within a large population. Such variations, beginning with one individual, can be passed along to future generations through its immediate offspring. In the case of a singular Galapagos finch born with a significantly longer and narrower beak than that of a typical bird in the species, that specimen and its offspring which inherit the tendency will inevitably be subjected to “trial by nature.” If the longer, narrower beak makes it easier for these new birds to obtain and eat the seeds and insects present in their environment, these birds will thrive and go on, over time, to greatly out-reproduce others of their species who do not share the “genetic advantage.” Eventually that new characteristic – in this example, the longer, narrower beak – will predominate within the population in that environment. This notion is the essence of Darwin’s theory of natural selection. If the random variation at hand proves disadvantageous, future generations possessing it will be less likely to survive than those individuals without it.
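For readers who think in code, the mechanism just described can be sketched as a toy simulation. Every number below (the “ideal” beak length, population size, mutation size) is a hypothetical illustration chosen for the sketch, not biological data: birds whose beaks better fit the island’s seeds are more likely to reproduce, offspring inherit the parent’s beak with a small random variation, and over generations the population mean drifts toward the better-adapted form.

```python
import random

random.seed(42)

# All numbers here are hypothetical illustrations, not biological data.
IDEAL_BEAK = 12.0    # assumed optimal beak length (mm) for this island's seeds
POPULATION = 200
GENERATIONS = 100

def fitness(beak):
    # The closer a beak is to the assumed ideal, the better the bird
    # obtains food, and the likelier it survives to reproduce.
    return 1.0 / (1.0 + abs(beak - IDEAL_BEAK))

def next_generation(pop):
    # "Trial by nature": parents are chosen in proportion to fitness;
    # each offspring inherits its parent's beak plus a small random variation.
    weights = [fitness(b) for b in pop]
    parents = random.choices(pop, weights=weights, k=len(pop))
    return [p + random.gauss(0.0, 0.1) for p in parents]

# Start with a population whose beaks are all shorter than the ideal.
pop = [random.uniform(6.0, 10.0) for _ in range(POPULATION)]
for _ in range(GENERATIONS):
    pop = next_generation(pop)

mean_beak = sum(pop) / len(pop)
print(f"mean beak after {GENERATIONS} generations: {mean_beak:.1f} mm")
```

Note that nothing in the sketch “aims” at the ideal: each generation only applies random variation plus differential survival, and the adaptation emerges from that process alone, which is precisely Darwin’s point.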

Note that this description, natural selection, is far more scientifically specific than the oft-used/misused phrase applied to Darwin’s work: theory of evolution. To illustrate: “theory of evolution” is a very general phrase admitting even the possibility that giraffes have long necks because they have continually stretched them over many generations reaching for food on the higher tree canopies. That is precisely the thinking of one of the early supporters of evolution theory, the Frenchman, Lamarck, as expressed in his 1809 publication on the subject. Darwin’s “natural selection” explains the specific mechanism by which evolution occurs – except for one vital, missing piece… which we now understand.

Genetics, Heredity, and the DNA Double Helix:
 Random Mutations – the Key to Natural Selection!

Darwin did not know – could not know – the source of the random significant variations in species which were vital to his theory of natural selection. He came to believe that there was some internal genetic blueprint in living things that governed the species at hand while transmitting obvious “familial traits” to offspring. Darwin used the name “gemmules” referring to these presumed discrete building blocks, but he could go no further in explaining their true nature or behavior given the limited scientific knowledge of the time.

James Watson and Francis Crick won the 1962 Nobel Prize in Physiology or Medicine for their 1953 discovery of the DNA double helix, which carries the genetic information of all living things. The specific arrangement of chemical base-pair connections, or rungs, along the double-helix ladder is precisely the genetic blueprint which Darwin suspected. The human genome has been decoded within the last twenty years, yielding tremendous knowledge about nature’s life-processes. We know, for instance, that one particular – just one – hereditary base-pair error along the double helix can result in the devastating medical condition called Tay-Sachs disease, wherein the initially healthy brains of newborns are destroyed in just a few years due to the body’s inability to produce a necessary protein. Literally every characteristic of all living things is dictated by the genetic sequence of four different chemical building blocks, called bases, which straddle the DNA double helix. The random variations necessary for the viability of Darwin’s theory of natural selection are precisely those which stem from random base-pair mutations along the helix. These can occur spontaneously during DNA replication, or they can result from something as esoteric as cosmic radiation striking a cell nucleus and altering its DNA. The end result of the sub-microscopic change might be trivial, beneficial, or catastrophic to the individual.

Gregor Mendel: The Father of Genetics…Unknown to Darwin

In 1865, a sequestered Austrian monk published an obscure scientific paper in the proceedings of the local natural history society in Brünn. Like Darwin, Mendel began with no formal scientific credentials, only a strong curiosity and an interest in the pea plants he tended in the monastery garden. He had wondered about the predominant colors of the peas from those plants, green and yellow, and pondered the possible mechanisms which could determine the color produced by a particular plant. He concocted a series of in-breeding experiments to find out. After exhaustive trials using pea color, plant size, and five other distinguishing characteristics of pea plants, Mendel found that the statistics of inheritance involved distinct numerical ratios – for example, a “one-in-four chance” for a specific in-breeding outcome. The round numbers present in Mendel’s experimental results suggested the existence of distinct, discrete genetic mechanisms at work – what Darwin vaguely had termed “gemmules.” Mendel’s 1865 paper describing his findings, and the work behind it, cement Mendel’s modern reputation as the “Father of Genetics.” Incredibly and unfortunately, virtually no one took serious notice of his paper until it was re-discovered in 1900, thirty-five years after its publication, and championed in England by the geneticist William Bateson!
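Mendel’s “one-in-four chance” falls out of a simple random cross. Here is a minimal simulation, written in modern genetic notation that Mendel himself did not use, of repeatedly crossing two hybrid pea plants; yellow (A) dominant over green (a) matches Mendel’s actual pea-color result.

```python
import random

# A minimal simulation (in modern notation Mendel did not use) of his
# "one-in-four" result: cross two hybrid (Aa) pea plants repeatedly,
# with yellow (A) dominant over green (a). Only aa offspring show the
# recessive green color, and they appear about a quarter of the time.

random.seed(42)  # fixed seed so the run is repeatable

def cross(parent1="Aa", parent2="Aa"):
    """Each parent contributes one of its two alleles at random."""
    return random.choice(parent1) + random.choice(parent2)

offspring = [cross() for _ in range(100_000)]
green = sum(1 for genotype in offspring if genotype == "aa")
print(f"recessive (green) fraction: {green / len(offspring):.3f}")  # ~0.250
```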

Original offprints (limited initial printings for the author) of Mendel’s paper are among the rarest and most desirable of historical works in the history of science, selling for hundreds of thousands of dollars on the rare book/manuscript market. We know that only forty were printed and scarcely half of these have been accounted for. Question: Did Mendel send an offprint of his pea plant experiments to Charles Darwin in 1865, well after the publication of Darwin’s groundbreaking On the Origin of Species in 1859? An uncut [meaning unopened, thus unread] offprint was presumably found among Darwin’s papers after his death, according to one Mendel reference source. Certainly, no mention of it was ever made by Charles Darwin.

 It is an intriguing thought that the key, missing component of Darwin’s natural selection theory as espoused in his Origin of Species possibly resided unread and unnoticed on Darwin’s bookshelf! And is it not a shame that Mendel lived out his life in the abbey essentially unknown and without due credit for his monumental work in the new science of genetics, a specialty which he founded?

Darwin’s Reluctance to Publish His Theory Nearly Cost Him His Due Credit

Darwin finally revealed his theory of natural selection to the public and the scientific community at large in 1859 with the publication of On the Origin of Species. In fact, the central tenets of the book had congealed in Darwin’s mind long before, by 1844. He had held the framework of his theory close to the vest for all that time! Why? Because to espouse evolutionary ideas in the middle of the nineteenth century was to invite scorn and condemnation from creationists within many religions. And no one was more averse to a more secular universe – one with a less personal creator who did not create man and the animals in more or less final form (despite their obvious diversity) – than Emma Wedgwood Darwin, Darwin’s very religious wife. She believed in an afterlife in which she and her beloved husband would be joined together for eternity. Charles was becoming less and less certain of this religious ideal as the years went by and nature continued to reveal herself to the ever-inquiring, self-made naturalist who had set out to probe her ways.

To espouse a natural world which, once its fundamental constituents were brought together, would henceforth change and regulate itself without further involvement by the Creator would be a painful repudiation of Emma’s fundamental belief in a personal God. For this very personal reason, and because of the professional risk of being ostracized by the community of naturalists for promulgating radical, anti-religious ideas, Darwin put off publication of his grand book – the book which would ensure him priority and credit for one of the greatest of all scientific conclusions.

After stalling publication for years and with his manuscript only half completed, Darwin was shocked into feverish activity on his proposed book by a paper he received on 18 June, 1858. It was from a fellow naturalist of Darwin’s acquaintance, one Alfred Russel Wallace. In his paper, Wallace outlined his version of natural selection which eerily resembled the very theory Darwin was planning to eventually publish to secure his priority. There was no doubt that Wallace had arrived independently at the same conclusions that Darwin had reached many years earlier. Wallace’s paper presented an extremely difficult problem for Darwin in that Wallace had requested that Darwin pass his [Wallace’s] paper on to their mutual friend, the pathfinding geologist, Charles Lyell.

Darwin in a Corner: Academic Priority at Stake
Over One of the Great Scientific Breakthroughs

Now Darwin felt completely cornered. If he passed Wallace’s paper on to Lyell as requested, essentially making it public, the academic community would naturally steer credit for the theory of natural selection to Wallace. On the other hand, having just received Wallace’s paper on the subject, how would it look if he, Darwin, suddenly announced publicly that he had already deciphered nature and her ways – well before Wallace had? That course of action could inspire suspicions of plagiarism on Darwin’s part.

The priority stakes were as high as any since the time of Isaac Newton, when he and the mathematician Gottfried Leibniz locked horns in a bitter battle over credit for development of the calculus. It had been years since Darwin’s voyage on the Beagle, which began the long gestation of his ideas on natural selection. He had been sitting on his conclusions since 1844 for fear of publishing, and now he was truly cornered – “forestalled,” as he called it. Darwin, drawing on the better angels of his morose feelings, quickly proposed to Wallace that he [Darwin] would see to it that his [Wallace’s] paper be published in any journal of Wallace’s choosing. In what became a frenzied period in his life, he reached out to two of his closest colleagues and trusted confidants, Charles Lyell and Joseph Hooker, for advice. The two had long been entrusted with the knowledge of Darwin’s work on natural selection; they well understood Darwin’s priority in the matter, and he needed them now. The two friends came up with a proposal: Publish both Wallace’s paper and a synopsis by Darwin outlining his own long-standing efforts and results. The Linnean Society presented the joint papers at its meeting of 1 July, 1858, and published them in its journal. Fortunately for Darwin, Alfred Russel Wallace proved conciliatory regarding the potential impasse over priority, tacitly acknowledging that his colleague had, indeed, been first to formulate the theory of natural selection.

Nonetheless, for Darwin, the cat was out of the bag, and the task ahead was to work full-steam to complete the large book that would contain all the details of natural selection and ensure his priority. He worked feverishly on his book, On the Origin of Species, right up to its publication by John Murray. The book went on sale on 22 November, 1859, and all 1,250 copies sold quickly. This was an excruciating period of Darwin’s life. He was not only under unrelenting pressure to complete one of the greatest scientific books of all time, but was also intermittently very ill throughout the process, presumably from a systemic problem contracted during his travels on the Beagle voyage. Yes, the expected controversy came immediately after publication of the book, but Darwin and his contentions have long weathered the storm. Few of his conclusions have not stood the test of time and modern scrutiny.

The Origin was his great book, but the book that was the origin of the Origin, his 1839 Journal of Researches always remained his favorite. Certainly, the Journal was written at a much happier time in Darwin’s life, a time flush with excitement over his prospects as a newly full-fledged naturalist. For me, the Journal brims with the excitement of travel and scientific discovery/fact-finding – the seed-corn of scientific knowledge (and new technologies). The Origin represents the resultant harvest from that germinated seed-corn.

“Endless Forms Most Beautiful” –
Natural Selection in Darwin’s Own Words

In his Introduction to the Origin, Darwin describes the essence of natural selection:

“In the next chapter, the struggle for existence amongst all organic beings throughout the world, which inevitably follows from their high geometrical powers of increase, will be treated of. This is the doctrine of Malthus, applied to the whole animal and vegetable kingdoms. As many more individuals of each species are born than can possibly survive; and as, consequently, there is a frequently recurring struggle for existence, it follows that any being, if it vary however slightly in any manner profitable to itself, under the complex and sometimes varying conditions of life, will have a better chance of surviving, and thus be naturally selected. From the strong principle of inheritance, any selected variety will tend to propagate its new and modified form.”

Darwin and Religion

Charles Darwin, educated for the clergy at Cambridge, drifted steadily away from orthodox religious views as his window on nature and her ways became more transparent to him over the decades. Never an atheist, he grew increasingly agnostic as he embraced the results of his lifelong study of the natural world. The Creator Darwin believed in was not, to him, the involved shepherd of all living things in this world. Rather, he seemed more like a watchmaker who, after the watch was first assembled, wound it up and let it run on its own while retreating into the background.

Another viewpoint, which I tend to favor and which may apply to Darwin: God, whom we cannot fully know in this life, created not only all living things at the beginning, but also the entire structure of natural law (science) which dictates not only the motion of the planets, but the future course of life forms. Natural selection, and hence evolution as well, are central tenets of that complete structure of natural law. The laws of nature, which permanently bear the fingerprints of the creator and his creation, thus enable the self-powered, self-regulating behaviors of the physical and natural world – without contradiction.

 Charles Darwin: Humble Man and Scientific Titan


In writing this post, my re-acquaintance with Darwin has brought great joy. Some years after initially reading the biographies and perusing his works, I have re-discovered a life and legacy which are so important to science. His body of work includes several other very important books besides his Journal and Origin. Beyond his scientific importance lies the man himself – a man of very high character and superb intellect. Darwin was gifted with intense curiosity, that magical motor which drives great accomplishment in science. Passion and curiosity: Isaac Newton had them in great abundance, and so, too, did Albert Einstein. Yet Charles Darwin differed in several respects from those two great scientists: First, he was fortunate enough to have been born to privilege and was thus comfortably able to devote his working life to science from the beginning. Second, Darwin was a very happily married man who fathered ten children, each of whom he loved and doted upon. Third, Darwin’s character was impeccable in all respects. His personality was stiffened a bit by the English societal conventions then prevalent, but his humanity shows through in so many ways. His struggle with religion is one most of us can relate to.

Reading Darwin’s works is a joy both because he was an articulate, educated Englishman and because the contents of his books like the Journal and Origin are easily digestible compared to the major works of Newton and Einstein. Like Darwin himself, my favorite book of his is The Journal of Researches, sometimes referred to as the Voyage of the Beagle. What an adventure.


The “sandwalk” path around the extended property of his long-held estate, Down House. Darwin frequently traversed this closed path on solitary walks around the estate while he gathered his thoughts about matters both big and small.

Marking the Passage of Time: The Elusive Nature of the Concept

Nature presents us with few mysteries more tantalizing than the concept of “time.” Youngsters, today, might not think the subject worthy of much rumination: After all, one’s personal iPhone can conveniently provide the exact time at any location on our planet.


Human beings have long struggled with two fundamental questions regarding time:

  1. What are the fundamental units in nature used to express time? More simply, what constitutes one second of time? How is one second determined?
  2. How can we “accurately” measure time using the units chosen to express it?

The simple answers for those so inclined might be: We measure time in units of seconds, minutes, hours, and days, etc., and we have designed carefully constructed and calibrated clocks to measure time! That was easy, wasn’t it?

The bad news: Dealing with the concept of time is not quite that simple.
The good news: The fascinating surprises and insights gained from taking a closer, yet still cursory, look at “time” are well worth the effort to do so. To do the subject justice requires far more than a simple blog post – scholarly books, in fact – but my intent, here, is to illustrate how fascinating the concept of time truly is.

Webster’s dictionary defines time as “a period or interval…the period between two events or during which ‘something’ exists, happens, or acts.”

For us humans, the rising and setting of the sun – the cycle of day and night – is a “something” that happens, repeats itself, and profoundly affects our existence. It is that very cycle which formed our first concept of time. The time required for the earth to make one full revolution on its axis is but one of many repeating natural phenomena, and it was, from the beginning of man’s existence, uniquely qualified to serve as the arbitrary basis of time measurement. Other repeatable natural phenomena could have anchored our definition of time: for instance, the almost constant period of the earth’s revolution around the sun (our year), or certain electron-jump vibrations at the atomic level – except that such technology was unknown and unthinkable to ancient man. In fact, today’s universally accepted time standard utilizes a second defined by the extraordinarily stable and repeatable electron jumps within cesium 133 atoms – the so-called atomic clock, which has replaced the daily rotation of the earth as the prime determinant of the second.

Why use atomic clocks instead of the earth’s rotation to define the second? Because the length of the solar day varies slightly through the year owing to the shape of our planet’s orbit around the sun and the tilt of its axis. The rotation also changes over many centuries as the earth’s axis “precesses” (a slowly rotating change of direction) relative to the starry firmament all around. By contrast, atomic clocks are extremely regular in their behavior.

Timekeepers on My Desk: From Drizzling Sand to Atomic Clocks!

I have on my desk two time-keepers which illustrate the startling improvement in time-keeping over the centuries. One is the venerable hour-glass: Tip it over and the sand takes roughly thirty minutes (in mine) to drizzle from top chamber to bottom. The other timekeeper is one of the first radio-controlled clocks readily available – the German-built Junghans Mega, which I purchased in 1999. It features an analog display (clock-hands, not a digital read-out) driven by a very accurate internal quartz electronic heartbeat: the oscillations of its tiny quartz-crystal resonator. Even the quartz oscillator may stray from absolute accuracy by as much as 0.3 seconds per day, in contrast to the incredible regularity of the cesium atomic clocks which now define the international second as 9,192,631,770 atomic “vibrations” of cesium 133 atoms – an incredibly stable natural phenomenon. The Junghans Mega uses its internal radio capability to tune in automatically every evening at 11 pm to the atomic clocks operating in Fort Collins, Colorado. Precise time-sync signals broadcast from there are utilized to “reset” the Mega to the exact time each evening at eleven.
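A back-of-the-envelope comparison, using the worst-case 0.3 seconds-per-day quartz drift quoted above, shows why the nightly radio sync matters; the figures are illustrative worst cases, not measurements of my particular clock.

```python
# Back-of-the-envelope comparison using the figures above: a quartz
# movement allowed to drift up to 0.3 seconds per day, versus the same
# movement re-synced nightly by the atomic-clock radio broadcast.

quartz_drift_per_day = 0.3  # seconds per day, worst case
days_per_year = 365

free_running_error = quartz_drift_per_day * days_per_year
print(f"free-running quartz after a year: up to {free_running_error:.0f} s "
      f"(about {free_running_error / 60:.1f} minutes)")

# The nightly reset means the error can never accumulate beyond a
# single day's drift:
print(f"radio-synced clock: never worse than {quartz_drift_per_day} s")
```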

I love this beautifully rendered German clock which operates all year on one tiny AA battery and requires almost nothing from the operator in return for continuously accurate time and date information. Change the battery once each year and its hands will spin to 12:00 and sit there until the next radio query to Colorado. At that point, the hands will spin to the exact second of time for your world time zone, and off it goes….so beautiful!

Is Having Accurate Time So Important?
You Bet Your Life…and Many Did!

Yes, keeping accurate time is far more important than not arriving late for your doctor’s appointment! The fleets of navies and the world of seagoing commerce require accurate time…on so many different levels. In 1714, the British Admiralty offered the then-huge sum of 20,000 pounds to anyone who could concoct a practical way to measure longitude at sea. That so-called Longitude Act was inspired by a great national tragedy involving the Royal Navy. On October 22, 1707, a fleet of ships was returning home after a sojourn at sea. Despite intense fog, the flagship’s navigators assured Admiral Sir Cloudesley Shovell that the fleet was well clear of the treacherous Scilly Islands, some twenty miles off the southwest coast of England. Such was not the case, however, and the admiral’s flagship, Association, struck the shoals first, quickly sinking, followed by three other vessels. Two thousand lives were lost in the churning waters that day. Of those who went down, only two managed to wash ashore alive. One was Sir Cloudesley Shovell. As an interesting aside, the story has it that a woman combing the beach happened across the barely alive admiral, noticed the huge emerald ring on his finger, and promptly lifted it, finishing him off in the process. She confessed the deed some thirty years later, offering the ring as proof.

The inability of seafarers to navigate safely by determining their exact location at sea was of great concern to sea powers like England, which had a great investment in both its fleet of fighting ships and its commerce shipping. A ship’s latitude could be quite accurately determined on clear days by “shooting” the height of the sun above the horizon using a sextant, but its longitude was only an educated guess. The solution to the problem of determining longitude-at-sea materialized in the form of an extremely accurate timepiece carried aboard ship, commonly known ever since as a “chronometer.” Using such a steady, accurate time-keeper, longitude could be calculated.
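The arithmetic behind the chronometer method can be sketched in a few lines: the earth turns 360 degrees in 24 hours, or 15 degrees per hour, so the difference between local solar time (read from the sun) and the home-port time kept by the chronometer gives longitude directly. The readings below are invented for illustration, not a historical fix.

```python
# The chronometer method in miniature: the earth turns 360 degrees in
# 24 hours, so each hour of difference between local solar time (read
# from the sun) and the home-port time kept by the chronometer equals
# 15 degrees of longitude. The readings below are invented examples.

def longitude_from_time(local_solar_hour, chronometer_hour):
    """Degrees of longitude west (positive) of the reference meridian."""
    return (chronometer_hour - local_solar_hour) * 15.0

# The sun says it is local noon (12.0 h); the chronometer, still on
# home-port time, reads 3:20 pm (15 h 20 min):
deg_west = longitude_from_time(12.0, 15.0 + 20 / 60)
print(f"ship's longitude: {deg_west:.1f} degrees west")  # 50.0
```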

For the details, I recommend Dava Sobel’s book titled “Longitude.” The later, well-illustrated version is the one to read. In her book, the author relates the wonderfully improbable story of an English country carpenter who parlayed his initial efforts building large wooden clocks into developing the world’s first chronometer timepiece accurate enough to solve the “longitude problem.” After frustrating decades of dedicated effort pursuing both the technical challenge and the still-to-be-claimed prize money, John Harrison was finally able to collect the 20,000 pound admiralty award.

Why Mention Cuckoo Clocks? Enter Galileo and Huygens

Although the traditional cuckoo clock from the Black Forest of Germany does not quite qualify as a maritime chronometer, its pendulum principle plays an historical role in the overall story of time and time-keeping. With a cuckoo clock or any pendulum clock, the ticking rate is dependent only on the effective length of the pendulum, and not its weight or construction. If a cuckoo clock runs too fast, one must lower the typical wood-carved leaf cluster on the pendulum shaft to increase the pendulum period and slow the clock-rate.
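The adjustment rule just described follows from the pendulum formula T = 2π√(L/g): the period depends only on effective length, so lowering the carved leaf cluster (lengthening L) slows the clock. A short sketch, with illustrative lengths rather than actual cuckoo-clock dimensions:

```python
import math

# The rule above follows from the pendulum formula T = 2*pi*sqrt(L/g):
# the period depends only on effective length, so lowering the carved
# leaf cluster (lengthening L) slows the clock. Lengths are illustrative.

g = 9.81  # gravitational acceleration, m/s^2

def period(length_m):
    """Period of one full swing of an ideal pendulum, in seconds."""
    return 2 * math.pi * math.sqrt(length_m / g)

for L in (0.20, 0.25):
    print(f"L = {L:.2f} m -> T = {period(L):.3f} s")  # longer L, longer T
```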

No less illustrious a name than Galileo Galilei was the first to propose the possibilities of the pendulum clock in the early 1600s. Indeed, Galileo was the first to understand pendulum motion and, with an assistant late in life, produced a sketch of a possible pendulum clock. A few decades later, in 1657, the great Dutch scientist, Christian Huygens, constructed the first working pendulum clock, and in 1673 he published his milestone book of science and mathematics, Horologium Oscillatorium, in which he presented a detailed mathematical treatment of pendulum motion-physics.


In 1669, a very notable scientific paper appeared in the seminal English journal of science, The Philosophical Transactions of the Royal Society. That paper was the first English translation of a treatise originally published by Christian Huygens in 1665. In his paper, Huygens presents “Instructions concerning the use of pendulum-watches for finding the longitude at sea, together with a journal of a method for such watches.” The paper outlines a timekeeping method using the “equation of time” (which quantifies the monthly variations of the earth’s rotational period) and capitalizes on the potential accuracy of his proposed pendulum timekeeper. The year 1669 in which Huygens’ paper on finding the longitude-at-sea appeared in The Philosophical Transactions preceded by thirty-eight years the disastrous navigational tragedy of the British fleet and Sir Cloudesley Shovell in 1707.

As mentioned earlier, John Harrison was the first to design and construct marine chronometers having the accuracy necessary to determine the longitude-at-sea. After many years of utilizing large balanced pendulums in his bulky early designs, Harrison’s ultimate success came decades later in the form of a large “watch” design which utilized the oscillating balance-wheel mechanism, so familiar today, rather than the pendulum principle. Harrison’s chronometer taxed his considerable ingenuity and perseverance to the max. The device had to keep accurate time at sea – under the worst conditions imaginable, ranging from temperature and humidity extremes to the rolling, heaving motion of a ship underway.

The Longitude Act of 1714 specified that, to win the prize, a timekeeper could deviate from true time by no more than two minutes over a six-week sea voyage – the accuracy needed to determine longitude to within one-half degree of true longitude (about 35 miles at the equator). Lost time, revenue, and human lives were the price to be paid for excessive timekeeper inaccuracies.
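The Act’s numbers can be checked with simple arithmetic: two minutes of clock error corresponds to half a degree of the earth’s rotation, and half a degree of longitude at the equator spans about 35 miles.

```python
# Checking the Act's arithmetic: the earth rotates 360 degrees in 24
# hours (0.25 degrees per minute of time), and one degree of longitude
# at the equator spans about 69 miles.

minutes_of_error = 2.0                  # clock error allowed over six weeks
deg_per_time_minute = 360 / (24 * 60)   # 0.25 degrees of rotation per minute
equator_miles_per_deg = 24_901 / 360    # equatorial circumference / 360

longitude_error_deg = minutes_of_error * deg_per_time_minute
error_miles = longitude_error_deg * equator_miles_per_deg
print(f"{minutes_of_error:.0f} minutes of clock error -> "
      f"{longitude_error_deg:.2f} degrees -> about {error_miles:.0f} miles")
```

Spread over the six-week voyage, the two-minute allowance amounts to a drift of under three seconds per day – a remarkable demand on eighteenth-century mechanics.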

Einstein and Special Relativity: Speeding Clocks that Run Slow

Albert Einstein revolutionized physics in 1905 with his special theory of relativity. Contrary to the assumptions of Isaac Newton, relativity dictates that there is no absolute flow of time in the universe – no master clock, as it were. An experiment will demonstrate what this implies: Two identical cesium 133 atomic clocks (the time-standard which defines the “second”) will run in virtual synchronization when sitting side by side in a lab. We would expect that to be true. But if we launch one of the two in an orbital space vehicle which then circles the earth at 18,000 miles per hour, we would observe from our vantage point on earth that the orbiting clock runs slightly slower than its identical twin still residing in our lab. Indeed, upon returning to earth after some period of time spent in orbit, the travelling clock will have registered less elapsed time than its twin which stayed put – even though its run-rate once again matches its stationary twin!

In case you are wondering, this experiment has indeed been performed many times. Unerringly, the results support Einstein’s contention that clocks moving with respect to an observer “at rest” will always run slower (as recorded by that observer) than they would if they were not moving relative to the observer. Because the speed of light is 186,000 miles per second, the time dilation produced by an orbital speed of a mere 18,000 miles per hour is minuscule and can be detected only with an incredibly stable, high-resolution time-source like an atomic clock. If two identical clocks passed each other traveling at one-third the speed of light, the “other” clock would seem to have slowed by about 5.7%; at one-tenth the speed of light, the “other” clock slows by only 0.5%.

This phenomenon of slowing clocks applies to any timekeeper – from atomic clocks to hourglasses. Accordingly, the effect has nothing to do with the construction of the timekeeper; it is a property of time itself, a consequence of the finite, constant speed of light dictated by relativity.
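A quick check of the dilation factor: a clock moving at speed v runs slow by the fraction 1 − √(1 − v²/c²), as judged by an observer “at rest.” The sketch below evaluates it at one-third of light speed, one-tenth, and an 18,000-mph orbit.

```python
import math

# The dilation factor: a clock moving at speed v runs slow by the
# fraction 1 - sqrt(1 - v^2/c^2), as judged by an observer "at rest."

def slowdown_percent(v_over_c):
    """Percentage by which a moving clock appears to run slow."""
    return (1 - math.sqrt(1 - v_over_c ** 2)) * 100

# One-third of light speed, one-tenth, and an 18,000-mph orbit
# (5 miles per second against light's 186,000 miles per second):
for label, v in [("c/3", 1 / 3), ("c/10", 0.1), ("orbit", 5 / 186_000)]:
    print(f"{label}: clock runs {slowdown_percent(v):.3g}% slow")
```

At orbital speed the slowdown is on the order of a hundred-millionth of a percent, which is exactly why only an atomic clock can reveal it.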

For most practical systems that we deal with here on earth, relative velocities between systems are peanuts compared to the speed of light, and the relativistic effects, although always present, are so small as to be insignificant – usually undetectable. There are important exceptions, however, and one of the most important involves the GPS (Global Positioning System); another involves the particle accelerators used by physicists. The GPS system uses earth-orbiting satellites traveling at a tiny fraction of the speed of light relative to the earth’s surface. In a curious instance of déjà vu recalling the problem of finding the longitude-at-sea, even tiny variations in the timing signals sent between the satellites and earth can cause our position information here on earth to be off by many miles. Given such precise timing requirements, the relativistic time dilation of the orbiting clocks – we are talking tiny fractions of a second! – would be enough to cause position errors of many miles. For this reason, relativity is, and must be, taken into account in order for the GPS system to be of any practical use whatsoever!
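The sensitivity to timing is easy to see: a GPS receiver infers distance from signal travel time at the speed of light, so a clock error of dt translates into a position error of roughly c × dt. The sample errors below are illustrative round numbers, not actual GPS clock figures.

```python
# A sketch of why timing precision matters for GPS: a receiver infers
# distance from signal travel time at the speed of light, so a clock
# error of dt translates into a position error of roughly c * dt.
# The sample errors below are illustrative, not actual GPS figures.

c_miles_per_s = 186_000  # speed of light, as quoted above

for dt in (1e-9, 1e-6, 1e-3):  # a nanosecond, a microsecond, a millisecond
    print(f"clock error {dt:.0e} s -> position error ~{c_miles_per_s * dt:.3g} miles")
```

Even a microsecond of clock error shifts the computed position by roughly a fifth of a mile, so dilation effects of tiny fractions of a second accumulating over time would quickly render the system useless.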

Is it not ironic that, as in the longitude-at-sea problem three centuries ago, accurate time plays such a crucial role in today’s satellite-based GPS location systems?

I hope this post has succeeded in conveying to you, the reader, the wonderful mysteries and importance of that elusive notion we call time.

Finally, as we have all experienced throughout our lives, time is short and….



Relativity and the Birth of Quantum Physics: Two Major Problems for Physics in the Year 1900

In the year 1900, two critical questions haunted physicists, and both involved that elusive entity, light. The ultimate answers to these troublesome questions materialized during the dawn of the twentieth century and resulted in the most recent two of the four major upheavals in the history of physics. Albert Einstein was responsible for the third of those four upheavals in the form of his theory of special relativity which he published in 1905. Einstein’s revolutionary theory was his response to one of those two critical questions facing physics in the year 1900. A German scientist named Max Planck addressed the second critical question while igniting the fourth great upheaval in the history of physics. Max Planck began his Nobel Prize-winning investigation into the nature of heat/light radiation in the year 1894. His later discovery of the quantized nature of such radiation gave birth to the new realm of quantum physics which, in turn, led to a new picture of the atom and its behavior. Planck’s work directly addressed the second critical question nagging science in 1900. The aftermath of his findings ultimately changed physics and man’s view of physical reality, forever.

What were the two nagging problems in physics in 1900?

The nature of light and its behavior had long challenged the best minds in physics. For example: Is light composed of “particles,” or does it manifest itself as “waves” travelling through space? By the eighteenth century, two of science’s greatest names had voiced their opinions. Isaac Newton said that light is “particle” in nature. His brilliant Dutch contemporary, Christian Huygens, claimed that light is comprised of “waves.”

[Portraits: Isaac Newton (Kneller, 1702) and Christian Huygens]

By 1865, the great Scottish physicist, James Clerk Maxwell, had deduced that light, indeed, acted as an electromagnetic wave traveling at a speed of roughly 186,000 miles per second! Maxwell’s groundbreaking establishment of an all-encompassing electromagnetic theory represents the second of the four major historical revolutions in physics of which we speak. Ironically, this second great advance in the history of physics with its theoretically established speed of light led directly to the first of the two nagging issues facing physics in 1900. To understand that dilemma, a bit of easily digestible background is in order!

Maxwell began by determining that visible light is merely a small slice of the greater electromagnetic wave frequency spectrum which, today, includes radio waves at the low frequency end and x-rays at the high frequency end. Although the speed of light (thus all electromagnetic waves) had been determined fairly accurately by experiments made by others prior to 1865, Maxwell’s ability to theoretically predict the speed of light through space using the mathematics of his new science of electrodynamics was a tribute to his supreme command of physics and mathematics. The existence of Maxwell’s purely theoretical (at that time) electromagnetic waves was verified in 1887 via laboratory experiment conducted by the German scientist, Heinrich Hertz.
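Maxwell’s prediction of the wave speed can be reproduced in one line: electromagnetic waves propagate at c = 1/√(μ₀ε₀), where μ₀ and ε₀ are the magnetic and electric constants of the vacuum. The sketch below uses today’s SI values for those constants rather than Maxwell’s 1865 figures.

```python
import math

# Maxwell's theoretical prediction in one line: electromagnetic waves
# propagate at c = 1 / sqrt(mu0 * eps0), where mu0 and eps0 are the
# magnetic and electric constants of the vacuum (modern SI values).

mu0 = 4 * math.pi * 1e-7   # vacuum permeability, in henries per meter
eps0 = 8.8541878128e-12    # vacuum permittivity, in farads per meter

c = 1 / math.sqrt(mu0 * eps0)
print(f"c = {c:,.0f} m/s (about {c / 1609.344:,.0f} miles per second)")
```

That two measured laboratory constants of electricity and magnetism combine to yield the observed speed of light was the great clue that light itself is an electromagnetic wave.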

The first of the two quandaries on physicists’ minds in 1900 had been brewing during the latter part of the nineteenth century as physicists struggled to define the “medium” through which Maxwell’s electromagnetic waves of light propagated across seemingly empty space. Visualize a small pebble dropped into a still pond: Its entry into the water causes waves, or ripples, to propagate circularly from the point of disturbance. These “waves” of water represent mechanical energy being propagated across the water. Light is also a wave, but it propagates through space and carries electromagnetic energy.

Here is the key question which arose from Maxwell’s work and so roiled physics: What is the nature of the medium in presumably “empty space” which supports electromagnetic wave propagation…and can we detect it? Water is the obvious medium for transmitting the mechanical energy waves created by a pebble dropped into it. Air is the medium which is necessary to propagate mechanical sound-pressure waves to our ears – no air, no sound! Yet light waves travel readily through “empty space” and vacuums!

Lacking any evidence concerning the nature of a medium suitable for electromagnetic wave propagation, physicists nevertheless came up with a name for it – the “ether” – and pressed on to learn more about its presumed reality. Clever but futile attempts were made to detect the “ether sea” through which light appears to propagate. The famous Michelson-Morley experiments of 1881 and 1887 conclusively failed to detect ether’s existence. Science was forced to conclude that there is no detectable/describable medium! Rather, the cross-coupled waves of Maxwell’s electric and magnetic fields which comprise light (and all electromagnetic waves) “condition” the empty space of a perfect vacuum in such a manner as to allow the waves to propagate through that space. In expressing the seeming lack of an identifiable transmission medium and what to do about it, the best advice to physicists seemed: “It is what it is… deal with it!”

“Dealing with it” was easier said than done, because one huge problem remained. Maxwell’s four famous equations, which form the framework for all electromagnetic phenomena, yield one and only ONE value for the speed of light – everywhere, for all observers in the universe. One single value for the speed of light would have worked for describing its propagation speed relative to an “ether sea,” but there is no detectable ether sea!

The Great “Ether Conundrum” – Addressed by Einstein’s Relativity

In the absence of an ether sea against which to measure the speed of light as derived by Maxwell, here is the problem which results, illustrated by two distant observers, A and B, rapidly traveling toward each other at half the speed of light: How can a single, consistent value for the speed of light apply both to observer A, who measures the light as it leaves his flashlight (pointed directly at observer B), and to observer B, who measures the incoming speed of the very same light beam as he receives it? Maxwell’s equations imply that each observer must measure the same beam of light at 186,000 miles per second with respect to himself and his surroundings – no matter what the relative speed between the two observers. This made no sense and represented a very big problem for physicists!

The Solution and Third Great Revolution in Physics:
 Einstein’s Relativity Theories

As already mentioned, the solution to this “ether dilemma” involving the speed of light was provided by Albert Einstein in his 1905 theory of special relativity – the third great revolution in physics. Special relativity completely revamped the widely accepted but untenable notions of absolute space and absolute time – holdovers from Newtonian physics – and time and space are the underpinnings of any notion/definition of “speed.” Einstein showed that a strange universe of slowing clocks and shrinking yardsticks is required to accommodate the constant speed of light for all observers regardless of their relative motion to each other. Einstein declared the constant speed of light for all observers to be a new, inviolable law of physics. Furthermore, he proved that nothing can travel faster than the speed of light.
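The arithmetic behind this resolution can be made concrete with Einstein’s velocity-addition formula, w = (u + v) / (1 + uv/c²) – a standard result of special relativity, though the article does not spell it out. A quick sketch shows that a light beam combined with any observer’s speed still comes out at exactly c:

```python
C = 186000.0  # speed of light in miles per second (the article's rounded value)

def relativistic_add(u, v, c=C):
    """Einstein's velocity-addition formula for two speeds along one line."""
    return (u + v) / (1 + u * v / c**2)

# Everyday intuition says light from A's flashlight should close on B
# at 1.5c (c plus B's 0.5c approach speed). Relativity says otherwise:
print(relativistic_add(C, 0.5 * C))        # still exactly 186000.0 -- c plus anything is c

# Two material objects, each at 0.5c, close on each other at only 0.8c:
print(relativistic_add(0.5 * C, 0.5 * C))  # 148800.0, never exceeding c
```

The formula reduces to ordinary addition when u and v are tiny compared to c, which is why the strangeness never shows up at everyday speeds.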

The constant speed of light for all observers coupled with Einstein’s insistence that there is no way to measure one’s position or speed/velocity through empty space are the two notions which anchor special relativity and all its startling ramifications.

 The Year is 1900: Enter Max Planck and Quantum Physics –
The Fourth Great Revolution in Physics

The second nagging question facing the physics community in 1900 involved the spectral nature of radiation emanating from a so-called black-body radiator as it is heated to higher and higher temperatures. Objects made increasingly hotter emit light whose color changes from predominantly red to orange to white to a bluish color as the temperature rises. A big problem in 1900 was this: There was little experimental evidence of the large levels of ultraviolet radiation supposedly produced at high temperatures – a situation completely contrary to the theoretical predictions of physics based on scientific knowledge in the year 1900. Physics at that time predicted a so-called “ultraviolet catastrophe” at high temperatures generating huge levels of ultraviolet radiation – enough to damage the eyes with any significant exposure. The fact that there was no evidence of such levels of ultraviolet radiation was, in itself, a catastrophe for physics because it called into serious question our knowledge and assumptions of the atomic/molecular realm.

The German physicist, Max Planck, began tackling the so-called “ultraviolet catastrophe” disconnect as early as 1894. Using the experimental data available to him, Planck attempted to discern a new theory of spectral radiation for heated bodies which would match the observed results. Planck worked diligently on the problem but could not find a solution by working along conventional lines.

Finally, he explored an extremely radical approach – a technique which reflected his desperation. The resulting new theory matched the empirical results perfectly!

When Planck had completed formulation of his new theory in 1900, he called his son into his study and stated that he had just made a discovery which would change science forever – a rather startling proclamation for a conservative, methodical scientist. Planck’s new theory ultimately proved as revolutionary to physics as was Einstein’s theory of relativity which would come a mere five years later.

Max Planck declared that the radiation energy emanating from heated bodies is not continuous in nature; that is, the energy radiates in “bundles” which he referred to as “quanta.” Furthermore, Planck formulated the precise numerical values of these bundles through his famous equation which states:

 E = h times Frequency

where “h” is his newly-declared “Planck’s constant” and “Frequency” is the spectral frequency of the radiation being considered. Here is a helpful analogy: The radiation energy from heated bodies was always considered to be continuous – like water flowing through a garden hose. Planck’s new assertion maintained that radiation comes in bundles whose “size” is proportional to the frequency of radiation being considered. Visualize water emanating from a garden hose in distinct bursts rather than a continuous flow! Planck’s new theory of the energy “quanta” was the only way he saw fit to resolve the existing dilemma between theory and experiment.
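Planck’s relation is easy to evaluate for a concrete case. A small sketch using modern SI values (the constants and the green-light example are standard physics, not figures from the article), computing the energy of a single quantum of visible light:

```python
h = 6.62607015e-34      # Planck's constant in joule-seconds (modern SI value)

def quantum_energy(frequency_hz):
    """Planck's relation: E = h times frequency."""
    return h * frequency_hz

# Green light: wavelength ~540 nanometers, so frequency = c / wavelength
c = 2.998e8             # speed of light, meters per second
f_green = c / 540e-9    # about 5.6e14 Hz

print(quantum_energy(f_green))  # about 3.7e-19 joules per quantum
```

The tiny size of each bundle is why the graininess of radiation went unnoticed for so long: in the garden-hose analogy, the individual “bursts” are far too small to see.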

The following chart reveals the empirical spectral nature of black-body radiation at different temperatures. Included is a curve which illustrates the “ultraviolet catastrophe” at 5000 degrees Kelvin predicted by (1900) classical physics. The catastrophe is represented by off-the-chart values of radiation in the “UV” range of short wavelength (high frequency).

[Chart: black-body radiation spectra at three temperatures, including the classical “ultraviolet catastrophe” prediction at 5000 degrees Kelvin]

This chart plots radiated energy (vertical axis) versus radiation wavelength (horizontal axis) plotted for each of three temperatures in degrees K (degrees Kelvin). The wavelength of radiation is inversely proportional to the frequency of radiation. Higher frequency ultraviolet radiation (beyond the purple side of the visible spectrum) is thus portrayed at the left side of the graph (shorter wavelengths).

Note the part of the radiation spectrum which consists of frequencies in the visible light range. The purple curve for 5000 degrees Kelvin has its peak radiation value in the middle of the visible spectrum and falls to zero at higher frequencies (shorter wavelengths). This experimental purple curve is consistent with Planck’s new theory and is drastically different from the black curve on the plot, which shows the radiation predicted at 5000 degrees Kelvin by the scientific theories in place prior to 1900 and Planck’s revolutionary findings. Clearly, the high frequency (short wavelength) portion of that curve heads toward infinite radiation energy in the ultraviolet range – a physical impossibility. Planck’s simple but revolutionary new radiation law, expressed by E = h times Frequency, served to perfectly match theory with experiment.
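The divergence between the two curves can be reproduced directly from the formulas. Here is a sketch comparing the classical Rayleigh-Jeans law, which blows up at short wavelengths (the “catastrophe”), against Planck’s law at 5000 degrees Kelvin. Both formulas are standard physics; their names and the constants are my additions, not taken from the article:

```python
import math

h = 6.626e-34   # Planck's constant (joule-seconds)
c = 2.998e8     # speed of light (meters per second)
k = 1.381e-23   # Boltzmann's constant (joules per kelvin)

def rayleigh_jeans(wavelength, T):
    """Classical (pre-1900) spectral radiance: grows without bound as wavelength -> 0."""
    return 2 * c * k * T / wavelength**4

def planck(wavelength, T):
    """Planck's law: falls back toward zero at short (ultraviolet) wavelengths."""
    a = h * c / (wavelength * k * T)
    return (2 * h * c**2 / wavelength**5) / math.expm1(a)

T = 5000.0  # kelvin
for wl_nm in (100, 580, 10000):   # ultraviolet, visible peak, far infrared
    wl = wl_nm * 1e-9
    print(wl_nm, rayleigh_jeans(wl, T), planck(wl, T))
# At 100 nm (ultraviolet) the classical value dwarfs Planck's by many
# orders of magnitude; only in the far infrared do the two laws converge.
```

This is exactly the picture in the chart: both theories agree at long wavelengths, then part company catastrophically on the ultraviolet side.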

Why Max Planck Won the 1918 Nobel Prize
in Physics for His Discovery of the Energy Quanta

One might be tempted to ask why the work of Max Planck is rated so highly relative to Einstein’s theories of relativity, which restructured no less than all of our assumptions regarding space and time! Here is the reason in a nutshell: Planck’s discovery led quickly to the subsequent work of Niels Bohr, Rutherford, de Broglie, Schrödinger, Pauli, Heisenberg, Dirac, and others who followed the clues inherent in Planck’s most unusual discovery and built the superstructure of atomic physics as we know it today. Our knowledge of the atom and its constituent particles stems directly from that subsequent work, which was born of Planck and his discovery. The puzzling absence of the “ultraviolet catastrophe” predicted by pre-1900 physics was ultimately explained by the disclosure that the atom itself radiates in discrete amounts, thus preventing the high ultraviolet content of heated-body radiation predicted by the old, classical theories of physics.

Albert Einstein in 1905: The Photoelectric Effect –
Light and its Particle Nature

Published in the same 1905 volume of the German scientific journal, Annalen der Physik, as Einstein’s revolutionary theory of special relativity was his paper on the photoelectric effect. In that paper, Einstein described light’s seeming particle behavior. Electrons were knocked free of their atoms in metal targets by bombarding the targets with light in the form of energy bundles called “photons.” Einstein determined these photons to represent light energy at its most basic level – as discrete bundles of light energy. The revolutionary finding was that the intensity of light (the number of photons) impinging on the metal target was not the determining factor in its ability to knock electrons free of the target: The frequency of the light source was the governing factor. Increasing the intensity of light had no effect on the liberation of electrons from their metal atoms, while raising the frequency of the light source had a direct and obvious effect. Einstein showed that these photons, these bundles of light energy which acted like bullets for displacing electrons from their metal targets, have discrete energies whose values depend only on the frequency of the light itself. The higher the frequency of the light, the greater the energy of its photons. As with Planck’s characterization of radiation from heated bodies, photon energies involve Planck’s constant and frequency. Einstein’s findings went beyond Planck’s energy-quanta conceptualizations by establishing the physical reality of light photons; Planck had interpreted his energy quanta as atomic reactions to stimulation rather than as discrete physical realities. It was this paper on the photoelectric effect – and not his work on relativity – which earned Einstein the 1921 Nobel Prize in physics!
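Einstein’s account of the effect is often written as simple energy bookkeeping: an ejected electron’s kinetic energy is the photon energy h×f minus the “work function” needed to pull it free of the metal. The equation and the sample work function below are standard physics assumptions, not figures from the article, but they show plainly why frequency, not intensity, governs:

```python
h_eV = 4.1357e-15   # Planck's constant in electron-volt seconds

def ejected_energy(frequency_hz, work_function_eV):
    """Einstein's photoelectric equation: KE = h*f - work function.
    Below the threshold frequency no electron escapes, no matter
    how many photons (how intense the light) arrive."""
    ke = h_eV * frequency_hz - work_function_eV
    return max(ke, 0.0)

phi = 4.3  # illustrative work function in eV (typical of a metal like zinc)
red = 4.3e14       # red light, hertz
uv  = 1.5e15       # ultraviolet light, hertz

print(ejected_energy(red, phi))  # 0.0 -- below threshold; intensity is irrelevant
print(ejected_energy(uv, phi))   # positive -- each UV photon can free an electron
```

A floodlight of red photons frees nothing, while a faint ultraviolet beam does – the behavior that made light’s particle nature unavoidable.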

Déjà Vu All Over Again: Is Light a Particle or a Wave?

Along with Planck, Einstein is considered to be “the father of quantum physics.” The subsequent development by others of quantum mechanics (the methods of dealing with quantum physics) left Einstein sharply skeptical. For one, quantum physics and its principle of particle/wave duality dictates that light behaves both as particle and wave – depending on the experiment conducted. That, in itself, would trouble a physicist like Einstein for whom deterministic (cause and effect) physics was paramount, but there were other, startling ramifications of quantum mechanics which repulsed Einstein. The notion that events in the sub-atomic world could be statistical in nature rather than cause-and-effect left Einstein cold. “God does not play dice with the universe,” was Einstein’s opinion. Others, like the father of atomic theory, Niels Bohr, believed the evidence undeniable that nature is governed at some level by chance.

In one of the great ironies of physics, Einstein, one of the two fathers of quantum physics, felt compelled to abandon his brain-child because of philosophical/scientific conflicts within his own psyche. He never completely came to terms with the new science of quantum physics – a situation which left him somewhat outside the greater mainstream of physics in his later years.

Like Einstein’s relativity theories, quantum physics has stood the test of time. Quantum mechanics works, and no experiments have ever been conducted to prove the method wrong. Despite the truly mysterious realm of the energy quanta and quantum physics, the science works beautifully. Perhaps Einstein was right: Quantum mechanics, as currently formulated, may work just fine, but it is not the final, complete picture of the sub-atomic world. No one could appreciate that possibility in the pursuit of physics more than Einstein. After all, it was his general theory of relativity in 1916 which replaced Isaac Newton’s long-held and supremely useful force-at-a-distance theory of gravity with the more complete and definitive concept of four-dimensional, curved space-time.

By the way, and in conclusion, it is Newton’s mathematics-based science of dynamics (the science of force and motion) that defines the very first major upheaval in the history of physics – as recorded in his masterwork book from 1687, the Principia – the greatest scientific book ever written. Stay tuned.

A Greater Light for Mariners! Fresnel and His Life-Saving Lighthouse Lens

A recent drive north of San Francisco to Point Reyes National Seashore with its famous Point Reyes lighthouse was enough to stir many emotions. California’s rocky and picturesque northern coastline is reason enough to make the trip, but the lure of its famous lighthouse proves irresistible.

[Photo: Point Reyes Lighthouse]

The Point Reyes lighthouse is perched on a high, notoriously treacherous point of land that extends well into the Pacific Ocean from the main coastline. Many a ship found its final resting place on these rocky shores, going back to the time when sailing vessels and their intrepid sailors first plied the waters here. The first on the scene was likely Sir Francis Drake, who is believed to have safely landed immediately south of here in 1579 at what today is known as “Drake’s Bay.”

The Point Reyes lighthouse first lit its first-order Fresnel (pronounced fray-nel) light source on December 1, 1870. The oil lamp used was nestled at the focal point of the 6,000 pound rotating Fresnel lens assembly, and its focused light could be seen all the way to the horizon on clear nights – roughly twenty-four miles out in the ocean. The weight-driven, precision clockwork mechanism which rotates the huge lens assembly once every two minutes sweeps a beam of light past a given point every five seconds – a beam that can be seen three or four times farther out to sea than previous lights, thanks to the revolutionary lens design of the French engineer/scientist Augustin-Jean Fresnel. Prior to Fresnel’s published treatise on light diffraction in 1818 and the subsequent appearance of his revolutionary lens design in 1823, lighthouses relied on conventional, inefficient and heavy glass lenses and mirrors to focus light. Fresnel lenses were soon universally adopted for lighthouses based on their superior performance. The 6,000 pound first-order Fresnel lens assembly and clockwork drive installed at Point Reyes in 1870 was purchased by the U.S. Government at the great Paris Exposition in 1867.
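Two of the numbers in that account can be sanity-checked with simple arithmetic. A rotation every two minutes sweeping a beam past every five seconds implies 120 / 5 = 24 beams per turn, consistent with a multi-panel first-order lens, and the twenty-four-mile range is roughly the distance to the horizon from a lamp a few hundred feet above the sea. A sketch (the 24-panel count and the ~300-foot lamp height are my assumptions for illustration, not figures stated in the article):

```python
import math

# Flash timing: one rotation per two minutes, one beam per lens panel
rotation_s = 120.0
beams_per_rotation = 24          # assumed panel count of the rotating lens
print(rotation_s / beams_per_rotation)   # 5.0 seconds between flashes

# Geometric distance to the horizon: d = sqrt(2 * R * h)
EARTH_RADIUS_MI = 3959.0
lamp_height_ft = 300.0           # assumed lamp height above the sea
d_miles = math.sqrt(2 * EARTH_RADIUS_MI * lamp_height_ft / 5280.0)
print(round(d_miles, 1))  # roughly 21 miles; atmospheric refraction stretches
                          # this somewhat, in line with the article's ~24 miles
```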

Looking up into the Fresnel lens assembly and pedestal

Fresnel lenses are ranked in order of their size (focal distance from internal light source to lens), and range from first-order at approximately 36 inches to just under 6 inches for a sixth-order lens. Point Reyes is renowned as the windiest location on the Pacific Coast and the second foggiest in all of North America. Given those credentials and the treacherous rocky point on which it sits, the Point Reyes lighthouse certainly merited the biggest Fresnel lens obtainable!

Augustin-Jean Fresnel


Eadweard Muybridge photo – 1888
The domed Fresnel lens is clearly visible inside.

Linda and I were at Point Reyes celebrating our 49th wedding anniversary. Upon arriving at the lighthouse after 22 miles of driving from “town,” we were greeted with the warning that the path down to the lighthouse consists of 308 steps (equivalent to a 13-story building) and that the “faint-of-heart” should not attempt the trip. We looked at each other, smiled, shrugged, and off we went. Though narrow, the cement steps are solid and shallow, so the trip back up was not bad!

Folks with fear-of-heights issues are NOT going to enjoy the stairs, however, as the light itself is perched high above the ocean on a treacherous ridge. In the old days, before there were stairs, the light-keeper occasionally had to get down on hands and knees on the rocky trail to complete the trip in howling winds and dense fog. Winds have been clocked higher than 130 mph at Point Reyes! After seeing the site, first-hand, it is easy to imagine just how difficult the light-keeper’s job was in the old days – keeping the light lit and the weight-driven clockwork running 24/7. The gravity-powered mechanism required “rewinding” every 2 ½ hours!

On the way back up!

Heading for the Shoals!

The terror of being “off course” in wild seas along a rugged coastline must have been overwhelming to seafarers. Lighthouses played a significant role in reducing the incidence of shipwreck for more than a century, but today’s GPS satellite navigational aids have all but rendered them superfluous. Among lighthouses that continue to operate today, the light source is a high-tech electric bulb within the lens, not an oil lamp. Many of yesterday’s Fresnel lens assemblies are relegated to static displays in a museum building adjoining the lighthouse in which they served. Point Reyes’ light remains in operating condition, still in its original position. The last of its resident “keepers” left Point Reyes in 1985. The lighthouse is now under the jurisdiction of the National Park Service.

As for Augustin-Jean Fresnel, the French hero of this scientific/seafaring drama: He died young in 1827 at age 39. Although honored in his day with membership in the prestigious Royal (scientific) Society of London and by its award of the Rumford Medal in 1824, his name is little known today outside of science. Anyone who visits lighthouses is bound to learn of him and his famous lenses, however, and of the importance of his work to both scientific optics and seafaring. His name is engraved on the Eiffel Tower in Paris together with a long list of other illustrious Frenchmen.

Two Fine Resources:

[Book cover: A Short, Bright Flash]


I recommend the recently published book by Theresa Levitt, A Short, Bright Flash, on the history of the modern lighthouse and Augustin-Jean Fresnel, whose pioneering work on scientific optics and subsequent lens design influenced both science and seafaring.

The other book specifically on the Point Reyes Lighthouse is a beautifully rendered historical and photographic treatment of the subject by Richard Blair and Kathleen Goodwin. I was delighted to find this fine book when we were in the town bookstore. I purchased two copies at a very reasonable price!


An example of the beautiful photography in my copy of The Point Reyes Lighthouse by Richard Blair and Kathleen Goodwin: The photo shows the interior of the Point Reyes first-order Fresnel lens with the modern electric light source(s) clearly visible. This book is published by Color & Light Editions which specializes in Point Reyes literature and art.

Patent Problems and Intellectual Property: “The Indigestion of Success”

Aside from love, one of the great emotions humans can experience is the thrill of discovery and achievement – being the first to reveal more of nature’s immutable laws governing the cosmos or doing something no one else has been able to do. Some aspects of life inevitably go together – a coupling of cause-and-effect, if you will. Sometimes, we simply cannot have one thing without another. The claim that “there is a price to be paid for everything” seems a truism which ably illustrates that contention of coupled cause-and-effect. In that vein, man’s finest intellectual achievements or physical accomplishments materialize only after significant vested effort is expended. Our personal life experiences leave no doubt that hard work is a necessary, though not sufficient, prerequisite for great success…in any venue. We understand that. Not so obvious is the other price often associated with intellectual achievement and intellectual property, a price which is extracted after the fact – the tedious, ongoing, and costly effort required to establish and maintain the legal rights to the intellectual property behind any significant achievement.

I call this second price to be paid for success “the indigestion of success” which is often so severe as to result literally in ulcers if not merely pervasive, never-ending discontent.

The “indigestion of success” begins with proving one’s priority of invention while establishing patent rights, and it continues seemingly forever while vigilantly protecting those rights against usurpers. The motivation to defend one’s intellectual property is typically financial, but, understandably, the battle becomes distinctly a matter of personal principle as we will see…and the consequences can be tragic. It is difficult to overstate the high price – both financially and emotionally – of defending intellectual property and priority, yet this surcharge on success is inevitably demanded of inventors, engineers, scientists, and entrepreneurs. The list of such examples is varied and fascinating, stretching far back in recorded history. Galileo Galilei, Isaac Newton, and Michael Faraday, three of the greatest physicists of all time, were each affected by priority controversies during their careers – especially Newton, as we shall see. In the realms of engineering and business, Thomas Edison, Howard Armstrong (radio’s greatest inventor/engineer), Robert Noyce (of integrated circuit fame), and Steve Jobs of Apple Computer were all enveloped by priority controversies and patent battles. Even the Wright brothers, the well-documented founders of modern aviation, paid a stiff price defending their marvelous invention, the controllable “flying machine.”

The Wright Brothers: Hard Work, Triumph, then Disillusionment

I just finished reading David McCullough’s new book, The Wright Brothers, which relates the incredible story of the two brothers from Dayton, Ohio – bicycle mechanics/salesmen who created the first true “flying machine”…in their spare time! McCullough is a consummate teller of true stories, but the story of these two men tests the line separating fact from fiction because their stunning success seemed so improbable. The truth is, the Wright Brothers “invented” and successfully flew the first full-sized, self-powered, controllable airplane – a staggering accomplishment for two young men with no formal technical credentials. Their ultimate success was rooted in a fascination with the prospect of manned flight coupled with a single-minded, driven determination to do whatever it took to accomplish their dream of flying. The two brothers constitute the very best examples of self-made men… engineers and flyers, in their case. Their accomplishments are so thoroughly documented as to seem unassailable and safe from thieves who would steal in the courts of patent law, yet it was not quite that simple. It never is. Author McCullough paints a clear picture on his pages of just how technically challenging their task actually was. What also emerges is the sad turn of events their triumph became once the airplane was designed, tested, documented, and patented. Wilbur Wright, the brilliant engineering mind for whom no technical challenge seemed too large, died in 1912 at the young age of forty-five. The official cause of death was typhoid fever, but it seems Wilbur’s spirit had been dying for quite some time before his body expired. In May of 1910, the brothers, who did all their own flying from the project’s onset in 1900, went up together in their Wright Flyer for the very first time – some seven years after Orville’s first flight at Kitty Hawk.
Their disciplined methodology throughout the project dictated that, should there be an accident, at least one of them should survive to carry on the work. Their flight together that day seemed their tacit acknowledgement that they had completed their life’s dream; all that remained was to form and grow a profitable company which would carry on their work and ensure a comfortable livelihood for the brothers and their immediate relatives.

By 1912, two years had passed since Wilbur Wright had last done what he truly loved to do: Piloting the Wright Flyer while perfecting its design. His weeks and months the past two years were spent on business trips to New York and Washington and in courtrooms defending the patent portfolio he and Orville had assembled as the backbone of their new Wright Company… for the manufacture of airplanes. In author McCullough’s account, Orville took note of Wilbur’s restless discontent with the tedium and exasperations of establishing their company, noting that after a day spent in offices dealing with business and patent matters, Wilbur would “come home white.”

Wilbur, himself, wrote of the patent entanglements: “When we think of what we might have accomplished if we had been able to devote this time to experiments, we feel very sad, but it is always easier to deal with things than with men, and no one can direct his life entirely as he would choose.”

Within several years of Wilbur’s death, Orville Wright had sold the Wright Company to others, preferring a peaceful, retiring life to one spent constantly battling corporate demons and those who would usurp the brothers’ past and future accomplishments. His mission for the remainder of his long life: To represent his brother while defending the less materialistic aspects of the Wright brothers’ legacy. I believe I would have done precisely the same, were I in his shoes. Other notable, historical figures in similar circumstances made sadly different decisions when faced with the indigestion of success and the never-ending need to protect intellectual property. The two examples that follow vividly illustrate just how bad these matters of priority and intellectual property can become, especially for the most-principled of participants.

Edwin Howard Armstrong: Radio’s Greatest Inventor/Engineer and Tragic Victim of His Own Success and the Patent System

For radio and electrical engineers who know the history, Edwin Howard Armstrong is the tragic hero of early “wireless” and a victim of the radio empire which he helped to create. Howard Armstrong was the quintessential radio engineer’s engineer – bright, motivated, creative…and stubbornly persistent. He exuded personal integrity. The very qualities which made him the greatest inventor/engineer in the history of radio, led to his downfall and suicide in 1954. Howard Armstrong surfaced in 1912 as a senior electrical engineering major at Columbia University with an obsessive interest in the infant science of “wireless” radio. He was a fine student with a probing, independent mind that suffered no fools. In 1912, while living at home in nearby Yonkers, New York, and commuting daily to Columbia on an Indian-brand motorcycle, he invented a way to greatly increase signal amplification using a single De Forest Audion vacuum tube by feeding part of the tube’s marginally amplified output back to the input of the device where it was amplified over and over again. This technique is now known in the trade as “regeneration,” or positive feedback. Along the way, young Armstrong had made great strides in understanding the technology behind Lee De Forest’s recent invention of the Audion tube, insights far beyond those De Forest himself had offered. While tinkering with the idea of signal regeneration in his bedroom laboratory early on the morning of September 22, 1912, he achieved much greater signal amplification from the Audion than was possible without using regeneration. 
The entire household was abruptly awakened by young Armstrong’s unrestrained excitement over his discovery, and an important discovery it was for the infant science of “wireless radio.” Regeneration was patented by Armstrong in 1913/14 and was used, under license from him, in countless radios during the early years when radio sets with more than one tube were very expensive to produce, due to the high cost of tubes.
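Regeneration can be described with the standard feedback formula: an amplifier of gain A with a fraction β of its output fed back in phase has a closed-loop gain of A / (1 − Aβ). As Aβ approaches 1, the gain soars, and past 1 the circuit breaks into oscillation – which is why regeneration also became a gateway to efficient tube transmitters. A sketch with illustrative numbers (the formula is textbook feedback theory; the article itself does not state it, and the gain figures are hypothetical):

```python
def regenerative_gain(A, beta):
    """Closed-loop gain of an amplifier with positive feedback.
    A: the tube's modest open-loop gain; beta: fraction of output fed back.
    Valid while A*beta < 1; at A*beta = 1 the circuit oscillates instead."""
    loop = A * beta
    if loop >= 1:
        raise ValueError("A*beta >= 1: the circuit oscillates rather than amplifies")
    return A / (1 - loop)

A = 5.0  # a single Audion's marginal gain (illustrative number)
for beta in (0.0, 0.10, 0.18, 0.199):
    print(beta, round(regenerative_gain(A, beta), 1))
# As beta nears 1/A = 0.2, a modest gain of 5 is multiplied enormously --
# the same signal "amplified over and over again," as described above.
```

A single expensive tube could thus do the work of several cascaded stages, which is exactly why early one-tube regenerative sets mattered commercially.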

Armstrong Patent_2Armstrong’s 1914 patent on the regenerative receiving circuit – one of the foundations of early wireless radio and a gateway to efficient tube-based radio transmitters, as well. Armstrong_Regen_1 Armstrong’s historic, handwritten chronological account of inventing the regenerative circuit – page one of six; likely written around 1920 to serve as evidence in the litigation with De Forest over Armstrong’s regeneration patent. Note the Sept. 22, 1912 date of his triumph (near the bottom).

In 1914, Lee De Forest stepped forward to challenge Armstrong in court over Armstrong’s patent, claiming that he, De Forest, was the legitimate inventor of regeneration. The litigation in the court system over regeneration went back and forth, lasting twenty years and finally ending up in the United States Supreme Court. Shockingly, De Forest was handed the final decision by the court, but the substantial body of radio engineers across the nation in 1934, who were well aware of the “radio art” and its history, were not buying De Forest’s claim. They fully supported Armstrong as the legitimate inventor – the same view held today. The twenty-year patent litigation battle over regeneration was the longest in U.S. patent court history. Unfortunately, that was only the beginning of Armstrong’s troubles with the patent courts and those who would take advantage of his work.

The Tragedy of Edwin Howard Armstrong

Howard Armstrong was one of the last great lone inventor/engineers. He was long affiliated with his alma mater, Columbia University, and had extensive business/patent dealings with giant corporations, such as RCA and Philco, which drew their life-blood from his inventions and the industry which he helped to create. By licensing his many important patents to these corporations, Armstrong became a very wealthy man. At one time, he was the largest stockholder in the giant RCA Corporation. Despite such wide-spread affiliations, he was, by temperament, an independent thinker in the lone-inventor mold. As radio entered the late nineteen-thirties, men of action like Armstrong were becoming obsolete, increasingly overrun by corporate bureaucracies and their in-house armies of engineers. Radio was now out of the hands of the lone inventor, becoming the exclusive domain of the moneyed corporations with influence at the FCC (Federal Communications Commission) in Washington. Armstrong increasingly found himself defending his legitimate patent rights against large corporations which were treading on those rights, battling their great financial resources and their legions of corporate lawyers. As he continued to lose rightful patent royalties to corporate violations of his patents, he stubbornly fought back, fueled by his personal principles of fair play, all the while dissipating his once-great financial security to fund the necessary lawyers’ fees. Armstrong was a man of principled integrity; he could have capitulated, retreated, retired comfortably, and lived out his life, but he chose to fight.

Ultimately, those ceaseless legal battles wore him down, bankrupted him, and destroyed his long marriage. On the night of January 31, 1954, he stepped from his New York apartment window to his death thirteen stories below. In an ironic sense, he fell victim to the industry and the changing times he had helped to create. He was also victimized by the very qualities which made him great: intellectual independence, principled integrity, and the stubborn will to persevere. There are many lessons to be learned from Howard Armstrong’s life story. The lone crusader was crushed by the corporate “Goliaths” he helped create. Final postscript: After Armstrong’s death, his estranged wife, Marion, took up her husband’s ongoing patent battles with the Goliaths of the radio industry. She eventually prevailed in every single case!

Inventor of the Calculus: Isaac Newton or the German Mathematician, Gottfried Leibniz?

History’s most ardent defender of his intellectual property also happened to be the greatest scientist/mathematician of all time, Sir Isaac Newton. As vindictive as he was brilliant, Newton waged one of history’s most vicious priority battles with Gottfried Leibniz over credit for the development of the calculus, that ubiquitous, indispensable mathematical tool of the engineer and scientist. Newton formulated its fundamentals in 1665/66, the famous “miracle year” spent at his mother’s homestead in isolation from the great plague which swept through England at the time. Newton’s peerless scientific self-discipline tended to desert him completely when others challenged him on matters of intellectual priority which he felt belonged to him; Leibniz and Robert Hooke were two men who famously felt the full force of Newton’s rage in such matters. For Newton, there was no real money at stake – only prestige and ego, and Newton’s ego was well developed… and sensitive. Today, both Newton and Leibniz are credited with independently developing the calculus – essentially true, although it appears certain that Leibniz had unauthorized access to some of Newton’s early personal papers on the subject. In that sense, Newton is regarded as the “primary” developer of the calculus. Leibniz never quite recovered from the savage and telling effects of Newton’s vindictiveness, which was well publicized in scientific circles and which reduced the great Newton to unprincipled deceits in his efforts to discredit his rival. In Newton’s mind, far more was at stake than mere money: personal satisfaction and the ego-satisfying prospect of scientific immortality. In his defense, one could argue that, for Newton, the long-term stakes riding on his efforts to receive due credit for his brilliance were much higher than most.
Nevertheless, when all was said and done, the dispute with Leibniz left Newton’s personal reputation significantly damaged, even as his scientific reputation remained unsullied.

What Would You Do?

If you were ever in the position of enjoying a significant personal success that had already conferred substantial wealth upon you, yet still greater wealth beckoned to you or to whoever took the enterprise further – what would you do? Like Orville, I would have heeded Bishop Milton Wright’s early admonition to his children (paraphrased here): greed is bad and leads to grief; be content with sufficient money to sustain a comfortable life and require nothing more beyond the normal pleasures of life and living. The Bishop also warned against temper and ego. He was a very wise man; the brothers received some very informed guidance.

 “If I were giving a young man advice as to how he might succeed in life, I would say to him, pick out a good father and mother, and begin life in Ohio.”

 – Wilbur Wright

Click here to get to last week’s post, The Brothers Wright had “The Right Stuff”