Tagged: Nautilus

  • richardmitnick 11:20 am on March 8, 2020
    Tags: "An Open Letter to Telescope Protesters in Hawaii", Mauna Kea Observatory, Nautilus

    From Nautilus: “An Open Letter to Telescope Protesters in Hawaii” 


    March 5, 2020
    Dana Mackenzie

    Nautilus

    Why astronomy on Mauna Kea is not a desecration but a duty.

    On July 15, 2019, after a court decision had cleared the way for astronomers to build a new mega-telescope, called the Thirty Meter Telescope, on Hawaii’s Mauna Kea, a large group of protesters said, “No.” Pitching their camp directly on the access road to the top of Mauna Kea, the protesters, who called themselves kia’i mauna (protectors of the mountain), pledged to stop any construction vehicles from passing. The kia’i argue that the mountain is sacred to the native Hawaiian people, and that the construction of the TMT would desecrate it.

    When I heard about the protest I was torn apart, because I felt forced to choose between my two favorite ohanas (families). Though I am not an astronomer, I have been a science writer for 23 years and a mathematician before that, so I am part of the larger science ohana. Likewise, I have been a hula dancer for 15 years. Hula is simply a way of telling a story, and men have been part of that folk tradition from the beginning. Dancing with my hula sisters (and occasionally brothers) has taught me to admire the Hawaiian culture, especially their reverence for their land.

    The kia’i have always said that their complaint is not against science, and I take them at their word. Nevertheless, if you are protesting something, it is important to know what you are protesting against. I believe that they have missed one crucial fact about the TMT within the context of Hawaiian culture. The astronomers, likewise, have explained the telescope’s value in terms of science and the economy, but have failed to explain its value to native Hawaiians in spiritual terms. I hope to bridge that communication gap. In this article I will speak for both of my families, using “we” to mean both astronomers and hula dancers, depending on the context.

    Dear kia’i mauna,

    I greet you in the name of Wakea, the sky god who created the Hawaiian islands and the mountain on which you stand. As you have said many times, Mauna Kea is only a contraction of its full name, Mauna a Wakea—or Wakea’s mountain.

    For our readers on the mainland, who may not have followed the drama on Mauna Kea closely, I would like to begin by celebrating some of your accomplishments. First and foremost among these, you have introduced the world to the concept of kapu aloha. This is a code that requires the protesters to maintain proper (pono) and respectful behavior at all times, without anger. Your adherence to this code has prevented any violence aside from the first week, when the police arrested some of your leaders. Your movement follows in the exemplary lineage of Gandhi and Martin Luther King. I consider kapu aloha to be spiritually identical with Gandhi’s concept of satyagraha, which means “truth power.” Likewise, telling the truth is at the heart of kapu aloha.

    You have also inspired and connected with indigenous people and their sympathizers around the world. As a student of hula, I could feel the joy in your tent when you chanted the oli and danced the hula in praise of the mountain, the waters, and the people who are defending their beliefs. When you came to my town of Santa Cruz, California, in November, you invited Valentin Lopez of the local Amah Mutsun tribe to speak, and he said, “We [indigenous people] are the only people with the moral authority to speak for this land.” These kinds of conflicts have arisen before. The Tohono O’odham Nation protested the VERITAS gamma ray detector on Kitt Peak in Arizona, and the San Carlos Apache opposed several telescopes on Mt. Graham, also in Arizona.

    CfA/VERITAS, a major ground-based gamma-ray observatory with an array of four Čerenkov Telescopes for gamma-ray astronomy in the GeV – TeV energy range. Located at Fred Lawrence Whipple Observatory, Mount Hopkins, Arizona, USA, altitude 2,606 m (8,550 ft)

    U Arizona Submillimeter Telescope located on Mt. Graham near Safford, Arizona, USA, Altitude 3,191 m (10,469 ft)

    Both tribes succeeded at delaying, preventing, or relocating these telescopes. Of course, the use of sacred lands is only one of the many challenges facing indigenous peoples. But I believe that protests related to sacred lands have been especially effective, precisely because they force American society to confront and acknowledge your deepest values as a people.

    That brings me to your third accomplishment: You are changing the culture of astronomy. It is no accident that we always want to put telescopes on mountaintops, and these mountains are usually sacred to somebody. We are not entitled to build there. We need to ask permission, humbly. And asking permission means accepting that the answer might be “no.” This is where the TMT board failed. They thought that once they held a hearing and received a permit, their job was done. We need to learn that hearings are not the same as listening, and a permit is not the same as permission.

    EYE OF THE STORM: The Thirty Meter Telescope, seen here in an artist’s rendering, would be the largest visible-light telescope in the Northern Hemisphere, allowing astronomers to explore exoplanets and the formation of stars and galaxies. Mauna Kea is an ideal site for capturing sharp images, scientists say, because Hawaii’s atmosphere is calm, cool, and often free of clouds and weather. TMT International Observatory.

    I now come to the more difficult part of this letter, in which I tell you that the kia’i, too, have overlooked something. You are much more like the astronomers than you realize. Both of you, native Hawaiians and astronomers, learn by careful observation (maka’ala). You are kia’i mauna, watchers of the mountain. They are kia’i o na hoku, watchers of the stars. Each of you needs the other. Separately, you are out of balance. The kia’i mauna focus on their responsibility to their land and are blind (alas) to the epochal changes going on in our knowledge of the stars. The kia’i o na hoku focus on their quest to understand the skies, and forget sometimes their responsibility to the earth and its inhabitants. The two of you need each other and always will, and for that reason this drama cannot end with the victory of one side over the other. The only end is reconciliation, which can only come through dialogue conducted in the spirit of aloha.

    I mentioned above the new things we are learning about the stars. Let me explain what I mean. Beginning in the mid-1990s, astronomers found ways to indirectly detect exoplanets, or planets orbiting other stars. We recognize them either through the wobble they create in their parent stars’ motion, or through the slight dimming of the star when the planet passes in front. We cannot yet see these exoplanets directly, because we do not have telescopes that are powerful enough. That is what the Thirty Meter Telescope is for. More than that, the TMT would give us the ability to probe those planets’ atmospheres and look for oxygen. If we find that, it will be a sign to us: “Here is life.”
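
    The “slight dimming” of the transit method can be quantified with a back-of-the-envelope calculation. The sketch below is illustrative only (the radii are standard textbook values; nothing here comes from the TMT project): the fractional dimming is roughly the ratio of the planet’s disc area to the star’s.

```python
# Illustrative transit-depth estimate; radii are standard textbook values.
R_SUN = 696_000.0     # solar radius, km
R_JUPITER = 71_492.0  # Jupiter's radius, km
R_EARTH = 6_371.0     # Earth's radius, km

def transit_depth(planet_radius_km, star_radius_km=R_SUN):
    """Fractional dimming of a star when the planet crosses its disc."""
    return (planet_radius_km / star_radius_km) ** 2

# A Jupiter-size planet blocks about 1 percent of a Sun-like star's light;
# an Earth-size planet blocks less than 0.01 percent.
print(f"Jupiter analog: {transit_depth(R_JUPITER):.4%}")
print(f"Earth analog:   {transit_depth(R_EARTH):.5%}")
```

    Signals that faint are part of why detecting Earth-like worlds, let alone probing their atmospheres for oxygen, pushes astronomers toward ever larger mirrors.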

    According to Hawaiian legend, in the early days of creation, the gods spoke with man through the kahunas, and man spoke with the gods. The gods are still speaking to man, but in a different way than before. One thing we are learning from them is that our sky father, Wakea, was much busier than we thought. He created millions of other worlds. And on some of these planets, the most favored ones, he may have created other living beings.

    We do not know what form they may have. They may be nothing more than one-celled organisms. We do not even have scientific proof that they exist, in part because we do not have the TMT yet. But I feel sure that there are some among you, dear kia’i, who know in your gut—in your na’au—that life does exist out there in the cosmos. If you know this, then you must know also that they are your family. They are your cousins just as surely as the taro plant, Wakea’s firstborn child, is your brother.

    When you propose to shut down the TMT, you are proposing that we should shut our eyes to our own family. Your own family. This has nothing to do with being for or against science. It is not pono. It violates what I have learned about Hawaiian culture, that ohana comes first.

    As you know, the astronomers have a plan B, to build the telescope in the Canary Islands. Gordon Squires, vice president for external affairs of the TMT, tells me that the effect will be to make the science take twice as long, because there are about half as many nights with good seeing on the Canary Islands. Still, the universe can wait. The one-celled organisms will still be there even if we take twice as long to find them.

    I’m not worried about that. I’m worried about you, native Hawaiians. What will be the effect on you when you abandon your kuleana, your responsibility to Wakea? He brought you to this island and made you stewards of this unique mountain, the mountain you named after him. Mauna Kea is the umbilical cord joining earth to the stars. It is a place that Wakea has designated for looking up as well as for looking down. He could not entrust this place to anyone else. He had to choose gatekeepers who could look in both directions: a caretaking people who valued their connection to the earth, and a voyaging people who valued their connection to the stars. He would not want you to succeed in only half of your mission.

    Suppose that, by the power of kapu aloha and the grace of the gods, you agree that my words are true. What then would I ask you to do? I would ask for only one change at first, small but profound. Over and over, the kia’i have referred to the TMT as a “desecration” of the sacred mountain. It is not, and the word should not be uttered again. Instead I ask you to acknowledge that the observatory will consecrate a small part of the mountain to a purpose intended by your own gods. Your mission is not to oppose this consecration, but to make sure that it is done right. Be pono, and make sure that the astronomers are pono too.

    What do I mean by “doing it right”? A long list of things, some of which may not be easy. First, there should be native Hawaiian astronomers. Jessica Dempsey, deputy director of the East Asian Observatory, says that there are currently no native Hawaiian astronomers at any of the 13 telescopes on the mountain.

    Mauna Kea Observatory Hawaii USA

    This is a scandal. Though there are many native Hawaiian engineers doing outstanding work on the mountain, it is the astronomers who provide the vision, and they cannot fulfill their job without native Hawaiian eyes.

    When I call for native Hawaiian astronomers, it is of course the responsibility of the astronomy community, but it is also your responsibility. Brialyn Onodera, a native Hawaiian engineer who works at one of the telescopes on Maui, wrote in the Honolulu Civil Beat that the protests have created a climate in which “telescope” has become a dirty word. (She is not the only one saying this; I have interviewed others.)

    You, the kia’i, can reverse this message. You can teach native Hawaiian children that astronomy is a sacred responsibility (or kuleana) that has been given to your people. Teach your children that there are two types of astronomy, just as there are two types of hula. We have kahiko, done in the ancient style with no instruments except chanting and drums, and we have ‘auana, done in the modern style, with Western music and instruments. No one protests against hula ‘auana, or calls it a desecration. We all recognize that it is another valid expression of what it means to be Hawaiian. Likewise, you can encourage some of your children to become kahiko astronomers, practicing the ancient methods of navigation, while others become ‘auana astronomers, fulfilling their kuleana with the best instruments that Western science can devise. Both of these missions should be treated with equal respect.

    Should you reverse your opposition to the TMT, a cause that some of you have given 10 years of your lives to? I leave this choice to your own conscience. In any case, there is other work to be done. The Master Lease awarding management of the Mauna Kea Science Reserve to the University of Hawaii will expire in 2033. It seems to me that any decision about individual facilities should wait until the issue of who will manage the mountain next is resolved. The kia’i deserve a place at the table, and I hope you will take it. You have earned the power to say no, but you have also earned something greater: the power to say yes.

    This voyage of discovery, this quest to reunite the family of Wakea, will not be a short one. It will not end when TMT is built, or not built. It will not end when the Master Lease is renewed, or not renewed. The quest will last for centuries. All that we are asking, all that the gods are asking, and all that your children are asking, is for you to join us. At the helm, where you have always been.

    In the spirit of kapu aloha,

    See the full article here.


    Please help promote STEM in your local schools.

    STEM Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

     
  • richardmitnick 8:35 am on December 29, 2019
    Tags: Amateurs: An apothecary by trade Heinrich Schwabe started observing the sun in 1826 and did so continuously more than 300 days a year for four decades., How counting sunspots unites the past and future of science., Max Waldmeier, Nautilus, Sunspots appear in cycles., The Fraunhofer refracting telescope, The most stable apparatus for detecting change in the star that gives us life is the human brain and eye., The observation of sunspots predates modern astronomy by at least three millennia., The sun was central to several ancient religions., The Zurich Observatory, Today thanks to solar physics we know the sunspot cycle is driven by the rotational motion of plasma within the spinning sun.

    From Nautilus: “The 315-Year-Old Science Experiment” 


    March 26, 2015
    Jonathon Keats

    Len Small, from a NASA image

    How counting sunspots unites the past and future of science.

    The most arrogant astronomer in Switzerland in the mid-20th century was a solar physicist named Max Waldmeier. Colleagues were so relieved when he retired in 1980 that they nearly retired the initiative he led as director of the Zurich Observatory.

    Die Sternwarte Urania in Zürich, CC-BY-SA-2.5

    Waldmeier was in charge of a practice that dated back to Galileo and remains one of the longest continuous scientific practices in history: counting sunspots.

    The Zurich Observatory was the world capital for tallying sunspots: cool dark areas on the sun’s surface where the circulation of internal heat is dampened by magnetic fields. Since the 19th century, astronomers had correlated sunspots with solar outbursts that could disrupt life on Earth. Today scientists know the spots mark areas in the sun that generate colossal electromagnetic fields that can interfere with everything from the Global Positioning System to electricity grids to the chemical makeup of our atmosphere.

    What alienated Waldmeier’s potential Swiss successors was his hostility toward methods other than his own. In the space age, he insisted on counting sunspots by eye, using a Fraunhofer refracting telescope, named after its 18th-century inventor, installed by the first Zurich Observatory director, Rudolf Wolf, in 1849.

    DON’T FIX IT IF IT AIN’T BROKE: Seen here, the Fraunhofer refracting telescope, named after its 18th-century inventor, was employed by solar physicists to count sunspots well into the 20th century. University Observatory Munich

    (With Waldmeier’s legacy uncertain, his assistant walked off with the Fraunhofer telescope and installed it in his garden.) Automated observation—and solar monitoring by satellite—seemed like obvious improvements, far less subjective than squinting.

    Yet for all the animosity toward Waldmeier, his method persisted. Sunspots appear in cycles. Their number steadily increases over a period of approximately 11 years, followed by about 11 years of decrease. Waldmeier understood the interpretation cannot be hurried because of the inherent slowness of the cycle itself. “You cannot speed up the process,” says astronomer Frederic Clette, director of the Solar Influences Data Analysis Center, based at the Royal Observatory of Belgium. “If you want to understand the sun, you must keep a record of the cycle continuously over long durations.”

    The best way to ensure data remains consistent, explains Clette, is to employ a method of observation that links the past and present. In contrast to most new science, which progresses in tandem with technological developments, the most stable apparatus for detecting change in the star that gives us life is the human brain and eye.

    “Modern techniques and equipment are powerful, but the technologies span over only a few solar cycles, so they don’t show how cycles differ over centuries,” says Clette, who is the custodian of the worldwide sunspot count, begun by Wolf in Zurich, and now known as the International Sunspot number. Under Clette’s watch, blemishes are still counted by eye. “When we count by eye, what we observe now can be connected to what was observed in the distant past.”

    It’s a remarkable story, says Clette. One of the most enduring scientific methods is simply observing. “It’s a long and systematic evolution of accumulating information that has led to an understanding of the sunspot phenomenon, and the jewel on the crown, the ability to predict the future.”

    The observation of sunspots predates modern astronomy by at least three millennia. Since the sun was central to several ancient religions, any blemish was sure to be seen as significant. For ancient Africans living on the Zambezi River, sunspots were mud spattered in the face of the sun by a jealous moon. Ancient Chinese saw sunspots as the building blocks of a floating palace or even brushstrokes signifying the character for king. Virgil was more practical. “When [the sun] has checkered with spots his early dawn … beware of showers,” he warned in his Georgics.

    Galileo studied sunspots more scientifically, seeing them as useful markings to calibrate his study of the solar disc. From careful telescopic observation of their daily changes in appearance, he correctly deduced that the sun was spherical and rotated on its own axis, carrying the mutable blemishes with it. But to his eyes, and those of other early astronomers, the meandering of sunspots seemed random. That left plenty of opportunity for speculation: Philosopher Rene Descartes thought the spots were oceans of primordial scum. Astronomer William Herschel believed they were portholes into a dark subsolar world where people lived beneath the sun’s radiant sheath.

    Yet there was one amateur astronomer who was simply content to watch the sky and document what he saw. An apothecary by trade, Heinrich Schwabe started observing the sun in 1826, and did so continuously, more than 300 days a year, for four decades. Initially he was searching for undiscovered planets inside Mercury’s orbit. Finding nothing solid, his focus gradually shifted to the speckled solar surface.

    By 1844, having counted tens of thousands of spots, Schwabe grew convinced that there was a cycle to the blotchiness: The number of sunspots seemed to wax and wane every 10 years. He had no explanation, but reckoned that others might learn from his observation, so he published a page-long note in Astronomische Nachrichten. His paper was read by Rudolf Wolf, the 30-year-old director of the Bern Observatory. When Wolf took over as director of the Zurich Observatory in 1864, he decided to make the sunspot cycle a focus of study.

    Wolf was not content to count only forward in time. To determine whether there truly was a cycle, and to get its true measure, he shrewdly sought to collect past data—starting with Schwabe’s—and to integrate it with his own daily observations.

    The trouble was the figures didn’t sync. Their numbers didn’t match even when they counted on the same day, as they did thousands of times between 1849 and Schwabe’s final count in 1868. Wolf’s Fraunhofer telescope was considerably more powerful than Schwabe’s old instrument, revealing that many of Schwabe’s spots were actually clusters. To compensate, Wolf made two crucial decisions. The first was to censor his count—tallying clusters instead of individual spots—reasoning that the relative amount of sunspot activity was what really mattered. Wolf’s second important decision was to establish a ratio between himself and Schwabe by comparing their counts on days that both men observed the sun. That gave him a coefficient he dubbed k, a multiple that could be applied to all of Schwabe’s pre-1849 observations, statistically aligning them with Wolf’s newer data.
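
    Wolf’s cross-calibration can be sketched in a few lines of arithmetic. The counts below are invented for illustration (his real ledgers were far richer), but the idea is the same: average the ratio of the two observers’ tallies on shared days to get k, then rescale the older record.

```python
# Hypothetical overlapping-day counts, invented for illustration.
wolf_counts    = [60, 84, 45, 102, 77]   # Wolf, with the stronger telescope
schwabe_counts = [40, 56, 30, 68, 50]    # Schwabe, same days, weaker optics

# k is the average ratio of Wolf's tally to Schwabe's on shared days.
k = sum(w / s for w, s in zip(wolf_counts, schwabe_counts)) / len(wolf_counts)

# Applying k to Schwabe's earlier, solo observations statistically
# aligns his record with Wolf's newer data.
schwabe_archive = [25, 33, 12, 71]
aligned = [round(k * s) for s in schwabe_archive]
print(k, aligned)
```

    The relative sunspot number that grew out of Wolf’s work, R = k(10g + s) with g the number of groups and s the number of individual spots, builds the group-versus-spot weighting directly into the formula.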

    The coefficient permitted something even more remarkable. By a series of coincidental overlaps in observation, Wolf could work his way back from Schwabe to establish k coefficients for other scientists, and reliably extend his sunspot data all the way to 1700. Wolf then created a continental network of sunspot counters, and their daily tallies, ranging from zero to a couple hundred, became one of the most reliable datasets in astronomy.

    The data showed that Schwabe was right about the sunspot cycle, but not its duration. At first Wolf recalculated the period to 11 years, which led him to believe he’d discovered the cause: Eleven years is the time it takes Jupiter to orbit the sun. Yet the more sunspot cycles he collected, the less plausible his correlation seemed. Some cycles were as long as 14 years. Others were as short as nine. Since Jupiter’s orbital period was invariant, he had to concede defeat.

    He kept counting, confident that someone would figure out the sunspot mechanism given enough data. He counted all the way up to his death in 1893. By then his assistant, Alfred Wolfer, had been counting alongside him for 17 years. Their k coefficient made the observational transition seamless to subsequent directors at the Zurich Observatory, including the haughty Waldmeier, who developed an evolutionary classification of sunspots, and a method of forecasting geomagnetic storms, that greatly advanced solar science.

    SEE SPOT SUN: This stunning image of a sunspot signals where magnetism has suppressed the movement of heat through the sun, a process known as solar convection. Sunspots mark areas that spark colossal flares that affect GPS and electricity grids on Earth. SST, Royal Swedish Academy of Sciences.

    So why are there periods of dark spottiness followed by periods when the sun is clear? “The truth is that we still don’t know for sure what causes the periodicity,” admits Clette. Even with 315 years of sunspot data, the inner workings of the sunspot cycle have yet to be illuminated in full.

    Still, a lot of progress has been made since Schwabe’s era, notably on the impact of the solar outbursts. In 1859, two amateur astronomers in Wolf’s observational network noticed two bright flares inside a cluster of sunspots. Over the following days, telegraph service was disrupted and auroras could be seen across Europe. Several such episodes convinced scientists that there was a connection, the explanation for which came in 1908 when astronomer George Ellery Hale used a spectroscope to determine that sunspots are magnetic. (Magnetism subtly interferes with the color spectrum.) The sun’s dark blemishes could finally be understood. They weren’t primordial scum or signs of solar habitation, but areas where magnetism suppressed the movement of heat through the sun, a process known as solar convection.

    Today, thanks to solar physics, we know the sunspot cycle is driven by the rotational motion of plasma within the spinning sun.

    NASA Parker Solar Probe Plus, named to honor pioneering physicist Eugene Parker

    Because the plasma is electrically charged, and layers of plasma rotate at different speeds, the solar sphere behaves like a dynamo, producing electromagnetic fields that are thousands of times stronger than Earth’s polar magnetism. The circulation of plasma that creates a solar dynamo is now being modeled on supercomputers. Centuries of sunspot data help scientists to refine and validate those models by running simulations, seeing which models most closely match the varying periodicity of successive cycles. And the more perfect models become, the better the sunspot cycle itself will be understood.
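
    One simple way to see which models most closely match an observed record, sketched here with synthetic data rather than any real research pipeline, is to estimate each series’ dominant cycle length via autocorrelation and compare the results.

```python
import math

# Synthetic sunspot-like series with an 11-year cycle, standing in for data.
series = [50 + 40 * math.cos(2 * math.pi * year / 11.0) for year in range(300)]

def dominant_period(x, min_lag=5, max_lag=20):
    """Return the lag (in years, for annual data) with peak autocorrelation."""
    mean = sum(x) / len(x)
    d = [v - mean for v in x]
    def autocorr(lag):
        return sum(d[i] * d[i + lag] for i in range(len(d) - lag))
    return max(range(min_lag, max_lag + 1), key=autocorr)

print(dominant_period(series))  # recovers the 11-year period
```

    Run on model output and on the observed record alike, a statistic like this lets the two be compared cycle by cycle; the actual supercomputer analyses are, of course, far more sophisticated.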

    The urgency for counting sunspots, explains Clette, has only increased as we’ve moved from an era of telegraphs to satellites. “The sunspot number helps establish the trend over the coming months and years for predicting the frequency and magnitude of disturbances,” he says. The Royal Observatory of Belgium receives regular requests for data from telecommunications and power companies. Commercial airlines also depend on sunspot trends because solar magnetism affects the rate at which radio waves pass through the ionosphere, warping GPS coordinates. If solar weather is trending toward storminess, pilots will shift their attention to alternative navigational instruments.

    There also are more speculative correlations between sunspots and life on Earth. Medical researchers are keen to find connections between solar magnetism and cancer. Economists look for relationships between sunspot cycles and agriculture. And climatologists want to know whether little ice ages are caused by periods of “grand minimum”—when the sun is almost spotless—as was the case in the early 18th century. (Period paintings show people ice skating on the Thames and Venice’s lagoons.)

    Progress in climatology is especially compelling. Solar radiation is known to change the chemistry of the upper atmosphere, and sunspots are known to modulate the intensity of different wavelengths—from infrared to X-rays—bombarding our planet. By linking the sunspot number to variations in the solar spectrum, climatologists will soon be able to deduce the spectral signature of the sun during the 18th-century grand minimum.

    It’s an application that Wolf could never have anticipated, and a lesson to would-be Wolfs present and future: Solving one of the most pressing problems in contemporary science—how the global climate changes—will depend on data collected long before the problem was known. “I think it’s the essence of scientific research when you observe a new phenomenon that you cannot understand,” says Clette. “It’s like discovering a new territory. You know that new knowledge will be gained, even if it comes from different directions than you expect.”

    Explaining the sunspot cycle would be the ultimate vindication of Wolf’s multi-century gambit. Yet in his role as custodian of sunspots, Clette is as jubilant about another breakthrough: He has recently established contact with the man who inherited Wolf’s instruments from Waldmeier’s renegade assistant. Observations from the old Fraunhofer telescope are once again contributing to the international sunspot count.

    Clette’s elation is not at all sentimental, but celebrates Wolf’s central role in making sunspot counting consistent. “I’ve been able to establish the k coefficient on the telescope,” he says. “It matches perfectly what Wolf established in the 19th century—and keep in mind that the present observer is not Wolf. The matching k coefficient is an indication that the eye-brain system hasn’t evolved in the past couple centuries.”

    And if the past couple centuries are a good measure, then simple observation will be viable far into the future. The sunspot count can be a model for any study that requires ultra-long-term data collection, such as the subtle changes in an ancient star’s behavior in the thousands of years before a supernova. Spanning tens or hundreds of generations, a supernova study would make sunspot counting seem as quick as scoring a baseball game.

    This experiment in deep time will be an epic challenge. It will depend on statistical cleverness worthy of Wolf and stubborn traditionalism worthy of Waldmeier. Yet to reach its fullest potential, it will take the placid mindset of Schwabe, who didn’t need to know what would eventually be found in his data, only that there was merit in observing.

    See the full article here.


    Please help promote STEM in your local schools.

    STEM Education Coalition


     
  • richardmitnick 12:49 pm on December 26, 2019
    Tags: "Before There Were Stars", Molecular Hydrogen, Nautilus

    From Nautilus: “Before There Were Stars” 


    December 26, 2019
    Daniel Wolf Savin
    Illustration by Jon Han


    The unlikely heroes that made starlight possible.

    The universe is the grandest merger story that there is. Complete with mysterious origins, forces of light and darkness, and chemistry complex enough to make the chemical conglomerate BASF blush, the trip from the first moments after the Big Bang to the formation of the first stars is a story of coming together at length scales spanning many orders of magnitude.

    ALMA schematic diagram of the history of the Universe. The Universe is in a neutral state at 400,000 years after the Big Bang, until light from the first generation of stars starts to ionise the hydrogen. After several hundred million years, the gas in the Universe is completely ionised. Credit: NAOJ

    To piece together this story, scientists have turned to the skies, but also to the laboratory, to simulate some of the most extreme environments in the history of our universe. The resulting narrative is full of surprises. Not least among these is how nearly it didn’t happen—and wouldn’t have, without the roles played by some unlikely heroes. Two of the most important, at least when it comes to the formation of stars, which produced the heavier elements necessary for life to emerge, are a bit surprising: dark matter and molecular hydrogen. Details aside, here is their story.

    Fritz Zwicky discovered dark matter while observing the motion of the Coma Cluster. Vera Rubin, a woman in STEM denied the Nobel Prize, did most of the work on dark matter.

    Fritz Zwicky, from http://palomarskies.blogspot.com

    Coma cluster via NASA/ESA Hubble

    Astronomer Vera Rubin at the Lowell Observatory in 1965, worked on Dark Matter (The Carnegie Institution for Science)


    Vera Rubin measuring spectra, worked on Dark Matter (Emilio Segre Visual Archives AIP SPL)


    Vera Rubin, with Department of Terrestrial Magnetism (DTM) image tube spectrograph attached to the Kitt Peak 84-inch telescope, 1970. https://home.dtm.ciw.edu

    The LSST, or Large Synoptic Survey Telescope, is to be named the Vera C. Rubin Observatory by an act of the U.S. Congress.

    LSST telescope, the Vera Rubin Survey Telescope, currently under construction on the El Peñón peak of Cerro Pachón, a 2,682-meter-high mountain in Coquimbo Region in northern Chile, alongside the existing Gemini South and Southern Astrophysical Research Telescopes.

    Dark Matter Research

    Scientists studying the cosmic microwave background hope to learn about more than just how the universe grew—it could also offer insight into dark matter, dark energy and the mass of the neutrino.

    Dark matter cosmic web and the large-scale structure it forms. The Millennium Simulation, V. Springel et al.

    Dark Matter Particle Explorer China

    DEAP Dark Matter detector, The DEAP-3600, suspended in the SNOLAB deep in Sudbury’s Creighton Mine

    LBNL LZ Dark Matter project at SURF, Lead, SD, USA


    Inside the ADMX experiment hall at the University of Washington Credit Mark Stone U. of Washington. Axion Dark Matter Experiment

    2

    Dark Matter

    The Big Bang created matter through processes we still do not fully understand. Most of it—around 84 percent by mass—was a form of matter that does not interact with or emit light. Called dark matter, it appears to interact only gravitationally. The remaining 16 percent, dubbed baryonic or ordinary matter, makes up the everyday universe that we call home.

    Standard Model of Particle Physics

    Ordinary matter interacts not only gravitationally but also electromagnetically, by emitting and absorbing photons (sometimes called radiation by the cognoscenti and known as light in the vernacular).

    As the universe expanded and cooled, some of the energy from the Big Bang converted into ordinary matter: electrons, neutrons, and protons (the latter are equivalent to ionized hydrogen atoms). Today, protons and neutrons comfortably rest together in the nuclei of atoms.

    The quark structure of the proton 16 March 2006 Arpad Horvath

    But in the seconds after the Big Bang, any protons and neutrons that fused to form heavier atomic nuclei were rapidly blown apart by high-energy photons called gamma rays. The residual thermal radiation field of the Big Bang provided plenty of those. It was too hot to cook. But things got better a few seconds later, when the radiation temperature dropped to about a trillion degrees Kelvin—still quite a bit hotter than the 300 Kelvin room temperature to which we are accustomed, but a world of difference for matter in the early universe.

    Heavier nuclei could now survive the gamma-ray bombardment. Primordial nucleosynthesis kicked in, enabling nuclear forces to bind protons and neutrons together, until the expansion of the universe made it too cold for these fusion reactions to continue. In those 20 minutes, the universe was populated with atomic nuclei. The resulting elemental composition of the universe weighed in at roughly 76 percent hydrogen, 24 percent helium, and trace amounts of lithium—all ionized, since it was too hot for electrons to stably orbit these nuclei. And that was it, until the first stars formed and began to forge all the other elements of the periodic table.

    Before these stars could form, however, newly-formed hydrogen and helium atoms needed to gather together to make dense clouds. These clouds would have been produced when slightly denser regions of the universe gravitationally attracted matter from their surroundings. The question is, was the early universe clumpy enough for this to have happened?

    To answer the question, we can look to the modern-day night sky. In it, we see a faint glow of microwave radiation that has an even fainter pattern in it. This so-called cosmic microwave background [CMB] structure dates back to 377,000 years after the Big Bang, a mere fraction of the universe’s current age of 13.8 billion years, and analogous to less than a day in the 81-year life expectancy for a woman living today in the United States.
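
    The lifespan analogy can be checked with a few lines of arithmetic, using only the figures quoted in the paragraph above:

```python
# Checking the lifespan analogy with the figures quoted in the text.
UNIVERSE_AGE_YR = 13.8e9   # current age of the universe, in years
CMB_EPOCH_YR = 377_000     # age at which the CMB was released
LIFESPAN_YR = 81           # U.S. female life expectancy, per the text
DAYS_PER_YEAR = 365.25

fraction = CMB_EPOCH_YR / UNIVERSE_AGE_YR
equivalent_days = fraction * LIFESPAN_YR * DAYS_PER_YEAR

print(f"{equivalent_days:.2f} days into an 81-year life")  # ≈ 0.81 days
```

    The result, about 0.8 days, confirms the “less than a day” framing.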

    CMB per ESA/Planck

    At that time, the universe had just cooled to about 3,000 Kelvin. Free electrons started to be captured into orbit around protons, forming neutral hydrogen atoms. Photons from the flash of the Big Bang, whose progress had been impeded by their scattering off of unbound electrons, could now finally stream throughout the cosmos, essentially free. These photons continue to permeate the universe today, at a frigid temperature of only 2.7 Kelvin, and constitute the cosmic microwave background that we have measured using an array of ground-based, balloon-borne, and satellite telescopes.
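
    Those two temperatures are themselves informative. The CMB temperature falls in inverse proportion to the cosmic scale factor (a standard relation, not spelled out in the article), so their ratio gives the factor by which the universe has stretched since the photons were released:

```python
# The CMB temperature scales inversely with the cosmic scale factor,
# so the two temperatures in the text imply the total expansion since
# the photons were set free.
T_RECOMBINATION_K = 3000.0   # temperature when neutral atoms formed
T_TODAY_K = 2.7              # CMB temperature measured today

expansion_factor = T_RECOMBINATION_K / T_TODAY_K
print(f"expansion since recombination: ~{expansion_factor:,.0f}x")
```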

    These sky maps suggested something surprising: The intensity of the residual heat from the Big Bang made the early universe too smooth for gas clouds to form.

    Enter dark matter [above]. Because it does not interact directly with light, it was unaffected by the same radiation that smoothed out ordinary matter. Therefore it was left with a relatively high degree of clumpiness. It, rather than regular matter, initiated the formation of the stars and galaxies that make up the modern structure of the universe. Regions of space with an above-average density of dark matter gravitationally attracted matter from regions with lower densities. Halos of dark matter formed and merged with other halos, bringing ordinary matter along for the ride.

    Caterpillar Project: a Milky-Way-size dark-matter halo and its subhalos (circled), from an enormous suite of simulations. Griffen et al. 2016

    4

    Molecular Hydrogen

    Once the universe went neutral, gas began to form into clouds. As ordinary matter accelerated into the gravitational wells of dark matter, gravitational potential energy converted into kinetic energy, creating a hot gas of fast-moving particles with high kinetic energies embedded within halos of dark matter. Starting from temperatures around 1,000 Kelvin, these gas clouds eventually gave birth to the first stars when the universe was roughly half a billion years old (about four years into the lifespan of the typical U.S. woman).

    For a star to form, a gas cloud needs to reach a certain density; but if its constituent atoms are too hot, zipping around in every direction, this density may be unreachable. The first step toward making star-forming clouds was for gas atoms to slow down by radiating their kinetic energy out of the cloud and into the larger universe, which by this time had cooled to below 100 Kelvin.

    But the atoms could not cool themselves: When atoms collide like billiard balls, they merely exchange kinetic energy; the total kinetic energy of the gas remains unchanged. They needed a catalyst to cool off.

    This catalyst was molecular hydrogen (two hydrogen atoms bound together by sharing their electrons). Hot particles colliding with this dumbbell-shaped molecule transferred some of their own energy to the molecule, causing it to rotate. Eventually these excited hydrogen molecules would relax back to their lowest-energy (or ground) state by emitting a photon that escaped from the cloud, carrying the energy out into the universe.

    To make molecular hydrogen, the atomic gas clouds needed to do some chemistry. It might be surprising to hear that any chemistry was going on at all, given that the entire universe had just three elements. The most sophisticated chemical models of early gas clouds, however, include nearly 500 possible reactions. Fortunately, to understand molecular hydrogen formation, we need concern ourselves with only two key processes.

    Chemists have named the first reaction associative detachment, a name fit for a psychiatric condition out of the DSM-5 for which a clinician might prescribe some primordial lithium. Initially, most of the hydrogen in a gas cloud was in neutral atomic form, with the positive charge of a single proton cancelled out by the negative charge of a single orbiting electron. However, a small fraction of the atoms captured two electrons, creating negatively charged hydrogen ions. These neutral hydrogen atoms and charged hydrogen ions “associated” with each other, causing the extra electron to detach and leaving behind neutral molecular hydrogen. In chemical notation, this can be represented as H + H⁻ → H₂ + e⁻. Associative detachment converted only about 0.01 percent of atomic hydrogen to molecules, but that small fraction allowed the clouds to begin to cool and become denser.

    When the cloud had become sufficiently cool and dense, a second chemical reaction began. It is called three-body association, and written as H + H + H → H₂ + H. This ménage-à-trois begins with three separate hydrogen atoms, and ends with two of them coupled and the third one left out in the cold. Three-body association converted essentially all of the cloud’s remaining atomic hydrogen into molecular hydrogen. Once all of the hydrogen was fully molecular, the cloud cooled to the point where its gas could condense enough to form a star.
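
    As a minimal sanity check, both reactions balance: hydrogen nuclei and electric charge are conserved on each side. A quick sketch (bookkeeping only, not a chemistry model):

```python
# Bookkeeping check, not a chemistry model: both reactions conserve
# hydrogen nuclei and electrons. Each species is tallied as
# (hydrogen nuclei, electrons).
species = {
    "H":  (1, 1),  # neutral hydrogen atom
    "H-": (1, 2),  # hydrogen anion: one proton, two electrons
    "H2": (2, 2),  # molecular hydrogen
    "e-": (0, 1),  # free electron
}

def totals(side):
    """Sum (nuclei, electrons) over one side of a reaction."""
    return (sum(species[s][0] for s in side),
            sum(species[s][1] for s in side))

# Associative detachment: H + H- -> H2 + e-
assert totals(["H", "H-"]) == totals(["H2", "e-"])

# Three-body association: H + H + H -> H2 + H
assert totals(["H", "H", "H"]) == totals(["H2", "H"])

print("both reactions balance")
```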

    5

    Stars

    From the formation of a dense cloud to the ignition of fusion at the heart of a star is a process whose complexity far exceeds what came before it. In fact, even the most sophisticated computer simulations available have yet to reach the point where the object becomes stellar in size and fusion begins. Simulating most of the 200-million-year process is relatively easy, requiring only about 12 hours of high-speed, parallel-processing computer power. The problem lies in the final 10,000 years. As the density of the gas goes up, the structure of the cloud changes more and more rapidly. So, whereas for early times one needs only to calculate how the cloud changes every 100,000 years or so, for the final 10,000 years one must calculate the change every few days. This dramatic increase in the required number of calculations translates into more than a year of non-stop computer time on today’s fastest machines. Running simulations for the full range of possible starting conditions in these primordial clouds exceeds what can be achieved in a human lifetime. As a result, we still do not know the mass distribution for the first generation of stars. Since the mass of a star determines what elements it forges in its core, this hinders our ability to follow the pathway by which the universe began to synthesize the elements needed for life. Those of us who cannot wait to know the answer are counting on yet another hero: Moore’s Law.
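
    A back-of-the-envelope sketch, using the rough step sizes quoted above (the three-day step is an assumed stand-in for “every few days”), shows why those final 10,000 years dominate the cost:

```python
# Back-of-the-envelope estimate of simulation timesteps, using the
# approximate step sizes quoted in the text. The 3-day step is an
# assumed stand-in for "every few days"; all figures are rough.

DAYS_PER_YEAR = 365.25

def n_steps(duration_years, step_years):
    """Number of fixed-size timesteps needed to cover a duration."""
    return duration_years / step_years

# Early phase: ~200 million years, one step every ~100,000 years.
early = n_steps(200e6, 1e5)               # 2,000 steps

# Final collapse: ~10,000 years, one step every ~3 days.
final = n_steps(1e4, 3 / DAYS_PER_YEAR)   # ≈ 1.2 million steps

print(f"early phase: {early:,.0f} steps")
print(f"final phase: {final:,.0f} steps")
print(f"the last 10,000 years need ~{final / early:,.0f}x more steps")
```

    The last 10,000 years require several hundred times more timesteps than the preceding 200 million years, which is why the endgame of collapse, not the long quiet buildup, is the computational bottleneck.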

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 9:58 am on December 26, 2019 Permalink | Reply
    Tags: "The Joy of Cosmic Mediocrity", , , , By the 18th century many leading intellectuals embraced not only the idea of other worlds but even other inhabited worlds., , Heliocentrism, If Earth is exceptional then we might be profoundly alone., Nautilus, William Herschel (perhaps the most famous astronomer of the late 18th and early 19th century) was a firm proponent of the idea that life is common on other planets.   

    From Nautilus: “The Joy of Cosmic Mediocrity” 

    Nautilus

    From Nautilus

    1
    NASA’s retro-style poster celebrates the seven Earth-size planets recently found around the nearby red dwarf star TRAPPIST-1. Credit: NASA-JPL / Caltech

    A size comparison of the planets of the TRAPPIST-1 system, lined up in order of increasing distance from their host star. The planetary surfaces are portrayed with an artist’s impression of their potential surface features, including water, ice, and atmospheres. NASA

    ESO Belgian robotic Trappist National Telescope at Cerro La Silla, Chile

    ESO Belgian robotic Trappist-South National Telescope at Cerro La Silla, Chile, 600 km north of Santiago de Chile at an altitude of 2400 metres.

    December 26, 2019
    Corey S. Powell

    It’s lonely to be an exceptional planet.

    One of the greatest debates in the long history of astronomy has been that of exceptionalism versus mediocrity—and one of the great satisfactions of modern times has been watching the arguments for mediocrity emerge triumphant. Far more than just a high-minded clash of abstract ideas, this debate has shaped the way we humans evaluate our place in the universe. It has defined, in important ways, how we measure the very value of our existence.

    In the scientific context, exceptional means something very different than it does in the everyday language of, say, football commentary or restaurant reviews. To be exceptional is to be unique and solitary. To be mediocre is to be one of many, to be a part of a community. If Earth is exceptional, then we might be profoundly alone. There might not be any other intelligent beings like ourselves in the universe. Perhaps no other habitable planets like ours. Perhaps no other planets at all, beyond the neighboring worlds of our own solar system.

    2
    MEDIOCRITY #3: In the heliocentric system of Nicolaus Copernicus, Earth is just third in a set of planets circling the sun. But there is comfort in being part of a family. Mikołaj Kopernik

    If Earth is mediocre, the logic runs the other way. We might live in a galaxy teeming with planets, many of them potentially habitable, some of them actually harboring life. In the mediocre case, we bipedal little humans might not be the only sentient creatures peering out into the depths of space, wondering if anyone else is peering back.

    Today, the broadest version of exceptionalism has been thoroughly disproven, as astronomers have discovered 4,150 confirmed exoplanets, a tally that increases almost daily. The roster of alien worlds includes a remarkable variety of forms, many of which have no equivalent in our solar system. And that is just a limited sampling from the stars in our local corner of the galaxy.

    We do not yet have the technology needed to find a close analog of Earth orbiting a close analog of the sun, so we still know little about how common or rare such worlds may be. The question of alien life is still wide open. What we do know is that the Milky Way is home to a tremendous number of other planets. In that sense, at least, we are certainly not exceptional, and Earth is certainly not alone.

    The notion of cosmic mediocrity is so old that it predates modern observatories. It predates the 17th-century invention of the telescope. It predates even what could recognizably be called “science” in the modern sense, tracing its origins at least back to the Greek philosopher Anaxagoras of Clazomenae, writing and teaching in Athens in the 5th century B.C.

    Anaxagoras proposed that the cosmos is ruled by an all-pervasive intellect that he called nous, which functioned as a set of universal laws—a philosophical ancestor of Isaac Newton’s theory of universal gravitation. Under the action of nous, the elements of nature were set into circular motion, separating into different components. The sun, a ball of incendiary metal, was cast off into the sky by this process. So, too, were the stars and planets. Although what survives of Anaxagoras’s writing is fragmentary and mostly secondhand, it seems that he imagined the stars to be fiery lumps much like the sun, just drastically more distant. In one especially intriguing passage, he further hints at the existence of other lands similar to Earth and expansively argues “that there are a sun and a moon and other heavenly bodies for them, just as with us.”

    Many of these ideas reappeared in even more modern-looking style in the philosophy of Aristarchus of Samos. During the 3rd century B.C., Aristarchus advanced the first known heliocentric model of the solar system, evicting the Earth from its long-assumed central position and completely reworking the order of the cosmos. There is no surviving description of this iconoclastic model in Aristarchus’s own words. Fortunately, his contemporary Archimedes provided a succinct summary:

    His hypotheses are that the fixed stars and the Sun remain unmoved, that the Earth revolves about the Sun in the circumference of a circle, the Sun lying in the middle of the orbit, and that the sphere of the fixed stars, situated about the same center as the Sun, is so great that the circle in which he supposes the Earth to revolve bears such a proportion to the distance of the fixed stars as the center of the sphere bears to its surface.

    That final idea, though somewhat obscure in its phrasing, is pregnant with significance. Aristarchus is saying that the stars are so far away that we cannot see their parallax: They appear stationary even as the Earth moves in a great circle around the sun. The implications are twofold. First, he imagined a cosmos vastly larger than the one implied by the geocentric system. Second, he reiterated and expanded on Anaxagoras’s deduction that the stars might be other suns, this time explicitly spelling out the kinds of grand distances necessary for the stars to nevertheless appear as fixed, cold dots in our sky.
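
    Aristarchus’s deduction can be made concrete with a quick sketch (the numbers here are my own illustrative inputs, not from the article): even the nearest star’s annual parallax is well under an arcsecond, far below the roughly 60-arcsecond resolution of the naked eye, which is why the stars appear fixed despite Earth’s motion.

```python
import math

# Illustrative numbers, not from the article: the parallax of even the
# nearest star is far below naked-eye resolution (~60 arcseconds),
# which is why the stars look "fixed" despite Earth's annual motion.
AU_KM = 1.496e8                # Earth-sun distance, the parallax baseline
PARSEC_KM = 3.086e13
d_nearest = 1.30 * PARSEC_KM   # roughly the distance to Proxima Centauri

# Small-angle approximation: parallax angle = baseline / distance.
parallax_rad = AU_KM / d_nearest
parallax_arcsec = math.degrees(parallax_rad) * 3600

print(f"parallax of the nearest star ≈ {parallax_arcsec:.2f} arcseconds")
```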

    The budding possibility of a multitude of worlds fully blossomed in the philosophy of the Greek atomists, most notably Epicurus. They envisioned not just other stars but other entire kosmoi (cosmic systems) beyond the one we know, each following the inexorable rules of the atoms it contains. Writing at about the same time as Aristarchus, Epicurus declared that “there is an infinite number of worlds, some like this world, others unlike it. For the atoms being infinite in number … are borne ever farther in their course.” His atoms were mathematical and ethical constructs, quite unlike the physically described quantum units of today’s physics, and yet in the way Epicurus reached toward a boundless universe he sounds shockingly prescient.

    3
    MANY WORLDS: Recent studies indicate that there may be a trillion planets in our galaxy—and then a trillion other galaxies in the observable universe. NASA, ESA, ESO / M. Kornmesser

    That pinnacle of glorious Epicurean mediocrity, alas, was followed by a lengthy retreat back into a constricted, Earth-centered cosmology. Aristotle retorted that “there cannot be more worlds than one,” and his great authority carried the day. Around 150 A.D., Claudius Ptolemy shrank even further from the kosmoi when he merged Aristotelian physics with state-of-the-art observations of stars and planets into a unified, Earth-centered model. The Ptolemaic system consisted of a set of nested celestial spheres, dispensing with exotic speculations about infinite space and other suns. By Ptolemy’s reckoning, the outermost crystalline sphere containing the fixed stars was about 20,000 times the radius of the Earth, making his entire cosmos only about 160 million miles wide in modern terms.

    What the Ptolemaic system lacked in grandeur, it made up in practicality. It predicted the motions of the planets and stars with admirable precision using a combination of mathematically appealing circular motions. Ptolemy’s astronomical writings, later translated by medieval Islamic scholars as the Almagest (literally “the greatest”), reigned supreme for more than a millennium. His authority was cemented when prominent theologians like Thomas Aquinas merged the Ptolemaic system with the Roman Catholic worldview during the Middle Ages. The outermost sphere of the cosmos equated with heaven; the Aristotelian “prime mover” that set the spheres in motion became one and the same with the Christian God.

    The same attributes that make exceptionalism appear impoverished from a scientific perspective made it precious from a theological point of view: only one Earth, one heaven, one God. But the fire of human imagination is not so easily snuffed. Some medieval Islamic astronomers continued to speculate about the existence of other worlds. Catholic scholars, too, pushed against the boundaries. Around 1450—a full century before the mystical speculations of Giordano Bruno—the German philosopher and astronomer Nicholas of Cusa wrote about the notion of infinite space, in contradiction to Ptolemaic concepts. Nicholas framed his ideas within a Catholic framework, exploring infinity as a natural corollary to the limitless glory of God, but his philosophy kept alive the possibility of a physically unbounded universe as well.

    Then along came Nicolaus Copernicus, and mediocrity began a full-on comeback.

    From outward appearances, Copernicus was an unlikely figure to knock the solar system askew and to set astronomy on its modern path to a multitude of planets. He worked as a canon in Warmia, a small, semi-autonomous Catholic state in what is now Poland, tending to various local political and economic disputes. He was a modest, well-liked figure, not particularly known for his controversial opinions. Professionally, his most notable achievements were probably in economics and monetary theory. There was a spark within that set him apart, however: the bold, revisionist astronomical ideas brewing in his head.

    Sometime before 1514, while he was still in his 30s, Copernicus wrote a summary of his new model of the solar system. Influenced by the arguments of Aristarchus, as well as by his own strong sense of the mathematical ugliness of the Ptolemaic system, Copernicus returned the sun to the center and set the Earth in motion about it. He circulated his short document, called the Commentariolus, among his friends, with the intention of expanding its arguments into a fully developed work of heliocentric cosmology. That magnum opus, De Revolutionibus Orbium Coelestium (On the Revolutions of the Heavenly Spheres) famously was not published until 1543, when he was on his deathbed. Copernicus was unconscious when a finished copy was thrust into his limp hands, and died that same day.

    The publication delay was not, as popular accounts often claim, a simple matter of Copernicus’s fear of the Catholic church. He was more afraid of the Church’s intellectual partners, the Aristotelian philosophers, who he worried (not unreasonably) might be brutal to this upstart living far from the intellectual heart of Europe. He also needed to perform detailed mathematical analysis and to collect astronomical observations in support of a theory that he was developing only in his spare time. Only in retrospect do those fears look absurd. It turns out that the time was ripe for a critical reexamination of entrenched classical Greek thinking. In the decades after its publication, De Revolutionibus was extensively read and discussed across Europe. The influential Danish astronomer Tycho Brahe even described Copernicus as “a second Ptolemy.”

    Two disciples of Copernicus were especially pivotal in establishing Copernican mediocrity—the notion that Earth does not sit in a privileged position, but rather is representative of the richness of the universe as a whole. In 1575, Thomas Digges, a leading astronomer in 16th-century England, published the first English translation of De Revolutionibus. He added commentary to clarify that the Copernican system was a physically realistic model of the solar system (not just a computational trick), and he overtly broached the idea that a sun-centered universe could be infinite in extent. To drive home this last point, Digges created a drawing showing, for the first time in history, how the stars might be scattered through endless space outside our solar system.

    A few years later, the German astronomer Michael Maestlin adopted Copernicus’s system as superior to Ptolemy’s, and spread heliocentric thinking broadly from his prominent position as a teacher at the University of Tübingen. Most notable among his students was a clever young fellow named Johannes Kepler, who starting in 1609 figured out that planets go around the sun in elliptical paths. This discovery thoroughly and finally smashed Ptolemy’s claustrophobic crystalline spheres. The universe was now wide open to all possibilities, and to endless worlds.

    From there, the concepts of Copernican mediocrity spread with astonishing rapidity—historically speaking. By the middle of the 17th century, heliocentrism was widely accepted across the Western world. By the 18th century, many leading intellectuals embraced not only the idea of other worlds, but even other inhabited worlds. Cyrano de Bergerac’s Comical History of the States and Empires of the Moon, published in 1657, introduced the reader to imaginary inhabitants of the moon. Jonathan Swift’s Gulliver’s Travels (1726) and Voltaire’s Micromegas (1752), whose central character is from a planet orbiting the star Sirius, casually assume a multiplicity of inhabited worlds as a backdrop to their social satire. Mediocrity was in vogue.

    On the scientific side, William Herschel (perhaps the most famous astronomer of the late 18th and early 19th century) was a firm proponent of the idea that life is common on other planets. Right around the time that he discovered the planet Uranus, Herschel shared what he believed to be telescopic evidence of intelligent life on the moon. He later argued that all worlds might be inhabited; improbable as it sounds, he even suggested that there is life on the sun, huddled beneath the luminous clouds covering its surface.

    3
    RED AND DEAD: The Mariner 4 probe flew past Mars in 1965, revealing a landscape that looked cratered and lifeless. NASA-JPL

    NASA Mariner 4

    Although many other researchers were not so enthusiastic, each generation found its champion of life beyond Earth. American astronomer Percival Lowell was especially effective at promoting such ideas well into the 20th century with his popular (if increasingly eccentric) writings about an imperiled advanced civilization on Mars. Science-fiction writers like Ray Bradbury, Arthur C. Clarke, Isaac Asimov, and Robert A. Heinlein further popularized many-worlds mediocrity with their compelling visions of alien beings on far-off worlds.

    That optimism suffered a major setback with the advent of the actual Space Age. Data from the first U.S. and Soviet space probes almost uniformly made the solar system seem shockingly hostile to life. NASA’s Mariner 2 flew past Venus in 1962 and found that the planet is not a steamy jungle at all; rather, it is an Earth-size sterilizing oven, with a crushing atmosphere and surface temperatures hovering around 800 degrees Fahrenheit.

    4
    NASA’s Mariner 2

    Two years later, Mariner 4 [above] flew past Mars and beamed back images of a barren, cratered landscape to crestfallen planetary scientists. In 1976, NASA sent the twin Viking landers to Mars to do a Hail-Mary search for life right there on the surface.

    NASA/Viking 1 Lander

    NASA Viking 2 Lander

    The $1 billion effort, equivalent to $5 billion today, yielded no conclusive signs of anything alive.

    In the decade after Viking, the astrobiologist Carl Sagan energetically raised the possibility that many habitable worlds could exist undetected around other stars, but attempts to find such “extrasolar planets” proved a bust again and again. For a few decades, it seemed possible that Earth was a genuine outlier. The march of mediocrity resumed only in 1995, with the first unambiguous detection of a planet around another sunlike star. It was a weirdo—bigger than Jupiter, hotter than Mercury, clearly unsuitable for life—but the discovery provided the scientific confidence needed to gain approval for the big-budget Kepler space telescope and its successors, including the new TESS (Transiting Exoplanet Survey Satellite).

    NASA/Kepler Telescope, and K2 March 7, 2009 until November 15, 2018

    NASA/MIT TESS replaced Kepler in search for exoplanets

    Those missions indicate that there could be more than a trillion planets scattered through our galaxy, including many billions of them similar to Earth in size and temperature. Sara Seager of MIT, the deputy director of the TESS mission, has publicly set a lifetime goal of finding 500 planets similar to Earth. “If we’re lucky, maybe 100 of them will show biosignatures,” she says, referring to the data readings that would indicate the presence of life.

    With a sample that size, scientists could compare the different types of worlds that support life, different styles of metabolism, and different stages of evolution. They could navigate to whole new levels of mediocrity, exploring Earth’s place within an entire pantheon of inhabited worlds. But identifying even a single other living world would deliver an unprecedented connection between humanity and the rest of the universe.

    There is no way to know when such a discovery will happen. There’s no way to be certain it will happen at all. But today, more than ever before, the prospect of cosmic mediocrity spreads wide open and inviting before us.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 9:15 am on December 26, 2019 Permalink | Reply
    Tags: "The Climate Learning Tree", Anthropogenic climate change, Catastrophists are not patient people., Nautilus,   

    From Nautilus: “The Climate Learning Tree” 

    Nautilus

    From Nautilus

    December 26, 2019
    Summer Praetorius

    Why we need to branch out to solve global warming.

    As a paleoclimatologist, I often find myself wondering why more people aren’t listening to the warnings, the data, the messages of climate woes—it’s not just a storm on the horizon, it’s here, knocking on the front door. In fact, it’s not even the front door anymore. You are on the roof, waiting for a helicopter to rescue you from your submerged house.

    The data is clear: The rates of current carbon dioxide release are 10 times greater than even the most rapid natural carbon catastrophe [1] in the geological records, which brought about a miserable hothouse world of acidic oceans lacking oxygen, triggering a pulse of extinctions.[2]

    Despite the evidence for anthropogenic climate change, views about the severity and impact of global warming diverge like branch points on a gnarly old oak tree (below).

    1

    The first split is between deniers and acceptors, but the denial branch doesn’t go anywhere—it’s just a dead stump, no longer sustained by the nutrients of evidence. The next bifurcation is over the root cause of climate change. Naturalists say “the climate has always changed,” which, aside from ignoring the evidence that the recent increase in carbon dioxide comes from burning fossil fuels,[3] is a diversion tactic that derails meaningful conversation by stating the obvious. Of course the climate is always changing; the relevant variable is the rate at which it does so.

    If we follow the branch line that accepts the evidence for human-induced climate change, the next major split is between those who see global warming as a good thing and those who see it as a bad one. The former view an ice-free Arctic as a business boon for oil extraction, or sweltering cities as an expanding market for air conditioners; or they are your clueless uncle joking that his property will go up in value once it is suddenly beachfront.

    This view is perched on a naïve premise that stability still prevails even amid the progressive undercutting of the systems that make it possible. It neglects the fact that accelerating the rates of change makes the probability of crossing thresholds far more likely.

    If you keep following the branch points higher and higher, you come to a split in the messaging around the outlook for the future. This split can even divide two climate scientists with similar backgrounds: one struggling with ecological grief[4] and depression over dying coral reefs and the world their children will inherit, the other always keeping their chin up, adamant that the only way to communicate and solve the problem is to wrap it in a bow of positivity.

    The divergent outlook of the future is like the old geological battle of gradualism versus catastrophism. Gradualists asserted that it was slow and steady processes like erosion that shaped the earth. Catastrophists pointed to extinction events in the fossil record as evidence for episodic events that punctuated the status quo and completely altered Earth’s biosphere and geosphere—events like asteroid impacts or volcanism-induced carbon catastrophes. Both, it would turn out, were right. They were just pointing to different periods in Earth’s history—different slopes on the graph—adamant that they had the proof to back up their claim.

    If we were to consider this dualism in terms of personality, we would all fall somewhere on the spectrum of gradualist to catastrophist. Gradualists expect more or less steady rates of change. They have money in the stock market; trust in stability. They are inclined to believe science will engineer a solution to climate change. Catastrophists have a healthy respect for the unexpected. They store their money in gold and bury it under the apple tree, viewing any day as ripe for collapse: earthquake, stock market, tsunami, bolide. Catastrophists are not patient people.

    The fact is, climate change will come both slow and steady as well as fast and furious, reflecting the long-term average changes in global temperature and the short-term extremes that will continue to get more and more outrageous as the system absorbs energy. The last five decades have been to some extent slow and steady because the oceans have absorbed so much of the excess heat energy,[5] buffering us from the brunt of it. This has likely contributed to a false sense of security for those who don’t know the climate system is riddled with thresholds and tipping points,[6] thinking that future changes will unfold just as gradually as the past.

    But all that heat is now fueling massive storms and generating marine heatwaves that can take down entire ecosystems on shockingly abrupt timescales. Slow erosion can give way to sudden failure. The last five years have given us a taste of the fast and furious. In those five years, we have witnessed the collapse of coral reefs,[7] the collapse of the California kelp forest,[8] and wildfires[9] and hurricanes[10] of unprecedented proportions. These are local catastrophes unfolding in real time for the occupants of these regions, their lives already divided into before and after, much like a geological timeline.

    If there is one lesson we should heed from Earth history, it’s that thresholds become far more likely as the rates and magnitude of change increase. And the danger of thresholds is that they are effectively one-way doors: easily walked through and closed to re-entry. This is the time-asymmetry of instability: It takes much longer to establish stability than it does to unravel it.

    Rates may be the simplest and most critical aspect of climate change to understand, and yet they are not something most people see on a regular basis. When I talk about rates, I take for granted that I am conjuring an image in my mind the whole time, in part because I stare at graphs of climate history every day. All those stories of ecological catastrophe are compactly folded up into a single near-vertical line on a graph. That’s when you know you’re in trouble: when the slope suddenly goes vertical (below).

    2
    DANGER AHEAD: Temperature anomalies for the Holocene period (green)[11] compared with recent global warming (blue)[12] and future projections of a low carbon emission scenario (RCP2.6, pink) and a high emission scenario (RCP8.5, black) from the IPCC AR5 report.[13] The Holocene exhibits relatively gradual rates of change, whereas rates of modern and projected temperature increase are many times greater. We still have agency over whether we take the double-black-diamond route or an intermediate slope.
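Since the essay turns on rates, meaning slopes on a graph, a quick back-of-the-envelope comparison may help. The numbers below are my own illustrative round figures (roughly 1 °C of change spread across the ~11,000-year Holocene versus roughly 1 °C over the past century), not data taken from the chart:

```python
# Compare average warming rates on a common scale (degrees C per century).
# These inputs are illustrative round numbers, not measured data.

def rate_per_century(delta_temp_c, years):
    """Average rate of temperature change, in degrees C per century."""
    return delta_temp_c / years * 100

holocene = rate_per_century(1.0, 11_000)  # ~1 C spread over ~11,000 years
modern = rate_per_century(1.0, 100)       # ~1 C over the past century

print(f"Holocene average: {holocene:.4f} C/century")
print(f"Modern average:   {modern:.4f} C/century")
print(f"Modern rate is roughly {modern / holocene:.0f}x the Holocene average")
```

Plotted on the same axes, the modern segment is the near-vertical line the essay describes.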

    For many climate scientists, the awareness of being on the knife’s edge of the graph, plotting steeper and steeper every day, is like learning to live with vertigo: partitioning off a deep sense that we are no longer on stable ground while simultaneously trying to get on with the day, show up to work, laugh with our kids.

    Those who warn about potential instability have always been labeled “Cassandras of doom.” There’s an irony to this because Cassandra was right. She was just ignored as an “alarmist”— that dirty word now cleverly used to emasculate anyone concerned about climate change into the category of “hysterical woman.”

    The thing about alarms is that they turn out to be useful. The canary in the coalmine, smoke detectors, tornado sirens, cell phone alerts; we generally agree that instruments to detect and convey impending threats are a step in the right direction. In fact, we require them in most buildings. The inconvenience of an occasional false alarm is far outweighed by the benefit of not dying in your sleep in a raging fire.

    So while catastrophists may get the eye-roll of hyperbole, gradualists warrant an occasional head-slap of naïveté. Their apparent inability to conceive of a fundamentally different world leads them into a default mode of complacency, one that, ironically, makes the very thing they aren’t expecting much more likely. On the flip side, catastrophists are more prone to expect disaster, and might be more motivated to prevent potential threats. Each will unwittingly prove the other right, if they have their way.

    What if instead of feeling threatened by differences in opinion, we were to reconceptualize them in much the same way a tree distributes its canopy to collect as much sunlight as possible—as a multi-pronged approach to getting the job done? In the same sense that both fast and slow processes contribute to Earth change, both steady progress and immediate local action will contribute to climate solutions. Let’s take stock of our pace and work together, thankful there is someone else to fill the space we can’t. After all, we are not lone trees, but a living, connected forest, and balance is essential for stability.

    References

    1. Cui, Y., et al. Slow release of fossil carbon during the Palaeocene-Eocene thermal maximum. Nature Geoscience 4, 481-485 (2011).

    2. McInerney, F.A. & Wing, S.L. The Paleocene-Eocene thermal maximum: A perturbation of carbon cycle, climate, and biosphere with implications for the future. Annual Review of Earth and Planetary Sciences 39, 489–516 (2011).

    3. “How Do We Know That Recent CO2 Increases Are Due to Human Activities?” RealClimate.org (2004).

    4. Cunsolo, A. & Ellis, N.R. Ecological grief as a mental health response to climate change-related loss. Nature Climate Change 8, 275–281 (2018).

    5. Cheng, L., Abraham, J., Hausfather, Z., & Trenberth, K.E. How fast are the oceans warming? Science 363, 128-129 (2019).

    6. Lenton, T.M. Climate tipping points—too risky to bet against. Nature.com (2019).

    7. Hughes, T.P., et al. Global warming impairs stock–recruitment dynamics of corals. Nature 568, 387–390 (2019).

    8. Rogers-Bennett, L. & Catton, C.A. Marine heat wave and multiple stressors tip bull kelp forest to sea urchin barrens. Scientific Reports 9, 1–9 (2019).

    9. Williams, A.P., et al. Observed impacts of anthropogenic climate change on wildfire in California. Earth’s Future 7, 892–910 (2019).

    10. Trenberth, K.E., et al. Hurricane Harvey links to ocean heat content and climate change adaptation. Earth’s Future 6, 730–744 (2018).

    11. Marcott, S.A., Shakun, J.D., Clark, P.U., & Mix, A.C. A reconstruction of regional and global temperature for the past 11,300 years. Science 339, 1198-1201 (2013).

    12. GISTEMP Team. GISS Surface Temperature Analysis (GISTEMP), version 4. NASA Goddard Institute for Space Studies (2019).

    13. IPCC Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Stocker, T.F., et al. (Eds.) Cambridge University Press (2013).

    See the full article here .


     
  • richardmitnick 11:56 am on December 8, 2019 Permalink | Reply
    Tags: , , , , , HSS-Smith; Hairston; and Slobodkin, Nautilus, Robert Paine: "The Ecologist Who Threw Starfish", Sea otters, The HSS hypothesis was essentially a description of the natural world based on observation., There are ecological rules that regulate the numbers and kinds of animals and plants in a given place.   

    From Nautilus: “The Ecologist Who Threw Starfish” 


    March 10, 2016 [Just now in social media]
    By Sean B. Carroll
    Illustration by Aad Goudappel

    1
    Illustration: Aad Goudappel

    Robert Paine showed us the surprising importance of predators.

    Even in 1963, one had to go pretty far to find places in the United States that were not disturbed by people. After a good deal of searching, Robert Paine, a newly appointed assistant professor of zoology at the University of Washington in Seattle, found a great prospect at the far northwestern corner of the lower 48 states.

    On a field trip with students to the Pacific Coast, Paine wound up at Mukkaw Bay, at the tip of the Olympic Peninsula. The curved bay’s sand and gravel beach faced west into the open ocean, and was dotted with large outcrops. Among the rocks, Paine discovered a thriving community. The tide pools were full of colorful creatures—green anemones, purple sea urchins, pink seaweed, bright red Pacific blood starfish, as well as sponges, limpets, and chitons. Along the rock faces, the low tide exposed bands of small acorn barnacles, and large, stalked goose barnacles, beds of black California mussels, and some very large, purple and orange starfish, called Pisaster ochraceus.

    “Wow, this is what I have been looking for,” he thought.

    2
    Star hurler: Robert Paine at Mukkaw Bay, on the Olympic Peninsula in Washington, in 1974, and again recently. To understand the role of predatory starfish, he hurled them from an area and later returned to assess the sea life without them.
    Left: Bob Paine / Alamy.com ; Right: Kevin Schafer / Alamy Stock Photo

    The next month, June 1963, he made the four-hour journey back to Mukkaw from Seattle, first crossing Puget Sound by ferry, then driving along the coastline of the Strait of Juan de Fuca, then onto the lands of the Makah Nation, and out to the cove of Mukkaw Bay. At low tide, he scampered onto a rocky outcrop.

    With a crowbar in hand and mustering all of the leverage he could with his 6-foot, 6-inch frame, he pried loose every purple or orange starfish on the slab, grabbed them, and hurled them as far as he could out into the bay.

    So began one of the most important experiments in the history of ecology.

    The 1960s were a time of revolution, but it was not all just sex, drugs, and rock and roll. Inside laboratories across the world, scientists were plumbing the depths of the gene to decipher the genetic code and the molecular rules of life, sparking a revolution that would gather dozens of Nobel Prizes and ultimately transform medicine.

    But largely outside of this spotlight, a few other biologists had started asking some simple, seemingly naïve questions about the wider world: Why is the planet green? Why don’t the animals eat all of the food? And what happens when certain animals are removed from a place? These questions led to the discovery that, just as there are molecular rules that regulate the numbers of different kinds of molecules and cells in the body, there are ecological rules that regulate the numbers and kinds of animals and plants in a given place. And these rules may have as much to do with our future welfare as all the molecular rules we may ever discover, or more.

    Why Is the Planet Green?

    Paine’s journey to Mukkaw Bay and its starfish was a circuitous one. Born and raised in Cambridge, Massachusetts, Paine fed his interest in nature by exploring the New England woods. His first love was bird-watching, with butterflies and salamanders close seconds. Paine was inspired by the writings of prominent naturalists, who opened his eyes to the drama of wildlife. He was as enthralled by intimate accounts of spider behavior as by Jim Corbett’s hair-raising tales of tracking down tigers and leopards in rural India, in Man-Eaters of Kumaon.

    After enrolling at Harvard, and inspired by several famous paleontologists on the faculty, Paine developed an intense new interest in animal fossils. He was so fascinated by the marine animals that lived in the seas more than 400 million years ago that he decided to study geology and paleontology in graduate school at the University of Michigan.

    The course requirements entailed rather dry surveys of various animal “ologies”—ichthyology (fishes), herpetology (reptiles and amphibians), and so forth—which Paine found very boring. One exception was a course on the natural history of freshwater invertebrates taught by ecologist Fred Smith. Paine appreciated how the professor provoked his students to think.

    One memorable spring day, the sort of day when professors don’t feel like teaching and students don’t want to be inside, Smith told the class, “We are going to stay in this room.” He looked outside at a tree that was just getting its leaves.

    
“Why is that tree green?” Smith asked, looking out the window.

    “Chlorophyll,” a student replied, correctly naming the leaf pigment, but Smith was heading down a different path.

    “Why isn’t all of its greenery eaten?” Smith continued. It was such a simple question, but Smith showed how even such basic things were not known. “There is a host of insects out there. Maybe something is controlling them?” he mused.

    At the end of his first year, Smith sensed Paine’s unhappiness with geology, and suggested that he consider ecology instead. “Why don’t you be my student?” he asked.

    It was a major change in direction, and there was a catch. Paine proposed to study some fossil animals from the Devonian period in nearby rocks. Smith said, “No way.” Paine had to study living, not extinct creatures. Paine agreed, and Smith became his adviser.

    Smith had long been interested in brachiopods or “lamp shells,” marine animals with an upper and lower shell, joined at a hinge. Paine knew about the animals because they were abundant in the fossil record, but their present-day ecology was not well known. Paine’s first task was to find living forms. Lacking a nearby ocean, Paine made scouting trips to Florida in 1957 and 1958, and found some promising locations. With Smith’s approval, he began what he called his “graduate-student sabbatical.” In June 1959, he drove back to Florida and began living out of his Volkswagen van. For 11 months he studied the range, habitat, and behavior of one species.

    It was the sort of work that provided a solid foundation to a naturalist-in-training, and it would earn Paine his Ph.D. But the filter-feeding brachiopods were not the most dynamic animals. And sifting large amounts of sand for the less than quarter-inch-long creatures was, well, just not very exciting.

    As Paine shoveled his way along the Gulf Coast, it was not Florida’s brachiopods that captured his imagination. On the Florida panhandle, Paine discovered the Alligator Harbor Marine Laboratory, and was given permission to stay there. At the tip of nearby Alligator Point, he noticed that for a few days each month, the low tide exposed an enormous gathering of large predatory snails, such as the horse conch, some more than a foot long. The mud and sawgrass of Alligator Point was not at all boring, quite the contrary—it was a battlefield.

    On top of his thesis work on brachiopods, Paine made a careful study of the snails. He counted eight abundant snail species, and took detailed notes on who ate whom. In this “gastropod eats gastropod” arena, Paine saw that without exception it was always a larger snail devouring a smaller one, but not everything that was smaller. The 11-pound horse conch, for example, dined almost exclusively on other snails, and paid little attention to smaller prey such as the clams that were the main fare for the smaller snails.

    While Paine was in Florida watching predators up close, his advisor Smith had kept thinking about those green trees and the roles of predators in nature. Smith was keenly interested in not just the structure of communities, but in the processes that shaped them. He often had bag lunches with two colleagues, Nelson Hairston Sr. and Lawrence Slobodkin, during which they had friendly arguments about major ideas in ecology. All three scientists were interested in the processes that control animal populations, and they debated explanations circulating at the time. One major school of thought was that population size was controlled by physical conditions such as the weather. Smith, Hairston, and Slobodkin (hereafter dubbed “HSS”) all doubted this idea because, if true, it meant that population sizes fluctuated randomly with the weather. Instead, the trio was convinced that biological processes must control the abundance of species in nature, at least to some degree.

    HSS pictured the food chain as subdivided into different levels according to the food each consumed (known as trophic levels). At the bottom were the decomposers that degrade organic debris; above them were the producers, the plants that relied on sunlight, rain, and soil nutrients; at the next level were the consumers, the herbivores that ate plants; and above them, the predators that ate the herbivores.

    The ecological community generally accepted that each level limited the next higher level; that is, populations were positively regulated from the “bottom up.” But Smith and his lunch buddies pondered an observation that seemed at odds with this view: The terrestrial world is green. They knew that herbivores generally do not completely consume all of the vegetation available. Indeed, most plant leaves show signs of being only partially eaten. To HSS, that meant that herbivores were not food-limited, and that something else was limiting herbivore populations. That something, they believed, was predators, negatively regulating herbivore populations from the “top down” in the food chain. While predator-prey relationships had long been studied by ecologists, it was generally thought that the availability of prey regulated predator numbers and not vice versa. The proposal that predators as a whole acted to regulate prey populations was a radical twist.

    To bolster their case, HSS noted instances where herbivore populations had exploded after the removal of predators, such as the Kaibab deer population in Northern Arizona that increased after decimation of local wolf and coyote populations. They assembled their observations and arguments in a paper entitled “Community Structure, Population Control, and Competition” and submitted it to the journal Ecology in May 1959.

    It was rejected. The article did not see the light of day until the year-end issue of the American Naturalist in 1960.

    The proposal that predators regulate herbivore populations is now widely known as the “HSS hypothesis” or “Green World Hypothesis.” While HSS declared, “The logic used is not easily refuted,” their ideas, like most that challenge the status quo, drew a lot of criticism. One legitimate critique was that their claims needed testing and more evidence. And that was just what Smith’s former student set out to do on Mukkaw Bay in 1963.

    3
    Ruler of the tidal zone: Starfish are opportunistic gourmands that eat barnacles, limpets, snails, and mussels. In this rocky intertidal zone on the Pacific coast, the starfish prey on mussels, which enables other species such as kelp and small animals to occupy the community. David Cowles, rosario.wallawalla.edu/inverts

    Kick It and See

    The HSS hypothesis was essentially a description of the natural world based on observation. Indeed, virtually all of ecology up to the 1960s had been based upon observation. The limitation of such observational biology was that it left itself open to alternative explanations and hypotheses. Paine realized that if he wanted to understand how nature worked—the rules that regulated animal populations—he would have to find situations where he could intervene and break them. In the specific case of the roles of predators, he needed a setting where he could remove predators and see what happened—what would later be described as “kick it and see” ecology. Hence, the starfish-hurling.

    Twice a month every spring and summer, and once a month in the winter, Paine kept returning to Mukkaw to repeat his starfish-throwing ritual. On a stretch of rock 25 feet long and 6 feet tall, he removed all of the starfish. On an adjacent stretch, he let nature take her course. On each plot, he counted the number and calculated the density of the inhabitants, tracking 15 species in all.

    To understand the structure of the Mukkaw food web, Paine paid close attention to what the predators were eating. The starfish has the neat trick of everting its stomach to consume prey. To see what they were feasting upon, Paine turned more than 1,000 starfish over and examined the animals held against their stomachs. He discovered that the starfish was an opportunistic gourmand that ate barnacles, chitons, limpets, snails, and mussels. While the small barnacles were the most numerous prey—the starfish was able to scarf up dozens of the little crustaceans at a time—they were not its primary source of calories. Mussels and chitons were the most important contributors to the starfish diet.

    By September, just three months after he began removing the starfish, Paine could already see that the community was changing. The acorn barnacles had spread out to occupy 60 to 80 percent of the available space. But by June of 1964, a year into the experiment, the acorn barnacles were in turn being crowded out by small, but rapidly growing goose barnacles and mussels. Moreover, four species of algae had largely disappeared, and the two limpet and two chiton species had abandoned the plot. While not preyed upon by the starfish, the anemone and sponge populations had also decreased. However, the population of one small predatory snail, Thais emarginata, increased 10- to 20-fold.

    Altogether, the removal of the predatory starfish had quickly reduced the diversity of the intertidal community from the original 15 species to eight.

    The results of this simple experiment were astonishing. They showed that one predator could control the composition of species in a community through its prey—affecting both animals it ate as well as animals and plants that it did not eat.

    As Paine continued the experiment over the next five years, the line of mussels advanced down the rock face by an average of almost 3 feet toward the low tide mark, monopolizing most of the available space and pushing all other species out completely. Paine realized that the starfish exerted their strong effects primarily by keeping the mussels in check. For the animals and algae of the intertidal zone, the important resource was real estate—space on the rocks. The mussels were very strong competitors for that space, and without the starfish, they took over and forced other species out. The predator stabilized the community by negatively regulating the population of the competitively dominant species.

    Paine’s starfish-tossing was strong confirmation of the HSS hypothesis that predators exerted control from the top down. But this was just one experiment with one predator in one spot on the Pacific Coast. If Paine was going to draw any generalities, it was important to test other sites and other predators. The dramatic results of the Mukkaw Bay experiments inspired a flurry of kick-it-and-see experiments.

    Paine discovered uninhabited Tatoosh Island when he was out on a salmon-fishing trip. On this small, storm-battered island, several miles up the coast from Mukkaw Bay and about half a mile offshore, Paine found many of the same species clinging to the rocks, including large Pisaster starfish. With the permission of the Makah tribe, Paine started tossing them back in the water. Within a few months, the mussels started spreading across the predator-free rocks.

    While on sabbatical in New Zealand, Paine investigated another intertidal community at the north end of a beach near Auckland. There, he found a different starfish species called Stichaster australis that preyed on the New Zealand green-lipped mussel, the same species exported to restaurants around the world. Over a period of nine months Paine removed all of the starfish from one 400-square-foot area, and left an adjacent, similar plot alone. He saw immediate and striking effects. The treated area quickly began to be dominated by mussels. Six of 20 other species initially present vanished in just eight months; within 15 months the majority of space was occupied solely by the mussels.

    To Paine, the predatory starfish of Washington and New Zealand were “keystones” in the structure of intertidal communities. Just as the stone at the apex of an arch is necessary for the stability of the structure, these apex predators at the top of the food web are critical to the diversity of an ecosystem. Dislodge them, and as Paine showed, the community falls apart. Paine’s pioneering experiments, and his coining of the term “keystone species” prompted the search for keystones in other communities, and would lead him to another seminal idea.

    Sea Otters and the Cascading Effect

    Paine’s kick-it-and-see experiments were not limited to manipulating predators. He was interested in understanding the rules that determined the overall make-up of coastal communities. Other prominent inhabitants of the tide pools and shallow waters included a great variety of algae, such as the large brown seaweed known as kelp. But their distribution was patchy—abundant and diverse in some places, nearly absent from others. One of the most prevalent grazers on the algae were sea urchins. Paine and zoologist Robert Vadas set out to find out what effect the urchins had on algal diversity.

    To do so, they removed all of the urchins by hand from some pools around Mukkaw Bay, or barred them from areas within Friday Harbor (near Bellingham) with wire cages. They left nearby pools and areas untouched as controls for their experiment. They observed dramatic effects of removing the sea urchins—several species of algae burst forth in the urchin-free zones. The control areas with large urchin populations contained very few algae.

    Paine also noticed that such urchin-dominated “barrens” were common in pools around Tatoosh Island. At first glance, the urchin barrens seemed to violate a key assertion of the HSS hypothesis that herbivores tended not to consume all of the vegetation available. But the explanation for why there were such barrens in Pacific waters would soon become clear—in the surprising discovery of another keystone species, an animal that had been removed from Washington’s coast long before Paine started tinkering with nature.

    Sea otters once ranged from Northern Japan to the Aleutian Islands and along the North American Pacific Coast as far south as Baja California. Coveted for their luxurious fur, the densest of all marine mammals, the animals were hunted so intensively in the 18th and 19th centuries that by the early 1900s only 2,000 or so animals remained of an original population of 150,000 to 300,000, and the species had disappeared from most of its range, including Washington state. The species gained protected status in 1911 under the terms of an international treaty. After their near-extermination from the Aleutian Islands, the animals rebounded to high densities in some locations.

    In 1971, Paine was offered a trip to one of those places—Amchitka Island, a treeless island in the western part of the Aleutians. Some students were working on the kelp communities there and Paine flew out to offer his advice. Jim Estes, a student from the University of Arizona, met with Paine and described his research plans. Estes was interested in sea otters, but he was not an ecologist. He explained to Paine that he was thinking about studying how the kelp forests supported the thriving sea otter populations.

    “Jim, you are asking the wrong questions,” Paine told him. “You want to look at the three trophic levels: sea otters eat urchins, sea urchins eat kelp.”

    4
    The importance of being a sea otter: In the presence of sea otters, sea urchin populations are controlled, which allows for kelp forests to grow (left). In the absence of sea otters, urchins proliferate, forming “barrens” that lack kelp (right). Bob Steneck

    Estes had only seen Amchitka with its abundant otters and kelp forests. He quickly realized the opportunity to compare islands with and without otters. With fellow student John Palmisano, Estes traveled to Shemya Island, a 6-square-mile chunk of rock 200 miles to the west without otters. Their first hint that something was very different was when they walked down to the beach and saw huge sea urchin carcasses. But the real shock came when Estes dove under the water for the first time.

    “The most dramatic moment of learning in my life happened in less than a second. And that was sticking my head in the water at Shemya Island,” Estes recalled. “We were in this sea of just sea urchins. And there was no kelp anywhere. Any fool would have been able to figure out what was going on.”

    Estes and Palmisano saw other striking differences between the two communities around each island: Colorful rockfish, harbor seals, and bald eagles were abundant around Amchitka, but not around otter-less Shemya. They proposed that the vast differences between the two communities were driven by sea otters, which were voracious predators of sea urchins. They suggested that sea otters were keystone species whose negative regulation of sea urchin populations was key to the structure and diversity of the coastal marine community.

    Estes’ and Palmisano’s observations suggested that the reintroduction of sea otters would lead to a dramatic restructuring of coastal ecosystems. Shortly after their pioneering study, the opportunity arose to test the impact of sea otters as they spread along the Alaskan coast and re-colonized various communities. In 1975, sea otters were absent from Deer Harbor in southeast Alaska. But by 1978, the animals had established themselves there, sea urchins were small and scarce, the sea bottom was littered with their remains, and tall, dense stands of kelp had sprung up.

    The presence of the otters had suppressed the urchins, which had otherwise suppressed the growth of kelp. This kind of double negative logic is widespread in biology. In this instance, otters “induce” the growth of kelp by repressing the population of sea urchins. The discovery of the regulation of kelp forest by sea otter predation on herbivorous urchins was very strong support for the HSS hypothesis and for Paine’s keystone species concept.
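    The double-negative logic of the cascade can be sketched in a few lines of code (a toy illustration with made-up presence/absence states, not an ecological model fitted to data):

```python
# Toy illustration of the otter -> urchin -> kelp trophic cascade.
# Presence/absence states are a deliberate simplification.

def kelp_cover(otters_present: bool) -> str:
    """Apply the double-negative logic of the cascade."""
    urchins_abundant = not otters_present   # otters suppress urchins
    kelp_abundant = not urchins_abundant    # urchins suppress kelp
    return "kelp forest" if kelp_abundant else "urchin barren"

print(kelp_cover(otters_present=True))    # Amchitka-like: kelp forest
print(kelp_cover(otters_present=False))   # Shemya-like: urchin barren
```

    Real systems are quantitative, of course (otters reduce urchin density rather than eliminating it), but the sign of each interaction is what the keystone concept turns on.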

    In ecological terms, the predatory sea otters have a cascading effect on multiple trophic levels below them. Paine coined a new term to describe the strong, top-down effects that he and others had discovered upon the removal or reintroduction of species: He called them trophic cascades.

    The discovery of trophic cascades was exciting. The many indirect effects caused by the presence or absence of predators (starfish, sea otters) were surprising because they revealed previously unsuspected, indeed unimagined, connections among creatures. Who would have thought that the growth of kelp forests depended on the presence of sea otters? These dramatic and unexpected effects raised the possibility that, unbeknownst to biologists, trophic cascades were operating elsewhere to shape other kinds of communities. And if they were, then keystone species and trophic cascades might be general features of ecosystems—rules of regulation that governed the numbers and kinds of creatures in a community.

    Indeed, trophic cascades have been discovered across the globe, where keystone predators such as wolves, lions, sharks, coyotes, starfish, and spiders shape communities. And because of their newly appreciated regulatory roles, the loss of large predators over the past century has Estes, Paine, and many other biologists deeply concerned.

    Today, of course, one predator has more influence than any other. We have created the extraordinary ecological situation where we are the top predator and the top consumer in all habitats. “Humans are certainly the overdominant keystones and will be the ultimate losers if the rules are not understood and global ecosystems continue to deteriorate,” Paine says. The only species that can regulate us is us.

    See the full article here .

    Loren Eiseley tells the story of another star thrower (The Star Thrower, Harcourt Brace Jovanovich, 1978, ©Estate of Loren C. Eiseley, pg. 169): a man (unnamed) who said he threw the starfish back because one never knew where the next important DNA might originate.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

     
  • richardmitnick 11:30 am on November 28, 2019 Permalink | Reply
    Tags: "The Secret History of the Supernova at the Bottom of the Sea", , , , , , Fe-60 a heavy isotope of iron with four more neutrons than the regular isotope, Ferromanganese crusts, Nautilus, SN 1987A announced the violent collapse of a massive star.,   

    From Nautilus: “The Secret History of the Supernova at the Bottom of the Sea” 

    Nautilus

    From Nautilus

    November 28, 2019
    Julia Rosen

    1
    Lead composite image credit: Pinwheel-Shaped Galaxy by NASA, ESA, The Hubble Heritage Team, (STScI/AURA) and A. Riess (STScI) and Red Sea Coral Reef by Wusel700

    In February 1987, Neil Gehrels, a young researcher at NASA’s Goddard Space Flight Center, boarded a military plane bound for the Australian Outback. Gehrels carried some peculiar cargo: a polyethylene space balloon and a set of radiation detectors he had just finished building back in the lab. He was in a hurry to get to Alice Springs, a remote outpost in the Northern Territory, where he would launch these instruments high above Earth’s atmosphere to get a peek at the most exciting event in our neck of the cosmos: a supernova exploding in one of the Milky Way’s nearby satellite galaxies.

    Like many supernovas, SN 1987A announced the violent collapse of a massive star.

    SN1987a from NASA/ESA Hubble Space Telescope in Jan. 2017 using its Wide Field Camera 3 (WFC3).

    What set it apart was its proximity to Earth; it was the closest stellar cataclysm since Johannes Kepler spotted one in our own Milky Way galaxy in 1604. Since then, scientists have thought up many questions that could only be answered with a front-row seat to another supernova, questions like this: How close does a supernova need to be to devastate life on Earth?

    Back in the 1970s, researchers hypothesized that radiation from a nearby supernova could annihilate the ozone layer, exposing plants and animals to harmful ultraviolet light, and possibly cause a mass extinction. Armed with new data from SN 1987A, Gehrels could now calculate a theoretical radius of doom, inside which a supernova would have grievous effects, and how often dying stars might stray inside it.

    “The bottom line was that there would be a supernova close enough to the Earth to drastically affect the ozone layer about once every billion years,” says Gehrels, who still works at Goddard. That’s not very often, he admits, and no threatening stars prowl the solar system today. But Earth has existed for 4.6 billion years, and life for about half that time, meaning the odds are good that a supernova blasted the planet sometime in the past. The problem is figuring out when. “Because supernovas mainly affect the atmosphere, it’s hard to find the smoking gun,” Gehrels says.
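    Gehrels’ once-per-billion-years figure translates into a rough probability via Poisson statistics (a back-of-the-envelope sketch that assumes supernovas arrive independently at a constant rate):

```python
import math

rate_per_gyr = 1.0    # ~1 ozone-threatening supernova per billion years (Gehrels)
life_span_gyr = 2.3   # life has existed for roughly half of Earth's 4.6 Gyr

expected_events = rate_per_gyr * life_span_gyr
p_at_least_one = 1 - math.exp(-expected_events)   # Poisson: P(N >= 1)

print(f"expected nearby supernovas: {expected_events:.1f}")
print(f"chance of at least one: {p_at_least_one:.0%}")   # ~90%
```

    With an expected count of about 2.3 events, the chance that life on Earth has dodged every one is only around 10 percent.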

    Astronomers have searched the surrounding cosmos for clues, but the most compelling evidence for a nearby supernova comes—somewhat paradoxically—from the bottom of the sea. Here, a dull and asphalt black mineral formation called a ferromanganese crust grows on the bare bedrock of underwater mountains—incomprehensibly slowly. In its thin, laminated layers, it records the history of planet Earth and, according to some, the first direct evidence of a nearby supernova.

    2
    Plain-looking, but important: Ferromanganese crusts collected by James Hein near Hawaii. James Hein

    These kinds of clues about ancient cosmic explosions are immensely valuable to scientists, who suspect that supernovas may have played a little-known role in shaping the evolution of life on Earth. “This actually could have been part of the story of how life has gone on, and the slings and arrows that it had to dodge,” says Brian Fields, an astronomer at the University of Illinois at Urbana-Champaign. But to understand just how supernovas affected life, scientists needed to link the timing of their explosions to pivotal events on Earth, such as mass extinctions or evolutionary leaps. The only way to do that is to trace the debris they deposited on Earth by finding elements on our planet that are primarily fused inside supernovas.

    Fields and his colleagues named a few such supernova-forged elements—mainly rare radioactive metals that decay slowly, making their presence a sure sign of an expired star. One of the most promising candidates was Fe-60, a heavy isotope of iron with four more neutrons than the regular isotope and a half-life of 2.6 million years. But finding Fe-60 atoms scattered on the Earth’s surface was no easy task. Fields estimated that only a very small amount of Fe-60 would have actually reached our planet, and on land, it would have been diluted by natural iron, or been eroded and washed away over millions of years.
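    The 2.6-million-year half-life is what makes Fe-60 such a useful clock: a few half-lives after a nearby supernova, a measurable fraction of the atoms still survives. A quick sketch of the standard decay law:

```python
def surviving_fraction(age_myr: float, half_life_myr: float = 2.6) -> float:
    """Radioactive decay law: N/N0 = 2^(-t / t_half), with t in millions of years."""
    return 2 ** (-age_myr / half_life_myr)

# Fraction of the original Fe-60 left after one, two, and ten half-lives.
for age in (2.6, 5.2, 26.0):
    print(f"after {age:4.1f} Myr, {surviving_fraction(age):.4f} of the Fe-60 remains")
```

    After one half-life, half the atoms remain; after ten, less than a thousandth. That window of a few tens of millions of years is long enough to record geologically recent supernovas but short enough that any Fe-60 found today must have arrived relatively recently.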

    So scientists looked instead at the bottom of the sea, where they found Fe-60 atoms in the ferromanganese crusts, which are rocks that form a bit like stalagmites: They precipitate out of liquid, adding successive layers, except they are composed of metals and form extensive blankets instead of individual spires. Composed primarily of iron and manganese oxides, they also contain small amounts of almost every metal in the periodic table, from cobalt to yttrium.

    As iron, manganese, and other metal ions wash into the sea from land or gush from underwater volcanic vents, they react with the oxygen in seawater, forming solid substances that precipitate onto the ocean floor or float around until they adhere to existing crusts. James Hein of the United States Geological Survey, who has studied crusts for more than 30 years, says that it remains a mystery exactly how they establish themselves on rocky stretches of seafloor, but once the first layer accumulates, more layers pile on—up to 25 centimeters thick.

    That enables crusts to serve as cosmic historians that keep records of seawater chemistry, including the elements that serve as timestamps of dying stars. One of the oldest crusts, fished out by Hein southwest of Hawaii in the 1980s, dates back more than 70 million years, to a time when dinosaurs roamed the planet and the Indian subcontinent was just an island in the ocean halfway between Antarctica and Asia.

    The crusts’ growth is one of the slowest processes known to science—they put on about five millimeters every million years. For comparison, human fingernails grow about 7 million times faster. The reason for that is plain math: There’s less than one atom of iron or manganese for every billion molecules of water in the ocean—and once those rare atoms settle onto a crust, they must resist the pull of passing currents and the power of other chemical interactions that might pry them loose until they get trapped by the next layer.
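    That 7-million-fold figure is easy to check, taking the five millimeters per million years at face value:

```python
crust_mm_per_myr = 5.0                       # crust growth: ~5 mm per million years
crust_mm_per_year = crust_mm_per_myr / 1e6   # = 5e-6 mm per year

# "7 million times faster" implies a fingernail growth rate of:
nail_mm_per_year = crust_mm_per_year * 7e6
print(f"{nail_mm_per_year:.0f} mm/yr")       # 35 mm/yr, i.e. ~3.5 cm per year
```

    The result, about 3.5 centimeters per year, matches the commonly cited fingernail growth rate, so the comparison holds up.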

    Unlike the slow-growing crusts, however, supernova explosions happen almost instantly. The most common type of supernova occurs when a star runs out of its hydrogen and helium fuel, causing its core to burn heavier elements until it eventually produces iron. That process can take millions of years, but the star’s final moments take only milliseconds. As heavy elements accumulate in the core, it becomes unstable and implodes, sucking the outer layers inward at a quarter of the speed of light. But the density of particles in the core soon repels the implosion, triggering a massive explosion that shoots a cloud of stellar debris out into space—including Fe-60 isotopes, some of which eventually find their home in ferromanganese crusts.

    3
    Meet the Earth’s historian: Klaus Knie used this 25-cm-thick ferromanganese crust, sampled from a depth of 4,830 m in the Pacific Ocean, to trace the Fe-60 isotopes. Anton Wallner

    The first people to look for the Fe-60 in these crusts were Klaus Knie, an experimental physicist then at the Technical University of Munich, and his collaborators. Knie’s team was studying neither supernovas nor crusts—they were developing methods for measuring rare isotopes of various elements—including Fe-60. After another scientist measured an isotope of beryllium, which can be used to date the layers of the crusts, Knie decided to examine the same specimen for Fe-60, which he knew was produced in supernovas. “We are part of the universe and we have the chance to hold the ‘astrophysical’ matter in our hand, if we look at the right places,” says Knie, who is now at the GSI Helmholtz Center for Heavy Ion Research.

    The crust, also plucked from the seafloor not far from Hawaii, turned out to be the right place: Knie and his colleagues found a spike in Fe-60 in layers that dated back about 2.8 million years, which they say signaled the death of a nearby star around that time. Knie’s discovery was important in several ways. It represented the first evidence that supernova debris can be found here on Earth, and it pinpointed the approximate timing of the last nearby supernova blast (if there had been a more recent one, Knie would have found more recent Fe-60 spikes). But it also enabled Knie to propose an interesting evolutionary theory.

    Based on the concentration of Fe-60 in the crust, Knie estimated that the supernova exploded at least 100 light-years from Earth—three times the distance at which it could’ve obliterated the ozone layer—but close enough to potentially alter cloud formation, and thus, climate. While no mass-extinction events happened 2.8 million years ago, some drastic climate changes did take place—and they may have given a boost to human evolution. Around that time, the African climate dried up, causing the forests to shrink and give way to grassy savanna. Scientists think this change may have encouraged our hominid ancestors as they descended from trees and eventually began walking on two legs.

    That idea, like any young theory, is still speculative and has its opponents. Some scientists think Fe-60 may have been brought to Earth by meteorites, and others think these climate changes can be explained by decreasing greenhouse gas concentrations, or the closing of the ocean gateway between North and South America. But Knie’s new tool gives scientists the ability to date other, possibly more ancient, supernovas that may have passed in the vicinity of Earth, and to study their influence on our planet. It is remarkable that we can use these dull, slow-growing rocks to study the luminous, rapid phenomena of stellar explosions, Fields says. And they’ve got more stories to tell.

    See the full article here .


     
  • richardmitnick 11:03 am on August 3, 2019 Permalink | Reply
    Tags: "Physicists Peer Inside a Fireball of Quantum Matter", High Acceptance DiElectron Spectrometer (HADES), Nautilus   

    From Nautilus: “Physicists Peer Inside a Fireball of Quantum Matter” 

    Nautilus

    From Nautilus

    Aug 02, 2019
    Charlie Wood

    1
    Experimenters in Germany have glimpsed the kind of strange, non-atomic matter thought to fill the cores of merging neutron stars. Jan Michael Hosan

    A gold wedding band will melt at around 1,000 degrees Celsius and vaporize at about 2,800 degrees, but these changes are just the beginning of what can happen to matter. Crank up the temperature to trillions of degrees, and particles deep inside the atoms start to shift into new, non-atomic configurations. Physicists seek to map out these exotic states—which probably occurred during the Big Bang, and are believed to arise in neutron star collisions and powerful cosmic ray impacts—for the insight they provide into the cosmos’s most intense moments.

    Now an experiment in Germany called the High Acceptance DiElectron Spectrometer (HADES) has put a new point on that map.

    The HADES detector in Darmstadt, Germany.

    For decades, experimentalists have used powerful colliders to crush gold and other atoms so tightly that the elementary particles inside their protons and neutrons, called quarks, start to tug on their new neighbors or (in other cases) fly free altogether. But because these phases of so-called “quark matter” are impenetrable to most particles, researchers have studied only their aftermath. Now, though, by detecting particles emitted by the collision’s fireball itself, the HADES collaboration has gotten a more direct glimpse of the kind of quark matter thought to fill the cores of merging neutron stars.

    “It’s a point in a region where nobody else has touched as far as I know,” said Gene Van Buren, a physicist at the Relativistic Heavy Ion Collider (RHIC) in New York, which probes a higher-energy variety of quark matter called quark-gluon plasma. “That’s pretty exciting.”


    BNL/RHIC

    Physicists have more or less understood how the strong nuclear force binds quarks together into composite particles such as protons and neutrons (each a triplet of quarks) since the 1970s. But the theory of the strong force, called quantum chromodynamics (QCD), is so complicated that no one has been able to predict exactly how matter will behave at high temperatures and densities. Theorists have developed a number of approximation schemes that are valid in certain situations, but large uncertainties make it hard to extend them. Experiments like HADES aim to manually fill the gaps left by the theory.

    With the indirect method of probing quark matter, researchers wait until a fireball cools and its energy transforms into a mélange of particles—a point called “freeze-out.” They then infer the earlier temperature from the relative numbers of each particle type. But the particles birthed at freeze-out can’t tell us much about the fireball’s origins, so the HADES collaboration leveraged a different phenomenon: Almost as soon as the quark matter forms, it starts making short-lived composite particles called rho mesons, each composed of a quark and an antiquark. The rho mesons immediately transform into fleeting “virtual” photons, each of which splits into an electron and its antimatter twin, the positron. These particles carry information about the matter’s early moments all the way out to the HADES detector.

    “There are no other observables that could really bring such rich information,” said Tetyana Galatyuk, one of the 200 members of the HADES collaboration.

    The experiment, reported this week in Nature Physics, is the first to measure the temperature of quark matter under conditions akin to the inside of a neutron star collision, where most particles are matter (as opposed to antimatter). QCD approximation schemes falter in environments where antimatter and matter don’t exist in roughly equal quantities, so this zone remains a blank spot in the theory.

    When neutron stars—the super-dense cores of dead stars—spiral together and collide, they shake the fabric of space-time and trigger explosions called kilonovas. To produce similar conditions, the team slammed gold atoms moving at nearly the speed of light into a gold target to create a jumble of hundreds of protons and neutrons so dense that the theory couldn’t conclusively predict what would happen. The resulting explosion was over in a flash, and electron-positron pairs piled up in the detector surrounding the crash site.

    See the full article here .


     
  • richardmitnick 1:08 pm on July 28, 2019 Permalink | Reply
    Tags: A second milestone would be the creation of fault-tolerant quantum computers., , , But a number of other groups have the potential to achieve quantum supremacy soon including those at IBM; IonQ; Rigetti; and Harvard University., By many accounts Google is knocking on the door of quantum supremacy and could demonstrate it before the end of the year., Circuit size is determined by the number of qubits you start with. Manipulations in a quantum computer are performed using “gates”., Engineers need to be able to build quantum circuits of at least a certain minimum size—and so far they can’t., Extended Church-Turing thesis: Quantum supremacy would be the first experimental violation of that principle and so would usher computer science into a whole new world, If the error rate is too high quantum computers lose their advantage over classical ones., If you run your qubits through 10 gates you’d say your circuit has “depth” 10., Ion traps have a contrasting set of strengths and weaknesses., Let’s consider a circuit that acts on 50 qubits. As the qubits go through the circuit the states of the qubits become intertwined- entangled- in what’s called a quantum superposition., Nautilus, , , Superconducting quantum circuits have the advantage of being made out of a solid-state material., The most crucial one is the error that accumulates in a computation each time the circuit performs a gate operation., The problem quantum engineers now face is that as the number of qubits and gates increases so does the error rate., There are many sources of error in a quantum circuit.   

    From Nautilus: “Quantum Supremacy Is Coming: Here’s What You Should Know” 

    From Nautilus

    July 2019
    Kevin Hartnett

    1
    Graham Carlow

    IBM iconic image of Quantum computer

    Researchers are getting close to building a quantum computer that can perform tasks a classical computer can’t. Here’s what the milestone will mean.

    Quantum computers will never fully replace “classical” ones like the device you’re reading this article on. They won’t run web browsers, help with your taxes, or stream the latest video from Netflix.

    3
    Lenovo ThinkPad X1 Yoga (OLED)

    What they will do—what’s long been hoped for, at least—will be to offer a fundamentally different way of performing certain calculations. They’ll be able to solve problems that would take a fast classical computer billions of years to perform. They’ll enable the simulation of complex quantum systems such as biological molecules, or offer a way to factor incredibly large numbers, thereby breaking long-standing forms of encryption.

    The threshold where quantum computers cross from being interesting research projects to doing things that no classical computer can do is called “quantum supremacy.” Many people believe that Google’s quantum computing project will achieve it later this year. In anticipation of that event, we’ve created this guide for the quantum-computing curious. It provides the information you’ll need to understand what quantum supremacy means, and whether it’s really been achieved.

    What is quantum supremacy and why is it important?

    To achieve quantum supremacy, a quantum computer would have to perform some calculation that, for all practical purposes, a classical computer can’t.

    In one sense, the milestone is artificial. The task that will be used to test quantum supremacy is contrived—more of a parlor trick than a useful advance (more on this shortly). For that reason, not all serious efforts to build a quantum computer specifically target quantum supremacy. “Quantum supremacy, we don’t use [the term] at all,” said Robert Sutor, the executive in charge of IBM’s quantum computing strategy. “We don’t care about it at all.”

    But in other ways, quantum supremacy would be a watershed moment in the history of computing. At the most basic level, it could lead to quantum computers that are, in fact, useful for certain practical problems.

    There is historical justification for this view. In the 1990s, the first quantum algorithms solved problems nobody really cared about. But the computer scientists who designed them learned things that they could apply to the development of subsequent algorithms (such as Shor’s algorithm for factoring large numbers) that have enormous practical consequences.

    “I don’t think those algorithms would have existed if the community hadn’t first worked on the question ‘What in principle are quantum computers good at?’ without worrying about use value right away,” said Bill Fefferman, a quantum information scientist at the University of Chicago.

    The quantum computing world hopes that the process will repeat itself now. By building a quantum computer that beats classical computers—even at solving a single useless problem—researchers could learn things that will allow them to build a more broadly useful quantum computer later on.

    “Before supremacy, there is simply zero chance that a quantum computer can do anything interesting,” said Fernando Brandão, a theoretical physicist at the California Institute of Technology and a research fellow at Google. “Supremacy is a necessary milestone.”

    In addition, quantum supremacy would be an earthquake in the field of theoretical computer science. For decades, the field has operated under an assumption called the “extended Church-Turing thesis,” which says that a classical computer can efficiently perform any calculation that any other kind of computer can perform efficiently. Quantum supremacy would be the first experimental violation of that principle and so would usher computer science into a whole new world. “Quantum supremacy would be a fundamental breakthrough in the way we view computation,” said Adam Bouland, a quantum information scientist at the University of California, Berkeley.

    How do you demonstrate quantum supremacy?

    By solving a problem on a quantum computer that a classical computer cannot solve efficiently. The problem could be whatever you want, though it’s generally expected that the first demonstration of quantum supremacy will involve a particular problem known as “random circuit sampling.”

    A simple example of a random sampling problem is a program that simulates the roll of a fair die. Such a program runs correctly when it properly samples from the possible outcomes, producing each of the six numbers on the die one-sixth of the time as you run the program repeatedly.

    In place of a die, this candidate problem for quantum supremacy asks a computer to correctly sample from the possible outputs of a random quantum circuit, which is like a series of actions that can be performed on a set of quantum bits, or qubits. Let’s consider a circuit that acts on 50 qubits. As the qubits go through the circuit, the states of the qubits become intertwined, or entangled, in what’s called a quantum superposition. As a result, at the end of the circuit, the 50 qubits are in a superposition of 2^50 possible states. If you measure the qubits, the sea of 2^50 possibilities collapses into a single string of 50 bits. This is like rolling a die, except instead of six possibilities you have 2^50, or about 1 quadrillion, and not all of the possibilities are equally likely to occur.

    Quantum computers, which can exploit purely quantum features such as superpositions and entanglement, should be able to efficiently produce a series of samples from this random circuit that follow the correct distribution. For classical computers, however, there’s no known fast algorithm for generating these samples—so as the range of possible samples increases, classical computers quickly get overwhelmed by the task.
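    For a handful of qubits, the sampling task is easy to simulate classically by brute force; it is the 2^n-entry state vector that becomes hopeless around 50 qubits. A minimal sketch in Python, with a Haar-random unitary standing in for the random circuit (an illustrative stand-in, not the circuit family Google actually uses):

```python
import numpy as np

rng = np.random.default_rng(0)
n_qubits = 3
dim = 2 ** n_qubits                  # the state vector holds 2^n complex amplitudes

# Draw a Haar-random unitary (QR of a complex Gaussian matrix, with the
# phases of R's diagonal fixed) to stand in for a random quantum circuit.
z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
q, r = np.linalg.qr(z)
u = q @ np.diag(np.diag(r) / np.abs(np.diag(r)))

state = np.zeros(dim, dtype=complex)
state[0] = 1.0                       # start in |000>
state = u @ state                    # "run the circuit"

probs = np.abs(state) ** 2           # Born rule: the output distribution
samples = rng.choice(dim, size=1000, p=probs)

print(f"probabilities sum to {probs.sum():.6f}")
print("first samples:", [format(int(s), "03b") for s in samples[:5]])
```

    Each extra qubit doubles the size of the state vector; at 50 qubits it would hold 2^50 complex amplitudes, roughly 18 petabytes at double precision, which is why this brute-force approach collapses long before the supremacy regime.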

    What’s the holdup?

    As long as quantum circuits remain small, classical computers can keep pace. So to demonstrate quantum supremacy via the random circuit sampling problem, engineers need to be able to build quantum circuits of at least a certain minimum size—and so far, they can’t.

    Circuit size is determined by the number of qubits you start with, combined with the number of times you manipulate those qubits. Manipulations in a quantum computer are performed using “gates,” just as they are in a classical computer. Different kinds of gates transform qubits in different ways—some flip the value of a single qubit, while others combine two qubits in different ways. If you run your qubits through 10 gates, you’d say your circuit has “depth” 10.

    To achieve quantum supremacy, computer scientists estimate a quantum computer would need to solve the random circuit sampling problem for a circuit in the ballpark of 70 to 100 qubits with a depth of around 10. If the circuit is much smaller than that, a classical computer could probably still manage to simulate it — and classical simulation techniques are improving all the time.

    Yet the problem quantum engineers now face is that as the number of qubits and gates increases, so does the error rate. And if the error rate is too high, quantum computers lose their advantage over classical ones.

    There are many sources of error in a quantum circuit.

    At the moment, the best two-qubit quantum gates have an error rate of around 0.5%, meaning that there’s about one error for every 200 operations. This is astronomically higher than the error rate in a standard classical circuit, where there’s about one error every 10^17 operations. To demonstrate quantum supremacy, engineers are going to have to bring the error rate for two-qubit gates down to around 0.1%.
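    Why the 0.5%-versus-0.1% difference matters so much: per-gate errors compound multiplicatively over the whole circuit. A sketch, assuming independent errors and an illustrative gate count (the 250 figure is an assumption for a roughly 50-qubit, depth-10 circuit, not a number from the article):

```python
def circuit_success_probability(error_rate: float, n_gates: int) -> float:
    """Chance that no gate errs, assuming errors strike independently."""
    return (1 - error_rate) ** n_gates

n_gates = 250   # assumed: ~25 two-qubit gates per layer x depth 10
for rate in (0.005, 0.001):
    p = circuit_success_probability(rate, n_gates)
    print(f"per-gate error {rate:.1%}: P(error-free run) = {p:.2f}")  # ~0.29 and ~0.78
```

    At 0.5% per gate, most runs of a circuit this size contain at least one error; at 0.1%, most runs come out clean, which is roughly the regime a supremacy experiment needs.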

    How will we know for sure that quantum supremacy has been demonstrated?

    Some milestones are unequivocal. Quantum supremacy is not one of them. “It’s not like a rocket launch or a nuclear explosion, where you just watch and immediately know whether it succeeded,” said Scott Aaronson, a computer scientist at the University of Texas, Austin.

    To verify quantum supremacy, you have to show two things: that a quantum computer performed a calculation fast, and that a classical computer could not efficiently perform the same calculation.

    It’s the second part that’s trickiest. Classical computers often turn out to be better at solving certain kinds of problems than computer scientists expected. Until you’ve proved a classical computer can’t possibly do something efficiently, there’s always the chance that a better, more efficient classical algorithm exists. Proving that such an algorithm doesn’t exist is probably more than most people will need in order to believe a claim of quantum supremacy, but such a claim could still take some time to be accepted.

    How close is anyone to achieving it?

    By many accounts Google is knocking on the door of quantum supremacy and could demonstrate it before the end of the year. (Of course, the same was said in 2017.) But a number of other groups have the potential to achieve quantum supremacy soon, including those at IBM, IonQ, Rigetti and Harvard University.

    These groups are using several distinct approaches to building a quantum computer. Google, IBM and Rigetti perform quantum calculations using superconducting circuits. IonQ uses trapped ions. The Harvard initiative, led by Mikhail Lukin, uses rubidium atoms. Microsoft’s approach, which involves “topological qubits,” seems like more of a long shot.

    Each approach has its pros and cons.

    Superconducting quantum circuits have the advantage of being made out of a solid-state material. They can be built with existing fabrication techniques, and they perform very fast gate operations. In addition, the qubits don’t move around, which can be a problem with other technologies. But they also have to be cooled to extremely low temperatures, and each qubit in a superconducting chip has to be individually calibrated, which makes it hard to scale the technology to the thousands of qubits (or more) that will be needed in a really useful quantum computer.

    Ion traps have a contrasting set of strengths and weaknesses. The individual ions are identical, which helps with fabrication, and ion traps give you more time to perform a calculation before the qubits become overwhelmed with noise from the environment. But the gates used to operate on the ions are very slow (thousands of times slower than superconducting gates) and the individual ions can move around when you don’t want them to.

    At the moment, superconducting quantum circuits seem to be advancing fastest. But there are serious engineering barriers facing all of the different approaches. A major new technological advance will be needed before it’s possible to build the kind of quantum computers people dream of. “I’ve heard it said that quantum computing might need an invention analogous to the transistor—a breakthrough technology that performs nearly flawlessly and which is easily scalable,” Bouland said. “While recent experimental progress has been impressive, my inclination is that this hasn’t been found yet.”

    Say quantum supremacy has been demonstrated. Now what?

    If a quantum computer achieves supremacy for a contrived task like random circuit sampling, the obvious next question is: OK, so when will it do something useful?

    The usefulness milestone is sometimes referred to as quantum advantage. “Quantum advantage is this idea of saying: For a real use case—like financial services, AI, chemistry—when will you be able to see, and how will you be able to see, that a quantum computer is doing something significantly better than any known classical benchmark?” said Sutor of IBM, which has a number of corporate clients like JPMorgan Chase and Mercedes-Benz who have started exploring applications of IBM’s quantum chips.

    A second milestone would be the creation of fault-tolerant quantum computers. These computers would be able to correct errors within a computation in real time, in principle allowing for error-free quantum calculations. But the leading proposal for creating fault-tolerant quantum computers, known as “surface code,” requires a massive overhead of thousands of error-correcting qubits for each “logical” qubit that the computer uses to actually perform a computation. This puts fault tolerance far beyond the current state of the art in quantum computing. It’s an open question whether quantum computers will need to be fault tolerant before they can really do anything useful. “There are many ideas,” Brandão said, “but nothing is for sure.”
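As a rough illustration of that overhead, the figure below uses a common textbook approximation for the surface code: on the order of 2d² physical qubits per logical qubit, where d is the code distance. This approximation is an assumption for illustration, not a number from the article, and the real overhead depends on physical error rates and the target logical error rate.

```python
# Hedged sketch of surface-code overhead: a standard rough estimate is
# ~2 * d**2 physical qubits (data plus measurement ancillas) per logical
# qubit at code distance d. Larger d suppresses logical errors more strongly.

def physical_qubits_per_logical(distance: int) -> int:
    return 2 * distance ** 2

for d in (5, 17, 25):
    print(f"distance {d}: ~{physical_qubits_per_logical(d)} physical qubits per logical qubit")
```

At the code distances often quoted for useful fault-tolerant algorithms, the count climbs into the thousands per logical qubit, which is where the "massive overhead" in the text comes from.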

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

     
  • richardmitnick 9:16 am on July 28, 2019 Permalink | Reply
    Tags: A dull and asphalt black mineral formation called a ferromanganese crust grows on the bare bedrock of underwater mountains—incomprehensibly slowly, , , , , Fe-60- a heavy isotope of iron with four more neutrons than the regular isotope and a half-life of 2.6 million years., Ferromanganese crusts in their thin laminated layers it records the history of planet Earth and- according to some-the first direct evidence of a nearby supernova., , How a star explosion may have shaped life on Earth, Klaus Knie, Knie’s new tool gives scientists the ability to date other- possibly more ancient- supernovas that may have passed in the vicinity of Earth and to study their influence on our planet., Like many supernovas SN 1987A announced the violent collapse of a massive star., Nautilus, Neil Gehrels, , The crusts’ growth is one the slowest processes known to science—they put on about five millimeters every million years., The most compelling evidence for a nearby supernova comes—somewhat paradoxically—from the bottom of the sea., To understand just how supernovas affected life scientists needed to link the timing of their explosions to pivotal events on earth such as mass extinctions or evolutional leaps.   

    From Nautilus: “The Secret History of the Supernova at the Bottom of the Sea” 

    Nautilus

    From Nautilus

    July 2019
    Julia Rosen

    How a star explosion may have shaped life on Earth.


    Photograph of Neil Gehrels in his office at NASA Goddard Space Flight Center in October 2005. GNU Free Documentation License

    NASA Neil Gehrels Swift Observatory

    In February 1987, Neil Gehrels, a young researcher at NASA’s Goddard Space Flight Center, boarded a military plane bound for the Australian Outback. Gehrels carried some peculiar cargo: a polyethylene space balloon and a set of radiation detectors he had just finished building back in the lab. He was in a hurry to get to Alice Springs, a remote outpost in the Northern Territory, where he would launch these instruments high above Earth’s atmosphere to get a peek at the most exciting event in our neck of the cosmos: a supernova exploding in one of the Milky Way’s nearby satellite galaxies.

    Alice Springs

    Like many supernovas, SN 1987A announced the violent collapse of a massive star.

    SN1987a from NASA/ESA Hubble Space Telescope in Jan. 2017 using its Wide Field Camera 3 (WFC3).

    This is an artist’s impression of the SN 1987A remnant. The image is based on real data and reveals the cold, inner regions of the remnant, in red, where tremendous amounts of dust were detected and imaged by ALMA. This inner region is contrasted with the outer shell, lacy white and blue circles, where the blast wave from the supernova is colliding with the envelope of gas ejected from the star prior to its powerful detonation. Image credit: ALMA / ESO / NAOJ / NRAO / Alexandra Angelich, NRAO / AUI / NSF.

    ESO/NRAO/NAOJ ALMA Array in Chile in the Atacama at Chajnantor plateau, at 5,000 metres

    NRAO/Karl V Jansky Expanded Very Large Array, on the Plains of San Agustin fifty miles west of Socorro, NM, USA, at an elevation of 6970 ft (2124 m)

    Back in the 1970s, researchers hypothesized that radiation from a nearby supernova could annihilate the ozone layer, exposing plants and animals to harmful ultraviolet light, and possibly cause a mass extinction. Armed with new data from SN 1987A, Gehrels could now calculate a theoretical radius of doom, inside which a supernova would have grievous effects, and how often dying stars might stray inside it.

    What set SN 1987A apart was its proximity to Earth; it was the closest stellar cataclysm since Johannes Kepler spotted one in our own Milky Way galaxy in 1604. Since then, scientists have thought up many questions whose answers would require a front-row seat to another supernova. Questions like this one: How close does a supernova need to be to devastate life on Earth?

    _______________________________________________
    To understand just how supernovas affected life, scientists needed to link the timing of their explosions to pivotal events on Earth, such as mass extinctions or evolutionary leaps.
    _______________________________________________

    “The bottom line was that there would be a supernova close enough to the Earth to drastically affect the ozone layer about once every billion years,” says Gehrels, who still works at Goddard.

    NASA Goddard Campus

    That’s not very often, he admits, and no threatening stars prowl the solar system today. But Earth has existed for 4.6 billion years, and life for about half that time, meaning the odds are good that a supernova blasted the planet sometime in the past. The problem is figuring out when. “Because supernovas mainly affect the atmosphere, it’s hard to find the smoking gun,” Gehrels says.

    Astronomers have searched the surrounding cosmos for clues, but the most compelling evidence for a nearby supernova comes—somewhat paradoxically—from the bottom of the sea. Here, a dull and asphalt black mineral formation called a ferromanganese crust grows on the bare bedrock of underwater mountains—incomprehensibly slowly.

    PLAIN-LOOKING, BUT IMPORTANT: Ferromanganese crusts collected by James Hein. Image credit: James Hein

    In its thin, laminated layers, it records the history of planet Earth and, according to some, the first direct evidence of a nearby supernova.

    These kinds of clues about ancient cosmic explosions are immensely valuable to scientists, who suspect that supernovas may have played a little-known role in shaping the evolution of life on Earth. “This actually could have been part of the story of how life has gone on, and the slings and arrows that it had to dodge,” says Brian Fields, an astronomer at the University of Illinois at Urbana-Champaign. But to understand just how supernovas affected life, scientists needed to link the timing of their explosions to pivotal events on Earth, such as mass extinctions or evolutionary leaps. The only way to do that is to trace the debris they deposited on Earth by finding elements on our planet that are primarily fused inside supernovas.

    Fields and his colleagues named a few such supernova-forged elements—mainly rare radioactive metals that decay slowly, making their presence a sure sign of an expired star. One of the most promising candidates was Fe-60, a heavy isotope of iron with four more neutrons than the regular isotope and a half-life of 2.6 million years. But finding Fe-60 atoms scattered on the Earth’s surface was no easy task.
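The 2.6-million-year half-life gives a simple way to estimate how much Fe-60 should survive to the present; a minimal sketch using only the figures quoted in the article:

```python
# Radioactive-decay sketch: surviving fraction of Fe-60 after t million
# years, given the article's half-life of 2.6 million years.

def surviving_fraction(t_myr: float, t_half_myr: float = 2.6) -> float:
    return 0.5 ** (t_myr / t_half_myr)

# After ~2.8 million years (the age of the spike Knie later found),
# a bit less than half of the original Fe-60 atoms should remain.
print(f"{surviving_fraction(2.8):.2f}")
```

That the half-life is comparable to the signal's age is exactly what makes Fe-60 a good tracer: enough decays to prove it is fresh supernova debris rather than primordial iron, yet enough survives to detect.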

    GAMS Group: Supernova-produced Fe-60 on Earth

    Fields estimated that only a very small amount of Fe-60 would have actually reached our planet, and on land, it would have been diluted by natural iron, or been eroded and washed away over millions of years.

    _______________________________________________
    The crusts’ growth is one of the slowest processes known to science—they put on about five millimeters every million years.
    _______________________________________________

    So scientists looked instead at the bottom of the sea, where they found Fe-60 atoms in the ferromanganese crusts, which are rocks that form a bit like stalagmites: They precipitate out of liquid, adding successive layers, except they are composed of metals and form extensive blankets instead of individual spires. Composed primarily of iron and manganese oxides, they also contain small amounts of almost every metal in the periodic table, from cobalt to yttrium.

    As iron, manganese, and other metal ions wash into the sea from land or gush from underwater volcanic vents, they react with the oxygen in seawater, forming solid substances that precipitate onto the ocean floor or float around until they adhere to existing crusts. James Hein at the United States Geological Survey, who has studied crusts for more than 30 years, says that it remains a mystery exactly how they establish themselves on rocky stretches of seafloor, but once the first layer accumulates, more layers pile on—up to 25 centimeters thick.

    That enables crusts to serve as cosmic historians that keep records of seawater chemistry, including the elements that serve as timestamps of dying stars. One of the oldest crusts, fished out by Hein southwest of Hawaii in the 1980s, dates back more than 70 million years, to a time when dinosaurs roamed the planet and the Indian subcontinent was just an island in the ocean halfway between Antarctica and Asia.

    The crusts’ growth is one of the slowest processes known to science—they put on about five millimeters every million years. For comparison, human fingernails grow about 7 million times faster. The reason is plain math: there’s less than one atom of iron or manganese for every billion molecules of water in the ocean, and those atoms must then resist the pull of passing currents and the power of other chemical interactions that might pry them loose until they get trapped by the next layer.
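The fingernail comparison checks out arithmetically; a quick sketch (the ~35 mm per year nail growth rate is an assumed typical figure, not a number from the article):

```python
# Sanity-checking the article's comparison: crusts grow ~5 mm per million
# years, while fingernails grow roughly 35 mm per year (an assumed average).

crust_mm_per_year = 5.0 / 1_000_000  # 5 mm per million years
nail_mm_per_year = 35.0              # assumed typical fingernail growth

ratio = nail_mm_per_year / crust_mm_per_year
print(f"fingernails grow ~{ratio / 1e6:.0f} million times faster")
```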

    Unlike the slow-growing crusts, however, supernova explosions happen almost instantly. The most common type of supernova occurs when a star runs out of its hydrogen and helium fuel, causing its core to burn heavier elements until it eventually produces iron. That process can take millions of years, but the star’s final moments take only milliseconds. As heavy elements accumulate in the core, it becomes unstable and implodes, sucking the outer layers inward at a quarter of the speed of light. But the density of particles in the core soon repels the implosion, triggering a massive explosion that shoots a cloud of stellar debris out into space—including Fe-60 isotopes, some of which eventually find their home in ferromanganese crusts.

    MEET THE EARTH’S HISTORIAN: Klaus Knie used this 25-cm-thick ferromanganese crust, sampled from a depth of 4,830 m in the Pacific Ocean, to trace Fe-60 isotopes. Image credit: Anton Wallner

    The first people to look for the Fe-60 in these crusts were Klaus Knie, an experimental physicist then at the Technical University of Munich, and his collaborators.

    Knie’s team was studying neither supernovas nor crusts—they were developing methods for measuring rare isotopes of various elements—including Fe-60.


    After another scientist measured an isotope of beryllium, which can be used to date the layers of the crusts, Knie decided to examine the same specimen for Fe-60, which he knew was produced in supernovas. “We are part of the universe and we have the chance to hold the ‘astrophysical’ matter in our hand, if we look at the right places,” says Knie, who is now at the GSI Helmholtz Center for Heavy Ion Research.

    GSI Helmholtz Centre for Heavy Ion Research GmbH, Darmstadt, Germany,

    _______________________________________________

    Knie’s new tool gives scientists the ability to date other, possibly more ancient, supernovas that may have passed in the vicinity of Earth, and to study their influence on our planet.
    _______________________________________________

    The crust, also plucked from the seafloor not far from Hawaii, turned out to be the right place: Knie and his colleagues found a spike in Fe-60 in layers that dated back about 2.8 million years, which they say signaled the death of a nearby star around that time. Knie’s discovery was important in several ways. It represented the first evidence that supernova debris can be found here on Earth, and it pinpointed the approximate timing of the last nearby supernova blast (if there had been a more recent one, Knie would have found more recent Fe-60 spikes). But it also enabled Knie to propose an interesting evolutionary theory.
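Because the growth rate is so steady, a layer's depth doubles as a clock. An illustrative conversion (the depth value below is hypothetical, implied by the article's ~5 mm-per-million-years rate and the 2.8-million-year date, not a measurement reported in the article):

```python
# Illustrative depth-to-age conversion for a ferromanganese crust, using
# the article's quoted growth rate of ~5 mm per million years.

GROWTH_MM_PER_MYR = 5.0

def layer_age_myr(depth_mm: float) -> float:
    return depth_mm / GROWTH_MM_PER_MYR

# At this rate, a layer ~14 mm below the crust's surface would date to
# roughly 2.8 million years ago, the age of the Fe-60 spike.
print(f"{layer_age_myr(14.0):.1f} Myr")
```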

    Based on the concentration of Fe-60 in the crust, Knie estimated that the supernova exploded at least 100 light-years from Earth—three times the distance at which it could’ve obliterated the ozone layer—but close enough to potentially alter cloud formation, and thus, climate. While no mass-extinction events happened 2.8 million years ago, some drastic climate changes did take place—and they may have given a boost to human evolution. Around that time, the African climate dried up, causing the forests to shrink and give way to grassy savanna. Scientists think this change may have encouraged our hominid ancestors as they descended from trees and eventually began walking on two legs.

    That idea, like any young theory, is still speculative and has its opponents. Some scientists think the Fe-60 may have been brought to Earth by meteorites, and others think these climate changes can be explained by decreasing greenhouse gas concentrations, or the closing of the ocean gateway between North and South America. But Knie’s new tool gives scientists the ability to date other, possibly more ancient, supernovas that may have passed in the vicinity of Earth, and to study their influence on our planet. It is remarkable that we can use these dull, slow-growing rocks to study the luminous, rapid phenomena of stellar explosions, Fields says. And they’ve got more stories to tell.

    See the full article here.



     