Tagged: Nautilus (US)

  • richardmitnick 9:34 am on September 16, 2021
    Tags: “Where Aliens Could Be Watching Us”, Nautilus (US)

    From Cornell University (US) via Nautilus (US) : “Where Aliens Could Be Watching Us” 


    September 16, 2021
    Lisa Kaltenegger

    A view of the Earth and sun from thousands of miles above our planet, with stars in position to see Earth transiting around the sun brightened and the Milky Way visible on the left. Credit: Open Space/ © American Museum of Natural History-New York City (US).

    More than 1,700 stars could have seen Earth in the past 5,000 years.

    Do you ever feel like someone is watching you? They could be. And I’m not talking about the odd neighbors at the end of your street.

    This summer, at The Carl Sagan Institute (US) at Cornell University and The American Museum of Natural History (US) in NYC, my colleague Jacky Faherty and I identified 1,715 stars in our solar neighborhood that could have seen Earth in the past 5,000 years. In the mesmerizing gravitational dance of the stars, those stars found themselves at just the right place to spot Earth. That’s because our pale blue dot blocks out part of the sun’s light from their view. This is how we find most exoplanets circling other stars. We spot the temporary dimming of their star’s light.

    The perfect cosmic front seat to Earth, with its curious beings, is quite rare. But with about the same technology as we have, any curious aliens on planets circling one of the 1,715 stars could have spotted us. Would they have identified us as intelligent life?

    All of us observe the dynamics of the cosmos every night. Stars rise and set—including our sun—because Earth rotates among the rich stellar tapestry. Our night sky changes throughout the year because Earth moves in orbit around the sun. We only see stars at night when the sun doesn’t outshine them. While circling the sun, we glimpse the brightest stars in the anti-sun direction only. Thus, we see different stars in different seasons.

    If we could watch for thousands of years, we could see the dynamic dance of the cosmos unfold in our night sky. Alternatively, we can use the newest data from The European Space Agency [Agence spatiale européenne][Europäische Weltraumorganisation](EU)’s Gaia mission and computers to fast-forward time before our eyes, with decades unfolding in mere minutes.

    While we can only see the light of the stars, we already know that more than 4,500 of these stars are not alone. They host extrasolar planets. Several thousand additional signals indicate even more new worlds on our cosmic horizon.

    Astronomers found most of these exoplanets in the last two decades because of a temporary dimming of their stars when a planet, by chance, crossed our line of sight on its journey around its star.

    The planet temporarily blocks out part of the hot star—and its light—from our view. Telescopes on the ground and from space, including NASA’s Kepler and TESS (Transiting Exoplanet Survey Satellite) mission, found thousands of exoplanets by spotting this dimming, which repeats like clockwork.

    The time between dimming tells us how long the planet needs to circle its star. That allows us to figure out how far away an exoplanet wanders from its hot central star. Most known exoplanets are scorching hot gas balls. We can tell when planets orbit closer to a central star than others because they need less time to circle it—we also find those faster than the cooler ones farther away. But about three dozen of these exoplanets are already cool enough. They orbit at the right distance from their stars, where it is not too hot and not too cold. Surface temperatures could allow rivers and oceans to glisten on the surfaces of these planets in this so-called Habitable Zone.
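The period-to-distance step described above is Kepler’s third law. A minimal sketch, assuming a circular orbit around a sun-like star; the function name and constant values are illustrative, not from the article:

```python
import math

# Kepler's third law: a^3 = G * M_star * P^2 / (4 * pi^2)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

def orbital_distance_au(period_days, stellar_mass_solar=1.0):
    """Semi-major axis in AU from the repeat time between transits."""
    period_s = period_days * 86400.0
    a_m = (G * stellar_mass_solar * M_SUN * period_s**2
           / (4 * math.pi**2)) ** (1 / 3)
    return a_m / AU

# An Earth-like 365.25-day period around a sun-like star gives ~1 AU
print(round(orbital_distance_au(365.25), 2))
```

Shorter periods mean smaller orbits and hotter planets, which is why, as the paragraph notes, the scorching close-in worlds are found first.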

    TRANSIT OF EARTH In this video, scientists at Cornell University and the American Museum of Natural History explain how they have identified stars that have been at just the right place, sometime in the past 5,000 years, to have seen Earth as a transiting exoplanet. Credit: D. Desir National Aeronautics Space Agency (US) / AMNH OpenSpace.

    This vantage point—to see a planet block part of the hot stellar surface from view—is special. The alignment of us and the planet must be just right. Thus, these thousands of known exoplanets are only the tip of the figurative exoplanet iceberg. The ones we can most easily spot hint at the majority waiting to be discovered.

    But what if we change that vantage point? If anyone out there were looking, which stars are just in the right place to spot us?

    Our powers of observation have been boosted by the European Space Agency’s Gaia mission. Launched in 2013, the Gaia spacecraft is mapping the motion of stars around the center of our galaxy, the Milky Way.

    The agency aims to survey 1 percent of the galaxy’s 100 billion stars. It has generated the best catalog of stars in our neighborhood within 326 light-years from the sun. Less than 1 percent of the 331,312 catalogued objects—stars, brown dwarfs, and stellar corpses—are at the right place to see Earth as a transiting exoplanet. This special vantage point is held by only those objects in a position close to the plane of Earth’s orbit. Roughly 1,400 stars are at the right place right now to see Earth as a transiting exoplanet.
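Whether an object holds that vantage point reduces to a simple cut on ecliptic latitude. A sketch under the assumption that the Earth transit zone extends roughly 0.264 degrees either side of the plane of Earth’s orbit (a half-width used in transit-zone studies; the function name is mine):

```python
# A star sees Earth transit the sun only if it lies close enough to the
# plane of Earth's orbit. The half-width of that strip is an assumption.
ETZ_HALF_WIDTH_DEG = 0.264

def sees_earth_transit(ecliptic_latitude_deg):
    """True if a star's ecliptic latitude puts it in the Earth transit zone."""
    return abs(ecliptic_latitude_deg) <= ETZ_HALF_WIDTH_DEG

print(sees_earth_transit(0.1))  # near the plane: True
print(sees_earth_transit(5.0))  # well off the plane: False
```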

    But this special vantage point is not forever. It is gained and lost in the precise gravitational dance in our dynamic cosmos. How long does that cosmic front-row seat to Earth transit last? Because the Gaia mission records the motion of the stars, we can spin their movement into the future and trace it back into the past on a computer. It shows us the night sky over thousands of years since civilizations bloomed on Earth and gives us a glimpse of a night sky of the far future, millennia away.

    If we had observed the sky for transiting planets thousands of years earlier or later, we would see different ones. And different ones could find us. We calculated that 1,715 objects in our solar neighborhood could have seen Earth transit since human civilizations started to bloom about 5,000 years ago and kept that special vantage point for hundreds of years. Three hundred and nineteen objects will enter the Earth transit zone in the next 5,000 years.

    Among these 2,034 stars, seven harbor known exoplanets, with three stars’ exoplanets circling in this temperate Habitable Zone. However, the small region around the plane of the Earth’s orbit, where all these stars lie, is crowded. Astronomers usually don’t look for planets there. Generally, it is easier to find exoplanets around stars in non-crowded fields. But now we have a reason: to discover the planets that could also discover us.

    NASA’s Kepler mission stared for more than three years at about 150,000 stars about 1,000 light-years away. These 150,000 stars fit in a small fraction of the sky. Its goal was to estimate how many stars harbor exoplanets. The answer is exciting. Every second star has at least one planet, big or small, and about every fourth star hosts a planet in the Goldilocks Zone. These results provide cautious optimism about our chances of not being the only life in the cosmos. It also means that about 500 exoplanets in the Habitable Zone should be on our list, waiting to be discovered.

    The three systems that host planets in the Habitable Zone in the Earth transit zone are close enough to detect radio waves from Earth. Because radio waves travel at light speed, they have only washed over 75 of the stars on our list so far. These stars are within 100 light-years from Earth—because light had 100 years to travel since Earth first started to leak radio signals.

    Ross 128 b, an exoplanet a mere 11 light-years away from us, could have seen Earth block the sun’s light about 3,000 years ago. But it lost this bull’s-eye view about 900 years ago. Another exoplanet, Teegarden’s Star b, a bit heavier than Earth and circling a red sun about 12.5 light-years away, will start to see Earth transit in 29 years. And the fascinating Trappist-1 system, with seven Earth-size planets at 40 light-years’ distance, will be able to see Earth as a transiting planet, but only in about 1,600 years.

    With the launch of the James Webb Space Telescope (JWST) later this year, we will have a big enough telescope to collect light from small, close-by exoplanets that could be like ours.

    A particular combination of oxygen and methane has identified Earth as a living planet for about 2 billion years. That combination of gases is what we will be looking for in the atmospheres of other worlds. This exoplanet exploration will be on the edge of our technological possibility, but it will be possible for the first time. Future technology should be able to characterize exoplanets, not just in transit. But for now, telescopes like the JWST collect only enough light from the atmospheres of close-by transiting worlds to explore them, allowing us to wonder whether curious astronomers on alien worlds might be watching us too.

    Of course, no aliens have visited us yet, and we haven’t found any cosmic messages from them. Is that because we’re unique? Have other civilizations destroyed themselves? Or are they just not interested in us?

    In my Introduction to Astronomy class at Cornell, I ask students whether they would contact or visit an exoplanet that is 5,000 years younger than Earth or 5,000 years older. Without fail, they pick the older planet and its potentially more advanced life. More “advanced” than us. During our discussions, the concept of advanced life invariably rolls back around to us. Would life on Earth qualify as intelligent for anyone watching?

    After all, we’ve been using radio waves for only about 100 years, and so those waves would only have traveled 100 light-years so far. We have set foot on the moon but not farther yet and are only starting to think about interstellar travel. So our interstellar travel resume is awfully thin.

    One thing that an alien astronomer would likely see is our atmosphere. If they had been watching us for a while, they would have seen that we destroyed our ozone layer—but we also managed to fix it. So maybe we would have scored a point on their intelligence scale. Now, of course, they see carbon dioxide building up in our atmosphere with no sign of abating yet. But maybe every civilization goes through this; every civilization nearly destroys its habitat before figuring out a way to save itself from itself.

    If any aliens are out there watching us from those 2,034 stars in our solar neighborhood, I hope they’re also rooting for us.

    Science paper:

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Once called “the first American university” by educational historian Frederick Rudolph, Cornell University represents a distinctive mix of eminent scholarship and democratic ideals. Adding practical subjects to the classics and admitting qualified students regardless of nationality, race, social circumstance, gender, or religion was quite a departure when Cornell was founded in 1865.

    Today’s Cornell reflects this heritage of egalitarian excellence. It is home to the nation’s first colleges devoted to hotel administration, industrial and labor relations, and veterinary medicine. Both a private university and the land-grant institution of New York State, Cornell University is the most educationally diverse member of the Ivy League.

    On the Ithaca campus alone nearly 20,000 students representing every state and 120 countries choose from among 4,000 courses in 11 undergraduate, graduate, and professional schools. Many undergraduates participate in a wide range of interdisciplinary programs, play meaningful roles in original research, and study in Cornell programs in Washington, New York City, and the world over.

    Cornell University (US) is a private, statutory, Ivy League and land-grant research university in Ithaca, New York. Founded in 1865 by Ezra Cornell and Andrew Dickson White, the university was intended to teach and make contributions in all fields of knowledge—from the classics to the sciences, and from the theoretical to the applied. These ideals, unconventional for the time, are captured in Cornell’s founding principle, a popular 1868 quotation from founder Ezra Cornell: “I would found an institution where any person can find instruction in any study.”

    The university is broadly organized into seven undergraduate colleges and seven graduate divisions at its main Ithaca campus, with each college and division defining its specific admission standards and academic programs in near autonomy. The university also administers two satellite medical campuses, one in New York City and one in Education City, Qatar, and Jacobs Technion-Cornell Institute(US) in New York City, a graduate program that incorporates technology, business, and creative thinking. The program moved from Google’s Chelsea Building in New York City to its permanent campus on Roosevelt Island in September 2017.

    Cornell is one of the few private land grant universities in the United States. Of its seven undergraduate colleges, three are state-supported statutory or contract colleges through the SUNY – The State University of New York (US) system, including its Agricultural and Human Ecology colleges as well as its Industrial Labor Relations school. Of Cornell’s graduate schools, only the veterinary college is state-supported. As a land grant college, Cornell operates a cooperative extension outreach program in every county of New York and receives annual funding from the State of New York for certain educational missions. The Cornell University Ithaca Campus comprises 745 acres, but is much larger when the Cornell Botanic Gardens (more than 4,300 acres) and the numerous university-owned lands in New York City are considered.

    Alumni and affiliates of Cornell have reached many notable and influential positions in politics, media, and science. As of January 2021, 61 Nobel laureates, four Turing Award winners and one Fields Medalist have been affiliated with Cornell. Cornell counts more than 250,000 living alumni, and its former and present faculty and alumni include 34 Marshall Scholars, 33 Rhodes Scholars, 29 Truman Scholars, 7 Gates Scholars, 55 Olympic Medalists, 10 current Fortune 500 CEOs, and 35 billionaire alumni. Since its founding, Cornell has been a co-educational, non-sectarian institution where admission has not been restricted by religion or race. The student body consists of more than 15,000 undergraduate and 9,000 graduate students from all 50 American states and 119 countries.


    Cornell University was founded on April 27, 1865; the New York State (NYS) Senate authorized the university as the state’s land grant institution. Senator Ezra Cornell offered his farm in Ithaca, New York, as a site and $500,000 of his personal fortune as an initial endowment. Fellow senator and educator Andrew Dickson White agreed to be the first president. During the next three years, White oversaw the construction of the first two buildings and traveled to attract students and faculty. The university was inaugurated on October 7, 1868, and 412 men were enrolled the next day.

    Cornell developed as a technologically innovative institution, applying its research to its own campus and to outreach efforts. For example, in 1883 it was one of the first university campuses to use electricity from a water-powered dynamo to light the grounds. Since 1894, Cornell has included colleges that are state funded and fulfill statutory requirements; it has also administered research and extension activities that have been jointly funded by state and federal matching programs.

    Cornell has had active alumni since its earliest classes. It was one of the first universities to include alumni-elected representatives on its Board of Trustees. Cornell was also among the Ivies that had heightened student activism during the 1960s related to cultural issues; civil rights; and opposition to the Vietnam War, with protests and occupations resulting in the resignation of Cornell’s president and the restructuring of university governance. Today the university has more than 4,000 courses. Cornell is also known for the Residential Club Fire of 1967, a fire in the Residential Club building that killed eight students and one professor.

    Since 2000, Cornell has been expanding its international programs. In 2004, the university opened the Weill Cornell Medical College in Qatar. It has partnerships with institutions in India, Singapore, and the People’s Republic of China. Former president Jeffrey S. Lehman described the university, with its high international profile, as a “transnational university”. On March 9, 2004, Cornell and Stanford University(US) laid the cornerstone for a new ‘Bridging the Rift Center’ to be built and jointly operated for education on the Israel–Jordan border.


    Cornell, a research university, is ranked fourth in the world in producing the largest number of graduates who go on to pursue PhDs in engineering or the natural sciences at American institutions, and fifth in the world in producing graduates who pursue PhDs at American institutions in any field. Research is a central element of the university’s mission; in 2009 Cornell spent $671 million on science and engineering research and development, the 16th highest in the United States. Cornell is classified among “R1: Doctoral Universities – Very high research activity”.

    For the 2016–17 fiscal year, the university spent $984.5 million on research. Federal sources constitute the largest source of research funding, with total federal investment of $438.2 million. The agencies contributing the largest share of that investment are the Department of Health and Human Services and the National Science Foundation(US), accounting for 49.6% and 24.4% of all federal investment, respectively. Cornell was on the top-ten list of U.S. universities receiving the most patents in 2003, and was one of the nation’s top five institutions in forming start-up companies. In 2004–05, Cornell received 200 invention disclosures; filed 203 U.S. patent applications; completed 77 commercial license agreements; and distributed royalties of more than $4.1 million to Cornell units and inventors.

    Since 1962, Cornell has been involved in unmanned missions to Mars. In the 21st century, Cornell had a hand in the Mars Exploration Rover Mission. Cornell’s Steve Squyres, Principal Investigator for the Athena Science Payload, led the selection of the landing zones and requested data collection features for the Spirit and Opportunity rovers. NASA-JPL/Caltech(US) engineers took those requests and designed the rovers to meet them. The rovers, both of which have operated long past their original life expectancies, are responsible for the discoveries that were awarded 2004 Breakthrough of the Year honors by Science. Control of the Mars rovers has shifted between National Aeronautics and Space Administration(US)’s JPL-Caltech (US) and Cornell’s Space Sciences Building.

    Further, Cornell researchers discovered the rings around the planet Uranus, and Cornell built and operated the telescope at Arecibo Observatory located in Arecibo, Puerto Rico(US) until 2011, when they transferred the operations to SRI International, the Universities Space Research Association (US) and the Metropolitan University of Puerto Rico [Universidad Metropolitana de Puerto Rico](US).

    The Automotive Crash Injury Research Project was begun in 1952. It pioneered the use of crash testing, originally using corpses rather than dummies. The project discovered that improved door locks; energy-absorbing steering wheels; padded dashboards; and seat belts could prevent an extraordinary percentage of injuries.

    In the early 1980s, Cornell deployed the first IBM 3090-400VF and coupled two IBM 3090-600E systems to investigate coarse-grained parallel computing. In 1984, the National Science Foundation began work on establishing five new supercomputer centers, including the Cornell Center for Advanced Computing, to provide high-speed computing resources for research within the United States. As a National Science Foundation (US) center, Cornell deployed the first IBM Scalable Parallel supercomputer.

    In the 1990s, Cornell developed scheduling software and deployed the first supercomputer built by Dell. Most recently, Cornell deployed Red Cloud, one of the first cloud computing services designed specifically for research. Today, the center is a partner on the National Science Foundation XSEDE-Extreme Science Engineering Discovery Environment supercomputing program, providing coordination for XSEDE architecture and design, systems reliability testing, and online training using the Cornell Virtual Workshop learning platform.

    Cornell scientists have researched the fundamental particles of nature for more than 70 years. Cornell physicists, such as Hans Bethe, contributed not only to the foundations of nuclear physics but also participated in the Manhattan Project. In the 1930s, Cornell built the second cyclotron in the United States. In the 1950s, Cornell physicists became the first to study synchrotron radiation.

    During the 1990s, the Cornell Electron Storage Ring, located beneath Alumni Field, was the world’s highest-luminosity electron-positron collider. After building the synchrotron at Cornell, Robert R. Wilson took a leave of absence to become the founding director of DOE’s Fermi National Accelerator Laboratory(US), which involved designing and building the largest accelerator in the United States.

    Cornell’s accelerator and high-energy physics groups are involved in the design of the proposed ILC-International Linear Collider(JP) and plan to participate in its construction and operation. The International Linear Collider(JP), to be completed in the late 2010s, will complement the CERN Large Hadron Collider(CH) and shed light on questions such as the identity of dark matter and the existence of extra dimensions.

    As part of its research work, Cornell has established several research collaborations with universities around the globe. For example, a partnership with the University of Sussex(UK) (including the Institute of Development Studies at Sussex) allows research and teaching collaboration between the two institutions.

  • richardmitnick 1:41 pm on August 1, 2021
    Tags: “The Math of Living Things”, Nautilus (US)

    From Nautilus (US) : “The Math of Living Things” 

    From Nautilus (US)

    June 23, 2021
    Sidney Perkowitz

    Credit: Africa Studio / Shutterstock

    Exploring the intersection of physical and biological laws.

    It’s hard to argue with the famously authoritative Oxford English Dictionary, but its definition of physics as the “branch of science concerned with the nature and properties of non-living matter and energy” is incomplete, because physics studies living things as well. Physicists reported biological research at their first international meeting in 1900, and physics and math still help biologists understand living things.

    In a striking reverse connection, in the 1940s Albert Einstein and Erwin Schrödinger, founders of relativistic and quantum physics respectively, projected that tackling questions of biological importance could also enhance physics. They were right: Today researchers are exploring “information” as more than a vaguely defined idea. Instead, it has become a specific and unifying concept with deep meaning in both physics and biology.

    The first major work to put physics and math into biology came much earlier. The Scottish biologist and polymath D’Arcy Wentworth Thompson published On Growth and Form in 1917, with a massive 1,116-page second edition in 1942.[1] It explains that the structure of organisms exists “in conformity with physical and mathematical laws.” Arguing that Darwin’s natural selection is incomplete, Thompson showed how to extend the theory of evolution through analysis. He explained the shapes and sizes of animals and their skeletons through the laws of mechanics, and used pure math to show how an animal’s body might develop. The book influenced scientists with its challenges to Darwinian evolution and its compelling explication of the beauties of the natural world. A recent reconsideration praises it as “provocative and inspiring.”

    Then in 1944, Schrödinger published a smaller book with a different and profound effect, What is Life?, a record of his public lectures at Trinity College, Dublin, in 1943. Schrödinger’s equation is a cornerstone of quantum theory, and quantum ideas enter as What is Life? responds to a fundamental, then unresolved question: How do organisms preserve and transmit hereditary information through generations?

    Reasoning from quantum and statistical physics, Schrödinger concluded that genetic data must be carried by a small and durable unit capable of wide variation to account for mutations in biological evolution—a molecule of around 1,000 atoms with different stable quantum configurations that encode the genetic record. After DNA was confirmed as this hereditary molecule, James Watson and Francis Crick found its double helix structure in 1953 (using Rosalind Franklin’s X-ray crystallography data) and credited What is Life? with stimulating their work. The book helped found molecular biology, and also led Schrödinger to glimpse something more. Because of the “difficulty of interpreting life by the ordinary laws of physics,” he wrote, “we must be prepared to find a new type of physical law.” This, he thought, might lie within quantum theory.

    Einstein also thought that biological research could extend physics, starting with investigations by the German-Austrian Nobel Laureate zoologist Karl von Frisch. This work established honeybees as models for animal behavior, and showed that the bees use polarized skylight to orient themselves. In 1949, Einstein noted that this last result did not open new paths in physics because polarization is a well-understood property of light.[2] But, he added, “the investigation of the behavior of migratory birds and carrier pigeons may some day lead to the understanding of some physical process which is not yet known.” Clearly he saw the value of a two-way flow between physics and biology.

    Decades later, the connections seen by Thompson, Schrödinger, and Einstein have grown. One theme in Thompson’s work is the use of pure math to understand the morphology of living things. Thompson explored this by drawing an outline of an organism on a square grid and applying a mathematical transformation, such as stretching the grid in one direction. The resulting image resembled another closely related organism—the long body of a parrotfish mathematically became the curved shape of an angelfish. This suggests that an organism’s body develops along preferred directions for cell growth, although math alone does not explain what biochemical and physical processes might cause this.
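Thompson’s grid method amounts to applying a simple coordinate transformation to an organism’s outline. A minimal sketch, with a made-up placeholder outline rather than real fish data:

```python
# Stretch a grid by (sx, sy) and shear it horizontally: each outline point
# (x, y) maps to (sx*x + shear*y, sy*y), Thompson's "transformed grid".

def transform(points, sx=1.0, sy=1.0, shear=0.0):
    return [(sx * x + shear * y, sy * y) for x, y in points]

outline = [(0, 0), (2, 0), (2, 1), (0, 1)]  # placeholder body outline
warped = transform(outline, sx=0.8, sy=1.4, shear=0.5)
print(warped)
```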

    Now new mathematical approaches give deeper views into how organisms develop their bodily structures.

    In 2020, physicists and biologists at the Technion – Israel Institute of Technology [ הטכניון – מכון טכנולוגי לישראל] (IL) analyzed the hydra, a fresh-water animal up to a centimeter long. Its cylindrical body has a foot that adheres to a surface, and a head with tentacles and a mouth that catches and eats prey. This creature interests biologists because a piece of its tissue can regenerate into a complete and functioning new animal. (Hydra are named after a mythological sea monster with many serpent-like heads, with the ability to grow two new heads for every one that was cut off.) Regeneration provides a kind of immortality that may have clues for human lifespans.

    REGENERATION: Scientists examined hydra, a fresh-water animal up to a centimeter long, under a microscope, and found that as it regenerated, its tissues behaved like atoms in a crystalline solid might. Credit: Rattiya Thongdumhyu / Shutterstock.

    The Technion group microscopically examined a piece of hydra tissue as it regenerated, particularly its multicellular fibers that lie parallel to the long axis of a mature hydra. The tissue first folded itself into a spheroid with its fibers forming a pattern like lines of longitude on Earth, which are parallel near the equator but sharply change orientation as they converge at the North and South poles. This is one type of topological defect, an anomaly that occurs in various forms wherever a regular geometry, like the parallel fibers in a hydra or the atomic arrangement in a crystalline solid, has its order seriously disturbed. It is called “topological” because its understanding and analysis requires topology, the branch of pure math that studies how shapes change when stretched, bent or twisted.

    The significance of the two topological defects observed in the hydra tissue is that they define its entire body plan because they eventually become the sites of the foot and head in the new cylindrical animal. More work is needed to understand the mechanical and biochemical processes that make topological defects important, but that they mark significant changes in living matter has also just been demonstrated in colonies of bacteria as they grow, in some cases into intricate multicellular structures.

    Another approach Thompson used to great advantage was the physical one of determining how mechanical quantities such as force affect the size and behavior of organisms. He did this by dimensional analysis, which recognizes that any mechanical quantity can be expressed as a combination of the three physical fundamentals mass M, length L, and time T; for instance, velocity has the dimensions L/T, and force the dimensions ML/T^2. From these basics, Thompson showed that big fish swim faster than little ones, and that an insect cannot become monstrously huge: as its size increases, its weight grows faster than the strength of its supporting legs, so it would soon be unable to function.
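The strength-versus-weight argument can be put in numbers: scale an animal isometrically by a factor k and compare how its load and its load-bearing leg area grow. A small illustrative sketch:

```python
# Thompson's scaling argument in numbers: scale an animal isometrically by k.
# Weight grows with volume (k^3) but leg strength with cross-section (k^2),
# so the stress on the legs grows linearly with k. Values are illustrative.

def leg_stress_ratio(k):
    """Relative leg stress (weight / cross-section) after scaling by k."""
    weight = k ** 3          # mass ~ volume ~ L^3
    cross_section = k ** 2   # strength ~ area ~ L^2
    return weight / cross_section

for k in (1, 10, 100):
    print(k, leg_stress_ratio(k))  # stress grows as k
```

A 100-fold larger insect would carry 100 times more stress per unit of leg strength, which is Thompson’s point about monstrous size.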

    Ken Andersen at the Center for Marine Life, Technical University of Denmark [Danmarks Tekniske Universitet](DK), is now extending dimensional analysis to describe plankton, the enormous group of organisms that is part of the ocean ecosystem. He presented this research at the workshop “On Being the Right Size” organized at Emory University in 2020 to discuss how underlying physical principles determine the size and function of living creatures. (The workshop title comes from a famous 1928 essay by the eminent British biologist J.B.S. Haldane, about the importance of size in setting the capabilities of organisms.)

    Plankton consists of tiny animals and plants drifting through the oceans. It is important in the Earth’s carbon and oxygen cycles, and in the food chain that produces a significant part of the human diet. To analyze its diversity, Andersen categorized its organisms by how they take in nutrients. For an organism that actively feeds, its rate of ingestion as it encounters food depends on its dimensional speed L/T multiplied by its cross-sectional area L^2, or L^3/T where L is a characteristic size of the organism. Some animal plankton instead passively absorb nutrients as molecules of dissolved organic matter diffuse into their bodies, which detailed physical analysis shows occurs at a rate L/T. Plants however make their own nutrients by photosynthesis. This requires that they gather solar energy and so depends on the organism’s surface area with the dimensional rate L^2/T, along with some nutrition by diffusion at the rate L/T.

    Andersen plotted these rates of nutrient intake against the size of the organism, from 10^-4 millimeters to 1 millimeter, and found that size correlates with feeding mode. Smaller organisms feed by diffusion, larger ones actively feed, and those mid-range in size tend to be plants that use photosynthesis. The relative numbers of the three types therefore depend on the levels of nutrients and sunlight as they occur across the oceans; for instance, with plentiful nutrients but little light, active and diffusion-based animal feeders dominate plants. Andersen is now developing plankton simulator software based on the underlying physical ideas to provide estimates of plankton diversity and function under different ocean conditions.
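    The size-dependence of the three feeding modes can be sketched numerically. Only the L, L^2, and L^3 scalings come from the dimensional analysis above; the prefactors a, b, and c below are purely illustrative choices (not measured values), picked so that the crossovers fall inside the 10^-4 mm to 1 mm range.

```python
# Andersen-style comparison of plankton feeding modes versus organism size L (mm).
# Rate laws from dimensional analysis; prefactors a, b, c are hypothetical.

def dominant_mode(L, a=1.0, b=100.0, c=1000.0):
    """Return the feeding mode with the highest intake rate at size L."""
    rates = {
        "diffusion": a * L,          # passive molecular uptake scales as L
        "photosynthesis": b * L**2,  # light capture scales with surface area, L^2
        "active": c * L**3,          # speed (L/T) x cross-section (L^2) ~ L^3
    }
    return max(rates, key=rates.get)

for L in (1e-4, 1e-3, 0.05, 0.5):
    print(L, dominant_mode(L))
```

    With these assumed prefactors the smallest organisms come out diffusion feeders, mid-sized ones photosynthesizers, and the largest active feeders, reproducing the qualitative pattern Andersen found.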

    The results for hydra and plankton extend Thompson’s analysis of whole organisms. In showing how atoms carefully arranged in a molecule could carry biological order through generations, Schrödinger’s What is Life? represents a newer approach at the molecular scale. Molecular biology has since led to other advances such as gene editing and better understanding of cellular processes.

    These successes suggest the attractive possibility of starting from molecules as basic units of biochemical life processes and building up to cells, tissues, organs and whole organisms. Such a reductionist approach seems valid in physics, where in principle elementary particles can be assembled into nuclei and atoms, which then form molecules and bigger assemblies of matter and energy up to the whole universe. Might molecules form the basis of understanding complex living things and perhaps life itself? Perhaps, but some observers think this bottom-up process is insufficient to explain higher-level biological structure and function. A prime example is the difficulty of linking our own internal consciousness, a property of the mind, to the behavior of molecules and neurons in the brain. Perhaps a different idea is needed to make the jump from molecules to complete living things.

    Schrödinger thought so when he speculated that to understand life, known physics should be supplemented by a “new type of physical law” that might come from quantum theory. Researchers have since reported some signs of quantum behavior or theorized about it in such areas as photosynthesis and olfactory response. But these results are controversial, and a convincing case for the widespread biological influence of quantum effects remains to be made.

    There is however a broad physical law that was not widely appreciated in Schrödinger’s time but is now important in physics and biology. In 1867, the Scottish mathematical physicist James Clerk Maxwell imagined a so-called “Maxwell’s demon.” This tiny being would reside in a box of gas and sort its fast and slow molecules into separate chambers. Temperature correlates with speed, so the result would be a temperature difference between the hot and cold regions that could produce useful work. In thus showing how to produce energy from pure information, Maxwell’s demon gave information physical reality. Then in the 1940s the mathematician Claude Shannon showed that the information describing a given system reflects the degree of order in the system. Thermodynamics describes order in a different way, through the quantity called entropy; Shannon’s insight gave further physical weight to information by linking it to order, entropy, and thermodynamics.
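    Shannon's link between information and order is compact enough to state directly: the entropy H = -Σ p_i log2 p_i of a probability distribution measures its disorder in bits. A short sketch, with illustrative coin examples:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2 p); terms with p = 0 contribute 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))  # fair coin: maximum disorder for two outcomes
print(shannon_entropy([0.9, 0.1]))  # biased coin: more order, lower entropy
print(shannon_entropy([1.0]))       # certain outcome: perfect order, zero entropy
```

    The more ordered the system (the more predictable its state), the lower its Shannon entropy, which is the sense in which information, order, and thermodynamic entropy are linked.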

    Relating information to order and thermodynamics has special meaning in living organisms, which survive, grow, and reproduce by maintaining their internal organization. This is implicit in the so-called “central dogma” of molecular biology, the statement by Francis Crick that the information stored in the DNA molecule flows to other molecular processes that make proteins and then a whole organism according to plan. Following the flow of information is therefore a way to describe the thermodynamics of entire biological systems. This opens up the study of properties that arise when the interactions among a system’s components, such as the neurons in the brain, produce new “emergent” high-level behavior.

    This more expansive approach is influencing research at the interface of physics and biology as shown at a 2018 symposium held at Trinity College to celebrate the 75th anniversary of the lectures that became What is Life? The event featured noted scientists who projected where research in new areas related to information and emergent properties, such as complex systems and the network of neurons that constitutes the brain, will take both physics and biology in years to come. Whatever those outcomes, what is surely important is the growing use of a broad approach based on information, which encompasses physical and biological science. Only such a powerful multidisciplinary, even transdisciplinary effort could hope to finally answer Schrödinger’s original question: “What is life?”


    1. Thompson, D.W. On Growth and Form: The Complete Revised Edition. Dover, Mineola, NY (1992).

    2. Dyer, A.G., et al. Einstein, von Frisch and the honeybee: a historical letter comes to light. Journal of Comparative Physiology A (2021).

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to Nautilus (US). We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

  • richardmitnick 11:35 am on July 25, 2021 Permalink | Reply
    Tags: "How Much Should Expectation Drive Science?", Nautilus (US)

    From Nautilus (US) : “How Much Should Expectation Drive Science?” 

    From Nautilus (US)

    Dark Matter March 2017

    Credit: Caleb Perkins / EyeEm

    Answers to the biggest mysteries may lie well outside traditional paradigms.

    Claudia Geib

    In December 2015, particle physicists were buzzing with excitement: The Standard Model—which has dominated physics for 40 years and defines the basic constituents of matter and how they interact—had a new challenger.

    At the Large Hadron Collider in Switzerland, physicists announced evidence of what appeared to be a new particle.

    Known colloquially as the “diphoton bump,” the new particle promised to upend the Standard Model, which doesn’t predict its existence. It also opened the door to the possibility of solving long-unanswered puzzles, including the nature of Dark Matter.

    About a month later, another potential challenger emerged. A group of nuclear physicists out of the Hungarian Academy of Sciences [Magyar Tudományos Akadémia] (HU)’s Institute for Nuclear Research published a paper on an anomaly detected in the decay of beryllium-8. They proposed that the irregularity could be the signature of a “dark photon,” one of the force carriers thought to dictate the action of dark matter.

    On the surface, neither of these experiments might seem more viable than the other. Yet the response to each could not have been more different. While hundreds of theoretical papers were published on the “diphoton bump,” almost none followed the Hungarian paper.

    For almost eight months, the Hungarian paper remained in obscurity, until a group from the University of California-Irvine (US) posted their own theory about the anomaly detected in the decaying beryllium-8. Though their analysis didn’t support the suggestion that the anomaly was a dark photon, they thought it would fit the behavior of a new type of light, neutral boson—the class of particle that includes both photons and force-conveying particles like the Higgs boson. Yet unlike other force-conveying particles, their proposed particle interacted only with neutrons, and not with protons and electrons. Appropriately, they called it a “protophobic X boson.”

    As the UC-Irvine team gave the Hungarian experiment visibility, the community began to respond. Yet unlike the physicists of the Large Hadron Collider experiment, who were met with enthusiasm, the Hungarian scientists mostly faced skepticism.

    What caused such a dichotomy? Though many factors have since been cited, including the reputation of the Hungarian scientists, it seems the true difference between these two experiments comes down to something subtler: expectation, and divergence from it.

    In the sciences, new ideas are often judged by how far they lie outside the systems that scaffold our understanding of the world—systems that are not only scientific but also social. But when it comes to solving our most persistent mysteries in physics, like the composition of dark matter—which has so far resisted all attempts at elucidation by traditional physics—claims from outside this paradigm may be vital.

    The first strike against the beryllium-8 experiment is a scientific one: the model of the universe that it requires needs physics well outside what is predicted by the Standard Model.

    Though the team from UC-Irvine found that the Hungarians’ data did not contradict any existing experiments, the model they proposed to explain the new particle needed to be an intricate one. After all, the scientists had to explain why this new particle would not have shown up in years of previous experiments. They suggested a particle that interacts with neutrons, but not protons, and which experiences a hitherto unknown force with a range about 12 times the size of a proton. It is this model with which many scientists take issue.

    “The question is, why would nature choose such a complicated model just to explain this phenomenon?” said Rouven Essig, a professor of physics at Stony Brook University-SUNY (US)‘s C.N. Yang Institute for Theoretical Physics.

    “We’ve got such a beautiful, nicely consistent theory in the Standard Model,” Essig said. “If something comes along that doesn’t fit any of that, and perhaps requires a unique, intricate new model to make it fit with anything, then that’s when it makes us very skeptical.”

    There are other scientific aspects of the beryllium-8 experiment that can be, and have been, raised as concerns. The Hungarian experimenters work mostly in nuclear physics, not particle physics; their detection came on a single small device, one many orders of magnitude less sensitive than the two massive, top-of-the-line detectors—ATLAS and CMS—which double-check every discovery at the Large Hadron Collider.

    The group also published two previous papers with similar claims of new particles, including a 2008 claim of a potential 12-MeV particle and a 2012 claim of a 13.45-MeV particle. Indeed, the Atomki researchers seem to have a penchant for publishing only papers reporting anomalies from their beryllium-8 experiments, a pattern some see as potential confirmation bias.

    These concerns play into the second strike against the beryllium-8 experiment. However, this second strike is more sociological than scientific.

    “Where things get a little less defensible is that some of [the community’s skepticism] has to do with the fact that this is not a place where we thought physics was going to show up,” explains Tim Tait, a theoretical physicist from UC-Irvine, and a member of the group that brought the Hungarian experiment out from obscurity with their own theory. He says that this assessment applies both to the laboratory the proposed particle came out of, and the category it falls into—being a lighter-mass particle, physicists would expect to have observed it already in previous experiments. Neither is what the community would have anticipated as the source of new physics.

    The Band of Outsiders: If the theory of these Hungary-based physicists is true, it could upend the Standard Model. Credit: Attila Krasznahorkay.

    Tait says that when translated from a personal to a community level, these expectations are what lead to certain types of experiments, or certain places, being perceived as inherently more trustworthy than others. The diphoton bump, he says, is an almost textbook example of this.

    The fluctuation that later became known as the diphoton bump was measured after a renovation, one that allowed the Large Hadron Collider to run at higher energies than before—the sort of energies that physicists expect to produce new particles. A particle that sat well outside of the Standard Model apparently did not seem so dubious in a program run by well-known physicists, in the very same facility where the Higgs Boson had been discovered not too long before—in short, a place where new particles were expected to appear.

    “Lots of people came up with reasons that the diphoton excess was exciting, even though they knew it wasn’t statistically significant,” Tait said. “I think a lot of it comes down to the fact that these experiments at the LHC are very familiar territory. We trust these guys, we think they know what they’re doing, and their detectors are very sophisticated.”

    It makes sense that scientists would rely on heuristics, like reputation and the standard theories of the field, as provisional rules of thumb. In judging a new experiment, scientists have only the experimenter and his/her experiment to go on. And while experimental data is often viewed as the ultimate arbiter between theory and fact, experiments themselves are not infallible. As sociologist of science Harry Collins laid out in his theory of the “experimenter’s regress,” facts can only be produced by good instruments, but good instruments are only considered such if they produce facts.

    A long-standing reputation for doing work that has been replicated, or a set of data that easily slides into what is already understood, provides extra reassurance that this regress (or other potential distortions) played no part in a new idea.

    Yet clearly these heuristics are imperfect. Exhibit A is the diphoton excess: Even with its impeccable pedigree, the new particle turned out to be a statistical fluctuation, and all the attention it received was for naught. Meanwhile, despite general skepticism toward the unknown team out of Hungary, no significant objections have yet been raised about the team’s experimental results.

    Take another example, this time from cosmology. In March 2014, a team working on the BICEP2 telescope, led by a researcher from Harvard–Smithsonian Center for Astrophysics (US), announced that they had detected polarization in the cosmic microwave background [CMB].

    This polarization would signal the existence of gravitational waves created by the rapid expansion of the early universe. It was a discovery that had been long anticipated: it was predicted by the inflationary theory of the universe, which, like the Standard Model, guides cosmologists’ understanding of their field but had never been directly confirmed by observation.


    Alan Guth of M.I.T., who first proposed cosmic inflation.

    Lambda Cold Dark Matter accelerated expansion of the universe. Credit: Alex Mittelmann, Coldcreation.

    Alan Guth’s original notes on inflation.


    Thus, the announcement was received enthusiastically at first. It fit with what was expected and who was expected to find it, and was widely accepted at face value. Over the course of the next year, however, the signal was shown to be attributable to cosmic dust.

    In the face of these dilemmas, the question is: Are there more effective heuristics for judging new results?

    A case could certainly be made for “no.” After all, it’s difficult to conceive of other criteria by which to judge new physics at face value. Being more receptive to outsiders and their theories, and following up diligently on their claims, would flood the field and take up valuable time needed to investigate the most promising leads. The above incidents are also natural failures of the sort we should expect in such a huge and unfathomable field of science.

    What’s more, those who study the history of science see it as unlikely that such a drastic change in culture will occur.

    “The way in which a community behaves is constructed over a long social progress, made by power structures, years of training, reward systems, rules of competition and collaboration between and within different groups,” says Roberto Lalli, research scholar at the Max Planck Institute for the History of Science. He says that history has shown that subcultures within physics—such as theoretical or particle physics—are relatively stable, and that it’s likely that places like CERN and ideas within the paradigm will continue to be considered the most plausible.

    “This attitude is not only due to authority bias, but also has to do with first-hand knowledge of the internal reviewing systems within experimental groups,” Lalli said. “This creates a system of trust, which will not change in a sudden way.” Social pressures, like the continual fight for funding and university positions, also make communities more unwilling to accept those from outside the mainstream.

    But a case still can, and should, be made for seeking new standards for the system. A sterling reputation can be hard to come by in a digital world, where obtaining visibility can be like shouting over a million voices, and the difficulty of the academic job market has spread talent widely beyond the most well-known institutions. Additionally, outsider ideas can help break the echo chamber that comes of only speaking to those within a relatively closed community.

    Indeed, one of the most groundbreaking physicists in history could be framed as an outsider from the onset.

    Albert Einstein was a low-level employee at the Swiss Patent Office when he proposed his special theory of relativity and his photon hypothesis (which theorized that light consists of individual particles, or quanta) in 1905. Though social structures were vastly different within physics in the early 20th century, hindsight of Einstein’s progress suggests that current outsider ideas may simply require time to pass before they can be accepted.

    “The way in which [Einstein’s ideas] became part of the new paradigmatic framework was not rapid,” says Lalli. “It took years and a lot of work, of reformulation of previous knowledge, to fully understand the radical physical implications contained in the new theories.” For example, Einstein’s photon hypothesis was nearly universally rejected at first, and was only accepted in the late 1920s after the discovery of the Compton effect.

    For the same sort of thing to happen today, Lalli says it “might not necessarily involve a change in culture. Rather, new ideas coming from unexpected places would gradually be included in the mainstream culture.”

    As for the trusted standard of the Standard Model, physicists readily acknowledge that there’s much outside the current theories that we likely do not know. As Essig put it: “We often judge new physics models by how well they can explain phenomena and how simple they are, but Nature may or may not care about our taste.”

    This also wouldn’t be the first time that nature was revealed to be much more complex than humans expected. In the beginning of the 20th century, when scientists first began experimenting on the scale of the atom, they saw particles behaving with zero regard for the laws of physics they understood. The birth of quantum physics required scientists in the field to rethink everything they knew about the laws of the universe—in essence, to throw out their textbooks.

    A new culture of particle physics as a field of small experiments from outsider physicists, as well as huge ones from trusted groups, would not take such a dramatic transformation. It would perhaps require just the writing of a few new chapters.

    See the full article here.



  • richardmitnick 8:38 am on July 18, 2021 Permalink | Reply
    Tags: "Astronomers Find Secret Planet-Making Ingredient- Magnetic Fields", Nautilus (US)

    From Nautilus (US) : “Astronomers Find Secret Planet-Making Ingredient- Magnetic Fields” 

    From Nautilus (US)

    Robin George Andrews

    Supercomputer simulations that include magnetic fields can readily form midsize planets, seen here as red dots. Credit: Hongping Deng et al.

    Scientists have long struggled to understand how common planets form. A new supercomputer simulation shows that the missing ingredient may be magnetism.

    We like to think of ourselves as unique. That conceit may even be true when it comes to our cosmic neighborhood: Despite the fact that planets between the sizes of Earth and Neptune appear to be the most common in the cosmos, no such intermediate-mass planets can be found in the solar system.

    The problem is, our best theories of planet formation—cast as they are from the molds of what we observe in our own backyard—haven’t been sufficient to truly explain how planets form. One study, however, published in Nature Astronomy in February 2021, demonstrates that by taking magnetism into account, astronomers may be able to explain the striking diversity of planets orbiting alien stars.

    It’s too early to tell if magnetism is the key missing ingredient in our planet-formation models, but the new work is nevertheless “a very cool new result,” said Anders Johansen, a planetary scientist at the University of Copenhagen [Københavns Universitet](DK) who was not involved with the work.

    Until recently, gravity has been the star of the show. In the most commonly cited theory for how planets form, known as core accretion, hefty rocks orbiting a young sun violently collide over and over again, attaching to one another and growing larger over time. They eventually create objects with enough gravity to scoop up ever more material—first becoming a small planetesimal, then a larger protoplanet, then perhaps a full-blown planet.

    Yet gravity does not act alone. The star constantly blows out radiation and winds that push material out into space. Rocky materials are harder to expel, so they coalesce nearer the sun into rocky planets. But the radiation blasts more easily vaporized elements and compounds—various ices, hydrogen, helium and other light elements—out into the distant frontiers of the star system, where they form gas giants such as Jupiter and Saturn and ice giants like Uranus and Neptune.

    But a key problem with this idea is that for most would-be planetary systems, the winds spoil the party. The dust and gas needed to make a gas giant get blown out faster than a hefty, gassy world can form. Within just a few million years, this matter either tumbles into the host star or gets pushed out by those stellar winds into deep, inaccessible space.

    For some time now, scientists have suspected that magnetism may also play a role. What, specifically, magnetic fields do has remained unclear, partly because of the difficulty in including magnetic fields alongside gravity in the computer models used to investigate planet formation. In astronomy, said Meredith MacGregor, an astronomer at the University of Colorado-Boulder (US), there’s a common refrain: “We don’t bring up magnetic fields, because they’re difficult.”

    And yet magnetic fields are commonplace around planetesimals and protoplanets, coming either from the star itself or from the movement of starlight-washed gas and dust. In general terms, astronomers know that magnetic fields may be able to protect nascent planets from a star’s wind, or perhaps stir up the disk and move planet-making material about. “We’ve known for a long time that magnetic fields can be used as a shield and be used to disrupt things,” said Zoë Leinhardt, a planetary scientist at the University of Bristol (UK) who was not involved with the work. But details have been lacking, and the physics of magnetic fields at this scale are poorly understood.

    “It’s hard enough to model the gravity of these disks in high enough resolution and to understand what’s going on,” said Ravit Helled, a planetary scientist at the University of Zürich[Universität Zürich](CH). Adding magnetic fields is a significantly larger challenge.

    In the new work, Helled, along with her Zurich colleague Lucio Mayer and Hongping Deng of the University of Cambridge (UK), used the Piz Daint supercomputer, the fastest in Europe, to run extremely high-resolution simulations that incorporated magnetic fields alongside gravity.

    Magnetism seems to have three key effects. First, magnetic fields shield certain clumps of gas—those that may grow up to be smaller planets—from the destructive influence of stellar radiation. In addition, those magnetic cocoons also slow down the growth of what would have become supermassive planets. The magnetic pressure pushing out into space “stops the infalling of new matter,” said Mayer, “maybe not completely, but it reduces it a lot.”

    The third apparent effect is both destructive and creative. Magnetic fields can stir gas up. In some cases, this influence disintegrates protoplanetary clumps. In others, it pushes gas closer together, which encourages clumping.

    Taken together, these influences seem to result in a larger number of smaller worlds and fewer giants. And while these simulations only examined the formation of gassy worlds, in reality those protoplanets can accrete solid material too, perhaps becoming rocky worlds instead.

    Altogether, these simulations hint that magnetism may be partly responsible for the abundance of intermediate-mass exoplanets out there, whether they are smaller Neptunes or larger Earths.

    “I like their results; I think it shows promise,” said Leinhardt. But even though the researchers had a supercomputer on their side, the resolution of individual worlds remains fuzzy. At this stage, we can’t be totally sure what is happening with magnetic fields on a protoplanetary scale. “This is more a proof of concept, that they can do this, they can marry the gravity and the magnetic fields to do something very interesting that I haven’t seen before.”

    The researchers don’t claim that magnetism is the arbiter of the fate of all worlds. Instead, magnetism is just another ingredient in the planet-forming potpourri. In some cases, it may be important; in others, not so much. Which fits, once you consider the billions upon billions of individual planets out there in our own galaxy alone. “That’s what makes the field so exciting and lively,” said Helled: There is never, nor will there ever be, a lack of astronomical curiosities to explore and understand.

    See the full article here.



  • richardmitnick 10:02 pm on July 8, 2021 Permalink | Reply
    Tags: "The Billion-Dollar Telescope Race", ESO ELT, Nautilus (US)

    From Nautilus (US) : “The Billion-Dollar Telescope Race” 

    From Nautilus (US)

    March 13, 2014 [Re-issued 7.7.21]
    Mark Anderson

    How three groups are competing to make the first extremely large telescope.

    When Warner Brothers animators wanted to include cutting-edge astronomy in a 1952 Bugs Bunny cartoon,[1] they set a scene at an observatory that looks like Palomar Observatory in California.

    The then-newly unveiled Hale Telescope, stationed at Palomar, had a 5-meter-diameter mirror, the world’s largest. In 1989, when cartoonist Bill Watterson included a mention of the world’s largest telescope in a “Calvin and Hobbes” cartoon,[2] he again set the action at Palomar. Although computers had grown a million times faster during those 38 years, and eight different particle colliders had been built and competed for their field’s top ranking, astronomy’s king of the hill stayed perched on its throne.

    This changed in 1992, with the introduction of the Keck telescope and its compound, 10-meter mirror.

    About a dozen 8-to-10-meter telescopes have been built since.

    But it has been more than 20 years since this last quantum leap in telescope technology. Now, finally, the next generation is coming. Three telescopes are on their way, and the race among them has already begun.

    Three new observatories are on the drawing boards, all with diameters, or apertures,[3] between 25 and 40 meters, and all with first light estimated for 2022: the Giant Magellan Telescope (GMT, headquartered in Pasadena, Calif.); the Thirty Meter Telescope (TMT, also in Pasadena); and the European Extremely Large Telescope (E-ELT, headquartered in Garching, Germany). At stake are the mapping of asteroids, dwarf planets, and planetary moons in our solar system; imaging whole planetary systems; observing up close the Goliathan black hole at the Milky Way’s core; discovering the detailed laws governing star and galaxy formation; and taking baby pictures of the farthest objects in the early universe.

Thanks to these telescopes, astronomy is poised to reinvent itself over the next few decades. Renown and glory, headlines and prestige, and perhaps a few Nobel Prizes, will go to those astronomers who first reveal a bit of new cosmic machinery. Surprisingly, the story of this race will be written not just in the technical specifications and design breakthroughs of the instruments themselves, but also in the organizational approaches that each team has taken. The horse race is a unique window both into technology and into the process of science itself.

    In the 50 years following its 1949 construction, the telescope that came closest to beating the performance of the Palomar Observatory was the Bolshoi Teleskop Alt-azimutalnyi (BTA-6), a Soviet telescope that used a 6-meter mirror and was christened in 1975.

    BTA-6 [Большой Телескоп Альт-азимутальный] Large Altazimuth Telescope, a 6-metre (20 ft) aperture optical telescope at the Special Astrophysical Observatory located in the Zelenchuksky District on the north side of the Caucasus Mountains in southern Russia.

But the BTA-6 only proved how difficult it is to build and operate single-mirror telescopes larger than 5 meters. Its mirror was so elephantine that it cracked under its own weight, and the heat from the light it collected destabilized its sensitive optics. As a result, among productive astronomical observatories, Palomar remained the world’s most powerful until 1992, when the 10-meter W.M. Keck Observatory telescope in Hawaii first opened its eyes.

Keck’s history began with a single American physicist, Jerry Nelson, an upstart scientist at the DOE’s Lawrence Berkeley National Laboratory. In 1977, conventional wisdom held that a 10-meter instrument, just as subject to gravity’s warp as BTA-6, was extremely impractical if not downright impossible. Nelson’s innovation was to rely not on one mirror but on a honeycomb-like structure of small, hexagonal mirror segments. Each flexible, lightweight mirror would be independently mounted and would have a curvature unique to its placement in the array. A mirror in the center would be curved upward on all six sides. A mirror placed off-axis to the right would curve down on its left edges and up on its right. The sum total of the hexagonal mirrors would be a meta-mirror that behaved exactly like a single curved mirror.

    Nelson’s design was made more complex by the fact that as the telescope’s body rotated, each mirror needed to be adjusted on the fly by arrays of computerized screws and flywheels that nudged the mirrors so as to compensate for gravity’s pull [Active Optics].

    “I remember when Jerry Nelson used to give these talks,” says Michael Bolte, TMT member and astronomy professor at the University of California-Santa Cruz (US). “Everybody thought he was completely nuts. They thought, if you get out in the real world where the wind blows and gravity vectors change and humidity changes, surely this would never work.” Even today, astronomers’ skepticism seems well warranted. To operate a 10-meter telescope using Nelson’s design required continual monitoring and adjustment of each mirror segment’s position to within a few billionths of a meter.

    On top of that, Keck later implemented the further innovation of using another array of computers to monitor minute disturbances in the atmosphere above the telescope. Then an additional, deformable mirror down the line could compensate for the tiny thermal wiggles that the atmosphere introduces to a star’s image.

    In other words, Nelson hoped to design a telescope that could “subtract” off the influence of the earth’s atmosphere, all but launching his instrument into space without ever lifting it off the ground. (Such adaptive optics are being used in all three next-generation telescope projects.)

    No wonder, then, that many leading astronomers in the 1980s and early 1990s had written off Nelson’s scheme. A 1993 Los Angeles Times profile of Nelson, for instance, quotes an anonymous source it describes as “one of the nation’s top telescope designers.” The anonymous source rated Nelson an “arrogant fool” and predicted that the W.M. Keck Observatory’s $200 million price tag would ultimately just be money down the drain.

    Yet when in 1992 the Keck telescope—followed by its cousin Keck II in 1996—instead delivered on its designers’ promise of ushering in a new era of 10-meter class astronomy, other observatories around the world were caught by surprise.

    “These problems you’d been working on your whole career, after one night on Keck, you’d have all the data you’d need,” Bolte says. “We were actually unpopular with much of the world. And there were many people who, when we started thinking about a 30-meter telescope, swore they’d never get ‘Keck’ed again.”

The history of the Keck design continues to color the field. One of the three teams, TMT, has directly inherited Keck’s design and many of its team members, including Nelson. TMT will also share mountaintop space with Keck, on the dormant Mauna Kea volcano in Hawaii. Its new design is an extension of Keck’s segmented hexagonal mirror to the 30-meter scale. TMT’s Bolte adds, with no more than the tiniest amount of relish, that the competing E-ELT team developed a similar plan to the TMT/Keck’s, even without any legacy or institutional inertia pushing it toward one telescope design or another.

    “I don’t want to sound like I’m criticizing anybody here,” he says. “But I think if you were really going to design a telescope from scratch, a 25 to 30 meter telescope, you’d almost certainly pick the TMT design over the GMT design. That is, small segments with very tiny gaps. As evidence for that, the Europeans could have done whatever they wanted. They had a clean slate… They did the cost benefit analysis and concluded that a telescope very much like the TMT was the way to build.”

    In fact, TMT and E-ELT’s mirror segments are exactly the same size scale, 1.44 meters. (They’re not interchangeable, though, as each mirror has a different curve and warp.) Asked why his team picked the same mirror component size as TMT, Tim de Zeeuw, [then] director general of the European Southern Observatory (ESO), noted that “there is … no formal intention to collaborate on the production of segments, but since the sizes are the same it is however also not impossible.” The TMT will use its 492 hexagonal mirrors to create an effective 30-meter aperture. E-ELT, to be sited on a mountaintop observatory in the Atacama Desert near Antofagasta, Chile, will use 798 hexagonal mirror segments to create an effective telescope size of 39 meters.

The GMT telescope, by contrast, uses seven circular 8.4-meter mirrors that all reflect into a central convex mirror suspended above the primary mirror. The seven mirrors, to be situated at a mountaintop observatory near La Serena, Chile, together create a meta-mirror with a resolving power equivalent to that of a 24.5-meter single-mirror telescope. The segments (of which [then] one has been completed and two more are being manufactured) use a lightweight honeycomb design that overcomes the 6-meter limit that BTA-6 famously encountered. The University of Arizona (US), a GMT partner institution, is making the mirrors in its U Arizona Steward Observatory Mirror Lab (US), located beneath the university’s football stadium.
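As a rough sanity check on the figures above, the collecting area of a segmented mirror can be tallied up and converted to the diameter of a single circular mirror with the same area. This sketch assumes the 1.44-meter segment size is measured corner to corner and ignores gaps between segments (both my assumptions, not stated in the article):

```python
# Sanity-check the quoted effective apertures from segment counts and sizes.
# Assumption: 1.44 m is the hexagon's long (corner-to-corner) diagonal.
import math

def hex_area(corner_to_corner_m: float) -> float:
    """Area of a regular hexagon; side length is half the long diagonal."""
    s = corner_to_corner_m / 2.0
    return 3 * math.sqrt(3) / 2 * s * s

def equivalent_diameter(total_area_m2: float) -> float:
    """Diameter of a single circular mirror with the same collecting area."""
    return 2.0 * math.sqrt(total_area_m2 / math.pi)

tmt_area = 492 * hex_area(1.44)             # 492 segments
eelt_area = 798 * hex_area(1.44)            # 798 segments
gmt_area = 7 * math.pi * (8.4 / 2) ** 2     # seven 8.4 m circular mirrors

print(f"TMT:   {equivalent_diameter(tmt_area):.1f} m equivalent")   # ~29 m
print(f"E-ELT: {equivalent_diameter(eelt_area):.1f} m equivalent")  # ~37 m
print(f"GMT:   {equivalent_diameter(gmt_area):.1f} m equivalent")   # ~22 m
```

The TMT and E-ELT results land close to their quoted apertures. GMT’s quoted 24.5 meters refers to resolving power, set by the full span of its mirror array, which is why it exceeds the area-equivalent diameter computed here.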

“Completing the first mirror segment was a very significant milestone for us,” says Charles Alcock, director of the Harvard-Smithsonian Center for Astrophysics (US) and member of the GMT board. “It has a very complicated shape; since it’s an off-axis segment, it’s not symmetrical about its center. And it’s being polished to an accuracy of 19 nanometers. So it is the best large optical surface ever created in human history.”

Roger Angel, professor of astronomy and optics at the University of Arizona, was the GMT’s chief architect and intellectual forebear. Alcock notes that although Keck was the first 10-meter class telescope, there are other telescopes—including the Magellan Telescopes in Chile (distinct from the Giant Magellan Telescope) and the Multiple Mirror Telescope and Large Binocular Telescope in Arizona—that do not use the Keck design.

    “The TMT is a direct successor to the Kecks, but with 492 segments, up from 36, it is a significantly different design,” Alcock says. “The GMT design … has as much heritage as—arguably more than—the TMT design.”

    With so much hard science in the balance, one might think that the varying designs of the three competing telescopes would decide which is first past the post. But there is a more prosaic aspect to the competition: Securing partners. This boils down to a kind of musical chairs of international corporations, institutes, and national organizations. “Everybody in our world knows who the potential partners are,” says Alcock. “If we’re talking with somebody, you just know that TMT has probably had some contact with them. I think it’s unlikely that any individual potential partner would join both projects. It’s high stakes in that regard.”

“The GMT realized very early on that they needed to find some more partners to fund their telescope, so we were all running around the world doing the same thing,” says TMT’s Bolte. “We’d show up at the airport in Beijing just as somebody from the E-ELT was leaving. Or we’d run into [GMT leader] Wendy Freedman in the airport in Tokyo. We were all talking up our projects to all of these countries. I don’t know for sure how they made their choices. But I’m really pleased that we got some of the major players to select our project given the choice of all three.”

    A major win for TMT was China, a country whose economic size and scientific stature meant that each telescope’s officials watched its courtship maneuvers closely. China had considered making its own 30-meter class telescope, but the country doesn’t have mountaintop sites that boast the astronomically perfect conditions of the dry Chilean mountains or the Mauna Kea summit. Shude Mao of the National Astronomical Observatories of China at Chinese Academy of Sciences [中国科学院](CN) in Beijing now sits on the TMT board. He says Keck’s impressive track record was an important factor in swaying the world’s second-largest economy toward TMT.

    China’s decision also reveals the different kinds of organizational structures at play in each of the three competing teams. The E-ELT has a European model of national-level cooperation. Joining the E-ELT requires membership in the European Southern Observatory [Observatoire européen austral][Europäische Südsternwarte] (EU) (CL) and a pledge of a small fixed percentage of a country’s gross domestic product (GDP) toward the ESO budget. This would have made membership expensive for China, whose GDP in 2013 was $9 trillion.

Both GMT and TMT have a trimmer, more corporate American-style organization that is part institutional, part national partnership. GMT was less attractive to China, however, because it mandates cash contributions. By contrast, Mao says, 70 percent of China’s contribution to TMT in its earliest stages can be “in kind.” This means China may be required to manufacture and then donate a spectrometer or a certain number of the TMT’s 492 mirror segments. But China could then also do this work in-country, stimulating its own industries. “That is extremely important to us,” Mao says.

By contrast, says Brian Schmidt of the Australian National University (AU), GMT downplays in-kind contributions for a good reason. Schmidt, a Nobel Laureate astronomer and a leader in GMT’s effort to sign on Australia, explains that GMT awards its contracts only on the basis of scientific and technical merit. It plays no favorites in awarding its work orders. “It’s a real minimalist structure that’s focused on really getting the thing done,” Schmidt says of the GMT organization.

    Other big countrywide “gets” round out the early tally. As of early 2014, these have been Brazil signing on to ESO (and thus E-ELT), although this still requires ratification by the Brazilian Parliament; South Korea and Australia putting their weight behind GMT; and China, India, and Japan backing TMT.

    The resulting three-way race, as ESO public information officer Richard Hook describes it, is a kind of portmanteau of cooperation and competition. “You could call the situation ‘Co-opetition,’ ” he says.

For each of the three telescope projects, much of the industrial work and the projected completion dates are well-guarded trade secrets. All three telescopes’ websites leave the exact projected completion date unstated, expressing only the likelihood that each will be completed and doing actual science by 2022. But each community clearly has a common goal in mind: to be first.

    “I’m really hoping we’re still going to be first,” Bolte says of TMT. “We would have liked to have started building this telescope five years ago. I think technically we were ready to do that. What we didn’t have is our partnerships put together.”

    Says ESO’s Hook of his group’s E-ELT, “Yes, the scientific community that we serve is of course keen to be first.” But he goes on to add, “It is certainly possible to overstate the level of competition. All three will be general-purpose telescopes with long lives. They are not focused on one result.”

In fact, every official from the three telescopes whom Nautilus spoke to for this story was careful to couch assessments of the competition in collegial terms. More than once they made clear that they didn’t want to appear to be sniping at telescopes that, in all likelihood, will be as much collaborators as rivals. No one seemed to want to provide justifiable cause for bad blood. But it was plainly obvious that each team is also in a race to the finish.

Regardless of the winner among the three teams, should all three telescopes be built—and no expert consulted for this story predicted any other outcome—they will likely propel astronomy forward as at no other time in modern memory. The only precedent within the professional lifetimes of astronomers working today would be Keck’s debut in 1993. Just what new windows on the universe this trio of extraordinary scientific instruments may open remains anyone’s guess.

    “Our experience with previous generations of telescopes is that people do carry out the science programs that led them to build the telescopes,” says Alcock. “But frequently the most exciting science is something that nobody was thinking about. Something entirely unanticipated.”


1. The last scene of this 1952 Bugs Bunny cartoon featured an observatory that looked suspiciously like the Palomar Observatory in California.

    2. In this 1989 comic, Calvin, disguised as “Stupendous Man,” visits Palomar Observatory.

    3. For more information on the lenses used in telescopes, visit Starizonia.

See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to Nautilus (US). We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

  • richardmitnick 8:34 pm on July 8, 2021 Permalink | Reply
    Tags: "The Planets with the Giant Diamonds Inside", , Nautilus (US),   

    From Johns Hopkins University Applied Physics Lab via Nautilus (US) : “The Planets with the Giant Diamonds Inside” 

    Johns Hopkins University

    From Johns Hopkins University Applied Physics Lab


    Nautilus (US)

    July 7, 2021
    Corey S. Powell

    Mining the mysteries of Uranus and Neptune.

    Tilt: NASA Voyager’s instruments showed that Uranus’ magnetic field is tilted 60 degrees relative to its axis, as if your compass needle pointed to Houston instead of the north pole. This image shows the magnetic field. The yellow arrow points to the sun, the light blue arrow marks Uranus’ magnetic axis, and the dark blue arrow marks Uranus’ rotation axis. Credit: Tom Bridgman/ NASA Goddard Scientific Visualization Studio (US)

    On the dark night of March 13, 1781, William Herschel settled down in his garden observatory in Bath, England, for a routine night of observing stars, when he noticed something out of place in the heavens. Through the eyepiece of his homemade 7-foot telescope, he spied an interloper in the constellation Gemini: “a curious, either nebulous star or perhaps a comet,” as he recorded it. For weeks, he stalked the unknown object, monitoring its steady appearance and circular path around the sun until there could be no doubt about its true identity. He had discovered not a comet but a new planet, far more distant than any of the others.

Stormy Weather: These images of Uranus, taken by the Keck II telescope in Hawaii, are the sharpest, most detailed pictures of the planet to date, according to NASA. The north pole (to the right) is characterized by a swarm of storm-like features, and an unusual scalloped pattern of clouds encircles the planet’s equator. Credit: Lawrence Sromovsky, Pat Fry, Heidi Hammel, Imke de Pater / University of Wisconsin-Madison (US).

    Being a politically astute fellow, Herschel proposed naming the planet Georgium Sidus, or “George’s star,” in honor of King George III. The ploy worked—he promptly was named the king’s astronomer and received a royal stipend—but his colleagues outside of England objected. They wanted a noble and politically neutral name like Urania, the Greek muse of astronomy. In the end, scientists settled on the even more dignified “Uranus,” the ancient Greek god of the sky and ancestor of the other deities. Centuries of snickering ensued.

But seriously. Uranus orbits the sun at twice the distance of Saturn, so Herschel’s discovery instantly doubled the size of the known solar system. From a modern perspective, it’s hard to appreciate how shocking that was. At the time, the solar system was the only charted region of space; nobody yet had a clue how far away even the nearest stars were. In effect, Herschel had doubled the size of the entire known universe. He also brushed away the final traces of classical astronomy and astrology. Uranus is typically described as the first planet discovered since antiquity, but it’s more accurate to say it was the first planet to be discovered, period. All the others are readily visible to the naked eye, and so were known to all. Uranus shattered the common assumption that there were no more planets beyond the six classical ones, establishing an endless-frontier ethos that resonates through science and science fiction to this day.

That ethos is part of daily life for Kirby Runyon, a young geomorphologist at Johns Hopkins University’s Applied Physics Lab, who is developing new ways to study Uranus and its similar-but-bizarrely-different planetary sister, Neptune. Like the handful of others who study this distant duo, he is enthralled by the boundary-busting nature of the solar system’s outermost planets. “What brought me into space science as a professional was the chance to, as Star Trek says, ‘explore strange new worlds’,” Runyon says. “If you like seat-of-your-pants, Captain Picard-style exploration, then Uranus and Neptune have to rank high on your list.”

    You have to admire Runyon’s passion. After all, who dreams of a space voyage to Uranus or Neptune? They are not brightly ringed like Saturn, nor do they hold the prospect of life like Mars. The two planets, though, still hold a special status as worlds on the edge. They formed in chaos, at the boundary between the inner, planetary part of our solar system and the outer zone filled with far-flung comets. In this transitional zone, they also took on transitional forms, with a size and composition that places them halfway between gas giant planets like Jupiter and rocky planets like Earth. Astronomers call these in-betweeners “ice giants,” and are now finding that such midsize worlds are extremely common around other stars. “Neptune and Uranus are the closest analogs in our solar system to the most populous type of planet that we know of,” says Heidi Hammel, a veteran researcher of the outer planets who is now at the Association of Universities for Research in Astronomy (US) in Washington, D.C.

    Uranus and Neptune are also fascinatingly odd in themselves. Their cloudy surfaces are marked with raging storms and the fastest winds recorded on any planet, while high above they have complex systems of moons, including ones that may harbor buried oceans. All the more shame, then, that only a single spacecraft has ever visited them—and that was more than three decades ago. “They’re enigmas because they are so far away,” Hammel says wistfully, “but they are such intriguing enigmas.”

    Long before anyone was poking rockets above Earth’s atmosphere, Uranus was already directing astronomers on a virtual voyage through the solar system and beyond. In the decades after Herschel’s discovery, observations of Uranus indicated that it was deviating from its expected orbit around the sun. By the 1820s, the discrepancy was undeniable: Either Newton’s theory of gravity was wrong, or there was some object beyond Uranus that was tugging it off course. “The Newtonian theory appeared to the great majority, perhaps nearly all, astronomers to be the impregnably true system of the world,” writes science historian Robert W. Smith.[1] At the same time, the discovery of Uranus had already demonstrated the possibility of more new worlds. Faith in the laws of physics dictated that there must be another, unseen planet out there.

    That faith paid off on Sept. 23, 1846, when German astronomer Johann Gottfried Galle—using calculations provided by French mathematician Urbain Le Verrier, in an act of trans-national nerd comity—spotted a new planet less than one degree from its predicted location. The orbital calculations pinpointing its location took years to complete; Galle’s visual search for the planet required all of one hour. Galle suggested naming it Janus, the two-faced Roman god, implying that it was facing outward toward the stars. The more optimistic Le Verrier objected. “Janus would indicate that this planet is the last of the solar system, which there is no reason to believe,” he wrote.[2] Instead, he proposed Neptune, the god of the sea.

    The discovery of Neptune was as transformative as that of Uranus, though in a quite different way. Uranus had already expanded the scale of the known universe. Neptune expanded the means by which we could get to know it. When Galle saw the planet exactly where Newton’s equations said it should be, he demonstrated that astronomers could detect celestial bodies by gravity alone. Now they could track down objects that had never been observed, perhaps even ones so dark or distant that they fundamentally could not be observed.

Modern cosmologists use the term Dark Matter to refer to invisible mass that is thought to influence the formation and structure of galaxies across the universe; in that sense, the term traces back to a 1933 paper by the Swiss-American cosmologist Fritz Zwicky. But the concept of dark matter truly began with Neptune, the first celestial object ever discovered before it was seen. From there, things escalated quickly. German astronomer Friedrich Bessel had been tracking the erratic motions of the bright stars Sirius and Procyon and deduced that they, like Uranus, were being pulled off course by unseen objects. “The existence of numberless visible stars can prove nothing against the existence of numberless invisible ones,” Bessel wrote in 1844.[3] Invisible stars sounded like an oxymoron, but the discovery of Neptune made the outlandish idea seem more plausible. The reality of those dark companions was soon confirmed; in the 1910s the objects were identified as white dwarfs, the faint, collapsed corpses of stars like the sun. Similar detective work in the 1970s led to the discovery of black holes scattered across our galaxy.

In all this excitement, Uranus and Neptune themselves largely got left behind. They languished in scientific obscurity for another century and a half, mostly because they are so damn hard to study. Uranus never comes within 1.6 billion miles of Earth, 40 times as far as Mars; Neptune is a billion miles farther still. Neptune’s apparent size in the sky is equivalent to a dime seen from a mile away. The development of deep-sky photography beginning in the late 19th century greatly boosted the study of our galaxy and led directly to the discovery of countless other galaxies beyond. For the ice giant planets, however, the new technology had the opposite effect. When astronomers stopped looking through the eyepiece and started focusing on photographic plates instead, the planets became even more obscure.
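The dime comparison holds up under the small-angle approximation, where angular size in radians is just diameter divided by distance. The figures below are my own round numbers (Neptune roughly 49,500 km across at about 2.7 billion miles; a US dime 17.9 mm wide), not values given in the article:

```python
# Compare the apparent angular size of Neptune with a dime at one mile.
import math

ARCSEC_PER_RAD = 180 / math.pi * 3600  # ~206,265 arcseconds per radian

def angular_size_arcsec(diameter: float, distance: float) -> float:
    """Small-angle approximation; diameter and distance in the same units."""
    return diameter / distance * ARCSEC_PER_RAD

neptune = angular_size_arcsec(49_500, 4.35e9)   # km and km (~2.7 billion miles)
dime = angular_size_arcsec(0.0179, 1609.34)     # m and m (one mile)

print(f"Neptune: {neptune:.1f} arcsec, dime at a mile: {dime:.1f} arcsec")
```

Both come out near two arcseconds, right at the limit of what ground-based seeing typically allows without adaptive optics.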

Neptune’s Rings: NASA Voyager captured Neptune’s rings in 1989. The long-exposure images were taken while the rings were back-lighted by the sun, which enhances the visibility of dust. The bright glare in the center is due to over-exposure of the crescent of Neptune. The two gaps in the upper part of the outer ring (on the left) are due to blemish removal in the computer processing. Credit: NASA/JPL-Caltech (US).

    “Going back to the 1800s, early observers would look at Uranus and see bands and other features,” Hammel says. Their eyes were trained to pick out the fleeting moments when the air becomes steady and fine details pop into view. Photography and early forms of digital imaging couldn’t capture those split-seconds of clarity. Instead, they yielded blurry, long-exposure images that suggested the outermost planets were bland and unchanging. “The technology of the time smeared everything out, giving rise to this mythology that Uranus had no cloud features,” Hammel laments. “We had a hundred years of misinterpretation.”

    Then at long last humans developed the technology to visit the ice giants and see them up close … and the misinterpretations just kept coming. On Jan. 24, 1986, NASA’s Voyager 2 swooped over Uranus’s cloud tops and sent back picture after picture of a featureless blue-green orb.

    By bad luck, the spacecraft had arrived at the beginning of summer in the planet’s northern hemisphere, a time when the global weather turns hazy and bland. The Voyager 2 images cemented the idea of Uranus as a boring planet—a knock that still galls Hammel. “It was like, ‘Let me show you a picture of what I looked like on one day in 1986.’ That doesn’t give you an understanding of who I am as a person,” she says.

    The Voyager flyby did offer hints that there’s more to Uranus than meets the eye. The planet has a system of thin, dark rings, which turn out to contain large chunks—possibly the remains of a moon that was destroyed long ago. More surprising, Voyager’s instruments showed that Uranus’s magnetic field is tilted 60 degrees relative to its axis, as if your compass needle pointed to Houston instead of the north pole. There must be a huge, lopsided magnetic generator cranking away inside the planet, which leaves Hammel buzzing with questions: “What kind of internal structure can do that? Is it stable? Does it change over time?”

    But the reputation of the ice giants didn’t really recover until Voyager 2 reached Neptune on Aug. 25, 1989. Unlike its sibling, Neptune was a riot of activity. It seemed to be staring back at the spacecraft with its Great Dark Spot, an anticyclone storm (a hurricane in reverse, with a high-pressure eye) nearly as large as the entire Earth. The Spot was streaked with white clouds of methane ice and surrounded by smaller storms and dark bands circling the entire planet, all tinged a rich, deep blue by methane gas in Neptune’s atmosphere. Beautiful, complex, and not at all boring.

    The Voyager results revealed that weather operates differently on ice giants than it does here on Earth, for reasons that scientists are only starting to decipher. “Wind speeds increase as you go farther out from the sun, which is weird,” says Amy Simon, a Uranus and Neptune enthusiast at NASA’s Goddard Space Flight Center. On Uranus, they blow at 550 mph, as fast as a jet airplane at cruising speed. On Neptune, the winds are even fiercer, averaging 700 mph and gusting to 1,500 mph around the Great Dark Spot. They manage to pick up tremendous energy, even though Neptune receives just 1/900th as much solar heat as Earth does.
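The 1/900th figure is a direct consequence of the inverse-square law: sunlight spreads over a sphere whose area grows as the square of distance, so flux falls as 1/r². Taking Neptune's orbit as roughly 30 AU (an approximate value of mine, not stated in the article):

```python
# Solar flux relative to Earth's, by the inverse-square law.
# Distance is in astronomical units, so Earth = 1 AU = flux of 1.

def relative_solar_flux(distance_au: float) -> float:
    """Fraction of Earth's solar flux received at the given distance."""
    return 1.0 / distance_au ** 2

fraction = relative_solar_flux(30.0)
print(f"Neptune receives about 1/{1 / fraction:.0f} of Earth's sunlight")
```

At 30 AU the flux is 1/900th of Earth's, matching the article's figure almost exactly.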

    Seasons on the ice giants are also unlike anything seen on Earth or anywhere else. For one thing, the seasons are extreme, especially on Uranus: The planet is tipped sideways, so its poles spend half the time in perpetual sunshine and the other half in total darkness. For another, seasons take a long time to change, because the ice giants follow huge, lazy paths around the sun. Uranus takes 84 years to complete a single orbit, and Neptune takes 165 years. Neptune’s northern hemisphere was heading into winter when Voyager flew by in 1989.[4] Springtime won’t arrive until 2038. The ice giants have both the fastest and the slowest climates in the solar system, which makes them useful as extreme natural laboratories. “We run the same weather and climate codes we use on Earth, and we learn about unknown sensitivities or details that aren’t quite right,” Simon says. “And if someday we want to apply these codes to planets around other stars, they’d better work across our whole solar system first.”
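The orbital periods quoted above follow from Kepler's third law, T² = a³ with T in years and a in astronomical units. The semi-major axes used here (Uranus about 19.2 AU, Neptune about 30.1 AU) are my own approximate figures, not from the article:

```python
# Orbital period from Kepler's third law: T (years) = a (AU) ** 1.5.

def orbital_period_years(semi_major_axis_au: float) -> float:
    """Period of a sun-orbiting body from its semi-major axis in AU."""
    return semi_major_axis_au ** 1.5

print(f"Uranus:  {orbital_period_years(19.2):.0f} years")   # ~84
print(f"Neptune: {orbital_period_years(30.1):.0f} years")   # ~165
```

The law also explains why the seasons crawl: quadrupling a planet's distance from the sun multiplies its year, and hence each season, by eight.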

    In 2014, belatedly acknowledging the mad complexity of weather on the ice giants, NASA greenlit the Outer Planet Atmospheres Legacy program, or OPAL, with Simon in charge; her glorious official title is Senior Scientist for Planetary Atmospheres Research. Once a year, OPAL takes over the Hubble Space Telescope and turns it into an outer-planet weather satellite, monitoring conditions on Uranus and Neptune (and Jupiter and Saturn, for good measure), with Simon as the interplanetary weathercaster.

    For the first time, scientists have both the time and the clarity of vision to learn the long-term personalities of Uranus and Neptune. Simon’s reserved demeanor lights up when she describes the quirks she has been observing on her planets. As Uranus has progressed from northern summer to late autumn, its weather has transitioned from hazy to crazy. “We see a polar cap that has gotten really bright and thick. And we’ve seen little storms. They tend to break apart really fast, on order of an hour, because the high winds shear things apart very quickly,” she says. Those changes demonstrate that even tiny variations in solar energy can transform the weather of a giant planet, nearly four times the size of Earth.

    On Neptune, the most significant finding from OPAL is that the activity just never ends. “The big thing we’ve found has been the dark spots. In our few years of monitoring, we’ve seen two more of them. One was already there when we started, and it disappeared over a couple of years. The other one formed in 2019. After they form, the dark spots start to drift in latitude, just like a hurricane on Earth, until eventually they break apart,” Simon says.

    It’s not clear what maintains all of this activity. One possible explanation, Hammel notes, is that the planets’ ultra-cold air is almost frictionless, so it takes very little kick to get big storm systems going. One important clue is that the two ice giants behave quite differently, despite being almost identical in mass, composition, and diameter. “Uranus and Neptune don’t look much alike at all. We see a lot more of the discrete storms on Neptune than on Uranus, and we don’t see much of a bright polar cap,” Simon says.

    The disparity hints at stark differences deep inside the two ice giants. Voyager measurements showed that Neptune emits 2.7 times as much heat as it receives from the sun, apparently retaining a lot of energy from the time of its formation. Uranus, in contrast, sheds just a trickle of additional warmth. “Internal heat’s got to be a much bigger factor than the sunlight in driving the weather, but we’re still trying to puzzle some of that out,” says Simon. If anything, it adds another layer of mystery: Why is Neptune so much hotter under the collar than Uranus? One hypothesis is that Uranus had a near-fatal encounter with a huge proto-planet, twice as massive as Earth, some 4.5 billion years ago. The collision could have knocked it over, explaining its sideways tilt, while scrambling its interior in a way that allowed its primordial warmth to escape: one tidy explanation for two major oddities.

    “It’s a nice idea, but it seems a bit contrived,” Simon says, her measured tone returning. “Every time we think we understand these planets, we realize we don’t.”

    One way to learn more about the ice giants is to get under their skin, by recreating them in the lab. Based on what they can measure and infer about their overall composition, planetary scientists have deduced that both Uranus and Neptune must contain vast quantities of water, ammonia, and methane on the inside. In everyday life, we’d call that combination “Windex and natural gas.” Inside the ice giants, however, these molecules mingle together into a slush that astronomers refer to generically as “ice”—hence the term ice giants. Recent experiments show that it is not like any ice you have ever seen, however.

    Dominik Kraus, a physicist at the University of Rostock [Universität Rostock] (DE), leads a group of researchers who have been shooting X-ray lasers at simulated bits of ice-giant material, heating and compressing it to match conditions in the planets’ interiors. He finds that the carbon atoms spontaneously break free of their molecules and arrange themselves into diamonds. Inside Uranus and Neptune, such diamonds could grow to the size of a person, slowly raining down toward the core of the planet. The diamond rain would release energy and stir up huge currents, possibly explaining the unusual magnetic fields of Uranus and Neptune.

    In parallel work, Marius Millot at DOE’s Lawrence Livermore National Laboratory (US) and his colleagues subjected water molecules to similarly extreme conditions and found that the water turns into a previously unseen material, “superionic ice”—a hot, black, crystalline version of water, also known as Ice XVIII. It sounds exotic, but maybe it shouldn’t. Given how much water is inside the ice giants, and given how many ice giants astronomers are finding around other stars, superionic ice may actually be the most common form of water in the universe. Black ice and diamond rain could be the norm; lakes and rivers and lumps of coal may be the cosmic oddities.

    Another way to know more about the ice giants is to look at the company they keep—the large systems of moons that orbit Uranus and Neptune. Like Uranus itself, its moons are tilted at a rakish 98-degree angle. No other planet is oriented that way. Whatever knocked the planet over evidently took its moons along for the ride. “If a large impact tipped the whole planet on its side, then the gravitational excess from Uranus’s equatorial bulge would have pulled its whole system of moons to be on its side as well,” says Runyon.

    The moons of Neptune document a whole other style of disaster, one that spared the planet but unleashed pandemonium around it. Neptune’s system of moons is overwhelmingly dominated by a single large one, Triton, surrounded by 13 much smaller bodies, mostly in irregular, looping orbits. Triton orbits the planet in a retrograde direction, backward relative to Neptune’s rotation and unlike every other major moon in the solar system, indicating that it formed separately and later got snared by Neptune. When that happened, it must have rolled through the Neptunian system like a bowling ball, as Runyon explains: “The gravitational interactions from Triton probably scattered Neptune’s original system of moons. If there were rings, it would have scattered them, too.” The current moons either reassembled from the wreckage or were captured afterward. Triton also left behind a set of thin, clumpy rings that bunch together in arcs, unlike any other formation in the solar system.

    The Voyager 2 flybys of the 1980s unveiled the ice-giant moons as a set of distinctive worlds. The five main moons of Uranus display ancient chasms, ripples, and hints of volcanic flows—all made of frozen water and other ices instead of rock. Two of them, Ariel and Titania, appear to have been geologically active for an extended period of time. The smallest, Miranda, is a jumble of formations that looks like a jigsaw puzzle that was put together by an inattentive child; nobody knows how it got that way. But the true marvel is Triton, a geologically youthful world that resembles a cantaloupe, its surface sculpted by “cryovolcanic” eruptions of water-ammonia lava. Triton is also dotted with active geysers, likely caused by the explosive defrosting of underground nitrogen. To the amazement of mission scientists, Voyager 2 sent back images of sooty plumes shooting 5 miles into the air and trailing for hundreds of miles.

    Triton broadly resembles Pluto, but it is in many ways the wilder and more exciting of the duo (not to keep dumping on the dwarf planet, but facts are facts). It is about 15 percent larger than Pluto and, more significantly, it is more geologically active, with liquid water sloshing away underground. That’s right: A moon around the coldest, most distant planet in the solar system contains a huge, underground ocean. “Triton is exchanging gravitational energy with Neptune, so it’s warm and gooey on the inside,” Runyon says. “On Earth, living things like warm, gooey places. If you put Earth microbes in that warm, gooey Triton ocean, they would probably survive and proliferate. Which raises the possibility, since we don’t really know how things go from non-life to life, that Triton could be a habitable and inhabited world.”

    He’s not saying aliens, mind you. He’s just saying there could be aliens.

    Despite all of these insights, we are in many ways still at the handshake stage of getting to know the ice giants. The OPAL program watches the planets for less than one day a year. Voyager 2 gathered only limited information about the planets’ compositions and internal structures. The flybys happened so quickly that the spacecraft saw just one side of the Uranian and Neptunian moons. “There’s that whole unexplored other half. We don’t know what the heck is happening on the rest of Triton,” Simon says.

    The yearning for deeper familiarity is even more acute now that astronomers recognize Uranus and Neptune as prototypes of billions of similar worlds all across our galaxy. Right now, these exoplanets—planets around other stars—are true cyphers. Astronomers can deduce their sizes, masses, densities, and not much else. Still, that’s enough to tell that many of them seem like slightly shrunken versions of Neptune, with thick, toxic atmospheres. Others, just a wee bit smaller, seem to be rocky “super-Earths.” Nobody knows why this dividing line exists, or whether super-Earths could be habitable. For that matter, nobody yet knows whether ocean moons like Triton can support life. When you’re dealing with ice giants, you get used to the three-word mantra: We don’t know.

    That mantra explains why Runyon just completed an intense round of work as project scientist on Neptune Odyssey, a proposed flagship—that is, multi-billion-dollar—NASA mission that would perform an extended survey of Neptune and Triton while dropping a probe into Neptune’s atmosphere.

    The technology exists to mount an ambitious expedition like this. Even the sober analysts at the National Academy of Sciences have identified an ice-giant mission as a high scientific priority. Unfortunately, these kinds of projects keep getting shot down. Earlier this year, NASA came close to approving Trident, a stripped-down mission to Triton, but passed it over in favor of a pair of probes to Venus.

    A big part of the problem is the waiting. It took Voyager 2 a dozen years to reach Neptune. If Trident had been approved, it wouldn’t have reached its destination until 2038—and even then, it would have sent back just another snapshot. If you want to study the ice giants, you have to adapt to their pace of doing things. No one researcher is going to live long enough to witness a full cycle of seasons on Uranus, much less on Neptune. The last mission to an ice giant happened 32 years ago, and realistically the next one is not likely to arrive until the 2040s at the earliest; this is inevitably going to be a multi-generational effort. Heidi Hammel (age 0.37 Neptune years) has been at it so long that she has largely moved on to administrative work. “This is sad to say, Corey, but I kind of don’t do astronomy anymore,” she confesses. But she’s encouraged to see people like Kirby Runyon (a sprightly 0.21 Neptune years) entering the field.
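    The “Neptune years” ages in this paragraph are simple orbital-period arithmetic. A minimal sketch in Python, assuming the standard rounded orbital periods (the constants and helper names are illustrative, not from the article):

    ```python
    # Illustrative arithmetic only: converting ages between Earth years and
    # "Neptune years," as the article does for Hammel (0.37) and Runyon (0.21).
    URANUS_YEAR = 84.0    # one Uranus orbit, in Earth years
    NEPTUNE_YEAR = 164.8  # one Neptune orbit, in Earth years

    def to_neptune_years(earth_years: float) -> float:
        """Express a span of Earth years as a fraction of Neptune's orbit."""
        return earth_years / NEPTUNE_YEAR

    def to_earth_years(neptune_years: float) -> float:
        """Convert an age given in Neptune years back to Earth years."""
        return neptune_years * NEPTUNE_YEAR

    # The article's figures, converted back to Earth years:
    print(round(to_earth_years(0.37)))  # Hammel: about 61 Earth years
    print(round(to_earth_years(0.21)))  # Runyon: about 35 Earth years

    # A full cycle of seasons is one orbit, so even a long career covers
    # only a fraction of a season cycle on either planet:
    print(to_neptune_years(40))  # a 40-year career is ~0.24 Neptune years
    ```

    The same division against URANUS_YEAR shows why the article singles out Uranus as the marginally less hopeless case: its year is roughly half of Neptune’s.
    
    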

    Until someone invents warp drive or the like, there is no way to overcome the obstacle of time. The only path forward is embracing extreme patience as the cost—or the joy—of pressing into the unknown. After he discovered Uranus, Herschel explained that it was not luck but persistence that brought the planet into view. “I examined every star of the heavens,” he wrote. That night in 1781 was Uranus’ “turn to be discovered.” Perhaps now it is the ice giants’ turn to be truly known, in all of their weird and wonderful glory.


