Tagged: Nautilus (US)

  • richardmitnick 11:13 am on January 2, 2022
    Tags: "Planets Are Born from Dust Trap Rings", Astronomers propose that the planets formed from three separate rings of planetesimals within a gaseous disk around the sun., Dust traps can be associated with specific important molecules evaporating and condensing., Gravity and a mess of other forces conspired to build our solar system., Nautilus (US), One can directly connect each ring with a specific region of our solar system., Planetesimals can coalesce spontaneously whenever there’s enough dust clumped within a specific location within a gas-dominated disk., Scientists often see ringed traps in images of planet-forming disks., The asteroid belt, The outer ring of planetesimals corresponds to the present-day Kuiper belt., The three-ring model reproduces what one might call our solar system’s orbital architecture.

    From Rice University (US) via Nautilus (US): “Planets Are Born from Dust Trap Rings” 

    From Rice University (US)


    Nautilus (US)

    Dec 30, 2021

    Sean Raymond
    Andre Izidoro
    Rajdeep Dasgupta

    The ALMA telescope, in Chile, sensitive to millimeter-sized dust, took these images of planet-forming disks. Credit: S. Andrews et al./Atacama Large Millimeter/submillimeter Array (CL) (European Southern Observatory (EU)/National Astronomical Observatory of Japan (JP)/National Radio Astronomy Observatory (US)); S. Dagnello/NRAO/Associated Universities Inc. (US)/National Science Foundation (US).

    European Southern Observatory/National Radio Astronomy Observatory (US)/National Astronomical Observatory of Japan (JP) ALMA Observatory (CL).

    All we are is dust in the wind, man. The same goes for the planets and asteroids and comets. Starting from our dusty beginnings, gravity and a mess of other forces conspired to build our solar system. There’s a venerable tradition of trying to figure out what that grand and hectic process must have looked like. Today, with the aid of sophisticated simulations, scientists can meticulously tinker with models that might show us how the solar system got to be the way it is now. In a new Nature Astronomy paper, we take a step forward by building on some of the most compelling ideas to date.

    This has long been an alluring challenge. The German bigwig philosopher Immanuel Kant and one of France’s most important scientific theorists, Pierre-Simon Laplace, both 18th-century thinkers, thought it all started from a disk of dust and gas going round the young sun. Later, about a century ago, Thomas Chamberlin and Forest Moulton proposed a different idea: that the planets formed out of city- to county-sized rocky bodies called planetesimals. It turns out that both ideas are basically correct. Anyone today can spend an hour looking through NASA images from telescopes revealing disks around young stars that astronomers suspect are forming planets as you read. Asteroids and comets are evidence that rocky and icy planetesimals formed throughout the solar system.

    A Frankensteinian monster of a model.

    We propose that the planets formed from three separate rings of planetesimals within a gaseous disk around the sun. This model also makes sense of planetary disks around other stars, connecting them with the orbits of our solar system’s planets and asteroids, as well as chemical measurements of meteorites. Why rings of planetesimals? This concept starts to make sense if you squint at the solar system from a large distance and imagine spreading out the mass that makes up the planets: Almost all of the rocky material concentrates between the orbits of Venus and Earth with very little mass closer to the sun or in the asteroid belt, while a little farther from the sun, Jupiter and Saturn make up a huge amount of mass that tapers off to the outer solar system. But what determines the properties of these rings?
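    That lopsided distribution is easy to check from standard planetary masses. A quick illustration of the point above (our own sketch, not part of the model; values are the usual planet masses in Earth masses):

```python
# Standard planet masses in Earth masses. Grouping them shows how sharply
# the solar system's mass concentrates: rocky mass around Venus/Earth,
# giant-planet mass around Jupiter/Saturn, tapering off farther out.
masses = {
    "Mercury": 0.055, "Venus": 0.815, "Earth": 1.0, "Mars": 0.107,
    "Jupiter": 317.8, "Saturn": 95.2, "Uranus": 14.5, "Neptune": 17.1,
}

rocky = sum(masses[p] for p in ("Mercury", "Venus", "Earth", "Mars"))
giants = sum(masses[p] for p in ("Jupiter", "Saturn", "Uranus", "Neptune"))
venus_earth = masses["Venus"] + masses["Earth"]
jup_sat = masses["Jupiter"] + masses["Saturn"]

print(f"Venus + Earth hold {venus_earth / rocky:.0%} of all rocky-planet mass")
print(f"Jupiter + Saturn hold {jup_sat / giants:.0%} of all giant-planet mass")
```

    Venus and Earth carry roughly nine-tenths of the rocky mass, and Jupiter and Saturn a similar share of the giant-planet mass, which is the concentration the rings are meant to explain.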

    Planetesimals can coalesce spontaneously whenever there’s enough dust clumped within a specific location within a gas-dominated disk. Dust grains grow in dust-grain collisions, and when they reach roughly a millimeter in size, they start to experience drag as if biking against the wind. This causes large dust grains to drift inward, toward the sun. Modeling has shown that there exist dust “traps” within the disk, associated with bumps in the local gas pressure.
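    A cartoon of that trapping (our own toy sketch, not the authors’ simulation; the pressure profile and drift coupling are made-up illustrative numbers): give grains a drift velocity proportional to the local gas-pressure gradient, and they pile up wherever a bump creates a local pressure maximum.

```python
import numpy as np

# Toy dust trap: log gas pressure = a smooth power-law decline plus a
# Gaussian "bump" at r = 5 (arbitrary units). Drag makes grains drift up
# the pressure gradient, so they collect at the local pressure maximum.

def log_pressure(r):
    background = -1.5 * np.log(r)                      # pressure falls outward
    bump = np.exp(-((r - 5.0) ** 2) / (2 * 0.5 ** 2))  # the dust trap
    return background + bump

def dlnP_dr(r, eps=1e-4):
    # Numerical derivative of the log-pressure profile.
    return (log_pressure(r + eps) - log_pressure(r - eps)) / (2 * eps)

def drift(radii, coupling=0.05, steps=4000):
    r = np.array(radii, dtype=float)
    for _ in range(steps):
        r += coupling * dlnP_dr(r)  # drift toward higher pressure
    return r

final = drift([4.0, 4.5, 5.5, 6.5])
print(final)  # all four grains settle at the local pressure maximum, r ≈ 4.9
```

    Grains starting well inside the bump would instead keep drifting toward the star, which is one reason a disk needs several traps to stop dust at several radii.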

    Scientists often see such ringed traps in images of planet-forming disks. You can see some in the image above, taken with the ALMA (Atacama Large Millimeter Array) telescope in Chile. Dust traps can be associated with specific important molecules evaporating and condensing. In our model, these end up being silicate rocks, water, and carbon monoxide—they’re linked to our three strongest traps. The condensation temperatures of these compounds span from about 30 kelvin (30 degrees above absolute zero, for carbon monoxide) to roughly 1,500 kelvin (for silicates). Each corresponds to a given orbital distance from the star. Drifting dust piles up at each of these locations within the disk and produces a ring of planetesimals. These three rings are, in our model, the building blocks of the planets.
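    A back-of-envelope version of that mapping (our own illustration, not the paper’s disk model): assume a simple equilibrium temperature profile T(r) = 280 K · (r/1 AU)^(-1/2) for a Sun-like star and invert it at each condensation temperature. The CO (~30 K) and silicate (~1,500 K) values are from the text; the water value (~170 K) is a commonly quoted figure. Real disk interiors are hotter, so the silicate line actually sits farther out than this toy estimate, but the ordering of the three rings comes out right:

```python
# Toy snow-line estimate: invert T(r) = t_1au * (r / 1 AU)**-0.5
# at the condensation temperature of each compound.

def snow_line_au(t_cond_k, t_1au=280.0):
    """Distance (in AU) where the toy temperature profile equals t_cond_k."""
    return (t_1au / t_cond_k) ** 2

for name, t_cond in [("silicates", 1500.0),
                     ("water", 170.0),
                     ("carbon monoxide", 30.0)]:
    print(f"{name:15s} condenses below {t_cond:6.0f} K -> r ~ {snow_line_au(t_cond):6.2f} AU")
```

    With these numbers the water line lands near 2.7 AU and the CO line far out, in the region of the outer ring that corresponds to the Kuiper belt.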

    You can directly connect each ring with a specific region of our solar system. That’s pretty neat. In our simulations, the inner ring contains two to three times as much mass as Earth in rocky planetesimals. The two most massive terrestrial planets, Venus and Earth, formed within this ring; Mars and Mercury were scattered out of the ring, their growth stunted. Mars grew mostly from material in the ring’s outer parts, which nicely explains the chemical difference between Mars and Earth (Earth is more similar in composition to one group of meteorites, and Mars to another).

    Three—no more, no less: In our model of the solar system’s beginnings, three rings of planetesimals form, connected with the condensation/evaporation or “snow” lines of silicates, water, and carbon monoxide (CO). The two main classes of meteorites—CC (for carbonaceous chondrites) and NC (for non-carbonaceous)—represent planetesimals that formed in the middle and inner rings and later scattered into the asteroid belt. Credit: Rajdeep Dasgupta.

    The middle ring is, in our simulations, the most massive, with 50 to 100 Earth masses in planetesimals. Massive planets, of 10 to 20 Earth masses, grow quickly within the ring by colliding with dust and other planetesimals. These, by virtue of their gravity, capture gas from the disk and grow into Jupiter and Saturn. The ice giants Uranus and Neptune also formed within the outskirts of this ring, but their slower growth prevented them from capturing more gas.

    And the asteroid belt? That lies between the inner and middle rings. In our simulations, it can be thought of as a cosmic “refugee camp”: it births few planetesimals of its own, sometimes none at all, and instead contains objects that formed across the solar system. This matches the observed orbital distribution as well as the chemical gradients across the belt that are inferred from meteorites linked with different asteroid types. The present-day belt contains less than 0.05 percent of an Earth mass in total, consisting of planetesimals scattered outward from the inner ring during the growth of the rocky planets, and planetesimals scattered inward from the middle ring during the growth of the gas and ice giants.

    The outer ring of planetesimals corresponds to the present-day Kuiper belt, the population of small icy bodies beyond the orbit of Neptune.

    Kuiper Belt. Minor Planet Center.

    While our simulations typically produce 20 to 30 Earth masses in planetesimals in this ring—two orders of magnitude more than the present-day Kuiper belt contains—this is conveniently the amount of mass needed in an outer belt to explain the giant planets’ current orbits.

    So, our three-ring model reproduces what you could call our solar system’s orbital architecture. It would be impressive if these same processes could explain the diversity of other exoplanet systems. And we think they can.

    Planet-forming disks around other stars are ubiquitous, with a spectrum of different properties, but planetesimal rings should form systematically. Close-in super-Earths, or sub-Neptunes, found around roughly 30 percent of all stars, may form from planetesimals within inner or middle rings. If the planets reach Mars’ size, about 10 percent of Earth’s mass, before the gas disappears from the disk, which can take a few million years, then they launch spiral density waves. These cause the planets’ orbits to shrink, or migrate, toward the central star, an outcome our solar system likely avoided because our rocky planets grew too slowly. Gas giant planets should most naturally form from the middle rings, and, indeed, most giant exoplanets are found on orbits wider than super-Earths’ but still closer to Earth’s orbit than Jupiter’s (perhaps due to a modest degree of migration).

    Moulton, one of the proponents of the old planetesimal hypothesis, once stressed how utterly distinct this idea was from the dusty-disk one Laplace had. “The gap,” he wrote in a 1928 issue of Science, “between these different genera of intellectual constructions is as profound as that between different genera of living organisms, and as difficult to bridge.” He was, as you might guess, pretty wrong about that. Our three-ring model bridges or synthesizes those ideas, along with several others, to make what Moulton—sticking with his analogy to life—might have called a complicated chimera. A Frankensteinian monster of a model. As long as it’s useful, that’s fine with us.

    See the full article here.


    Please help promote STEM in your local schools.

    STEM Education Coalition

    Welcome to Nautilus (US). We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

    Rice University (US) [formally William Marsh Rice University] is a private research university in Houston, Texas. It is situated on a 300-acre campus near the Houston Museum District and is adjacent to the Texas Medical Center.
    Opened in 1912 after the murder of its namesake William Marsh Rice, Rice is a research university with an undergraduate focus. Its emphasis on education is demonstrated by a small student body and 6:1 student-faculty ratio. The university has a very high level of research activity. Rice is noted for its applied science programs in the fields of artificial heart research, structural chemical analysis, signal processing, space science, and nanotechnology. Rice has been a member of the Association of American Universities (US) since 1985 and is classified among “R1: Doctoral Universities – Very high research activity”.
    The university is organized into eleven residential colleges and eight schools of academic study, including the Wiess School of Natural Sciences, the George R. Brown School of Engineering, the School of Social Sciences, School of Architecture, Shepherd School of Music and the School of Humanities. Rice’s undergraduate program offers more than fifty majors and two dozen minors, and allows a high level of flexibility in pursuing multiple degree programs. Additional graduate programs are offered through the Jesse H. Jones Graduate School of Business and the Susanne M. Glasscock School of Continuing Studies. Rice students are bound by the strict Honor Code, which is enforced by a student-run Honor Council.
    Rice competes in 14 NCAA Division I varsity sports and is a part of Conference USA, often competing with its cross-town rival the University of Houston. Intramural and club sports are offered in a wide variety of activities such as jiu jitsu, water polo, and crew.
    The university’s alumni include more than two dozen Marshall Scholars and a dozen Rhodes Scholars. Given the university’s close links to the National Aeronautics and Space Administration (US), it has produced a significant number of astronauts and space scientists. In business, Rice graduates include CEOs and founders of Fortune 500 companies; in politics, alumni include congressmen, cabinet secretaries, judges, and mayors. Two alumni have won the Nobel Prize.


    Rice University’s history began with the demise of Massachusetts businessman William Marsh Rice, who had made his fortune in real estate, railroad development and cotton trading in the state of Texas. In 1891, Rice decided to charter a free-tuition educational institute in Houston, bearing his name, to be created upon his death, earmarking most of his estate towards funding the project. Rice’s will specified the institution was to be “a competitive institution of the highest grade” and that only white students would be permitted to attend. On the morning of September 23, 1900, Rice, age 84, was found dead by his valet, Charles F. Jones, and was presumed to have died in his sleep.

    Shortly thereafter, a large check made out to Rice’s New York City lawyer, signed by the late Rice, aroused the suspicion of a bank teller, due to the misspelling of the recipient’s name. The lawyer, Albert T. Patrick, then announced that Rice had changed his will to leave the bulk of his fortune to Patrick, rather than to the creation of Rice’s educational institute. A subsequent investigation led by the District Attorney of New York resulted in the arrests of Patrick and of Rice’s butler and valet Charles F. Jones, who had been persuaded to administer chloroform to Rice while he slept. Rice’s friend and personal lawyer in Houston, Captain James A. Baker, aided in the discovery of what turned out to be a fake will with a forged signature. Jones was not prosecuted since he cooperated with the district attorney, and testified against Patrick. Patrick was found guilty of conspiring to steal Rice’s fortune and he was convicted of murder in 1901 (he was pardoned in 1912 due to conflicting medical testimony).

    Baker helped Rice’s estate direct the fortune, worth $4.6 million in 1904 ($131 million today), towards the founding of what was to be called the Rice Institute, later to become Rice University. The board took control of the assets on April 29 of that year.

    In 1907, the Board of Trustees selected the head of the Department of Mathematics and Astronomy at Princeton University (US), Edgar Odell Lovett, to head the Institute, which was still in the planning stages. He came recommended by Princeton’s president, Woodrow Wilson. In 1908, Lovett accepted the challenge, and was formally inaugurated as the Institute’s first president on October 12, 1912. Lovett undertook extensive research before formalizing plans for the new Institute, including visits to 78 institutions of higher learning across the world on a long tour between 1908 and 1909. Lovett was impressed by such things as the aesthetic beauty of the uniformity of the architecture at the University of Pennsylvania, a theme which was adopted by the Institute, as well as the residential college system at Cambridge University in England, which was added to the Institute several decades later. Lovett called for the establishment of a university “of the highest grade,” “an institution of liberal and technical learning” devoted “quite as much to investigation as to instruction.” [We must] “keep the standards up and the numbers down,” declared Lovett. “The most distinguished teachers must take their part in undergraduate teaching, and their spirit should dominate it all.”
    Establishment and growth

    In 1911, the cornerstone was laid for the Institute’s first building, the Administration Building, now known as Lovett Hall in honor of the founding president. On September 23, 1912, the 12th anniversary of William Marsh Rice’s murder, the William Marsh Rice Institute for the Advancement of Letters, Science, and Art began course work with 59 enrolled students, who were known as the “59 immortals,” and about a dozen faculty. After 18 additional students joined later, Rice’s initial class numbered 77, 48 male and 29 female. Unusual for the time, Rice accepted coeducational admissions from its beginning, but on-campus housing would not become co-ed until 1957.

    Three weeks after opening, a spectacular international academic festival was held, bringing Rice to the attention of the entire academic world.

    Per William Marsh Rice’s will and Rice Institute’s initial charter, the students paid no tuition. Classes were difficult, however, and about half of Rice’s students had failed after the first 1912 term. At its first commencement ceremony, held on June 12, 1916, Rice awarded 35 bachelor’s degrees and one master’s degree. That year, the student body also voted to adopt the Honor System, which still exists today. Rice’s first doctorate was conferred in 1918 on mathematician Hubert Evelyn Bray.

    The Founder’s Memorial Statue, a bronze statue of a seated William Marsh Rice, holding the original plans for the campus, was dedicated in 1930, and installed in the central academic quad, facing Lovett Hall. The statue was crafted by John Angel. In 2020, Rice students petitioned the university to take down the statue due to the founder’s history as slave owner.

    During World War II, Rice Institute was one of 131 colleges and universities nationally that took part in the V-12 Navy College Training Program, which offered students a path to a Navy commission.

    The residential college system proposed by President Lovett was adopted in 1958, with the East Hall residence becoming Baker College, South Hall residence becoming Will Rice College, West Hall becoming Hanszen College, and the temporary Wiess Hall becoming Wiess College.

    In 1959, the Rice Institute Computer went online. 1960 saw Rice Institute formally renamed William Marsh Rice University. Rice acted as a temporary intermediary in the transfer of land between Humble Oil and Refining Company and NASA, for the creation of NASA’s Manned Spacecraft Center (now called Johnson Space Center) in 1962. President John F. Kennedy then made a speech at Rice Stadium reiterating that the United States intended to reach the moon before the end of the decade of the 1960s, and “to become the world’s leading space-faring nation”. The relationship of NASA with Rice University and the city of Houston has remained strong to the present day.

    The original charter of Rice Institute dictated that the university admit and educate, tuition-free, “the white inhabitants of Houston, and the state of Texas”. In 1963, the governing board of Rice University filed a lawsuit to allow the university to modify its charter to admit students of all races and to charge tuition. Ph.D. student Raymond Johnson became the first black Rice student when he was admitted that year. In 1964, Rice officially amended the university charter to desegregate its graduate and undergraduate divisions. The Trustees of Rice University prevailed in a lawsuit to void the racial language in the trust in 1966. Rice began charging tuition for the first time in 1965. In the same year, Rice launched a $33 million ($268 million) development campaign. $43 million ($283 million) was raised by its conclusion in 1970. In 1974, two new schools were founded at Rice, the Jesse H. Jones Graduate School of Management and the Shepherd School of Music. The Brown Foundation Challenge, a fund-raising program designed to encourage annual gifts, was launched in 1976 and ended in 1996 having raised $185 million. The Rice School of Social Sciences was founded in 1979.

    On-campus housing was exclusively for men for the first forty years, until 1957. Jones College was the first women’s residence on the Rice campus, followed by Brown College. According to legend, the women’s colleges were purposefully situated at the opposite end of campus from the existing men’s colleges as a way of preserving campus propriety, which was greatly valued by Edgar Odell Lovett, who did not even allow benches to be installed on campus, fearing that they “might lead to co-fraternization of the sexes”. The path linking the north colleges to the center of campus was given the tongue-in-cheek name of “Virgin’s Walk”. Individual colleges became coeducational between 1973 and 1987, with the single-sex floors of colleges that had them becoming co-ed by 2006. By then, several new residential colleges had been built on campus to handle the university’s growth, including Lovett College, Sid Richardson College, and Martel College.

    Late twentieth and early twenty-first century

    The Economic Summit of Industrialized Nations was held at Rice in 1990. Three years later, in 1993, the James A. Baker III Institute for Public Policy was created. In 1997, the Edythe Bates Old Grand Organ and Recital Hall and the Center for Nanoscale Science and Technology, renamed in 2005 for the late Nobel Prize winner and Rice professor Richard E. Smalley, were dedicated at Rice. In 1999, the Center for Biological and Environmental Nanotechnology was created. The Rice Owls baseball team was ranked #1 in the nation for the first time in that year (1999), holding the top spot for eight weeks.

    In 2003, the Owls won their first national championship in baseball, which was the first for the university in any team sport, beating Southwest Missouri State (US) in the opening game and then the University of Texas and Stanford University twice each en route to the title. In 2008, President David Leebron issued a ten-point plan titled “Vision for the Second Century” outlining plans to increase research funding, strengthen existing programs, and increase collaboration. The plan has brought about another wave of campus constructions, including the erection of the newly renamed BioScience Research Collaborative building (intended to foster collaboration with the adjacent Texas Medical Center), a new recreational center and the renovated Autry Court basketball stadium, and the addition of two new residential colleges, Duncan College and McMurtry College.

    Beginning in late 2008, the university considered a merger with Baylor College of Medicine, though the merger was ultimately rejected in 2010. Rice undergraduates are currently guaranteed admission to Baylor College of Medicine upon graduation as part of the Rice/Baylor Medical Scholars program. According to History Professor John Boles’ recent book University Builder: Edgar Odell Lovett and the Founding of the Rice Institute, the first president’s original vision for the university included hopes for future medical and law schools.

    In 2018, the university added an online MBA program, MBA@Rice.

    In June 2019, the university’s president announced plans for a task force on Rice’s “past in relation to slave history and racial injustice”, stating that “Rice has some historical connections to that terrible part of American history and the segregation and racial disparities that resulted directly from it”.


    Rice’s campus is a heavily wooded 285-acre (115-hectare) tract of land in the museum district of Houston, located close to the city of West University Place.

    Five streets demarcate the campus: Greenbriar Street, Rice Boulevard, Sunset Boulevard, Main Street, and University Boulevard. For most of its history, all of Rice’s buildings have been contained within this “outer loop”. In recent years, new facilities have been built close to campus, but the bulk of administrative, academic, and residential buildings are still located within the original pentagonal plot of land. The new Collaborative Research Center, all graduate student housing, the Greenbriar building, and the Wiess President’s House are located off-campus.

    Rice prides itself on the amount of green space available on campus; there are only about 50 buildings spread between the main entrance at its easternmost corner, and the parking lots and Rice Stadium at the West end. The Lynn R. Lowrey Arboretum, consisting of more than 4000 trees and shrubs (giving birth to the legend that Rice has a tree for every student), is spread throughout the campus.
    The university’s first president, Edgar Odell Lovett, intended for the campus to have a uniform architecture style to improve its aesthetic appeal. To that end, nearly every building on campus is noticeably Byzantine in style, with sand and pink-colored bricks, large archways and columns being a common theme among many campus buildings. Noteworthy exceptions include the glass-walled Brochstein Pavilion, Lovett College with its Brutalist-style concrete gratings, Moody Center for the Arts with its contemporary design, and the eclectic-Mediterranean Duncan Hall. In September 2011, Travel+Leisure listed Rice’s campus as one of the most beautiful in the United States.

    The university and the Houston Independent School District jointly established The Rice School, a kindergarten-through-8th-grade public magnet school in Houston, which opened in August 1994. Through Cy-Fair ISD, Rice University offers a credit-based summer school for grades 8 through 12; it also offers skills-based classes during the summer in the Rice Summer School.

    Innovation District

    In early 2019, Rice announced that the site of the abandoned Sears building in Midtown Houston, along with its surrounding area, would be transformed into “The Ion,” the hub of the 16-acre South Main Innovation District. Rice President David Leebron stated, “We chose the name Ion because it’s from the Greek ienai, which means ‘go’. We see it as embodying the ever-forward motion of discovery, the spark at the center of a truly original idea.”

    Students of Rice and other Houston-area colleges and universities, organized as the Student Coalition for a Just and Equitable Innovation Corridor, are advocating for a Community Benefits Agreement (CBA), a contractual agreement between a developer and a community coalition. Residents of the neighboring Third Ward and other members of the Houston Coalition for Equitable Development Without Displacement (HCEDD) have faced consistent opposition from the City of Houston and Rice Management Company to a CBA as traditionally defined, in favor of an agreement between the latter two entities without a community-coalition signatory.


    Rice University is chartered as a non-profit organization and is governed by a privately appointed board of trustees. The board consists of a maximum of 25 voting members who serve four-year terms. The trustees serve without compensation, and a simple majority of trustees must reside in Texas, including at least four within the greater Houston area. The board of trustees delegates its power by appointing a president to serve as the chief executive of the university. David W. Leebron was appointed president in 2004, succeeding Malcolm Gillis, who had served since 1993. The provost, six vice presidents, and other university officials report to the president. The president is advised by a University Council composed of the provost, eight members of the Faculty Council, two staff members, one graduate student, and two undergraduate students. The president also presides over the Faculty Council, which has the authority to alter curricular requirements, establish new degree programs, and approve candidates for degrees.

    The university’s academics are organized into several schools. Schools that have undergraduate and graduate programs include:

    The Rice University School of Architecture
    The George R. Brown School of Engineering
    The School of Humanities
    The Shepherd School of Music
    The Wiess School of Natural Sciences
    The Rice University School of Social Sciences

    Two schools have only graduate programs:

    The Jesse H. Jones Graduate School of Management
    The Susanne M. Glasscock School of Continuing Studies

    Rice’s undergraduate students benefit from a centralized admissions process which admits new students to the university as a whole, rather than a specific school (the schools of Music and Architecture are decentralized). Students are encouraged to select the major path that best suits their desires; a student can later decide that they would rather pursue study in another field or continue their current coursework and add a second or third major. These transitions are designed to be simple at Rice with students not required to decide on a specific major until their sophomore year of study.

    Rice’s academics are organized into six schools that offer courses of study at both the graduate and undergraduate level, with two more focused primarily on graduate education while offering select opportunities for undergraduate students. Rice offers 360 degrees in over 60 departments. There are 40 undergraduate degree programs, 51 master’s programs, and 29 doctoral programs.

    Faculty members of each of the departments elect chairs to represent the department to each School’s dean and the deans report to the Provost who serves as the chief officer for academic affairs.

    Rice Management Company

    The Rice Management Company manages the $6.5 billion Rice University endowment (June 2019) and $957 million debt. The endowment provides 40% of Rice’s operating revenues. Allison Thacker is the President and Chief Investment Officer of the Rice Management Company, having joined the university in 2011.


    Rice is a medium-sized highly residential research university. The majority of enrollments are in the full-time four-year undergraduate program emphasizing arts & sciences and professions. There is a high graduate coexistence with the comprehensive graduate program and a very high level of research activity. It is accredited by the Southern Association of Colleges and Schools Commission on Colleges (US) as well as the professional accreditation agencies for engineering, management, and architecture.

    Each of Rice’s departments is organized into one of three distribution groups, and students whose major lies within the scope of one group must take at least 3 courses of at least 3 credit hours each of approved distribution classes in each of the other two groups, as well as completing one physical education course as part of the LPAP (Lifetime Physical Activity Program) requirement. All new students must take a Freshman Writing Intensive Seminar (FWIS) class, and for students who do not pass the university’s writing composition examination (administered during the summer before matriculation), FWIS 100, a writing class, becomes an additional requirement.

    The majority of Rice’s undergraduate degree programs grant B.S. or B.A. degrees. Rice has recently begun to offer minors in areas such as business, energy and water sustainability, and global health.

    Student body

    As of fall 2014, men make up 52% of the undergraduate body and 64% of the professional and post-graduate student body. The student body includes students from all 50 states, the District of Columbia, two U.S. territories, and 83 foreign countries. Forty percent of degree-seeking students are from Texas.

    Research centers and resources

    Rice is noted for its applied science programs in the fields of nanotechnology, artificial heart research, structural chemical analysis, signal processing and space science.

    Rice Alliance for Technology and Entrepreneurship – supports entrepreneurs and early-stage technology ventures in Houston and Texas through education, collaboration, and research, ranked No. 1 among university business incubators.
    Baker Institute for Public Policy – a leading nonpartisan public policy think-tank
    BioScience Research Collaborative (BRC) – interdisciplinary, cross-campus, and inter-institutional resource between Rice University and Texas Medical Center
    Boniuk Institute – dedicated to religious tolerance and advancing religious literacy, respect and mutual understanding
    Center for African and African American Studies – fosters conversations on topics such as critical approaches to race and racism, the nature of diasporic histories and identities, and the complexity of Africa’s past, present and future
    Chao Center for Asian Studies – research hub for faculty, students and post-doctoral scholars working in Asian studies
    Center for the Study of Women, Gender, and Sexuality (CSWGS) – interdisciplinary academic programs and research opportunities, including the journal Feminist Economics
    Data to Knowledge Lab (D2K) – campus hub for experiential learning in data science
    Digital Signal Processing (DSP) – center for education and research in the field of digital signal processing
    Ethernest Hackerspace – student-run hackerspace for undergraduate engineering students sponsored by the ECE department and the IEEE student chapter
    Humanities Research Center (HRC) – identifies, encourages, and funds innovative research projects by faculty, visiting scholars, graduate, and undergraduate students in the School of Humanities and beyond
    Institute of Biosciences and Bioengineering (IBB) – facilitates the translation of interdisciplinary research and education in biosciences and bioengineering
    Ken Kennedy Institute for Information Technology – advances applied interdisciplinary research in the areas of computation and information technology
    Kinder Institute for Urban Research – conducts the Houston Area Survey, “the nation’s longest running study of any metropolitan region’s economy, population, life experiences, beliefs and attitudes”
    Laboratory for Nanophotonics (LANP) – a resource for education and research breakthroughs and advances in the broad, multidisciplinary field of nanophotonics
    Moody Center for the Arts – experimental arts space featuring studio classrooms, maker space, audiovisual editing booths, and a gallery and office space for visiting national and international artists
    OpenStax CNX (formerly Connexions) and OpenStax – an open source platform and open access publisher, respectively, of open educational resources
    Oshman Engineering Design Kitchen (OEDK) – space for undergraduate students to design, prototype and deploy solutions to real-world engineering challenges
    Rice Cinema – an independent theater run by the Visual and Dramatic Arts department at Rice which screens documentaries, foreign films, and experimental cinema and hosts film festivals and lectures since 1970
    Rice Center for Engineering Leadership (RCEL) – inspires, educates, and develops ethical leaders in technology who will excel in research, industry, non-engineering career paths, or entrepreneurship
    Religion and Public Life Program (RPLP) – a research, training and outreach program working to advance understandings of the role of religion in public life
    Rice Design Alliance (RDA) – outreach and public programs of the Rice School of Architecture
    Rice Center for Quantum Materials (RCQM) – organization dedicated to research and higher education in areas relating to quantum phenomena
    Rice Neuroengineering Initiative (NEI) – fosters research collaborations in neural engineering topics
    Rice Space Institute (RSI) – fosters programs in all areas of space research
    Smalley-Curl Institute for Nanoscale Science and Technology (SCI) – the nation’s first nanotechnology center
    Welch Institute for Advanced Materials – collaborative research institute to support the foundational research for discoveries in materials science, similar to the model of Salk Institute and Broad Institute
    Woodson Research Center Special Collections & Archives – publisher of print and web-based materials highlighting the department’s primary source collections such as the Houston African American, Asian American, and Jewish History Archives, University Archives, rare books, and hip hop/rap music-related materials from the Swishahouse record label and Houston Folk Music Archive, etc.

    Residential colleges

    In 1957, Rice University implemented a residential college system, which was proposed by the university’s first president, Edgar Odell Lovett. The system was inspired by existing systems at the University of Oxford (UK) and the University of Cambridge (UK), and at several other universities in the United States, most notably Yale University (US). The existing residences known as East, South, West, and Wiess Halls became Baker, Will Rice, Hanszen, and Wiess Colleges, respectively.

    Student-run media

    Rice has a weekly student newspaper (The Rice Thresher), a yearbook (The Campanile), a college radio station (KTRU Rice Radio), and a now-defunct campus-wide student television station (RTV5). They are based out of the RMC student center. In addition, Rice hosts several student magazines dedicated to a range of different topics; the spring semester of 2008 alone saw the birth of two such magazines, a literary sex journal called Open and an undergraduate science research magazine entitled Catalyst.

    The Rice Thresher is published every Wednesday and is ranked by Princeton Review as one of the top campus newspapers nationally for student readership. It is distributed around campus, and at a few other local businesses and has a website. The Thresher has a small, dedicated staff and is known for its coverage of campus news, open submission opinion page, and the satirical Backpage, which has often been the center of controversy. The newspaper has won several awards from the College Media Association, Associated Collegiate Press and Texas Intercollegiate Press Association.

    The Rice Campanile was first published in 1916, celebrating Rice’s first graduating class. It has been published continuously since then, with two volumes appearing in 1944 because the university had two graduating classes due to World War II. The website was created sometime in the early to mid 2000s. The 2015 edition won the first-place Pinnacle for best yearbook from the College Media Association.

    KTRU Rice Radio is the student-run radio station. Though most DJs are Rice students, anyone is allowed to apply. It is known for playing genres, artists, and sounds unavailable on other radio stations in Houston, and often in the US. The station takes requests over the phone or online. In 2000 and 2006, KTRU won Houston Press’ Best Radio Station in Houston. In 2003, Rice alum and active KTRU DJ DL’s hip-hop show won the Houston Press’ Best Hip-hop Radio Show. On August 17, 2010, it was announced that Rice University had been in negotiations to sell the station’s broadcast tower, FM frequency and license to the University of Houston System to become a full-time classical music and fine arts programming station. The new station, KUHA, would be operated as a not-for-profit outlet with listener support. The FCC approved the sale and granted the transfer of license to the University of Houston System on April 15, 2011. KUHA proved unsuccessful, however, and after four and a half years of operation the University of Houston System announced in August 2015 that KUHA’s broadcast tower, FM frequency, and license were once again up for sale. KTRU continued to operate much as it did previously, streaming live on the Internet, via apps, and on HD2 radio using the 90.1 signal. Under student leadership, KTRU explored the possibility of returning to FM radio for a number of years. In spring 2015, KTRU was granted permission by the FCC to begin development of a new broadcast signal via LPFM radio. On October 1, 2015, KTRU made its official return to FM radio on the 96.1 signal. While broadcasting on HD2 radio has been discontinued, KTRU continues to broadcast via internet in addition to its LPFM signal.

    RTV5 is a student-run television network available as channel 5 on campus. RTV5 was created initially as Rice Broadcast Television in 1997; RBT began broadcasting the following year, in 1998, and aired its first live show across campus in 1999. It experienced much growth and exposure over the years with successful programs like Drinking with Phil and The Meg & Maggie Show (a variety and call-in show), a weekly news show, and extensive live coverage in December 2000 of the administration’s shutdown of KTRU. In spring 2001, the Rice undergraduate community voted in the general elections to support RBT as a blanket tax organization, effectively providing a yearly income of $10,000 to purchase new equipment and provide the campus with a variety of new programming. In the spring of 2005, RBT members decided the station needed a new image and a new name: Rice Television 5. One of RTV5’s most popular shows was the 24-hour show, in which a camera and couch placed in the RMC stayed on air for 24 hours. One such show was held in fall and another in spring, usually during a weekend allocated for visits by prospective students. RTV5 had a video-on-demand site at rtv5.rice.edu. The station went off the air in 2014 and changed its name to Rice Video Productions. In 2015 the group’s funding was threatened, but ultimately maintained. In 2016 the small student staff requested to no longer be a blanket-tax organization. In the fall of 2017, the group did not register as a club.

    The Rice Review, also known as R2, is a yearly student-run literary journal at Rice University that publishes prose, poetry, and creative nonfiction written by undergraduate students, as well as interviews. The journal was founded in 2004 by creative writing professor and author Justin Cronin.

    The Rice Standard was an independent, student-run variety magazine modeled after such publications as The New Yorker and Harper’s. Prior to fall 2009, it was regularly published three times a semester with a wide array of content, ranging from analyses of current events and philosophical pieces to personal essays, short fiction and poetry. In August 2009, The Standard transitioned to a completely online format with the launch of their redesigned website, http://www.ricestandard.org. The first website of its kind on Rice’s campus, The Standard featured blog-style content written by and for Rice students. The Rice Standard had around 20 regular contributors, and the site featured new content every day (including holidays). In 2017 no one registered The Rice Standard as a club within the university.

    Open, a magazine dedicated to “literary sex content,” predictably caused a stir on campus with its initial publication in spring 2008. A mixture of essays, editorials, stories and artistic photography brought Open attention both on campus and in the Houston Chronicle. The third and last annual edition of Open was released in spring of 2010.


    Athletics

    Rice plays in NCAA Division I athletics and is part of Conference USA. Rice was a member of the Western Athletic Conference before joining Conference USA in 2005. Rice is the second-smallest school, measured by undergraduate enrollment, competing in NCAA Division I FBS football, only ahead of Tulsa.

    The Rice baseball team won the 2003 College World Series, defeating Stanford, giving Rice its only national championship in a team sport. The victory made Rice University the smallest school in 51 years to win a national championship at the highest collegiate level of the sport. The Rice baseball team has played on campus at Reckling Park since the 2000 season. As of 2010, the baseball team has won 14 consecutive conference championships in three different conferences: the final championship of the defunct Southwest Conference, all nine championships while a member of the Western Athletic Conference, and five more championships in its first five years as a member of Conference USA. Additionally, Rice’s baseball team has finished third in both the 2006 and 2007 College World Series tournaments. Rice has now made six trips to Omaha for the CWS. In 2004, Rice became the first school ever to have three players selected in the first eight picks of the MLB draft when Philip Humber, Jeff Niemann, and Wade Townsend were selected third, fourth, and eighth, respectively. In 2007, Joe Savery was selected as the 19th overall pick.

    Rice has been very successful in women’s sports in recent years. In 2004–05, Rice sent its women’s volleyball, soccer, and basketball teams to their respective NCAA tournaments. The women’s swim team has consistently sent at least one swimmer to the NCAA championships since 2013. In 2005–06, the women’s soccer, basketball, and tennis teams advanced, with five individuals competing in track and field. In 2006–07, the Rice women’s basketball team made the NCAA tournament, while again five Rice track and field athletes received individual NCAA berths. In 2008, the women’s volleyball team again made the NCAA tournament. In 2011 the women’s swim team won their first conference championship in the history of the university, an impressive feat considering they won without having a diving team. The team repeated their C-USA success in 2013 and 2014. In 2017, the women’s basketball team, led by second-year head coach Tina Langley, won the Women’s Basketball Invitational, defeating UNC-Greensboro 74–62 in the championship game at Tudor Fieldhouse. Though not a varsity sport, Rice’s ultimate frisbee women’s team, named Torque, won consecutive Division III national championships in 2014 and 2015.

    In 2006, the football team qualified for its first bowl game since 1961, ending the second-longest bowl drought in the country at the time. On December 22, 2006, Rice played in the New Orleans Bowl in New Orleans, Louisiana against the Sun Belt Conference champion, Troy. The Owls lost 41–17. The bowl appearance came after Rice had a 14-game losing streak from 2004–05 and went 1–10 in 2005. The streak followed an internally authorized 2003 McKinsey report that stated football alone was responsible for a $4 million deficit in 2002. Tensions remained high between the athletic department and faculty, as a few professors who chose to voice their opinion were in favor of abandoning the football program. The program’s success in 2006, dubbed the Rice Renaissance, proved to be a revival of the Owl football program, quelling those tensions. David Bailiff took over the program in 2007 and has remained head coach. Jarett Dillard set an NCAA record in 2006 by catching a touchdown pass in 13 consecutive games and took a 15-game overall streak into the 2007 season.

    In 2008, the football team posted a 9-3 regular season, capping off the year with a 38–14 victory over Western Michigan University (US) in the Texas Bowl. The win over Western Michigan marked the Owls’ first bowl win in 45 years.

    Rice Stadium also serves as the performance venue for the university’s Marching Owl Band, or “MOB.” Despite its name, the MOB is a scatter band that focuses on performing humorous skits and routines rather than traditional formation marching.

    Rice Owls men’s basketball won 10 conference titles in the former Southwest Conference (1918, 1935*, 1940, 1942*, 1943*, 1944*, 1945, 1949*, 1954*, 1970; * denotes shared title). Most recently, guard Morris Almond was drafted in the first round of the 2007 NBA Draft by the Utah Jazz. Rice named former Cal Bears head coach Ben Braun as head basketball coach to succeed Willis Wilson, fired after Rice finished the 2007–2008 season with a winless (0-16) conference record and overall record of 3-27.

  • richardmitnick 10:16 am on December 5, 2021 Permalink | Reply
    Tags: "What Impossible Meant to Richard Feynman", , , Nautilus (US), , Quantum theory of electromagnetism, The greatest theoretical physicists of the 20th century.   

    From Nautilus (US): “What Impossible Meant to Richard Feynman” 

    From Nautilus (US)

    November 24, 2021
    Paul J. Steinhardt


    What I learned when I challenged the legendary physicist.


    “Impossible!” The word resonated throughout the large lecture hall. I had just finished describing a revolutionary concept for a new type of matter that my graduate student, Dov Levine, and I had invented.

    The California Institute of Technology (US) lecture room was packed with scientists from every discipline across campus. The discussion had gone remarkably well. But just as the last of the crowd was filing out, there arose a familiar, booming voice and that word: “Impossible!”

    I could have recognized that distinctive, gravelly voice with the unmistakable New York accent with my eyes closed. Standing before me was my scientific idol, the legendary physicist Richard Feynman, with his shock of graying, shoulder-length hair, wearing his characteristic white shirt, along with a disarming, devilish smile.

    THE JOKER: Although Richard Feynman had a playful sense of humor, remembers Paul J. Steinhardt, he was brutally honest. One day when Steinhardt gave a talk and saw Feynman in the front row, he was terrified.

    Feynman had won a Nobel Prize for his groundbreaking work developing the first quantum theory of electromagnetism. Within the scientific community, he was already considered one of the greatest theoretical physicists of the 20th century. He would eventually achieve iconic status with the general public, as well, because of his pivotal role identifying the cause of the Challenger space shuttle disaster and his two bestselling books, Surely You’re Joking, Mr. Feynman! and What Do You Care What Other People Think?

    He had a wonderfully playful sense of humor, and was notorious for his elaborate practical jokes. But when it came to science, Feynman was always uncompromisingly honest and brutally critical, which made him an especially terrifying presence during scientific seminars. One could anticipate that he would interrupt and publicly challenge a speaker the moment he heard something that was, in his mind, imprecise or inaccurate.

    So I had been keenly aware of Feynman’s presence when he entered the lecture hall just before my presentation began and took his usual seat in the front row. I kept a careful watch on him out of the corner of my eye throughout the presentation, awaiting any potential outburst. But Feynman never interrupted and never raised an objection.

    The fact that Feynman came forward to confront me after the talk was something that probably would have petrified many scientists. But this was not our first encounter. I had been lucky enough to work closely with Feynman when I was an undergraduate at Caltech about a decade earlier and had nothing but admiration and affection for him. Feynman changed my life through his writings, lectures, and personal mentoring.

    When I first arrived on campus as a freshman in 1970, my intention was to major in biology or mathematics. I had never been particularly interested in physics in high school. But I knew that every Caltech undergraduate was required to take two years of the subject.

    I quickly discovered that freshman physics was wickedly hard, thanks in large part to the textbook, The Feynman Lectures on Physics, Volume 1. The book was less of a traditional textbook than a collection of brilliant essays based on a famous series of freshman physics lectures that Feynman delivered in the 1960s.

    Unlike any other physics textbook that I have ever encountered, The Feynman Lectures on Physics never bothers to explain how to solve any problems, which made trying to complete the daunting homework assignments challenging and time-consuming. What the essays did provide, however, was something much more valuable—deep insights into Feynman’s original way of thinking about science. Generations have benefited from the Feynman Lectures. For me, the experience was an absolute revelation.

    After a few weeks, I felt like my skull had been pried open and my brain rewired. I began to think like a physicist, and loved it. Like many other scientists of my generation, I was proud to adopt Feynman as my hero. I scuttled my original academic plans about biology and mathematics and decided to pursue physics with a vengeance.

    I can remember a few times during my freshman year when I screwed up enough courage to say hello to Feynman before a seminar. Anything more would have been unimaginable at the time. But in my junior year, my roommate and I somehow summoned the nerve to knock on his office door to ask if he might consider teaching an unofficial course in which he would meet once a week with undergraduates like us to answer questions about anything we might ask. The whole thing would be informal, we told him. No homework, no tests, no grades, and no course credit. We knew he was an iconoclast with no patience for bureaucracy, and were hoping the lack of structure would appeal to him.

    A decade or so earlier, Feynman had given a similar class, but solely for freshmen and only for one quarter per year. Now we were asking him to do the same thing for a full year and to make it available for all undergraduates, especially third- and fourth-year students like ourselves who were likely to ask more advanced questions. We suggested the new course be called Physics X, the same as his earlier one, to make it clear to everyone that it was completely off the books.

    Feynman thought a moment and, much to our surprise, replied “Yes!” So every week for the next two years, my roommate and I joined dozens of other lucky students for a riveting and unforgettable afternoon with Dick Feynman.

    Physics X always began with him entering the lecture hall and asking if anyone had any questions. Occasionally, someone wanted to ask about a topic on which Feynman was expert. Naturally, his answers to those questions were masterful. In other cases, though, it was clear that Feynman had never thought about the question before. I always found those moments especially fascinating because I had the chance to watch how he engaged and struggled with a topic for the first time.

    I vividly recall asking him something I considered intriguing, even though I was afraid he might think it trivial. “What color is a shadow?” I wanted to know.

    After walking back and forth in front of the lecture room for a minute, Feynman grabbed on to the question with gusto. He launched into a discussion of the subtle gradations and variations in a shadow, then the nature of light, then the perception of color, then shadows on the moon, then earthshine on the moon, then the formation of the moon, and so on, and so on, and so on. I was spellbound.

    During my senior year, Dick agreed to be my mentor on a series of research projects. Now I was able to witness his method of attacking problems even more closely. I also experienced his sharp, critical tongue whenever his high expectations were not met. He called out my mistakes using words like “crazy,” “nuts,” “ridiculous,” and “stupid.”

    The harsh words stung at first, and caused me to question whether I belonged in theoretical physics. But I couldn’t help noticing that Dick did not seem to take the critical comments as seriously as I did. In the next breath, he would always be encouraging me to try a different approach and inviting me to return when I made progress.

    One of the most important things Feynman ever taught me was that some of the most exciting scientific surprises can be discovered in everyday phenomena. All you need do is take the time to observe things carefully and ask yourself good questions. He also influenced my belief that there is no reason to succumb to external pressures that try to force you to specialize in a single area of science, as many scientists do. Feynman showed me by example that it is acceptable to explore a diversity of fields if that is where your curiosity leads.

    One of our exchanges during my final term at Caltech was particularly memorable. I was explaining a mathematical scheme that I had developed to predict the behavior of a Super Ball, the rubbery, super-elastic ball that was especially popular at the time.

    It was a challenging problem because a Super Ball changes direction with every bounce. I wanted to add another layer of complexity by trying to predict how the Super Ball would bounce along a sequence of surfaces set at different angles. For example, I calculated the trajectory as it bounced from the floor to the underside of a table to a slanted plane and then off the wall. The seemingly random movements were entirely predictable, according to the laws of physics.

    I showed Feynman one of my calculations. It predicted that I could throw the Super Ball and that, after a complicated set of bounces, it would return right back to my hand. I handed him the paper and he took a glance at my equations.

    “That’s impossible!” he said.

    Impossible? I was taken aback by the word. It was something new from him. Not the “crazy” or “stupid” that I had come to occasionally expect.

    “Why do you think it’s impossible?” I asked nervously.

    Feynman pointed out his concern. According to my formula, if someone were to release the Super Ball from a height with a certain spin, the ball would bounce and careen off nearly sideways at a low angle to the floor.

    “And that’s clearly impossible, Paul,” he said.

    I glanced down to my equations and saw that, indeed, my prediction did imply that the ball would bounce and take off at a low angle. But I wasn’t so sure that was impossible, even if it seemed counterintuitive.

    I was now experienced enough to push back. “Okay, then,” I said. “I have never tried this experiment before, but let’s give it a shot right here in your office.”

    I pulled a Super Ball out of my pocket and Feynman watched me drop it with the prescribed spin. Sure enough, the ball took off in precisely the direction that my equations predicted, scooting sideways at a low angle off the floor, exactly the way Feynman had thought was impossible.

    In a flash, he deduced his mistake. He had not accounted for the extreme stickiness of the Super Ball surface, which affected how the spin influenced the ball’s trajectory.
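    The grip that Feynman initially overlooked can be made concrete with a toy calculation. The sketch below uses Garwin’s classic “rough ball” idealization for a uniform sphere — a standard textbook model, not necessarily Steinhardt’s actual formula: on impact the vertical velocity reverses, and a perfectly gripping surface reverses the contact point’s tangential velocity while conserving angular momentum about the contact point. A ball dropped straight down with spin then rebounds sideways at a low angle, exactly the behavior Feynman had declared impossible.

```python
# Toy model of a superball bounce (Garwin's rough-ball idealization:
# perfectly elastic, perfectly gripping floor -- an assumption, not the
# exact model from the story).
# Conventions: x horizontal, z vertical, omega > 0 is counterclockwise,
# so the contact point's tangential speed is vx + R*omega.

def bounce(vx, vz, omega, R):
    """Return (vx, vz, omega) after one bounce off the floor.

    For a uniform sphere (I = 2/5 m R^2), two conditions fix the outcome:
      1) angular momentum about the contact point is conserved;
      2) the contact point's tangential velocity exactly reverses.
    Solving the pair gives the update rule below.
    """
    vx_new = (3 * vx - 4 * R * omega) / 7
    omega_new = (-10 * vx - 3 * R * omega) / (7 * R)
    return vx_new, -vz, omega_new

# Drop the ball straight down (vx = 0) with spin: it careens off
# sideways at a low angle even though it arrived vertically.
vx, vz, omega = bounce(0.0, -3.0, omega=50.0, R=0.03)
print(vx, vz, omega)  # vx is now about -0.857 m/s: a sideways rebound
```

    Applied bounce after bounce, the same update rule is what makes a multi-surface trajectory — floor to table underside to slanted plane to wall — entirely predictable.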

    “How stupid!” Feynman said out loud, using the same exact tone of voice he sometimes used to criticize me.

    After two years of working together, I finally knew for sure what I had long suspected: Stupid was just an expression Feynman applied to everyone, including himself, as a way to focus attention on an error so it was never made again.

    I also learned that impossible, when used by Feynman, did not necessarily mean “unachievable” or “ridiculous.” Sometimes it meant, “Wow! Here is something amazing that contradicts what we would normally expect to be true. This is worth understanding!”

    So 11 years later, when Feynman approached me after my lecture with a playful smile and jokingly pronounced my theory “Impossible!” I was pretty sure I knew what he meant. The subject of my talk, a radically new form of matter known as “quasicrystals,” conflicted with principles he thought were true. It was therefore interesting and worth understanding.

    Feynman walked up to the table where I had set up an experiment to demonstrate the idea. He pointed to it and demanded, “Show me again!”

    I flipped the switch to start the demonstration and Feynman stood motionless. With his own eyes, he was witnessing a clear violation of one of the most well-known principles in science. It was something so basic that he had described it in the Feynman Lectures. In fact, the principle had been taught to every young scientist for nearly 200 years.

    But now, here I was, standing in front of Richard Feynman explaining that these long-standing rules were wrong.

    Crystals were not the only possible forms of matter with orderly arrangements of atoms and pinpoint diffraction patterns. There was now a vast new world of possibilities with its own set of rules, which we named quasicrystals.

    We chose the name to make clear how the new materials differ from ordinary crystals. Both materials consist of groups of atoms that repeat throughout the entire structure.

    The groups of atoms in crystals repeat at regular intervals, just like the five known patterns. In quasicrystals, however, different groups repeat at distinct intervals. Our inspiration was a two-dimensional pattern known as a Penrose tiling, which is an unusual pattern that contains two different types of tiles that repeat at two incommensurate intervals. Mathematicians call such a pattern quasiperiodic. Hence, we dubbed our theoretical discovery “quasiperiodic crystals” or “quasicrystals,” for short.
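    The defining property — two motifs recurring at intervals locked in an irrational ratio — already appears in one dimension. The snippet below builds the Fibonacci chain, an illustrative 1D analogue of a Penrose tiling (not the pattern from the paper itself), by repeated substitution; the ratio of long to short segments tends to the golden ratio, so the sequence is perfectly ordered yet never periodic.

```python
# A 1D quasiperiodic sequence: the Fibonacci chain, built by the
# substitution rule L -> LS, S -> L. It is perfectly ordered but never
# repeats periodically -- a one-dimensional cousin of a Penrose tiling.

def fibonacci_word(iterations):
    word = "L"
    for _ in range(iterations):
        word = "".join("LS" if c == "L" else "L" for c in word)
    return word

w = fibonacci_word(12)

# The ratio of L's to S's approaches the golden ratio, an irrational
# number -- two motifs repeating at incommensurate intervals.
ratio = w.count("L") / w.count("S")
print(len(w), round(ratio, 4))  # 377 segments, ratio close to 1.618

# No shift maps the sequence onto itself, i.e. it has no period
# (checked here for every candidate period up to half its length).
assert all(w[: len(w) - p] != w[p:] for p in range(1, len(w) // 2))
```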

    My little demonstration for Feynman was designed to prove my case using a laser and a slide with a photograph of a quasiperiodic pattern. I flipped on the laser, as Feynman had directed, and aimed the beam so that it passed through the slide onto the distant wall. The laser light produced the same effect as X-rays passing through the channels between atoms: It created a diffraction pattern, like the one pictured in the photo below.

    I turned off the overhead lights so that Feynman could get a good look at the signature snowflake pattern of pinpoints on the wall. It was unlike any other diffraction pattern that Feynman had ever seen.

    I pointed out to him, as I had done during the lecture, that the brightest spots formed rings of ten that were concentric. That was unheard of. One could also see groups of pinpoints that formed pentagons, revealing a symmetry that was thought to be absolutely forbidden in the natural world. A closer look revealed yet more spots between the pinpoints. And spots between those spots. And yet more spots still.
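    A laser passing through a patterned slide performs, in effect, a Fourier transform: sharp diffraction spots correspond to sharp peaks in the Fourier spectrum of the pattern. As a one-dimensional stand-in for the demonstration (an illustration, not the actual slide), the sketch below places “atoms” along a quasiperiodic Fibonacci chain and computes the diffraction intensity, which shows crystal-sharp peaks even though the chain has no repeating unit.

```python
import numpy as np

# Diffraction from a 1D quasiperiodic chain of point "atoms": gaps of
# length phi (long) and 1 (short) arranged in the Fibonacci sequence.
# The measured intensity is I(k) = |sum_j exp(i k x_j)|^2 / N^2.

phi = (1 + 5 ** 0.5) / 2

def fibonacci_word(iterations):
    word = "L"
    for _ in range(iterations):
        word = "".join("LS" if c == "L" else "L" for c in word)
    return word

gaps = [phi if c == "L" else 1.0 for c in fibonacci_word(12)]
x = np.concatenate(([0.0], np.cumsum(gaps)))  # atom positions

k = np.linspace(0.5, 20.0, 40000)             # scattering wavevectors
amplitude = np.zeros_like(k, dtype=complex)
for xj in x:                                  # sum the scattered waves
    amplitude += np.exp(1j * k * xj)
intensity = np.abs(amplitude) ** 2 / len(x) ** 2   # normalized: 1 at k = 0

# Sharp, intense peaks stand far above the diffuse background -- the
# pinpoint diffraction signature, without any periodicity producing it.
print(round(float(intensity.max()), 3), round(float(np.median(intensity)), 5))
```

    The same logic in two dimensions, with light instead of X-rays, is what threw the snowflake pattern of pinpoints onto Feynman’s wall.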

    Feynman asked to look more closely at the slide. I switched the lights back on and removed it from the holder and gave it to him. The image on the slide was so reduced that it was hard to appreciate the detail, so I also handed him an enlargement of the tiling pattern, which he put down on the table in front of the laser.


    The next few moments passed in silence. I began to feel like a student again, waiting for Feynman to react to the latest cockamamie idea I had come up with. He stared at the enlargement on the table, reinserted the slide in the holder, and switched on the laser himself. His eyes went back and forth between the printed enlargement on the table, up to the laser pattern on the wall, then back down again to the enlargement.

    “Impossible!” Feynman finally said. I nodded in agreement and smiled, because I knew that to be one of his greatest compliments.

    He looked back up at the wall, shaking his head. “Absolutely impossible! That is one of the most amazing things I have ever seen.”

    And then, without saying another word, Dick Feynman looked at me with delight and gave me a huge, devilish smile.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to Nautilus (US). We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

  • richardmitnick 9:39 am on December 5, 2021 Permalink | Reply
    Tags: "What You are Doing Right Now Is Proof of Quantum Theory", Besides photons qubits can be based on quantum objects and systems such as electrons; ions and superconductors., Electrons jump to other orbits by absorbing energy or by emitting it as a photon., Energy gaps are at the heart of digital electronics and computing., Energy gaps: Electrons occupy only certain orbits around the proton each with a specific energy but not the spaces or energies in between., Ephemeral “virtual” photons, If a third party reads or alters the key that amounts to a quantum measurement changing the key in a way that the sender and recipient can detect., Nautilus (US), Superposition; entanglement and teleportation raise deeper questions about quantum theory yet also inspire new technology., The fact that energy is discontinuous at small scales is at odds with our view of the ordinary world but now we take this aspect of nature for granted., These connections illustrate the flow of ideas from scientific abstraction to useful application., Transmissions are encrypted then decoded by the recipient with a secret key sent via a separate secure channel-the key is a random string of qubits., Wavefunction

    From Nautilus (US): “What You are Doing Right Now Is Proof of Quantum Theory” 

    From Nautilus (US)

    Credit: Bakhtiar Zein/Shutterstock.

    November 24, 2021
    Sidney Perkowitz

    Running a computer underscores how quantum physics is remaking our world.

    “Nobody understands quantum mechanics,” Richard Feynman famously said. Long after Max Planck’s discovery in 1900 that energy comes in separate packets or quanta, quantum physics remains enigmatic. It is vastly different from how things work at bigger scales, where objects from baseballs to automobiles follow Newton’s laws of mechanics and gravitation, consistent with our own bodily experiences. But at the quantum level, an electron is a particle and a wave, and light is a wave and a particle (wave-particle duality); an electron in an atom takes on only certain energies (energy quantization); electrons or photons can instantaneously affect each other over arbitrary distances (entanglement and teleportation); a quantum object exists in different states until it is measured (superposition, or popularly, Schrödinger’s cat); and a real physical force emerges from the apparent nothingness of vacuum (the Casimir effect).

    For a theory that nobody understands, quantum physics has changed human society in remarkable ways.[1] It lies behind the digital technology of integrated circuit chips, and the new technology of light-emitting diodes moving us toward a greener world. Scientists are now excited by one of the more elusive notions in quantum physics, the idea of ephemeral “virtual” photons, which could make possible non-invasive medical methods to diagnose the heart and brain. These connections illustrate the flow of ideas from scientific abstraction to useful application. But there is also a counter flow, where pragmatic requirements generate deep insight. The universal laws of thermodynamics have roots in efforts by 19th-century French engineer Sadi Carnot to make the leading technology of the time, the steam engine, more efficient. Similarly, the growth of quantum technology leads to deeper knowledge of the quantum. The interplay between pure theory, and its outcomes in the everyday world, is a continuing feature of science as it develops. In quantum physics, this interaction traces back to one of its founders, Danish physicist Niels Bohr.

    SPOOKY ACTION: Entanglement, a key principle of quantum physics, enables quantum telecommunication, which makes the transfer of confidential information secure against tampering or eavesdropping. Credit: Jurik Peter/Shutterstock.

    In 1913, Bohr applied quantum ideas to the simplest atom, hydrogen. He found that the lone electron could occupy only certain orbits around the central proton, each with a specific energy, but not the spaces or energies in between. The electron jumps to other orbits by absorbing energy or by emitting it as a photon at a wavelength set by the orbital energies involved. Bohr’s model scored a huge success when it predicted the exact wavelengths of light emitted by energized hydrogen gas. Quantum jumps across the gaps between energy levels also produce the vividly colored light from advertising signs filled with neon and other gases, from lasers, and from LEDs.
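Bohr’s prediction can be reproduced in a few lines. The sketch below (an illustration using the standard Bohr-model formula, not code from the article) computes the wavelengths of the visible Balmer series, where the electron jumps down to the n = 2 orbit:

```python
# Bohr-model wavelengths for hydrogen's Balmer series (jumps down to n = 2).
# Constants are standard textbook values, not taken from the article.

RYDBERG_EV = 13.605693   # hydrogen ground-state binding energy, in eV
HC_EV_NM = 1239.842      # Planck's constant times the speed of light, eV*nm

def bohr_energy(n):
    """Energy of the n-th Bohr orbit in eV (negative means bound)."""
    return -RYDBERG_EV / n**2

def transition_wavelength_nm(n_upper, n_lower):
    """Wavelength of the photon emitted when the electron jumps down."""
    delta_e = bohr_energy(n_upper) - bohr_energy(n_lower)  # energy released
    return HC_EV_NM / delta_e

for n in (3, 4, 5):
    print(f"n={n} -> 2: {transition_wavelength_nm(n, 2):.1f} nm")
```

The n = 3 jump lands near 656 nm, the red H-alpha line that Bohr’s model famously matched.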

    Energy gaps are also at the heart of digital electronics and computing, which depend on the semiconducting transistor, invented in 1947. Semiconductors lie between metals, with many free electrons that carry electrical current, and insulators, whose electrons are held within their atoms and cannot form a current. The electrons in a semiconductor are also confined to their atoms, but once they jump across a so-called band gap, they can travel freely to form an electrical current. This current can be precisely manipulated to turn on and off as a switch, amplify signals, and perform other electronic functions when the semiconductor is formed into a transistor. Produced by the millions within integrated circuit chips made of semiconducting silicon, transistors control the digital technology that defines our world. The band gap underlying all this is a pure quantum effect that arises because the electrons in a semiconductor occupy bands of energy separated by forbidden regions.
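A rough Boltzmann estimate (a textbook rule of thumb, not from the article) shows why gap size matters so much: the chance of a thermal jump across a gap Eg scales roughly like exp(-Eg / 2kT), which collapses by tens of orders of magnitude between a semiconductor and an insulator:

```python
# Rough illustration of why the band gap separates semiconductors from
# insulators: thermal excitation across a gap Eg scales ~ exp(-Eg / (2*k*T)).
# Gap values are standard textbook figures, not taken from the article.
import math

KT_300K_EV = 0.02585   # thermal energy k*T at room temperature, in eV

def excitation_factor(gap_ev, kt_ev=KT_300K_EV):
    """Boltzmann factor for exciting an electron across the gap."""
    return math.exp(-gap_ev / (2 * kt_ev))

for name, gap_ev in [("silicon (semiconductor)", 1.12),
                     ("diamond (insulator)", 5.47)]:
    print(f"{name}: gap {gap_ev} eV, factor {excitation_factor(gap_ev):.2e}")
```

Silicon’s modest gap leaves enough thermally excited carriers to manipulate with a transistor; diamond’s large gap leaves essentially none, so it insulates.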

    The fact that energy is discontinuous at small scales is at odds with our view of the ordinary world, but now we take this aspect of nature for granted. Superposition, entanglement, and teleportation, however, raise deeper questions about quantum theory, yet also inspire new technology.

    Superposition is tied to Erwin Schrödinger’s famous equation, published in 1926. It is the quantum equivalent of Newton’s equation F = ma (that is, force = mass x acceleration, the basic equation of particle motion) but it describes a sub-atomic entity such as an electron as a wave, not a particle (although without telling us what this wave is). However, the equation’s solution, called the wavefunction, can be used to calculate any property of the entity such as its position, but not definitively. The wavefunction only gives the probability that the electron can exist at a given position in an atom. In principle each location is possible according to its probability until an electron is measured, when the wavefunction is said to “collapse” to that value.
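The probabilistic character of the wavefunction can be made concrete with a toy system. The sketch below uses a one-dimensional “particle in a box” (an assumed stand-in for the article’s electron, since the hydrogen wavefunction is messier) to show that each measurement yields one definite position, while the statistics of many measurements follow |ψ|²:

```python
# Born-rule sketch: psi only gives probabilities; each simulated measurement
# "collapses" to one definite position. 1-D box ground state, an assumption
# chosen for simplicity rather than taken from the article.
import math
import random

L = 1.0   # box length
N = 400   # grid points
xs = [(i + 0.5) * L / N for i in range(N)]
psi = [math.sqrt(2 / L) * math.sin(math.pi * x / L) for x in xs]
prob = [p * p * (L / N) for p in psi]          # |psi|^2 dx: the Born rule

random.seed(0)
measurements = random.choices(xs, weights=prob, k=10_000)

# Individual outcomes are random, but their statistics follow |psi|^2:
mean_x = sum(measurements) / len(measurements)
print(f"total probability = {sum(prob):.4f}")   # ~1.0 (normalization)
print(f"mean position     = {mean_x:.3f}")      # ~L/2 by symmetry
```

No single measurement reveals the wavefunction; only the accumulated distribution does, which is exactly the situation the Copenhagen interpretation describes.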

    This view of quantum behavior is called the Copenhagen interpretation because of Bohr’s role in developing it. It is like saying a deck of cards contains 52 different states that a selected card could take, in this case, with equal probabilities; but once you pick a card, that becomes the actual state, leaving the other 51 irrelevant. But the analogy is imperfect: We firmly believe that the suit and value printed on each card are real and fixed whether someone picks a card or not. In classical physics too we assume that objects have definite properties even if they are not being measured, a belief called realism. But the Copenhagen view makes it questionable whether an electron or photon has definite values independent of a measurement.

    If, reading this, you are wondering how and why the act of measurement somehow deeply affects the nature of the thing measured, you are not alone. This is one of the enduring questions that Einstein, among other physicists, and philosophers too, have asked but not yet answered, leaving the full meaning of quantum theory unsettled. The Copenhagen view is widely used, but other interpretations attempt to resolve the issues it raises.

    Regardless, superposition enables a novel approach to computation. For example: A photon has an electric field that can be polarized to point in a given direction. This can be arranged so that under superposition, the photon has a 50 percent probability of pointing either vertically or horizontally, representing binary “1” or “0” respectively. The result is a quantum computer bit (a “qubit”) that is both 1 and 0 until it is measured. An ordinary computer bit is only either 1 or 0, so using qubits enhances computing capacity by a factor of 2^n where n is the number of qubits; for instance, four ordinary bits can hold only one of the 16 binary numbers 0000 to 1111 (decimal 0 to 15) but four qubits hold all 16 simultaneously. By handling many pieces of data in parallel, a quantum computer takes computational power to unprecedented levels.
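The 2^n bookkeeping can be checked directly. This minimal sketch (plain Python, no quantum library; the uniform superposition is an assumed example state) stores the 16 amplitudes of a 4-qubit register and simulates a measurement collapsing to a single 4-bit outcome:

```python
# Sketch of the 4-qubit claim: n qubits carry 2**n amplitudes at once, but a
# measurement yields just one n-bit outcome. Pure Python, no quantum library.
import math
import random

n = 4
dim = 2 ** n                         # 16 basis states |0000> ... |1111>
state = [1 / math.sqrt(dim)] * dim   # uniform superposition over all 16

def measure(amplitudes):
    """Collapse: pick one basis state with probability |amplitude|^2."""
    probs = [a * a for a in amplitudes]
    return random.choices(range(len(amplitudes)), weights=probs, k=1)[0]

random.seed(1)
outcome = measure(state)
print(f"state holds {dim} amplitudes; measurement gave |{outcome:04b}>")
```

The power of a real quantum computer comes from interfering those amplitudes before measuring, so that useful answers become the likely outcomes; the sketch shows only the storage and collapse.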

    Besides photons, qubits can be based on other quantum objects and systems such as electrons,[2] ions, and superconductors. Researchers are now testing these approaches to find the best one for commercial quantum computation. IBM projects a 1,000-qubit superconducting chip in two years.[3] Even a smaller 11-qubit computer, one based on ion qubits, can in principle juggle 2,048 numbers at once.[4]

    Entanglement provides other new ways to handle data; but when Schrödinger invented the term in 1935 he was thinking about entanglement as “not one but apparently the characteristic trait of quantum mechanics.” This defining feature can be illustrated with a pair of electrons. Due to the property called spin, electrons are like tiny magnets with a north pole that points either up or down. It is possible to create a pair of electrons with total spin zero, with one north pole up and the other down, but we don’t know which is which. Now separate the two electrons as far apart as you like and measure the spin direction of electron A; then whatever the result, a measurement of the spin direction of electron B always gives the opposite value.

    This is entanglement, where a measurement of a property of one of two linked quantum objects instantaneously sets the value of that property in the other, regardless of distance. Numerous experiments confirm that quantum objects are correlated in ways that ordinary objects are not, even if far apart. In 2017, a group under Jian-Wei Pan of The University of Science and Technology [中国科学技术大学] (CN) at Chinese Academy of Sciences [中国科学院](CN) showed that a pair of photons remained entangled over a record distance of 1,200 kilometers. Experiments also show that the correlations between entangled objects appear too quickly for any light-speed signal to coordinate them. This violates the condition called locality that arises from special relativity, which is what troubled Einstein when he called entanglement “spooky action at a distance.” With quantum realism already dubious, many physicists are tending toward the view that neither realism nor locality applies in the quantum world.
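The perfect anti-correlation of the singlet pair is easy to simulate. A caveat worth stating: this toy model (my illustration, not the article’s) only reproduces the anti-correlation along one measurement axis; the truly non-classical behavior that rules out local hidden variables shows up when the two sides measure along different axes, as in Bell-test experiments.

```python
# Toy model of the spin-singlet correlation described above: A's result is
# random, B's is always opposite, however far apart the pair is separated.
# This mimics only the single-axis anti-correlation, not full quantum statistics.
import random

def measure_singlet_pair():
    spin_a = random.choice(["up", "down"])        # A's outcome is random...
    spin_b = "down" if spin_a == "up" else "up"   # ...B's is always opposite
    return spin_a, spin_b

random.seed(2)
pairs = [measure_singlet_pair() for _ in range(1000)]
assert all(a != b for a, b in pairs)              # anti-correlated every time
ups = sum(a == "up" for a, _ in pairs)
print(f"A measured 'up' {ups}/1000 times; B was opposite in every trial")
```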

    Nevertheless, through its role in teleportation, entanglement enables quantum telecommunication, even at a global scale. In 1993, Charles Bennett of IBM and colleagues theorized about how to exactly copy and send the unknown state of a quantum system to a distant receiver, that is, teleport quantum information. This was revolutionary because in quantum theory, one cannot exactly clone, say, a photon with an unknown polarization, since making copies would provide a way to evade the uncertainty principle. But entanglement provides a workaround: If you measure the polarization of an entangled photon, you know the polarization of its partner without a direct measurement. The Bennett paper proposed using entangled photons A and B, located respectively with the sender and the recipient, to transmit the unknown state of a third photon X to photon B.

    In 1997 Anton Zeilinger, then at The University of Innsbruck [Leopold-Franzens-Universität Innsbruck](AT), and colleagues successfully teleported the unknown polarization state of a photon. Such experiments have opened the door to distributing data in the form of polarized photon qubits, with a huge bonus: The quantum nature of teleportation makes the transfer of confidential information secure against tampering or eavesdropping. This is not merely an issue in spy novels: Security is essential in a host of electronic transactions that support internet commerce and financial transfers and carry private information.

    For security, these transmissions are encrypted then decoded by the recipient with a secret key sent via a separate secure channel, which is where the method is vulnerable. In a quantum system, however, the key is a random string of qubits, which is tamper-proof: If a third party reads or alters the key, that amounts to a quantum measurement, changing the key in a way that the sender and recipient can detect. With this feature, quantum teleportation raises the possibility of an utterly secure worldwide network implemented through space satellite transmissions over large distances. The 1,200 km transmission of entangled photons was carried out between a space satellite and ground stations.[5] It is a first step toward a global quantum internet.
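The tamper-evidence can be demonstrated with a simplified BB84-style simulation (my illustration of the general idea, not the article’s specific protocol). An eavesdropper who must measure each photon in a guessed basis, then resend it, disturbs roughly a quarter of the sifted key bits, and the sender and recipient expose that by comparing a sample of bits publicly:

```python
# Simplified BB84-style sketch of why a quantum key is tamper-evident.
# '+' and 'x' stand for the two polarization bases; an eavesdropper who
# guesses the wrong basis randomizes the bit and betrays her presence.
import random

def transmit(bit, basis, bob_basis, eve_present):
    """Send one photon through the channel; return Bob's measured bit."""
    if eve_present:
        eve_basis = random.choice("+x")
        if eve_basis != basis:            # wrong basis: Eve's result is random
            bit = random.randint(0, 1)
        basis = eve_basis                 # Eve resends in HER basis
    if bob_basis != basis:                # basis mismatch: Bob's result random
        bit = random.randint(0, 1)
    return bit

def error_rate(eve_present, n=4000):
    errors = kept = 0
    for _ in range(n):
        a_bit, a_basis = random.randint(0, 1), random.choice("+x")
        b_basis = random.choice("+x")
        b_bit = transmit(a_bit, a_basis, b_basis, eve_present)
        if a_basis == b_basis:            # sifting: keep matching-basis rounds
            kept += 1
            errors += (b_bit != a_bit)
    return errors / kept

random.seed(3)
print(f"no eavesdropper:   {error_rate(False):.1%} errors")   # ~0%
print(f"with eavesdropper: {error_rate(True):.1%} errors")    # ~25%
```

The detection is statistical but devastating: a quarter of the sifted bits disagreeing is unmistakable, so any intercepted key is simply discarded before a message is ever encrypted with it.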

    The quantum effects that translate into technology are anti-intuitive and hard to visualize, but one recent development brings us back to the familiar world of Newtonian mechanics with an actual physical force—except that it arises from the quantum vacuum, which unlike the classical idea of vacuum, is not empty. Instead, in the extension of quantum mechanics called quantum field theory, the vacuum of space is the lowest energy state of the universe and supports “virtual” elementary particles that briefly pop into existence. These include virtual photons with varied wavelengths. In 1948, the Dutch theorist Hendrik Casimir predicted that two barely separated parallel metal plates placed in this quantum vacuum would be attracted to each other. This happens because only waves that fit exactly into the gap between the plates survive there, producing a lower energy density than exists outside the plates and hence an inward force. This force is extremely small, and so is the necessary spacing between the plates, around 100 nanometers.
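The scale of the effect can be checked with the standard ideal-plate formula, P = π²ħc / (240 d⁴) (a textbook result; the specific gap values below are my choices, not the article’s):

```python
# Back-of-envelope Casimir attraction between ideal parallel plates:
# pressure P = pi^2 * hbar * c / (240 * d^4), with d the gap in meters.
import math

HBAR = 1.054_571_817e-34   # reduced Planck constant, J*s
C = 2.997_924_58e8         # speed of light, m/s

def casimir_pressure(d_m):
    """Attractive pressure in pascals for plate separation d_m (meters)."""
    return math.pi**2 * HBAR * C / (240 * d_m**4)

for d_nm in (100, 500, 1000):
    p = casimir_pressure(d_nm * 1e-9)
    print(f"gap {d_nm:4d} nm -> {p:.3g} Pa")
```

The steep d⁻⁴ dependence is the point: at the ~100-nanometer spacing the article mentions the pressure is on the order of ten pascals, yet it vanishes for all practical purposes at everyday distances, which is why the force makes such a sensitive ruler for chip-scale devices.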

    Finally, in 1997, delicate measurements quantitatively confirmed Casimir’s prediction, and other results show that the effect also occurs for non-metals in different geometries. Since then, researchers in micro- and nano-electromechanical systems—chip-size devices that combine electrical and mechanical functions—have begun exploiting the Casimir force. Its sensitivity to small distances allows ultra-precise measurements by tracking mechanical movements. One promising idea is to use this technology to examine the human heart and brain without physical contact, by measuring the extremely small magnetic fields they generate. At present this is done with bulky equipment that needs cryogenic cooling. An electromechanical chip would instead use the Casimir effect to analyze a heart or brain right in a physician’s office at room temperature.

    Apart from its usefulness, quantum technology shows that applied quantum physics can broaden fundamental understanding. Ingenious quantum scientists showed that teleportation can make an end run around the uncertainty principle and achieve a real-world result: secure telecommunications. In this case, researchers learned how to manipulate quantum effects to evade one basic principle; but they also appreciate that the secure nature of teleported keys made up of qubits arises from other basic quantum properties: randomness and sensitivity to measurement.

    But will we ever fully understand why quantum theory works so well? Feynman’s graduate adviser, the distinguished theorist John Archibald Wheeler, did not have the answer but clearly stated the goal when he wrote in 1984 that “the most revolutionary discovery in science is yet to come! … not by questioning the quantum, but by uncovering that utterly simple idea that demands the quantum.”


    1. Adesso, G., Lo Franco, R., & Parigi, V. Foundations of quantum mechanics and their impact on contemporary society. Philosophical Transactions of the Royal Society A 376, 20180112 (2018).

    2. Valich, L. One small step for electrons, one giant leap for quantum computers. http://www.rochester.edu (2019).

    3. Gambetta, J. IBM’s roadmap for scaling quantum technology. http://www.research.ibm.com (2020).

    4. Wright, K., et al. Benchmarking an 11-qubit computer. Nature Communications 10, 5464 (2019).

    5. Popkin, G. China’s quantum satellite achieves “spooky action” at record distance. Science (2017).

    See the full article here.



  • richardmitnick 9:07 am on October 31, 2021 Permalink | Reply
    Tags: "The Volcano That Shrouded the Earth and Gave Birth to a Monster", Mount Tambora in Indonesia, Nautilus (US)

    From Nautilus (US) : “The Volcano That Shrouded the Earth and Gave Birth to a Monster” 

    From Nautilus (US)

    December 31, 2015 [Re-presented 10.31.21]
    Gillen D’Arcy Wood

    Two hundred years ago, the greatest eruption in Earth’s recorded history took place. Mount Tambora—located on Sumbawa Island in the East Indies—blew itself up with apocalyptic force in April 1815.

    Mount Tambora. The deep volcanic crater, top, was produced by the eruption of Mount Tambora in Indonesia in April 1815 – the most powerful volcanic blast in recorded history. Credit: Iwan Setiyawan/KOMPAS, via Associated Press via The New York Times.

    Study Links the Explosive Eruption of Mount Tambora and the 1816 “Year Without a Summer” via SciTechDaily.

    After perhaps 1,000 years’ dormancy, the devastating evacuation and collapse required only a few days. It was the concentrated energy of this event that was to have the greatest human impact. By shooting its contents into the stratosphere with biblical force, Tambora ensured its volcanic gases reached sufficient height to disable the seasonal rhythms of the global climate system, throwing human communities worldwide into chaos. The sun-dimming stratospheric aerosols produced by Tambora’s eruption in 1815 spawned the most devastating, sustained period of extreme weather seen on our planet in perhaps thousands of years.

    Within weeks, Tambora’s stratospheric ash cloud circled the planet at the equator, from where it embarked on a slow-moving sabotage of the global climate system at all latitudes. Five months after the eruption, in September 1815, meteorological enthusiast Thomas Forster observed strange, spectacular sunsets over Tunbridge Wells near London. “Fair dry day,” he wrote in his weather diary—but “at sunset a fine red blush marked by diverging red and blue bars.”

    RAIN OF ASH: This map shows the density of ash fall issuing from Tambora’s eruption. The thickness of the ash is shown in centimeters. Prevailing trade winds drove the ash clouds north and west as far as Celebes (Sulawesi) and Borneo, 1,300 kilometers away. The volcanic eruptions could be heard twice as far away. Credit: Macmillan Publishers Ltd.

    Artists across Europe took note of the changed atmosphere. William Turner drew vivid red skyscapes that, in their coloristic abstraction, seem like an advertisement for the future of art. Meanwhile, from his studio on Greifswald Harbor in Germany, Caspar David Friedrich painted a sky with a chromic density that—one scientific study has found—corresponds to the “optical aerosol depth” of the colossal volcanic eruption that year.

    For three years following Tambora’s explosion, to be alive, almost anywhere in the world, meant to be hungry. In New England, 1816 was nicknamed the “Year Without a Summer” or “Eighteen-Hundred-and-Froze-to-Death.” Germans called 1817 the “Year of the Beggar.” Across the globe, harvests perished in frost and drought or were washed away by flooding rains. Villagers in Vermont survived on porcupine and boiled nettles, while the peasants of Yunnan in China sucked on white clay. Summer tourists traveling in France mistook beggars crowding the roads for armies on the march.

    One such group of English tourists, at their lakeside villa near Geneva, passed the cold, crop-killing days by the fire exchanging ghost stories. Mary Shelley’s storm-lashed novel Frankenstein bears the imprint of the Tambora summer of 1816, and her literary coterie—which included the poets Percy Shelley and Lord Byron—serve as tour guides through the suffering worldscape of 1815–18.

    Considered on a geological timescale, Tambora stands almost insistently near to us. The Tambora climate emergency of 1815–18 offers us a rare, clear window onto a world convulsed by weather extremes, with human communities everywhere struggling to adapt to sudden, radical shifts in temperatures and rainfall, and a flow-on tsunami of famine, disease, dislocation, and unrest. It is a case study in the fragile interdependence of human and natural systems.

    On Sumbawa Island, the beginning of the dry season in April 1815 meant a busy time for the local farmers. In a few weeks the rice would be ready, and the raja of Sanggar, a small kingdom on the northeast coast of the island, would send his people into the fields to harvest. Until then, the men of his village, called Koreh, continued to work in the surrounding forests, chopping down the sandalwood trees vital to shipbuilders in the busy sea lanes of the Dutch East Indies.

    On the evening of April 5, 1815, at about the time his servants would have been clearing the dinner dishes, the raja heard an enormous thunderclap. Perhaps his first panicked thought was that the beach lookout had fallen asleep and allowed a pirate ship to creep in to shore and fire its cannon. But everyone was instead staring up at Mount Tambora. A jet of flame burst skyward from the summit, lighting up the darkness and rocking the earth beneath their feet. The noise was incredible, painful.

    Huge plumes of flame issued from the mountain for three hours, until the dark mist of ash became confused with the natural darkness, seeming to announce the end of the world. Then, as suddenly as it had begun, the column of fire collapsed, the earth stopped shaking, and the bone-jarring roars faded. Over the next few days, Tambora continued to bellow occasionally, while ash drifted down from the sky.

    THE BIG CHILL: This diagram shows the penetration of volcanic matter into the stratosphere. As volcanic sulfur dioxide is chemically transformed into sulfuric acid, an aerosol layer forms, reducing incoming radiation from the sun and cooling the surface, even as the stratosphere itself is warmed. Credit: Macmillan Publishers Ltd.

    Meanwhile to the southeast in the capital Bima, colonial administrators were sufficiently alarmed by the events of April 5 to send an official, named Israel, to investigate the emergency situation at the volcano, on the Sanggar Peninsula. By April 10, the man’s bureaucratic zeal had led him to the very slopes of Tambora. There, in the dense tropical forest, at about 7 p.m., he became one of the first victims of the most powerful volcanic eruption in recorded history.

    Within hours, the village of Koreh, along with all other villages on the Sanggar Peninsula, ceased to exist entirely, a victim of Tambora’s spasm of self-destruction. This time three distinct columns of fire burst in a cacophonous roar from the summit to the west, blanketing the stars and uniting in a ball of swirling flame at a height greater than the eruption of five days before. The mountain itself began to glow as streams of boiling liquefied rock coursed down its slopes. At 8 p.m., the terrifying conditions across Sanggar grew worse still, as a hail of pumice stones descended, mixed with a downpour of hot rain and ash.

    On the northern and western slopes of the volcano, whole villages, totaling perhaps 10,000 people, had already been consumed within a vortical hell of flames, ash, boiling magma, and hurricane-strength winds. In 2004, an archaeological team from the University of Rhode Island uncovered the first remains of a village buried by the eruption: a single house under three meters of volcanic pumice and ash. Inside the walled remains, they found two carbonized bodies, perhaps a married couple. The woman, her bones turned to charcoal by the heat, lay on her back, arms extended, holding a long knife. Her sarong, also carbonized, still hung across her shoulder.

    Back on the mountain’s eastern flank, the rain of volcanic rocks gave way to ashfall, but there was to be no relief for the surviving villagers. The spectacular, jet-like “plinian” eruption (named for Pliny the Younger, who left a famous account of Vesuvius’s vertical column of fire) continued unabated, while glowing, fast-moving currents of rock and magma, called “pyroclastic streams,” generated enormous phoenix clouds of choking dust. As these burning magmatic rivers poured into the cool sea, secondary explosions redoubled the aerial ash cloud created by the original plinian jet. An enormous curtain of steam and ash clouds rose and encircled the peninsula, creating, for those trapped inside it, a short-term microclimate of pure horror.

    First, a “violent whirlwind” struck Koreh, blowing away roofs. As it gained in strength, the volcanic hurricane uprooted large trees and launched them like burning javelins into the sea. Horses, cattle, and people alike flew upward in the fiery wind. What survivors remained then faced another deadly element: giant waves from the sea. The crew of a British ship cruising offshore in the Flores strait, coated with ash and bombarded by volcanic rocks, watched stupefied as a 12-foot-high tsunami washed away the rice fields and huts along the Sanggar coast. Then, as if the combined cataclysms of air and sea weren’t enough, the land itself began to sink as the collapse of Tambora’s cone produced waves of subsidence across the plain.

    On the sunless days following the cataclysm, corpses lay unburied all along the roads on the inhabited eastern side of the island between Dompu and Bima. Villages stood deserted, their surviving inhabitants having scattered in search of food. With forests and rice paddies destroyed, and the island’s wells poisoned by volcanic ash, some 40,000 islanders would perish from sickness and starvation in the ensuing weeks, bringing the estimated death toll from the eruption to over 100,000, the largest in history.

    While the skyward eruptions lasted only about three hours each, the boiling cascade of pyroclastic streams down Tambora’s slopes continued a full day. Hot magma gushed from Tambora’s collapsing chamber down to the peninsula, while columns of ash, gas, and rock rose and fell, feeding the flow. The fiery flood that consumed the Sanggar Peninsula, traveling up to 19 miles at great speeds, ultimately extended over a 216-square-mile area, one of the greatest pyroclastic events in the historical record. Within a few short hours, it buried human civilization in northeast Sumbawa under a smoking meter-high layer of ignimbrite.

    Tambora’s cacophony of explosions on April 10, 1815, could be heard hundreds of miles away. All across the region, government ships put to sea in search of imaginary pirates and invading navies. In the seas to the north off Macassar, the captain of the East India Company vessel Benares gave a vivid account of conditions in the region on April 11:

    The ashes now began to fall in showers, and the appearance altogether was truly awful and alarming. By noon, the light that had remained in the eastern part of the horizon disappeared, and complete darkness had covered the face of day … The darkness was so profound throughout the remainder of the day, that I never saw anything equal to it in the darkest night; it was impossible to see your hand when held up close to the eye.

    Across a 600-kilometer radius, darkness descended for two days, while Tambora’s ash cloud expanded to cover a region nearly the size of the continental United States. The entire Southeast Asian region was blanketed in volcanic debris for a week. Day after dark day, British officials conducted business by candlelight, as the death toll mounted.

    Months after the eruption, the atmosphere remained heavy with dust—the sun a blur. Drinking water contaminated by fluorine-rich ash spread disease and with 95 percent of the rice crop in the field at the time of the eruption, the threat of starvation was immediate and universal. In their desperation for food, islanders were reduced to eating dry leaves and their much-valued horseflesh. By the time the acute starvation crisis was over, Sumbawa had lost half its population to famine and disease, while most of the rest had fled to other islands.

    Tambora’s violent impact on global weather patterns was due, in part, to the already unstable conditions prevailing at the time of its eruption. A major tropical volcano had blown up six years prior, in 1809. This cooling event, hugely amplified by the sublime Tambora eruption in 1815, ensured extreme volcanic weather across the entire decade.

    A flurry of research since the discovery of the 1809 eruption has resulted in the identification of the 1810–19 decade as a whole as the coldest in the historical record—a gloomy distinction. A 2008 modeling study concluded that Tambora’s eruption had by far the largest impact on global mean surface air temperatures of any volcanic event since 1610, while the 1809 volcano ranked second over that same period, producing just over half Tambora’s temperature decline. Two papers published the following year confirmed the status of the 1810s as “probably the coldest during the past 500 years or longer,” a fact directly attributable to the proximity of the two major tropical eruptions.

    The spectacular eruption increased that cooling to a truly dire extent, contributing to an overall decline of global average temperatures of 1.5 degrees Celsius across the decade. One-and-a-half degrees might seem a small number, but as a sustained decline characterized by a sharp rise in extreme weather events—floods, droughts, storms, and summer frosts—the chilled global climate system of the 1810s had devastating impacts on human agriculture, food supply, and disease ecologies.

    The Scottish meteorologist George Mackenzie kept meticulous records of cloudy skies between 1803 and 1821 over various parts of the British Isles. Where lovely clear summer days in the earlier period (1803–10) averaged over 20, in the volcanic decade (1811–20) that figure dropped to barely five. For 1816, the Year Without a Summer, Mackenzie recorded no clear days at all.

    On the eve of the summer of 1816, 18-year-old Mary Godwin took flight with her lover, Percy Shelley, and their baby for Switzerland, escaping the chilly atmosphere of her father’s house in London. Mary’s young stepsister, Claire Clairmont, accompanied them, eager to reunite with her own poet-lover, Lord Byron, who had left England for Geneva a week earlier. Mary’s other sister, the ever dispensable Fanny, was left behind.

    The dismal, often terrifying weather of the summer of 1816 is a touchstone of the ensuing correspondence between the sisters. In a letter to Fanny, written on her arrival in Geneva, Mary describes their ascent of the Alps “amidst a violent storm of wind and rain.” The cold was “excessive” and the villagers complained of the lateness of the spring. On their alpine descent days later, a snowstorm ruined their view of Geneva and its famous lake. In her return letter, Fanny expresses sympathy for Mary’s bad luck, reporting that it was “dreadfully dreary and rainy” in London too, and very cold.

    Stormy nor’easters are standard features of Genevan weather in summertime, careening from the mountains to whip the waters of the lake into a sirocco of foam. Beginning in June 1816, these annual storms attained a manic intensity not witnessed before or since. “An almost perpetual rain confines us principally to the house,” Mary wrote to Fanny on the first of June from Maison Chappuis, their rented house on the shores of Lake Geneva: “One night we enjoyed a finer storm than I had ever before beheld. The lake was lit up—the pines on Jura made visible, and all the scene illuminated for an instant, when a pitchy blackness succeeded, and the thunder came in frightful bursts over our heads amid the darkness.” A diarist in nearby Montreux compared the bodily impact of these deafening thunderclaps to a heart attack.

    In fact, the year 1816 remains the coldest, wettest Geneva summer since records began in 1753. That unforgettable year, 130 days of rain between April and September swelled the waters of Lake Geneva, flooding the city. Up in the mountains the snow refused to melt. Clouds hung heavy, while the winds blew bitingly cold. In some parts of the inundated city, transport was only possible by boat. A cold northwest wind from the Jura mountains—called le joran by locals—swept relentlessly across the lake. The Montreux diarist called the persistent snows and le joran “the twin evil genies of 1816.” Tourists complained they couldn’t recognize the famously picturesque landscape because of the constant wind and avalanches, which drove snow across vast areas of the plains.

    On the night of June 13, 1816, the Shelleys’ splendidly domiciled neighbor, Lord Byron, stood out on the balcony of the lakeside Villa Diodati to witness “the mightiest of the storms” that he—well-traveled aristocrat that he was—had ever seen. He memorialized that tumultuous night in his wildly popular poem “Childe Harold’s Pilgrimage”:

    “The sky is changed—and such a change! Oh night,
    And storm, and darkness, ye are wondrous strong …
    And now again ’tis black,—and now, the glee
    Of the loud hills shakes with its mountain-mirth,
    As if they did rejoice o’er a young earthquake’s birth.”

    In Byron’s imagination, the Tamboran storms of 1816 achieve volcanic dimensions—like an “earthquake’s birth”—and take delight in their destructive power.

    What caused the terrible weather conditions over Britain and western Europe in 1816–18? The relation between volcanism and climate depends on eruptive scale. Volcanic ejecta and gases must penetrate skyward high enough to reach the stratosphere where, in its cold lower reaches, sulfate aerosols form. These then enter the meridional currents of the global climate system, disrupting normal patterns of temperature and precipitation across the hemispheres. Tambora’s April 1815 eruption launched enormous volumes of long-suppressed volcanic rock and gases more than 25 miles into the stratosphere. This volcanic plume—consisting of as much as 12 cubic miles of total matter—eventually spread across 386,000 square miles of the Earth’s atmosphere, an aerosol umbrella six times the size of the cloud produced by the massive 1991 eruption of Mount Pinatubo in the Philippines.

    In the first weeks after Tambora’s eruption, a vast volume of coarser ash particles—volcanic “dust”—cascaded back to Earth mixed with rain. But ejecta of smaller size—water vapor, molecules of sulfur and fluorine gases, and fine ash particles—remained suspended in the stratosphere, where a sequence of chemical reactions resulted in the formation of a 60-megaton sulfate aerosol layer. Over the following months, this dynamic, streamer-like cloud of aerosols—much smaller in size than the original volcanic matter—expanded by degrees to form a molecular screen of planetary scale, spread aloft by the winds and meridional currents of the world. In the course of an 18-month journey, it passed across both south and north poles, leaving a telltale sulfate imprint on the ice for paleo-climatologists to discover more than a century and a half later.

    Once settled in the dry firmament of the stratosphere, Tambora’s global veil circulated above the weather dynamics of the atmosphere, comfortably distanced from the rain clouds that might have dispersed it. From there, its planet-girdling aerosol film continued to scatter shortwave solar radiation back into space until early 1818, while allowing much of the longwave radiant heat from the earth to escape. The resultant three-year cooling regime, unevenly distributed by the currents of the world’s major weather systems, barely affected some places on the globe (Russia, for instance, and the trans-Appalachian United States) but precipitated a drastic 5 to 6 degrees Fahrenheit seasonal decline in other regions, including Europe.
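
    The mechanism described here—aerosols scattering incoming shortwave sunlight while longwave heat escapes—amounts to a negative radiative forcing, and its equilibrium effect on temperature can be roughed out with a zero-dimensional energy balance: ΔT ≈ λ·ΔF. A minimal sketch follows; the peak forcing of −4 W/m² and the sensitivity of 0.5 °C per W/m² are illustrative assumptions for this example, not figures taken from the article:

    ```python
    # Back-of-envelope volcanic cooling: delta_T = sensitivity * forcing.
    # The numeric values below are illustrative assumptions, not article data.

    def equilibrium_cooling(forcing_w_m2: float, sensitivity_c_per_w_m2: float) -> float:
        """Equilibrium surface temperature change for a given radiative forcing."""
        return sensitivity_c_per_w_m2 * forcing_w_m2

    peak_forcing = -4.0   # W/m^2, assumed peak aerosol forcing after the eruption
    sensitivity = 0.5     # deg C per W/m^2, assumed climate sensitivity parameter

    delta_t = equilibrium_cooling(peak_forcing, sensitivity)
    print(f"Estimated peak global cooling: {delta_t:.1f} deg C")  # -2.0
    ```

    The observed decade-average decline of about 1.5 °C is smaller than such a peak equilibrium estimate, as the oceans absorb heat and the forcing decayed as the aerosol veil dispersed.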

    The first extreme impact of a major tropical eruption is felt in raw temperature. But in western Europe, biblical-style inundation during the 1816 summer growing season wrought the greatest havoc. Because of the tilt of the Earth in relation to the sun and the different heat absorption rates of land and sea, solar insolation of the planet is irregular. Uneven heating in turn creates an air pressure gradient across the latitudes of the globe. Wind is the weatherly expression of these temperature and pressure differentials, transporting heat from the tropics to the poles, moderating temperature extremes, and carrying evaporated water from the oceans over the land to support plant and animal life. The major meridional circulation patterns, measuring thousands of miles in breadth, transport energy and moisture horizontally across the globe, creating continental-scale weather patterns. Meanwhile, at smaller scales, the redistribution of heat and moisture through the vertical column of the atmosphere produces localized weather phenomena, such as thunderstorms.

    In the summer after Tambora’s eruption, however, the aerosol loading of the stratosphere heated that upper layer, which bore down upon the weather-bearing atmosphere below. The tropopause that marks the ceiling of the Earth’s troposphere dropped lower, cooling air temperatures and displacing the jet streams, storm tracks, and meridional circulation patterns from their usual course. By early 1816, Tambora’s chilling envelope had created a radiation deficit across the North Atlantic, altering the dynamics of the vital Arctic Oscillation. Slower-churning warm waters north of the Azores pumped overloads of moisture into the atmosphere, saturating the skies while enhancing the temperature gradient that fuels wind dynamics. Meanwhile, air pressure at sea level plummeted across the mid-latitudes of the North Atlantic, dragging cyclonic storm tracks southward. Pioneering British climate historian Hubert Lamb has calculated that the influential Icelandic low-pressure system shifted several degrees latitude to the south during the cold summers of the 1810s compared to 20th-century norms, settling in the unfamiliar domain of the British Isles, and thus ensuring colder, wetter conditions for all of western Europe.

    Both computer models and historical data draw a dramatic picture of Tambora-driven storms hammering Britain and western Europe. A recent computer simulation conducted at the National Center for Atmospheric Research in Boulder, Colorado showed fierce westerly winds in the North Atlantic in the aftermath of a major tropical eruption, while a parallel study based on multiproxy reconstructions of volcanic impacts on European climate since 1500 concluded that volcanic weather drives the increased “advection of maritime air from the North Atlantic,” meaning “stronger westerlies” and “anomalously wet conditions over Northern Europe.”

    Back at the ground level of observed weather phenomena, an archival study of Scottish weather has found that, in the 1816–18 period, gale-force winds battered Edinburgh at a rate and intensity unmatched in over 200 years of record keeping. In January 1818, a particularly violent storm nearly destroyed the beloved St. John’s Chapel in the heart of the city. The slowing of oceanic currents in response to the overall deficit of solar radiation post-Tambora had left unusual volumes of heated water churning through the critical area between Iceland and the Azores, sapping air pressure, energizing westerly winds, and giving shape to titanic storms.

    It was in this literally electric atmosphere that the Shelley party in Geneva, with Byron attached, conceived the idea of a ghost story contest, to entertain themselves indoors during this cold, wild summer. On the night of June 18, 1816, while another volcanic summer thunderstorm raged around them, Mary and Percy Shelley, Claire Clairmont, Byron, and Byron’s doctor-companion John Polidori recited the poet Coleridge’s recent volume of gothic verse to each other in the candlelit dimness at the Villa Diodati. In his 1986 movie about the Shelley circle that summer, British film director Ken Russell imagines Shelley gulping tincture of opium while Claire Clairmont performs fellatio on Byron, recumbent in a chair. Group sex in the drawing room might be implausible, even for the Shelley circle, but drug taking is very likely, inspired by Coleridge, the poet-addict supreme. How else to explain Shelley’s running screaming from the room at Byron’s recitation of the psychosexual “Christabel,” tormented by his vision of a bare-chested Mary Shelley with eyes instead of nipples?

    From such antics, Byron conceived the outline of a modern vampire tale, which the bitter Polidori would later appropriate and publish under Byron’s name as a satire on his employer’s cruel aristocratic hauteur and sexual voracity. For Mary, the lurid events of this stormy night gave literary body to her own distracted musings on the ghost story competition, instituted two nights earlier. She would write a horror story of her own, about a doomed monster brought unwittingly to life during a storm. As Percy Shelley later wrote, the novel itself seemed generated by “the magnificent energy and swiftness of a tempest.” Thus it was that the unique creative synergies of this remarkable group of college-age tourists—in the course of a few weeks’ biblical weather—gave birth to two singular icons of modern popular culture: Frankenstein’s monster and the Byronic Dracula.

    A week after the memorable night of June 18, Byron and Shelley almost came to grief sailing on Lake Geneva, caught unawares as another violent storm swept in from the east. “The wind gradually increased in violence,” Shelley recalled, “until it blew tremendously; and, as it came from the remotest extremity of the lake, produced waves of a frightful height, and covered the whole surface with a chaos of foam.” By some miracle they found a sheltered port, where even the storm-hardened locals exchanged “looks of wonder.” Onshore, trees had blown down or been shattered by lightning.

    The pyrotechnical lightning displays of June 1816 ignited the literary imagination of Mary Shelley. In Frankenstein, she uses the experience of a violent thunderstorm as the scene of fateful inspiration for her young, doomed scientist:

    “When I was about fifteen years old … we witnessed a most violent and terrible thunderstorm. It advanced from behind the mountains of Jura; and the thunder burst at once with frightful loudness from various quarters of the heavens. I remained, while the storm lasted, watching its progress with curiosity and delight. As I stood at the door, on a sudden I beheld a stream of fire issue from an old and beautiful oak, which stood about twenty yards from our house; and so soon as the dazzling light vanished, the oak had disappeared, and nothing remained but a blasted stump.”

    Frankenstein’s life is changed in this moment; he devotes himself, with maniacal energy, to the study of electricity and galvanism. In the fierce smithy of that Tamboran storm, Frankenstein is born as the anti-superhero of modernity—the “Modern Prometheus”—stealer of the gods’ fire.

    Tambora’s influence on human history derives not from extreme weather events considered in isolation but from the myriad environmental impacts of a climate system gone haywire. As a result of the prolonged poor weather, crop yields across the British Isles and western Europe plummeted by 75 percent and more in 1816–17. In the first summer of Tambora’s cold, wet, and windy regime, the European harvest languished miserably. Farmers left their crops in the field as long as they dared, hoping some fraction might mature in late-coming sunshine. But the longed-for warm spell never arrived and at last, in October, they surrendered. Potato crops were left to rot, while entire fields of barley and oats lay blanketed in snow until the following spring.

    In Germany, the descent from bad weather to crop failure to mass starvation conditions took a frighteningly rapid course. Carl von Clausewitz, the military tactician, witnessed “heartrending” scenes on his horseback travels through the Rhine country in the spring of 1817: “I saw decimated people, barely human, prowling the fields for half-rotten potatoes.” In the winter of 1817, in Augsburg, Memmingen, and other German towns, riots erupted over the rumored export of corn to starving Switzerland, while the locals were reduced to eating horse and dog flesh.

    Meanwhile, back in England, riots broke out in the East Anglian counties as early as May 1816. Armed laborers bearing flags with the slogan “Bread or Blood” marched on the cathedral town of Ely, held its magistrates hostage, and fought a pitched battle against the militia.

    In his magisterial account of the social and economic upheaval in Europe during the Tambora period, historian John Post has shown the scale of human suffering to be worst in Switzerland, home to Shelley and her circle in 1816. Even in normal times, a Swiss family devoted at least half its income to buying bread. Already by August 1816, bread was scarce, and in December, bakers in Montreux threatened to cease production unless they could be allowed to raise prices. With imminent famine came the threat of “soulèvements”: violent uprisings. Bakers were set upon by starving mobs in the market towns and their shops destroyed. The English ambassador to Switzerland, Stratford Canning, wrote to his prime minister that an army of peasants, unemployed and starving, was assembling to march on Lausanne.

    Most shocking of all was the fate of some desperate mothers. In horrific circumstances repeated around the world in the Tambora period, some Swiss families abandoned their offspring in the crisis, while others chose killing their children as the more humane course. For this crime, some starving women were apprehended and decapitated. Thousands of Swiss with more means and resilience emigrated east to prosperous Russia, while others set off along the Rhine to Holland and sailed from there to North America, which witnessed its first significant wave of refugee European migration in the 19th century. The numbers of European immigrants arriving at U.S. ports in 1817 more than doubled the number of any previous year.

    Devastated by famine and disease in the Tambora period, the poor of Europe hurriedly buried their dead before resuming the bitter fight for their own survival. In the worst cases, children were abandoned by their families and died alone in the fields or by the roadside. The well-born members of the Shelley circle were never reduced to such abysmal circumstances. They did not experience the food crises that afflicted millions among the rural populations of western Europe in the Tambora period. Yet the Shelleys’ celebrated writings were enmeshed within the web of ecological breakdown following the Tambora eruption.

    Byron and Percy Shelley were companions on a weeklong walking tour of Alpine Switzerland in June 1816, during which they debated poetry, metaphysics, and the future of mankind but also found time to remark on the village children they encountered, who “appeared in an extraordinary way deformed and diseased. Most of them were crooked, and with enlarged throats.” In Frankenstein, the Doctor’s benighted creation assumes a similar grotesque shape: a barely human creature, deformed, crooked, and enlarged. Like the hordes of refugees on the roads of Europe seeking aid in 1816–18, the Creature, when he ventures into the towns, is met with fear and hostility, horror and abomination. As the indigent Creature himself puts it, he suffered first “from the inclemency of the season” but “still more from the barbarity of man.”

    As remarkable a feat of literary imagination as Frankenstein is, Mary Shelley was not wanting for real-world inspiration for her horror story: the deteriorating rural populations of Europe, caught in the climatic upheaval of Mount Tambora.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to Nautilus (US). We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

  • richardmitnick 9:34 am on September 16, 2021
    Tags: "Where Aliens Could Be Watching Us", Nautilus (US)

    From Cornell University (US) via Nautilus (US): “Where Aliens Could Be Watching Us”

    From Cornell University (US)


    Nautilus (US)

    September 16, 2021
    Lisa Kaltenegger

    A view of the Earth and sun from thousands of miles above our planet, with stars in position to see Earth transiting around the sun brightened and the Milky Way visible on the left. Credit: Open Space/ © American Museum of Natural History-New York City (US).

    More than 1,700 stars could have seen Earth in the past 5,000 years.

    Do you ever feel like someone is watching you? They could be. And I’m not talking about the odd neighbors at the end of your street.

    This summer, at The Carl Sagan Institute (US) at Cornell University and The American Museum of Natural History (US) in NYC, my colleague Jacky Faherty and I identified 1,715 stars in our solar neighborhood that could have seen Earth in the past 5,000 years. In the mesmerizing gravitational dance of the stars, those stars found themselves at just the right place to spot Earth. That’s because our pale blue dot blocks out part of the sun’s light from their view. This is how we find most exoplanets circling other stars. We spot the temporary dimming of their star’s light.

    The perfect cosmic front seat to Earth, with its curious beings, is quite rare. But with about the same technology as we have, any nominal, curious aliens on planets circling one of the 1,715 stars could have spotted us. Would they have identified us as intelligent life?

    All of us observe the dynamics of the cosmos every night. Stars rise and set—including our sun—because Earth rotates among the rich stellar tapestry. Our night sky changes throughout the year because Earth moves in orbit around the sun. We only see stars at night when the sun doesn’t outshine them. While circling the sun, we glimpse the brightest stars in the anti-sun direction only. Thus, we see different stars in different seasons.

    If we could watch for thousands of years, we could watch the dynamic dance of the cosmos unfold in our night sky. But alternatively, we can use the newest data from The European Space Agency [Agence spatiale européenne][Europäische Weltraumorganisation](EU)’s GAIA mission and computers to fast-forward the time before our eyes, with decades unfolding in mere minutes.

    While we can only see the light of the stars, we already know that more than 4,500 of these stars are not alone. They host extrasolar planets. Several thousand additional signals indicate even more new worlds on our cosmic horizon.

    Astronomers found most of these exoplanets in the last two decades because of a temporary dimming of their stars when a planet, by chance, crossed our line of sight on its journey around its star.

    The planet temporarily blocks out part of the hot star—and its light—from our view. Telescopes on the ground and from space, including NASA’s Kepler and TESS (Transiting Exoplanet Survey Satellite) mission, found thousands of exoplanets by spotting this dimming, which repeats like clockwork.

    The time between dimmings tells us how long the planet needs to circle its star. That lets us figure out how far an exoplanet wanders from its hot central star. Most known exoplanets are scorching hot gas balls. Planets that orbit closer to their star need less time to circle it, so we also find them faster than the cooler ones farther away. But about three dozen of these exoplanets are already cool enough. They orbit at the right distance from their stars, where it is not too hot and not too cold, and surface temperatures could allow rivers and oceans to glisten on their surfaces—the so-called Habitable Zone.
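
    The step from transit timing to orbital distance is Kepler’s third law: in units of astronomical units, years, and solar masses, a³ = M★·P². A minimal sketch of that conversion (the function name and the example stellar masses are illustrative, not from the article):

    ```python
    # Kepler's third law in convenient units: a^3 = M_star * P^2
    # (a in AU, P in years, M_star in solar masses).

    def semimajor_axis_au(period_years: float, star_mass_solar: float = 1.0) -> float:
        """Orbital distance (AU) from orbital period (years) and stellar mass."""
        return (star_mass_solar * period_years ** 2) ** (1.0 / 3.0)

    # Earth: a 1-year period around a 1-solar-mass star gives 1 AU.
    print(semimajor_axis_au(1.0))           # 1.0
    # A hot-Jupiter-like 3-day orbit sits far closer in (roughly 0.04 AU).
    print(semimajor_axis_au(3.0 / 365.25))
    ```

    This is why the short-period planets found first are the scorching ones: a short gap between dimmings directly implies a tight, hot orbit.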

    TRANSIT OF EARTH In this video, scientists at Cornell University and the American Museum of Natural History explain how they have identified stars that have been at just the right place, sometime in the past 5,000 years, to have seen Earth as a transiting exoplanet. Credit: D. Desir/National Aeronautics Space Agency (US)/AMNH OpenSpace.

    This vantage point—to see a planet block part of the hot stellar surface from view—is special. The alignment of us and the planet must be just right. Thus, these thousands of known exoplanets are only the tip of the figurative exoplanet iceberg. The ones we can most easily spot hint at the majority waiting to be discovered.

    But what if we change that vantage point? If anyone out there were looking, which stars are just in the right place to spot us?

    Our powers of observation have been boosted by the European Space Agency’s Gaia mission. Launched in 2013, the Gaia spacecraft is mapping the motion of stars around the center of our galaxy, the Milky Way.

    The agency aims to survey 1 percent of the galaxy’s 100 billion stars. It has generated the best catalog of stars in our neighborhood within 326 light-years from the sun. Less than 1 percent of the 331,312 catalogued objects—stars, brown dwarfs, and stellar corpses—are at the right place to see Earth as a transiting exoplanet. This special vantage point is held by only those objects in a position close to the plane of Earth’s orbit. Roughly 1,400 stars are at the right place right now to see Earth as a transiting exoplanet.

    But this special vantage point is not forever. It is gained and lost in the precise gravitational dance in our dynamic cosmos. How long does that cosmic front-row seat to Earth transit last? Because the Gaia mission records the motion of the stars, we can spin their movement into the future and trace it back into the past on a computer. It shows us the night sky over thousands of years since civilizations bloomed on Earth and gives us a glimpse of a night sky of the far future, millennia away.

    If we had observed the sky for transiting planets thousands of years earlier or later, we would see different ones. And different ones could find us. We calculated that 1,715 objects in our solar neighborhood could have seen Earth transit since human civilizations started to bloom about 5,000 years ago and kept that special vantage point for hundreds of years. Three hundred and nineteen objects will enter the Earth transit zone in the next 5,000 years.
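
    The underlying check is geometric: a star can see Earth transit only while it lies within a thin band around the plane of Earth’s orbit (the published analysis uses a half-width of about 0.264 degrees of ecliptic latitude), and Gaia’s proper motions let each star’s position be propagated through time. A simplified sketch, assuming small-angle linear motion; the sample star values are hypothetical:

    ```python
    # Earth transit zone (ETZ) membership check, propagating a star's ecliptic
    # latitude linearly from its proper motion. The half-width and the sample
    # star below are illustrative assumptions for this sketch.

    ETZ_HALF_WIDTH_DEG = 0.264  # approximate half-width of the Earth transit zone

    def ecliptic_lat_at(epoch_years: float, lat_now_deg: float,
                        pm_lat_mas_per_yr: float) -> float:
        """Ecliptic latitude (deg) at a given epoch relative to now.

        Proper motion is in milliarcseconds per year; 1 deg = 3.6e6 mas.
        """
        return lat_now_deg + pm_lat_mas_per_yr * epoch_years / 3.6e6

    def sees_earth_transit(epoch_years: float, lat_now_deg: float,
                           pm_lat_mas_per_yr: float) -> bool:
        return abs(ecliptic_lat_at(epoch_years, lat_now_deg,
                                   pm_lat_mas_per_yr)) <= ETZ_HALF_WIDTH_DEG

    # Hypothetical star: 0.1 deg above the ecliptic, drifting 200 mas/yr north.
    print(sees_earth_transit(0, 0.1, 200.0))     # in the zone today
    print(sees_earth_transit(5000, 0.1, 200.0))  # drifted out 5,000 years hence
    ```

    Running this membership test over the whole Gaia catalog at epochs across ±5,000 years is, in essence, how a list like the 1,715 past viewers and 319 future ones is assembled.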

    Among these 2,034 stars, seven harbor known exoplanets, with three stars’ exoplanets circling in this temperate Habitable Zone. However, the small region around the plane of the Earth’s orbit, where all these stars lie, is crowded. Astronomers usually don’t look for planets there. Generally, it is easier to find exoplanets around stars in non-crowded fields. But now we have a reason: to discover the planets that could also discover us.

    NASA’s Kepler mission stared for more than three years at about 150,000 stars about 1,000 light-years away. These 150,000 stars fit in a small fraction of the sky. Its goal was to estimate how many stars harbor exoplanets. The answer is exciting. Every second star has at least one planet, big or small, and about every fourth star hosts a planet in the Goldilocks Zone. These results provide cautious optimism about our chances of not being the only life in the cosmos. It also means that about 500 exoplanets in the Habitable Zone should be on our list, waiting to be discovered.

    The three systems that host planets in the Habitable Zone in the Earth transit zone are close enough to detect radio waves from Earth. Because radio waves travel at light speed, they have only washed over 75 of the stars on our list so far. These stars are within 100 light-years from Earth—because light had 100 years to travel since Earth first started to leak radio signals.

    Ross 128b, an exoplanet a mere 11 light-years away from us, could have seen Earth block the sun’s light about 3,000 years ago. But it lost this bull’s-eye view about 900 years ago. Another exoplanet, Teegarden’s Star b, which is a bit heavier than Earth, and circles a red sun, is about 12.5 light-years away, and will start to see Earth transit in 29 years. And the fascinating Trappist-1 system, with seven Earth-size planets at 40 light-years distance, will be able to see Earth as a transiting planet but only in about 1,600 years.
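
    The “radio bubble” reasoning in these paragraphs is simple arithmetic: signals travel one light-year per year, so after roughly a century of broadcasting, only stars closer than about 100 light-years have been reached. A minimal sketch using the distances quoted above (the fourth entry is a hypothetical system added for contrast):

    ```python
    # Which nearby systems have our radio signals reached? A signal moving at
    # light speed covers one light-year per year, so the test is distance <= age.

    RADIO_AGE_YEARS = 100  # roughly a century of human radio leakage

    stars_ly = {
        "Ross 128": 11.0,
        "Teegarden's Star": 12.5,
        "Trappist-1": 40.0,
        "Farther system": 150.0,  # hypothetical, beyond today's radio bubble
    }

    for name, dist in stars_ly.items():
        reached = dist <= RADIO_AGE_YEARS
        print(f"{name} ({dist} ly): {'reached' if reached else 'not yet reached'}")
    ```

    The same distances set the reply lag: even a prompt answer from Ross 128 would take 11 years to come back.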

    With the launch of the James Webb Space Telescope (JWST) later this year, we will have a big enough telescope to collect light from small, close-by exoplanets that could be like ours.

    A particular combination of oxygen and methane has identified Earth as a living planet for about 2 billion years. That combination of gases is what we will be looking for in the atmosphere of other worlds. This exoplanet exploration will be on the edge of our technological possibility, but it will be possible for the first time. Future technology should be able to characterize exoplanets, not just in transit. But for now, telescopes like the JWST collect only enough light from the atmosphere of close-by transiting worlds to explore them, allowing us to wonder whether nominal curious astronomers on alien worlds might be watching us too.

    Of course, no aliens have visited us yet, and we haven’t found any cosmic messages from them. Is that because we’re unique? Have other civilizations destroyed themselves? Or are they just not interested in us?

    In my Introduction to Astronomy class at Cornell, I ask students whether they would contact or visit an exoplanet that is 5,000 years younger than Earth or 5,000 years older. Without fail, they pick the older planet and its potentially more advanced life. More “advanced” than us. During our discussions, the concept of advanced life invariably rolls back around to us. Would life on Earth qualify as intelligent for anyone watching?

    After all, we’ve been using radio waves for only about 100 years, and so those waves would only have traveled 100 light-years so far. We have set foot on the moon but not farther yet and are only starting to think about interstellar travel. So our interstellar travel resume is awfully thin.

    One thing that an alien astronomer would likely see is our atmosphere. If they had been watching us for a while, they would have seen that we destroyed our ozone layer—but we also managed to fix it. So maybe we would have scored a point on their intelligence scale. Now, of course, they see our atmosphere accumulating carbon dioxide, with no sign of abating yet. But maybe every civilization goes through this; maybe every civilization nearly destroys its habitat before figuring out a way to save itself from itself.

    If any aliens are out there watching us from those 2,034 stars in our solar neighborhood, I hope they’re also rooting for us.


    Once called “the first American university” by educational historian Frederick Rudolph, Cornell University represents a distinctive mix of eminent scholarship and democratic ideals. Adding practical subjects to the classics and admitting qualified students regardless of nationality, race, social circumstance, gender, or religion was quite a departure when Cornell was founded in 1865.

    Today’s Cornell reflects this heritage of egalitarian excellence. It is home to the nation’s first colleges devoted to hotel administration, industrial and labor relations, and veterinary medicine. Both a private university and the land-grant institution of New York State, Cornell University is the most educationally diverse member of the Ivy League.

    On the Ithaca campus alone nearly 20,000 students representing every state and 120 countries choose from among 4,000 courses in 11 undergraduate, graduate, and professional schools. Many undergraduates participate in a wide range of interdisciplinary programs, play meaningful roles in original research, and study in Cornell programs in Washington, New York City, and the world over.

    Cornell University (US) is a private, statutory, Ivy League and land-grant research university in Ithaca, New York. Founded in 1865 by Ezra Cornell and Andrew Dickson White, the university was intended to teach and make contributions in all fields of knowledge—from the classics to the sciences, and from the theoretical to the applied. These ideals, unconventional for the time, are captured in Cornell’s founding principle, a popular 1868 quotation from founder Ezra Cornell: “I would found an institution where any person can find instruction in any study.”

    The university is broadly organized into seven undergraduate colleges and seven graduate divisions at its main Ithaca campus, with each college and division defining its specific admission standards and academic programs in near autonomy. The university also administers two satellite medical campuses, one in New York City and one in Education City, Qatar, and Jacobs Technion-Cornell Institute(US) in New York City, a graduate program that incorporates technology, business, and creative thinking. The program moved from Google’s Chelsea Building in New York City to its permanent campus on Roosevelt Island in September 2017.

    Cornell is one of the few private land grant universities in the United States. Of its seven undergraduate colleges, three are state-supported statutory or contract colleges through the SUNY – The State University of New York (US) system, including its Agricultural and Human Ecology colleges as well as its Industrial Labor Relations school. Of Cornell’s graduate schools, only the veterinary college is state-supported. As a land grant college, Cornell operates a cooperative extension outreach program in every county of New York and receives annual funding from the State of New York for certain educational missions. The Cornell University Ithaca Campus comprises 745 acres, but is much larger when the Cornell Botanic Gardens (more than 4,300 acres) and the numerous university-owned lands in New York City are considered.

    Alumni and affiliates of Cornell have reached many notable and influential positions in politics, media, and science. As of January 2021, 61 Nobel laureates, four Turing Award winners and one Fields Medalist have been affiliated with Cornell. Cornell counts more than 250,000 living alumni, and its former and present faculty and alumni include 34 Marshall Scholars, 33 Rhodes Scholars, 29 Truman Scholars, 7 Gates Scholars, 55 Olympic Medalists, 10 current Fortune 500 CEOs, and 35 billionaire alumni. Since its founding, Cornell has been a co-educational, non-sectarian institution where admission has not been restricted by religion or race. The student body consists of more than 15,000 undergraduate and 9,000 graduate students from all 50 American states and 119 countries.


    Cornell University was founded on April 27, 1865; the New York State (NYS) Senate authorized the university as the state’s land grant institution. Senator Ezra Cornell offered his farm in Ithaca, New York, as a site and $500,000 of his personal fortune as an initial endowment. Fellow senator and educator Andrew Dickson White agreed to be the first president. During the next three years, White oversaw the construction of the first two buildings and traveled to attract students and faculty. The university was inaugurated on October 7, 1868, and 412 men were enrolled the next day.

    Cornell developed as a technologically innovative institution, applying its research to its own campus and to outreach efforts. For example, in 1883 it was one of the first university campuses to use electricity from a water-powered dynamo to light the grounds. Since 1894, Cornell has included colleges that are state funded and fulfill statutory requirements; it has also administered research and extension activities that have been jointly funded by state and federal matching programs.

    Cornell has had active alumni since its earliest classes. It was one of the first universities to include alumni-elected representatives on its Board of Trustees. Cornell was also among the Ivies that had heightened student activism during the 1960s related to cultural issues, civil rights, and opposition to the Vietnam War, with protests and occupations resulting in the resignation of Cornell’s president and the restructuring of university governance. Today the university has more than 4,000 courses. Cornell is also known for the Residential Club Fire of 1967, a fire in the Residential Club building that killed eight students and one professor.

    Since 2000, Cornell has been expanding its international programs. In 2004, the university opened the Weill Cornell Medical College in Qatar. It has partnerships with institutions in India, Singapore, and the People’s Republic of China. Former president Jeffrey S. Lehman described the university, with its high international profile, as a “transnational university”. On March 9, 2004, Cornell and Stanford University(US) laid the cornerstone for a new ‘Bridging the Rift Center’ to be built and jointly operated for education on the Israel–Jordan border.


    Cornell, a research university, ranks fourth in the world in the number of graduates who go on to pursue PhDs in engineering or the natural sciences at American institutions, and fifth in the world in the number of graduates who pursue PhDs at American institutions in any field. Research is a central element of the university’s mission; in 2009 Cornell spent $671 million on science and engineering research and development, the 16th highest in the United States. Cornell is classified among “R1: Doctoral Universities – Very high research activity”.

    For the 2016–17 fiscal year, the university spent $984.5 million on research. Federal sources constitute the largest source of research funding, with total federal investment of $438.2 million. The agencies contributing the largest share of that investment are the Department of Health and Human Services and the National Science Foundation(US), accounting for 49.6% and 24.4% of all federal investment, respectively. Cornell was on the top-ten list of U.S. universities receiving the most patents in 2003, and was one of the nation’s top five institutions in forming start-up companies. In 2004–05, Cornell received 200 invention disclosures; filed 203 U.S. patent applications; completed 77 commercial license agreements; and distributed royalties of more than $4.1 million to Cornell units and inventors.

    Since 1962, Cornell has been involved in unmanned missions to Mars. In the 21st century, Cornell had a hand in the Mars Exploration Rover Mission. Cornell’s Steve Squyres, Principal Investigator for the Athena Science Payload, led the selection of the landing zones and requested data collection features for the Spirit and Opportunity rovers. NASA-JPL/Caltech(US) engineers took those requests and designed the rovers to meet them. The rovers, both of which have operated long past their original life expectancies, are responsible for the discoveries that were awarded 2004 Breakthrough of the Year honors by Science. Control of the Mars rovers has shifted between National Aeronautics and Space Administration(US)’s JPL-Caltech (US) and Cornell’s Space Sciences Building.

    Further, Cornell researchers discovered the rings around the planet Uranus, and Cornell built and operated the telescope at Arecibo Observatory located in Arecibo, Puerto Rico(US) until 2011, when they transferred the operations to SRI International, the Universities Space Research Association (US) and the Metropolitan University of Puerto Rico [Universidad Metropolitana de Puerto Rico](US).

    The Automotive Crash Injury Research Project was begun in 1952. It pioneered the use of crash testing, originally using corpses rather than dummies. The project discovered that improved door locks, energy-absorbing steering wheels, padded dashboards, and seat belts could prevent an extraordinary percentage of injuries.

    In the early 1980s, Cornell deployed the first IBM 3090-400VF and coupled two IBM 3090-600E systems to investigate coarse-grained parallel computing. In 1984, the National Science Foundation began work on establishing five new supercomputer centers, including the Cornell Center for Advanced Computing, to provide high-speed computing resources for research within the United States. As a National Science Foundation (US) center, Cornell deployed the first IBM Scalable Parallel supercomputer.

    In the 1990s, Cornell developed scheduling software and deployed the first supercomputer built by Dell. Most recently, Cornell deployed Red Cloud, one of the first cloud computing services designed specifically for research. Today, the center is a partner on the National Science Foundation’s XSEDE (Extreme Science and Engineering Discovery Environment) supercomputing program, providing coordination for XSEDE architecture and design, systems reliability testing, and online training using the Cornell Virtual Workshop learning platform.

    Cornell scientists have researched the fundamental particles of nature for more than 70 years. Cornell physicists, such as Hans Bethe, contributed not only to the foundations of nuclear physics but also participated in the Manhattan Project. In the 1930s, Cornell built the second cyclotron in the United States. In the 1950s, Cornell physicists became the first to study synchrotron radiation.

    During the 1990s, the Cornell Electron Storage Ring, located beneath Alumni Field, was the world’s highest-luminosity electron-positron collider. After building the synchrotron at Cornell, Robert R. Wilson took a leave of absence to become the founding director of DOE’s Fermi National Accelerator Laboratory(US), which involved designing and building the largest accelerator in the United States.

    Cornell’s accelerator and high-energy physics groups are involved in the design of the proposed ILC-International Linear Collider(JP) and plan to participate in its construction and operation. The International Linear Collider(JP), to be completed in the late 2010s, will complement the CERN Large Hadron Collider(CH) and shed light on questions such as the identity of dark matter and the existence of extra dimensions.

    As part of its research work, Cornell has established several research collaborations with universities around the globe. For example, a partnership with the University of Sussex(UK) (including the Institute of Development Studies at Sussex) allows research and teaching collaboration between the two institutions.

  • richardmitnick 1:41 pm on August 1, 2021 Permalink | Reply
    Tags: "The Math of Living Things", After DNA was confirmed as the durable unit James Watson and Francis Crick found its double helix structure in 1953., , , , , Decades later the connections seen by Thompson Schrödinger and Einstein have grown., In 1944 Schrödinger published a smaller book with a different and profound effect “What is Life?” a record of his public lectures at Trinity College Dublin in 1943., In the 1940s Albert Einstein and Erwin Schrödinger-founders of relativistic and quantum physics respectively projected that tackling questions of biological importance could also enhance physics., James Watson and Francis Crick credited “What is Life?” with stimulating their work., Nautilus (US), Now new mathematical approaches give deeper views into how organisms develop their bodily structures., Physicists reported biological research at their first international meeting in 1900 and physics and math still help biologists understand living things., , Reasoning from quantum and statistical physics Schrödinger concluded that genetic data must be carried by a small and durable unit capable of wide variation ., Relating information to order and thermodynamics has special meaning in living organisms which survive grow and reproduce by maintaining their internal organization., The first major work to put physics and math into biology came much earlier. The Scottish biologist and polymath D’Arcy Wentworth Thompson published "On Growth and Form" in 1917., The Oxford English Dictionary definition of physics as the “branch of science concerned with the nature and properties of non-living matter and energy” is incomplete., Thompsons work was revised with a massive 1116 page second edition in 1942.   

    From Nautilus (US) : “The Math of Living Things” 

    From Nautilus (US)

    June 23, 2021
    Sidney Perkowitz

    Credit: Africa Studio / Shutterstock

    Exploring the intersection of physical and biological laws.

    It’s hard to argue with the famously authoritative Oxford English Dictionary, but its definition of physics as the “branch of science concerned with the nature and properties of non-living matter and energy” is incomplete, because physics studies living things as well. Physicists reported biological research at their first international meeting in 1900, and physics and math still help biologists understand living things.

    In a striking reverse connection, in the 1940s Albert Einstein and Erwin Schrödinger, founders of relativistic and quantum physics respectively, projected that tackling questions of biological importance could also enhance physics. They were right: Today researchers are exploring “information” as more than a vaguely defined idea. Instead, it has become a specific and unifying concept with deep meaning in both physics and biology.

    The first major work to put physics and math into biology came much earlier. The Scottish biologist and polymath D’Arcy Wentworth Thompson published On Growth and Form in 1917, with a massive 1,116-page second edition in 1942.[1] It explains that the structure of organisms exists “in conformity with physical and mathematical laws.” Arguing that Darwin’s natural selection is incomplete, Thompson showed how to extend the theory of evolution through physical and mathematical analysis. He explained the shapes and sizes of animals and their skeletons through the laws of mechanics, and used pure math to show how an animal’s body might develop. The book influenced scientists with its challenges to Darwinian evolution and its compelling explication of the beauties of the natural world. A recent reconsideration praises it as “provocative and inspiring.”

    Then in 1944, Schrödinger published a smaller book with a different and profound effect, What is Life?, a record of his public lectures at Trinity College, Dublin, in 1943. Schrödinger’s equation is a cornerstone of quantum theory, and quantum ideas enter as What is Life? responds to a fundamental, then unresolved question: How do organisms preserve and transmit hereditary information through generations?

    Reasoning from quantum and statistical physics, Schrödinger concluded that genetic data must be carried by a small and durable unit capable of wide variation to account for mutations in biological evolution—a molecule of around 1,000 atoms with different stable quantum configurations that encode the genetic record. After DNA was confirmed as this hereditary molecule, James Watson and Francis Crick found its double helix structure in 1953 (using Rosalind Franklin’s X-ray crystallography data) and credited What is Life? with stimulating their work. The book helped found molecular biology, and also led Schrödinger to glimpse something more. Because of the “difficulty of interpreting life by the ordinary laws of physics,” he wrote, “we must be prepared to find a new type of physical law.” This, he thought, might lie within quantum theory.

    Einstein also thought that biological research could extend physics, starting with investigations by the German-Austrian Nobel Laureate zoologist Karl von Frisch. This work established honeybees as models for animal behavior, and showed that the bees use polarized skylight to orient themselves. In 1949, Einstein noted that this last result did not open new paths in physics because polarization is a well-understood property of light.[2] But, he added, “the investigation of the behavior of migratory birds and carrier pigeons may some day lead to the understanding of some physical process which is not yet known.” Clearly he saw the value of a two-way flow between physics and biology.

    Decades later, the connections seen by Thompson, Schrödinger, and Einstein have grown. One theme in Thompson’s work is the use of pure math to understand the morphology of living things. Thompson explored this by drawing an outline of an organism on a square grid and applying a mathematical transformation such as stretching the grid in one direction. The resulting image resembled another closely related organism—the long body of a parrotfish mathematically became the curved shape of an angelfish. This suggests that an organism’s body develops along preferred directions for cell growth, although math alone does not explain what biochemical and physical processes might cause this.
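    Thompson’s grid method is, in modern terms, a linear transformation applied to outline coordinates. Here is a minimal Python sketch; the rectangular "outline" and the stretch and shear factors are invented for illustration and do not come from Thompson’s actual fish drawings.

```python
# Thompson-style transformation: relate two body forms by deforming the
# coordinate grid rather than redrawing the organism. The outline points
# and the stretch/shear factors here are invented for illustration.

def transform(points, stretch_x=1.0, shear=0.0):
    """Stretch each point along x and shear x with y."""
    return [(stretch_x * x + shear * y, y) for x, y in points]

# A crude body outline drawn on a square grid
outline = [(0, 0), (4, 0), (4, 2), (0, 2)]

# Deforming the grid yields a related, slanted form, suggesting that a
# body develops along preferred directions of cell growth.
deformed = transform(outline, stretch_x=1.5, shear=0.5)
print(deformed)  # [(0.0, 0), (6.0, 0), (7.0, 2), (1.0, 2)]
```

    The point of the exercise, as in Thompson’s drawings, is that a single simple deformation of the grid carries one plausible body plan into another.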

    Now new mathematical approaches give deeper views into how organisms develop their bodily structures.

    In 2020, physicists and biologists at the Technion – Israel Institute of Technology [ הטכניון – מכון טכנולוגי לישראל] (IL) analyzed the hydra, a fresh-water animal up to a centimeter long. Its cylindrical body has a foot that adheres to a surface, and a head with tentacles and a mouth that catches and eats prey. This creature interests biologists because a piece of its tissue can regenerate into a complete and functioning new animal. (Hydra are named after a mythological sea monster with many serpent-like heads, with the ability to grow two new heads for every one that was cut off.) Regeneration provides a kind of immortality that may have clues for human lifespans.

    REGENERATION: Scientists examined hydra, a fresh-water animal up to a centimeter long, under a microscope, and found that as it regenerated, its tissues behaved like atoms in a crystalline solid might. Credit: Rattiya Thongdumhyu / Shutterstock.

    The Technion group microscopically examined a piece of hydra tissue as it regenerated, particularly its multicellular fibers that lie parallel to the long axis of a mature hydra. The tissue first folded itself into a spheroid with its fibers forming a pattern like lines of longitude on Earth, which are parallel near the equator but sharply change orientation as they converge at the North and South poles. This is one type of topological defect, an anomaly that occurs in various forms wherever a regular geometry, like the parallel fibers in a hydra or the atomic arrangement in a crystalline solid, has its order seriously disturbed. It is called “topological” because its understanding and analysis requires topology, the branch of pure math that studies how shapes change when stretched, bent or twisted.
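    The pole-like pattern the Technion group observed can be quantified by a winding number, the topological charge a defect carries: how many full turns the fiber direction makes along a closed loop around the defect. A minimal sketch, using a textbook +1 “aster” field rather than actual hydra fiber data:

```python
# Numerically estimate the winding number (topological charge) of a planar
# vector-field defect by tracking how the field direction rotates along a
# closed loop around it. The +1 "aster" field below is a standard textbook
# example, not measured hydra fiber data.
import math

def winding_number(field, radius=1.0, samples=1000):
    """Net rotation of the field direction around a loop, in full turns."""
    total = 0.0
    prev = None
    for i in range(samples + 1):
        t = 2 * math.pi * i / samples
        vx, vy = field(radius * math.cos(t), radius * math.sin(t))
        angle = math.atan2(vy, vx)
        if prev is not None:
            d = angle - prev
            # unwrap jumps across the +/- pi branch cut
            while d > math.pi:
                d -= 2 * math.pi
            while d < -math.pi:
                d += 2 * math.pi
            total += d
        prev = angle
    return round(total / (2 * math.pi))

aster = lambda x, y: (x, y)   # fibers radiating from a point: charge +1
print(winding_number(aster))  # 1
```

    A closed surface constrains these charges: on a sphere they must sum to +2 (the hairy ball theorem), which is consistent with the two +1 defects observed on the hydra spheroid.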

    The significance of the two topological defects observed in the hydra tissue is that they define its entire body plan because they eventually become the sites of the foot and head in the new cylindrical animal. More work is needed to understand the mechanical and biochemical processes that make topological defects important, but that they mark significant changes in living matter has also just been demonstrated in colonies of bacteria as they grow, in some cases into intricate multicellular structures.

    Another approach Thompson used to great advantage was the physical one of determining how mechanical quantities such as force affect the size and behavior of organisms. He did this by dimensional analysis, which recognizes that any mechanical quantity can be expressed as a combination of the three physical fundamentals mass M, length L, and time T; for instance, velocity has the dimensions L/T, and force the dimensions ML/T^2. From these basics, Thompson showed that big fish swim faster than little ones, and that an insect cannot become monstrously huge. This is because as its size increases, its weight grows faster than the strength of its supporting legs, so a scaled-up insect would soon be unable to support itself.
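    The insect argument is a two-line computation: weight scales with volume as L^3 while leg strength scales with cross-sectional area as L^2, so the stress on the legs grows linearly with L. A sketch with arbitrary baseline units:

```python
# Thompson's scaling argument: weight grows with volume (L^3) but leg
# strength grows with cross-sectional area (L^2), so the stress on the
# legs grows linearly with size L. Units are arbitrary.

def leg_stress(scale):
    """Relative leg stress for an animal scaled up by `scale`."""
    weight = scale ** 3       # mass ~ volume ~ L^3
    strength = scale ** 2     # strength ~ cross-sectional area ~ L^2
    return weight / strength  # stress ~ L

# Scaling an insect up 100-fold multiplies the stress on its legs
# 100-fold, which is why a monstrously huge insect fails.
for s in (1, 10, 100):
    print(s, leg_stress(s))
```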

    Ken Andersen at the Center for Marine Life, Technical University of Denmark [Danmarks Tekniske Universitet](DK), is now extending dimensional analysis to describe plankton, the enormous group of organisms that is part of the ocean ecosystem. He presented this research at the workshop “On Being the Right Size” organized at Emory University in 2020 to discuss how underlying physical principles determine the size and function of living creatures. (The workshop title comes from a famous 1928 essay by the eminent British biologist J.B.S. Haldane, about the importance of size in setting the capabilities of organisms.)

    Plankton consists of tiny animals and plants drifting through the oceans. It is important in the Earth’s carbon and oxygen cycles, and in the food chain that produces a significant part of the human diet. To analyze its diversity, Andersen categorized its organisms by how they take in nutrients. For an organism that actively feeds, its rate of ingestion as it encounters food depends on its dimensional speed L/T multiplied by its cross-sectional area L^2, or L^3/T where L is a characteristic size of the organism. Some animal plankton instead passively absorb nutrients as molecules of dissolved organic matter diffuse into their bodies, which detailed physical analysis shows occurs at a rate L/T. Plants, however, make their own nutrients by photosynthesis. This requires that they gather solar energy and so depends on the organism’s surface area with the dimensional rate L^2/T, along with some nutrition by diffusion at the rate L/T.

    Andersen plotted these rates of nutrient intake against the size of the organism from 10^-4 millimeters to 1 millimeter and found that size correlates with feeding mode. Smaller organisms feed by diffusion, larger ones actively feed, and those mid-range in size tend to be plants that use photosynthesis. The relative numbers of the three types therefore depend on the level of nutrients and sunlight as they occur across the oceans; for instance, with plentiful nutrients but little light, active and diffusion-based animal feeders dominate plants. Andersen is now developing plankton simulator software based on the underlying physical ideas to provide estimates of plankton diversity and function under different ocean conditions.
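    The dimensional ordering can be sketched numerically. Only the exponents (1, 2, and 3) below come from the dimensional argument; the prefactors are invented so that each feeding mode dominates in some size band, as in Andersen’s plot:

```python
# Compare the three dimensional nutrient-uptake rates as a function of
# organism size L (in mm): diffusion ~ L, photosynthesis ~ L^2, active
# feeding ~ L^3. Only the exponents follow from dimensional analysis;
# the prefactors are invented so each mode wins in some size band.

RATES = {
    "diffusion": lambda L: 1.0 * L,             # passive uptake ~ L/T
    "photosynthesis": lambda L: 50.0 * L ** 2,  # surface capture ~ L^2/T
    "active": lambda L: 2000.0 * L ** 3,        # speed x area ~ L^3/T
}

def dominant_mode(L):
    """Feeding mode with the highest uptake rate at size L (in mm)."""
    return max(RATES, key=lambda mode: RATES[mode](L))

for L in (1e-4, 1e-3, 0.022, 0.1, 1.0):
    print(L, dominant_mode(L))
# small sizes favor diffusion, mid sizes photosynthesis, large sizes
# active feeding, reproducing the ordering in Andersen's analysis
```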

    The results for hydra and plankton extend Thompson’s analysis of whole organisms. In showing how atoms carefully arranged in a molecule could carry biological order through generations, Schrödinger’s What is Life? represents a newer approach at the molecular scale. Molecular biology has since led to other advances such as gene editing and better understanding of cellular processes.

    These successes suggest the attractive possibility of starting from molecules as basic units of biochemical life processes and building up to cells, tissues, organs and whole organisms. Such a reductionist approach seems valid in physics, where in principle elementary particles can be assembled into nuclei and atoms, which then form molecules and bigger assemblies of matter and energy up to the whole universe. Might molecules form the basis of understanding complex living things and perhaps life itself? Perhaps, but some observers think this bottom-up process is insufficient to explain higher-level biological structure and function. A prime example is the difficulty of linking our own internal consciousness, a property of the mind, to the behavior of molecules and neurons in the brain. Perhaps a different idea is needed to make the jump from molecules to complete living things.

    Schrödinger thought so when he speculated that to understand life, known physics should be supplemented by a “new type of physical law” that might come from quantum theory. Researchers have since reported some signs of quantum behavior or theorized about it in such areas as photosynthesis and olfactory response. But these results are controversial, and a convincing case for the widespread biological influence of quantum effects remains to be made.

    There is however a broad physical law that was not widely appreciated in Schrödinger’s time but is now important in physics and biology. In 1867, the Scottish mathematical physicist James Clerk Maxwell imagined a so-called “Maxwell’s demon.” This tiny being would reside in a box of gas and sort its fast and slow molecules into separate chambers. Temperature correlates with speed, so the result would be a temperature difference between the hot and cold regions that could produce useful work. In thus showing how to produce energy from pure information, Maxwell’s demon gave information physical reality. Then in the 1940s the mathematician Claude Shannon showed that the information describing a given system reflects the degree of order in the system. Thermodynamics describes order in a different way, through the quantity called entropy; Shannon’s insight gave further physical weight to information by linking it to order, entropy, and thermodynamics.
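    Shannon’s link between information and order is easy to make concrete: a sharply peaked (ordered) probability distribution has low entropy, while a uniform (maximally disordered) one has the highest possible entropy. A minimal sketch:

```python
# Shannon entropy H = -sum(p * log2(p)), a measure of disorder: an
# ordered (peaked) distribution has low entropy, while a maximally
# disordered (uniform) distribution has the highest possible entropy.
import math

def shannon_entropy(probs):
    """Entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = [0.25] * 4                # maximal disorder over 4 states
peaked = [0.97, 0.01, 0.01, 0.01]   # highly ordered

print(shannon_entropy(uniform))  # 2.0 bits
print(shannon_entropy(peaked))   # ~0.24 bits: order means less missing info
```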

    Relating information to order and thermodynamics has special meaning in living organisms, which survive, grow, and reproduce by maintaining their internal organization. This is implicit in the so-called “central dogma” of molecular biology, the statement by Francis Crick that the information stored in the DNA molecule flows to other molecular processes that make proteins and then a whole organism according to plan. Following the flow of information is therefore a way to describe the thermodynamics of entire biological systems. This opens up the study of properties that arise when the interactions among the system’s components, such as the neurons in the brain, produce new “emergent” high-level behavior.

    This more expansive approach is influencing research at the interface of physics and biology as shown at a 2018 symposium held at Trinity College to celebrate the 75th anniversary of the lectures that became What is Life? The event featured noted scientists who projected where research in new areas related to information and emergent properties, such as complex systems and the network of neurons that constitutes the brain, will take both physics and biology in years to come. Whatever those outcomes, what is surely important is the growing use of a broad approach based on information, which encompasses physical and biological science. Only such a powerful multidisciplinary, even transdisciplinary effort could hope to finally answer Schrödinger’s original question: “What is life?”


    1. Thompson, D.W. On Growth and Form: The Complete Revised Edition. Dover, Mineola, NY (1992).

    2. Dyer, A.G., et al. Einstein, von Frisch and the honeybee: a historical letter comes to light. Journal of Comparative Physiology A (2021).

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to Nautilus (US). We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

  • richardmitnick 11:35 am on July 25, 2021 Permalink | Reply
    Tags: "How Much Should Expectation Drive Science?", , , , , Nautilus (US), ,   

    From Nautilus (US) : “How Much Should Expectation Drive Science?” 

    From Nautilus (US)

    Dark Matter March 2017

    Credit: Caleb Perkins / EyeEm

    Answers to the biggest mysteries may lie well outside traditional paradigms.

    Claudia Geib

    In December 2015, particle physicists were buzzing with excitement: The Standard Model—which has dominated physics for 40 years and defines the basic constituents of matter and how they interact—had a new challenger.

    At the Large Hadron Collider in Switzerland, physicists announced evidence of what appeared to be a new particle.

    Known colloquially as the “diphoton bump,” the new particle promised to upend the Standard Model, which doesn’t predict its existence. It also opened the door to the possibility of solving long-unanswered puzzles, including the nature of Dark Matter.

    About a month later, another potential challenger emerged. A group of nuclear physicists out of the Hungarian Academy of Sciences [Magyar Tudományos Akadémia] (HU)’s Institute for Nuclear Research published a paper on an anomaly detected in the decay of beryllium-8. They proposed that the irregularity could be the signature of a “dark photon,” one of the force carriers thought to dictate the action of dark matter.

    On the surface, neither of these experiments might seem more viable than the other. Yet the response to each could not have been more different. While hundreds of theoretical papers were published on the “diphoton bump,” almost none followed the Hungarian paper.

    For almost eight months, the Hungarian paper remained in obscurity, until a group from the University of California-Irvine (US) posted their own theory about the anomaly detected in the decaying beryllium-8. Though their analysis didn’t support the suggestion that the anomaly was a dark photon, they thought it would fit the behavior of a new type of light, neutral boson—the class of particle that includes both photons and force-conveying particles like the Higgs boson. Yet unlike other force-conveying particles, their proposed particle interacted only with neutrons, and not with protons and electrons. Appropriately, they called it a “protophobic X boson.”

    As the UC-Irvine team gave the Hungarian experiment visibility, the community began to respond. Yet unlike the physicists of the Large Hadron Collider experiment, who were met with enthusiasm, the Hungarian scientists mostly faced skepticism.

    What caused such a dichotomy? Though many factors have since been cited, including the reputation of the Hungarian scientists, it seems the true difference between these two experiments comes down to something subtler: Expectation, and divergence from those expectations.

    In the sciences, new ideas are often judged for how far they lie outside of the systems that scaffold our understanding of the world—systems that are not only scientific, but also social. But when it comes to solving our most persistent mysteries in physics, like the composition of dark matter—which has so far resisted all attempts at elucidation by traditional physics—claims from outside this paradigm may be vital.

    The first strike against the beryllium-8 experiments is a scientific one: The model of the universe that it requires also needs physics well outside what is predicted by the Standard Model.

    Though the team from UC-Irvine found that the Hungarians’ data did not contradict any existing experiments, the model they proposed to explain the new particle needed to be an intricate one. After all, the scientists had to explain why this new particle would not have shown up in years of previous experiments. They suggested a particle that interacts with neutrons, but not protons, and which experiences a hitherto unknown force with a range about 12 times the size of a proton. It is this model with which many scientists take issue.

    “The question is, why would nature choose such a complicated model just to explain this phenomenon?” said Rouven Essig, a professor of physics at Stony Brook University-SUNY (US)‘s C.N. Yang Institute for Theoretical Physics.

    “We’ve got such a beautiful, nicely consistent theory in the Standard Model,” Essig said. “If something comes along that doesn’t fit any of that, and perhaps requires a unique, intricate new model to make it fit with anything, then that’s when it makes us very skeptical.”

    There are other scientific aspects of the beryllium-8 experiment that can, and have, been raised as concerns. The Hungarian experimenters work mostly in nuclear physics, not particle physics; their detection came on a single small device, one many orders of magnitude less sensitive than the two massive, top-of-the-line detectors—ATLAS and CMS—which double-check every discovery in the Large Hadron Collider.

    The group also published two previous papers with similar claims of new particles, including a 2008 claim of a potential 12-MeV particle and a 2012 claim of a 13.45-MeV particle. Indeed, the Atomki researchers seem to have a penchant for publishing only papers with anomalies from their beryllium-8 experiments, a pattern some see as potential confirmation bias.

    These concerns play into the second strike against the beryllium-8 experiment. However, this second strike is more sociological than scientific.

    “Where things get a little less defensible is that some of [the community’s skepticism] has to do with the fact that this is not a place where we thought physics was going to show up,” explains Tim Tait, a theoretical physicist from UC-Irvine, and a member of the group that brought the Hungarian experiment out from obscurity with their own theory. He says that this assessment applies both to the laboratory the proposed particle came out of, and the category it falls into—being a lighter-mass particle, physicists would expect to have observed it already in previous experiments. Neither is what the community would have anticipated as the source of new physics.

    The Band of Outsiders: If the theory of these Hungary-based physicists is true, it could upend the Standard Model. Credit: Attila Krasznahorkay.

    Tait says that when translated from a personal to a community level, these expectations are what lead to certain types of experiments, or certain places, being perceived as inherently more trustworthy than others. The diphoton bump, he says, is an almost textbook example of this.

    The fluctuation that later became known as the diphoton bump was measured after a renovation, one that allowed the Large Hadron Collider to run at higher energies than before—the sort of energies that physicists expect to produce new particles. A particle that sat well outside of the Standard Model apparently did not seem so dubious in a program run by well-known physicists, in the very same facility where the Higgs Boson had been discovered not too long before—in short, a place where new particles were expected to appear.

    “Lots of people came up with reasons that the diphoton excess was exciting, even though they knew it wasn’t statistically significant,” Tait said. “I think a lot of it comes down to the fact that these experiments at the LHC are very familiar territory. We trust these guys, we think they know what they’re doing, and their detectors are very sophisticated.”

    It makes sense that scientists would rely on heuristics, like reputation and the standard theories of the field, as provisional rules of thumb. In judging a new experiment, scientists have only the experimenter and their experiment to go on. And while experimental data is often viewed as the ultimate arbiter between theory and fact, experiments themselves are not infallible. As sociologist of science Harry Collins laid out in his theory of the “experimenter’s regress,” facts can only be produced by good instruments, but good instruments are only considered such if they produce facts.

    A long-standing reputation for doing work that has been replicated, or a set of data that easily slides into what is already understood, provides extra reassurance that this regress (or other potential distortions) played no part in a new idea.

    Yet clearly these heuristics are imperfect. Exhibit A is the diphoton excess: Even with its impeccable pedigree, the new particle turned out to be an anomaly, and all the attention it received was for naught. Meanwhile, despite general skepticism around the unknown team out of Hungary, no significant objections have yet been raised about the team’s experimental results.

    Take another example, this time from cosmology. In March 2014, a team working on the BICEP2 telescope, led by a researcher from Harvard–Smithsonian Center for Astrophysics (US), announced that they had detected polarization in the cosmic microwave background [CMB].

    This polarization would signal the existence of gravitational waves created by the rapid expansion of the early universe. It was a discovery that had been long anticipated: It was predicted by the inflationary theory of the universe, which, like the Standard Model, guides cosmologists’ understanding of their field, but which had never been directly confirmed.


    Alan Guth, from Highland Park High School and M.I.T., who first proposed cosmic inflation. Credit: HPHS Owls.

    Lambda Cold Dark Matter (ΛCDM) accelerated expansion of the universe. Credit: Alex Mittelmann, Coldcreation.

    Alan Guth’s original notes on inflation.


    Thus, the announcement was received enthusiastically at first. It fit with what was expected and who was expected to find it, and was widely accepted at face value. Over the course of the next year, however, it was proven that the signal could be attributed to cosmic dust.

    In the face of these dilemmas, the question is: Are there more effective heuristics for judging new results?

    A case could certainly be made for “no.” After all, it’s difficult to conceive of other criteria for judging new physics at face value. Being more receptive to outsider experimenters and theories, and diligently following up on their claims, would flood the field and eat up valuable time needed to investigate the most promising leads. The above incidents are also natural failures of the sort we should expect from such a huge and unfathomable field of science.

    What’s more, those who study the history of science see it as unlikely that such a drastic change in culture will occur.

    “The way in which a community behaves is constructed over a long social process, made by power structures, years of training, reward systems, rules of competition and collaboration between and within different groups,” says Roberto Lalli, research scholar at the Max Planck Institute for the History of Science. He says that history has shown that subcultures within physics—such as theoretical or particle physics—are relatively stable, and that it’s likely that places like CERN and ideas within the paradigm will continue to be considered the most plausible.

    “This attitude is not only due to authority bias, but also has to do with first-hand knowledge of the internal reviewing systems within experimental groups,” Lalli said. “This creates a system of trust, which will not change in a sudden way.” Social pressures, like the continual fight for funding and university positions, also make communities more unwilling to accept those from outside the mainstream.

    But a case still can, and should, be made for seeking new standards for the system. A sterling reputation can be hard to come by in a digital world, where obtaining visibility can be like shouting over a million voices, and the difficulty of the academic job market has spread talent widely beyond the most well-known institutions. Additionally, outsider ideas can help break the echo chamber that comes of only speaking to those within a relatively closed community.

    Indeed, one of the most groundbreaking physicists in history could be framed as an outsider from the outset.

    Albert Einstein was a low-level employee at the Swiss Patent Office when he proposed his special theory of relativity and his photon hypothesis (which theorized that light consists of individual particles, or quanta) in 1905. Though the social structures of physics were vastly different in the early 20th century, Einstein’s trajectory suggests that today’s outsider ideas may simply need time to pass before they can be accepted.

    “The way in which [Einstein’s ideas] became part of the new paradigmatic framework was not rapid,” says Lalli. “It took years and a lot of work, of reformulation of previous knowledge, to fully understand the radical physical implications contained in the new theories.” For example, Einstein’s photon hypothesis was nearly universally rejected at first, and was only accepted in the late 1920s after the discovery of the Compton effect.

    For the same sort of thing to happen today, Lalli says it “might not necessarily involve a change in culture. Rather, new ideas coming from unexpected places would gradually be included in the mainstream culture.”

    As for the trusted standard of the Standard Model, physicists readily acknowledge that there’s much outside the current theories that we likely do not know. As Essig put it: “We often judge new physics models by how well they can explain phenomena and how simple they are, but Nature may or may not care about our taste.”

    This also wouldn’t be the first time that nature was revealed to be much more complex than humans expected. In the beginning of the 20th century, when scientists first began experimenting on the scale of the atom, they saw particles behaving with zero regard for the laws of physics they understood. The birth of quantum physics required scientists in the field to rethink everything they knew about the laws of the universe—in essence, to throw out their textbooks.

    A new culture of particle physics as a field of small experiments from outsider physicists, as well as huge ones from trusted groups, would not take such a dramatic transformation. It would perhaps require just the writing of a few new chapters.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to Nautilus (US). We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

  • richardmitnick 8:38 am on July 18, 2021 Permalink | Reply
    Tags: "Astronomers Find Secret Planet-Making Ingredient- Magnetic Fields", , , , , Nautilus (US), , ,   

    From Nautilus (US): “Astronomers Find Secret Planet-Making Ingredient- Magnetic Fields”

    From Nautilus (US)

    Robin George Andrews

    Supercomputer simulations that include magnetic fields can readily form midsize planets, seen here as red dots. Credit: Hongping Deng et al.

    Scientists have long struggled to understand how common planets form. A new supercomputer simulation shows that the missing ingredient may be magnetism.

    We like to think of ourselves as unique. That conceit may even be true when it comes to our cosmic neighborhood: Despite the fact that planets between the sizes of Earth and Neptune appear to be the most common in the cosmos, no such intermediate-mass planets can be found in the solar system.

    The problem is, our best theories of planet formation—cast as they are from the molds of what we observe in our own backyard—haven’t been sufficient to truly explain how planets form. One study, however, published in Nature Astronomy in February 2021, demonstrates that by taking magnetism into account, astronomers may be able to explain the striking diversity of planets orbiting alien stars.

    It’s too early to tell if magnetism is the key missing ingredient in our planet-formation models, but the new work is nevertheless “a very cool new result,” said Anders Johansen, a planetary scientist at the University of Copenhagen [Københavns Universitet](DK) who was not involved with the work.

    Until recently, gravity has been the star of the show. In the most commonly cited theory for how planets form, known as core accretion, hefty rocks orbiting a young sun violently collide over and over again, attaching to one another and growing larger over time. They eventually create objects with enough gravity to scoop up ever more material—first becoming a small planetesimal, then a larger protoplanet, then perhaps a full-blown planet.

    Yet gravity does not act alone. The star constantly blows out radiation and winds that push material out into space. Rocky materials are harder to expel, so they coalesce nearer the sun into rocky planets. But the radiation blasts more easily vaporized elements and compounds—various ices, hydrogen, helium and other light elements—out into the distant frontiers of the star system, where they form gas giants such as Jupiter and Saturn and ice giants like Uranus and Neptune.

    But a key problem with this idea is that for most would-be planetary systems, the winds spoil the party. The dust and gas needed to make a gas giant get blown out faster than a hefty, gassy world can form. Within just a few million years, this matter either tumbles into the host star or gets pushed out by those stellar winds into deep, inaccessible space.

    For some time now, scientists have suspected that magnetism may also play a role. What, specifically, magnetic fields do has remained unclear, partly because of the difficulty in including magnetic fields alongside gravity in the computer models used to investigate planet formation. In astronomy, said Meredith MacGregor, an astronomer at the University of Colorado-Boulder (US), there’s a common refrain: “We don’t bring up magnetic fields, because they’re difficult.”

    And yet magnetic fields are commonplace around planetesimals and protoplanets, coming either from the star itself or from the movement of starlight-washed gas and dust. In general terms, astronomers know that magnetic fields may be able to protect nascent planets from a star’s wind, or perhaps stir up the disk and move planet-making material about. “We’ve known for a long time that magnetic fields can be used as a shield and be used to disrupt things,” said Zoë Leinhardt, a planetary scientist at the University of Bristol (UK) who was not involved with the work. But details have been lacking, and the physics of magnetic fields at this scale are poorly understood.

    “It’s hard enough to model the gravity of these disks in high enough resolution and to understand what’s going on,” said Ravit Helled, a planetary scientist at the University of Zürich[Universität Zürich](CH). Adding magnetic fields is a significantly larger challenge.

    In the new work, Helled, along with her Zurich colleague Lucio Mayer and Hongping Deng of the University of Cambridge (UK), used the Piz Daint supercomputer, the fastest in Europe, to run extremely high-resolution simulations that incorporated magnetic fields alongside gravity.

    Magnetism seems to have three key effects. First, magnetic fields shield certain clumps of gas—those that may grow up to be smaller planets—from the destructive influence of stellar radiation. In addition, those magnetic cocoons also slow down the growth of what would have become supermassive planets. The magnetic pressure pushing out into space “stops the infalling of new matter,” said Mayer, “maybe not completely, but it reduces it a lot.”

    The third apparent effect is both destructive and creative. Magnetic fields can stir gas up. In some cases, this influence disintegrates protoplanetary clumps. In others, it pushes gas closer together, which encourages clumping.

    Taken together, these influences seem to result in a larger number of smaller worlds, and fewer giants. And while these simulations only examined the formation of gassy worlds, in reality those protoplanets can accrete solid material too, perhaps becoming rocky worlds instead.

    Altogether, these simulations hint that magnetism may be partly responsible for the abundance of intermediate-mass exoplanets out there, whether they are smaller Neptunes or larger Earths.

    “I like their results; I think it shows promise,” said Leinhardt. But even though the researchers had a supercomputer on their side, the resolution of individual worlds remains fuzzy. At this stage, we can’t be totally sure what is happening with magnetic fields on a protoplanetary scale. “This is more a proof of concept, that they can do this, they can marry the gravity and the magnetic fields to do something very interesting that I haven’t seen before.”

    The researchers don’t claim that magnetism is the arbiter of the fate of all worlds. Instead, magnetism is just another ingredient in the planet-forming potpourri. In some cases, it may be important; in others, not so much. Which fits, once you consider the billions upon billions of individual planets out there in our own galaxy alone. “That’s what makes the field so exciting and lively,” said Helled: There is never, nor will there ever be, a lack of astronomical curiosities to explore and understand.

    See the full article here .



  • richardmitnick 10:02 pm on July 8, 2021 Permalink | Reply
    Tags: "The Billion-Dollar Telescope Race", , , , ESO ELT, , , Nautilus (US), ,   

    From Nautilus (US): “The Billion-Dollar Telescope Race”

    From Nautilus (US)

    March 13, 2014 [Re-issued 7.7.21]
    Mark Anderson

    How three groups are competing to make the first extremely large telescope.

    When Warner Brothers animators wanted to include cutting-edge astronomy in a 1952 Bugs Bunny cartoon,[1] they set a scene at an observatory that looks like Palomar Observatory in California.

    The then-newly unveiled Hale Telescope, stationed at Palomar, had a 5-meter-diameter mirror, the world’s largest. In 1989, when cartoonist Bill Watterson included a mention of the world’s largest telescope in a “Calvin and Hobbes” cartoon,[2] he again set the action at Palomar. Although computers had grown a million times faster during those 38 years, and eight different particle colliders had been built and competed for their field’s top ranking, astronomy’s king of the hill stayed perched on its throne.

    This changed in 1992, with the introduction of the Keck telescope and its compound, 10-meter mirror.

    About a dozen 8- to 10-meter telescopes have been built since.

    But it has been more than 20 years since this last quantum leap in telescope technology. Now, finally, the next generation is coming. Three telescopes are on their way, and the race among them has already begun.

    Three new observatories are on the drawing boards, all with diameters, or apertures,[3] between 25 and 40 meters, and all expected to collect first light in 2022: the Giant Magellan Telescope (GMT, headquartered in Pasadena, Calif.); the Thirty Meter Telescope (TMT, also in Pasadena); and the European Extremely Large Telescope (E-ELT, headquartered in Garching, Germany). At stake are the mapping of asteroids, dwarf planets, and planetary moons in our solar system; imaging whole planetary systems; observing up close the Goliath of a black hole at the Milky Way’s core; discovering the detailed laws governing star and galaxy formation; and taking baby pictures of the farthest objects in the early universe.

    Thanks to these telescopes, astronomy is poised to reinvent itself over the next few decades. Renown and glory, headlines and prestige, and perhaps a few Nobel Prizes too, will go to those astronomers who first reveal a bit of new cosmic machinery. Surprisingly, the story of this race will be written not just in the technical specifications and design breakthroughs of the instruments themselves, but also in the organizational approaches that each team has taken. The horse race is a unique window both into technology and into the process of science itself.

    In the 50 years following its 1949 construction, the telescope that came closest to beating the performance of the Palomar Observatory was the Bolshoi Teleskop Alt-azimutalnyi (BTA-6), a Soviet telescope that used a 6-meter mirror and was christened in 1975.

    BTA-6 [Большой Телескоп Альт-азимутальный] Large Altazimuth Telescope, a 6-metre (20 ft) aperture optical telescope at the Special Astrophysical Observatory located in the Zelenchuksky District on the north side of the Caucasus Mountains in southern Russia.

    But the BTA-6 only proved how difficult it is to build and operate single-mirror telescopes larger than 5 meters. Its mirror was so elephantine that it cracked under its own weight, and the heat from the light it collected destabilized its sensitive optics. As a result, for productive astronomical observatories, Palomar remained the world’s most powerful until 1992, when the 10-meter W.M. Keck Observatory telescope in Hawaii first opened its eyes.

    Keck’s history began with a single American physicist called Jerry Nelson, an upstart scientist at the DOE’s Lawrence Berkeley National Laboratory. In 1977, conventional wisdom held that a 10-meter instrument, just as subject to gravity’s warp as BTA-6, was extremely impractical if not downright impossible. Nelson’s innovation was not to rely on one mirror but rather on a honeycomb-like structure of small, hexagonal mirror segments. Each flexible and lightweight mirror would be independently mounted and had its own curvature unique to its placement in the array. A mirror in the center would be curved upward on all six sides. A mirror placed off-axis to the right would curve down on its left edges and up on its right. The sum total structure of hexagonal mirrors would be a meta-mirror that behaved exactly like a curved single mirror.

    Nelson’s design was made more complex by the fact that as the telescope’s body rotated, each mirror needed to be adjusted on the fly by arrays of computerized screws and flywheels that nudged the mirrors so as to compensate for gravity’s pull [Active Optics].

    “I remember when Jerry Nelson used to give these talks,” says Michael Bolte, TMT member and astronomy professor at the University of California-Santa Cruz (US). “Everybody thought he was completely nuts. They thought, if you get out in the real world where the wind blows and gravity vectors change and humidity changes, surely this would never work.” Even today, astronomers’ skepticism seems well warranted. To operate a 10-meter telescope using Nelson’s design required continual monitoring and adjustment of each mirror segment’s position to within a few billionths of a meter.

    On top of that, Keck later implemented the further innovation of using another array of computers to monitor minute disturbances in the atmosphere above the telescope. Then an additional, deformable mirror down the line could compensate for the tiny thermal wiggles that the atmosphere introduces to a star’s image.

    In other words, Nelson hoped to design a telescope that could “subtract” off the influence of the earth’s atmosphere, all but launching his instrument into space without ever lifting it off the ground. (Such adaptive optics are being used in all three next-generation telescope projects.)
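The adjustment scheme described above is, at heart, a feedback loop: measure each segment's position error, command an actuator to cancel most of it, and repeat until the residual sits within the nanometer tolerance. A minimal Python sketch of one such loop, with a hypothetical gain and starting error (this is an illustration of proportional control, not Keck's actual control system):

```python
def control_step(position_err_nm: float, gain: float = 0.5) -> float:
    """One proportional-control update: cancel a fraction `gain` of the
    measured segment-position error (in nanometers). The gain value and
    the noise-free measurement are simplifying assumptions."""
    return position_err_nm - gain * position_err_nm

err = 100.0  # segment starts 100 nm out of alignment
for step in range(10):
    err = control_step(err)
print(f"residual error after 10 updates: {err:.2f} nm")  # prints "residual error after 10 updates: 0.10 nm"
```

With each pass the error shrinks geometrically, which is why a continually running loop can hold hundreds of flexing segments within a few billionths of a meter even as wind, gravity, and temperature keep perturbing them.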

    No wonder, then, that many leading astronomers in the 1980s and early 1990s had written off Nelson’s scheme. A 1993 Los Angeles Times profile of Nelson, for instance, quotes an anonymous source it describes as “one of the nation’s top telescope designers.” The anonymous source rated Nelson an “arrogant fool” and predicted that the W.M. Keck Observatory’s $200 million price tag would ultimately just be money down the drain.

    Yet when in 1992 the Keck telescope—followed by its cousin Keck II in 1996—instead delivered on its designers’ promise of ushering in a new era of 10-meter class astronomy, other observatories around the world were caught by surprise.

    “These problems you’d been working on your whole career, after one night on Keck, you’d have all the data you’d need,” Bolte says. “We were actually unpopular with much of the world. And there were many people who, when we started thinking about a 30-meter telescope, swore they’d never get ‘Keck’ed again.”

    The history of the Keck design continues to color the field. One of the three teams, TMT, has directly inherited Keck’s design and many of its team members, including Nelson. TMT will also share mountaintop space with Keck, on the dormant Mauna Kea volcano in Hawaii. Its new design is an extension of Keck’s segmented hexagonal mirror to the 30-meter scale. TMT’s Bolte adds, with not more than the tiniest amount of relish, that the competing E-ELT team developed a similar plan to the TMT/Keck’s, even without any legacy or institutional inertia pushing it toward one telescope design or another.

    “I don’t want to sound like I’m criticizing anybody here,” he says. “But I think if you were really going to design a telescope from scratch, a 25 to 30 meter telescope, you’d almost certainly pick the TMT design over the GMT design. That is, small segments with very tiny gaps. As evidence for that, the Europeans could have done whatever they wanted. They had a clean slate… They did the cost benefit analysis and concluded that a telescope very much like the TMT was the way to build.”

    In fact, TMT and E-ELT’s mirror segments are exactly the same size scale, 1.44 meters. (They’re not interchangeable, though, as each mirror has a different curve and warp.) Asked why his team picked the same mirror component size as TMT, Tim de Zeeuw, [then] director general of the European Southern Observatory (ESO), noted that “there is … no formal intention to collaborate on the production of segments, but since the sizes are the same it is however also not impossible.” The TMT will use its 492 hexagonal mirrors to create an effective 30-meter aperture. E-ELT, to be sited on a mountaintop observatory in the Atacama Desert near Antofagasta, Chile, will use 798 hexagonal mirror segments to create an effective telescope size of 39 meters.

    The GMT telescope, by contrast, uses seven circular 8.4-meter mirrors that all reflect into a central convex mirror suspended above the primary mirror. The seven-mirror structure, to be situated on a mountaintop observatory near La Serena, Chile, together create a meta-mirror with a resolving power equivalent to that of a 24.5 meter single-mirror telescope. The segments (of which [then] one has been completed and two more are being manufactured) use a lightweight honeycomb design that overcomes the 6-meter limit that BTA-6 famously encountered. The University of Arizona (US), a GMT partner institution, is making the mirrors in its U Arizona Steward Observatory Mirror Lab (US), located beneath the university’s football stadium.
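These quoted sizes can be sanity-checked with a back-of-the-envelope comparison of collecting areas. A Python sketch, assuming 1.44 meters is the flat-to-flat segment width (the article does not specify) and ignoring segment gaps and central obscuration:

```python
import math

def hexagon_area(flat_to_flat_m: float) -> float:
    """Area of a regular hexagon with flat-to-flat width d: (sqrt(3)/2) * d^2."""
    return (math.sqrt(3) / 2) * flat_to_flat_m ** 2

def equivalent_diameter(total_area_m2: float) -> float:
    """Diameter of a single circular mirror with the same collecting area."""
    return 2 * math.sqrt(total_area_m2 / math.pi)

# Segment counts and sizes as given in the article; flat-to-flat width
# for the hexagons is an assumption.
apertures = {
    "TMT":   492 * hexagon_area(1.44),
    "E-ELT": 798 * hexagon_area(1.44),
    "GMT":   7 * math.pi * (8.4 / 2) ** 2,
}
for name, area in apertures.items():
    print(f"{name}: {area:5.0f} m^2, like a single {equivalent_diameter(area):.1f} m mirror")
```

The toy numbers land in the right neighborhood of the quoted 30-, 39-, and 24.5-meter figures, though note that the GMT's 24.5 meters describes resolving power rather than collecting area, so the two comparisons are not exactly the same.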

    “Completing the first mirror segment was a very significant milestone for us,” says Charles Alcock, director of the Harvard-Smithsonian Center for Astrophysics (US) and member of the GMT board. “It has a very complicated shape, since it’s an off-axis segment it’s not symmetrical about its center. And it’s being polished to an accuracy of 19 nanometers. So it is the best large optical surface ever created in human history.”

    Roger Angel, professor of astronomy and optics at the University of Arizona, was the GMT’s chief architect and intellectual forebear. Alcock notes that although Keck was the first 10-meter class telescope, there are other telescopes—including the Magellan Telescopes in Chile (distinct from the Giant Magellan Telescope) and the Multiple Mirror Telescope and Large Binocular Telescope in Arizona—that do not use the Keck design.

    “The TMT is a direct successor to the Kecks, but with 492 segments, up from 36, it is a significantly different design,” Alcock says. “The GMT design … has as much heritage as—arguably more than—the TMT design.”

    With so much hard science in the balance, one might think that the varying designs of the three competing telescopes would decide which is first past the post. But there is a more prosaic aspect to the competition: Securing partners. This boils down to a kind of musical chairs of international corporations, institutes, and national organizations. “Everybody in our world knows who the potential partners are,” says Alcock. “If we’re talking with somebody, you just know that TMT has probably had some contact with them. I think it’s unlikely that any individual potential partner would join both projects. It’s high stakes in that regard.”

    “The GMT realized very early on that they needed to find some more partners to fund their telescope, so we were all running around the world doing the same thing,” says TMT’s Bolte. “We’d show up at the airport in Beijing just as somebody from the E-ELT was leaving. Or we’d run into [GMT leader] Wendy Freedman in the airport in Tokyo. We were all talking up our projects to all of these countries. I don’t know for sure how they made their choices. But I’m really pleased that we got some of the major players to select our project given the choice of all three.”

    A major win for TMT was China, a country whose economic size and scientific stature meant that each telescope’s officials watched its courtship maneuvers closely. China had considered making its own 30-meter class telescope, but the country doesn’t have mountaintop sites that boast the astronomically perfect conditions of the dry Chilean mountains or the Mauna Kea summit. Shude Mao of the National Astronomical Observatories of China at Chinese Academy of Sciences [中国科学院](CN) in Beijing now sits on the TMT board. He says Keck’s impressive track record was an important factor in swaying the world’s second-largest economy toward TMT.

    China’s decision also reveals the different kinds of organizational structures at play in each of the three competing teams. The E-ELT has a European model of national-level cooperation. Joining the E-ELT requires membership in the European Southern Observatory [Observatoire européen austral][Europäische Südsternwarte] (EU) (CL) and a pledge of a small fixed percentage of a country’s gross domestic product (GDP) toward the ESO budget. This would have made membership expensive for China, whose GDP in 2013 was $9 trillion.

    Both GMT and TMT have a trim and more corporate American-style organization that is part institutional, part national partnerships. GMT was less attractive to China, however, because it mandates cash contributions. By contrast, Mao says, 70 percent of China’s contribution to TMT in its earliest stages can be “in kind.” This means China may be required to manufacture and then donate a spectrometer or a certain number of the TMT’s 492 mirror segments. But China could then also do this work in-country, stimulating its own industries. “That is extremely important to us,” Mao says.

    By contrast, says Brian Schmidt of the Australian National University (AU), GMT downplays in kind contributions for a good reason. Schmidt, a Nobel Laureate astronomer and a leader in GMT’s effort to sign on Australia, explains that GMT awards its contracts only on the basis of scientific and technical merit. It plays no favorites in awarding its work orders. “It’s a real minimalist structure that’s focused on really getting the thing done,” Schmidt says of the GMT organization.

    Other big countrywide “gets” round out the early tally. As of early 2014, these have been Brazil signing on to ESO (and thus E-ELT), although this still requires ratification by the Brazilian Parliament; South Korea and Australia putting their weight behind GMT; and China, India, and Japan backing TMT.

    The resulting three-way race, as ESO public information officer Richard Hook describes it, is a kind of portmanteau of cooperation and competition. “You could call the situation ‘Co-opetition,’ ” he says.

    For each of the three telescope projects, much of the industrial work and the projected completion dates are well-guarded trade secrets. None of the three telescopes’ websites gives an exact completion date, saying only that each is likely to be built and doing actual science by 2022. But each community clearly has a common goal in mind: to be first.

    “I’m really hoping we’re still going to be first,” Bolte says of TMT. “We would have liked to have started building this telescope five years ago. I think technically we were ready to do that. What we didn’t have is our partnerships put together.”

    Says ESO’s Hook of his group’s E-ELT, “Yes, the scientific community that we serve is of course keen to be first.” But he goes on to add, “It is certainly possible to overstate the level of competition. All three will be general-purpose telescopes with long lives. They are not focused on one result.”

    In fact, every official from the three telescopes that Nautilus spoke to for this story was careful to couch their assessments of the competition in collegial terms. More than once it was expressed that they didn’t want to appear sniping or derogatory toward telescopes that, in all likelihood, will be as much collaborators as rivals. No one seemed to want to provide justifiable cause for bad blood. But that each team is also in a race to the finish was plainly obvious.

    Regardless of which team wins, should all three telescopes be built—and no expert consulted for this story predicted any other outcome—they will likely propel astronomy forward as at no other time in modern memory. The only precedent within the professional lifetimes of astronomers working today would be Keck’s debut in 1992. Just what new windows on the universe this trio of extraordinary scientific instruments may open remains anyone’s guess.

    “Our experience with previous generations of telescopes is that people do carry out the science programs that led them to build the telescopes,” says Alcock. “But frequently the most exciting science is something that nobody was thinking about. Something entirely unanticipated.”



    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to Nautilus (US). We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

  • richardmitnick 8:34 pm on July 8, 2021
    Tags: "The Planets with the Giant Diamonds Inside", , Nautilus (US),   

    From Johns Hopkins University Applied Physics Lab via Nautilus (US): “The Planets with the Giant Diamonds Inside”

    Johns Hopkins University

    From Johns Hopkins University Applied Physics Lab


    Nautilus (US)

    July 7, 2021
    Corey S. Powell

    Mining the mysteries of Uranus and Neptune.

    Tilt: NASA Voyager’s instruments showed that Uranus’ magnetic field is tilted 60 degrees relative to its axis, as if your compass needle pointed to Houston instead of the north pole. This image shows the magnetic field. The yellow arrow points to the sun, the light blue arrow marks Uranus’ magnetic axis, and the dark blue arrow marks Uranus’ rotation axis. Credit: Tom Bridgman/NASA Goddard Scientific Visualization Studio (US)

    On the dark night of March 13, 1781, William Herschel settled down in his garden observatory in Bath, England, for a routine night of observing stars, when he noticed something out of place in the heavens. Through the eyepiece of his homemade 7-foot telescope, he spied an interloper in the constellation Gemini: “a curious, either nebulous star or perhaps a comet,” as he recorded it. For weeks, he stalked the unknown object, monitoring its steady appearance and circular path around the sun until there could be no doubt about its true identity. He had discovered not a comet but a new planet, far more distant than any of the others.

    Stormy Weather: These images of Uranus, taken by the Keck II telescope in Hawaii, are the sharpest, most detailed pictures of the planet to date, according to NASA. The north pole (to the right) is characterized by a swarm of storm-like features, and an unusual scalloped pattern of clouds encircles the planet’s equator. Credit: Lawrence Sromovsky, Pat Fry, Heidi Hammel, Imke de Pater/University of Wisconsin-Madison (US).

    Being a politically astute fellow, Herschel proposed naming the planet Georgium Sidus, or “George’s star,” in honor of King George III. The ploy worked—he promptly was named the king’s astronomer and received a royal stipend—but his colleagues outside of England objected. They wanted a noble and politically neutral name like Urania, the Greek muse of astronomy. In the end, scientists settled on the even more dignified “Uranus,” the ancient Greek god of the sky and ancestor of the other deities. Centuries of snickering ensued.

    But seriously. Uranus orbits the sun at twice the distance of Saturn, so Herschel’s discovery instantly doubled the size of the known solar system. From a modern perspective, it’s hard to appreciate how shocking that was. At the time, the solar system was the only charted region of space; nobody yet had a clue about how far away even the nearest stars were. In effect, Herschel had doubled the size of the entire known universe. He also brushed away the final traces of classical astronomy and astrology. Uranus is typically described as the first planet discovered since antiquity, but it’s more accurate to say it was the first planet to be discovered, period. All the others are readily visible to the naked eye, and so were known to all. Uranus shattered the common assumption that there were no more planets beyond the six classical ones, establishing an endless-frontier ethos that resonates through science and science fiction to this day.

    That ethos is part of daily life for Kirby Runyon, a young geomorphologist at Johns Hopkins University’s Applied Physics Lab, who is developing new ways to study Uranus and its similar-but-bizarrely-different planetary sister, Neptune. Like the handful of others who study this distant duo, he is enthralled by the boundary-busting nature of the solar system’s outermost planets. “What brought me into space science as a professional was the chance to, as Star Trek says, ‘explore strange, new worlds’,” Runyon says. “If you like seat-of-your-pants, Captain Picard-style exploration, then Uranus and Neptune have to rank high in your list.”

    You have to admire Runyon’s passion. After all, who dreams of a space voyage to Uranus or Neptune? They are not brightly ringed like Saturn, nor do they hold the prospect of life like Mars. The two planets, though, still hold a special status as worlds on the edge. They formed in chaos, at the boundary between the inner, planetary part of our solar system and the outer zone filled with far-flung comets. In this transitional zone, they also took on transitional forms, with a size and composition that places them halfway between gas giant planets like Jupiter and rocky planets like Earth. Astronomers call these in-betweeners “ice giants,” and are now finding that such midsize worlds are extremely common around other stars. “Neptune and Uranus are the closest analogs in our solar system to the most populous type of planet that we know of,” says Heidi Hammel, a veteran researcher of the outer planets who is now at the Association of Universities for Research in Astronomy (US) in Washington, D.C.

    Uranus and Neptune are also fascinatingly odd in themselves. Their cloudy surfaces are marked with raging storms and the fastest winds recorded on any planet, while high above they have complex systems of moons, including ones that may harbor buried oceans. All the more shame, then, that only a single spacecraft has ever visited them—and that was more than three decades ago. “They’re enigmas because they are so far away,” Hammel says wistfully, “but they are such intriguing enigmas.”

    Long before anyone was poking rockets above Earth’s atmosphere, Uranus was already directing astronomers on a virtual voyage through the solar system and beyond. In the decades after Herschel’s discovery, observations of Uranus indicated that it was deviating from its expected orbit around the sun. By the 1820s, the discrepancy was undeniable: Either Newton’s theory of gravity was wrong, or there was some object beyond Uranus that was tugging it off course. “The Newtonian theory appeared to the great majority, perhaps nearly all, astronomers to be the impregnably true system of the world,” writes science historian Robert W. Smith.[1] At the same time, the discovery of Uranus had already demonstrated the possibility of more new worlds. Faith in the laws of physics dictated that there must be another, unseen planet out there.

    That faith paid off on Sept. 23, 1846, when German astronomer Johann Gottfried Galle—using calculations provided by French mathematician Urbain Le Verrier, in an act of trans-national nerd comity—spotted a new planet less than one degree from its predicted location. The orbital calculations pinpointing its location took years to complete; Galle’s visual search for the planet required all of one hour. Galle suggested naming it Janus, the two-faced Roman god, implying that it was facing outward toward the stars. The more optimistic Le Verrier objected. “Janus would indicate that this planet is the last of the solar system, which there is no reason to believe,” he wrote.[2] Instead, he proposed Neptune, the god of the sea.

    The discovery of Neptune was as transformative as that of Uranus, though in a quite different way. Uranus had already expanded the scale of the known universe. Neptune expanded the means by which we could get to know it. When Galle saw the planet exactly where Newton’s equations said it should be, he demonstrated that astronomers could detect celestial bodies by gravity alone. Now they could track down objects that had never been observed, perhaps even ones so dark or distant that they fundamentally could not be observed.

    Modern cosmologists use the term Dark Matter to refer to invisible mass that is thought to influence the formation and structure of galaxies across the universe; in that sense, the term traces back to a 1933 paper by the Swiss-American cosmologist Fritz Zwicky. But the concept of dark matter truly began with Neptune, the first celestial object ever discovered before it was seen. From there, things escalated quickly. German astronomer Friedrich Bessel had been tracking the erratic motions of the bright stars Sirius and Procyon and deduced that they, like Uranus, were being pulled off-course by unseen objects. “The existence of numberless visible stars can prove nothing against the existence of numberless invisible ones,” Bessel wrote in 1844.[3] Invisible stars sounded like an oxymoron, but the discovery of Neptune made the outlandish idea seem more plausible. The reality of those dark companions was soon confirmed; in the 1910s the objects were identified as white dwarfs, the faint, collapsed corpses of stars like the sun. Similar detective work in the 1970s led to the discovery of black holes scattered across our galaxy.

    In all this excitement, Uranus and Neptune themselves largely got left behind. They languished in scientific obscurity for another century and a half, mostly because they are so damn hard to study. Uranus never comes within 1.6 billion miles of Earth, 40 times as far as Mars; Neptune is a billion miles farther still. Its apparent size in the sky is equivalent to a dime seen from a mile away. The development of deep-sky photography beginning in the late 19th century greatly boosted the study of our galaxy and led directly to the discovery of countless other galaxies beyond. For the ice giant planets, however, the new technology had an opposite effect. When astronomers stopped looking through the eyepiece and started focusing on photographic plates instead, the planets became even more obscure.
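    The dime comparison can be checked with the small-angle approximation. In this sketch, the dime diameter (17.91 mm), Neptune's diameter, and its closest distance to Earth (about 29.1 AU) are standard values supplied here, not figures from the article:

```python
import math

ARCSEC_PER_RAD = math.degrees(1) * 3600  # ≈ 206,265 arcseconds per radian

def angular_size_arcsec(diameter_m, distance_m):
    """Apparent size of an object, via the small-angle approximation."""
    return diameter_m / distance_m * ARCSEC_PER_RAD

AU = 1.496e11  # meters

# A US dime (17.91 mm) seen from one mile (1609.34 m)
dime = angular_size_arcsec(0.01791, 1609.34)

# Neptune (about 49,244 km across) at roughly 29.1 AU from Earth
neptune = angular_size_arcsec(49.244e6, 29.1 * AU)

print(f"dime at a mile: {dime:.1f} arcsec, Neptune: {neptune:.1f} arcsec")
```

    Both come out near 2.3 arcseconds, which is why the analogy holds so neatly.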

    Neptune’s Rings: NASA Voyager captured Neptune’s rings in 1989. The long-exposure images were taken while the rings were back-lighted by the sun, which enhances the visibility of dust. The bright glare in the center is due to over-exposure of the crescent of Neptune. The two gaps in the upper part of the outer ring (on the left) are due to blemish removal in the computer processing. NASA/JPL-Caltech (US)

    “Going back to the 1800s, early observers would look at Uranus and see bands and other features,” Hammel says. Their eyes were trained to pick out the fleeting moments when the air becomes steady and fine details pop into view. Photography and early forms of digital imaging couldn’t capture those split-seconds of clarity. Instead, they yielded blurry, long-exposure images that suggested the outermost planets were bland and unchanging. “The technology of the time smeared everything out, giving rise to this mythology that Uranus had no cloud features,” Hammel laments. “We had a hundred years of misinterpretation.”

    Then at long last humans developed the technology to visit the ice giants and see them up close … and the misinterpretations just kept coming. On Jan. 24, 1986, NASA’s Voyager 2 swooped over Uranus’s cloud tops and sent back picture after picture of a featureless blue-green orb.

    By bad luck, the spacecraft had arrived at the beginning of summer in the planet’s northern hemisphere, a time when the global weather turns hazy and bland. The Voyager 2 images cemented the idea of Uranus as a boring planet—a knock that still galls Hammel. “It was like, ‘Let me show you a picture of what I looked like on one day in 1986.’ That doesn’t give you an understanding of who I am as a person,” she says.

    The Voyager flyby did offer hints that there’s more to Uranus than meets the eye. The planet has a system of thin, dark rings, which turn out to contain large chunks—possibly the remains of a moon that was destroyed long ago. More surprising, Voyager’s instruments showed that Uranus’s magnetic field is tilted 60 degrees relative to its axis, as if your compass needle pointed to Houston instead of the north pole. There must be a huge, lopsided magnetic generator cranking away inside the planet, which leaves Hammel buzzing with questions: “What kind of internal structure can do that? Is it stable? Does it change over time?”

    But the reputation of the ice giants didn’t really recover until Voyager 2 reached Neptune on Aug. 25, 1989. Unlike its sibling, Neptune was a riot of activity. It seemed to be staring back at the spacecraft with its Great Dark Spot, an anticyclone storm (a hurricane in reverse, with a high-pressure eye) nearly as large as the entire Earth. The Spot was streaked with white clouds of methane ice and surrounded by smaller storms and dark bands circling the entire planet, all tinged a rich, deep blue by methane gas in Neptune’s atmosphere. Beautiful, complex, and not at all boring.

    The Voyager results revealed that weather operates differently on ice giants than it does here on Earth, for reasons that scientists are only starting to decipher. “Wind speeds increase as you go farther out from the sun, which is weird,” says Amy Simon, a Uranus and Neptune enthusiast at NASA’s Goddard Space Flight Center. On Uranus, they blow at 550 mph, as fast as a jet airplane at cruising speed. On Neptune, the winds are even fiercer, averaging 700 mph and gusting to 1,500 mph around the Great Dark Spot. They manage to pick up tremendous energy, even though Neptune receives just 1/900th as much solar heat as Earth does.
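    That 1/900th figure is just the inverse-square law at work. A minimal sketch, assuming standard mean orbital distances (19.2 AU for Uranus, 30.1 AU for Neptune) that the article itself does not quote:

```python
def relative_insolation(distance_au):
    """Sunlight per unit area relative to Earth, by the inverse-square law."""
    return 1.0 / distance_au ** 2

# Assumed mean orbital distances in astronomical units (Earth = 1 AU)
for name, d in [("Uranus", 19.2), ("Neptune", 30.1)]:
    print(f"{name}: 1/{1 / relative_insolation(d):.0f} of Earth's sunlight")
```

    Neptune's value works out to about 1/906, which rounds to the article's "1/900th."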

    Seasons on the ice giants are also unlike anything seen on Earth or anywhere else. For one thing, the seasons are extreme, especially on Uranus: The planet is tipped sideways, so its poles spend half the time in perpetual sunshine and the other half in total darkness. For another, seasons take a long time to change, because the ice giants follow huge, lazy paths around the sun. Uranus takes 84 years to complete a single orbit, and Neptune takes 165 years. Neptune’s northern hemisphere was heading into winter when Voyager flew by in 1989.[4] Springtime won’t arrive until 2038. The ice giants have both the fastest and the slowest climates in the solar system, which makes them useful as extreme natural laboratories. “We run the same weather and climate codes we use on Earth, and we learn about unknown sensitivities or details that aren’t quite right,” Simon says. “And if someday we want to apply these codes to planets around other stars, they’d better work across our whole solar system first.”
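    Those glacial season lengths follow directly from the orbital periods quoted above, since each season spans roughly a quarter of an orbit:

```python
# Orbital periods from the article, in Earth years
periods = {"Uranus": 84, "Neptune": 165}

for planet, years in periods.items():
    # Each season covers about a quarter of the orbit
    print(f"One season on {planet} lasts about {years / 4:.0f} Earth years")
```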

    In 2014, belatedly acknowledging the mad complexity of weather on the ice giants, NASA greenlit the Outer Planet Atmospheres Legacy program, or OPAL, with Simon in charge; her glorious official title is Senior Scientist for Planetary Atmospheres Research. Once a year, OPAL takes over the Hubble Space Telescope and turns it into an outer-planet weather satellite, monitoring conditions on Uranus and Neptune (and Jupiter and Saturn, for good measure), with Simon as the interplanetary weathercaster.

    For the first time, scientists have both the time and the clarity of vision to learn the long-term personalities of Uranus and Neptune. Simon’s reserved demeanor lights up when she describes the quirks she has been observing on her planets. As Uranus has progressed from northern summer to late autumn, its weather has transitioned from hazy to crazy. “We see a polar cap that has gotten really bright and thick. And we’ve seen little storms. They tend to break apart really fast, on order of an hour, because the high winds shear things apart very quickly,” she says. Those changes demonstrate that even tiny variations in solar energy can transform the weather of a giant planet, nearly four times the size of Earth.

    On Neptune, the most significant finding from OPAL is that the activity just never ends. “The big thing we’ve found has been the dark spots. In our few years of monitoring, we’ve seen two more of them. One was already there when we started, and it disappeared over a couple of years. The other one formed in 2019. After they form, the dark spots start to drift in latitude, just like a hurricane on Earth, until eventually they break apart,” Simon says.

    It’s not clear what maintains all of this activity. One possible explanation, Hammel notes, is that the planets’ ultra-cold air is almost frictionless, so it takes very little kick to get big storm systems going. One important clue is that the two ice giants behave quite differently, despite being almost identical in mass, composition, and diameter. “Uranus and Neptune don’t look much alike at all. We see a lot more of the discrete storms on Neptune than on Uranus, and we don’t see much of a bright polar cap,” Simon says.

    The disparity hints at stark differences deep inside the two ice giants. Voyager measurements showed that Neptune emits 2.7 times as much heat as it receives from the sun, apparently retaining a lot of energy from the time of its formation. Uranus, in contrast, sheds just a trickle of additional warmth. “Internal heat’s got to be a much bigger factor than the sunlight in driving the weather, but we’re still trying to puzzle some of that out,” says Simon. If anything, it adds another layer of mystery: Why is Neptune so much hotter under the collar than Uranus? One hypothesis is that Uranus had a near-fatal encounter with a huge proto-planet, twice as massive as Earth, some 4.5 billion years ago. The collision could have knocked it over, explaining its sideways tilt, while scrambling its interior in a way that allowed its primordial warmth to escape: one tidy explanation for two major oddities.

    “It’s a nice idea, but it seems a bit contrived,” Simon says, her measured tone returning. “Every time we think we understand these planets, we realize we don’t.”

    One way to learn more about the ice giants is to get under their skin, by recreating them in the lab. Based on what they can measure and infer about their overall composition, planetary scientists have deduced that both Uranus and Neptune must contain vast quantities of water, ammonia, and methane on the inside. In everyday life, we’d call that combination “Windex and natural gas.” Inside the ice giants, however, these molecules mingle together into a slush that astronomers refer to generically as “ice”—hence the term ice giants. Recent experiments show that it is not like any ice you have ever seen, however.

    Dominik Kraus, a physicist at the University of Rostock [Universität Rostock] (DE) in Germany, leads a group of researchers who have been shooting X-ray lasers at simulated bits of ice-giant material, heating and compressing it to match conditions in the planets’ interiors. He finds that the carbon atoms spontaneously break free of their molecules and arrange themselves into diamonds. Inside Uranus and Neptune, such diamonds could grow to the size of a person, slowly raining down toward the core of the planet. The diamond rain would release energy and stir up huge currents, possibly explaining the unusual magnetic fields of Uranus and Neptune.

    In parallel work, Marius Millot at DOE’s Lawrence Livermore National Laboratory (US) and his colleagues subjected water molecules to similarly extreme conditions and found that it turns into previously unseen material, “superionic ice”—a hot, black, crystalline version of water, also known as Ice XVIII. It sounds exotic, but maybe it shouldn’t. Given how much water is inside the ice giants, and given how many ice giants astronomers are finding around other stars, superionic ice may actually be the most common form of water in the universe. Black ice and diamond rain could be the norm; lakes and rivers and lumps of coal may be the cosmic oddities.

    Another way to know more about the ice giants is to look at the company they keep—the large systems of moons that orbit Uranus and Neptune. Like Uranus itself, its moons are tilted at a rakish 98-degree angle. No other planet is oriented that way. Whatever knocked the planet over evidently took its moons along for the ride. “If a large impact tipped the whole planet on its side, then the gravitational excess from Uranus’s equatorial bulge would have pulled its whole system of moons to be on its side as well,” says Runyon.

    The moons of Neptune document a whole other style of disaster, one that spared the planet but unleashed pandemonium around it. Neptune’s system of moons is overwhelmingly dominated by a single large one, Triton, surrounded by 13 much smaller bodies, mostly in irregular, looping orbits. Triton orbits the planet in a clockwise direction, the opposite of every other planet and major moon, indicating that it formed separately and later got snared by Neptune. When that happened, it must have rolled through the Neptunian system like a bowling ball, as Runyon explains: “The gravitational interactions from Triton probably scattered Neptune’s original system of moons. If there were rings, it would have scattered them, too.” The current moons either reassembled from the wreckage or were captured afterward. Triton also left behind a set of thin, clumpy rings that bunch together in arcs, unlike any other formation in the solar system.

    The Voyager 2 flybys of the 1980s unveiled the ice-giant moons as a set of distinctive worlds. The five main moons of Uranus display ancient chasms, ripples, and hints of volcanic flows—all made of frozen water and other ices instead of rock. Two of them, Ariel and Titania, appear to have been geologically active for an extended period of time. The smallest, Miranda, is a jumble of formations that looks like a jigsaw puzzle that was put together by an inattentive child; nobody knows how it got that way. But the true marvel is Triton, a geologically youthful world that resembles a cantaloupe, its surface sculpted by “cryovolcanic” eruptions of water-ammonia lava. Triton is also dotted with active geysers, likely caused by the explosive defrosting of underground nitrogen. To the amazement of mission scientists, Voyager 2 sent back images of sooty plumes shooting 5 miles into the air and trailing for hundreds of miles.

    Triton broadly resembles Pluto, but it is in many ways the wilder and more exciting of the duo (not to keep dumping on the dwarf planet, but facts are facts). It is about 15 percent larger than Pluto and, more significantly, it is more geologically active, with liquid water sloshing away underground. That’s right: A moon around the coldest, most distant planet in the solar system contains a huge, underground ocean. “Triton is exchanging gravitational energy with Neptune, so it’s warm and gooey on the inside,” Runyon says. “On Earth, living things like warm, gooey places. If you put Earth microbes in that warm, gooey Triton ocean, they would probably survive and proliferate. Which raises the possibility, since we don’t really know how things go from non-life to life, that Triton could be a habitable and inhabited world.”

    He’s not saying aliens, mind you. He’s just saying there could be aliens.

    Despite all of these insights, we are in many ways still at the handshake stage of getting to know the ice giants. The OPAL program watches the planets for less than one day a year. Voyager 2 gathered only limited information about the planets’ composition and internal structure. The flybys happened so quickly that the spacecraft saw just one side of the Uranian and Neptunian moons. “There’s that whole unexplored other half. We don’t know what the heck is happening on the rest of Triton,” Simon says.

    The yearning for deeper familiarity is even more acute now that astronomers recognize Uranus and Neptune as prototypes of billions of similar worlds all across our galaxy. Right now, these exoplanets—planets around other stars—are true cyphers. Astronomers can deduce their sizes, masses, densities, and not much else. Still, that’s enough to tell that many of them seem like slightly shrunken versions of Neptune, with thick, toxic atmospheres. Others, just a wee bit smaller, seem to be rocky “super-Earths.” Nobody knows why this dividing line exists, or whether super-Earths could be habitable. For that matter, nobody yet knows whether ocean moons like Triton can support life. When you’re dealing with ice giants, you get used to the three-word mantra: We don’t know.

    That mantra explains why Runyon just completed an intense round of work as project scientist on Neptune Odyssey, a proposed flagship—that is, multi-billion-dollar—NASA mission that would perform an extended survey of Neptune and Triton while dropping a probe into Neptune’s atmosphere.

    The technology exists to mount an ambitious expedition like this. Even the sober analysts at the National Academy of Sciences have identified an ice-giant mission as a high scientific priority. Unfortunately, these kinds of projects keep getting shot down. Earlier this year, NASA came close to approving Trident, a stripped-down mission to Triton, but passed it over in favor of a pair of probes to Venus.

    A big part of the problem is the waiting. It took Voyager 2 a dozen years to reach Neptune. If Trident had been approved, it wouldn’t have reached its destination until 2038—and even then, it would have sent back just another snapshot. If you want to study the ice giants, you have to adapt to their pace of doing things. No one researcher is going to live long enough to witness a full cycle of seasons on Uranus, much less on Neptune. The last mission to an ice giant happened 32 years ago, and realistically the next one is not likely to arrive until the 2040s at the earliest; this is inevitably going to be a multi-generational effort. Heidi Hammel (age 0.37 Neptune years) has been at it so long that she has largely moved on to administrative work. “This is sad to say, Corey, but I kind of don’t do astronomy anymore,” she confesses. But she’s encouraged to see people like Kirby Runyon (a sprightly 0.21 Neptune years) entering the field.
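    The Neptune-year ages in those parentheticals convert back to Earth years with a one-line calculation, using the 165-year orbit the article gives:

```python
NEPTUNE_YEAR = 165  # Earth years per Neptune orbit, per the article

for age in (0.37, 0.21):  # the article's figures for Hammel and Runyon
    print(f"{age} Neptune years is about {age * NEPTUNE_YEAR:.0f} Earth years")
```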

    Until someone invents warp drive or the like, there is no way to overcome the obstacle of time. The only path forward is embracing extreme patience as the cost—or the joy—of pressing into the unknown. After he discovered Uranus, Herschel explained it was not luck but persistence that brought the planet into view. “I examined every star of the heavens,” he wrote. That night in 1781 was Uranus’ “turn to be discovered.” Perhaps now it is the ice giants’ turn to be truly known, in all of their weird and wonderful glory.


    1. Smith, R.W. The Cambridge network in action: The discovery of Neptune. Isis 80, 395-422 (1989) https://www.journals.uchicago.edu/doi/10.1086/355082.

    2. Kollerstrom, N. The naming of Neptune. Journal of Astronomical History and Heritage 12, 66-71 (2009).

    3. Bessel, F.W. Extract from the translation of a letter from Professor Bessel, dated Königsberg, 10th of August, 1844. On the variations of the proper motions of Procyon and Sirius. Monthly Notices of the Royal Astronomical Society 6, 136-141 (1844). https://academic.oup.com/mnras/article/6/11/136/964304

    4. Meeus, J. Equinoxes and solstices on Uranus and Neptune. Journal of the British Astronomical Association 107, 332 (1997).

    See the full article here.



    JHUAPL campus.

    Founded on March 10, 1942—just three months after the United States entered World War II—the Applied Physics Laboratory was created as part of a federal government effort to mobilize scientific resources to address wartime challenges.

    APL was assigned the task of finding a more effective way for ships to defend themselves against enemy air attacks. The Laboratory designed, built, and tested a radar proximity fuze (known as the VT fuze) that significantly increased the effectiveness of anti-aircraft shells in the Pacific—and, later, ground artillery during the invasion of Europe. The product of the Laboratory’s intense development effort was later judged to be, along with the atomic bomb and radar, one of the three most valuable technology developments of the war.

    On the basis of that successful collaboration, the government, The Johns Hopkins University, and APL made a commitment to continue their strategic relationship. The Laboratory rapidly became a major contributor to advances in guided missiles and submarine technologies. Today, more than seven decades later, the Laboratory’s numerous and diverse achievements continue to strengthen our nation.

    APL continues to relentlessly pursue the mission it has followed since its first day: to make critical contributions to critical challenges for our nation.

    Johns Hopkins University campus.

    Johns Hopkins University opened in 1876, with the inauguration of its first president, Daniel Coit Gilman. “What are we aiming at?” Gilman asked in his installation address. “The encouragement of research … and the advancement of individual scholars, who by their excellence will advance the sciences they pursue, and the society where they dwell.”

    The mission laid out by Gilman remains the university’s mission today, summed up in a simple but powerful restatement of Gilman’s own words: “Knowledge for the world.”

    What Gilman created was a research university, dedicated to advancing both students’ knowledge and the state of human knowledge through research and scholarship. Gilman believed that teaching and research are interdependent, that success in one depends on success in the other. A modern university, he believed, must do both well. The realization of Gilman’s philosophy at Johns Hopkins, and at other institutions that later attracted Johns Hopkins-trained scholars, revolutionized higher education in America, leading to the research university system as it exists today.

    The Johns Hopkins University (US) is a private research university in Baltimore, Maryland. Founded in 1876, the university was named for its first benefactor, the American entrepreneur and philanthropist Johns Hopkins. His $7 million bequest (approximately $147.5 million in today’s currency)—of which half financed the establishment of the Johns Hopkins Hospital—was the largest philanthropic gift in the history of the United States up to that time. Daniel Coit Gilman, who was inaugurated as the institution’s first president on February 22, 1876, led the university to revolutionize higher education in the U.S. by integrating teaching and research. Adopting the concept of a graduate school from Germany’s historic Ruprecht Karl University of Heidelberg [Ruprecht-Karls-Universität Heidelberg] (DE), Johns Hopkins University is considered the first research university in the United States. Over the course of several decades, the university has led all U.S. universities in annual research and development expenditures. In fiscal year 2016, Johns Hopkins spent nearly $2.5 billion on research. The university has graduate campuses in Italy, China, and Washington, D.C., in addition to its main campus in Baltimore.

    Johns Hopkins is organized into 10 divisions on campuses in Maryland and Washington, D.C., with international centers in Italy and China. The two undergraduate divisions, the Zanvyl Krieger School of Arts and Sciences and the Whiting School of Engineering, are located on the Homewood campus in Baltimore’s Charles Village neighborhood. The medical school, nursing school, and Bloomberg School of Public Health, and Johns Hopkins Children’s Center are located on the Medical Institutions campus in East Baltimore. The university also consists of the Peabody Institute, Applied Physics Laboratory, Paul H. Nitze School of Advanced International Studies, School of Education, Carey Business School, and various other facilities.

    Johns Hopkins was a founding member of the Association of American Universities (US). As of October 2019, 39 Nobel laureates and 1 Fields Medalist have been affiliated with Johns Hopkins. Founded in 1883, the Blue Jays men’s lacrosse team has captured 44 national titles and plays in the Big Ten Conference as an affiliate member as of 2014.


    The opportunity to participate in important research is one of the distinguishing characteristics of Hopkins’ undergraduate education. About 80 percent of undergraduates perform independent research, often alongside top researchers. In FY 2013, Johns Hopkins received $2.2 billion in federal research grants—more than any other U.S. university for the 35th consecutive year. Johns Hopkins has had seventy-seven members of the Institute of Medicine, forty-three Howard Hughes Medical Institute Investigators, seventeen members of the National Academy of Engineering, and sixty-two members of the National Academy of Sciences. As of October 2019, 39 Nobel Prize winners have been affiliated with the university as alumni, faculty members or researchers, with the most recent winners being Gregg Semenza and William G. Kaelin.

    Between 1999 and 2009, Johns Hopkins was among the most cited institutions in the world. It attracted 1,222,166 citations and produced 54,022 papers under its name, ranking No. 3 globally (after Harvard University (US) and the Max Planck Society (DE)) in total citations published in Thomson Reuters-indexed journals across 22 fields.

    In FY 2000, Johns Hopkins received $95.4 million in research grants from the National Aeronautics and Space Administration (US), making it the leading recipient of NASA research and development funding. In FY 2002, Hopkins became the first university to cross the $1 billion threshold in either category, recording $1.14 billion in total research and $1.023 billion in federally sponsored research. In FY 2008, Johns Hopkins University performed $1.68 billion in science, medical and engineering research, making it the leading U.S. academic institution in total R&D spending for the 30th year in a row, according to a National Science Foundation (US) ranking. These totals include grants and expenditures of JHU’s Applied Physics Laboratory in Laurel, Maryland.

    The Johns Hopkins University also offers the “Center for Talented Youth” program—a nonprofit organization dedicated to identifying and developing the talents of the most promising K-12 grade students worldwide. As part of the Johns Hopkins University, the “Center for Talented Youth” or CTY helps fulfill the university’s mission of preparing students to make significant future contributions to the world. The Johns Hopkins Digital Media Center (DMC) is a multimedia lab space as well as an equipment, technology and knowledge resource for students interested in exploring creative uses of emerging media and use of technology.

    In 2013, the Bloomberg Distinguished Professorships program was established by a $250 million gift from Michael Bloomberg. This program enables the university to recruit fifty researchers from around the world to joint appointments throughout the nine divisions and research centers. Each professor must be a leader in interdisciplinary research and be active in undergraduate education. Directed by Vice Provost for Research Denis Wirtz, there are currently thirty-two Bloomberg Distinguished Professors at the university, including three Nobel Laureates, eight fellows of the American Association for the Advancement of Science (US), ten members of the American Academy of Arts and Sciences, and thirteen members of the National Academies.
