Tagged: ars technica

• richardmitnick 10:22 am on July 26, 2020
    Tags: "The real science behind SETI’s hunt for intelligent aliens", ars technica, , , , , , ,   

    From ars technica: “The real science behind SETI’s hunt for intelligent aliens” 

    From ars technica

    7/25/2020
    Madeleine O’Keefe

    Aurich Lawson / Getty

    In 1993, a team of scientists published a paper in the scientific journal Nature that announced the detection of a planet harboring life. Using instruments on the spacecraft Galileo, they imaged the planet’s surface and saw continents with colors “compatible with mineral soils” and agriculture, large expanses of ocean with “spectacular reflection,” and frozen water at the poles.

    NASA/Galileo 1989-2003

    An analysis of the planet’s chemistry revealed an atmosphere with oxygen and methane so abundant that they must come from biological sources. “Galileo found such profound departures from equilibrium that the presence of life seems the most probable cause,” the authors wrote.

    But the most telltale sign of life was measured by Galileo’s spectrogram: radio transmissions from the planet’s surface. “Of all Galileo science measurements, these signals provide the only indication of intelligent, technological life,” wrote the authors.

    The paper’s first author was Carl Sagan, the astronomer, author, and science communicator. The planet that he and his co-authors described was Earth.

Nearly three decades later, as far as we can tell, Earth remains the only planet in the Universe with any life, intelligent or otherwise. But that Galileo fly-by of Earth was a case study for future work. It confirmed that modern instruments can give us hints about the presence of life on other planets—including intelligent life. And since then, we’ve dedicated decades of funding and enthusiasm to the search for life elsewhere in the Universe.

    But one component of this quest has, for the most part, been overlooked: the Search for Extraterrestrial Intelligence (SETI). This is the field of astronomical research that looks for alien civilizations by searching for indicators of technology called “technosignatures.” Despite strong support from Sagan himself (he even made SETI the focus of his 1985 science-fiction novel Contact, which was turned into a hit movie in 1997 starring Jodie Foster and Matthew McConaughey), funding and support for SETI have been paltry compared to the search for extraterrestrial life in general.

    Throughout SETI’s 60-year history, a stalwart group of astronomers has managed to keep the search alive. Today, this cohort is stronger than ever, though they are mostly ignored by the research community, largely unfunded by NASA, and dismissed by some astronomers as a campy fringe pursuit. After decades of interest and funding dedicated toward the search for biological life, there are tentative signs that SETI is making a resurgence.

    At a time when we’re in the process of building hardware that should be capable of finding signatures of life (intelligent or otherwise) in the atmospheres of other planets, SETI astronomers simply want a seat at the table. The stakes are nothing less than the question of our place in the Universe.

The Arecibo Radio Telescope in Puerto Rico [recently unfunded by NSF and now picked up by UCF and a group of funders] receives interplanetary signals and transmissions. And it was in the movie Contact!

    How to search for life on other worlds

You may have heard of searching for life on other planets by looking for “biosignatures”—molecules or phenomena that would only occur or persist if life were present. These could be found by directly sampling material from a planet for microbes (known as “in-situ sampling”), or by detecting spectroscopic biosignatures remotely, like the chemical disequilibria in the atmosphere and the signs of water and agriculture that the Galileo probe picked up in 1990.

    The biosignature search is happening now, but it comes with limitations. In-situ sampling requires sending a spacecraft to another planet; we’ve done this, for example, with rovers sent to Mars and the Cassini spacecraft that sampled plumes of water erupting from Saturn’s moon Enceladus. And while in-situ sampling is the ideal option for planets in the Solar System, with our current technology, it will take millennia to get a vehicle to a planet orbiting a different star—and these exoplanets are far, far more numerous.

    To detect spectroscopic biosignatures we will need telescopes like the James Webb Space Telescope (JWST) or the ground-based Extremely Large Telescope, both currently under construction.

    NASA/ESA/CSA Webb Telescope annotated

ESO Extremely Large Telescope (E-ELT), a 39-meter telescope under construction atop Cerro Armazones in the Atacama Desert of northern Chile, at an altitude of 3,060 metres (10,040 ft).

    To directly image an exoplanet and obtain more definitive spectra will require future missions like LUVOIR (Large Ultraviolet Optical Infrared Surveyor) or the Habitable Exoplanet Imaging Mission. But all of these lie a number of years in the future.

    NASA Large UV Optical Infrared Surveyor (LUVOIR)

    NASA Habitable Exoplanet Imaging Mission (HabEx) The Planet Hunter depiction

    SETI researchers, however, are interested in “technosignatures”—biosignatures that indicate intelligent life. They are signals that could only come from technology, including TV and radio transmitters—like the radio transmission detected by the Galileo spacecraft—planetary radar systems, or high-power lasers.

    The first earnest call to search for technosignatures—and SETI’s formal beginning—came in 1959. That was the year that Cornell University physicists Giuseppe Cocconi and Philip Morrison published a landmark paper in Nature outlining the most likely characteristics of alien communication. It would make the most sense, they postulated, for aliens to communicate across interstellar distances using electromagnetic waves since they are the only media known to travel fast enough to conceivably reach us across vast distances of space. Within the electromagnetic spectrum, Cocconi and Morrison determined that it would be most promising to look for radio waves because they are less likely to be absorbed by planetary atmospheres and require less energy to transmit. Specifically, they proposed a narrowband signal around the frequency at which hydrogen atoms emit radiation—a frequency that should be familiar to any civilization with advanced radio technology.

What’s special about these signals is that they exhibit high degrees of coherence, meaning there is a large amount of electromagnetic energy concentrated in just one frequency or in a very short interval of time—not something nature typically does.

    “As far as we know, these kinds of [radio] signals would be unmistakable indicators of technology,” says Andrew Siemion, professor of astronomy at the University of California, Berkeley. “We don’t know of any natural source that produces them.”
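To see why such a narrowband signal stands out, here is a minimal sketch (not an actual SETI pipeline): a faint, coherent tone buried in broadband noise becomes obvious once the data are turned into a power spectrum, because all of its energy piles up in a single frequency bin. The sample rate, tone frequency, and amplitudes are arbitrary, illustrative values.

```python
# Illustrative only: a faint, coherent narrowband tone hiding in broadband
# noise becomes obvious in the power spectrum, because its energy is
# concentrated in one frequency bin while the noise is spread over all of them.
import numpy as np

rng = np.random.default_rng(42)

sample_rate = 1_000_000           # samples per second (arbitrary for the demo)
n_samples = 2 ** 20               # about one second of data
tone_freq = 123_456.0             # Hz, the pretend "technosignature"

t = np.arange(n_samples) / sample_rate
noise = rng.normal(scale=1.0, size=n_samples)          # broadband noise
tone = 0.02 * np.sin(2 * np.pi * tone_freq * t)        # faint coherent tone
voltage = noise + tone

spectrum = np.abs(np.fft.rfft(voltage)) ** 2
freqs = np.fft.rfftfreq(n_samples, d=1.0 / sample_rate)

peak_bin = np.argmax(spectrum[1:]) + 1                 # skip the DC bin
snr = spectrum[peak_bin] / np.median(spectrum[1:])
print(f"strongest bin at {freqs[peak_bin]:.1f} Hz, ~{snr:.0f}x the median power")
```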

    Such a signal was detected on August 18, 1977 by the Ohio State University Radio Observatory, known as “Big Ear.”

Ohio State Big Ear radio telescope. Construction of the Big Ear began in 1956 and was completed in 1961; it was turned on for the first time in 1963.

Astronomy professor Jerry Ehman was analyzing Big Ear data in the form of printouts that, to the untrained eye, looked like someone had simply smashed the number row of a typewriter with a preference for lower digits. The numbers and letters in the Big Ear data indicated, essentially, the intensity of the electromagnetic signal picked up by the telescope, starting at 1 and continuing into letters for double-digit values (A was 10, B was 11, and so on). Most of the page was covered in 1s and 2s, with a stray 6 or 7 sprinkled in.

    But that day, Ehman found an anomaly: 6EQUJ5. This signal had started out at an intensity of 6—already an outlier on the page—climbed to E, then Q, peaked at U—the highest power signal Big Ear had ever seen—then decreased again. Ehman circled the sequence in red pen and wrote “Wow!” next to it.
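For illustration, here is a small sketch that decodes Big Ear’s alphanumeric intensity scale exactly as described above (digits 1-9, then A for 10, B for 11, and so on); the function name is purely for this example.

```python
# Decode Big Ear's printout notation as described above: digits give the
# intensity directly, letters continue the scale (A=10, B=11, ... U=30, ...).
# Values are intensities relative to the background noise level.
def decode_big_ear(sequence: str) -> list[int]:
    values = []
    for ch in sequence:
        if ch.isdigit():
            values.append(int(ch))
        else:
            values.append(ord(ch.upper()) - ord("A") + 10)
    return values

print(decode_big_ear("6EQUJ5"))   # [6, 14, 26, 30, 19, 5] - rises to "U", then falls
```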

    Alas, SETI researchers have never been able to detect the so-called “Wow! Signal” again, despite many tries with radio telescopes around the world. To this day, no one knows the source of the Wow! Signal, and it remains one of the strongest candidates for alien transmission ever detected.

    NASA began funding SETI studies in 1975, a time when the idea of extraterrestrial life was still unthinkable, according to former NASA Chief Historian Steven J. Dick. After all, no one then knew if there were even other planets outside our Solar System, much less life.

    In 1992, NASA made its strongest-ever commitment to SETI, pledging $100 million over ten years to fund the High Resolution Microwave Survey (HRMS), an expansive SETI project led by astrophysicist Jill Tarter.

    Jill Tarter

    One of today’s most prominent SETI researchers, Tarter was the inspiration for the protagonist of Sagan’s Contact, Eleanor Arroway.

    But less than a year after HRMS got underway, Congress abruptly canceled the project. “The Great Martian Chase may finally come to an end,” said Senator Richard Bryan of Nevada, one of its most vocal detractors. “As of today, millions have been spent and we have yet to bag a single little green fellow. Not a single Martian has said take me to your leader, and not a single flying saucer has applied for FAA approval.”

    The whole ordeal was “incredibly traumatic,” says Tarter. “It [the removal of funding] was so vindictive that, in fact, we became the four-letter S-word that you couldn’t say at NASA headquarters for decades.”

    Since that humiliating public reprimand by Congress, NASA’s astrobiology division has been largely focused on searching for biosignatures. And it has made sure to distinguish its current work from SETI, going so far as to say in a 2015 report that “the traditional Search for Extraterrestrial Intelligence… is not a part of astrobiology.”

Despite this, or perhaps because of it, the SETI community quickly regrouped and headed to the private sector for funding. Out of those efforts came Project Phoenix, rising from the ashes of the HRMS. From February 1995 to March 2004, Phoenix scanned about 800 nearby candidate stars for microwave transmissions in three separate campaigns with the Parkes Observatory in New South Wales, Australia; the National Radio Astronomy Observatory in Green Bank, West Virginia; and Arecibo Observatory in Puerto Rico [above].

    CSIRO/Parkes Observatory, located 20 kilometres north of the town of Parkes, New South Wales, Australia, 414.80m above sea level

Green Bank Radio Telescope, West Virginia, USA, now the centerpiece of the Green Bank Observatory (GBO) after being cut loose by the NSF

    The project did not find any signs of E.T., but it was considered the most comprehensive and sensitive SETI program ever conducted.

    At the same time, other projects run by the Planetary Society and UC Berkeley (including a project called SERENDIP, which is still active) carried out SETI experiments and found a handful of anomalous radio signals, but none showed up a second time.

    To search or not to search

    There is plenty of understandable skepticism surrounding the search for extraterrestrial intelligence. At first glance, one might reason that biosignatures are more common than technosignatures and therefore easier to detect. After all, complex life takes a long time to develop and so is probably rarer. But as astronomer and SETI researcher Jason Wright points out, “Slimes and fungus and molds and things are extremely hard to detect [on an exoplanet]. They’re not doing anything to get your attention. They’re not commanding energy resources that might be obvious at interstellar distances.”

    Linda Billings, a communications consultant for NASA’s Astrobiology Division, is not so convinced that SETI is worth it. She worked with SETI in the early 1990s when it was still being funded by the space agency.

    “I felt like there was a resistance to providing a realistic depiction of the SETI search, of how limited it is, how little of our own galaxy that we are capable of detecting in radio signals,” Billings says.

    While she supports NASA’s biosignature searches, she feels that there are too many assumptions embedded into the idea that intelligent aliens would emit signals that we can intercept and understand, so the likelihood of successfully detecting technosignatures is too low.

    What is the likelihood of encountering extraterrestrial intelligence? Astronomers have thought about this question and have even tried to quantify it, most famously in the Drake equation, introduced by radio astronomer Frank Drake in 1961. The equation estimates the number of active and communicative alien civilizations in the Milky Way galaxy by considering seven factors:

    Frank Drake with his Drake Equation. Credit Frank Drake

Drake Equation, Frank Drake, SETI Institute
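For reference, the Drake equation simply multiplies its seven factors together. The sketch below spells them out; the numerical values plugged in at the end are placeholders for illustration only, not estimates from the article.

```python
# The Drake equation multiplies seven factors to estimate N, the number of
# civilizations in the Milky Way whose signals we might detect. The factor
# values below are placeholders purely for illustration; the article's point
# is that most of them remain largely conjectural.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """
    R_star : average rate of star formation in the galaxy (stars/year)
    f_p    : fraction of those stars that have planets
    n_e    : average number of potentially habitable planets per such star
    f_l    : fraction of those planets on which life appears
    f_i    : fraction of life-bearing planets that develop intelligence
    f_c    : fraction of intelligent species that emit detectable signals
    L      : length of time (years) such civilizations remain detectable
    """
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Made-up example values: small changes to the guessed fractions swing the
# answer by orders of magnitude.
print(drake(R_star=2, f_p=0.5, n_e=1, f_l=0.1, f_i=0.01, f_c=0.1, L=10_000))  # 1.0
```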

    Since these values have been largely conjectural, the Drake equation has served as more of a thought exercise than a precise calculation of probability. But SETI skeptics reason that the equation’s huge uncertainties render the search futile until we know more.

    Plus, the question remains as to whether we are looking the “right” way. By assuming aliens will transmit radio waves, SETI researchers also assume that alien civilizations must have intelligence similar to humans’. But intelligence—like life—could develop elsewhere in ways we can’t possibly imagine. So for some, the small chance that aliens are sending out radio transmissions isn’t enough to justify the search.

    Seth Shostak, senior astronomer at the SETI Institute, defended the radio approach in a blog post honoring Frank Drake’s 90th birthday earlier this year. “…[A] search for radio transmissions is not a parochial enterprise,” he wrote. “It doesn’t assume that the aliens are like us in any particular, only that they live in the same Universe, with the same physics.”

SETI researchers can also cast a much wider net with their radio searches: Optical telescopes looking for biosignatures can only resolve data from exoplanets within a few tens of light-years, totaling no more than 100 tractable targets. But existing radio observatories, like those at Green Bank and Arecibo, can detect signals from as far as 10,000 light-years away, yielding some 10 million more targets than biosignature search methods.

    The SETI community has no desire to stop the search for biosignatures. “Technosignatures and biosignatures both lie under the same umbrella that we call ‘astrobiology,’ so we are trying to learn from each other,” says Tarter.

    The current state of SETI

    Since the 1990s, new discoveries have strengthened the case to search for technosignatures. For example, NASA’s Kepler Space Telescope has identified over 4,000 exoplanets, and Kepler data suggest that half of all stars may harbor Earth-sized exoplanets, many of which may be the right distance from their stars to be conducive to life.

NASA Kepler space telescope and its K2 extended mission, which operated from March 7, 2009 until November 15, 2018

NASA/MIT TESS, which replaced Kepler in the search for exoplanets

    Plus, the discovery of extremophiles—organisms that can grow and thrive in extreme temperature, acidity, or pressure—has shown astrobiologists that life exists in environments previously assumed to be inhospitable.

    But of the two arms of the search for life, SETI is still up against a perception problem—what some call a “giggle factor.” What does it take for SETI to be taken seriously? There are some indications that the perception problem is solving itself, albeit slowly.

    In 2015, SETI got a much-needed injection of cash—and faith—when Russian-born billionaire Yuri Milner pledged $100 million over 10 years to form the Breakthrough Initiatives, including Breakthrough Listen, a SETI project based at UC Berkeley and directed by Andrew Siemion.

    Breakthrough Listen Project


UC Observatories Lick Automated Planet Finder, a fully robotic 2.4-meter optical telescope at Lick Observatory, situated on the summit of Mount Hamilton, east of San Jose, California, USA




    GBO radio telescope, Green Bank, West Virginia, USA


    CSIRO/Parkes Observatory, located 20 kilometres north of the town of Parkes, New South Wales, Australia


    SKA Meerkat telescope, 90 km outside the small Northern Cape town of Carnarvon, SA

    Newly added

CfA/VERITAS, a major ground-based gamma-ray observatory with an array of four Čerenkov telescopes for gamma-ray astronomy in the GeV – TeV energy range, located at the Fred Lawrence Whipple Observatory, Mount Hopkins, Arizona, USA, at an altitude of 2,606 m (8,550 ft)

    As the name suggests, Breakthrough Listen’s goal is to listen for signs of intelligent life. Breakthrough Listen has access to more than a dozen facilities around the world, including the NRAO in Green Bank, the Arecibo Observatory, and the MeerKAT radio telescope in South Africa.

    A few years later in 2018, NASA—prodded by SETI fan and Texas Congressman Lamar Smith—hosted a technosignatures workshop at the Lunar and Planetary Institute in Houston, Texas. Over the course of three days, SETI scientists including Wright and Siemion met and discussed the current state of technosignature searches and how NASA could contribute to the field’s future. But Smith retired from Congress that same year, which put SETI’s future with federal funding back into question.

In March 2019, Pennsylvania State University, home to one of just two astrobiology PhD programs in the world (the other is at UCLA), announced the new Penn State Extraterrestrial Intelligence Center (PSETI), to be led by Wright, an associate professor of astronomy and astrophysics at the school. PSETI plans to host the first Penn State SETI Symposium in June 2021.

    Some of PSETI’s main goals are to permanently fund SETI research worldwide, train the next generation of SETI practitioners, and support and foster a worldwide SETI community. These elements are important to any scientific endeavor but are currently lacking in the small field, even with initiatives like Breakthrough Listen. According to a recent white paper, only five people in the US have ever earned a PhD with SETI as the focus of their dissertations, and that number won’t be growing rapidly any time soon.

    “If you can’t propose for grants to work on a topic, it’s really difficult to convince young graduate students and postdocs to work in the field, because they don’t really see a future in it,” says Siemion.

    Tarter agrees that community and funding are the essential ingredients to SETI’s future. “We sort of lost a generation of scientists and engineers in this fallow period where a few of us could manage to keep this going,” she says. “A really well-educated, larger population of young exploratory scientists—and a stable path to allow them to pursue this large question into the future—is what we need.”

    Wright often calls SETI low-hanging fruit. “This field has been starved of resources for so long that there is still a ton of work to do that could have been done decades ago,” says Wright. “We can very quickly make a lot of progress in this field without a lot of effort.” This is made clear in Wright’s SETI graduate course at Penn State, in which his students’ final projects have sometimes become papers that get published in peer-reviewed journals—something that rarely happens in any other field of astronomy.

    In February 2020, Penn State graduate student Sofia Sheikh submitted a paper to The Astrophysical Journal outlining a survey of 20 stars in the “restricted Earth Transit Zone,” the area of the sky in which an observer on another planet could see Earth pass in front of the sun. Sheikh didn’t find any technosignatures in the direction of those 20 stars, but her paper is one of a number of events in the past year that seem to signal the resurgence of SETI.

    In July 2019, Breakthrough Listen announced a collaboration with VERITAS, an array of gamma-ray telescopes in Arizona [above]. VERITAS agreed to spend 30 hours per year looking at Breakthrough Listen’s targets for signs of extraterrestrial intelligence starting in 2021. Breakthrough Listen also announced, in March 2020, that it will soon partner with the NRAO to use the Very Large Array (VLA), an array of radio telescopes in Socorro, New Mexico.

NRAO Karl G. Jansky Very Large Array (formerly the Expanded Very Large Array), on the Plains of San Agustin, fifty miles west of Socorro, NM, USA, at an elevation of 6,970 ft (2,124 m)

    (Coincidentally, the VLA was featured in the film Contact but was never actually used in SETI research.)

    And there are other forthcoming projects that take advantage of alternate avenues to search. Optical SETI instruments, like PANOSETI, will look for bright pulses in optical or near-infrared light that could be artificial in origin. Similarly, LaserSETI will use inexpensive, wide-field, astronomical grade cameras to probe the whole sky, all the time, for brief flickers of laser light coming from deep space. However, neither PANOSETI nor LaserSETI are fully funded.

PANOSETI

LaserSETI

    Just last month, though, NASA did award a grant to a group of scientists to search for technosignatures. It is the first time NASA has given funding to a non-radio technosignature search, and it’s also the first grant to support work at PSETI. The project team, led by Adam Frank from the University of Rochester, includes Jason Wright.

    “It’s a great sign that the winds are changing at NASA,” Wright said in an email. He credits NASA’s 2018 technosignatures workshop as a catalyst that led NASA to relax its stance against SETI research. “We have multiple proposals in to NASA right now to do more SETI work across its science portfolio and I’m more optimistic now that it will be fairly judged against the rest of the proposals.”

    Despite all the obstacles in their path, today’s SETI researchers have no plans to stop searching. After all, they are trying to answer one of the most profound and captivating questions in the entire Universe: are we alone?

    “You can certainly get a little tired and a little beat down by the challenges associated with any kind of job. We’re certainly not immune from that in SETI or in astronomy,” admits Siemion. “But you need only take 30 seconds to just contemplate the fact that you’re potentially on the cusp of making really an incredibly profound discovery—a discovery that would forever change the human view of our place in the universe. And, you know, it gets you out of bed.”

    SETI Institute

    Laser SETI, the future of SETI Institute research

    SETI/Allen Telescope Array situated at the Hat Creek Radio Observatory, 290 miles (470 km) northeast of San Francisco, California, USA, Altitude 986 m (3,235 ft), the origins of the Institute’s search.

    ____________________________________________________

    Further to the story

    UCSC alumna Shelley Wright, now an assistant professor of physics at UC San Diego, discusses the dichroic filter of the NIROSETI instrument, developed at the Dunlap Institute, U Toronto and brought to UCSD and installed at the Nickel telescope at UCSC (Photo by Laurie Hatch)

    Shelley Wright of UC San Diego, with NIROSETI, developed at Dunlap Institute U Toronto, at the 1-meter Nickel Telescope at Lick Observatory at UC Santa Cruz

NIROSETI team, from left to right: Rem Stone (UCO Lick Observatory), Dan Werthimer (UC Berkeley), Jérôme Maire (U Toronto), Shelley Wright (UCSD), Patrick Dorval (U Toronto), and Richard Treffers (Starman Systems). (Image by Laurie Hatch)

    LASERSETI

And, separately, not connected to the SETI Institute:

    SETI@home a BOINC project based at UC Berkeley


    SETI@home, a BOINC project originated in the Space Science Lab at UC Berkeley


    For transparency, I am a financial supporter of the SETI Institute. I was a BOINC cruncher for many years.

    My BOINC

    I am also a financial supporter of UC Santa Cruz and Dunlap Institute at U Toronto.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Ars Technica was founded in 1998 when Founder & Editor-in-Chief Ken Fisher announced his plans for starting a publication devoted to technology that would cater to what he called “alpha geeks”: technologists and IT professionals. Ken’s vision was to build a publication with a simple editorial mission: be “technically savvy, up-to-date, and more fun” than what was currently popular in the space. In the ensuing years, with formidable contributions by a unique editorial staff, Ars Technica became a trusted source for technology news, tech policy analysis, breakdowns of the latest scientific advancements, gadget reviews, software, hardware, and nearly everything else found in between layers of silicon.

    Ars Technica innovates by listening to its core readership. Readers have come to demand devotedness to accuracy and integrity, flanked by a willingness to leave each day’s meaningless, click-bait fodder by the wayside. The result is something unique: the unparalleled marriage of breadth and depth in technology journalism. By 2001, Ars Technica was regularly producing news reports, op-eds, and the like, but the company stood out from the competition by regularly providing long thought-pieces and in-depth explainers.

    And thanks to its readership, Ars Technica also accomplished a number of industry leading moves. In 2001, Ars launched a digital subscription service when such things were non-existent for digital media. Ars was also the first IT publication to begin covering the resurgence of Apple, and the first to draw analytical and cultural ties between the world of high technology and gaming. Ars was also first to begin selling its long form content in digitally distributable forms, such as PDFs and eventually eBooks (again, starting in 2001).

     
• richardmitnick 12:54 pm on July 11, 2020
    Tags: "How small satellites are radically remaking space exploration", ars technica, , , , ,   

    From ars technica: “How small satellites are radically remaking space exploration” 


    From ars technica

    7/11/2020
    Eric Berger
    eric.berger@arstechnica.com

    “There’s so much of the Solar System that we have not explored.”

    An Electron rocket launches in August 2019 from New Zealand.

    At the beginning of this year, a group of NASA scientists agonized over which robotic missions they should choose to explore our Solar System. Researchers from around the United States had submitted more than 20 intriguing ideas, such as whizzing by asteroids, diving into lava tubes on the Moon, and hovering in the Venusian atmosphere.

Ultimately, NASA selected four of these Discovery-class missions for further study. In several months, the space agency will pick two of the four missions to fully fund, each with a cost cap of $450 million and a launch later this decade. For the losing ideas, there may be more chances in future years—but until new opportunities arise, scientists can only plan, wait, and hope.

This is more or less how NASA has done planetary science for decades. Scientists come up with all manner of great ideas to answer questions about our Solar System; then, when NASA announces an opportunity, a feeding frenzy ensues for those limited slots. Ultimately, one or two missions get picked and fly. The whole process often takes a couple of decades from the initial idea to getting data back to Earth.

    This process has succeeded phenomenally. In the last half century, NASA has explored most of the large bodies in the Solar System, from the Sun and Mercury on one end to Pluto and the heliopause at the other. No other country or space agency has come close to NASA’s planetary science achievements. And yet, as the abundance of Discovery-class mission proposals tells us, there is so much more we can learn about the Solar System.

    Now, two emerging technologies may propel NASA and the rest of the world into an era of faster, low-cost exploration. Instead of spending a decade or longer planning and developing a mission, then spending hundreds of millions (to billions!) of dollars bringing it off, perhaps we can fly a mission within a couple of years for a few tens of millions of dollars. This would lead to more exploration and also democratize access to the Solar System.

In recent years, a new generation of companies has been developing new rockets for small satellites that cost roughly $10 million per launch. Already, Rocket Lab has announced a lunar program for its small Electron rocket. And Virgin Orbit has teamed up with a group of Polish universities to launch up to three missions to Mars with its LauncherOne vehicle.

    At the same time, the various components of satellites, from propulsion to batteries to instruments, are being miniaturized. It’s not quite like a mobile phone, which today has more computing power than a machine that filled a room a few decades ago. But small satellites are following the same basic trend line.

    Moreover, the potential of tiny satellites is no longer theoretical. Two years ago, a pair of CubeSats built by NASA (and called MarCO-A and MarCO-B) launched along with the InSight mission. In space, the small satellites deployed their own solar arrays, stabilized themselves, pivoted toward the Sun, and then journeyed to Mars.

    “We are at a time when there are really interesting opportunities for people to do missions much more quickly,” said Elizabeth Frank, an Applied Planetary Scientist at First Mode, a Seattle-based technology company. “It doesn’t have to take decades. It creates more opportunity. This is a very exciting time in planetary science.”

    Small sats

    NASA had several goals with its MarCO spacecraft, said Andy Klesh, an engineer at the Jet Propulsion Laboratory who served as technical lead for the mission.

JPL CubeSat MarCO (Mars Cube One)

    CubeSats had never flown beyond low-Earth orbit before. So during their six-month transit to Mars, the MarCOs proved small satellites could thrive in deep space, control their attitudes and, upon reaching their destination, use a high-gain antenna to stream data back home at 8 kilobits per second.

    But the briefcase-sized MarCO satellites were more than a mere technology demonstration. With the launch of its Mars InSight lander in 2018, NASA faced a communications blackout during the critical period when the spacecraft was due to enter the Martian atmosphere and touch down on the red planet.

    NASA/Mars InSight Lander

    To close the communications gap, NASA built the two MarCO 6U CubeSats for $18.5 million and used them to relay data back from InSight during the landing process. Had InSight failed to land, the MarCOs would have served as black box data recorders, Klesh told Ars.

    The success of the MarCOs changed the perception of small satellites and planetary science. A few months after their mission ended, the European Space Agency announced that it would send two CubeSats on its “Hera” mission to a binary asteroid system.

ESA’s proposed Hera spacecraft depiction

    European engineers specifically cited the success of the MarCOs in their decision to send along CubeSats on the asteroid mission.

    The concept of interplanetary small satellite missions also spurred interest in the emerging new space industry. “That mission got our attention at Virgin Orbit,” said Will Pomerantz, director of special projects at the California-based launch company. “We were inspired by it, and we wondered what else we might be able to do.”

    After the MarCO missions, Pomerantz said, the company began to receive phone calls from research groups about LauncherOne, Virgin’s small rocket that is dropped from a 747 aircraft before igniting its engine. How many kilograms could LauncherOne put into lunar orbit? Could the company add a highly energetic third stage? Ideas for missions to Venus, the asteroids, and Mars poured in.

    Polish scientists believe they can build a spacecraft with a mass of 50kg or less (each of the MarCO spacecraft weighed 13.5kg) that can take high-quality images of Mars and its moon, Phobos. Such a spacecraft might also be able to study the Martian atmosphere or even find reservoirs of liquid water beneath the surface of Mars. Access to low-cost launch was a key enabler of the idea.

    Absent this new mode of planetary exploration, Pomerantz noted, a country like Poland might only be able to participate as one of several secondary partners on a Mars mission. Now it can get full credit. “With even a modest mission like this, it could really put Poland on the map,” Pomerantz said.

    Engineers inspect one of the two MarCO CubeSats in 2016 at JPL.

    Engineer Joel Steinkraus stands with both of the MarCO spacecraft. The one on the left is folded up the way it will be stowed on its rocket; the one on the right has its solar panels fully deployed, along with its high-gain antenna on top. NASA/JPL-Caltech

    Small rockets

    A few months before the MarCO satellites launched with the InSight lander on the large Atlas V rocket, the much smaller Electron rocket took flight for the first time. Developed and launched from New Zealand by Rocket Lab, Electron is the first of a new generation of commercial, small satellite rockets to reach orbit.

    The small booster has a payload capacity of about 200kg to low-Earth orbit. But since Electron’s debut, Rocket Lab has developed a Photon kick stage to provide additional performance.

    In an interview, Rocket Lab’s founder, Peter Beck, said the company believes it can deliver 25kg to Mars or Venus and up to 37kg to the Moon. Because the Photon stage provides many of the functions of a deep space vehicle, most of the mass can be used for sensors and scientific instruments.

    “We’re saying that for just $15 to $20 million you can go to the Moon,” he said. “I think this is a huge, disruptive program for the scientific community.”

    Of the destinations Electron can reach, Beck is most interested in Venus. “I think it’s the unsung hero of our Solar System,” he said. “We can learn a tremendous amount about our own Earth from Venus. Mars gets all the press, but Venus is where it’s really happening. That’s a mission that we really, really want to do.”

    There are other, somewhat larger rockets coming along, too. Firefly’s Alpha booster can put nearly 1 ton into low-Earth orbit, and Relativity Space is developing a Terran 1 rocket that can launch a little more than a ton. These vehicles probably could put CubeSats beyond the asteroid belt, toward Jupiter or beyond.

    Finally, the low-cost launch revolution spurred by SpaceX with larger rockets may also help. The company’s Falcon 9 rocket costs less than $60 million in reusable mode and could get larger spacecraft into deep space cheaply. Historically, NASA has paid triple this price, or more, for scientific launches.

    Accepting failure

    There will be some trade-offs, of course. One of the reasons NASA missions cost so much is that the agency takes extensive precautions to ensure that its vehicles will not fail in the unforgiving environment of space. And ultimately, most of NASA’s missions—so complex and large and capable—do succeed wonderfully.

    CubeSats will be riskier, with fewer redundancies. But that’s okay, says Pomerantz. As an example, he cited NASA’s Curiosity rover mission, launched in 2011 at a cost of $2.5 billion. Imagine sending 100 tiny robots into the Solar System for the price of one Curiosity, Pomerantz said. If just one quarter of the missions work, that’s 25 mini Curiosities.

    Frank agreed that NASA would have to learn to accept failure, taking chances on riskier technologies. Failure must be an option.

    NASA Mars Curiosity Rover


    What’s better than one Curiosity rover? How about 25 mini missions?

    “You want to fail for the right reasons, because you took technical chances and not because you messed up,” she said. “But I think you could create a new culture around failure, where you learn things and fix them and apply what you learn to new missions.”

    NASA seems open to this idea. Already, as it seeks to control costs and work with commercial partners for its new lunar science program, the space agency has said it will accept failure. The leader of NASA’s scientific programs, Thomas Zurbuchen, said he would tolerate some misses as NASA takes “shots on goal” in attempting to land scientific experiments on the Moon. “We do not expect every launch and landing to be successful,” he said last year.

    At the Jet Propulsion Laboratory, too, planetary scientists and engineers are open-minded. John Baker, who leads “game-changing” technology development and missions at the lab, said no one wants to spend 20 years or longer going from mission concept to flying somewhere in the Solar System. “Now, people want to design and print their structure, add instruments and avionics, fuel it and launch it,” he said. “That’s the vision.”

    Spaceflight remains highly challenging, of course. Many technologies can be miniaturized, but propulsion and fuel remain difficult problems. However, a willingness to fail opens up a wealth of new possibilities. One of Baker’s favorite designs is a “Cupid’s Arrow” mission to Venus where a MarCO-like spacecraft is shot through Venus’s atmosphere. An on-board mass spectrometer would analyze a sample of the atmosphere. It’s the kind of mission that could launch as a secondary payload on a Moon mission and use a gravity assist to reach Venus.

    “There’s so much of the Solar System that we have not explored,” Baker said. “There are how many thousands of asteroids? And they’re completely different. Each one of them tells us a different story.”

    Democratizing space

    One of the exciting aspects of bringing down the cost of interplanetary missions is that it increases access for new players—smaller countries like Poland as well as universities around the world.

    “I think the best thing that can be done is to figure out how to lower the price and then make this technology publicly available to everyone,” Baker said. “As more and more countries get engaged in Solar System exploration, we’re just going to learn so much more.”

    Already, organizations such as the Milo Institute at Arizona State University have started to foster collaborations between universities, emerging space agencies, private philanthropy, and small space companies.

Historically, there have been so few opportunities for planetary scientists to get involved in missions that it has been difficult for researchers to gain the necessary project management skills to lead large projects. Frank said she believes that a larger number of smaller missions will increase the diversity of the planetary science community.

    In turn, she said, this will ultimately help NASA and other large space agencies by increasing and developing the global pool of talent for carrying out the biggest and most challenging planetary science missions that still require billions of dollars and big rockets. Because, while some things can be done on the cheap, really ambitious planetary science missions like plumbing the depths of Europa’s oceans or orbiting Pluto will remain quite costly.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition


     
• richardmitnick 12:02 pm on December 31, 2019
Tags: ars technica, ESA’s Characterising Exoplanet Satellite Cheops, Future giant ground based optical telescopes

    From ars technica: “The 2010s: Decade of the exoplanet” 

    From ars technica

    12/31/2019
    John Timmer

    Artist conception of Kepler-186f, the first Earth-size exoplanet found in a star’s “habitable zone.”

ESO/TRAPPIST, the Belgian robotic TRAPPIST-South national telescope at La Silla Observatory, Chile

    A size comparison of the planets of the TRAPPIST-1 system, lined up in order of increasing distance from their host star. The planetary surfaces are portrayed with an artist’s impression of their potential surface features, including water, ice, and atmospheres. NASA

Alpha, Beta, and Proxima Centauri, 27 February 2012. Credit: Skatebiker

    The last ten years will arguably be seen as the “decade of the exoplanet.” That might seem like an obvious thing to say, given that the discovery of the first exoplanet was honored with a Nobel Prize this year. But that discovery happened back in 1995—so what made the 2010s so pivotal?

    One key event: 2009’s launch of the Kepler planet-hunting probe.

NASA Kepler space telescope and its K2 extended mission, which operated from March 7, 2009 until November 15, 2018

    Kepler spawned a completely new scientific discipline, one that has moved from basic discovery—there are exoplanets!—to inferring exoplanetary composition, figuring out exoplanetary atmosphere, and pondering what exoplanets might tell us about prospects for life outside our Solar System.

    To get a sense of how this happened, we talked to someone who was in the field when the decade started: Andrew Szentgyorgyi, currently at the Harvard-Smithsonian Center for Astrophysics, where he’s the principal investigator on the Giant Magellan Telescope’s Large Earth Finder instrument.

Giant Magellan Telescope, 21 meters, to be built at the Carnegie Institution for Science’s Las Campanas Observatory, some 115 km (71 mi) north-northeast of La Serena, Chile, at over 2,500 m (8,200 ft) elevation

    In addition to being famous for having taught your author his “intro to physics” course, Szentgyorgyi was working on a similar instrument when the first exoplanet was discovered.

    Two ways to find a planet

    The Nobel-winning discovery of 51 Pegasi b came via the “radial velocity” method, which relies on the fact that a planet exerts a gravitational influence on its host star, causing the star to accelerate slightly toward the planet.

    Radial Velocity Method-Las Cumbres Observatory

Radial velocity. Image via SuperWASP, http://www.superwasp.org-exoplanets.htm

    Unless the planet’s orbit is oriented so that it’s perpendicular to the line of sight between Earth and the star, some of that acceleration will draw the star either closer to or farther from Earth. This acceleration can be detected via a blue or red shift in the star’s light, respectively.
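As a rough sketch of the physics involved (not the actual analysis pipeline): the fractional wavelength shift of a spectral line is the star’s line-of-sight velocity divided by the speed of light, and for a circular orbit the star’s reflex velocity follows from momentum balance. The numbers below are illustrative, roughly a Jupiter-mass planet around a Sun-like star.

```python
# Minimal sketch of the physics behind the radial-velocity method
# (non-relativistic Doppler shift plus the star's reflex motion for a
# circular orbit). Numbers below are illustrative, roughly Jupiter + Sun.
import math

C = 299_792_458.0          # speed of light, m/s
G = 6.674e-11              # gravitational constant, SI

def reflex_velocity(m_planet, m_star, orbital_radius):
    """Star's orbital speed about the common center of mass (circular orbit)."""
    v_planet = math.sqrt(G * m_star / orbital_radius)   # planet's orbital speed
    return v_planet * m_planet / m_star                 # momentum balance

def doppler_shift(wavelength, radial_velocity):
    """Wavelength shift of a line at `wavelength` for a given line-of-sight velocity."""
    return wavelength * radial_velocity / C

m_sun = 1.989e30           # kg
m_jup = 1.898e27           # kg
au = 1.496e11              # m

v_star = reflex_velocity(m_jup, m_sun, 5.2 * au)        # roughly 12-13 m/s
shift = doppler_shift(550e-9, v_star)                   # shift of a 550 nm line
print(f"stellar wobble ~{v_star:.1f} m/s, line shift ~{shift * 1e12:.3f} pm")
```

The takeaway is that even a Jupiter-scale planet only nudges its star by meters per second, which is why the instrumentation Szentgyorgyi describes has to resolve such tiny spectral shifts.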

The surfaces of stars can expand and contract, which also produces red and blue shifts, but these won’t have the regularity of the acceleration produced by an orbiting body. That similarity explains why, back in the 1990s, people studying the surface changes in stars were already building the necessary hardware to study radial velocity.

    “We had a group that was building instruments that I’ve worked with to study the pulsations of stars—astroseismology,” Szentgyorgyi told Ars, “but that turns out to be sort of the same instrumentation you would use” to discern exoplanets.

    He called the discovery of 51 Pegasi b a “seismic event” and said that he and his collaborators began thinking about how to use their instruments “probably when I got the copy of Nature” that the discovery was published in. Because some researchers already had the right equipment, a steady if small flow of exoplanet announcements followed.

    During this time, researchers developed an alternate way to find exoplanets, termed the “transit method.”

    Planet transit. NASA/Ames

    The transit method requires a more limited geometry from an exoplanet’s orbit: the plane has to cause the exoplanet to pass through the line of sight between its host star and Earth. During these transits, the planet will eclipse a small fraction of light from the host star, causing a dip in its brightness. This doesn’t require the specialized equipment needed for radial velocity detections, but it does require a telescope that can detect small brightness differences despite the flicker caused by the light passing through our atmosphere.
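To first order, the size of that dip is just the fraction of the stellar disk the planet covers. Here is a minimal sketch, using standard solar-system radii purely for scale:

```python
# The brightness dip during a transit is, to first order, the fraction of the
# stellar disk the planet covers: depth = (R_planet / R_star)^2.
R_SUN = 6.957e8       # m
R_JUPITER = 6.9911e7  # m
R_EARTH = 6.371e6     # m

def transit_depth(r_planet, r_star):
    return (r_planet / r_star) ** 2

print(f"Jupiter-size planet, Sun-like star: {transit_depth(R_JUPITER, R_SUN):.4%}")  # ~1%
print(f"Earth-size planet,   Sun-like star: {transit_depth(R_EARTH, R_SUN):.5%}")    # ~0.008%
```

An Earth-size planet blocks less than a hundredth of a percent of a Sun-like star’s light, which is why detecting it from the ground, beneath a flickering atmosphere, is so difficult.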

    By 2009, transit detections were adding regularly to the growing list of exoplanets.

    The tsunami

Within a year of its launch, Kepler started finding new planets. Given time and a better understanding of how to use the instrument, the early years of the 2010s saw thousands of new planets cataloged. In 2009, Szentgyorgyi said, “it was still ‘you’re finding handfuls of exoplanetary systems.’ And then with the launch of Kepler, there’s this tsunami of results which has transformed the field.”

    Suddenly, rather than dozens of exoplanets, we knew about thousands.

    The tsunami of Kepler planet discoveries.

    The sheer numbers involved had a profound effect on our understanding of planet formation. Rather than simply having a single example to test our models against—our own Solar System—we suddenly had many systems to examine (containing over 4,000 currently known exoplanets). These include objects that don’t exist in our Solar System, things like hot Jupiters, super-Earths, warm Neptunes, and more. “You found all these crazy things that, you know, don’t make any sense from the context of what we knew about the Solar System,” Szentgyorgyi told Ars.

    It’s one thing to have models of planet formation that say some of these planets can form; it’s quite another to know that hundreds of them actually exist. And, in the case of hot Jupiters, it suggests that many exosolar systems are dynamic, shuffling planets to places where they can’t form and, in some cases, can’t survive indefinitely.

    But Kepler gave us more than new exoplanets; it provided a different kind of data. Radial velocity measurements only tell you how much the star is moving, but that motion could be caused by a relatively small planet with an orbital plane aligned with the line of sight from Earth. Or it could be caused by a massive planet with an orbit that’s highly inclined from that line of sight. Physics dictates that, from our perspective, these will produce the same acceleration of the star. Kepler helped us sort out the differences.

    A massive planet orbiting at a steep angle (left) and a small one orbiting at a shallow one will both produce the same motion of a star relative to Earth.

    “Kepler not only found thousands and thousands of exoplanets, but it found them where we know the geometry,” Szentgyorgyi told Ars. “If you know the geometry—if you know the planet transits—you know your orbital inclination is in the plane you’re looking.” This allows follow-on observations using radial velocity to provide a more definitive mass of the exoplanet. Kepler also gave us the radius of each exoplanet.

    “Once you know the mass and radius, you can infer the density,” Szentgyorgyi said. “There’s a remarkable amount of science you can do with that. It doesn’t seem like a lot, but it’s really huge.”

    Density can tell us if a planet is rocky or watery—or whether it’s likely to have a large atmosphere or a small one. Sometimes, it can be tough to tell two possibilities apart; density consistent with a watery world could also be provided by a rocky core and a large atmosphere. But some combinations are either physically implausible or not consistent with planetary formation models, so knowing the density gives us good insight into the planetary type.
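A minimal sketch of that step, using illustrative numbers for a hypothetical super-Earth:

```python
# Once transit photometry gives a radius and radial-velocity follow-up gives a
# mass, bulk density is just mass over volume. Comparing it to rock (Earth) and
# water gives a first, crude read on composition. Example values are illustrative.
import math

M_EARTH = 5.972e24    # kg
R_EARTH = 6.371e6     # m
RHO_EARTH = 5514.0    # kg/m^3
RHO_WATER = 1000.0    # kg/m^3

def bulk_density(mass_kg, radius_m):
    return mass_kg / ((4.0 / 3.0) * math.pi * radius_m ** 3)

# A hypothetical "super-Earth": 5 Earth masses, 1.5 Earth radii.
rho = bulk_density(5 * M_EARTH, 1.5 * R_EARTH)
print(f"density ~{rho:.0f} kg/m^3 "
      f"({rho / RHO_EARTH:.2f}x Earth, {rho / RHO_WATER:.1f}x water)")
```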

    Beyond Kepler

    Despite NASA’s heroic efforts, which kept Kepler going even after its hardware started to fail, its tsunami of discoveries slowed considerably before the decade was over. By that point, however, it had more than done its job. We had a new catalog of thousands of confirmed exoplanets, along with a new picture of our galaxy.

    For instance, binary star systems are common in the Milky Way; we now know that their complicated gravitational environment isn’t a barrier to planet formation.

We also know that the most common type of star is the low-mass red dwarf. It was previously possible to think that a star’s low mass would be matched by a low-mass planet-forming disk, preventing the formation of large planets or of large families of smaller planets. Neither turned out to be true.

    “We’ve moved into a mode where we can actually say interesting, global, statistical things about exoplanets,” Szentgyorgyi told Ars. “Most exoplanets are small—they’re sort of Earth to sub-Neptune size. It would seem that probably most of the solar-type stars have exoplanets.” And, perhaps most important, there’s a lot of them. “The ubiquity of exoplanets certainly is a stunner… they’re just everywhere,” Szentgyorgyi added.

    That ubiquity has provided the field with two things. First, it has given scientists the confidence to build new equipment, knowing that there are going to be planets to study. The most prominent piece of gear is NASA’s Transiting Exoplanet Survey Satellite, a space-based telescope designed to perform an all-sky exoplanet survey using methods similar to Kepler’s.

NASA/MIT TESS, which replaced Kepler in the search for exoplanets

    But other projects are smaller, focused on finding exoplanets closer to Earth. If exoplanets are everywhere, they’re also likely to be orbiting stars that are close enough so we can do detailed studies, including characterizing their atmospheres. One famous success in this area came courtesy of the TRAPPIST telescopes [above], which spotted a system hosting at least seven planets. More data should be coming soon, too; on December 17, the European Space Agency launched the first satellite dedicated to studying known exoplanets.

    ESA/CHEOPS

    With future telescopes and associated hardware similar to what Szentgyorgyi is working on, we should be able to characterize the atmospheres of planets out to about 30 light years from Earth. One catch: this method requires that the planet passes in front of its host star from Earth’s point of view.

    When an exoplanet transits in front of its star, most of the light that reaches Earth comes directly to us from the star. But a small percentage passes through the atmosphere of the exoplanet, allowing it to interact with the gases there. The molecules that make up the atmosphere can absorb light of specific wavelengths—essentially causing them to drop out of the light that makes its way to Earth. Thus, the spectrum of the light that we can see using a telescope can contain the signatures of various gases in the exoplanet’s atmosphere.

    There are some important caveats to this method, though. Since the fraction of light that passes through the exoplanet atmosphere is small compared to that which comes directly to us from the star, we have to image multiple transits for the signal to stand out. And the host star has to have a steady output at the wavelengths we’re examining in order to keep its own variability from swamping the exoplanetary signal. Finally, gases in the exoplanet’s atmosphere are constantly in motion, which can make their signals challenging to interpret. (Clouds can also complicate matters.) Still, the approach has been used successfully on a number of exoplanets now.
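To get a feel for why stacking multiple transits is necessary, here is a rough, illustrative estimate of the signal size: the atmosphere only subtends a thin annulus a few scale heights thick around the planet, so its imprint on the starlight is tiny. The planet, star, and atmosphere values below are Earth and Sun numbers used purely for scale.

```python
# Rough scale of a transmission-spectroscopy signal: the atmosphere subtends a
# thin annulus a few scale heights thick around the planet, so its imprint on
# the stellar light is tiny, and stacking many transits (with noise falling
# roughly as 1/sqrt(N)) is usually required. Values are illustrative.
R_SUN = 6.957e8      # m
R_EARTH = 6.371e6    # m
H_EARTH = 8_500.0    # m, approximate scale height of Earth's atmosphere

def atmosphere_signal(r_planet, r_star, scale_height, n_heights=5):
    """Fractional extra dimming from an annulus n_heights scale heights thick."""
    return 2.0 * r_planet * n_heights * scale_height / r_star ** 2

signal = atmosphere_signal(R_EARTH, R_SUN, H_EARTH)
print(f"per-transit atmospheric signal ~{signal:.1e} of the star's light")
# ~1e-6: hence the need for very stable stars and many stacked transits.
```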

    In the air

    Understanding atmospheric composition can tell us critical things about an exoplanet. Much of the news about exoplanet discoveries has been driven by what’s called the “habitable zone.” That zone is defined as the orbital region around a star where the amount of light reaching a planet’s surface is sufficient to keep water liquid. Get too close to the star and there’s enough energy reaching the planet to vaporize the water; get too far away and the energy is insufficient to keep water liquid.

    These limits, however, assume an atmosphere that’s effectively transparent at all wavelengths. As we’ve seen in the Solar System, greenhouse gases can play an outsized role in altering the properties of planets like Venus, Earth, and Mars. At the right distance from a star, greenhouse gases can make the difference between a frozen rock and a Venus-like oven. The presence of clouds can also alter a planet’s temperature and can sometimes be identified by imaging the atmosphere. Finally, the reflectivity of a planet’s surface might also influence its temperature.
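A minimal sketch of the underlying energy balance shows what those habitable-zone limits do and do not capture: equating absorbed starlight with re-radiated heat gives an equilibrium temperature that deliberately ignores the greenhouse warming just described, which is why Venus, Earth, and Mars are all warmer in reality than this estimate suggests. The stellar and orbital values below are illustrative, for a Sun-like star.

```python
# Habitable-zone limits come from balancing absorbed starlight against
# re-radiated heat: sigma*T^4 = L*(1-A) / (16*pi*d^2) for a quickly rotating
# planet. This equilibrium temperature ignores greenhouse warming, so real
# planets with thick atmospheres end up warmer. Numbers are illustrative.
import math

SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26        # W
AU = 1.496e11           # m

def equilibrium_temp(luminosity, distance, albedo=0.3):
    flux_absorbed = luminosity * (1.0 - albedo) / (4.0 * math.pi * distance ** 2)
    return (flux_absorbed / (4.0 * SIGMA)) ** 0.25

for name, d_au in [("Venus-like", 0.72), ("Earth-like", 1.0), ("Mars-like", 1.52)]:
    t = equilibrium_temp(L_SUN, d_au * AU)
    print(f"{name} orbit: equilibrium temperature ~{t:.0f} K")
```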

    The net result is that we don’t know whether any of the planets in a star’s “habitable zone” are actually habitable. But understanding the atmosphere can give us good probabilities, at least.

    The atmosphere can also open a window into the planet’s chemistry and history. On Venus, for example, the huge levels of carbon dioxide and the presence of sulfur dioxide clouds indicate that the planet has an oxidizing environment and that its atmosphere is dominated by volcanic activity. The composition of the gas giants in the outer Solar System likely reflects the gas that was present in the disk that formed the planets early in the Solar System’s history.

    But the most intriguing prospect is that we could find something like Earth, where biological processes produce both methane and the oxygen that ultimately converts it to carbon dioxide. The presence of both in an atmosphere indicates that some process(es) are constantly producing the gases, maintaining a long-term balance. While some geological phenomena can produce both these chemicals, finding them together in an atmosphere would at least be suggestive of possible life.

    Interdisciplinary

    Just the prospect of finding hints of life on other worlds has rapidly transformed the study of exoplanets, since it’s a problem that touches on nearly every area of science. Take the issue of atmospheres and habitability. Even if we understand the composition of a planet’s atmosphere, its temperature won’t just pop out of a simple equation. Distance from the star, type of star, the planet’s rotation, and the circulation of the atmosphere will all play a role in determining conditions. But the climate models that we use to simulate Earth’s atmosphere haven’t been capable of handling anything but the Sun and an Earth-like atmosphere. So extensive work has had to be done to modify them to work with the conditions found elsewhere.

    Similar problems appear everywhere. Geologists and geochemists have to infer likely compositions given little more than a planet’s density and perhaps its atmospheric compositions. Their results need to be combined with atmospheric models to figure out what the surface chemistry of a planet might be. Biologists and biochemists can then take that chemistry and figure out what reactions might be possible there. Meanwhile, the planetary scientists who study our own Solar System can provide insight into how those processes have worked out here.

    “I think it’s part of the Renaissance aspect of exoplanets,” Szentgyorgyi told Ars. “A lot of people now think a lot more broadly, there’s a lot more cross-disciplinary interaction. I find that I’m going to talks about geology, I’m going to talks about the atmospheric chemistry on Titan.”

    The next decade promises incredible progress. A new generation of enormous telescopes is expected to come online, and the James Webb Space Telescope should devote significant time to imaging exosolar systems.

    NASA/ESA/CSA Webb Telescope annotated


    ____________________________________________
    Other giant 30-meter-class telescopes planned

    ESO E-ELT, a 39-meter telescope to be built atop Cerro Armazones in the Atacama Desert of northern Chile, at the summit of the mountain at an altitude of 3,060 metres (10,040 ft).

    TMT (Thirty Meter Telescope), proposed and now approved for Mauna Kea, Hawaii, USA, at 4,207 m (13,802 ft) above sea level; the only giant 30-meter-class telescope planned for the Northern Hemisphere.


    ____________________________________________

    We’re likely to end up with much more detailed pictures of some intriguing bodies in our galactic neighborhood.

    The data that will flow from new experiments and new devices will be interpreted by scientists who have already transformed their field. That transformation—from proving that exoplanets exist to establishing a vibrant, multidisciplinary discipline—really took place during the 2010s, which is why it deserves the title “decade of exoplanets.”

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Ars Technica was founded in 1998 when Founder & Editor-in-Chief Ken Fisher announced his plans for starting a publication devoted to technology that would cater to what he called “alpha geeks”: technologists and IT professionals. Ken’s vision was to build a publication with a simple editorial mission: be “technically savvy, up-to-date, and more fun” than what was currently popular in the space. In the ensuing years, with formidable contributions by a unique editorial staff, Ars Technica became a trusted source for technology news, tech policy analysis, breakdowns of the latest scientific advancements, gadget reviews, software, hardware, and nearly everything else found in between layers of silicon.

    Ars Technica innovates by listening to its core readership. Readers have come to demand devotedness to accuracy and integrity, flanked by a willingness to leave each day’s meaningless, click-bait fodder by the wayside. The result is something unique: the unparalleled marriage of breadth and depth in technology journalism. By 2001, Ars Technica was regularly producing news reports, op-eds, and the like, but the company stood out from the competition by regularly providing long thought-pieces and in-depth explainers.

    And thanks to its readership, Ars Technica also accomplished a number of industry leading moves. In 2001, Ars launched a digital subscription service when such things were non-existent for digital media. Ars was also the first IT publication to begin covering the resurgence of Apple, and the first to draw analytical and cultural ties between the world of high technology and gaming. Ars was also first to begin selling its long form content in digitally distributable forms, such as PDFs and eventually eBooks (again, starting in 2001).

     
  • richardmitnick 4:45 pm on October 3, 2019 Permalink | Reply
    Tags: ars technica

    From ars technica: “Plate tectonics runs deeper than we thought” 

    Ars Technica
    From ars technica

    10/3/2019
    Howard Lee

    At 52 years old, plate tectonics has given geologists a whole new level to explore.

    1
    Þingvellir, or Thingvellir, is a national park in southwestern Iceland, about 40 km northeast of Iceland’s capital, Reykjavík. It’s a site of geological significance, as the visuals may indicate.

    It’s right there in the name: “plate tectonics.” Geology’s organizing theory hinges on plates—thin, interlocking pieces of Earth’s rocky skin. Plates’ movements explain earthquakes, volcanoes, mountains, the formation of mineral resources, a habitable climate, and much else. They’re part of the engine that drags carbon from the atmosphere down into Earth’s mantle, preventing a runaway greenhouse climate like Venus. Their recycling through the mantle helps to release heat from Earth’s liquid metal core, making it churn and generate a magnetic field to protect our atmosphere from erosion by the solar wind.

    The name may not have changed, but today the theory is in the midst of an upgrade to include a deeper level—both in our understanding and in its depth in our planet. “There is a huge transformation,” says Thorsten Becker, the distinguished chair in geophysics at the University of Texas at Austin. “Where we say: ‘plate tectonics’ now, we might mean something that’s entirely different than the 1970s.”

    Plate tectonics emerged in the late 1960s when geologists realized that plates moving on Earth’s surface at fingernail-growth speeds side-swipe each other at some places (like California) and converge at others (like Japan). When they converge, one plate plunges down into Earth’s mantle under the other plate, but what happened to it deeper in the mantle remained a mystery for most of the 20th century. Like an ancient map labeled “here be dragons,” knowledge of the mantle remained skin-deep except for its major boundaries.

    Now a marriage of improved computing power and new techniques to investigate Earth’s interior has enabled scientists to address some startling gaps in the original theory, like why there are earthquakes and other tectonic phenomena on continents thousands of miles from plate boundaries:

    “Plate tectonics as a theory says zero about the continents; [it] says that the plates are rigid and are moving with respect to each other and that the deformation happens only at the boundaries,” Becker told Ars. ”That is nowhere exactly true! It’s [only] approximately true in the oceanic plates.”

    There are other puzzles, too. Why did the Andes and Tibet wait tens of millions of years after their plates began to converge before they grew tall? And why did the Sea of Japan and the Aegean Sea form rapidly, but only after plates had been plunging under them for tens of millions of years?

    “They’ve been puzzling us for ages, and they don’t fit well into plate tectonic theory,” says Jonny Wu, a professor focused on tectonics and mantle structure at the University of Houston. “That’s why we’re looking deeper into the mantle to see if this could explain a whole side of tectonics that we don’t really understand.”

    2
    The subduction of a tectonic plate. British Geological Survey

    Plate Tectonics meet Slab Tectonics

    The Plate Tectonics theory’s modern upgrade is the result of new information. Beginning in the mid-1990s, Earth’s interior has gradually been charted by CAT-Scan-like images, built by mapping the echoes of powerful earthquakes that bounce off features within Earth’s underworld, the way a bat screeches to echo-locate surroundings. These “seismic tomography” pictures show that plates that plunge down from the surface and into the mantle (“subduct” in the language of geologists) don’t just assimilate into a formless blur, as often depicted. In fact, they have a long and eventful afterlife in the mantle.

    “When I was a PhD student in the early 2000s, we were still raised with the idea that there is a rapidly convecting upper mantle that doesn’t communicate with the lower mantle,” says Douwe van Hinsbergen, a professor of global plate tectonics at the University of Utrecht. Now, seismic tomography shows “unequivocal evidence that subducted lithosphere [plate material] goes right down into the lower mantle.” This has settled decades of debate about how deep the heat-driven convection extends through the mantle.

    van Hinsbergen and his colleagues have mapped many descending plates (dubbed “slabs”), scattered throughout the mantle, oozing and sagging inexorably toward the core-mantle boundary 2,900 kilometers (1,800 miles) below our feet, in an “Atlas of the Underworld.” Some slabs are so old they were tectonic plates on Earth’s surface long before the first dinosaurs evolved.

    Moving through the mantle from top to bottom, blue areas are roughly equivalent to subducting slabs. Top panel: seismic tomography based on earthquake P (primary) waves; bottom panel: seismic tomography based on earthquake S (secondary) waves. Credit: van der Meer et al., Tectonophysics 2018, atlas-of-the-underworld.

    Fluid solid

    Slabs sink through the mantle because they are cooler and therefore denser than the surrounding mantle. This works because “the Earth acts as a fluid on very long timescales,” explains Carolina Lithgow-Bertelloni, the endowed chair in geosciences at UCLA.
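    For a sense of the speeds involved (our back-of-the-envelope estimate, not a figure from the article), treat a sinking slab fragment as a dense sphere settling through a viscous fluid and apply Stokes’ law:

    \[
    v \approx \frac{2 \, \Delta\rho \, g \, r^{2}}{9 \, \mu}
    \]

    With an assumed density excess of about 70 kg per cubic meter, gravity of about 10 meters per second squared, a blob radius of about 100 kilometers, and a mantle viscosity near 10^21 pascal-seconds, this works out to a few centimeters per year, the same fingernail-growth pace as the plates at the surface.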

    High-pressure, high-temperature diamond-tipped anvil apparatuses can now recreate the conditions of the mantle and even the center of the core, albeit on a tiny scale. They show that rock at mantle pressures and temperatures is fluid but not liquid, solid yet mobile—confounding our intuition like a Salvador Dali painting. Here rigidity is time-dependent: solid crystals flow, and ice is burning hot.

    But even by the surreal standards of Earth’s underworld, a layer within the mantle between 410 and 660 kilometers (255-410 miles) deep is especially peculiar. Blobs in diamonds that made it back from there to Earth’s surface reveal it to be rich in water. In this layer, carbon that once was life on, or in, the seafloor waits as carbonate minerals to be recycled into the atmosphere, and diamonds grow fat over eons before, occasionally, being recycled into the crowns of royalty. Earthquake waves are distorted as they pass through the layer, showing that the 660-kilometer-deep boundary has mountainous topography with peaks up to 3 kilometers (2 miles) tall, frosted with a layer of weak matter.

    Called the “Mantle Transition Zone,” this layer is a natural consequence of the increasing weight of the rock above as you go deeper underground. At certain depths, the pressure forces atoms to huddle tighter together, forming new, more compact minerals. The biggest of these “phase transitions” occurs at a 660-kilometer-deep horizon, where seawater that was trapped in subducting slabs is squeezed out of minerals. The resulting drier, ultra-dense, and ultra-viscous material sinks down into the lower mantle, moving more than 10 times slower than it did in the upper mantle.

    For sinking slabs, that’s like a traffic light on a highway (in this analogy your commute takes about 20 million years, one way), so slabs typically grind to a halt like cars in a traffic jam when they hit the 660-kilometer level. Seismic tomography shows that they stagnate there, sometimes for millions of years. Or they pile up, buckle, and concertina. Or they slide horizontally. Or sometimes they just pierce the Transition Zone like a spear.

    These differences in how slabs cross the Mantle Transition Zone are the key to explaining those puzzling phenomena on Earth’s continents.

    Pulling back the subducted bedsheet

    To see how the Andes were affected when a slab crossed the Mantle Transition Zone, Wu’s PhD student Yi-Wei Chen worked with Wu and structural geologist John Suppe, using seismic tomography pictures of the Nazca Slab that’s in the mantle under South America.

    They clicked the equivalent of an “undo” button to “un-subduct” the slab: “Like a giant bedsheet that’s fallen off the bed, we could slowly pull it back up and just keep pulling and see how big it was,” says Wu. Their technique is borrowed from the way geologists flatten out contorted crustal rocks in mountain belts and oil fields to understand what the layers were like before they were folded. Using the age of the Pacific Ocean floor, the rate that ocean plates are being manufactured at midocean ridges, and the configuration of those ridges, the team compared the subduction history of South America with a large database of surface geological observations, including the timing of volcanic eruptions.

    3
    A slice through Earth’s mantle under the Andes. Jonny Wu/University of Houston

    “Our plate model is just a model, but there is a huge catalog of tectonic signals, especially magmatism, to work with,” says Wu. “We started to see a link between when the slab reached the mid-mantle viscosity change and things that were happening in the surface.”

    They found that the main uplift of the Andes was delayed by 20–30 million years after the most recent episode of subduction began, a delay that matches the time for the slab to arrive at, stagnate in, and then sink below the Mantle Transition Zone. Delays like that—millions of years between the start of subduction and the start of serious mountain building—have also been recognized in Turkey and Tibet.

    How can a slab sinking through the Mantle Transition Zone build mountains on an entirely different plate, 660 kilometers away through the mantle?

    It’s a mantle wind that blows continents into mountains

    “If you take something that’s dense and you make it go down, that’s going to generate flow everywhere, and that is the ‘mantle wind,’ so there’s nothing mysterious about it!” says Lithgow-Bertelloni.

    Geodynamicists like Lithgow-Bertelloni and Becker use a different approach than Wu’s bedsheet-like un-subduction process. Instead, they code the equations of fluid dynamics into computer models to simulate the flow of high-pressure rock. These models are constrained by the physical conditions in Earth’s mantle gleaned from high-pressure experiments and by the properties of earthquake waves that have traveled through those depths. By playing a “video” of these simulations, scientists can check the behavior of slabs in their models against the “ground truth” of seismic tomographic images. The better they match, the more accurately their models represent how this planet works.

    “How the geometry evolves has to conform to physics,” says Becker. “The deformation is different in the mantle from the shallow crust because things tend to flow rather than break, as temperatures and pressures are higher.”

    Their models show that, as slabs sink below the Mantle Transition Zone, they suck mantle down behind them, creating a far-reaching downwelling current of flowing rock. And it’s that down-going gust of mantle wind that drags continental plates above it, like a conveyor belt, compressing them and squeezing mountain belts skyward in places like the Andes, Turkey, and Tibet.

    The location of the slabs relative to that 660-kilometer horizon determines what kind of mountain chain you get. If a subducting slab hasn’t yet sunk below the 660-kilometer layer, you get the kind of mountains envisaged by classic plate tectonics—without extreme altitudes and confined to a narrow belt above the subducting slab. Examples include the ones around the Western Pacific and Italy: “We think the present-day Apennines are an example of that,” says Becker.

    The bigger mountain belts east of the Pacific and the Tibetan Plateau are in a different category: “Once the slab transitions through the 660, you induce a much larger scale of convection cell. That’s when we are engaging what we call whole mantle ‘conveyor belts.’ And it’s when you have those global conveyor belts and symmetric downwelling rather than a one-sided downwelling, that’s when you get a lot of the [mountain building],” Becker said.

    So one slab sinking below the Mantle Transition Zone can create a mantle undertow that squeezes up mountains on an entirely different plate, 660 kilometers above it. This new level of tectonics now makes sense of other geological puzzles.

    What’s stressing Asia?

    The Tibetan Plateau north of the Himalayas is known as the “Roof of the World” because it stands an average of 4.5 kilometers (15,000 feet) above sea level. It achieved that altitude around 34 million years ago, some 24 million years after the Indian continent began to collide with Asia, and more than 100 million years after seafloor first began to plunge into the mantle under South Asia.

    “The surface elevation of the Tibetan Plateau was acquired after much of the crustal deformation took place, suggesting that processes in the underlying mantle may have played a key role in the uplift,” van Hinsbergen commented in the journal Science recently.

    During those 100 million years, the oceanic slab attached to India seems to have stagnated and then penetrated the mantle’s 660-kilometer layer several times before the Indian continent finally collided with Asia. With that collision, continental crust began to plunge into the mantle. But it took millions of years for that continental rock, more buoyant than the oceanic rock that preceded it, to cause a slab pile-up in the Mantle Transition Zone beneath Tibet. India’s slab buckled and broke off, releasing the amputated Indian plate to buoy up the Tibetan Plateau.

    4
    Seismic tomographic slice through India and Tibet showing the broken-off Himalaya Slab (Hi) and older Indian slab (In) sinking toward the core.

    The fact that India continues, even today, to bulldoze its way under Asia, long after the continents collided and the slab broke off, has been another puzzle for geologists. It shows that forces beyond classic plate tectonics must be at work.

    But India’s continued motion isn’t the only mystery of Central Asia. Lake Baikal in Siberia occupies a deep rift in Earth’s crust caused by stresses that pull the crust there apart, and across Central Asia there are San-Andreas-like fault zones responsible for devastating earthquakes. These are out of place for classic plate tectonics since they are thousands of miles from a plate boundary. What, then, is stressing the interior of Asia?

    The answer is, again, blowing in the mantle wind.

    “This is not due simply to the fact that India has collided into Asia. This is the result of longstanding subduction in the region. You have compression all through the Japan subduction zone and into Indonesia and India,” says Lithgow-Bertelloni. “There’s been a ring of compression and there’s been downwelling, and that’s what gives you the regional stress pattern today.”

    In other words, in parts of the world where the mantle wind converges and sinks, it drags plates together forming big mountain chains. Farther away from that convergence, the same mantle flow stretches the overlying plates, causing rifts and faults.

    Becker and Claudio Faccenna of Roma TRE University linked the downwelling current under East Asia to upwelling of hot mantle rock under Africa, a giant circuit of mantle wind that drives India and Arabia northward today. With Laurent Jolivet of the Sorbonne University, they reason that this mantle wind flows under Asia, stressing those Central Asian faults and rifting the ground under Lake Baikal. They also think it may have stretched East Asia apart to form a series of inland lakes and seas, like the Sea of Japan.

    Slab syringe?

    Wu thinks those East Asian seas and lakes might instead owe their origin to a different gust of mantle wind: “East Asia is puzzling in that you have these marginal basins that have formed since the Pacific Slab began to subduct under that region. We think the Pacific Slab began to subduct around 50 million years ago and, shortly after that, many of these marginal basins opened up, including the Japan Sea, the Kuril Basin, the Sea of Okhotsk. We don’t really have a good idea why they formed, but the timings overlap.”

    Japan was part of the Asian mainland until about 23 million years ago, when it rapidly (for geologists) swung away from the mainland like double saloon doors in an old western movie: “These doors swung open very quickly, apparently in less than 2 or 3 million years. The Japan Slab [is] underneath Beijing today, 2,500 kilometers inland. It’s puzzling that it’s so far inland, and it can be followed all the way back to the actual Pacific Slab today,” Wu told me at the AGU conference in Washington, DC, last December. “What we’ve shown at this conference is that slab is most likely all Pacific Slab, and it looks like this slab has to move laterally in the Mantle Transition Zone.”

    In other words, rather than sinking further down into the mantle, the Pacific Slab seems to have slid sideways in the Mantle Transition Zone, hundreds of kilometers beneath the Asian Plate on the surface. Like a syringe plunger, it must have squeezed mantle material out of its way, and it could be that fugitive flow of mantle that stretched East Asia apart to create the Sea of Japan, the Kuril Basin and the Sea of Okhotsk.
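    The distances involved are at least self-consistent. As a rough check (our arithmetic, with an assumed subduction rate of about 5 centimeters per year),

    \[
    0.05~\mathrm{m\,yr^{-1}} \times 5\times10^{7}~\mathrm{yr} = 2.5\times10^{6}~\mathrm{m} = 2{,}500~\mathrm{km}
    \]

    which is about the length of slab Wu traces from the Pacific margin to beneath Beijing over the roughly 50 million years since subduction began there.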

    6
    Seismic tomographic picture showing the subducted Pacific Slab (white to purple colors) extending in the Mantle Transition Zone as far as Beijing.

    Perhaps. Wu is the first to say this is speculative, but it’s an idea that’s consistent with plate reconstructions by other scientists. It also fits the weird, water-rich properties of the Mantle Transition Zone, with weak minerals and pockets of fluid that would lubricate the slab’s penetration sideways rather than downwards. Fluid-dynamic computer models expect a weak lubricating layer at the 660-kilometer horizon. “We see this in the numerical simulations of convection,” says Lithgow-Bertelloni. “You see a lot of horizontal travel because the slab can’t go down because of a combination of things that are going on in terms of the viscosity structure and the phase transitions, and so it gets trapped in the Transition Zone, and so it has to travel.”

    Becker is more skeptical: “How far the slab under Asia travelled laterally is a very interesting question that a lot of people are thinking about, and it’s one that comes down to what sort of tomographic models you look at,” he says.

    Science by upgrade not by uproot

    It’s skepticism like Becker’s that drives science forward through a never-ending trial by data. Scientists try to break a theory by throwing observations at it to see if it handles them. New data and new techniques sometimes throw up puzzles that demand upgrades or bugfixes to the theory, but most of the theory tends to remain intact. So it is, and always has been, with plate tectonics. Even though its key ideas crystallized in 1967, it didn’t arrive fully formed in a blinding “eureka!” moment. It was built on discoveries and ideas from more than two dozen scientists over six decades until it explained a range of geological and geophysical observations all over the world. That process continues today.

    “What has changed dramatically since the late ‘90s is that we’re now approaching understanding of plate tectonics that actually includes the continents!” says Becker.

    Ironically, this new direction harks back to the 1930s: “Arthur Holmes had a textbook in the 1930s where he associated mountain building such as the Andes with mantle convection,” says Becker. When Alfred Wegener proposed that continents drifted, he lacked a mechanism for it. With hindsight it’s strange that few made the link with Holmes’ work: “For some reason science was not ready to make that connection,” says Becker, “and it took until the establishment of seafloor spreading in the late ‘60s for people to make the link.”

    The grand challenge ahead

    This new, deeper understanding of plate tectonics is now rippling through the Earth sciences. “Modern tectonics no longer is restricted to classical concepts involving the movements and interactions of thin, rigid tectonic (lithospheric) plates,” says a Grand Challenge report to the US National Science Foundation last year. So Earth scientists need to “revisit our traditional definition of tectonics as a field.”

    Ramifications of a new plate tectonics theory extend far beyond geology, too, because it’s woven into the fabric of other sciences, like long-term climate change and the habitability of exoplanets. We’re also realizing that life and climate can affect plate tectonics over long timescales.

    “Plate tectonics 2.0 is a model of Earth evolution that includes not just oceanic plates but includes the continental plates,” Becker says. “And once you include continental plates then you have to worry about the processes such as sediments coming down from the mountains, lubricating the plate, carbon gets dumped on them, then carbon gets released at the subduction zones. Perhaps you might have control of subduction by climate.”

    van Hinsbergen puts it this way: “Undoubtedly we’ll have major progress to make in the next decades. But the black box of the dynamics of our planet interior is now starting to be comprehensively constrained by observations, even as deep as to the core-mantle boundary.”

    So like the plates themselves, it seems plate tectonics as a theory will continue to shift, too.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 12:35 pm on December 9, 2018 Permalink | Reply
    Tags: AI at NASA, ars technica

    From ars technica: “NASA’s next Mars rover will use AI to be a better science partner” 

    Ars Technica
    From ars technica

    12/6/2018
    Alyson Behr

    Experience gleaned from EO-1 satellite will help JPL build science smarts into next rover.

    NASA Mars 2020 rover schematic


    NASA Mars Rover 2020 NASA

    NASA can’t yet put a scientist on Mars. But in its next rover mission to the Red Planet, NASA’s Jet Propulsion Laboratory is hoping to use artificial intelligence to at least put the equivalent of a talented research assistant there. Steve Chien, head of the AI Group at NASA JPL, envisions working with the Mars 2020 Rover “much more like [how] you would interact with a graduate student instead of a rover that you typically have to micromanage.”

    The 13-minute delay in communications between Earth and Mars means that the movements and experiments conducted by past and current Martian rovers have had to be meticulously planned. While more recent rovers have had the capability of recognizing hazards and performing some tasks autonomously, they’ve still placed great demands on their support teams.

    Chien sees AI’s future role in the human spaceflight program as one in which humans focus on the hard parts, like directing robots in a natural way while the machines operate autonomously and give the humans a high-level summary.

    “AI will be almost like a partner with us,” Chien predicted. “It’ll try this, and then we’ll say, ‘No, try something that’s more elongated, because I think that might look better,’ and then it tries that. It understands what elongated means, and it knows a lot of the details, like trying to fly the formations. That’s the next level.

    “Then, of course, at the dystopian level it becomes sentient,” Chien joked. But he doesn’t see that happening soon.

    Old-school autonomy

    NASA has a long history with AI and machine-learning technologies, Chien said. Much of that history has been focused on using machine learning to help interpret extremely large amounts of data. While much of that machine learning involved spacecraft data sent back to Earth for processing, there’s a good reason to put more intelligence directly on the spacecraft: to help manage the volume of communications.

    Earth Observing One was an early example of putting intelligence aboard a spacecraft. Launched in November 2000, EO-1 was originally planned to have a one-year mission, part of which was to test how basic AI could handle some scientific tasks onboard. One of the AI systems tested aboard EO-1 was the Autonomous Sciencecraft Experiment (ASE), a set of software that allowed the satellite to make decisions based on data collected by its imaging sensors. ASE included onboard science algorithms that performed image data analysis to detect trigger conditions to make the spacecraft pay more attention to something, such as interesting features discovered or changes relative to previous observations. The software could also detect cloud cover and edit it out of final image packages transmitted home. EO-1’s ASE could also adjust the satellite’s activities based on the science collected in a previous orbit.

    With volcano imagery, for example, Chien said, JPL had trained the machine-learning software to recognize volcanic eruptions from spectral and image data. Once the software spotted an eruption, it would then act out pre-programmed policies on how to use that data and schedule follow-up observations. For example, scientists might set the following policy: if the spacecraft spots a thermal emission that is above two megawatts, the spacecraft should keep observing it on the next overflight. The AI software aboard the spacecraft already knows when it’s going to overfly the emission next, so it calculates how much space is required for the observation on the solid-state recorder as well as all the other variables required for the next pass. The software can also push other observations off for an orbit to prioritize emerging science.
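    A toy version of that kind of trigger-and-reschedule policy might look like the sketch below. This is a hypothetical illustration in Python, not the actual ASE flight software; the two-megawatt threshold comes from the example above, and every other name and number is assumed.

    # Hypothetical onboard science-trigger policy, loosely modeled on the
    # volcano example above. Not the actual ASE code.
    THERMAL_TRIGGER_WATTS = 2e6      # follow-up threshold from the example: two megawatts
    IMAGE_SIZE_BYTES = 2.5e8         # assumed recorder space needed per observation

    def plan_next_pass(detections, scheduled, recorder_free_bytes):
        """Insert follow-up observations for hot detections, bumping routine ones if needed."""
        followups = [d for d in detections if d["thermal_watts"] > THERMAL_TRIGGER_WATTS]
        followups.sort(key=lambda d: d["thermal_watts"], reverse=True)   # hottest targets first
        plan = list(scheduled)                                           # assume highest priority first
        for target in followups:
            # If the solid-state recorder is full, drop the lowest-priority routine observation.
            if recorder_free_bytes < IMAGE_SIZE_BYTES and plan:
                plan.pop()
                recorder_free_bytes += IMAGE_SIZE_BYTES
            if recorder_free_bytes >= IMAGE_SIZE_BYTES:
                plan.insert(0, {"target": target["name"], "reason": "thermal trigger"})
                recorder_free_bytes -= IMAGE_SIZE_BYTES
        return plan

    # Example: one eruption-like detection, two routine observations already queued.
    detections = [{"name": "volcano A", "thermal_watts": 3.1e6}]
    routine = [{"target": "coastline B", "reason": "routine"},
               {"target": "ice sheet C", "reason": "routine"}]
    print(plan_next_pass(detections, routine, recorder_free_bytes=2e8))

    Running the sketch bumps the lowest-priority routine target and schedules the volcano follow-up first, which is the spirit of the policy described above, if none of its engineering detail.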

    2020 and beyond

    “That’s a great example of things that we were able to do and that are now being pushed in the future to more complicated missions,” Chien said. “Now we’re looking at putting a similar scheduling system onboard the Mars 2020 rover, which is much more complicated. Since a satellite follows a very predictable orbit, the only variable that an orbiter has to deal with is the science data it collects.

    “When you plan to take a picture of this volcano at 10am, you pretty much take a picture of the volcano at 10am, because it’s very easy to predict,” Chien continued. “What’s unpredictable is whether the volcano is erupting or not, so the AI is used to respond to that.” A rover, on the other hand, has to deal with a vast collection of environmental variables that shift moment by moment.

    Even for an orbiting satellite, scheduling observations can be very complicated. So AI plays an important role even when a human is making the decisions, said Chien. “Depending on mission complexity and how many constraints you can get into the software, it can be done completely automatically or with the AI increasing the person’s capabilities. The person can fiddle with priorities and see what different schedules come out and explore a larger proportion of the space in order to come up with better plans. For simpler missions, we can just automate that.”

    Despite the lessons learned from EO-1, Chien said that spacecraft using AI remain “the exception, not the norm. I can tell you about different space missions that are using AI, but if you were to pick a space mission at random, the chance that it was using AI in any significant fashion is very low. As a practitioner, that’s something we have to increase uptake on. That’s going to be a big change.”

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 9:06 am on October 11, 2018 Permalink | Reply
    Tags: ars technica, Turbulence unsolved, Werner Heisenberg

    From ars technica: “Turbulence, the oldest unsolved problem in physics” 

    Ars Technica
    From ars technica

    10/10/2018
    Lee Phillips

    The flow of water through a pipe is still in many ways an unsolved problem.

    Werner Heisenberg won the 1932 Nobel Prize for helping to found the field of quantum mechanics and developing foundational ideas like the Copenhagen interpretation and the uncertainty principle. The story goes that he once said that, if he were allowed to ask God two questions, they would be, “Why quantum mechanics? And why turbulence?” Supposedly, he was pretty sure God would be able to answer the first question.

    Werner Heisenberg from German Federal Archives

    The quote may be apocryphal, and there are different versions floating around. Nevertheless, it is true that Heisenberg banged his head against the turbulence problem for several years.

    His thesis advisor, Arnold Sommerfeld, assigned the turbulence problem to Heisenberg simply because he thought none of his other students were up to the challenge—and this list of students included future luminaries like Wolfgang Pauli and Hans Bethe. But Heisenberg’s formidable math skills, which allowed him to make bold strides in quantum mechanics, only afforded him a partial and limited success with turbulence.

    Some nearly 90 years later, the effort to understand and predict turbulence remains of immense practical importance. Turbulence factors into the design of much of our technology, from airplanes to pipelines, and it factors into predicting important natural phenomena such as the weather. But because our understanding of turbulence over time has stayed largely ad-hoc and limited, the development of technology that interacts significantly with fluid flows has long been forced to be conservative and incremental. If only we became masters of this ubiquitous phenomenon of nature, these technologies might be free to evolve in more imaginative directions.

    An undefined definition

    Here is the point at which you might expect us to explain turbulence, ostensibly the subject of the article. Unfortunately, physicists still don’t agree on how to define it. It’s not quite as bad as “I know it when I see it,” but it’s not the best defined idea in physics, either.

    So for now, we’ll make do with a general notion and try to make it a bit more precise later on. The general idea is that turbulence involves the complex, chaotic motion of a fluid. A “fluid” in physics talk is anything that flows, including liquids, gases, and sometimes even granular materials like sand.

    Turbulence is all around us, yet it’s usually invisible. Simply wave your hand in front of your face, and you have created incalculably complex motions in the air, even if you can’t see it. Motions of fluids are usually hidden to the senses except at the interface between fluids that have different optical properties. For example, you can see the swirls and eddies on the surface of a flowing creek but not the patterns of motion beneath the surface. The history of progress in fluid dynamics is closely tied to the history of experimental techniques for visualizing flows. But long before the advent of the modern technologies of flow sensors and high-speed video, there were those who were fascinated by the variety and richness of complex flow patterns.

    2
    One of the first to visualize these flows was scientist, artist, and engineer Leonardo da Vinci, who combined keen observational skills with unparalleled artistic talent to catalog turbulent flow phenomena. Back in 1509, Leonardo was not merely drawing pictures. He was attempting to capture the essence of nature through systematic observation and description. In this figure, we see one of his studies of wake turbulence, the development of a region of chaotic flow as water streams past an obstacle.

    For turbulence to be considered a solved problem in physics, we would need to be able to demonstrate that we can start with the basic equation describing fluid motion and then solve it to predict, in detail, how a fluid will move under any particular set of conditions. That we cannot do this in general is the central reason that many physicists consider turbulence to be an unsolved problem.

    I say “many” because some think it should be considered solved, at least in principle. Their argument is that calculating turbulent flows is just an application of Newton’s laws of motion, albeit a very complicated one; we already know Newton’s laws, so everything else is just detail. Naturally, I hold the opposite view: the proof is in the pudding, and this particular pudding has not yet come out right.

    The lack of a complete and satisfying theory of turbulence based on classical physics has even led to suggestions that a full account requires some quantum mechanical ingredients: that’s a minority view, but one that can’t be discounted.

    An example of why turbulence is said to be an unsolved problem is that we can’t generally predict the speed at which an orderly, non-turbulent (“laminar”) flow will make the transition to a turbulent flow. We can do pretty well in some special cases—this was one of the problems that Heisenberg had some success with—but, in general, our rules of thumb for predicting the transition speeds are summaries of experiments and engineering experience.

    3
    There are many phenomena in nature that illustrate the often sudden transformation from a calm, orderly flow to a turbulent flow. The transition to turbulence. Credit: Dr. Gary Settles

    This figure above is a nice illustration of this transition phenomenon. It shows the hot air rising from a candle flame, using a 19th century visualization technique that makes gases of different densities look different. Here, the air heated by the candle is less dense than the surrounding atmosphere.

    For another turbulent transition phenomenon familiar to anyone who frequents the beach, consider gentle, rolling ocean waves that become complex and foamy as they approach the shore and “break.” In the open ocean, wind-driven waves can also break if the windspeed is high or if multiple waves combine to form a larger one.

    For another visual aid, there is a centuries-old tradition in Japanese painting of depicting turbulent, breaking ocean waves. In these paintings, the waves are not merely part of the landscape but the main subjects. These artists seemed to be mainly concerned with conveying the beauty and terrible power of the phenomenon, rather than, as was Leonardo, being engaged in a systematic study of nature. One of the most famous Japanese artworks, and an iconic example of this genre, is Hokusai’s “Great Wave,” a woodblock print published in 1831.

    4
    Hokusai’s “Great Wave.”

    For one last reason to consider turbulence an unsolved problem, turbulent flows exhibit a wide range of interesting behavior in time and space. Most of these have been discovered by measurement, not predicted, and there’s still no satisfying theoretical explanation for them.

    Simulation

    Reasons for and against “mission complete” aside, why is the turbulence problem so hard? The best answer comes from looking at both the history and current research directed at what Richard Feynman once called “the most important unsolved problem of classical physics.”

    The most commonly used formula for describing fluid flow is the Navier-Stokes equation. This is the equation you get if you apply Newton’s second law of motion, F = ma (force = mass × acceleration), to a fluid with simple material properties, excluding elasticity, memory effects, and other complications. Complications like these arise when we try to accurately model the flows of paint, polymers, and some biological fluids such as blood (many other substances also violate the assumptions of the Navier-Stokes equations). But for water, air, and other simple liquids and gases, it’s an excellent approximation.
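    For reference, the incompressible form of the equation, for a fluid of constant density and viscosity, reads:

    \[
    \rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right) = -\nabla p + \mu \nabla^{2}\mathbf{u} + \mathbf{f}, \qquad \nabla\cdot\mathbf{u} = 0
    \]

    Here u is the velocity field, p the pressure, rho the density, mu the viscosity, and f any body force per unit volume, such as gravity. The term (u·∇)u, the velocity multiplying its own gradient, is the nonlinearity discussed next.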

    The Navier-Stokes equation is difficult to solve because it is nonlinear. This word is thrown around quite a bit, but here it means something specific. You can build up a complicated solution to a linear equation by adding up many simple solutions. An example you may be aware of is sound: the equation for sound waves is linear, so you can build up a complex sound by adding together many simple sounds of different frequencies (“harmonics”). Elementary quantum mechanics is also linear; the Schrödinger equation allows you to add together solutions to find a new solution.

    But fluid dynamics doesn’t work this way: the nonlinearity of the Navier-Stokes equation means that you can’t build solutions by adding together simpler solutions. This is part of the reason that Heisenberg’s mathematical genius, which served him so well in helping to invent quantum mechanics, was put to such a severe test when it came to turbulence.
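    You can see why superposition fails directly from that term. If u1 and u2 each satisfy the equation, their sum generates cross terms that neither solution accounts for:

    \[
    (\mathbf{u}_1 + \mathbf{u}_2)\cdot\nabla(\mathbf{u}_1 + \mathbf{u}_2)
    = \mathbf{u}_1\cdot\nabla\mathbf{u}_1 + \mathbf{u}_2\cdot\nabla\mathbf{u}_2
    + \mathbf{u}_1\cdot\nabla\mathbf{u}_2 + \mathbf{u}_2\cdot\nabla\mathbf{u}_1
    \]

    so the sum of two solutions is not, in general, a solution.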

    Heisenberg was forced to make various approximations and assumptions to make any progress with his thesis problem. Some of these were hard to justify; for example, the applied mathematician Fritz Noether (a brother of Emmy Noether) raised prominent objections to Heisenberg’s turbulence calculations for decades before finally admitting that they seemed to be correct after all.

    (The situation was so hard to resolve that Heisenberg himself said, while he thought his methods were justified, he couldn’t find the flaw in Fritz Noether’s reasoning, either!)

    The cousins of the Navier-Stokes equation that are used to describe more complex fluids are also nonlinear, as is a simplified form, the Euler equation, that omits the effects of friction. There are cases where a linear approximation does work well, such as flow at extremely slow speeds (imagine honey flowing out of a jar), but this excludes most problems of interest including turbulence.

    Who’s down with CFD?

    Despite the near impossibility of finding mathematical solutions to the equations for fluid flows under realistic conditions, science still needs to get some kind of predictive handle on turbulence. For this, scientists and engineers have turned to the only option available when pencil and paper failed them—the computer. These groups are trying to make the most of modern hardware to put a dent in one of the most demanding applications for numerical computing: calculating turbulent flows.

    The need to calculate these chaotic flows has benefited from (and been a driver of) improvements in numerical methods and computer hardware almost since the first giant computers appeared. The field is called computational fluid dynamics, often abbreviated as CFD.

    Early in the history of CFD, engineers and scientists applied straightforward numerical techniques in order to try to directly approximate solutions to the Navier-Stokes equations. This involves dividing up space into a grid and calculating the fluid variables (pressure, velocity) at each grid point. The problem of the large range of spatial scales immediately makes this approach expensive: you need a solution that captures the flow accurately from the largest scales (meters for pipes, thousands of kilometers for weather) down to the smallest scales that matter, approaching the molecular in principle. Even if you cut off the length scale at the small end at millimeters or centimeters, you will still need millions of grid points.
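    As a minimal sketch of that grid-based approach (a toy example of ours, not a production CFD code), here is an explicit finite-difference solver for the one-dimensional viscous Burgers’ equation, which keeps a Navier-Stokes-style nonlinear term but strips away everything else. All parameter values are assumed for illustration.

    import numpy as np

    # 1-D viscous Burgers' equation, u_t + u*u_x = nu*u_xx, on a periodic grid.
    nx, L = 200, 2.0 * np.pi          # number of grid points, domain length
    dx = L / nx
    nu = 0.07                         # viscosity
    dt = 0.2 * dx**2 / nu             # explicit time step, kept small for stability
    x = np.linspace(0.0, L, nx, endpoint=False)
    u = 1.5 + np.sin(x)               # smooth initial velocity profile

    for _ in range(2000):
        u_plus, u_minus = np.roll(u, -1), np.roll(u, 1)    # periodic neighbors
        dudx = (u_plus - u_minus) / (2.0 * dx)             # central difference
        d2udx2 = (u_plus - 2.0 * u + u_minus) / dx**2
        u = u + dt * (-u * dudx + nu * d2udx2)             # march forward one step

    print("velocity range after integration:", u.min(), u.max())

    Even this toy problem develops steep fronts that demand a fine grid, and in three dimensions halving the grid spacing multiplies the number of points by eight, which is why the scale problem bites so hard.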

    5
    A possible grid for calculating the flow over an airfoil.

    One approach to getting reasonable accuracy with a manageable-sized grid begins with the realization that there are often large regions where not much is happening. Put another way, in regions far away from solid objects or other disturbances, the flow is likely to vary slowly in both space and time. All the action is elsewhere; the turbulent areas are usually found near objects or interfaces.
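    A hypothetical illustration of that idea: many structured CFD grids concentrate points near a wall with a simple stretching function, such as a hyperbolic tangent. The values below are assumed.

    import numpy as np

    # Cluster grid points near a wall at y = 0 using tanh stretching.
    n, beta = 41, 2.0                      # number of points, stretching strength
    eta = np.linspace(0.0, 1.0, n)         # uniform "computational" coordinate
    y = 1.0 - np.tanh(beta * (1.0 - eta)) / np.tanh(beta)   # physical coordinate in [0, 1]

    spacing = np.diff(y)
    print("spacing at the wall:", round(spacing[0], 4),
          "spacing at the far edge:", round(spacing[-1], 4))

    With these numbers, the cells next to the wall are more than ten times finer than those at the far edge, without increasing the total point count.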

    6
    A non-uniform grid for calculating the flow over an airfoil.

    If we take another look at our airfoil and imagine a uniform flow beginning at the left and passing over it, it can be more efficient to concentrate the grid points near the object, especially at the leading and trailing edges, and not “waste” grid points far away from the airfoil. The next figure shows one possible gridding for simulating this problem.

    This is the simplest type of 2D non-uniform grid, containing nothing but straight lines. The state of the art in nonuniform grids is called adaptive mesh refinement (AMR), where the mesh, or grid, actually changes and adapts to the flow during the simulation. This concentrates grid points where they are needed, not wasting them in areas of nearly uniform flow. Research in this field is aimed at optimizing the grid generation process while minimizing the artificial effects of the grid on the solution. Here it’s used in a NASA simulation of the flow around an oscillating rotor blade. The color represents vorticity, a quantity related to angular momentum.

    7
    Using AMR to simulate the flow around a rotor blade.Neal M. Chaderjian, NASA/Ames

    The above image shows the computational grid, rendered as blue lines, as well as the airfoil and the flow solution, showing how the grid adapts itself to the flow. (The grid points are so close together at the areas of highest grid resolution that they appear as solid blue regions.) Despite the efficiencies gained by the use of adaptive grids, simulations such as this are still computationally intensive; a typical calculation of this type occupies 2,000 compute cores for about a week.

    Dimitri Mavriplis and his collaborators at the Mavriplis CFD Lab at the University of Wyoming have made available several videos of their AMR simulations.

    8
    AMR simulation of flow past a sphere.Mavriplis CFD Lab

    Above is a frame from a video of a simulation of the flow past an object; the video is useful for getting an idea of how the AMR technique works, because it shows how the computational grid tracks the flow features.

    This work is an example of how state-of-the-art numerical techniques are capable of capturing some of the physics of the transition to turbulence, illustrated in the image of candle-heated air above.

    Another approach to getting the most out of finite computer resources involves making alterations to the equation of motion, rather than, or in addition to, altering the computational grid.

    Since the first direct numerical simulations of the Navier-Stokes equations were begun at Los Alamos in the late 1950s, the problem of the vast range of spatial scales has been attacked by some form of modeling of the flow at small scales. In other words, the actual Navier-Stokes equations are solved for motion on the medium and large scales, but, below some cutoff, a statistical or other model is substituted.

    The idea is that the interesting dynamics occur at larger scales, and grid points are placed to cover these. But the “subgrid” motions that happen between the gridpoints mainly just dissipate energy, or turn motion into heat, so don’t need to be tracked in detail. This approach is also called large-eddy simulation (LES), the term “eddy” standing in for a flow feature at a particular length scale.
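    One widely used family of subgrid closures (our example; the article doesn’t name a specific model) is the Smagorinsky eddy viscosity, in which the unresolved motions act like an extra viscosity set by the grid spacing Δ and the resolved strain rate:

    \[
    \nu_t = (C_s \Delta)^2 \, |\bar{S}|, \qquad |\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}
    \]

    with the dimensionless constant C_s typically set somewhere around 0.1 to 0.2.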

    The development of subgrid modeling, although it began with the beginning of CFD, is an active area of research to this day. This is because we always want to get the most bang for the computer buck. No matter how powerful the computer, a sophisticated numerical technique that allows us to limit the required grid resolution will enable us to handle more complex problems.

    There are several other prominent approaches to modeling fluid flows on computers, some of which do not make use of grids at all. Perhaps the most successful of these is the technique called “smoothed particle hydrodynamics,” which, as its name suggests, models the fluid as a collection of computational “particles,” which are moved around without the use of a grid. The “smoothed” in the name comes from the smooth interpolations between particles that are used to derive the fluid properties at different points in space.
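    In its standard formulation, SPH estimates any flow quantity A at a position r by summing over nearby particles j, each with mass m_j and density rho_j, weighted by a smoothing kernel W of width h:

    \[
    A(\mathbf{r}) \approx \sum_j \frac{m_j}{\rho_j}\, A_j \, W(|\mathbf{r} - \mathbf{r}_j|,\, h)
    \]

    Derivatives of the flow are then obtained by differentiating the smooth kernel rather than the scattered particle data.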

    Theory and experiment

    Despite the impressive (and ever-improving) ability of fluid dynamicists to calculate complex flows with computers, the search for a better theoretical understanding of turbulence continues, for computers can only calculate flow solutions in particular situations, one case at a time. Only through the use of mathematics do physicists feel that they’ve achieved a general understanding of a group of related phenomena. Luckily, there are a few main theoretical approaches to turbulence, each with some interesting phenomena they seek to penetrate.

    Only a few exact solutions of the Navier-Stokes equations are known; these describe simple, laminar flows (and certainly not turbulent flows of any kind). For flow between two flat plates, the velocity is zero at the boundaries and a maximum halfway between them. This parabolic flow profile (shown below) solves the equations, something that has been known for over a century. Laminar flow in a pipe is similar, with the maximum velocity occurring at the center.

    9
    Exact solution for flow between plates.
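    In symbols, this is the standard plane Poiseuille result: for plates separated by a distance h, with a constant pressure gradient driving the flow, the velocity profile is

    \[
    u(y) = \frac{1}{2\mu}\left(-\frac{dp}{dx}\right) y\,(h - y)
    \]

    which vanishes at the walls y = 0 and y = h and peaks at the midplane, where u_max = (h^2 / 8μ)(-dp/dx).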

    The interesting thing about this parabolic solution, and similar exact solutions, is that they are valid (mathematically speaking) at any flow velocity, no matter how high. However, experience shows that while this works at low speeds, the flow breaks up and becomes turbulent at some moderate “critical” speed. Using mathematical methods to try to find this critical speed is part of what Heisenberg was up to in his thesis work.

    Theorists describe what’s happening here by using the language of stability theory. Stability theory is the examination of the exact solutions to the Navier-Stokes equation and their ability to survive “perturbations,” which are small disturbances added to the flow. These disturbances can be in the form of boundaries that are less than perfectly smooth, variations in the pressure driving the flow, etc.
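    In its simplest, textbook form, the procedure is to write the flow as a base solution plus a small disturbance, keep only the terms linear in the disturbance, and look for wave-like disturbances that grow or decay in time:

    \[
    \mathbf{u} = \mathbf{U} + \mathbf{u}', \qquad \mathbf{u}' \propto e^{ikx + \sigma t}
    \]

    If any such mode has a growth rate with a positive real part, the disturbance amplifies and the base flow is unstable; the critical speed is the speed at which the first growing mode appears.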

    The idea is that, while the low-speed solution is valid at any speed, near a critical speed another solution also becomes valid, and nature prefers that second, more complex solution. In other words, the simple solution has become unstable and is replaced by a second one. As the speed is ramped up further, each solution gives way to a more complicated one, until we arrive at the chaotic flow we call turbulence.

    In the real world, this will always happen, because perturbations are always present—and this is why laminar flows are much less common in everyday experience than turbulence.

    Experiments to directly observe these instabilities are delicate, because the distance between the first instability and the onset of full-blown turbulence is usually quite small. You can see a version of the process in the figure above, showing the transition to turbulence in the heated air column above a candle. The straight column is unstable, but it takes a while before the sinuous instability grows large enough for us to see it as a visible wiggle. Almost as soon as this happens, the cascade of instabilities piles up, and we see a sudden explosion into turbulence.

    Another example of the common pattern is in the next illustration, which shows the typical transition to turbulence in a flow bounded by a single wall.

    10
    Transition to turbulence in a wall-bounded flow. NASA.

    We can again see an approximately periodic disturbance to the laminar flow begin to grow, and after just a few wavelengths the flow suddenly becomes turbulent.

    Capturing, and predicting, the transition to turbulence is an ongoing challenge for simulations and theory; on the theoretical side, the effort begins with stability theory.

    In fluid flows close to a wall, the transition to turbulence can take a somewhat different form. As in the other examples illustrated here, small disturbances get amplified by the flow until they break down into chaotic, turbulent motion. But the turbulence does not involve the entire fluid, instead confining itself to isolated spots, which are surrounded by calm, laminar flow. Eventually, more spots develop, enlarge, and ultimately merge, until the entire flow is turbulent.

    The fascinating thing about these spots is that, somehow, the fluid can enter them, undergo a complex, chaotic motion, and emerge calmly as a non-turbulent, organized flow on the other side. Meanwhile, the spots persist as if they were objects embedded in the flow and attached to the boundary.


    Turbulent spot experiment: pressure fluctuation. (Credit: Katya Casper et al., Sandia National Labs)

    Despite a succession of first-rate mathematical minds puzzling over the Navier-Stokes equation since it was written down almost two centuries ago, exact solutions still are rare and cherished possessions, and basic questions about the equation remain unanswered. For example, we still don’t know whether the equation has solutions in all situations. We’re also not sure if its solutions, which supposedly represent the real flows of water and air, remain well-behaved and finite, or whether some of them blow up with infinite energies or become unphysically unsmooth.

    The scientist who can settle this, either way, has a cool million dollars waiting for them—this is one of the seven unsolved “Millennium Prize” mathematical problems set by the Clay Mathematics Institute.

    Fortunately, there are other ways to approach the theory of turbulence, some of which don’t depend on the knowledge of exact solutions to the equations of motion. The study of the statistics of turbulence uses the Navier-Stokes equation to deduce average properties of turbulent flows without trying to solve the equations exactly. It addresses questions like, “if the velocity of the flow here is so and so, then what is the probability that the velocity one centimeter away will be within a certain range?” It also answers questions about the average of quantities such as the resistance encountered when trying to push water through a pipe, or the lifting force on an airplane wing.

    These are the quantities of real interest to the engineer, who has little use for the physicist’s or mathematician’s holy grail of a detailed, exact description.

    It turns out that the one great obstacle in the way of a statistical approach to turbulence theory is, once again, the nonlinear term in the Navier-Stokes equation. When you use this equation to derive another equation for the average velocity at a single point, it contains a term involving something new: the velocity correlation between two points. When you derive the equation for this velocity correlation, you get an equation with yet another new term: the velocity correlation involving three points. This process never ends, as the diabolical nonlinear term keeps generating higher-order correlations.

    The need to somehow terminate, or “close,” this infinite sequence of equations is known as the “closure problem” in turbulence theory and is still the subject of active research. Very briefly, to close the equations you need to step outside of the mathematical procedure and appeal to a physically motivated assumption or approximation.
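
    To see where this hierarchy comes from, split the velocity into a mean and a fluctuation, u_i = U_i + u_i', and average the Navier-Stokes equation (a textbook step, written here in standard notation rather than quoted from the article):

    \[ \frac{\partial U_i}{\partial t} + U_j \frac{\partial U_i}{\partial x_j} = -\frac{1}{\rho}\frac{\partial P}{\partial x_i} + \nu \nabla^2 U_i - \frac{\partial}{\partial x_j}\langle u_i' u_j' \rangle \]

    The last term, the Reynolds stress \( \langle u_i' u_j' \rangle \), is the new unknown created by the nonlinear term. Writing an equation for it brings in triple correlations \( \langle u_i' u_j' u_k' \rangle \), and so on without end; a "closure" is whatever physically motivated assumption you use to cut that chain off.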

    Despite its difficulty, some type of statistical solution to the fluid equations is essential for describing the phenomena of fully developed turbulence, of which there are a number. Turbulence need not be merely a random, featureless expanse of roiling fluid; in fact, it usually is more interesting than that. One of the most intriguing phenomena is the existence of persistent, organized structures within a violent, chaotic flow environment. We are all familiar with magnificent examples of these in the form of the storms on Jupiter, recognizable, even iconic, features that last for years, embedded within a highly turbulent flow.

    More down-to-Earth examples occur in almost any real-world case of a turbulent flow—in fact, experimenters have to take great pains if they want to create a turbulent flow field that is truly homogeneous, without any embedded structure.

    In the image below of a turbulent wake behind a cylinder, and in the earlier image of the transition to turbulence in a wall-bounded flow, you can see the echoes of the wave-like disturbance that precedes the onset of fully developed turbulence: a periodicity that persists even as the flow becomes chaotic.

    12
    Cyclones at Jupiter’s north pole. NASA, JPL-Caltech, SwRI, ASI, INAF, JIRAM.

    13
    Wake behind a cylinder. Joseph Straccia et al. (CC By NC-ND)

    When your basic governing equation is very hard to solve or even to simulate, it’s natural to look for a more tractable equation or model that still captures most of the important physics. Much of the theoretical effort to understand turbulence is of this nature.

    We’ve mentioned subgrid models above, used to reduce the number of grid points required in a numerical simulation. Another approach to simplifying the Navier-Stokes equation is a class of models called “shell models.” Roughly speaking, in these models you take the Fourier transform of the Navier-Stokes equation, leading to a description of the fluid as a large number of interacting waves at different wavelengths. Then, in a systematic way, you discard most of the waves, keeping just a handful of significant ones. You can then calculate the mode interactions and the resulting turbulent properties, using a computer or, for the simplest models, by hand. While much of the physics is naturally lost in these models, they allow some aspects of the statistical properties of turbulence to be studied in situations where the full equations cannot be solved.
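
    As a concrete illustration, here is a minimal sketch of one popular model of this kind, a GOY-type shell model; the shell count, viscosity, forcing, and time step below are illustrative choices rather than values from the article:

```python
import numpy as np

# Minimal sketch of a GOY-type shell model. Each complex amplitude u[n] stands in
# for all the fluid motion at wavenumber k[n] = k0 * 2**n, so ~20 numbers replace
# a full three-dimensional velocity field. Parameters are illustrative, not tuned.
N, k0, lam, delta = 20, 0.05, 2.0, 0.5
k = k0 * lam ** np.arange(N)
nu = 1e-7                                  # viscosity
f = np.zeros(N, dtype=complex)
f[3] = 5e-3 * (1 + 1j)                     # steady forcing on a large-scale shell

def rhs(u):
    # Nearest- and next-nearest-shell interactions, viscous damping, and forcing.
    U = np.zeros(N + 4, dtype=complex)     # zero padding so edge shells work
    U[2:-2] = u
    up1, up2 = U[3:N + 3], U[4:N + 4]      # u[n+1], u[n+2]
    um1, um2 = U[1:N + 1], U[0:N]          # u[n-1], u[n-2]
    nonlin = 1j * k * (np.conj(up1) * np.conj(up2)
                       - (delta / lam) * np.conj(um1) * np.conj(up1)
                       - ((1 - delta) / lam**2) * np.conj(um2) * np.conj(um1))
    return nonlin - nu * k**2 * u + f

def rk4_step(u, dt):
    # Classical fourth-order Runge-Kutta time step.
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(0)
u = 1e-3 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
dt = 1e-4
for _ in range(100_000):
    u = rk4_step(u, dt)

# Shell "energy spectrum"; in a well-tuned run, its inertial range approaches a
# Kolmogorov-like power law.
print(np.abs(u)**2 / k)
```

    The point of the exercise is the drastic reduction: a handful of coupled ordinary differential equations, one per shell, stand in for the full three-dimensional field, yet they still reproduce some of turbulence’s statistical behavior.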

    Occasionally, we hear about the “end of physics”—the idea that we are approaching the stage where all the important questions will be answered, and we will have a theory of everything. But from another point of view, the fact that such a commonplace phenomenon as the flow of water through a pipe is still in many ways an unsolved problem means that we are unlikely to ever reach a point that all physicists will agree is the end of their discipline. There remains enough mystery in the everyday world around us to keep physicists busy far into the future.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Ars Technica was founded in 1998 when Founder & Editor-in-Chief Ken Fisher announced his plans for starting a publication devoted to technology that would cater to what he called “alpha geeks”: technologists and IT professionals. Ken’s vision was to build a publication with a simple editorial mission: be “technically savvy, up-to-date, and more fun” than what was currently popular in the space. In the ensuing years, with formidable contributions by a unique editorial staff, Ars Technica became a trusted source for technology news, tech policy analysis, breakdowns of the latest scientific advancements, gadget reviews, software, hardware, and nearly everything else found in between layers of silicon.

    Ars Technica innovates by listening to its core readership. Readers have come to demand devotedness to accuracy and integrity, flanked by a willingness to leave each day’s meaningless, click-bait fodder by the wayside. The result is something unique: the unparalleled marriage of breadth and depth in technology journalism. By 2001, Ars Technica was regularly producing news reports, op-eds, and the like, but the company stood out from the competition by regularly providing long thought-pieces and in-depth explainers.

    And thanks to its readership, Ars Technica also accomplished a number of industry leading moves. In 2001, Ars launched a digital subscription service when such things were non-existent for digital media. Ars was also the first IT publication to begin covering the resurgence of Apple, and the first to draw analytical and cultural ties between the world of high technology and gaming. Ars was also first to begin selling its long form content in digitally distributable forms, such as PDFs and eventually eBooks (again, starting in 2001).

     
  • richardmitnick 1:03 pm on July 8, 2017 Permalink | Reply
    Tags: "Answers in Genesis", A great test case, , ars technica, ,   

    From ars technica: “Creationist sues national parks, now gets to take rocks from Grand Canyon” - a Test Case Too Good to be True 

    Ars Technica
    ars technica

    7/7/2017
    Scott K. Johnson

    1
    Scott K. Johnson

    “Alternative facts” aren’t new. Young-Earth creationist groups like Answers in Genesis believe the Earth is no more than 6,000 years old despite actual mountains of evidence to the contrary, and they’ve been playing the “alternative facts” card for years. In lieu of conceding incontrovertible geological evidence, they sidestep it by saying, “Well, we just look at those facts differently.”

    Nowhere is this more apparent than the Grand Canyon, which young-Earth creationist groups have long been enamored with. A long geologic record (spanning almost 2 billion years, in total) is on display in the layers of the Grand Canyon thanks to the work of the Colorado River. But many creationists instead assert that the canyon’s rocks—in addition to the spectacular erosion that reveals them—are actually the product of the Biblical “great flood” several thousand years ago.

    Andrew Snelling, who got a PhD in geology before joining Answers in Genesis, continues working to interpret the canyon in a way that is consistent with his views. In 2013, he requested permission from the National Park Service to collect some rock samples in the canyon for a new project to that end. The Park Service can grant permits for collecting material, which is otherwise illegal.

    Snelling wanted to collect rocks from structures in sedimentary formations known as “soft-sediment deformation”—basically, squiggly disturbances of the layering that occur long before the sediment solidifies into rock. While solid rock layers can fold (bend) on a larger scale under the right pressures, young-Earth creationists assert that all folds are soft sediment structures, since forming them doesn’t require long periods of time.

    The National Park Service sent Snelling’s proposal out for review, having three academic geologists who study the canyon look at it. Those reviews were not kind. None felt the project provided any value to justify the collection. One reviewer, the University of New Mexico’s Karl Karlstrom, pointed out that examples of soft-sediment deformation can be found all over the place, so Snelling didn’t need to collect rock from a national park. In the end, Snelling didn’t get his permit.

    In May, Snelling filed a lawsuit alleging that his rights had been violated, as he believed his application had been denied by a federal agency because of his religious views. The complaint cites, among other things, President Trump’s executive order on religious freedom.

    That lawsuit was withdrawn by Snelling on June 28. According to a story in The Australian, Snelling withdrew his suit because the National Park Service has relented and granted him his permit. He will be able to collect about 40 fist-sized samples, provided that he makes the data from any analyses freely available.

    Not that anything he collects will matter. “Even if I don’t find the evidence I think I will find, it wouldn’t assault my core beliefs,” Snelling told The Australian. “We already have evidence that is consistent with a great flood that swept the world.”

    Again, in actuality, that hypothesis is in conflict with the entirety of Earth’s surface geology.

    Snelling says he will publish his results in a peer-reviewed scientific journal. That likely means Answers in Genesis’ own Answers Research Journal, of which he is editor-in-chief.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition

     
  • richardmitnick 9:07 am on June 18, 2017 Permalink | Reply
    Tags: ars technica, , , , , Molybdenum isotopes serve as a marker of the source material for our Solar System, Tungsten acts as a timer for events early in the Solar System’s history, U Münster   

    From ars technica: “New study suggests Jupiter’s formation divided Solar System in two” 

    Ars Technica
    ars technica

    6/17/2017
    John Timmer

    1
    NASA

    Gas giants like Jupiter have to grow fast. Newborn stars are embedded in a disk of gas and dust that goes on to form planets. But the ignition of the star releases energy that drives away much of the gas within a relatively short time. Thus, producing something like Jupiter involved a race to gather material before it was pushed out of the Solar System entirely.

    Simulations have suggested that Jupiter could have won this race by quickly building a massive, solid core that was able to start drawing in nearby gas. But, since we can’t look at the interior of Jupiter to see whether it’s solid, finding evidence to support these simulations has been difficult. Now, a team at the University of Münster has discovered some relevant evidence [PNAS] in an unexpected location: the isotope ratios found in various meteorites. These suggest that the early Solar System was quickly divided in two, with the rapidly forming Jupiter creating the dividing line.

    2

    Divide and conquer

    Based on details of their composition, we already knew that meteorites formed from more than one pool of material in the early Solar System. The new work extends that by looking at specific elements: tungsten and molybdenum. Molybdenum isotopes serve as a marker of the source material for our Solar System, determining what type of star contributed that material. Tungsten acts as a timer for events early in the Solar System’s history, as it’s produced by a radioactive decay with a half life of just under nine million years.
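
    The timing works through simple exponential decay. The article doesn’t name the system, but the decay in question is hafnium-182 turning into tungsten-182, with a half-life of roughly 8.9 million years, so the surviving fraction of the parent isotope after a time t is

    \[ \frac{N(t)}{N_0} = 2^{-t/t_{1/2}}, \qquad t_{1/2} \approx 8.9\ \text{Myr} \]

    Bodies whose metal separated a million years apart therefore end up with small but measurable differences in their tungsten isotope composition, which is what allows the meteorites to be placed in time.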

    While we have looked at tungsten and molybdenum in a number of meteorite populations before, the German team extended that work to iron-rich meteorites. These are thought to be fragments of the cores of planetesimals that formed early in the Solar System’s history. In many cases, these bodies went on to contribute to building the first planets.

    The chemical composition of meteorites had suggested a large number of different classes produced as different materials solidified at different distances from the Sun. But the new data suggests that, from the perspective of these isotopes, everything falls into just two classes: carbonaceous and noncarbonaceous.

    These particular isotopes tell us a few things. One is that the two populations probably have a different formation history. The molybdenum data indicates that material was added to the Solar System as it was forming, material that originated from a different type of source star. (One way to visualize this is to think of our Solar System as forming in two steps: first from the debris of a supernova, and then from additional material ejected by a red giant star.) And, because the two populations are so distinct, it appears that the later addition of material didn’t spread throughout the entire Solar System. If the later material had spread, you’d find some objects with intermediate compositions.

    A second thing that’s clear from the tungsten data is that the two classes of objects condensed at two different times. This suggests the noncarbonaceous bodies were forming from one to two million years into the Solar System’s history, while carbonaceous materials condensed later, from two to three million years.

    Putting it together

    To explain this, the authors suggest that the Solar System was divided early in its history, creating two different reservoirs of material. “The most plausible mechanism to efficiently separate two disk reservoirs for an extended period,” they suggest, “is the accretion of a giant planet in between them.” That giant planet, obviously, would be Jupiter.

    Modeling indicates that Jupiter would need to be 20 Earth masses to physically separate the two reservoirs. And the new data suggest that a separation had to take place by a million years into the Solar System’s history. All of which means that Jupiter had to grow very large, very quickly. This would be large enough for Jupiter to start accumulating gas well before the newly formed Sun started driving the gas out of the disk. By the time Jupiter grew to 50 Earth masses, it would create a permanent physical separation between the two parts of the disk.

    The authors suggest that the quick formation of Jupiter may have partially starved the inner disk of material, as it prevented material from flowing in from the outer areas of the planet-forming disk. This could explain why the inner Solar System lacks any “super Earths,” larger planets that would have required more material to form.

    Overall, the work does provide some evidence for a quick formation of Jupiter, probably involving a solid core. Other researchers are clearly going to want to check both the composition of additional meteorites and the behavior of planet formation models to see whether the results hold together. But the overall finding of two distinct reservoirs of material in the early Solar System seems to be very clear in their data, and those reservoirs will have to be explained one way or another.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition

     
  • richardmitnick 7:53 am on June 10, 2017 Permalink | Reply
    Tags: , , ars technica, Common Crawl, Implicit Association Test (IAT), Princeton researchers discover why AI become racist and sexist, Word-Embedding Association Test (WEAT)   

    From ars technica: “Princeton researchers discover why AI become racist and sexist” 

    Ars Technica
    ars technica

    4/19/2017
    Annalee Newitz

    Study of language bias has implications for AI as well as human cognition.

    1
    No image caption or credit

    Ever since Microsoft’s chatbot Tay started spouting racist commentary after 24 hours of interacting with humans on Twitter, it has been obvious that our AI creations can fall prey to human prejudice. Now a group of researchers has figured out one reason why that happens. Their findings shed light on more than our future robot overlords, however. They’ve also worked out an algorithm that can actually predict human prejudices based on an intensive analysis of how people use English online.

    The implicit bias test

    Many AIs are trained to understand human language by learning from a massive corpus known as the Common Crawl. The Common Crawl is the result of a large-scale crawl of the Internet in 2014 that contains 840 billion tokens, or words. Princeton Center for Information Technology Policy researcher Aylin Caliskan and her colleagues wondered whether that corpus—created by millions of people typing away online—might contain biases that could be discovered by algorithm. To figure it out, they turned to an unusual source: the Implicit Association Test (IAT), which is used to measure often unconscious social attitudes.

    People taking the IAT are asked to put words into two categories. The longer it takes for the person to place a word in a category, the less they associate the word with the category. (If you’d like to take an IAT, there are several online at Harvard University.) IAT is used to measure bias by asking people to associate random words with categories like gender, race, disability, age, and more. Outcomes are often unsurprising: for example, most people associate women with family, and men with work. But that obviousness is actually evidence for the IAT’s usefulness in discovering people’s latent stereotypes about each other. (It’s worth noting that there is some debate among social scientists about the IAT’s accuracy.)

    Using the IAT as a model, Caliskan and her colleagues created the Word-Embedding Association Test (WEAT), which analyzes chunks of text to see which concepts are more closely associated than others. The “word-embedding” part of the test comes from a project at Stanford called GloVe, which packages words together into “vector representations,” basically lists of associated terms. So the word “dog,” if represented as a word-embedded vector, would be composed of words like puppy, doggie, hound, canine, and all the various dog breeds. The idea is to get at the concept of dog, not the specific word. This is especially important if you are working with social stereotypes, where somebody might be expressing ideas about women by using words like “girl” or “mother.” To keep things simple, the researchers limited each concept to 300 vectors.

    To see how concepts get associated with each other online, the WEAT looks at a variety of factors to measure their “closeness” in text. At a basic level, Caliskan told Ars, this means how many words apart the two concepts are, but it also accounts for other factors like word frequency. After going through an algorithmic transform, closeness in the WEAT is equivalent to the time it takes for a person to categorize a concept in the IAT. The further apart the two concepts, the more distantly they are associated in people’s minds.
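
    In the published test, that closeness is scored with cosine similarity between word vectors. Here is a minimal sketch of the scoring, assuming `vec` is a dictionary of pre-trained GloVe vectors you have already loaded; the word lists in the example are illustrative stand-ins, not the study’s actual sets:

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two word vectors.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word, attr_a, attr_b, vec):
    # Mean similarity to attribute set A minus mean similarity to attribute set B.
    sim_a = np.mean([cosine(vec[word], vec[a]) for a in attr_a])
    sim_b = np.mean([cosine(vec[word], vec[b]) for b in attr_b])
    return sim_a - sim_b

def weat_effect(targets_x, targets_y, attr_a, attr_b, vec):
    # Effect size: how differently two target sets associate with A versus B.
    s_x = [association(w, attr_a, attr_b, vec) for w in targets_x]
    s_y = [association(w, attr_a, attr_b, vec) for w in targets_y]
    pooled_std = np.std(s_x + s_y, ddof=1)
    return (np.mean(s_x) - np.mean(s_y)) / pooled_std

# Hypothetical usage with made-up word sets:
# effect = weat_effect(["she", "woman"], ["he", "man"],
#                      ["family", "home"], ["career", "office"], vec)
```

    A large positive effect size means the first target set sits closer to the first attribute set in the embedding space, which is the machine analogue of a fast IAT response.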

    The WEAT worked beautifully to discover biases that the IAT had found before. “We adapted the IAT to machines,” Caliskan said. And what that tool revealed was that “if you feed AI with human data, that’s what it will learn. [The data] contains biased information from language.” That bias will affect how the AI behaves in the future, too. As an example, Caliskan made a video (see above) where she shows how the Google Translate AI actually mistranslates words into the English language based on stereotypes it has learned about gender.

    Imagine an army of bots unleashed on the Internet, replicating all the biases that they learned from humanity. That’s the future we’re looking at if we don’t build some kind of corrective for the prejudices in these systems.

    A problem that AI can’t solve

    Though Caliskan and her colleagues found language was full of biases based on prejudice and stereotypes, it was also full of latent truths. In one test, they found strong associations between the concept of woman and the concept of nursing. This reflects a truth about reality, which is that nursing is a majority female profession.

    “Language reflects facts about the world,” Caliskan told Ars. She continued:

    “Removing bias or statistical facts about the world will make the machine model less accurate. But you can’t easily remove bias, so you have to learn how to work with it. We are self-aware, we can decide to do the right thing instead of the prejudiced option. But machines don’t have self awareness. An expert human might be able to aid in [the AIs’] decision-making process so the outcome isn’t stereotyped or prejudiced for a given task.”

    The solution to the problem of human language is… humans. “I can’t think of many cases where you wouldn’t need a human to make sure that the right decisions are being made,” concluded Caliskan. “A human would know the edge cases for whatever the application is. Once they test the edge cases they can make sure it’s not biased.”

    So much for the idea that bots will be taking over human jobs. Once we have AIs doing work for us, we’ll need to invent new jobs for humans who are testing the AIs’ results for accuracy and prejudice. Even when chatbots get incredibly sophisticated, they are still going to be trained on human language. And since bias is built into language, humans will still be necessary as decision-makers.

    In a recent paper for Science about their work, the researchers say the implications are far-reaching. “Our findings are also sure to contribute to the debate concerning the Sapir Whorf hypothesis,” they write. “Our work suggests that behavior can be driven by cultural history embedded in a term’s historic use. Such histories can evidently vary between languages.” If you watched the movie Arrival, you’ve probably heard of Sapir Whorf—it’s the hypothesis that language shapes consciousness. Now we have an algorithm that suggests this may be true, at least when it comes to stereotypes.

    Caliskan said her team wants to branch out and try to find as-yet-unknown biases in human language. Perhaps they could look for patterns created by fake news or look into biases that exist in specific subcultures or geographical locations. They would also like to look at other languages, where bias is encoded very differently than it is in English.

    “Let’s say in the future, someone suspects there’s a bias or stereotype in a certain culture or location,” Caliskan mused. “Instead of testing with human subjects first, which takes time, money, and effort, they can get text from that group of people and test to see if they have this bias. It would save so much time.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition

     
  • richardmitnick 4:44 pm on May 16, 2017 Permalink | Reply
    Tags: ars technica, ,   

    From ars technica: “Atomic clocks and solid walls: New tools in the search for dark matter” 

    Ars Technica
    ars technica

    5/15/2017
    Jennifer Ouellette

    1
    An atomic clock based on a fountain of atoms. NSF

    Countless experiments around the world are hoping to reap scientific glory for the first detection of dark matter particles. Usually, they do this by watching for dark matter to bump into normal matter or by slamming particles into other particles and hoping for some dark stuff to pop out. But what if the dark matter behaves more like a wave?

    That’s the intriguing possibility championed by Asimina Arvanitaki, a theoretical physicist at the Perimeter Institute in Waterloo, Ontario, Canada, where she holds the Aristarchus Chair in Theoretical Physics—the first woman to hold a research chair at the institute. Detecting these hypothetical dark matter waves requires a bit of experimental ingenuity. So she and her collaborators are adapting a broad range of radically different techniques to the search: atomic clocks and resonating bars originally designed to hunt for gravitational waves—and even lasers shined at walls in hopes that a bit of dark matter might seep through to the other side.

    “Progress in particle physics for the last 50 years has been focused on colliders, and rightfully so, because whenever we went to a new energy scale, we found something new,” says Arvanitaki. That focus is beginning to shift. To reach higher and higher energies, physicists must build ever-larger colliders—an expensive proposition when funding for science is in decline. There is now more interest in smaller, cheaper options. “These are things that usually fit in the lab, and the turnaround time for results is much shorter than that of the collider,” says Arvanitaki, admitting, “I’ve done this for a long time, and it hasn’t always been popular.”

    The end of the WIMP?

    While most dark matter physicists have focused on hunting for weakly interacting massive particles, or WIMPs, Arvanitaki is one of a growing number who are focusing on less well-known alternatives, such as axions—hypothetical ultralight particles with masses that could be as little as ten thousand trillion trillion times smaller than the mass of the electron. The masses of WIMPs, by contrast, would be larger than the mass of the proton.

    Cosmology gave us very good reason to be excited about WIMPs and focus initial searches in their mass range, according to David Kaplan, a theorist at Johns Hopkins University (and producer of the 2013 documentary Particle Fever). But the WIMP’s dominance in the field to date has also been due, in part, to excitement over the idea of supersymmetry. That model requires every known particle in the Standard Model—whether fermion or boson—to have a superpartner that is heavier and in the opposite class. So an electron, which is a fermion, would have a boson superpartner called the selectron, and so on.

    Physicists suspect one or more of those unseen superpartners might make up dark matter. Supersymmetry predicts not just the existence of dark matter, but how much of it there should be. That fits neatly within a WIMP scenario. Dark matter could be any number of things, after all, and the supersymmetry mass range seemed like a good place to start the search, given the compelling theory behind it.

    But in the ensuing decades, experiment after experiment has come up empty. With each null result, the parameter space where WIMPs might be lurking shrinks. This makes distinguishing a possible signal from background noise in the data increasingly difficult.

    “We’re about to bump up against what’s called the ‘neutrino floor,’” says Kaplan. “All the technology we use to discover WIMPs will soon be sensitive to random neutrinos flying through the Universe. Once it gets there, it becomes a much messier signal and harder to see.”

    Particles are waves

    Despite its momentous discovery of the Higgs boson in 2012, the Large Hadron Collider has yet to find any evidence of supersymmetry. So we shouldn’t wonder that physicists are turning their attention to alternative dark matter candidates outside of the mass ranges of WIMPs. “It’s now a fishing expedition,” says Kaplan. “If you’re going on a fishing expedition, you want to search as broadly as possible, and the WIMP search is narrow and deep.”

    Enter Asimina Arvanitaki—“Mina” for short. She grew up in a small Greek village called Koklas and, since her parents were teachers, there was no shortage of books around the house. Arvanitaki excelled in math and physics—at a very young age, she calculated the time light takes to travel from the Earth to the Sun. While she briefly considered becoming a car mechanic in high school because she loved cars, she decided, “I was more interested in why things are the way they are, not in how to make them work.” So she majored in physics instead.

    Similar reasoning convinced her to switch her graduate-school focus at Stanford from experimental condensed matter physics to theory: she found her quantum field theory course more scintillating than any experimental results she produced in the laboratory.

    Central to Arvanitaki’s approach is a theoretical reimagining of dark matter as more than just a simple particle. A peculiar quirk of quantum mechanics is that particles exhibit both particle- and wave-like behavior, so we’re really talking about something more akin to a wavepacket, according to Arvanitaki. The size of those wave packets is inversely proportional to their mass. “So the elementary particles in our theory don’t have to be tiny,” she says. “They can be super light, which means they can be as big as the room or as big as the entire Universe.”

    Axions fit the bill as a dark matter candidate, but they interact so weakly with regular matter that they cannot be produced in colliders. Arvanitaki has proposed several smaller experiments that might succeed in detecting them in ways that colliders cannot.

    Walls, clocks, and bars

    One of her experiments relies on atomic clocks—the most accurate timekeeping devices we have, in which the natural frequency oscillations of atoms serve the same purpose as the pendulum in a grandfather clock. An average wristwatch loses roughly one second every year; atomic clocks are so precise that the best would lose only about one second over the age of the Universe.

    Within her theoretical framework, dark matter particles (including axions) would behave like waves and oscillate at specific frequencies determined by the mass of the particles. Dark matter waves would cause the atoms in an atomic clock to oscillate as well. The effect is very tiny, but it should be possible to see such oscillations in the data. A trial search of existing data from atomic clocks came up empty, but Arvanitaki suspects that a more dedicated analysis would prove more fruitful.
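
    The link between mass and frequency is direct. For an ultralight, wave-like candidate, the field oscillates at roughly its Compton frequency (a standard relation, not one quoted in the article):

    \[ f = \frac{m c^2}{h} \approx 2.4\ \text{Hz} \times \frac{m c^2}{10^{-14}\ \text{eV}} \]

    so a candidate near 10^-14 eV would imprint an oscillation of a couple of hertz on clock comparisons, while lighter fields would show up as much slower drifts that long records of clock data could reveal.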

    Then there are so-called “Weber bars,” which are solid aluminum cylinders that Arvanitaki says should ring like a tuning fork should a dark matter wavelet hit them at just the right frequency. The bars get their name from physicist Joseph Weber, who used them in the 1960s to search for gravitational waves. He claimed to have detected those waves, but nobody could replicate his findings, and his scientific reputation never quite recovered from the controversy.

    Weber died in 2000, but chances are he’d be pleased that his bars have found a new use. Since we don’t know the precise frequency of the dark matter particles we’re hunting, Arvanitaki suggests building a kind of xylophone out of Weber bars. Each bar would be tuned to a different frequency to scan for many different frequencies at once.

    Walking through walls

    Yet another inventive approach involves sending axions through walls. Photons (light) can’t pass through walls—shine a flashlight onto a wall, and someone on the other side won’t be able to see that light. But axions are so weakly interacting that they can pass through a solid wall. Arvanitaki’s experiment exploits the fact that it should be possible to turn photons into axions and then reverse the process to restore the photons. Place a strong magnetic field in front of that wall and then shine a laser onto it. Some of the photons will become axions and pass through the wall. A second magnetic field on the other side of the wall then converts those axions back into photons, which should be easily detected.
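
    The odds for any single photon are tiny. In the usual light-shining-through-walls estimate (standard in the axion literature, not quoted in the article), the probability of a photon converting into an axion while crossing a magnetic field of strength B over a length L is, in natural units,

    \[ P_{\gamma \to a} \simeq \tfrac{1}{4}\,(g_{a\gamma} B L)^2 \]

    and the full photon-to-axion-to-photon trip pays this price twice, which is why such experiments lean on intense lasers, strong magnets, and extremely quiet single-photon detectors.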

    This is a new kind of dark matter detection relying on small, lab-based experiments that are easier to perform (and hence easier to replicate). They’re also much cheaper than setting up detectors deep underground or trying to produce dark matter particles at the LHC—the biggest, most complicated scientific machine ever built, and the most expensive.

    “I think this is the future of dark matter detection,” says Kaplan, although both he and Arvanitaki are adamant that this should complement, not replace, the many ongoing efforts to hunt for WIMPs, whether deep underground or at the LHC.

    “You have to look everywhere, because there are no guarantees. This is what research is all about,” says Arvanitaki. “What we think is correct, and what Nature does, may be two different things.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition

     