Tagged: ars technica Toggle Comment Threads | Keyboard Shortcuts

  • richardmitnick 12:02 pm on December 31, 2019 Permalink | Reply
    Tags: ars technica, ESA’s Characterising Exoplanet Satellite Cheops, Future giant ground based optical telescopes

    From ars technica: “The 2010s: Decade of the exoplanet” 

    Ars Technica
    From ars technica

    12/31/2019
    John Timmer

    Artist’s conception of Kepler-186f, the first Earth-size exoplanet found in a star’s “habitable zone.”

    The Belgian-operated TRAPPIST robotic telescope at ESO’s La Silla Observatory in Chile.

    A size comparison of the planets of the TRAPPIST-1 system, lined up in order of increasing distance from their host star. The planetary surfaces are portrayed with an artist’s impression of their potential surface features, including water, ice, and atmospheres. NASA

    The Alpha Centauri system (Alpha Centauri A and B, with Proxima Centauri), 27 February 2012. Credit: Skatebiker.

    The last ten years will arguably be seen as the “decade of the exoplanet.” That might seem like an obvious thing to say, given that the discovery of the first exoplanet was honored with a Nobel Prize this year. But that discovery happened back in 1995—so what made the 2010s so pivotal?

    One key event: 2009’s launch of the Kepler planet-hunting probe.

    NASA’s Kepler space telescope and its K2 extended mission, which operated from March 7, 2009 until November 15, 2018.

    Kepler spawned a completely new scientific discipline, one that has moved from basic discovery—there are exoplanets!—to inferring exoplanetary composition, figuring out exoplanetary atmospheres, and pondering what exoplanets might tell us about prospects for life outside our Solar System.

    To get a sense of how this happened, we talked to someone who was in the field when the decade started: Andrew Szentgyorgyi, currently at the Harvard-Smithsonian Center for Astrophysics, where he’s the principal investigator on the Giant Magellan Telescope’s Large Earth Finder instrument.

    Giant Magellan Telescope, 21 meters, to be built at the Carnegie Institution for Science’s Las Campanas Observatory, some 115 km (71 mi) north-northeast of La Serena, Chile, at over 2,500 m (8,200 ft) elevation.

    In addition to being famous for having taught your author his “intro to physics” course, Szentgyorgyi was working on a similar instrument when the first exoplanet was discovered.

    Two ways to find a planet

    The Nobel-winning discovery of 51 Pegasi b came via the “radial velocity” method, which relies on the fact that a planet exerts a gravitational influence on its host star, causing the star to accelerate slightly toward the planet.

    Radial velocity method. Credit: Las Cumbres Observatory.

    Radial velocity. Image via SuperWASP, http://www.superwasp.org-exoplanets.htm

    Unless the planet’s orbital plane is perpendicular to the line of sight between Earth and the star (a face-on orbit), some of that acceleration will move the star either closer to or farther from Earth. That motion can be detected as a blue or red shift in the star’s light, respectively.
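    To make the scale of this signal concrete, here is a minimal Python sketch (my own illustration, not from the article) that evaluates the standard semi-amplitude formula for a circular orbit. The planet and star values are illustrative, chosen to resemble a 51 Pegasi b-like hot Jupiter.

```python
import numpy as np

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
M_JUP = 1.898e27       # kg

def rv_semi_amplitude(m_planet, m_star, period_days, inclination_deg=90.0):
    """Radial-velocity semi-amplitude K (m/s) for a circular orbit."""
    P = period_days * 86400.0
    i = np.radians(inclination_deg)
    return ((2 * np.pi * G / P) ** (1.0 / 3.0)
            * m_planet * np.sin(i)
            / (m_star + m_planet) ** (2.0 / 3.0))

# Illustrative values resembling 51 Pegasi b: roughly half a Jupiter mass
# on a 4.2-day orbit around a Sun-like star.
K = rv_semi_amplitude(0.47 * M_JUP, 1.0 * M_SUN, 4.23)
print(f"semi-amplitude: {K:.0f} m/s")        # on the order of tens of m/s

# The observable is a periodic wobble in the star's line-of-sight velocity:
t = np.linspace(0.0, 10.0, 200)              # days
v_r = K * np.sin(2.0 * np.pi * t / 4.23)     # alternating blue- and red-shift
```

    A signal of a few tens of meters per second is what made hot Jupiters the first exoplanets found by this method; an Earth-like planet around a Sun-like star would produce a wobble of only about 10 centimeters per second.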

    The surfaces of stars can also expand and contract, which produces red and blue shifts of their own, but those shifts won’t have the regularity of the acceleration produced by an orbiting body. This overlap explains why, back in the 1990s, people studying surface changes in stars were already building the hardware needed to measure radial velocity.

    “We had a group that was building instruments that I’ve worked with to study the pulsations of stars—asteroseismology,” Szentgyorgyi told Ars, “but that turns out to be sort of the same instrumentation you would use” to discern exoplanets.

    He called the discovery of 51 Pegasi b a “seismic event” and said that he and his collaborators began thinking about how to use their instruments “probably when I got the copy of Nature” that the discovery was published in. Because some researchers already had the right equipment, a steady if small flow of exoplanet announcements followed.

    During this time, researchers developed an alternate way to find exoplanets, termed the “transit method.”

    Planet transit. NASA/Ames

    The transit method requires a more limited geometry from an exoplanet’s orbit: the plane has to cause the exoplanet to pass through the line of sight between its host star and Earth. During these transits, the planet will eclipse a small fraction of light from the host star, causing a dip in its brightness. This doesn’t require the specialized equipment needed for radial velocity detections, but it does require a telescope that can detect small brightness differences despite the flicker caused by the light passing through our atmosphere.
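    For a sense of how small those brightness dips are, here is a short sketch (again my own illustration, not from the article) using the geometric rule of thumb that the transit depth is roughly the ratio of the planet’s and star’s disk areas, (Rp/Rstar)².

```python
R_SUN = 6.957e8       # meters
R_EARTH = 6.371e6
R_JUPITER = 7.149e7

def transit_depth(r_planet, r_star):
    """Approximate fraction of starlight blocked during a central transit."""
    return (r_planet / r_star) ** 2

# An Earth-size planet dims a Sun-like star by only ~0.008 percent,
# while a Jupiter-size planet produces a dip of roughly 1 percent.
print(f"Earth-size:   {transit_depth(R_EARTH, R_SUN):.2e}")    # ~8.4e-05
print(f"Jupiter-size: {transit_depth(R_JUPITER, R_SUN):.2e}")  # ~1.1e-02
```

    Detecting a dip of less than one part in ten thousand is exactly the kind of measurement that atmospheric flicker makes so difficult from the ground, and a big part of why Kepler went to space.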

    By 2009, transit detections were adding regularly to the growing list of exoplanets.

    The tsunami

    Within its first year after launch, Kepler started finding new planets. Given time and a better understanding of how to use the instrument, the early years of the 2010s saw thousands of new planets cataloged. In 2009, Szentgyorgyi said, “it was still ‘you’re finding handfuls of exoplanetary systems.’ And then with the launch of Kepler, there’s this tsunami of results which has transformed the field.”

    Suddenly, rather than dozens of exoplanets, we knew about thousands.

    The tsunami of Kepler planet discoveries.

    The sheer numbers involved had a profound effect on our understanding of planet formation. Rather than simply having a single example to test our models against—our own Solar System—we suddenly had many systems to examine (containing over 4,000 currently known exoplanets). These include objects that don’t exist in our Solar System, things like hot Jupiters, super-Earths, warm Neptunes, and more. “You found all these crazy things that, you know, don’t make any sense from the context of what we knew about the Solar System,” Szentgyorgyi told Ars.

    It’s one thing to have models of planet formation that say some of these planets can form; it’s quite another to know that hundreds of them actually exist. And, in the case of hot Jupiters, it suggests that many exosolar systems are dynamic, shuffling planets to places where they can’t form and, in some cases, can’t survive indefinitely.

    But Kepler gave us more than new exoplanets; it provided a different kind of data. Radial velocity measurements only tell you how much the star is moving, but that motion could be caused by a relatively small planet with an orbital plane aligned with the line of sight from Earth. Or it could be caused by a massive planet with an orbit that’s highly inclined from that line of sight. Physics dictates that, from our perspective, these will produce the same acceleration of the star. Kepler helped us sort out the differences.

    A massive planet orbiting at a steep angle (left) and a small one orbiting at a shallow one will both produce the same motion of a star relative to Earth.

    “Kepler not only found thousands and thousands of exoplanets, but it found them where we know the geometry,” Szentgyorgyi told Ars. “If you know the geometry—if you know the planet transits—you know your orbital inclination is in the plane you’re looking.” This allows follow-on observations using radial velocity to provide a more definitive mass of the exoplanet. Kepler also gave us the radius of each exoplanet.

    “Once you know the mass and radius, you can infer the density,” Szentgyorgyi said. “There’s a remarkable amount of science you can do with that. It doesn’t seem like a lot, but it’s really huge.”

    Density can tell us if a planet is rocky or watery—or whether it’s likely to have a large atmosphere or a small one. Sometimes, it can be tough to tell two possibilities apart; density consistent with a watery world could also be provided by a rocky core and a large atmosphere. But some combinations are either physically implausible or not consistent with planetary formation models, so knowing the density gives us good insight into the planetary type.
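    As a rough illustration of that mass-plus-radius payoff, the sketch below computes a bulk density from a measured mass and radius; the classification thresholds in the comments are illustrative round numbers, not a published scheme.

```python
import math

M_EARTH = 5.972e24    # kg
R_EARTH = 6.371e6     # m

def bulk_density_g_cm3(mass_kg, radius_m):
    """Bulk density in g/cm^3 from a planet's measured mass and radius."""
    volume = (4.0 / 3.0) * math.pi * radius_m ** 3
    return (mass_kg / volume) / 1000.0          # kg/m^3 -> g/cm^3

# Earth comes out near 5.5 g/cm^3. Very roughly: values near 1 g/cm^3 suggest
# a water-rich world or an extended atmosphere, while values above ~5 g/cm^3
# point toward rocky or iron-rich compositions (illustrative cutoffs only).
print(f"{bulk_density_g_cm3(M_EARTH, R_EARTH):.2f} g/cm^3")
```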

    Beyond Kepler

    Despite NASA’s heroic efforts, which kept Kepler going even after its hardware started to fail, its tsunami of discoveries slowed considerably before the decade was over. By that point, however, it had more than done its job. We had a new catalog of thousands of confirmed exoplanets, along with a new picture of our galaxy.

    For instance, binary star systems are common in the Milky Way; we now know that their complicated gravitational environment isn’t a barrier to planet formation.

    We also know that the most common type of star is the low-mass red dwarf. It was previously possible to think that a star’s low mass would be matched by a low-mass planet-forming disk, preventing the formation of large planets and of large families of smaller planets. Neither turned out to be true.

    “We’ve moved into a mode where we can actually say interesting, global, statistical things about exoplanets,” Szentgyorgyi told Ars. “Most exoplanets are small—they’re sort of Earth to sub-Neptune size. It would seem that probably most of the solar-type stars have exoplanets.” And, perhaps most important, there’s a lot of them. “The ubiquity of exoplanets certainly is a stunner… they’re just everywhere,” Szentgyorgyi added.

    That ubiquity has provided the field with two things. First, it has given scientists the confidence to build new equipment, knowing that there are going to be planets to study. The most prominent piece of gear is NASA’s Transiting Exoplanet Survey Satellite, a space-based telescope designed to perform an all-sky exoplanet survey using methods similar to Kepler’s.

    NASA/MIT TESS, which replaced Kepler in the search for exoplanets.

    But other projects are smaller, focused on finding exoplanets closer to Earth. If exoplanets are everywhere, they’re also likely to be orbiting stars that are close enough so we can do detailed studies, including characterizing their atmospheres. One famous success in this area came courtesy of the TRAPPIST telescopes [above], which spotted a system hosting at least seven planets. More data should be coming soon, too; on December 17, the European Space Agency launched the first satellite dedicated to studying known exoplanets.

    ESA/CHEOPS

    With future telescopes and associated hardware similar to what Szentgyorgyi is working on, we should be able to characterize the atmospheres of planets out to about 30 light years from Earth. One catch: this method requires that the planet passes in front of its host star from Earth’s point of view.

    When an exoplanet transits in front of its star, most of the light that reaches Earth comes directly to us from the star. But a small percentage passes through the atmosphere of the exoplanet, allowing it to interact with the gases there. The molecules that make up the atmosphere can absorb light of specific wavelengths—essentially causing them to drop out of the light that makes its way to Earth. Thus, the spectrum of the light that we can see using a telescope can contain the signatures of various gases in the exoplanet’s atmosphere.

    There are some important caveats to this method, though. Since the fraction of light that passes through the exoplanet atmosphere is small compared to that which comes directly to us from the star, we have to image multiple transits for the signal to stand out. And the host star has to have a steady output at the wavelengths we’re examining in order to keep its own variability from swamping the exoplanetary signal. Finally, gases in the exoplanet’s atmosphere are constantly in motion, which can make their signals challenging to interpret. (Clouds can also complicate matters.) Still, the approach has been used successfully on a number of exoplanets now.
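    The need for multiple transits follows from simple statistics: random photometric noise averages down roughly as the square root of the number of transits you stack. Here is a toy simulation, with made-up depth and noise values, showing that behavior.

```python
import numpy as np

rng = np.random.default_rng(0)

depth = 1.0e-4        # hypothetical 100 ppm absorption signal
noise = 5.0e-4        # per-sample photometric scatter
n_samples = 200       # in-transit measurements per transit

def one_transit():
    """One noisy, normalized in-transit light-curve segment."""
    return 1.0 - depth + rng.normal(0.0, noise, n_samples)

for n_transits in (1, 10, 100):
    stacked = np.mean([one_transit() for _ in range(n_transits)], axis=0)
    measured = 1.0 - stacked.mean()
    sigma = stacked.std() / np.sqrt(n_samples)
    print(f"{n_transits:3d} transits: depth ~ {measured:.2e} +/- {sigma:.1e}")

# The uncertainty shrinks roughly as 1/sqrt(number of transits), which is
# why faint atmospheric signatures only emerge after repeated observations.
```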

    In the air

    Understanding atmospheric composition can tell us critical things about an exoplanet. Much of the news about exoplanet discoveries has been driven by what’s called the “habitable zone.” That zone is defined as the orbital region around a star where the amount of light reaching a planet’s surface is sufficient to keep water liquid. Get too close to the star and there’s enough energy reaching the planet to vaporize the water; get too far away and the energy is insufficient to keep water liquid.
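    Because stellar flux falls off with the square of distance, the habitable zone scales simply with a star’s luminosity. The sketch below captures that scaling; the flux cutoffs are illustrative round numbers rather than any particular published model.

```python
import math

def habitable_zone_au(luminosity_solar, inner_flux=1.1, outer_flux=0.53):
    """
    Rough habitable-zone boundaries in AU from the inverse-square law.
    A planet at distance d (AU) receives a flux of L/d^2 in units where
    the Sun delivers 1 at 1 AU. The cutoff fluxes here are illustrative.
    """
    inner = math.sqrt(luminosity_solar / inner_flux)
    outer = math.sqrt(luminosity_solar / outer_flux)
    return inner, outer

print(habitable_zone_au(1.0))       # Sun-like star: roughly 0.95 to 1.4 AU
print(habitable_zone_au(0.0005))    # faint red dwarf: roughly 0.02 to 0.03 AU
```

    That second line is why the TRAPPIST-1 planets can sit in the habitable zone despite orbiting their star in a matter of days.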

    These limits, however, assume an atmosphere that’s effectively transparent at all wavelengths. As we’ve seen in the Solar System, greenhouse gases can play an outsized role in altering the properties of planets like Venus, Earth, and Mars. At the right distance from a star, greenhouse gases can make the difference between a frozen rock and a Venus-like oven. The presence of clouds can also alter a planet’s temperature and can sometimes be identified by imaging the atmosphere. Finally, the reflectivity of a planet’s surface might also influence its temperature.

    The net result is that we don’t know whether any of the planets in a star’s “habitable zone” are actually habitable. But understanding the atmosphere can give us good probabilities, at least.

    The atmosphere can also open a window into the planet’s chemistry and history. On Venus, for example, the huge levels of carbon dioxide and the presence of sulfur dioxide clouds indicate that the planet has an oxidizing environment and that its atmosphere is dominated by volcanic activity. The composition of the gas giants in the outer Solar System likely reflects the gas that was present in the disk that formed the planets early in the Solar System’s history.

    But the most intriguing prospect is that we could find something like Earth, where biological processes produce both methane and the oxygen that ultimately converts it to carbon dioxide. The presence of both in an atmosphere indicates that some process(es) are constantly producing the gases, maintaining a long-term balance. While some geological phenomena can produce both these chemicals, finding them together in an atmosphere would at least be suggestive of possible life.

    Interdisciplinary

    Just the prospect of finding hints of life on other worlds has rapidly transformed the study of exoplanets, since it’s a problem that touches on nearly every area of science. Take the issue of atmospheres and habitability. Even if we understand the composition of a planet’s atmosphere, its temperature won’t just pop out of a simple equation. Distance from the star, type of star, the planet’s rotation, and the circulation of the atmosphere will all play a role in determining conditions. But the climate models that we use to simulate Earth’s atmosphere haven’t been capable of handling anything but the Sun and an Earth-like atmosphere. So extensive work has had to be done to modify them to work with the conditions found elsewhere.

    Similar problems appear everywhere. Geologists and geochemists have to infer likely compositions given little more than a planet’s density and perhaps its atmospheric compositions. Their results need to be combined with atmospheric models to figure out what the surface chemistry of a planet might be. Biologists and biochemists can then take that chemistry and figure out what reactions might be possible there. Meanwhile, the planetary scientists who study our own Solar System can provide insight into how those processes have worked out here.

    “I think it’s part of the Renaissance aspect of exoplanets,” Szentgyorgyi told Ars. “A lot of people now think a lot more broadly, there’s a lot more cross-disciplinary interaction. I find that I’m going to talks about geology, I’m going to talks about the atmospheric chemistry on Titan.”

    The next decade promises incredible progress. A new generation of enormous telescopes is expected to come online, and the James Webb space telescope should devote significant time to imaging exosolar systems.

    NASA/ESA/CSA Webb Telescope annotated


    ____________________________________________
    Other giant 30-meter-class telescopes planned

    ESO’s E-ELT, a 39-meter telescope to be built atop Cerro Armazones in the Atacama Desert of northern Chile, at an altitude of 3,060 metres (10,040 ft).

    TMT-Thirty Meter Telescope, proposed and now approved for Mauna Kea, Hawaii, USA, at 4,207 m (13,802 ft) above sea level; the only giant 30-meter-class telescope planned for the Northern Hemisphere.


    ____________________________________________

    We’re likely to end up with much more detailed pictures of some intriguing bodies in our galactic neighborhood.

    The data that will flow from new experiments and new devices will be interpreted by scientists who have already transformed their field. That transformation—from proving that exoplanets exist to establishing a vibrant, multidisciplinary discipline—really took place during the 2010s, which is why it deserves the title “decade of exoplanets.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Ars Technica was founded in 1998 when Founder & Editor-in-Chief Ken Fisher announced his plans for starting a publication devoted to technology that would cater to what he called “alpha geeks”: technologists and IT professionals. Ken’s vision was to build a publication with a simple editorial mission: be “technically savvy, up-to-date, and more fun” than what was currently popular in the space. In the ensuing years, with formidable contributions by a unique editorial staff, Ars Technica became a trusted source for technology news, tech policy analysis, breakdowns of the latest scientific advancements, gadget reviews, software, hardware, and nearly everything else found in between layers of silicon.

    Ars Technica innovates by listening to its core readership. Readers have come to demand devotedness to accuracy and integrity, flanked by a willingness to leave each day’s meaningless, click-bait fodder by the wayside. The result is something unique: the unparalleled marriage of breadth and depth in technology journalism. By 2001, Ars Technica was regularly producing news reports, op-eds, and the like, but the company stood out from the competition by regularly providing long thought-pieces and in-depth explainers.

    And thanks to its readership, Ars Technica also accomplished a number of industry-leading moves. In 2001, Ars launched a digital subscription service when such things were non-existent for digital media. Ars was also the first IT publication to begin covering the resurgence of Apple, and the first to draw analytical and cultural ties between the world of high technology and gaming. Ars was also first to begin selling its long-form content in digitally distributable forms, such as PDFs and eventually eBooks (again, starting in 2001).

     
  • richardmitnick 4:45 pm on October 3, 2019 Permalink | Reply
    Tags: ars technica

    From ars technica: “Plate tectonics runs deeper than we thought” 

    Ars Technica
    From ars technica

    10/3/2019
    Howard Lee

    At 52 years old, plate tectonics has given geologists a whole new level to explore.

    Þingvellir, or Thingvellir, is a national park in southwestern Iceland, about 40 km northeast of Iceland’s capital, Reykjavík. It’s a site of geological significance, as the visuals may indicate.

    It’s right there in the name: “plate tectonics.” Geology’s organizing theory hinges on plates—thin, interlocking pieces of Earth’s rocky skin. Plates’ movements explain earthquakes, volcanoes, mountains, the formation of mineral resources, a habitable climate, and much else. They’re part of the engine that drags carbon from the atmosphere down into Earth’s mantle, preventing a runaway greenhouse climate like Venus. Their recycling through the mantle helps to release heat from Earth’s liquid metal core, making it churn and generate a magnetic field to protect our atmosphere from erosion by the solar wind.

    The name may not have changed, but today the theory is in the midst of an upgrade to include a deeper level—both in our understanding and in its depth in our planet. “There is a huge transformation,” says Thorsten Becker, the distinguished chair in geophysics at the University of Texas at Austin. “Where we say: ‘plate tectonics’ now, we might mean something that’s entirely different than the 1970s.”

    Plate tectonics emerged in the late 1960s when geologists realized that plates moving on Earth’s surface at fingernail-growth speeds side-swipe each other at some places (like California) and converge at others (like Japan). When they converge, one plate plunges down into Earth’s mantle under the other plate, but what happened to it deeper in the mantle remained a mystery for most of the 20th century. Like an ancient map labeled “here be dragons,” knowledge of the mantle remained skin-deep except for its major boundaries.

    Now a marriage of improved computing power and new techniques to investigate Earth’s interior has enabled scientists to address some startling gaps in the original theory, like why there are earthquakes and other tectonic phenomena on continents thousands of miles from plate boundaries:

    “Plate tectonics as a theory says zero about the continents; [it] says that the plates are rigid and are moving with respect to each other and that the deformation happens only at the boundaries,” Becker told Ars. ”That is nowhere exactly true! It’s [only] approximately true in the oceanic plates.”

    There are other puzzles, too. Why did the Andes and Tibet wait tens of millions of years after their plates began to converge before they grew tall? And why did the Sea of Japan and the Aegean Sea form rapidly, but only after plates had been plunging under them for tens of millions of years?

    “They’ve been puzzling us for ages, and they don’t fit well into plate tectonic theory,” says Jonny Wu, a professor focused on tectonics and mantle structure at the University of Houston. “That’s why we’re looking deeper into the mantle to see if this could explain a whole side of tectonics that we don’t really understand.”

    The subduction of a tectonic plate. British Geological Survey

    Plate Tectonics meets Slab Tectonics

    The plate tectonics theory’s modern upgrade is the result of new information. Since the mid-1990s, Earth’s interior has gradually been charted by CAT-scan-like images, built by mapping the echoes of powerful earthquakes that bounce off features within Earth’s underworld, the way a bat screeches to echolocate its surroundings. These “seismic tomography” pictures show that plates that plunge down from the surface and into the mantle (“subduct” in the language of geologists) don’t just assimilate into a formless blur, as often depicted. In fact, they have a long and eventful afterlife in the mantle.

    “When I was a PhD student in the early 2000s, we were still raised with the idea that there is a rapidly convecting upper mantle that doesn’t communicate with the lower mantle,” says Douwe van Hinsbergen, a professor of global plate tectonics at the University of Utrecht. Now, seismic tomography shows “unequivocal evidence that subducted lithosphere [plate material] goes right down into the lower mantle.” This has settled decades of debate about how deep heat-driven convection extends through the mantle.

    van Hinsbergen and his colleagues have mapped many descending plates (dubbed “slabs”), scattered throughout the mantle, oozing and sagging inexorably toward the core-mantle boundary 2,900 kilometers (1,800 miles) below our feet, in an “Atlas of the Underworld.” Some slabs are so old they were tectonic plates on Earth’s surface long before the first dinosaurs evolved.

    Moving through the mantle from top to bottom, blue areas are roughly equivalent to subducting slabs. Top panel: seismic tomography based on earthquake P (primary) waves; bottom panel: seismic tomography based on earthquake S (secondary) waves. Credit: van der Meer et al., Tectonophysics 2018, Atlas of the Underworld.

    Fluid solid

    Slabs sink through the mantle because they are cooler and therefore denser than the surrounding mantle. This works because “the Earth acts as a fluid on very long timescales,” explains Carolina Lithgow-Bertelloni, the endowed chair in geosciences at UCLA.

    High-pressure, high-temperature diamond-tipped anvil apparatuses can now recreate the conditions of the mantle and even the center of the core, albeit on a tiny scale. They show that rock at mantle pressures and temperatures is fluid but not liquid, solid yet mobile—confounding our intuition like a Salvador Dali painting. Here rigidity is time-dependent: solid crystals flow, and ice is burning hot.
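    A rough back-of-envelope shows why “cooler and therefore denser” is enough to drive this flow. The numbers below are illustrative order-of-magnitude values, not measurements of any particular slab, and the Stokes formula crudely treats the slab as a sinking sphere.

```python
# Illustrative order-of-magnitude values only.
rho_mantle = 3300.0      # kg/m^3, ambient upper-mantle density
alpha = 3.0e-5           # 1/K, thermal expansion coefficient
delta_T = 500.0          # K, how much cooler the slab is than its surroundings
g = 9.8                  # m/s^2
viscosity = 1.0e21       # Pa*s, a commonly quoted upper-mantle estimate
radius = 50.0e3          # m, effective radius of a slab-sized blob

# A cooler slab is denser than its surroundings by roughly rho * alpha * delta_T.
delta_rho = rho_mantle * alpha * delta_T                  # ~50 kg/m^3

# Stokes settling speed of a sphere, used only as an order-of-magnitude guide.
v = 2.0 * delta_rho * g * radius**2 / (9.0 * viscosity)   # m/s

cm_per_year = v * 100.0 * 3.15e7
print(f"density excess: ~{delta_rho:.0f} kg/m^3")
print(f"sinking speed:  ~{cm_per_year:.1f} cm/yr")   # about a centimeter per year
```

    Even with such crude inputs, the answer lands in the same fingernail-growth range as plate motions at the surface.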

    But even by the surreal standards of Earth’s underworld, a layer within the mantle between 410 and 660 kilometers (255-410 miles) deep is especially peculiar. Blobs trapped in diamonds that made it back from there to Earth’s surface reveal it to be rich in water. It is where carbon that was once life on—or in—the seafloor waits, as carbonate minerals, to be recycled into the atmosphere, and where diamonds grow fat over eons before, occasionally, being recycled into the crowns of royalty. Earthquake waves are distorted as they pass through it, showing that the 660-kilometer-deep boundary has mountainous topography, with peaks up to 3 kilometers (2 miles) tall, frosted with a layer of weak matter.

    Called the “Mantle Transition Zone,” this layer is a natural consequence of the increasing weight of the rock above as you go deeper underground. At certain depths, the pressure forces atoms to huddle tighter together, forming new, more compact minerals. The biggest of these “phase transitions” occurs at a 660-kilometer-deep horizon, where seawater that was trapped in subducting slabs is squeezed out of minerals. The resulting drier, ultra-dense, and ultra-viscous material sinks down into the lower mantle, moving more than 10 times slower than it did in the upper mantle.

    For sinking slabs, that’s like a traffic light on a highway (in this analogy, your commute takes about 20 million years, one-way), so slabs typically grind to a halt like cars in a traffic jam when they hit the 660-kilometer level. Seismic tomography shows that they stagnate there, sometimes for millions of years. Or they pile up, buckle, and concertina. Or they slide horizontally. Or, sometimes, they just pierce the Transition Zone like a spear.
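    That 20-million-year “commute” is consistent with simple arithmetic: at plate-like speeds of a few centimeters per year, it takes tens of millions of years to reach the 660-kilometer horizon, and an order of magnitude longer to cross the lower mantle if a slab really does slow by a factor of ten or more. The speeds below are purely illustrative.

```python
# Purely illustrative speeds; real slab sinking rates vary widely.
upper_mantle_cm_per_yr = 3.0      # roughly plate-like speeds
lower_mantle_cm_per_yr = 0.3      # "more than 10 times slower"

def travel_time_myr(depth_km, speed_cm_per_yr):
    """Millions of years to sink through depth_km at a constant speed."""
    return depth_km * 1.0e5 / speed_cm_per_yr / 1.0e6   # km -> cm, then yr -> Myr

print(travel_time_myr(660, upper_mantle_cm_per_yr))         # ~22 Myr to the 660-km horizon
print(travel_time_myr(2900 - 660, lower_mantle_cm_per_yr))  # hundreds of Myr to the core-mantle boundary
```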

    It’s these differences in how slabs cross the Mantle Transition Zone that’s the key to explaining those puzzling phenomena on Earth’s continents.

    Pulling back the subducted bedsheet

    To see how the Andes were affected when a slab crossed the Mantle Transition Zone, Wu’s PhD student Yi-Wei Chen worked with Wu and structural geologist John Suppe, using seismic tomography pictures of the Nazca Slab that’s in the mantle under South America.

    They clicked the equivalent of an “undo” button to “un-subduct” the slab: “Like a giant bedsheet that’s fallen off the bed, we could slowly pull it back up and just keep pulling and see how big it was,” says Wu. Their technique is borrowed from the way geologists flatten out contorted crustal rocks in mountain belts and oil fields to understand what the layers were like before they were folded. Using the age of the Pacific Ocean floor, the rate at which ocean plates are being manufactured at mid-ocean ridges, and the configuration of those ridges, the team compared the subduction history of South America with a large database of surface geological observations, including the timing of volcanic eruptions.

    A slice through Earth’s mantle under the Andes. Jonny Wu/University of Houston

    “Our plate model is just a model, but there is a huge catalog of tectonic signals, especially magmatism, to work with,” says Wu. “We started to see a link between when the slab reached the mid-mantle viscosity change and things that were happening in the surface.”

    They found that the main uplift of the Andes was delayed by 20–30 million years after the most recent episode of subduction began, a delay that matches the time for the slab to arrive at, stagnate in, and then sink below the Mantle Transition Zone. Delays like that—millions of years between the start of subduction and the start of serious mountain building—have also been recognized in Turkey and Tibet.

    How can a slab sinking through the Mantle Transition Zone build mountains on an entirely different plate, 660 kilometers away through the mantle?

    It’s a mantle wind that blows continents into mountains

    “If you take something that’s dense and you make it go down, that’s going to generate flow everywhere, and that is the ‘mantle wind,’ so there’s nothing mysterious about it!” says Lithgow-Bertelloni.

    Geodynamicists like Lithgow-Bertelloni and Becker use a different approach than Wu’s bedsheet-like un-subduction process. Instead, they code the equations of fluid dynamics into computer models to simulate the flow of high-pressure rock. These models are constrained by the physical conditions in Earth’s mantle gleaned from high-pressure experiments and by the properties of earthquake waves that have traveled through those depths. By playing a “video” of these simulations, scientists can check the behavior of slabs in their models against the “ground truth” of seismic tomographic images. The better they match, the more accurately their models represent how this planet works.

    “How the geometry evolves has to conform to physics,” says Becker. “The deformation is different in the mantle from the shallow crust because things tend to flow rather than break, as temperatures and pressures are higher.”

    Their models show that, as slabs sink below the Mantle Transition Zone, they suck mantle down behind them, creating a far-reaching downwelling current of flowing rock. And it’s that down-going gust of mantle wind that drags continental plates above it, like a conveyor belt, compressing them and squeezing mountain belts skyward in places like the Andes, Turkey, and Tibet.

    The location of the slabs relative to that 660-kilometer horizon determines what kind of mountain chain you get. If a subducting slab hasn’t yet sunk below the 660-kilometer layer, you get the kind of mountains envisaged by classic plate tectonics—without extreme altitudes and confined to a narrow belt above the subducting slab. Examples include the ones around the Western Pacific and Italy: “We think the present-day Apennines are an example of that,” says Becker.

    The bigger mountain belts east of the Pacific and the Tibetan Plateau are in a different category: “Once the slab transitions through the 660, you induce a much larger scale of convection cell. That’s when we are engaging what we call whole mantle ‘conveyor belts.’ And it’s when you have those global conveyor belts and symmetric downwelling rather than a one-sided downwelling, that’s when you get a lot of the [mountain building],” Becker said.

    So one slab sinking below the Mantle Transition Zone can create a mantle undertow that squeezes up mountains on an entirely different plate, 660 kilometers above it. This new level of tectonics now makes sense of other geological puzzles.

    What’s stressing Asia?

    The Tibetan Plateau north of the Himalayas is known as the “Roof of the World” because it stands an average of 4.5 kilometers (15,000 feet) above sea level. It achieved that altitude around 34 million years ago, some 24 million years after the Indian continent began to collide with Asia, and more than 100 million years after seafloor first began to plunge into the mantle under South Asia.

    “The surface elevation of the Tibetan Plateau was acquired after much of the crustal deformation took place, suggesting that processes in the underlying mantle may have played a key role in the uplift,” van Hinsbergen commented in the journal Science recently.

    During those 100 million years, the oceanic slab attached to India seems to have stagnated and then penetrated the mantle’s 660-kilometer layer several times before the Indian continent finally collided with Asia. With that collision, continental crust began to plunge into the mantle. But it took millions of years for that continental rock, more buoyant than the oceanic rock that preceded it, to cause a slab pile-up in the Mantle Transition Zone beneath Tibet. India’s slab buckled and broke off, releasing the amputated Indian plate to buoy up the Tibetan Plateau.

    Seismic tomographic slice through India and Tibet showing the broken-off Himalaya Slab (Hi) and older Indian slab (In) sinking toward the core.

    The fact that India continues, even today, to bulldoze its way under Asia, long after the continents collided and the slab broke off, has been another puzzle for geologists. It shows that forces beyond classic plate tectonics must be at work.

    But India’s continued motion isn’t the only mystery of Central Asia. Lake Baikal in Siberia occupies a deep rift in Earth’s crust caused by stresses that pull the crust there apart, and across Central Asia there are San Andreas-like fault zones responsible for devastating earthquakes. These are out of place for classic plate tectonics, since they are thousands of miles from a plate boundary. What, then, is stressing the interior of Asia?

    The answer is, again, blowing in the mantle wind.

    “This is not due simply to the fact that India has collided into Asia. This is the result of longstanding subduction in the region. You have compression all through the Japan subduction zone and into Indonesia and India,” says Lithgow-Bertelloni. “There’s been a ring of compression and there’s been downwelling, and that’s what gives you the regional stress pattern today.”

    In other words, in parts of the world where the mantle wind converges and sinks, it drags plates together forming big mountain chains. Farther away from that convergence, the same mantle flow stretches the overlying plates, causing rifts and faults.

    Becker and Claudio Faccenna of Roma Tre University linked the downwelling current under East Asia to upwelling of hot mantle rock under Africa, a giant circuit of mantle wind that drives India and Arabia northward today. With Laurent Jolivet of Sorbonne University, they reason that this mantle wind flows under Asia, stressing those Central Asian faults and rifting the ground under Lake Baikal. They also think it may have stretched East Asia apart to form a series of inland lakes and seas, like the Sea of Japan.

    Slab syringe?

    Wu thinks those East Asian seas and lakes might instead owe their origin to a different gust of mantle wind: “East Asia is puzzling in that you have these marginal basins that have formed since the Pacific Slab began to subduct under that region. We think the Pacific Slab began to subduct around 50 million years ago and, shortly after that, many of these marginal basins opened up, including the Japan Sea, the Kuril Basin, the Sea of Okhotsk. We don’t really have a good idea why they formed, but the timings overlap.”

    Japan was part of the Asian mainland until about 23 million years ago, when it rapidly (for geologists) swung away from the mainland like double saloon doors in an old western movie: “These doors swung open very quickly, apparently in less than 2 or 3 million years. The Japan Slab [is] underneath Beijing today, 2,500 kilometers inland. It’s puzzling that it’s so far inland, and it can be followed all the way back to the actual Pacific Slab today,” Wu told me at the AGU conference in Washington, DC, last December. “What we’ve shown at this conference is that slab is most likely all Pacific Slab, and it looks like this slab has to move laterally in the Mantle Transition Zone.”

    In other words, rather than sinking further down into the mantle, the Pacific Slab seems to have slid sideways in the Mantle Transition Zone, hundreds of kilometers beneath the Asian Plate on the surface. Like a syringe plunger, it must have squeezed mantle material out of its way, and it could be that fugitive flow of mantle that stretched East Asia apart to create the Sea of Japan, the Kuril Basin and the Sea of Okhotsk.

    Seismic tomographic picture showing the subducted Pacific Slab (white to purple colors) extending in the Mantle Transition Zone as far as Beijing.

    Perhaps. Wu is the first to say this is speculative, but it’s an idea that’s consistent with plate reconstructions by other scientists. It also fits the weird, water-rich properties of the Mantle Transition Zone, with weak minerals and pockets of fluid that would lubricate the slab’s penetration sideways rather than downwards. Fluid-dynamic computer models expect a weak lubricating layer at the 660-kilometer horizon. “We see this in the numerical simulations of convection,” says Lithgow-Bertelloni. “You see a lot of horizontal travel because the slab can’t go down because of a combination of things that are going on in terms of the viscosity structure and the phase transitions, and so it gets trapped in the Transition Zone, and so it has to travel.”

    Becker is more skeptical: “How far the slab under Asia travelled laterally is a very interesting question that a lot of people are thinking about, and it’s one that comes down to what sort of tomographic models you look at,” he says.

    Science by upgrade not by uproot

    It’s skepticism like Becker’s that drives science forward through a never-ending trial by data. Scientists try to break a theory by throwing observations at it to see if it handles them. New data and new techniques sometimes throw up puzzles that demand upgrades or bugfixes to the theory, but most of the theory tends to remain intact. So it is, and always has been, with plate tectonics. Even though its key ideas crystallized in 1967, it didn’t arrive fully formed in a blinding “eureka!” moment. It was built on discoveries and ideas from more than two dozen scientists over six decades until it explained a range of geological and geophysical observations all over the world. That process continues today.

    “What has changed dramatically since the late ‘90s is that we’re now approaching understanding of plate tectonics that actually includes the continents!” says Becker.

    Ironically, this new direction harks back to the 1930s: “Arthur Holmes had a textbook in the 1930s where he associated mountain building such as the Andes with mantle convection,” says Becker. When Alfred Wegener proposed that continents drifted, he lacked a mechanism for it. With hindsight it’s strange that few made the link with Holmes’ work: “For some reason science was not ready to make that connection,” says Becker, “and it took until the establishment of seafloor spreading in the late ‘60s for people to make the link.”

    The grand challenge ahead

    This new, deeper understanding of plate tectonics is now rippling through the Earth sciences. “Modern tectonics no longer is restricted to classical concepts involving the movements and interactions of thin, rigid tectonic (lithospheric) plates,” says a Grand Challenge report to the US National Science Foundation last year. So Earth scientists need to “revisit our traditional definition of tectonics as a field.”

    Ramifications of a new plate tectonics theory extend far beyond geology, too, because it’s woven into the fabric of other sciences, like long-term climate change and the habitability of exoplanets. We’re also realizing that life and climate can affect plate tectonics over long timescales.

    “Plate tectonics 2.0 is a model of Earth evolution that includes not just oceanic plates but includes the continental plates,” Becker says. “And once you include continental plates then you have to worry about the processes such as sediments coming down from the mountains, lubricating the plate, carbon gets dumped on them, then carbon gets released at the subduction zones. Perhaps you might have control of subduction by climate.”

    van Hinsbergen puts it this way: “Undoubtedly we’ll have major progress to make in the next decades. But the black box of the dynamics of our planet interior is now starting to be comprehensively constrained by observations, even as deep as to the core-mantle boundary.”

    So like the plates themselves, it seems plate tectonics as a theory will continue to shift, too.

    See the full article here.


     
  • richardmitnick 12:35 pm on December 9, 2018 Permalink | Reply
    Tags: AI at NASA, ars technica

    From ars technica: “NASA’s next Mars rover will use AI to be a better science partner” 

    Ars Technica
    From ars technica

    12/6/2018
    Alyson Behr

    Experience gleaned from the EO-1 satellite will help JPL build science smarts into the next rover.

    NASA Mars 2020 rover schematic


    NASA’s Mars 2020 rover. Credit: NASA

    NASA can’t yet put a scientist on Mars. But in its next rover mission to the Red Planet, NASA’s Jet Propulsion Laboratory is hoping to use artificial intelligence to at least put the equivalent of a talented research assistant there. Steve Chien, head of the AI Group at NASA JPL, envisions working with the Mars 2020 Rover “much more like [how] you would interact with a graduate student instead of a rover that you typically have to micromanage.”

    The 13-minute delay in communications between Earth and Mars means that the movements and experiments conducted by past and current Martian rovers have had to be meticulously planned. While more recent rovers have had the capability of recognizing hazards and performing some tasks autonomously, they’ve still placed great demands on their support teams.

    Chien sees AI’s future role in the human spaceflight program as one in which humans focus on the hard parts, like directing robots in a natural way while the machines operate autonomously and give the humans a high-level summary.

    “AI will be almost like a partner with us,” Chien predicted. “It’ll try this, and then we’ll say, ‘No, try something that’s more elongated, because I think that might look better,’ and then it tries that. It understands what elongated means, and it knows a lot of the details, like trying to fly the formations. That’s the next level.

    “Then, of course, at the dystopian level it becomes sentient,” Chien joked. But he doesn’t see that happening soon.

    Old-school autonomy

    NASA has a long history with AI and machine-learning technologies, Chien said. Much of that history has been focused on using machine learning to help interpret extremely large amounts of data. While much of that machine learning involved spacecraft data sent back to Earth for processing, there’s a good reason to put more intelligence directly on the spacecraft: to help manage the volume of communications.

    Earth Observing-1 (EO-1) was an early example of putting intelligence aboard a spacecraft. Launched in November 2000, EO-1 was originally planned to have a one-year mission, part of which was to test how basic AI could handle some scientific tasks onboard. One of the AI systems tested aboard EO-1 was the Autonomous Sciencecraft Experiment (ASE), a set of software that allowed the satellite to make decisions based on data collected by its imaging sensors. ASE included onboard science algorithms that performed image data analysis to detect trigger conditions, such as newly discovered features or changes relative to previous observations, that would prompt the spacecraft to pay more attention to something. The software could also detect cloud cover and edit it out of final image packages transmitted home. EO-1’s ASE could also adjust the satellite’s activities based on the science collected in a previous orbit.

    With volcano imagery, for example, Chien said, JPL had trained the machine-learning software to recognize volcanic eruptions from spectral and image data. Once the software spotted an eruption, it would then act out pre-programmed policies on how to use that data and schedule follow-up observations. For example, scientists might set the following policy: if the spacecraft spots a thermal emission that is above two megawatts, the spacecraft should keep observing it on the next overflight. The AI software aboard the spacecraft already knows when it’s going to overfly the emission next, so it calculates how much space is required for the observation on the solid-state recorder as well as all the other variables required for the next pass. The software can also push other observations off for an orbit to prioritize emerging science.
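    To make the flavor of that policy concrete, here is a small, hypothetical sketch of rule-based trigger-and-schedule logic of the kind described. It is not EO-1’s actual flight software; the class names, the 2 MW threshold as a code constant, and the data-volume numbers are all invented for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Observation:
    target: str
    orbit: int
    data_volume_mb: float
    priority: int                 # lower number = more important

@dataclass
class OnboardScheduler:
    recorder_capacity_mb: float
    queue: List[Observation] = field(default_factory=list)

    def handle_detection(self, target: str, thermal_emission_mw: float,
                         next_overflight_orbit: int):
        """Pre-programmed policy: re-observe hot targets on the next pass."""
        if thermal_emission_mw > 2.0:          # the example 2 MW policy threshold
            self.queue.append(Observation(target, next_overflight_orbit,
                                          data_volume_mb=300.0, priority=1))

    def plan_orbit(self, orbit: int) -> List[Observation]:
        """Fit the highest-priority observations into the recorder for one orbit."""
        planned, used = [], 0.0
        for obs in sorted(self.queue, key=lambda o: o.priority):
            if obs.orbit == orbit and used + obs.data_volume_mb <= self.recorder_capacity_mb:
                planned.append(obs)
                used += obs.data_volume_mb
        # Anything that didn't fit stays queued and gets pushed to a later orbit.
        self.queue = [o for o in self.queue if o not in planned]
        return planned

sched = OnboardScheduler(recorder_capacity_mb=1000.0)
sched.handle_detection("volcano_A", thermal_emission_mw=3.5, next_overflight_orbit=42)
print(sched.plan_orbit(42))
```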

    2020 and beyond

    “That’s a great example of things that we were able to do and that are now being pushed in the future to more complicated missions,” Chien said. “Now we’re looking at putting a similar scheduling system onboard the Mars 2020 rover, which is much more complicated. Since a satellite follows a very predictable orbit, the only variable that an orbiter has to deal with is the science data it collects.

    “When you plan to take a picture of this volcano at 10am, you pretty much take a picture of the volcano at 10am, because it’s very easy to predict,” Chien continued. “What’s unpredictable is whether the volcano is erupting or not, so the AI is used to respond to that.” A rover, on the other hand, has to deal with a vast collection of environmental variables that shift moment by moment.

    Even for an orbiting satellite, scheduling observations can be very complicated. So AI plays an important role even when a human is making the decisions, said Chien. “Depending on mission complexity and how many constraints you can get into the software, it can be done completely automatically or with the AI increasing the person’s capabilities. The person can fiddle with priorities and see what different schedules come out and explore a larger proportion of the space in order to come up with better plans. For simpler missions, we can just automate that.”

    Despite the lessons learned from EO-1, Chien said that spacecraft using AI remain “the exception, not the norm. I can tell you about different space missions that are using AI, but if you were to pick a space mission at random, the chance that it was using AI in any significant fashion is very low. As a practitioner, that’s something we have to increase uptake on. That’s going to be a big change.”

    See the full article here.


     
  • richardmitnick 9:06 am on October 11, 2018 Permalink | Reply
    Tags: ars technica, Turbulence unsolved, Werner Heisenberg

    From ars technica: “Turbulence, the oldest unsolved problem in physics” 

    Ars Technica
    From ars technica

    10/10/2018
    Lee Phillips

    The flow of water through a pipe is still in many ways an unsolved problem.

    Werner Heisenberg won the 1932 Nobel Prize for helping to found the field of quantum mechanics and developing foundational ideas like the Copenhagen interpretation and the uncertainty principle. The story goes that he once said that, if he were allowed to ask God two questions, they would be, “Why quantum mechanics? And why turbulence?” Supposedly, he was pretty sure God would be able to answer the first question.

    Werner Heisenberg. Credit: German Federal Archives

    The quote may be apocryphal, and there are different versions floating around. Nevertheless, it is true that Heisenberg banged his head against the turbulence problem for several years.

    His thesis advisor, Arnold Sommerfeld, assigned the turbulence problem to Heisenberg simply because he thought none of his other students were up to the challenge—and this list of students included future luminaries like Wolfgang Pauli and Hans Bethe. But Heisenberg’s formidable math skills, which allowed him to make bold strides in quantum mechanics, only afforded him a partial and limited success with turbulence.

    Nearly 90 years later, the effort to understand and predict turbulence remains of immense practical importance. Turbulence factors into the design of much of our technology, from airplanes to pipelines, and it factors into predicting important natural phenomena such as the weather. But because our understanding of turbulence has remained largely ad hoc and limited, the development of technology that interacts significantly with fluid flows has long been forced to be conservative and incremental. If only we became masters of this ubiquitous phenomenon of nature, these technologies might be free to evolve in more imaginative directions.

    An undefined definition

    Here is the point at which you might expect us to explain turbulence, ostensibly the subject of the article. Unfortunately, physicists still don’t agree on how to define it. It’s not quite as bad as “I know it when I see it,” but it’s not the best defined idea in physics, either.

    So for now, we’ll make do with a general notion and try to make it a bit more precise later on. The general idea is that turbulence involves the complex, chaotic motion of a fluid. A “fluid” in physics talk is anything that flows, including liquids, gases, and sometimes even granular materials like sand.

    Turbulence is all around us, yet it’s usually invisible. Simply wave your hand in front of your face, and you have created incalculably complex motions in the air, even if you can’t see it. Motions of fluids are usually hidden to the senses except at the interface between fluids that have different optical properties. For example, you can see the swirls and eddies on the surface of a flowing creek but not the patterns of motion beneath the surface. The history of progress in fluid dynamics is closely tied to the history of experimental techniques for visualizing flows. But long before the advent of the modern technologies of flow sensors and high-speed video, there were those who were fascinated by the variety and richness of complex flow patterns.

    One of the first to visualize these flows was scientist, artist, and engineer Leonardo da Vinci, who combined keen observational skills with unparalleled artistic talent to catalog turbulent flow phenomena. Back in 1509, Leonardo was not merely drawing pictures. He was attempting to capture the essence of nature through systematic observation and description. In this figure, we see one of his studies of wake turbulence, the development of a region of chaotic flow as water streams past an obstacle.

    For turbulence to be considered a solved problem in physics, we would need to be able to demonstrate that we can start with the basic equation describing fluid motion and then solve it to predict, in detail, how a fluid will move under any particular set of conditions. That we cannot do this in general is the central reason that many physicists consider turbulence to be an unsolved problem.

    I say “many” because some think it should be considered solved, at least in principle. Their argument is that calculating turbulent flows is just an application of Newton’s laws of motion, albeit a very complicated one; we already know Newton’s laws, so everything else is just detail. Naturally, I hold the opposite view: the proof is in the pudding, and this particular pudding has not yet come out right.

    The lack of a complete and satisfying theory of turbulence based on classical physics has even led to suggestions that a full account requires some quantum mechanical ingredients: that’s a minority view, but one that can’t be discounted.

    An example of why turbulence is said to be an unsolved problem is that we can’t generally predict the speed at which an orderly, non-turbulent (“laminar”) flow will make the transition to a turbulent flow. We can do pretty well in some special cases—this was one of the problems that Heisenberg had some success with—but, in general, our rules of thumb for predicting the transition speeds are summaries of experiments and engineering experience.
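    The best-known of those rules of thumb is the Reynolds number. For flow in a pipe, experiments find that the laminar-to-turbulent transition typically begins somewhere around a Reynolds number of roughly 2,300, an empirical figure rather than something derived from first principles. A minimal sketch:

```python
def reynolds_number(speed_m_s, diameter_m, kinematic_viscosity_m2_s):
    """Re = U * D / nu: a dimensionless ratio of inertial to viscous effects."""
    return speed_m_s * diameter_m / kinematic_viscosity_m2_s

# Water (nu ~ 1e-6 m^2/s) moving at 0.1 m/s through a 2-cm pipe:
re = reynolds_number(0.1, 0.02, 1.0e-6)
print(re)   # 2000: close to the empirically observed transition range for pipes

# The ~2,300 pipe-flow threshold summarizes experiments and engineering
# experience; it does not fall out of the governing equations.
```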

    There are many phenomena in nature that illustrate the often sudden transformation from a calm, orderly flow to a turbulent one. The transition to turbulence. Credit: Dr. Gary Settles

    The figure above is a nice illustration of this transition phenomenon. It shows the hot air rising from a candle flame, using a 19th-century visualization technique that makes gases of different densities look different. Here, the air heated by the candle is less dense than the surrounding atmosphere.

    For another turbulent transition phenomenon familiar to anyone who frequents the beach, consider gentle, rolling ocean waves that become complex and foamy as they approach the shore and “break.” In the open ocean, wind-driven waves can also break if the windspeed is high or if multiple waves combine to form a larger one.

    For another visual aid, there is a centuries-old tradition in Japanese painting of depicting turbulent, breaking ocean waves. In these paintings, the waves are not merely part of the landscape but the main subjects. These artists seemed to be mainly concerned with conveying the beauty and terrible power of the phenomenon, rather than, as was Leonardo, being engaged in a systematic study of nature. One of the most famous Japanese artworks, and an iconic example of this genre, is Hokusai’s “Great Wave,” a woodblock print published in 1831.

    4
    Hokusai’s “Great Wave.”

    For one last reason to consider turbulence an unsolved problem, turbulent flows exhibit a wide range of interesting behaviors in time and space. Most of these behaviors have been discovered by measurement rather than predicted, and there’s still no satisfying theoretical explanation for them.

    Simulation

    Reasons for and against “mission complete” aside, why is the turbulence problem so hard? The best answer comes from looking at both the history and current research directed at what Richard Feynman once called “the most important unsolved problem of classical physics.”

    The most commonly used formula for describing fluid flow is the Navier-Stokes equation. This is the equation you get if you apply Newton’s second law of motion, F = ma (force = mass × acceleration), to a fluid with simple material properties, excluding elasticity, memory effects, and other complications. Such complications arise when we try to accurately model the flows of paint, polymers, and some biological fluids such as blood (many other substances also violate the assumptions behind the Navier-Stokes equation). But for water, air, and other simple liquids and gases, it’s an excellent approximation.
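
    For readers who want to see it spelled out, a standard textbook form of the incompressible Navier-Stokes equation (the notation here is conventional, not taken from the article) is

    \[ \rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right) = -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \mathbf{f}, \qquad \nabla\cdot\mathbf{u} = 0, \]

    where \(\mathbf{u}\) is the velocity field, \(p\) the pressure, \(\rho\) the density, \(\mu\) the viscosity, and \(\mathbf{f}\) any body force such as gravity. The advection term \((\mathbf{u}\cdot\nabla)\mathbf{u}\) is the nonlinear culprit discussed below.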

    The Navier-Stokes equation is difficult to solve because it is nonlinear. This word is thrown around quite a bit, but here it means something specific. You can build up a complicated solution to a linear equation by adding up many simple solutions. An example you may be aware of is sound: the equation for sound waves is linear, so you can build up a complex sound by adding together many simple sounds of different frequencies (“harmonics”). Elementary quantum mechanics is also linear; the Schrödinger equation allows you to add together solutions to find a new solution.

    But fluid dynamics doesn’t work this way: the nonlinearity of the Navier-Stokes equation means that you can’t build solutions by adding together simpler solutions. This is part of the reason that Heisenberg’s mathematical genius, which served him so well in helping to invent quantum mechanics, was put to such a severe test when it came to turbulence.

    Heisenberg was forced to make various approximations and assumptions to make any progress with his thesis problem. Some of these were hard to justify; for example, the applied mathematician Fritz Noether (a brother of Emmy Noether) raised prominent objections to Heisenberg’s turbulence calculations for decades before finally admitting that they seemed to be correct after all.

    (The situation was so hard to resolve that Heisenberg himself said, while he thought his methods were justified, he couldn’t find the flaw in Fritz Noether’s reasoning, either!)

    The cousins of the Navier-Stokes equation that are used to describe more complex fluids are also nonlinear, as is a simplified form, the Euler equation, that omits the effects of friction. There are cases where a linear approximation does work well, such as flow at extremely slow speeds (imagine honey flowing out of a jar), but this excludes most problems of interest including turbulence.

    Who’s down with CFD?

    Despite the near impossibility of finding mathematical solutions to the equations for fluid flows under realistic conditions, science still needs to get some kind of predictive handle on turbulence. For this, scientists and engineers have turned to the only option available when pencil and paper fail them—the computer. They are trying to make the most of modern hardware to put a dent in one of the most demanding applications for numerical computing: calculating turbulent flows.

    The need to calculate these chaotic flows has benefited from (and been a driver of) improvements in numerical methods and computer hardware almost since the first giant computers appeared. The field is called computational fluid dynamics, often abbreviated as CFD.

    Early in the history of CFD, engineers and scientists applied straightforward numerical techniques to directly approximate solutions to the Navier-Stokes equations. This involves dividing up space into a grid and calculating the fluid variables (pressure, velocity) at each grid point. The large range of spatial scales immediately makes this approach expensive: the solution needs to capture flow features from the largest scales (meters for pipes, thousands of kilometers for weather) down to something approaching the molecular scale. Even if you cut off the length scale at the small end at millimeters or centimeters, you will still need millions of grid points.

    5
    A possible grid for calculating the flow over an airfoil.
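
    To get a feel for the numbers involved in brute-force gridding, here is a minimal back-of-the-envelope sketch in Python (the domain size and cell sizes are illustrative assumptions, not values from the article) showing how quickly the cell count grows as you resolve smaller scales.

    # Estimate how many cells a uniform 3D grid needs to cover a domain
    # at a given resolution.  All numbers below are illustrative.

    def grid_cells(domain_size_m, cell_size_m, dims=3):
        """Number of cells in a uniform grid covering a cubic domain."""
        per_side = domain_size_m / cell_size_m
        return per_side ** dims

    domain = 1.0  # a one-meter section of pipe, say
    for cell in (1e-2, 1e-3, 1e-4):  # 1 cm, 1 mm, and 0.1 mm cells
        print(f"cell size {cell:g} m -> about {grid_cells(domain, cell):.0e} cells")

    # The count grows as (domain/cell)**3: roughly 1e6, 1e9, and 1e12 cells here,
    # which is why resolving every scale directly is so expensive.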

    One approach to getting reasonable accuracy with a manageable-sized grid begins with the realization that there are often large regions where not much is happening. Put another way, in regions far away from solid objects or other disturbances, the flow is likely to vary slowly in both space and time. All the action is elsewhere; the turbulent areas are usually found near objects or interfaces.

    6
    A non-uniform grid for calculating the flow over an airfoil.

    If we take another look at our airfoil and imagine a uniform flow beginning at the left and passing over it, it can be more efficient to concentrate the grid points near the object, especially at the leading and trailing edges, and not “waste” grid points far away from the airfoil. The figure above shows one possible gridding for simulating this problem.

    This is the simplest type of 2D non-uniform grid, containing nothing but straight lines. The state of the art in nonuniform grids is called adaptive mesh refinement (AMR), where the mesh, or grid, actually changes and adapts to the flow during the simulation. This concentrates grid points where they are needed, not wasting them in areas of nearly uniform flow. Research in this field is aimed at optimizing the grid generation process while minimizing the artificial effects of the grid on the solution. Here it’s used in a NASA simulation of the flow around an oscillating rotor blade. The color represents vorticity, a quantity related to angular momentum.

    7
    Using AMR to simulate the flow around a rotor blade.Neal M. Chaderjian, NASA/Ames

    The above image shows the computational grid, rendered as blue lines, as well as the airfoil and the flow solution, showing how the grid adapts itself to the flow. (The grid points are so close together at the areas of highest grid resolution that they appear as solid blue regions.) Despite the efficiencies gained by the use of adaptive grids, simulations such as this are still computationally intensive; a typical calculation of this type occupies 2,000 compute cores for about a week.
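
    The core idea behind AMR can be sketched in a few lines of Python. The toy below is purely illustrative (a hypothetical 1D refinement pass, not anything from the NASA or Wyoming codes): it splits any cell where the local gradient of some field exceeds a threshold, which is roughly how real codes decide where to concentrate resolution.

    # Toy 1D adaptive refinement: split any cell whose field gradient is steep.
    # Real AMR codes manage hierarchies of 2D/3D patches; this only shows the idea.

    def refine(cells, values, threshold):
        """cells: list of (left, right) intervals; values: one field value per cell."""
        new_cells, new_values = [], []
        for i, (a, b) in enumerate(cells):
            width = b - a
            # crude gradient estimate from neighboring cell values
            left = values[i - 1] if i > 0 else values[i]
            right = values[i + 1] if i < len(cells) - 1 else values[i]
            grad = abs(right - left) / (2 * width)
            if grad > threshold:
                mid = 0.5 * (a + b)
                new_cells += [(a, mid), (mid, b)]     # split the flagged cell
                new_values += [values[i], values[i]]  # carry its value down
            else:
                new_cells.append((a, b))
                new_values.append(values[i])
        return new_cells, new_values

    # A step-like field gets extra resolution only near the jump.
    cells = [(i * 0.1, (i + 1) * 0.1) for i in range(10)]
    values = [0.0] * 5 + [1.0] * 5
    cells, values = refine(cells, values, threshold=1.0)
    print(len(cells), "cells after one refinement pass")  # 12: only two cells split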

    Dimitri Mavriplis and his collaborators at the Mavriplis CFD Lab at the University of Wyoming have made available several videos of their AMR simulations.

    8
    AMR simulation of flow past a sphere.Mavriplis CFD Lab

    Above is a frame from a video of a simulation of the flow past an object; the video is useful for getting an idea of how the AMR technique works, because it shows how the computational grid tracks the flow features.

    This work is an example of how state-of-the-art numerical techniques are capable of capturing some of the physics of the transition to turbulence, illustrated in the image of candle-heated air above.

    Another approach to getting the most out of finite computer resources involves making alterations to the equation of motion, rather than, or in addition to, altering the computational grid.

    Since the first direct numerical simulations of the Navier-Stokes equations were begun at Los Alamos in the late 1950s, the problem of the vast range of spatial scales has been attacked by some form of modeling of the flow at small scales. In other words, the actual Navier-Stokes equations are solved for motion on the medium and large scales, but, below some cutoff, a statistical or other model is substituted.

    The idea is that the interesting dynamics occur at larger scales, and grid points are placed to cover these. But the “subgrid” motions that happen between the gridpoints mainly just dissipate energy, or turn motion into heat, so don’t need to be tracked in detail. This approach is also called large-eddy simulation (LES), the term “eddy” standing in for a flow feature at a particular length scale.
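
    The oldest and simplest subgrid model, due to Smagorinsky, gives a flavor of how this works: the unresolved eddies are assumed to act like an extra, flow-dependent viscosity. In conventional notation (a standard textbook statement, not something specific to the simulations mentioned here), the eddy viscosity is

    \[ \nu_t = (C_s \Delta)^2\, |\bar{S}|, \qquad |\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}, \]

    where \(\Delta\) is the grid spacing, \(C_s\) is an empirical constant of order 0.1 to 0.2, and \(\bar{S}_{ij}\) is the strain rate of the resolved large-scale flow. The extra dissipation stands in for the energy that the missing small eddies would have drained away.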

    The development of subgrid modeling, although it dates back to the beginnings of CFD, remains an active area of research to this day. This is because we always want to get the most bang for the computing buck. No matter how powerful the computer, a sophisticated numerical technique that allows us to limit the required grid resolution will enable us to handle more complex problems.

    There are several other prominent approaches to modeling fluid flows on computers, some of which do not make use of grids at all. Perhaps the most successful of these is the technique called “smoothed particle hydrodynamics,” which, as its name suggests, models the fluid as a collection of computational “particles,” which are moved around without the use of a grid. The “smoothed” in the name comes from the smooth interpolations between particles that are used to derive the fluid properties at different points in space.
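
    The interpolation at the heart of the method can be written compactly. In standard notation (again a generic textbook form, not tied to any particular SPH code), a field \(A\) at position \(\mathbf{r}\) is estimated from the particles as

    \[ A(\mathbf{r}) \approx \sum_j m_j\, \frac{A_j}{\rho_j}\, W(|\mathbf{r}-\mathbf{r}_j|, h), \]

    where \(m_j\), \(\rho_j\), and \(A_j\) are the mass, density, and field value carried by particle \(j\), \(W\) is a smooth, bell-shaped kernel, and \(h\) is the smoothing length that sets how far each particle’s influence extends.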

    Theory and experiment

    Despite the impressive (and ever-improving) ability of fluid dynamicists to calculate complex flows with computers, the search for a better theoretical understanding of turbulence continues, for computers can only calculate flow solutions in particular situations, one case at a time. Only through the use of mathematics do physicists feel that they’ve achieved a general understanding of a group of related phenomena. Luckily, there are a few main theoretical approaches to turbulence, each aimed at explaining a different set of interesting phenomena.

    Only a few exact solutions of the Navier-Stokes equations are known; these describe simple, laminar flows (and certainly not turbulent flows of any kind). For flow between two flat plates, the velocity is zero at the plates and reaches a maximum halfway between them. This parabolic flow profile (shown below) solves the equations, something that has been known for over a century. Laminar flow in a pipe is similar, with the maximum velocity occurring at the center.

    9
    Exact solution for flow between plates.
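
    In symbols, for plates at \(y = \pm h\) driven by a constant pressure gradient \(G = -dp/dx\), this classical solution reads (standard notation, quoted from textbook fluid dynamics rather than from the article)

    \[ u(y) = \frac{G}{2\mu}\left(h^{2} - y^{2}\right), \]

    which vanishes at the walls \(y = \pm h\) and peaks at the midplane \(y = 0\). Laminar pipe flow has an analogous parabolic profile, with the maximum on the axis and a coefficient set by the pipe radius.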

    The interesting thing about this parabolic solution, and similar exact solutions, is that they are valid (mathematically speaking) at any flow velocity, no matter how high. However, experience shows that while this works at low speeds, the flow breaks up and becomes turbulent at some moderate “critical” speed. Using mathematical methods to try to find this critical speed is part of what Heisenberg was up to in his thesis work.

    Theorists describe what’s happening here by using the language of stability theory. Stability theory is the examination of the exact solutions to the Navier-Stokes equation and their ability to survive “perturbations,” which are small disturbances added to the flow. These disturbances can be in the form of boundaries that are less than perfectly smooth, variations in the pressure driving the flow, etc.

    The idea is that, while the low-speed solution is valid at any speed, near a critical speed another solution also becomes valid, and nature prefers that second, more complex solution. In other words, the simple solution has become unstable and is replaced by a second one. As the speed is ramped up further, each solution gives way to a more complicated one, until we arrive at the chaotic flow we call turbulence.

    In the real world, this will always happen, because perturbations are always present—and this is why laminar flows are much less common in everyday experience than turbulence.

    Experiments to directly observe these instabilities are delicate, because the distance between the first instability and the onset of full-blown turbulence is usually quite small. You can see a version of the process in the figure above, showing the transition to turbulence in the heated air column above a candle. The straight column is unstable, but it takes a while before the sinuous instability grows large enough for us to see it as a visible wiggle. Almost as soon as this happens, the cascade of instabilities piles up, and we see a sudden explosion into turbulence.

    Another example of the common pattern is in the next illustration, which shows the typical transition to turbulence in a flow bounded by a single wall.

    10
    Transition to turbulence in a wall-bounded flow. NASA.

    We can again see an approximately periodic disturbance to the laminar flow begin to grow, and after just a few wavelengths the flow suddenly becomes turbulent.

    Capturing, and predicting, the transition to turbulence is an ongoing challenge for simulations and theory; on the theoretical side, the effort begins with stability theory.

    In fluid flows close to a wall, the transition to turbulence can take a somewhat different form. As in the other examples illustrated here, small disturbances get amplified by the flow until they break down into chaotic, turbulent motion. But the turbulence does not involve the entire fluid, instead confining itself to isolated spots, which are surrounded by calm, laminar flow. Eventually, more spots develop, enlarge, and ultimately merge, until the entire flow is turbulent.

    The fascinating thing about these spots is that, somehow, the fluid can enter them, undergo a complex, chaotic motion, and emerge calmly as a non-turbulent, organized flow on the other side. Meanwhile, the spots persist as if they were objects embedded in the flow and attached to the boundary.


    Turbulent spot experiment: pressure fluctuation. (Credit: Katya Casper et al., Sandia National Labs)

    Despite a succession of first-rate mathematical minds puzzling over the Navier-Stokes equation since it was written down almost two centuries ago, exact solutions still are rare and cherished possessions, and basic questions about the equation remain unanswered. For example, we still don’t know whether the equation has solutions in all situations. We’re also not sure if its solutions, which supposedly represent the real flows of water and air, remain well-behaved and finite, or whether some of them blow up with infinite energies or become unphysically unsmooth.

    The scientist who can settle this, either way, has a cool million dollars waiting for them—this is one of the seven unsolved “Millennium Prize” mathematical problems set by the Clay Mathematics Institute.

    Fortunately, there are other ways to approach the theory of turbulence, some of which don’t depend on the knowledge of exact solutions to the equations of motion. The study of the statistics of turbulence uses the Navier-Stokes equation to deduce average properties of turbulent flows without trying to solve the equations exactly. It addresses questions like, “if the velocity of the flow here is so and so, then what is the probability that the velocity one centimeter away will be within a certain range?” It also answers questions about the average of quantities such as the resistance encountered when trying to push water through a pipe, or the lifting force on an airplane wing.

    These are the quantities of real interest to the engineer, who has little use for the physicist’s or mathematician’s holy grail of a detailed, exact description.

    It turns out that the one great obstacle in the way of a statistical approach to turbulence theory is, once again, the nonlinear term in the Navier-Stokes equation. When you use this equation to derive another equation for the average velocity at a single point, it contains a term involving something new: the velocity correlation between two points. When you derive the equation for this velocity correlation, you get an equation with yet another new term: the velocity correlation involving three points. This process never ends, as the diabolical nonlinear term keeps generating higher-order correlations.

    The need to somehow terminate, or “close,” this infinite sequence of equations is known as the “closure problem” in turbulence theory and is still the subject of active research. Very briefly, to close the equations you need to step outside of the mathematical procedure and appeal to a physically motivated assumption or approximation.
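
    To make the first link of that chain concrete: writing the velocity as a mean plus a fluctuation, \(u_i = U_i + u_i'\), and averaging the Navier-Stokes equation gives (in standard index notation; this is the textbook Reynolds-averaged equation, stated here for illustration)

    \[ \frac{\partial U_i}{\partial t} + U_j \frac{\partial U_i}{\partial x_j} = -\frac{1}{\rho}\frac{\partial P}{\partial x_i} + \nu\,\nabla^{2} U_i - \frac{\partial}{\partial x_j}\,\overline{u_i' u_j'}. \]

    Every term is determined by the mean flow except the last one, the Reynolds stress \(\overline{u_i' u_j'}\), which is a new unknown. Writing an equation for it brings in triple correlations, and so on up the chain; some physically motivated model has to be inserted to cut the chain off.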

    Despite its difficulty, some type of statistical solution to the fluid equations is essential for describing the phenomena of fully developed turbulence, of which there are a number. Turbulence need not be merely a random, featureless expanse of roiling fluid; in fact, it usually is more interesting than that. One of the most intriguing phenomena is the existence of persistent, organized structures within a violent, chaotic flow environment. We are all familiar with magnificent examples of these in the form of the storms on Jupiter, recognizable, even iconic, features that last for years, embedded within a highly turbulent flow.

    More down-to-Earth examples occur in almost any real-world case of a turbulent flow—in fact, experimenters have to take great pains if they want to create a turbulent flow field that is truly homogeneous, without any embedded structure.

    In the image below of a turbulent wake behind a cylinder, and in the earlier image of the transition to turbulence in a wall-bounded flow, you can see the echoes of the wave-like disturbance that precedes the onset of fully developed turbulence: a periodicity that persists even as the flow becomes chaotic.

    12
    Cyclones at Jupiter’s north pole. NASA, JPL-Caltech, SwRI, ASI, INAF, JIRAM.

    13
    Wake behind a cylinder. Joseph Straccia et al. (CC By NC-ND)

    When your basic governing equation is very hard to solve or even to simulate, it’s natural to look for a more tractable equation or model that still captures most of the important physics. Much of the theoretical effort to understand turbulence is of this nature.

    We’ve mentioned subgrid models above, used to reduce the number of grid points required in a numerical simulation. Another approach to simplifying the Navier-Stokes equation is a class of models called “shell models.” Roughly speaking, in these models you take the Fourier transform of the Navier-Stokes equation, leading to a description of the fluid as a large number of interacting waves at different wavelengths. Then, in a systematic way, you discard most of the waves, keeping just a handful of significant ones. You can then calculate, using a computer or, with the simplest models, by hand, the mode interactions and the resulting turbulent properties. While, naturally, much of the physics is lost in these types of models, they allow some aspects of the statistical properties of turbulence to be studied in situations where the full equations cannot be solved.
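
    To give a flavor of what such a model looks like in practice, here is a schematic Python sketch of a GOY-type shell model. The coupling pattern (each shell talks only to its nearest and next-nearest neighbors) is the standard one, but the particular coefficients, forcing, and time step below are illustrative choices, not a tuned reproduction of any published model.

    import numpy as np

    # Schematic GOY-type shell model: one complex amplitude u[n] per shell,
    # wavenumbers k[n] = k0 * lam**n, nonlinear coupling between neighboring
    # shells, viscous damping, and steady forcing on one low shell.

    N = 16
    k0, lam, eps = 0.05, 2.0, 0.5
    k = k0 * lam ** np.arange(N)
    nu = 1e-6                      # viscosity
    dt, steps = 1e-4, 10000

    rng = np.random.default_rng(0)
    u = 1e-3 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    f = np.zeros(N, complex)
    f[3] = 5e-3 * (1 + 1j)         # forcing injects energy at a large scale

    def padded(a):
        """Shells outside the modeled range are treated as zero."""
        return np.concatenate((np.zeros(2, complex), a, np.zeros(2, complex)))

    n = np.arange(N) + 2           # positions of the real shells inside the padding
    for _ in range(steps):
        up = padded(u)
        triad = (up[n + 1] * up[n + 2]
                 - (eps / lam) * up[n - 1] * up[n + 1]
                 - ((1 - eps) / lam**2) * up[n - 1] * up[n - 2])
        du = 1j * k * np.conj(triad) - nu * k**2 * u + f
        u = u + dt * du            # simple Euler step, adequate for a sketch

    print("energy per shell:", np.round(np.abs(u) ** 2, 8))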

    Occasionally, we hear about the “end of physics”—the idea that we are approaching the stage where all the important questions will be answered, and we will have a theory of everything. But from another point of view, the fact that such a commonplace phenomenon as the flow of water through a pipe is still in many ways an unsolved problem means that we are unlikely to ever reach a point that all physicists will agree is the end of their discipline. There remains enough mystery in the everyday world around us to keep physicists busy far into the future.

    See the full article here .

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Ars Technica was founded in 1998 when Founder & Editor-in-Chief Ken Fisher announced his plans for starting a publication devoted to technology that would cater to what he called “alpha geeks”: technologists and IT professionals. Ken’s vision was to build a publication with a simple editorial mission: be “technically savvy, up-to-date, and more fun” than what was currently popular in the space. In the ensuing years, with formidable contributions by a unique editorial staff, Ars Technica became a trusted source for technology news, tech policy analysis, breakdowns of the latest scientific advancements, gadget reviews, software, hardware, and nearly everything else found in between layers of silicon.

    Ars Technica innovates by listening to its core readership. Readers have come to demand devotedness to accuracy and integrity, flanked by a willingness to leave each day’s meaningless, click-bait fodder by the wayside. The result is something unique: the unparalleled marriage of breadth and depth in technology journalism. By 2001, Ars Technica was regularly producing news reports, op-eds, and the like, but the company stood out from the competition by regularly providing long thought-pieces and in-depth explainers.

    And thanks to its readership, Ars Technica also accomplished a number of industry leading moves. In 2001, Ars launched a digital subscription service when such things were non-existent for digital media. Ars was also the first IT publication to begin covering the resurgence of Apple, and the first to draw analytical and cultural ties between the world of high technology and gaming. Ars was also first to begin selling its long form content in digitally distributable forms, such as PDFs and eventually eBooks (again, starting in 2001).

     
  • richardmitnick 1:03 pm on July 8, 2017 Permalink | Reply
    Tags: "Answers in Genesis", A great test case, , ars technica, ,   

    From ars technica: “Creationist sues national parks, now gets to take rocks from Grand Canyon” a Test Case Too Good to be True 

    Ars Technica
    ars technica

    7/7/2017
    Scott K. Johnson

    1
    Scott K. Johnson

    “Alternative facts” aren’t new. Young-Earth creationist groups like Answers in Genesis believe the Earth is no more than 6,000 years old despite actual mountains of evidence to the contrary, and they’ve been playing the “alternative facts” card for years. In lieu of conceding incontrovertible geological evidence, they sidestep it by saying, “Well, we just look at those facts differently.”

    Nowhere is this more apparent than the Grand Canyon, which young-Earth creationist groups have long been enamored with. A long geologic record (spanning almost 2 billion years, in total) is on display in the layers of the Grand Canyon thanks to the work of the Colorado River. But many creationists instead assert that the canyon’s rocks—in addition to the spectacular erosion that reveals them—are actually the product of the Biblical “great flood” several thousand years ago.

    Andrew Snelling, who got a PhD in geology before joining Answers in Genesis, continues working to interpret the canyon in a way that is consistent with his views. In 2013, he requested permission from the National Park Service to collect some rock samples in the canyon for a new project to that end. The Park Service can grant permits for collecting material, which is otherwise illegal.

    Snelling wanted to collect rocks from structures in sedimentary formations known as “soft-sediment deformation”—basically, squiggly disturbances of the layering that occur long before the sediment solidifies into rock. While solid rock layers can fold (bend) on a larger scale under the right pressures, young-Earth creationists assert that all folds are soft sediment structures, since forming them doesn’t require long periods of time.

    The National Park Service sent Snelling’s proposal out for review, having three academic geologists who study the canyon look at it. Those reviews were not kind. None felt the project provided any value to justify the collection. One reviewer, the University of New Mexico’s Karl Karlstrom, pointed out that examples of soft-sediment deformation can be found all over the place, so Snelling didn’t need to collect rock from a national park. In the end, Snelling didn’t get his permit.

    In May, Snelling filed a lawsuit alleging that his rights had been violated, as he believed his application had been denied by a federal agency because of his religious views. The complaint cites, among other things, President Trump’s executive order on religious freedom.

    That lawsuit was withdrawn by Snelling on June 28. According to a story in The Australian, Snelling withdrew his suit because the National Park Service has relented and granted him his permit. He will be able to collect about 40 fist-sized samples, provided that he makes the data from any analyses freely available.

    Not that anything he collects will matter. “Even if I don’t find the evidence I think I will find, it wouldn’t assault my core beliefs,” Snelling told The Australian. “We already have evidence that is consistent with a great flood that swept the world.”

    Again, in actuality, that hypothesis is in conflict with the entirety of Earth’s surface geology.

    Snelling says he will publish his results in a peer-reviewed scientific journal. That likely means Answers in Genesis’ own Answers Research Journal, of which he is editor-in-chief.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition

     
  • richardmitnick 9:07 am on June 18, 2017 Permalink | Reply
    Tags: ars technica, , , , , Molybdenum isotopes serve as a marker of the source material for our Solar System, Tungsten acts as a timer for events early in the Solar System’s history, U Münster   

    From ars technica: “New study suggests Jupiter’s formation divided Solar System in two” 

    Ars Technica
    ars technica

    6/17/2017
    John Timmer

    1
    NASA

    Gas giants like Jupiter have to grow fast. Newborn stars are embedded in a disk of gas and dust that goes on to form planets. But the ignition of the star releases energy that drives away much of the gas within a relatively short time. Thus, producing something like Jupiter involved a race to gather material before it was pushed out of the Solar System entirely.

    Simulations have suggested that Jupiter could have won this race by quickly building a massive, solid core that was able to start drawing in nearby gas. But, since we can’t look at the interior of Jupiter to see whether it’s solid, finding evidence to support these simulations has been difficult. Now, a team at the University of Münster has discovered some relevant evidence [PNAS] in an unexpected location: the isotope ratios found in various meteorites. These suggest that the early Solar System was quickly divided in two, with the rapidly forming Jupiter creating the dividing line.

    2

    Divide and conquer

    Based on details of their composition, we already knew that meteorites formed from more than one pool of material in the early Solar System. The new work extends that by looking at specific elements: tungsten and molybdenum. Molybdenum isotopes serve as a marker of the source material for our Solar System, determining what type of star contributed that material. Tungsten acts as a timer for events early in the Solar System’s history, as it’s produced by a radioactive decay with a half-life of just under nine million years.
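
    To see why a half-life of just under nine million years makes a convenient early-Solar-System clock, here is a small Python sketch. The half-life used is the commonly quoted value for the decay of hafnium-182 into tungsten-182; the sample times are illustrative.

    # Fraction of a radioactive parent isotope remaining after a given time,
    # using the ~8.9-million-year half-life of hafnium-182 (whose decay
    # produces tungsten-182).  Times below are illustrative.

    HALF_LIFE_MYR = 8.9

    def fraction_remaining(t_myr, half_life=HALF_LIFE_MYR):
        return 0.5 ** (t_myr / half_life)

    for t in (1, 2, 3, 10, 50):
        print(f"{t:>3} Myr after formation: {fraction_remaining(t):.1%} of the parent left")

    # Bodies that finished forming a million years apart end up with measurably
    # different tungsten isotope ratios, which is what makes tungsten a timer.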

    While we have looked at tungsten and molybdenum in a number of meteorite populations before, the German team extended that work to iron-rich meteorites. These are thought to be fragments of the cores of planetesimals that formed early in the Solar System’s history. In many cases, these bodies went on to contribute to building the first planets.

    The chemical composition of meteorites had suggested a large number of different classes produced as different materials solidified at different distances from the Sun. But the new data suggests that, from the perspective of these isotopes, everything falls into just two classes: carbonaceous and noncarbonaceous.

    These particular isotopes tell us a few things. One is that the two populations probably have a different formation history. The molybdenum data indicates that material was added to the Solar System as it was forming, material that originated from a different type of source star. (One way to visualize this is to think of our Solar System as forming in two steps: first, from the debris of a supernova, then later we received additional material ejected by a red giant star.) And, because the two populations are so distinct, it appears that the later addition of material didn’t spread throughout the entire Solar System. If the later material had spread, you’d find some objects with intermediate compositions.

    A second thing that’s clear from the tungsten data is that the two classes of objects condensed at two different times. This suggests the noncarbonaceous bodies were forming from one to two million years into the Solar System’s history, while carbonaceous materials condensed later, from two to three million years.

    Putting it together

    To explain this, the authors suggest that the Solar System was divided early in its history, creating two different reservoirs of material. “The most plausible mechanism to efficiently separate two disk reservoirs for an extended period,” they suggest, “is the accretion of a giant planet in between them.” That giant planet, obviously, would be Jupiter.

    Modeling indicates that Jupiter would need to be 20 Earth masses to physically separate the two reservoirs. And the new data suggest that a separation had to take place by a million years into the Solar System’s history. All of which means that Jupiter had to grow very large, very quickly. This would be large enough for Jupiter to start accumulating gas well before the newly formed Sun started driving the gas out of the disk. By the time Jupiter grew to 50 Earth masses, it would create a permanent physical separation between the two parts of the disk.

    The authors suggest that the quick formation of Jupiter may have partially starved the inner disk of material, as it prevented material from flowing in from the outer areas of the planet-forming disk. This could explain why the inner Solar System lacks any “super Earths,” larger planets that would have required more material to form.

    Overall, the work does provide some evidence for a quick formation of Jupiter, probably involving a solid core. Other researchers are clearly going to want to check both the composition of additional meteorites and the behavior of planet formation models to see whether the results hold together. But the overall finding of two distinct reservoirs of material in the early Solar System seems to be very clear in their data, and those reservoirs will have to be explained one way or another.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition

     
  • richardmitnick 7:53 am on June 10, 2017 Permalink | Reply
    Tags: , , ars technica, Common Crawl, Implicit Association Test (IAT), Princeton researchers discover why AI become racist and sexist, Word-Embedding Association Test (WEAT)   

    From ars technica: “Princeton researchers discover why AI become racist and sexist” 

    Ars Technica
    ars technica

    4/19/2017
    Annalee Newitz

    Study of language bias has implications for AI as well as human cognition.

    1
    No image caption or credit

    Ever since Microsoft’s chatbot Tay started spouting racist commentary after 24 hours of interacting with humans on Twitter, it has been obvious that our AI creations can fall prey to human prejudice. Now a group of researchers has figured out one reason why that happens. Their findings shed light on more than our future robot overlords, however. They’ve also worked out an algorithm that can actually predict human prejudices based on an intensive analysis of how people use English online.

    The implicit bias test

    Many AIs are trained to understand human language by learning from a massive corpus known as the Common Crawl. The Common Crawl is the result of a large-scale crawl of the Internet in 2014 that contains 840 billion tokens, or words. Princeton Center for Information Technology Policy researcher Aylin Caliskan and her colleagues wondered whether that corpus—created by millions of people typing away online—might contain biases that could be discovered by algorithm. To figure it out, they turned to an unusual source: the Implicit Association Test (IAT), which is used to measure often unconscious social attitudes.

    People taking the IAT are asked to put words into two categories. The longer it takes for the person to place a word in a category, the less they associate the word with the category. (If you’d like to take an IAT, there are several online at Harvard University.) IAT is used to measure bias by asking people to associate random words with categories like gender, race, disability, age, and more. Outcomes are often unsurprising: for example, most people associate women with family, and men with work. But that obviousness is actually evidence for the IAT’s usefulness in discovering people’s latent stereotypes about each other. (It’s worth noting that there is some debate among social scientists about the IAT’s accuracy.)

    Using the IAT as a model, Caliskan and her colleagues created the Word-Embedding Association Test (WEAT), which analyzes chunks of text to see which concepts are more closely associated than others. The “word-embedding” part of the test comes from a project at Stanford called GloVe, which packages words together into “vector representations,” basically lists of associated terms. So the word “dog,” if represented as a word-embedded vector, would be composed of words like puppy, doggie, hound, canine, and all the various dog breeds. The idea is to get at the concept of dog, not the specific word. This is especially important if you are working with social stereotypes, where somebody might be expressing ideas about women by using words like “girl” or “mother.” To keep things simple, the researchers limited each concept to 300 vectors.

    To see how concepts get associated with each other online, the WEAT looks at a variety of factors to measure their “closeness” in text. At a basic level, Caliskan told Ars, this means how many words apart the two concepts are, but it also accounts for other factors like word frequency. After going through an algorithmic transform, closeness in the WEAT is equivalent to the time it takes for a person to categorize a concept in the IAT. The further apart the two concepts, the more distantly they are associated in people’s minds.
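
    A toy version of the association measurement can be written in a few lines of Python. The vectors below are made up for illustration (three dimensions instead of the 300 used with real GloVe embeddings, and hypothetical word lists), but the score itself follows the WEAT recipe: the difference in average cosine similarity between a target word and two sets of attribute words.

    import numpy as np

    def cosine(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    def association(word_vec, attrs_a, attrs_b):
        """WEAT-style score: mean similarity to set A minus mean similarity to set B."""
        return (np.mean([cosine(word_vec, a) for a in attrs_a])
                - np.mean([cosine(word_vec, b) for b in attrs_b]))

    # Made-up toy vectors; a real test would use 300-dimensional embeddings.
    vecs = {
        "nurse":    np.array([0.9, 0.1, 0.2]),
        "engineer": np.array([0.1, 0.9, 0.2]),
        "she":      np.array([0.8, 0.2, 0.1]),
        "her":      np.array([0.9, 0.3, 0.1]),
        "he":       np.array([0.2, 0.8, 0.1]),
        "him":      np.array([0.1, 0.9, 0.2]),
    }
    female = [vecs["she"], vecs["her"]]
    male = [vecs["he"], vecs["him"]]

    for word in ("nurse", "engineer"):
        print(word, round(association(vecs[word], female, male), 3))
    # A positive score means the word sits closer to the "female" attribute words
    # in this toy space; a negative score means it sits closer to the "male" ones.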

    The WEAT worked beautifully to discover biases that the IAT had found before. “We adapted the IAT to machines,” Caliskan said. And what that tool revealed was that “if you feed AI with human data, that’s what it will learn. [The data] contains biased information from language.” That bias will affect how the AI behaves in the future, too. As an example, Caliskan made a video (see above) where she shows how the Google Translate AI actually mistranslates words into the English language based on stereotypes it has learned about gender.

    Imagine an army of bots unleashed on the Internet, replicating all the biases that they learned from humanity. That’s the future we’re looking at if we don’t build some kind of corrective for the prejudices in these systems.

    A problem that AI can’t solve

    Though Caliskan and her colleagues found language was full of biases based on prejudice and stereotypes, it was also full of latent truths as well. In one test, they found strong associations between the concept of woman and the concept of nursing. This reflects a truth about reality, which is that nursing is a majority female profession.

    “Language reflects facts about the world,” Caliskan told Ars. She continued:

    Removing bias or statistical facts about the world will make the machine model less accurate. But you can't easily remove bias, so you have to learn how to work with it. We are self-aware, we can decide to do the right thing instead of the prejudiced option. But machines don't have self awareness. An expert human might be able to aid in [the AIs'] decision-making process so the outcome isn't stereotyped or prejudiced for a given task.

    The solution to the problem of human language is… humans. “I can’t think of many cases where you wouldn’t need a human to make sure that the right decisions are being made,” concluded Caliskan. “A human would know the edge cases for whatever the application is. Once they test the edge cases they can make sure it’s not biased.”

    So much for the idea that bots will be taking over human jobs. Once we have AIs doing work for us, we’ll need to invent new jobs for humans who are testing the AIs’ results for accuracy and prejudice. Even when chatbots get incredibly sophisticated, they are still going to be trained on human language. And since bias is built into language, humans will still be necessary as decision-makers.

    In a recent paper for Science about their work, the researchers say the implications are far-reaching. “Our findings are also sure to contribute to the debate concerning the Sapir Whorf hypothesis,” they write. “Our work suggests that behavior can be driven by cultural history embedded in a term’s historic use. Such histories can evidently vary between languages.” If you watched the movie Arrival, you’ve probably heard of Sapir Whorf—it’s the hypothesis that language shapes consciousness. Now we have an algorithm that suggests this may be true, at least when it comes to stereotypes.

    Caliskan said her team wants to branch out and try to find as-yet-unknown biases in human language. Perhaps they could look for patterns created by fake news or look into biases that exist in specific subcultures or geographical locations. They would also like to look at other languages, where bias is encoded very differently than it is in English.

    “Let’s say in the future, someone suspects there’s a bias or stereotype in a certain culture or location,” Caliskan mused. “Instead of testing with human subjects first, which takes time, money, and effort, they can get text from that group of people and test to see if they have this bias. It would save so much time.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition

     
  • richardmitnick 4:44 pm on May 16, 2017 Permalink | Reply
    Tags: ars technica, ,   

    From ars technica: “Atomic clocks and solid walls: New tools in the search for dark matter” 

    Ars Technica
    ars technica

    5/15/2017
    Jennifer Ouellette

    1
    An atomic clock based on a fountain of atoms. NSF

    Countless experiments around the world are hoping to reap scientific glory for the first detection of dark matter particles. Usually, they do this by watching for dark matter to bump into normal matter or by slamming particles into other particles and hoping for some dark stuff to pop out. But what if the dark matter behaves more like a wave?

    That’s the intriguing possibility championed by Asimina Arvanitaki, a theoretical physicist at the Perimeter Institute in Waterloo, Ontario, Canada, where she holds the Aristarchus Chair in Theoretical Physics—the first woman to hold a research chair at the institute. Detecting these hypothetical dark matter waves requires a bit of experimental ingenuity. So she and her collaborators are adapting a broad range of radically different techniques to the search: atomic clocks and resonating bars originally designed to hunt for gravitational waves—and even lasers shined at walls in hopes that a bit of dark matter might seep through to the other side.

    “Progress in particle physics for the last 50 years has been focused on colliders, and rightfully so, because whenever we went to a new energy scale, we found something new,” says Arvanitaki. That focus is beginning to shift. To reach higher and higher energies, physicists must build ever-larger colliders—an expensive proposition when funding for science is in decline. There is now more interest in smaller, cheaper options. “These are things that usually fit in the lab, and the turnaround time for results is much shorter than that of the collider,” says Arvanitaki, admitting, “I’ve done this for a long time, and it hasn’t always been popular.”

    The end of the WIMP?

    While most dark matter physicists have focused on hunting for weakly interacting massive particles, or WIMPs, Arvanitaki is one of a growing number who are focusing on less well-known alternatives, such as axions—hypothetical ultralight particles with masses that could be as little as ten thousand trillion trillion times smaller than the mass of the electron. The masses of WIMPs, by contrast, would be larger than the mass of the proton.

    Cosmology gave us very good reason to be excited about WIMPs and focus initial searches in their mass range, according to David Kaplan, a theorist at Johns Hopkins University (and producer of the 2013 documentary Particle Fever). But the WIMP’s dominance in the field to date has also been due, in part, to excitement over the idea of supersymmetry. That model requires every known particle in the Standard Model—whether fermion or boson—to have a superpartner that is heavier and in the opposite class. So an electron, which is a fermion, would have a boson superpartner called the selectron, and so on.

    Physicists suspect one or more of those unseen superpartners might make up dark matter. Supersymmetry predicts not just the existence of dark matter, but how much of it there should be. That fits neatly within a WIMP scenario. Dark matter could be any number of things, after all, and the supersymmetry mass range seemed like a good place to start the search, given the compelling theory behind it.

    But in the ensuing decades, experiment after experiment has come up empty. With each null result, the parameter space where WIMPs might be lurking shrinks. This makes distinguishing a possible signal from background noise in the data increasingly difficult.

    “We’re about to bump up against what’s called the ‘neutrino floor,’” says Kaplan. “All the technology we use to discover WIMPs will soon be sensitive to random neutrinos flying through the Universe. Once it gets there, it becomes a much messier signal and harder to see.”

    Particles are waves

    Despite its momentous discovery of the Higgs boson in 2012, the Large Hadron Collider has yet to find any evidence of supersymmetry. So we shouldn’t wonder that physicists are turning their attention to alternative dark matter candidates outside of the mass ranges of WIMPs. “It’s now a fishing expedition,” says Kaplan. “If you’re going on a fishing expedition, you want to search as broadly as possible, and the WIMP search is narrow and deep.”

    Enter Asimina Arvanitaki—“Mina” for short. She grew up in a small Greek village called Koklas, and, since her parents were teachers, there was no shortage of books around the house. Arvanitaki excelled in math and physics—at a very young age, she calculated the time light takes to travel from the Earth to the Sun. While she briefly considered becoming a car mechanic in high school because she loved cars, she decided, “I was more interested in why things are the way they are, not in how to make them work.” So she majored in physics instead.

    Similar reasoning convinced her to switch her graduate-school focus at Stanford from experimental condensed matter physics to theory: she found her quantum field theory course more scintillating than any experimental results she produced in the laboratory.

    Central to Arvanitaki’s approach is a theoretical reimagining of dark matter as more than just a simple particle. A peculiar quirk of quantum mechanics is that particles exhibit both particle- and wave-like behavior, so we’re really talking about something more akin to a wavepacket, according to Arvanitaki. The size of those wave packets is inversely proportional to their mass. “So the elementary particles in our theory don’t have to be tiny,” she says. “They can be super light, which means they can be as big as the room or as big as the entire Universe.”

    Axions fit the bill as a dark matter candidate, but they interact so weakly with regular matter that they cannot be produced in colliders. Arvanitaki has proposed several smaller experiments that might succeed in detecting them in ways that colliders cannot.

    Walls, clocks, and bars

    One of her experiments relies on atomic clocks—the most accurate timekeeping devices we have, in which the natural frequency oscillations of atoms serve the same purpose as the pendulum in a grandfather clock. An average wristwatch loses roughly one second every year; atomic clocks are so precise that the best would lose only about one second over the age of the Universe.

    Within her theoretical framework, dark matter particles (including axions) would behave like waves and oscillate at specific frequencies determined by the mass of the particles. Dark matter waves would cause the atoms in an atomic clock to oscillate as well. The effect is very tiny, but it should be possible to see such oscillations in the data. A trial search of existing data from atomic clocks came up empty, but Arvanitaki suspects that a more dedicated analysis would prove more fruitful.
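
    The frequency being hunted follows directly from the particle's mass: an ultralight dark matter field oscillates at roughly its Compton frequency, f = mc²/h. The short Python sketch below (the masses are illustrative values spanning the ultralight range, not figures from the article) shows why atomic clocks are sensitive to the light end of that range.

    # Oscillation frequency of an ultralight dark-matter field: f = m * c^2 / h.
    # Masses below are illustrative values expressed in electron-volts.

    PLANCK_H_EV_S = 4.135667e-15   # Planck constant in eV*s

    def oscillation_frequency_hz(mass_ev):
        # with the mass given as an energy in eV, f is just E / h
        return mass_ev / PLANCK_H_EV_S

    for mass in (1e-22, 1e-15, 1e-10, 1e-6):
        print(f"mass {mass:g} eV -> about {oscillation_frequency_hz(mass):.2e} Hz")

    # A 1e-15 eV particle oscillates at a fraction of a hertz, slow enough to show
    # up as a periodic drift in clock comparisons; a 1e-6 eV axion oscillates at
    # hundreds of megahertz, better matched to resonant detectors.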

    Then there are so-called “Weber bars,” which are solid aluminum cylinders that Arvanitaki says should ring like a tuning fork should a dark matter wavelet hit them at just the right frequency. The bars get their name from physicist Joseph Weber, who used them in the 1960s to search for gravitational waves. He claimed to have detected those waves, but nobody could replicate his findings, and his scientific reputation never quite recovered from the controversy.

    Weber died in 2000, but chances are he’d be pleased that his bars have found a new use. Since we don’t know the precise frequency of the dark matter particles we’re hunting, Arvanitaki suggests building a kind of xylophone out of Weber bars. Each bar would be tuned to a different frequency to scan for many different frequencies at once.

    Walking through walls

    Yet another inventive approach involves sending axions through walls. Photons (light) can’t pass through walls—shine a flashlight onto a wall, and someone on the other side won’t be able to see that light. But axions are so weakly interacting that they can pass through a solid wall. Arvanitaki’s experiment exploits the fact that it should be possible to turn photons into axions and then reverse the process to restore the photons. Place a strong magnetic field in front of that wall and then shine a laser onto it. Some of the photons will become axions and pass through the wall. A second magnetic field on the other side of the wall then converts those axions back into photons, which should be easily detected.

    This is a new kind of dark matter detection relying on small, lab-based experiments that are easier to perform (and hence easier to replicate). They’re also much cheaper than setting up detectors deep underground or trying to produce dark matter particles at the LHC—the biggest, most complicated scientific machine ever built, and the most expensive.

    “I think this is the future of dark matter detection,” says Kaplan, although both he and Arvanitaki are adamant that this should complement, not replace, the many ongoing efforts to hunt for WIMPs, whether deep underground or at the LHC.

    “You have to look everywhere, because there are no guarantees. This is what research is all about,” says Arvanitaki. “What we think is correct, and what Nature does, may be two different things.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition

     
  • richardmitnick 6:51 pm on February 14, 2017 Permalink | Reply
    Tags: ars technica, , , , Caltech Palomar Intermediate Palomar Transient Factory, ,   

    From ars technica: “Observations catch a supernova three hours after it exploded” 

    Ars Technica
    ars technica

    1
    BRIGHT AND EARLY Scientists caught an early glimpse of an exploding star in the galaxy NGC7610 (shown before the supernova). Light from the explosion revealed that gas (orange) surrounded the star, indicating that the star spurted out gas in advance of the blast.

    2
    The remains of an earlier Type II supernova. NASA

    The skies are full of transient events. If you don’t happen to have a telescope pointed at the right place at the right time, you can miss anything from the transit of a planet to the explosion of a star. But thanks to the development of automated survey telescopes, the odds of getting lucky have improved considerably.

    In October of 2013, the telescope of the intermediate Palomar Transient Factory worked just as expected, capturing a sudden brightening that turned out to reflect the explosion of a red supergiant in a nearby galaxy.

    Caltech Palomar Intermediate Palomar Transient Factory telescope at the Samuel Oschin Telescope at Palomar Observatory, located in San Diego County, California, United States

    The first images came from within three hours of the supernova itself, and followup observations tracked the energy released as it blasted through the nearby environment. The analysis of the event was published on Monday in Nature Physics, and it suggests the explosion followed shortly after the star ejected large amounts of material.

    This isn’t the first supernova we’ve witnessed as it happened; the Kepler space telescope captured two just as the energy of the explosion of the star’s core burst through the surface. By comparison, observations three hours later are relative latecomers. But SN 2013fs (as it was later termed) provided considerably more detail, as followup observations were extensive and covered all wavelengths, from X-rays to the infrared.

    Critically, spectroscopy began within six hours of the explosion. This technique separates the light according to its wavelength, allowing researchers to identify the presence of specific atoms based on the colors of light they absorb. In this case, the spectroscopy picked up the presence of atoms such as oxygen and helium that had lost most of their electrons. The signal from these heavily ionized oxygen atoms surged for several hours, then was suddenly cut off about 11 hours after the explosion.

    The authors explain this behavior by positing that the red supergiant ejected a significant amount of material before it exploded. The light from the explosion then swept through the vicinity, eventually catching up with the material and stripping the electrons off its atoms. The sudden cutoff came when the light exited the far side of the material, allowing the atoms to return to a lower energy state, where they stayed until the physical debris of the explosion slammed into them about five days later.

    Since the light of the explosion is moving at the speed of light (duh), we know how far away the material was: six light-hours, or roughly the Sun-Pluto distance. Some blurring in the spectroscopy also indicates that the material was moving at about 100 kilometers per second. Based on its speed and its distance from the star that ejected it, the researchers could calculate when it was ejected: less than 500 days before the explosion. The total mass of the material also suggests that the star was losing about 0.1 percent of the Sun’s mass per year.
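
    As a quick sanity check on those figures (a sketch, not taken from the paper), six light-hours does indeed work out to roughly Pluto’s distance from the Sun, a speed of 100 kilometers per second corresponds to a fractional Doppler shift of a few parts in ten thousand (about 0.2 nanometers on an optical line such as Hα, used here purely as an illustrative reference), and doubling the light-crossing time of the material gives a delay comparable to the roughly 11-hour cutoff described above.

        # Quick arithmetic check of the quoted numbers (illustrative only).
        C = 2.998e8          # m/s, speed of light
        AU = 1.496e11        # m, astronomical unit
        HOURS = 3600.0       # seconds per hour

        distance_m = C * 6 * HOURS  # six light-hours in meters
        print(f"6 light-hours = {distance_m / AU:.0f} AU "
              f"(Pluto orbits at roughly 30-50 AU)")

        v = 100e3            # m/s, speed of the ejected material
        h_alpha_nm = 656.28  # nm, illustrative optical reference line
        shift_nm = h_alpha_nm * v / C
        print(f"Doppler shift at 100 km/s: ~{shift_nm:.2f} nm "
              f"(fractional shift {v / C:.1e})")

        cutoff_hours = 2 * distance_m / C / HOURS  # light exiting the far side
        print(f"max light-travel delay across the material: ~{cutoff_hours:.0f} hours")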

    Separately, the authors estimate that there is unlikely to be even a single star in our galaxy that is currently within 500 days of exploding, so we probably won’t get to study an equivalent star up close—even if we knew how to identify one.

    Large stars like red supergiants do sporadically eject material, so there’s always the possibility that this ejection-explosion sequence occurred by chance. But this isn’t the first supernova we’ve seen in which the explosion debris slammed into a shell of material that had been ejected earlier. Indeed, the closest red supergiant, Betelgeuse, has a stable shell of material a fair distance from its surface.

    What could cause these ejections? For most of their relatively short lives, these giant stars fuse relatively light elements, each of which is present in sufficient amounts to burn for millions of years. But once they start shifting to heavier elements, higher rates of fusion are needed to counteract gravity, which constantly pulls the star’s material inward. As a result, the core undergoes major rearrangements as it changes fuels, sometimes within a span of a couple of years. It’s possible, suggests an accompanying perspective by astronomer Norbert Langer, that these rearrangements propagate to the surface and force the ejection of matter.

    For now, we’ll have to explore this possibility using models of the interiors of giant stars. But with enough survey telescopes in operation, we may have more data to test the idea against before too long.

    Nature Physics, 2017. DOI: 10.1038/NPHYS4025

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition

     
  • richardmitnick 7:59 pm on December 28, 2016 Permalink | Reply
    Tags: ars technica, , How humans survived in the barren Atacama Desert 13000 years ago   

    From ars technica: “How humans survived in the barren Atacama Desert 13,000 years ago” Revised for more Optical telescopes 

    Ars Technica
    ars technica

    12/28/2016
    Annalee Newitz

    1
    The Atacama Desert today is barren, its sands encrusted with salt. And yet there were thriving human settlements there 12,000 years ago.
    Vallerio Pilar

    Home of:

    ESO/LaSilla

    ESO/VLT at Cerro Paranal, Chile

    ESO/NRAO/NAOJ ALMA Array in Chile in the Atacama at Chajnantor plateau, at 5,000 metres

    6
    Cerro Tololo Inter-American Observatory
    Blanco 4.0-m Telescope
    SOAR 4.1-m Telescope
    Gemini South 8.1-m Telescope

    When humans first arrived in the Americas, roughly 18,000 to 20,000 years ago, they traveled by boat along the continents’ shorelines. Many settled in coastal regions or along rivers that took them inland from the sea. Some made it all the way down to Chile quite quickly; there’s evidence for a human settlement there from more than 14,000 years ago at a site called Monte Verde. Another settlement called Quebrada Maní, dating back almost 13,000 years, was recently discovered north of Monte Verde in one of the most arid deserts in the world: the Atacama, whose salt-encrusted sands repel even the hardiest of plants. It seemed an impossible place for early humans to settle, but now we understand how they did it.

    At a presentation during the American Geophysical Union meeting this month, UC Berkeley environmental science researcher Marco Pfeiffer explained how he and his team investigated the Atacama desert’s deep environmental history. Beneath the desert’s salt crust, they found a buried layer of plant and animal remains between 9,000 and 17,000 years old. There were freshwater plants and mosses, as well as snails and plants that prefer brackish water. Quickly it became obvious this land had not always been desert—what Pfeiffer and his colleagues saw suggested wetlands fed by fresh water.

    1
    Chile’s early archaeological sites, named and dated. The yellow area shows the extension of the Atacama Desert hyperarid core. Also note the surrounding mountains that block many rainy weather systems. Quaternary Science Reviews

    But where could this water have come from? The high mountains surrounding the Atacama are a major barrier to weather systems that bring rain, which is partly why the area is lifeless today. Maybe, the researchers reasoned, the water came from the mountains themselves. Based on previous studies, they already knew that rainfall in the area during that 9,000-to-17,000-year-old window was six times higher than today’s average. So they used a computer model to figure out how all that water would have drained off the mountain peaks to form streams and pools in the Atacama. “We saw that water must have been accumulating,” Pfeiffer said. As a result, the desert bloomed into a marshy ecosystem that could easily have supported a number of human settlements.

    Indeed, Pfeiffer says that his team has found evidence of human settlements in Atacama’s surrounding flatlands, which they are still investigating. Now that they understand climate change in the region, Pfeiffer added, it will be easier for archaeologists to account for the oddly large population in the area. The history of humanity in the Americas isn’t just the story of vanished peoples—it’s also the tale of lost ecosystems.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition

     