Updates from October, 2019

  • richardmitnick 1:13 pm on October 23, 2019 Permalink | Reply
    Tags: "First identification of a heavy element born from neutron star collision"

    From European Southern Observatory: “First identification of a heavy element born from neutron star collision” 

    From European Southern Observatory

    23 October 2019

    Darach Watson
    Cosmic Dawn Center (DAWN), Niels Bohr Institute, University of Copenhagen
    Copenhagen, Denmark
    Cell: +45 24 80 38 25
    Email: darach@nbi.ku.dk

    Camilla J. Hansen
    Max Planck Institute for Astronomy
    Heidelberg, Germany
    Tel: +49 6221 528-358
    Email: hansen@mpia.de

    Jonatan Selsing
    Cosmic Dawn Center (DAWN), Niels Bohr Institute, University of Copenhagen
    Copenhagen, Denmark
    Cell: +45 61 71 43 46
    Email: jselsing@nbi.ku.dk

    Bárbara Ferreira
    ESO Public Information Officer
    Garching bei München, Germany
    Tel: +49 89 3200 6670
    Email: pio@eso.org

    Newly created strontium, an element used in fireworks, has been detected in space for the first time, following observations with an ESO telescope.

    For the first time, a freshly made heavy element, strontium, has been detected in space, in the aftermath of a merger of two neutron stars. The detection was made with ESO’s X-shooter spectrograph on the Very Large Telescope (VLT), and the finding is published today in Nature. It confirms that the heavier elements in the Universe can form in neutron star mergers, providing a missing piece of the puzzle of chemical element formation.

    In 2017, following the detection of gravitational waves passing the Earth, ESO pointed its telescopes in Chile, including the VLT, to the source: a neutron star merger named GW170817. Astronomers suspected that, if heavier elements did form in neutron star collisions, signatures of those elements could be detected in kilonovae, the explosive aftermaths of these mergers. This is what a team of European researchers has now done, using data from the X-shooter instrument on ESO’s VLT.

    ESO X-shooter on VLT on UT2 at Cerro Paranal, Chile


    Following the GW170817 merger, ESO’s fleet of telescopes began monitoring the emerging kilonova explosion over a wide range of wavelengths. X-shooter in particular took a series of spectra from the ultraviolet to the near infrared. Initial analysis of these spectra suggested the presence of heavy elements in the kilonova, but astronomers could not pinpoint individual elements until now.

    “By reanalysing the 2017 data from the merger, we have now identified the signature of one heavy element in this fireball, strontium, proving that the collision of neutron stars creates this element in the Universe,” says the study’s lead author Darach Watson from the University of Copenhagen in Denmark. On Earth, strontium is found naturally in the soil and is concentrated in certain minerals. Its salts are used to give fireworks a brilliant red colour.

    Astronomers have known the physical processes that create the elements since the 1950s. Over the following decades they have uncovered the cosmic sites of each of these major nuclear forges, except one. “This is the final stage of a decades-long chase to pin down the origin of the elements,” says Watson. “We know now that the processes that created the elements happened mostly in ordinary stars, in supernova explosions, or in the outer layers of old stars. But, until now, we did not know the location of the final, undiscovered process, known as rapid neutron capture, that created the heavier elements in the periodic table.”

    Rapid neutron capture is a process in which an atomic nucleus captures neutrons quickly enough to allow very heavy elements to be created. Although many elements are produced in the cores of stars, creating elements heavier than iron, such as strontium, requires even hotter environments with lots of free neutrons. Rapid neutron capture only occurs naturally in extreme environments where atoms are bombarded by vast numbers of neutrons.
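    The timescale argument behind rapid neutron capture can be illustrated with a toy calculation. The timescales below are illustrative assumptions, not values from the study: the point is simply that when captures come much faster than beta decay, a nucleus can absorb many neutrons, and so climb to high atomic mass, before it decays.

```python
# Toy illustration of the r-process timescale argument.
# The timescales here are illustrative assumptions, not measured values.

def neutrons_captured_before_decay(capture_time_s, beta_decay_time_s):
    """Rough count of neutron captures a nucleus makes within one beta-decay lifetime."""
    return round(beta_decay_time_s / capture_time_s)

# Slow conditions: captures are rare compared with decay (s-process-like).
slow = neutrons_captured_before_decay(capture_time_s=3.15e7,   # ~1 year between captures
                                      beta_decay_time_s=1.0)   # ~1 s lifetime

# Rapid conditions: a dense neutron flux makes captures very frequent.
rapid = neutrons_captured_before_decay(capture_time_s=1e-4,
                                       beta_decay_time_s=1.0)

print(slow)   # 0: the nucleus decays before it can grow
print(rapid)  # 10000: the nucleus climbs to heavy masses before decaying
```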

    “This is the first time that we can directly associate newly created material formed via neutron capture with a neutron star merger, confirming that neutron stars are made of neutrons and tying the long-debated rapid neutron capture process to such mergers,” says Camilla Juul Hansen from the Max Planck Institute for Astronomy in Heidelberg, who played a major role in the study.

    Scientists are only now starting to better understand neutron star mergers and kilonovae. Because of the limited understanding of these new phenomena and other complexities in the spectra that the VLT’s X-shooter took of the explosion, astronomers had not been able to identify individual elements until now.

    “We actually came up with the idea that we might be seeing strontium quite quickly after the event. However, showing that this was demonstrably the case turned out to be very difficult. This difficulty was due to our highly incomplete knowledge of the spectral appearance of the heavier elements in the periodic table,” says University of Copenhagen researcher Jonatan Selsing, who was a key author on the paper.

    The GW170817 merger was the fifth detection of gravitational waves, made possible thanks to the NSF’s Laser Interferometer Gravitational-Wave Observatory (LIGO) in the US and the Virgo Interferometer in Italy.

    MIT/Caltech Advanced aLIGO

    VIRGO Gravitational Wave interferometer, near Pisa, Italy

    Located in the galaxy NGC 4993, the merger was the first, and so far the only, gravitational wave source to have its visible counterpart detected by telescopes on Earth.

    With the combined efforts of LIGO, Virgo and the VLT, we have the clearest understanding yet of the inner workings of neutron stars and their explosive mergers.


    [1] The LIGO–Virgo detection localised the source to an area on the sky of about 35 square degrees.

    Caltech/MIT Advanced aLigo Hanford, WA, USA installation

    Caltech/MIT Advanced aLigo detector installation Livingston, LA, USA

    VIRGO Gravitational Wave interferometer, near Pisa, Italy

    [2] The galaxy was only observable in the evening in August and then was too close to the Sun in the sky to be observed by September.

    [3] On the VLT, observations were taken with: the X-shooter spectrograph located on Unit Telescope 2 (UT2); the FOcal Reducer and low dispersion Spectrograph 2 (FORS2) and Nasmyth Adaptive Optics System (NAOS) – Near-Infrared Imager and Spectrograph (CONICA) (NACO) on Unit Telescope 1 (UT1); the VIsible Multi-Object Spectrograph (VIMOS) and VLT Imager and Spectrometer for mid-Infrared (VISIR) located on Unit Telescope 3 (UT3); and the Multi Unit Spectroscopic Explorer (MUSE) and High Acuity Wide-field K-band Imager (HAWK-I) on Unit Telescope 4 (UT4). The VST observed using OmegaCAM, and VISTA observed with the VISTA InfraRed CAMera (VIRCAM). Through the ePESSTO programme, the New Technology Telescope (NTT) collected visible spectra with the ESO Faint Object Spectrograph and Camera 2 (EFOSC2) and infrared spectra with the Son of ISAAC (SOFI) spectrograph. The MPG/ESO 2.2-metre telescope observed using the Gamma-Ray burst Optical/Near-infrared Detector (GROND) instrument.

    ESO FORS2 VLT mounted on Unit Telescope 1 (Antu)

    ESO/NACO on Unit Telescope 1 (UT1)

    ESO/VISIR on UT3 of the VLT

    ESO MUSE on the VLT on Yepun (UT4)

    ESO HAWK-I on the ESO VLT on Unit Telescope 4 (UT4)

    ESO OmegaCAM on VST at ESO’s Cerro Paranal observatory, with an elevation of 2,635 metres (8,645 ft) above sea level

    VIRCAM on the VISTA telescope

    ESO Faint Object Spectrograph and Camera 2 (EFOSC2) on the NTT

    ESO SofI instrument, the infrared imaging camera on the NTT

    ESO GROND imager on 2.2 meter MPG/ESO telescope at LaSilla

    [4] The comparatively small distance between Earth and the neutron star merger, 130 million light-years, made the observations possible, since merging neutron stars create weaker gravitational waves than merging black holes, which were likely the sources of the first four gravitational wave detections.

    [5] When neutron stars orbit one another in a binary system, they lose energy by emitting gravitational waves. They get closer together until, when they finally meet, some of the mass of the stellar remnants is converted into energy in a violent burst of gravitational waves, as described by Einstein’s famous equation E = mc².
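    To put Einstein’s equation in perspective, here is a worked example with an assumed figure (the article does not quote the mass actually radiated by GW170817): converting even one percent of a solar mass into energy yields an enormous number of joules.

```python
# Worked E = mc^2 example with an assumed mass figure.
c = 2.998e8        # speed of light in m/s
m_sun = 1.989e30   # one solar mass in kg

m = 0.01 * m_sun   # assumption: 1% of a solar mass converted to energy
E = m * c**2       # energy in joules

print(f"{E:.2e} J")  # about 1.79e+45 J
```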

    More information

    This research was presented in a paper to appear in Nature on 24 October 2019.

    The team is composed of D. Watson (Niels Bohr Institute & Cosmic Dawn Center, University of Copenhagen, Denmark), C. J. Hansen (Max Planck Institute for Astronomy, Heidelberg, Germany), J. Selsing (Niels Bohr Institute & Cosmic Dawn Center, University of Copenhagen, Denmark), A. Koch (Center for Astronomy of Heidelberg University, Germany), D. B. Malesani (DTU Space, National Space Institute, Technical University of Denmark, & Niels Bohr Institute & Cosmic Dawn Center, University of Copenhagen, Denmark), A. C. Andersen (Niels Bohr Institute, University of Copenhagen, Denmark), J. P. U. Fynbo (Niels Bohr Institute & Cosmic Dawn Center, University of Copenhagen, Denmark), A. Arcones (Institute of Nuclear Physics, Technical University of Darmstadt, Germany & GSI Helmholtzzentrum für Schwerionenforschung, Darmstadt, Germany), A. Bauswein (GSI Helmholtzzentrum für Schwerionenforschung, Darmstadt, Germany & Heidelberg Institute for Theoretical Studies, Germany), S. Covino (Astronomical Observatory of Brera, INAF, Milan, Italy), A. Grado (Capodimonte Astronomical Observatory, INAF, Naples, Italy), K. E. Heintz (Centre for Astrophysics and Cosmology, Science Institute, University of Iceland, Reykjavík, Iceland & Niels Bohr Institute & Cosmic Dawn Center, University of Copenhagen, Denmark), L. Hunt (Arcetri Astrophysical Observatory, INAF, Florence, Italy), C. Kouveliotou (George Washington University, Physics Department, Washington DC, USA & Astronomy, Physics and Statistics Institute of Sciences), G. Leloudas (DTU Space, National Space Institute, Technical University of Denmark, & Niels Bohr Institute, University of Copenhagen, Denmark), A. Levan (Department of Physics, University of Warwick, UK), P. Mazzali (Astrophysics Research Institute, Liverpool John Moores University, UK & Max Planck Institute for Astrophysics, Garching, Germany), E. Pian (Astrophysics and Space Science Observatory of Bologna, INAF, Bologna, Italy).

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Visit ESO in Social Media-




    ESO Bloc Icon

    ESO is the foremost intergovernmental astronomy organisation in Europe and the world’s most productive ground-based astronomical observatory by far. It is supported by 16 countries: Austria, Belgium, Brazil, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Poland, Portugal, Spain, Sweden, Switzerland and the United Kingdom, along with the host state of Chile. ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world’s most advanced visible-light astronomical observatory, and two survey telescopes: VISTA, which works in the infrared and is the world’s largest survey telescope, and the VLT Survey Telescope, the largest telescope designed to exclusively survey the skies in visible light. ESO is a major partner in ALMA, the largest astronomical project in existence. And on Cerro Armazones, close to Paranal, ESO is building the 39-metre European Extremely Large Telescope, the E-ELT, which will become “the world’s biggest eye on the sky”.

    ESO La Silla HELIOS (HARPS Experiment for Light Integrated Over the Sun)

    ESO/HARPS at La Silla

    ESO 3.6m telescope & HARPS at Cerro LaSilla, Chile, 600 km north of Santiago de Chile at an altitude of 2400 metres.

    MPG/ESO 2.2 meter telescope at Cerro La Silla, Chile, 600 km north of Santiago de Chile at an altitude of 2400 metres

    ESO/Cerro LaSilla, 600 km north of Santiago de Chile at an altitude of 2400 metres.

    ESO VLT at Cerro Paranal in the Atacama Desert, elevation 2,635 m (8,645 ft): ANTU (UT1; the Sun), KUEYEN (UT2; the Moon), MELIPAL (UT3; the Southern Cross), and YEPUN (UT4; Venus, as evening star). Credit: J.L. Dauvergne & G. Hüdepohl, atacama photo

    2009 ESO VLTI Interferometer image, Cerro Paranal, elevation 2,635 metres (8,645 ft) above sea level: ANTU (UT1; the Sun), KUEYEN (UT2; the Moon), MELIPAL (UT3; the Southern Cross), and YEPUN (UT4; Venus, as evening star).

    ESO VLT 4 lasers on Yepun

    Glistening against the awesome backdrop of the night sky above ESO’s Paranal Observatory, four laser beams project out into the darkness from Unit Telescope 4 (UT4) of the VLT.

    ESO/NTT at Cerro La Silla, Chile, at an altitude of 2400 metres

    ESO VLT Survey telescope

    Part of ESO’s Paranal Observatory, at an elevation of 2,635 metres (8,645 ft) above sea level, the VISTA Telescope observes the brilliantly clear skies above the Atacama Desert of Chile. Credit: ESO/Y. Beletsky

    ESO/NRAO/NAOJ ALMA Array in Chile in the Atacama at Chajnantor plateau, at 5,000 metres

    ESO/E-ELT, to be built on top of Cerro Armazones in the Atacama Desert of northern Chile, at the summit of the mountain at an altitude of 3,060 metres (10,040 ft)

    ESO/APEX high on the Chajnantor plateau in Chile’s Atacama region, at an altitude of over 4,800 m (15,700 ft)

    Leiden MASCARA instrument, La Silla, located in the southern Atacama Desert 600 kilometres (370 mi) north of Santiago de Chile at an altitude of 2,400 metres (7,900 ft)

    Leiden MASCARA cabinet at ESO Cerro la Silla located in the southern Atacama Desert 600 kilometres (370 mi) north of Santiago de Chile at an altitude of 2,400 metres (7,900 ft)

    ESO Next Generation Transit Survey at Cerro Paranal, 2,635 metres (8,645 ft) above sea level

    ESO SPECULOOS telescopes, four 1 m-diameter robotic telescopes at ESO Paranal Observatory, 2,635 metres (8,645 ft) above sea level

    ESO TAROT telescope at Paranal, 2,635 metres (8,645 ft) above sea level

    ESO ExTrA telescopes at Cerro LaSilla at an altitude of 2400 metres

    A novel gamma-ray telescope under construction on Mount Hopkins, Arizona, part of a large project known as the Cherenkov Telescope Array, composed of hundreds of similar telescopes to be situated in the Canary Islands and Chile. The telescope on Mount Hopkins will be fitted with a prototype high-speed camera, assembled at the University of Wisconsin–Madison and capable of taking pictures at a billion frames per second. Credit: Vladimir Vassiliev

  • richardmitnick 12:24 pm on October 23, 2019 Permalink | Reply
    Tags: Cray Archer2

    From insideHPC: “ARCHER2 to be first Cray Shasta System in Europe” 

    From insideHPC

    October 22, 2019

    Today Cray, a Hewlett Packard Enterprise company, announced a £48 million contract award in the UK to expand its high-performance computing capabilities with Cray’s next-generation Shasta supercomputer. The new ARCHER2 supercomputer will be the first Shasta system announced in EMEA and the second system worldwide used for academic research. ARCHER2 will be the UK’s most powerful supercomputer and will be equipped with the revolutionary Slingshot interconnect, Cray ClusterStor high-performance storage, the Cray Shasta Software platform, and 2nd Gen AMD EPYC processors. The new supercomputer will be 11X higher performance than its predecessor, ARCHER.


    UK Research and Innovation (UKRI) has once again contracted Cray to build the follow-up to the ARCHER supercomputer. ARCHER2 is reported to offer up to 11x the throughput of the previous ARCHER system, which was put into service back in late 2013. ARCHER2 will be powered by roughly 12,000 EPYC Rome 64-core CPUs across 5,848 compute nodes, each node carrying two of the 64-core behemoths. The total core count is 748,544 (1,497,088 threads), with 1.57 PB of memory for the entire system. The CPU speed is listed as 2.2 GHz, which suggests this is the base clock; that would point to EPYC 7742 CPUs with a 225 W TDP. Specs like these are impressive but will also generate significant heat, so ARCHER2 will be cooled by 23 Shasta Mountain direct-liquid-cooled cabinets and associated liquid-cooling cabinets. The back end for connectivity is Cray’s next-generation Slingshot 100 Gbps network. AMD GPUs are also part of the system, but I have not yet found information on which AMD GPU models will be used. Estimated peak performance is 28 PFLOP/s, and the transition from ARCHER to ARCHER2 will begin in Q1 2020 and be completed in late 1H 2020, as long as things go as planned.
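    The quoted core counts can be cross-checked with a couple of lines of arithmetic:

```python
# Cross-checking the ARCHER2 figures quoted above:
# 5,848 dual-socket nodes, 64-core CPUs, two hardware threads per core (SMT).
nodes = 5848
cpus_per_node = 2
cores_per_cpu = 64
threads_per_core = 2

total_cpus = nodes * cpus_per_node              # 11696, i.e. roughly 12,000 CPUs
total_cores = total_cpus * cores_per_cpu        # 748544
total_threads = total_cores * threads_per_core  # 1497088

print(total_cpus, total_cores, total_threads)
```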

    “ARCHER2 will be an important resource for the UK’s research community, providing them with the capability to pursue investigations which are not possible using current resources,” said Lynn Gladden, executive chair of the Engineering and Physical Sciences Research Council (EPSRC). “The new system delivered by Cray will greatly increase the potential for researchers to make discoveries across fields such as physics, chemistry, healthcare and technology development.”

    The new Cray Shasta-based ARCHER2 system will replace the existing ARCHER Cray XC30 in 2020 and be an even greater capability resource for academic researchers and industrial users from the UK, Europe and the rest of the world. At rates previously unattainable, the new supercomputer will achieve 11X higher performance with only a 27% increase in grid power. The ARCHER2 project provides resources for exploration in research disciplines including oil and gas, sustainability and natural resources, mental and physical health, oceanography, atomistic structures, and technology advancement.
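    The quoted performance and power figures imply a substantial efficiency gain, which a quick calculation makes explicit:

```python
# 11x the performance for a 27% increase in grid power implies a large
# jump in energy efficiency (performance per watt).
perf_ratio = 11.0    # ARCHER2 vs ARCHER performance
power_ratio = 1.27   # ARCHER2 vs ARCHER grid power

perf_per_watt_gain = perf_ratio / power_ratio
print(f"{perf_per_watt_gain:.1f}x")  # about 8.7x more work per watt
```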

    “We’re pleased to continue supporting UKRI’s mission and provide the most advanced high-end computing resources for the UK’s science and research endeavors,” said Peter Ungaro, president and CEO at Cray, a Hewlett Packard Enterprise company. “As traditional modeling and simulation applications and workflows converge with AI and analytics, a new Exascale Era architecture is required. Shasta will uniquely provide this new capability and ARCHER2 will be the first of its kind in Europe, as its next-gen architecture will provide UK and neighboring scientists and researchers the ability to meet their research requirements across a broad range of disciplines, faster.”

    The new Shasta system will be the third Cray supercomputer delivered to UKRI, with the previous systems being HECToR and ARCHER. ARCHER2 will be supported by 2nd Gen AMD EPYC processors.


    “AMD is incredibly proud to continue our collaboration with Cray to deliver what will be the most powerful supercomputer in the UK, helping to process data faster and reduce the time it takes to reach critical scientific conclusions,” said Forrest Norrod, senior vice president and general manager, AMD Datacenter and Embedded Systems Group. “Investments in high-performance computing technology are imperative to keep up with today’s increasingly complex problems and explosive data growth. The 2nd Gen AMD EPYC processors paired with Cray Shasta will provide a powerful resource for the next generation of research in the UK when ARCHER2 is delivered next year.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

  • richardmitnick 11:54 am on October 23, 2019 Permalink | Reply
    Tags: "The science of sensations", As a field we have really struggled in identifying novel pain killers, Ishmail Abdus-Saboor, The biology of touch pain and itch, The opioid epidemic

    From Penn Today: “The science of sensations” 

    From Penn Today

    October 22, 2019
    Katherine Unger Baillie

    His ‘naïveté’ when it came to neuroscience helped biologist Ishmail Abdus-Saboor pose questions that are reshaping research on pain and touch.

    Ishmail Abdus-Saboor has carved out a path studying the biology of touch, pain, and itch.

    The touch of a feather, the itch of a mosquito bite, the prick of a needle: The body is capable of distinguishing and responding to all of these sensations in a near instantaneous relay, from skin to brain and back again.

    “Our brain is constantly computing these things, and in healthy people it never gets it wrong,” says Ishmail Abdus-Saboor, a biologist in Penn’s School of Arts and Sciences.

    The details that drive these processes are now at the heart of Abdus-Saboor’s research. Using a variety of techniques and models, he and his lab—established at Penn last year—seek to tease out the nervous system pathways involved in translating sensations to the brain, with a particular focus on acute and chronic pain.

    His work has taken on a new significance in light of the opioid epidemic.

    “As a field we have really struggled in identifying novel pain killers,” he says. “This is why we have an overreliance on opioids.”

    Getting to the bottom of basic mechanisms in pain sensation has the potential to uncover new pathways that could be targeted with alternative medications. And with a new technique for applying a measurement to pain itself, Abdus-Saboor has in hand a platform that could be used to screen new drugs or even help clinicians one day evaluate their patients’ discomfort in a much more rigorous way than is currently available.

    Abdus-Saboor and research technician William Foster go over a still image from a video of a mouse reacting to a stimulus. Foster tracks the movement of a paw, training a machine learning program to analyze the animal’s movements on its own.

    Animal behavior and biology got their hooks into Abdus-Saboor when he was a child. Growing up in Philadelphia’s Germantown neighborhood, he fashioned a laboratory in his home at age 14, winning a citywide science competition for his investigations of crayfish.

    He carried that fascination with him through his undergrad years at North Carolina A&T State University, pursuing animal science as a pre-vet student. A summer in a laboratory at Penn refined that interest. The mysteries contained in the molecules and genes of animals began to emerge as the most captivating to Abdus-Saboor.

    He wound up pursuing his graduate studies with Meera Sundaram at Penn in the Perelman School of Medicine, focusing on the genetics of the nematode worm Caenorhabditis elegans. But he made a conscious choice to shift gears as he embarked on two postdoctoral fellowships.

    “Thinking about running my own lab one day, I was considering which area has the biggest growth potential in biomedical research,” he recalls. “The brain is the last frontier; it’s the least well-understood organ. I thought that, if I could apply some of the tools that I’d been learning in genetics and molecular biology toward the study of the nervous system, then perhaps I could make some important discoveries and look at things from a different vantage point.”

    First in a postdoctoral fellowship with Benjamin Shykind at Cornell University, and then in a second position working with Wenqin Luo back at Penn, Abdus-Saboor played catch-up in the field of neuroscience.

    “Basically, every single approach that I worked on was new to me,” he says. “But I think that naïveté helped me.”

    Specifically, Abdus-Saboor started asking questions about the common techniques used to evaluate responses to sensory stimuli in mouse studies and wasn’t satisfied with the answers. Certain assays, for example, relied on a binary response—either the animals responded to a stimulus or they didn’t—a measure that struck Abdus-Saboor as “rather crude and possibly biased.”

    Over the last few years, as he wrapped up his postdoc with Luo and established his own lab at Penn, he set out to create a more refined scale for evaluating these types of responses. His technique relies on high-speed videography capable of capturing 1,000 frames per second. In a paper published in August in Cell Reports, he, colleague Nathan Fried, Luo, and others reported the creation of a nuanced mouse pain scale that could effectively differentiate responses to a variety of sensory stimuli.

    “Taking lessons from other model systems, mainly fruit flies and zebrafish, people have been using high-speed cameras to slow down behaviors that we can’t see with the naked eye,” says Abdus-Saboor. “I had the hypothesis that if we did this, maybe there was a lot more information we could extract that could inform us and teach us about what the animal is experiencing. And that turned out to be the case.”

    Processing frames from these recordings manually, which is how the researchers initially completed the study, was a tedious task. But working with biostatisticians, computational biologists, and machine-learning specialists, Abdus-Saboor and members of his lab were able to streamline the process, and, in collaboration with departmental colleague Joshua Plotkin, are working to automate the video frame-by-frame analysis.

    “We want others to easily adopt this technology, and automation would help avoid the potential error and variability of human scoring,” he says. “There are emerging technologies that are allowing us to do this.”
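    To make the idea concrete, here is a minimal sketch of the kind of per-frame feature extraction such a pipeline might perform. The function name, threshold, and features are hypothetical illustrations, not the lab’s actual analysis: assume a tracker has already produced one paw height per frame from the 1,000 fps video.

```python
# Hypothetical sketch of per-frame feature extraction from high-speed video.
# Assumes a tracker has already produced one paw height (mm) per frame;
# the function name, threshold, and features are illustrative only.

def summarize_withdrawal(paw_heights_mm, fps=1000):
    """Reduce a paw-height trace to a few summary features."""
    peak = max(paw_heights_mm)
    # frames where the paw is raised above a 1 mm threshold
    raised_frames = sum(1 for h in paw_heights_mm if h > 1.0)
    duration_ms = raised_frames * 1000 / fps
    return {"peak_height_mm": peak, "raised_ms": duration_ms}

# Illustrative trace at 1,000 fps: the paw lifts, peaks at 4 mm, then settles.
trace = [0.0, 0.5, 2.0, 4.0, 3.0, 1.5, 0.8, 0.2]
print(summarize_withdrawal(trace))  # {'peak_height_mm': 4.0, 'raised_ms': 4.0}
```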

    So far, they’ve tested the platform using both male and female mice representing a variety of genetic types and have gotten consistent results across the board.

    Abdus-Saboor observes as Leah Middleton, a third-year graduate student, works at the bench. Since its launch last year, Abdus-Saboor’s lab has grown to include two postdocs, three doctoral students, and two research technicians.

    As his lab has developed this technology, they’ve been working in parallel to more deeply understand the nervous system circuits that produce the sensation of pain, especially in the context of chronic pain. People who suffer from chronic pain become more sensitive to various types of touch, even an otherwise innocuous application of warmth or pressure.

    “This is the chronic pain we hear a lot about now, in this opioid epidemic era,” Abdus-Saboor says.

    In his relatively short time as a faculty member, he’s already struck up collaborations with researchers working on pain elsewhere in the University to advance the science of treating pain. In the School of Dental Medicine, he and Claire Mitchell have worked together on a study of dental pain. Abdus-Saboor has also had productive conversations with researchers, such as Penn Dental Medicine’s Elliot Hersh, who are interested in applying his high-speed camera platform in clinical settings to objectively evaluate the patients’ pain and prescribe painkilling drugs appropriately.

    “We’re not there yet, but these are conversations we’re starting to have,” says Abdus-Saboor. “If this technology could evolve into the clinic? That would be a wonderful thing.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    U Penn campus

    Academic life at Penn is unparalleled, with 100 countries and every U.S. state represented in one of the Ivy League’s most diverse student bodies. Consistently ranked among the top 10 universities in the country, Penn enrolls 10,000 undergraduate students and welcomes an additional 10,000 students to our world-renowned graduate and professional schools.

    Penn’s award-winning educators and scholars encourage students to pursue inquiry and discovery, follow their passions, and address the world’s most challenging problems through an interdisciplinary approach.

  • richardmitnick 11:21 am on October 23, 2019 Permalink | Reply
    Tags: "An artificial retina that could help restore sight to the blind", A new technique helps overcome one major barrier: heat

    From Stanford University Engineering: “An artificial retina that could help restore sight to the blind” 

    From Stanford University Engineering

    October 10, 2019
    Andrew Myers

    A new technique helps overcome one major barrier: heat.

    Without this advance, the chips required to build an artificial retina would burn human eye tissue. | Unsplash/Createria, Pixabay/DavidZydd

    For more than a decade, researchers have been working to create artificial digital retinas that can be implanted in the eye to allow the blind to see again. Many challenges stand in the way, but researchers at Stanford University may have found the key to solving one of the most vexing: heat. The artificial retina requires a very small computer chip with many metal electrodes poking out. The electrodes first record the activity of the neurons around them to create a map of cell types. This information is then used to transmit visual data from a camera to the brain. Unfortunately, the eye produces so much data during recording that the electronics get too darn hot.

    “The chips required to build a high-quality artificial retina would essentially fry the human tissue they are trying to interface with,” says E.J. Chichilnisky, a professor in the Neurosurgery and Ophthalmology departments, who is on Stanford’s artificial retina team.

    Members of the team, including Chichilnisky and his collaborators in Stanford’s Electrical Engineering and Computer Science departments, recently announced they have devised a way to solve that problem by significantly compressing the massive amounts of visual data that all those neurons in the eye create. They discuss their advance in a study published in the IEEE Transactions on Biomedical Circuits and Systems.

    To convey visual information, neurons in the retina send electrical impulses, known as spikes, to the brain. The problem is that the digital retina needs to record and decode those spikes to understand the properties of the neurons, but that generates a lot of heat in the digitization process, even with only a few hundred electrodes used in today’s prototypes. The first true digital retina will need to have tens of thousands of such electrodes, complicating the issue further.

    Boris Murmann, a professor of electrical engineering on the retina project, says the team found a way to extract the same level of visual understanding using less data. By better understanding which signal samples matter and which can be ignored, the team was able to reduce the amount of data that has to be processed. It’s a bit like being at a party trying to extract a single coherent conversation amid the din of a crowded room — a few voices matter a lot, but most are noise and can be ignored.
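    The party analogy can be made concrete with a minimal sketch of the underlying idea: keep only the samples that rise above the noise floor and discard the rest. The data and threshold here are illustrative assumptions, not the Stanford team’s actual design.

```python
# Minimal sketch of the data-reduction idea: keep only samples whose
# amplitude exceeds the noise floor. Data and threshold are illustrative;
# this is not the Stanford team's actual circuit design.

def keep_significant(samples, threshold):
    """Return (index, value) pairs for samples exceeding the threshold."""
    return [(i, s) for i, s in enumerate(samples) if abs(s) > threshold]

raw = [0.1, -0.2, 0.05, 3.8, -4.1, 0.3, 0.0, 2.9, -0.1, 0.2]
kept = keep_significant(raw, threshold=1.0)

print(kept)                  # [(3, 3.8), (4, -4.1), (7, 2.9)]
print(len(kept) / len(raw))  # 0.3 -> 70% of the samples can be dropped
```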

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Stanford Engineering has been at the forefront of innovation for nearly a century, creating pivotal technologies that have transformed the worlds of information technology, communications, health care, energy, business and beyond.

    The school’s faculty, students and alumni have established thousands of companies and laid the technological and business foundations for Silicon Valley. Today, the school educates leaders who will make an impact on global problems and seeks to define what the future of engineering will look like.

    Our mission is to seek solutions to important global problems and educate leaders who will make the world a better place by using the power of engineering principles, techniques and systems. We believe it is essential to educate engineers who possess not only deep technical excellence, but the creativity, cultural awareness and entrepreneurial skills that come from exposure to the liberal arts, business, medicine and other disciplines that are an integral part of the Stanford experience.

    Our key goals are to:

    Conduct curiosity-driven and problem-driven research that generates new knowledge and produces discoveries that provide the foundations for future engineered systems
    Deliver world-class, research-based education to students and broad-based training to leaders in academia, industry and society
    Drive technology transfer to Silicon Valley and beyond with deeply and broadly educated people and transformative ideas that will improve our society and our world.

    The Future of Engineering

    The engineering school of the future will look very different from what it looks like today. So, in 2015, we brought together a wide range of stakeholders, including mid-career faculty, students and staff, to address two fundamental questions: In what areas can the School of Engineering make significant world‐changing impact, and how should the school be configured to address the major opportunities and challenges of the future?

    One key output of the process is a set of 10 broad, aspirational questions on areas where the School of Engineering would like to have an impact in 20 years. The committee also returned with a series of recommendations that outlined actions across three key areas — research, education and culture — where the school can deploy resources and create the conditions for Stanford Engineering to have significant impact on those challenges.

    Stanford University

    Leland and Jane Stanford founded the University to “promote the public welfare by exercising an influence on behalf of humanity and civilization.” Stanford opened its doors in 1891, and more than a century later, it remains dedicated to finding solutions to the great challenges of the day and to preparing our students for leadership in today’s complex world. Stanford is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto. Since 1952, more than 54 Stanford faculty, staff, and alumni have won the Nobel Prize, including 19 current faculty members.


  • richardmitnick 10:53 am on October 23, 2019 Permalink | Reply
    Tags: "Extreme solar storms may be more frequent than previously thought"

    From AGU GeoSpace Blog: “Extreme solar storms may be more frequent than previously thought” 

    From AGU GeoSpace Blog

    4 October 2019
    Abigail Eisenstadt

    In a new study, researchers propose why an extreme solar storm in 1859 was so damaging to Earth’s magnetic field. By comparing the storm with other extreme storms in history, they suggest it was likely not unique.

    The September 1859 Carrington Event ejected concentrated solar plasma towards Earth, disrupting the planet’s magnetic field and leading to widespread telegraph disturbances and even sporadic fires. New research in AGU’s journal Space Weather indicates storms like the Carrington Event are not as rare as scientists thought and could happen every few decades, seriously damaging modern communication and navigation systems around the globe.

    “The Carrington Event was considered to be the worst-case scenario for space weather events against the modern civilization… but if it comes several times a century, we have to reconsider how to prepare against and mitigate that kind of space weather hazard,” said Hisashi Hayakawa, lead author of the new study and an astrophysicist at Osaka University in Osaka, Japan and Rutherford Appleton Laboratory in the United Kingdom.

    This visualization depicts what a coronal mass ejection might look like as it interacts with the interplanetary medium and magnetic forces. Credit: NASA / Steele Hill

    The Carrington Event is one of the most extreme solar storms observed in the last two centuries and was caused by a large coronal mass ejection, an emission of plasma from the Sun’s outermost atmosphere. Depending on a coronal mass ejection’s strength and trajectory, it can significantly distort Earth’s magnetic field, causing intense magnetic storms and global auroras and damaging any technology that relies on electromagnetic waves.

    Scientists previously thought events like the Carrington Event were very rare, happening maybe once a century. They knew the Carrington Event caused low-latitude auroras and failure of telegraph equipment throughout the globe, but they had mostly studied records from the Western Hemisphere, leaving a considerable data gap in the Eastern Hemisphere.

    In the new study, Hayakawa and his colleagues wanted to improve reconstructions of the Carrington Event and compare it with other extreme storms. They organized an international collaboration and compiled historical observations of auroras during the storm from the Eastern Hemisphere and the Iberian Peninsula to fill the gaps left by studying only Western Hemisphere records.

    The researchers collected observations of the storm’s auroras from the Russian Central Observatory, Japanese diaries, and newspapers from Portugal, Spain, Australia, New Zealand, Mexico and Brazil. They then compared these observations to previous reports of the storm from the Western Hemisphere, like ship logs, contemporary scientific journals, and more newspapers.

    An image from NASA’s Solar Dynamics Observatory shows a giant sunspot present in 2014. The sunspot spanned 80,000 miles. Credit: NASA/SDO


    The researchers also analyzed several unpublished sunspot drawings made by European astronomers during the 1859 storm. Analyzing these drawings allowed them to determine where on the Sun the storm originated and track how the sunspot group grew and shrank over time.

    The newly recovered historical documents suggest the Carrington sunspot group probably launched multiple outbursts from early August to early October, including a preceding solar storm in late August 1859. The researchers estimate this earlier event happened around August 27, 1859, and sent out separate coronal mass ejections strong enough to impact Earth’s magnetic field. The August storm may have played a role in making the September Carrington Event so intense.

    After reconstructing the storms around the Carrington Event, the researchers compared it to other storms in 1872, 1909, 1921, and 1989 and found two of them – those in 1872 and 1921 – were comparable in intensity. The 1989 event caused a serious blackout throughout Quebec, Canada. This means events like the Carrington Event may not be as legendary and elusive as once thought, and scientists need to consider the hazards of such events more seriously than before, according to Hayakawa.

    “While the 1859 storm was certainly one of the most extreme events, this seems at best comparable to the 1872 storm and 1921 storm in terms of its intensity,” he said. “So, the Carrington event is no longer something unique. This fact may require us to reconsider the occurrence frequency of this kind of ‘worst-case scenario’ of space weather events.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    GeoSpace is a blog on Earth and space science, managed by AGU’s Public Information staff. The blog features posts by AGU writers and guest contributors on all sorts of relevant science topics, but with a focus on new research and geo and space sciences-related stories that are currently in the news.

    Do you have ideas on topics we should be covering? Would you like to contribute a guest post to the blog? Contact Peter Weiss at pweiss@agu.org.

  • richardmitnick 10:36 am on October 23, 2019 Permalink | Reply
    Tags: "Radio noise maps show where emergency communications could get tricky", Letting emergency response teams know what spots to avoid, U.S. Army Cold Regions Research and Engineering Laboratory in Hanover New Hampshire

    From AGU GeoSpace Blog: “Radio noise maps show where emergency communications could get tricky” 

    From AGU GeoSpace Blog

    23 October 2019
    Erin I. Garcia de Jesus

    Researchers have created a street-level map of disruptive radio noise in Boston, they report in a new study. The study’s findings suggest radio noise, which could obstruct first response or military communications, is persistent in urban environments and knowing its patterns could help make communications more reliable.

    “Having a map of where highly intense noise regions are can help you understand why your communications aren’t working,” said Daniel Breton, a geophysicist at the U.S. Army Cold Regions Research and Engineering Laboratory in Hanover, New Hampshire and lead author of the new study in AGU’s journal Radio Science.

    “If you have that map and you’re struggling with communications, wouldn’t it be great to know ‘I can move 50 meters down the street and be in the clear’ or ‘this particular neighborhood is notorious for noise sources, we need to get on a rooftop’?” Breton said.

    Radio signals are broadcast through the air to send information to devices such as televisions, smartphones or satellite communication antennas. Unwanted noise from electrical switches or neon signs can interfere with these signals and delay critical alerts from reaching their destination.

    The new study presents a map showing where such radio noise exists at street level in Boston, which could help ensure radios or satellite phones will operate in critical situations and let emergency response teams know what spots to avoid.

    Maps show the median noise power along routes in downtown Boston (left) and North End (right). Lighter colors indicate higher amounts of noise. Credit: Breton et al. 2019

    Few studies have characterized potential radio interference at such a small scale. Previous work relied on measurements from static locations or aircraft, which provided a fixed estimate of noise in a specific area, sometimes an entire city. But buildings, for example, can block some radio waves, or generate them, and might influence where noise is found.

    A survey from 1968 generated a map of radio noise using measurements taken from a truck while driving around San Antonio, Texas. The results suggested noise varied from street to street and was linked to cars, possibly from sparking ignitions.

    But the electromagnetic environment has vastly changed since then, Breton said. Today, there may be different sources of radio noise and it’s unclear how they are spread in the surrounding area.

    A cumbersome backpack

    In the new study, Breton and his colleagues wanted to pinpoint how far apart hotspots of radio noise are in modern urban environments. They measured background interference at one-meter (3.2-foot) intervals at three unused frequencies during business hours. These correspond to parts of the electromagnetic spectrum reserved for federal use, and radio noise in this range could interfere with emergency communications.

    Rather than riding in trucks to maneuver around the streets of Boston, the team got in their steps. They carried radiofrequency-monitoring equipment in backpacks sporting a towering radio antenna, a recording device and an aluminum “radio-proof box” containing a laptop – a key feature to prevent the computer from contaminating their data. In total, the contraption weighed 15.4 kilograms (34 pounds).

    Caitlin Haedrich, a physical scientist at the U.S. Army Cold Regions Research and Engineering Laboratory, carried a 34-pound backpack containing radio wave monitoring equipment around the streets of Boston. Credit: Courtesy of Daniel Breton.

    “We definitely got some looks,” said Caitlin Haedrich, a physical scientist at the U.S. Army Cold Regions Research and Engineering Laboratory and co-author of the study. “People clear the way for you on the sidewalk when you look like that.”

    With their radiofrequency-monitoring backpacks, the researchers lapped around downtown Boston and the North End. These areas are packed with easy-to-divide neighborhoods that gave Breton, Haedrich and their team a chance to sample distinct zones, such as a street populated with skyscrapers or residential areas.

    Part of the neighborhood

    The group found radio noise signals varied significantly along Boston’s streets. Downtown, for instance, noise power differed by more than a factor of 1,000 between certain areas along the 2-kilometer (1.2-mile) route.
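For scale, a factor-of-1000 spread in power corresponds to 30 decibels, the unit radio engineers typically use for such ratios; a quick check:

```python
import math

# A factor-of-1000 spread in noise power, expressed in decibels:
# dB = 10 * log10(power ratio).
ratio = 1000
db = 10 * math.log10(ratio)
print(db)  # 30.0
```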

    These signals also had repeatable patterns. Some were clustered in specific zones measuring from 20 to 100 meters (65 to 328 feet) long, while other spots had less interference.

    Breton isn’t sure what the sources of noise are – though lightbulbs, laser printers or computer power supplies could be culprits – but they don’t appear to move over time, he said.

    Surprisingly, noise clusters were not associated with traffic, contrary to findings from the 1968 study in Texas. The team measured the lowest noise values along heavily trafficked Atlantic Avenue in downtown Boston, suggesting that buildings may produce the bulk of interfering radio waves.

    “Noise is part of the neighborhood,” Breton said. “There are certain corners you’re going to walk by and there’s going to be intense noise, which may impact a variety of systems.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    GeoSpace is a blog on Earth and space science, managed by AGU’s Public Information staff. The blog features posts by AGU writers and guest contributors on all sorts of relevant science topics, but with a focus on new research and geo and space sciences-related stories that are currently in the news.

    Do you have ideas on topics we should be covering? Would you like to contribute a guest post to the blog? Contact Peter Weiss at pweiss@agu.org.

  • richardmitnick 10:14 am on October 23, 2019 Permalink | Reply
    Tags: "Humpback whale population on the rise after near miss with extinction"

    From University of Washington: “Humpback whale population on the rise after near miss with extinction” 


    From University of Washington

    October 21, 2019
    Dan DiNicola, School of Aquatic and Fishery Sciences

    A population of humpback whales in the South Atlantic has rebounded from near extinction, a new study shows. Credit: iStock.com/Martin Hristov

    A population of humpback whales in the South Atlantic has rebounded from the brink of extinction.

    Intense pressure from the whaling industry in the 20th century saw the western South Atlantic population of humpbacks diminish to only 450 whales. It is estimated that 25,000 whales were caught over approximately 12 years in the early 1900s.

    Protections were put in place in the 1960s as scientists noticed worldwide that populations were declining. In the mid-1980s, the International Whaling Commission issued a moratorium on all commercial whaling, offering further safeguards for the struggling population.

    A new study co-authored by Grant Adams, John Best and André Punt from the University of Washington’s School of Aquatic and Fishery Sciences shows the western South Atlantic humpback (Megaptera novaeangliae) population has grown to 25,000. Researchers believe this new estimate is now close to pre-whaling numbers.

    The findings were published Oct. 16 in the journal Royal Society Open Science.

    “We were surprised to learn that the population was recovering more quickly than past studies had suggested,” said Best, a UW doctoral student.

    A western South Atlantic humpback mother with her calf. Credit: L. Candisani/Courtesy of Instituto Aqualie

    The study follows a previous assessment conducted by the International Whaling Commission between 2006 and 2015. Those findings indicated the population had only recovered to about 30% of its pre-exploitation numbers. Since that assessment was completed, new data have come to light, providing more accurate information on catches — including struck-and-lost rates — as well as genetics and life history.

    “Accounting for pre-modern whaling and struck-and-lost rates, where whales were shot or harpooned but escaped and later died, made us realize the population was more productive than we previously believed,” said Adams, a UW doctoral student who helped construct the new model.

    By incorporating detailed records from the whaling industry at the outset of commercial exploitation, researchers have a good idea of the size of the original population. Current population estimates are made from a combination of air- and ship-based surveys, along with advanced modeling techniques.

    The model built for this study provides scientists with a more comprehensive look at the recovery and current status of the humpback population. The authors anticipate it can be used to determine population recovery in other species in more detail as well.

    “We believe that transparency in science is important,” said Adams. “The software we wrote for this project is available to the public and anyone can reproduce our findings.”

    Lead author Alex Zerbini of the NOAA Alaska Fisheries Science Center’s Marine Mammal Laboratory stressed the importance of incorporating complete and accurate information when conducting these assessments, and of providing population assessments without bias. These findings come as good news, he said, providing an example of how an endangered species can come back from near extinction.

    “Wildlife populations can recover from exploitation if proper management is applied,” Zerbini said.

    The study also looks at how the revival of South Atlantic humpbacks may have ecosystem-wide impacts. Whales compete with other predators, like penguins and seals, for krill as their primary food source. Krill populations may further be impacted by warming waters due to climate change, compressing their range closer to the poles.

    “Long-term monitoring of populations is needed to understand how environmental changes affect animal populations,” said Zerbini.

    Other co-authors are Phillip Clapham of Alaska Fisheries Science Center and Jennifer Jackson of the British Antarctic Survey.

    This research was funded by the Pew Bertarelli Ocean Legacy Project, the U.S. National Marine Fisheries Service-National Oceanic and Atmospheric Administration, the British Antarctic Survey and the University of Washington.

    For more information, contact Zerbini at alex.zerbini@noaa.gov, Best at jkbest@uw.edu and Adams at adamsgd@uw.edu.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The University of Washington is one of the world’s preeminent public universities. Our impact on individuals, on our region, and on the world is profound — whether we are launching young people into a boundless future or confronting the grand challenges of our time through undaunted research and scholarship. Ranked number 10 in the world in Shanghai Jiao Tong University rankings and educating more than 54,000 students annually, our students and faculty work together to turn ideas into impact and in the process transform lives and our world.
    So what defines us —the students, faculty and community members at the University of Washington? Above all, it’s our belief in possibility and our unshakable optimism. It’s a connection to others, both near and far. It’s a hunger that pushes us to tackle challenges and pursue progress. It’s the conviction that together we can create a world of good. Join us on the journey.

  • richardmitnick 10:00 am on October 23, 2019 Permalink | Reply
    Tags: "Antarctic ice cliffs may not contribute to sea-level rise as much as predicted"

    From MIT News: “Antarctic ice cliffs may not contribute to sea-level rise as much as predicted” 


    From MIT News

    October 21, 2019
    Jennifer Chu

    The Getz Ice Shelf in West Antarctica. Image: NASA/Jeremy Harbeck

    Study finds even the tallest ice cliffs should support their own weight rather than collapsing catastrophically.

    Antarctica’s ice sheet spans close to twice the area of the contiguous United States, and its land boundary is buttressed by massive, floating ice shelves extending hundreds of miles out over the frigid waters of the Southern Ocean. When these ice shelves collapse into the ocean, they expose towering cliffs of ice along Antarctica’s edge.

    Scientists have assumed that ice cliffs taller than 90 meters (about the height of the Statue of Liberty) would rapidly collapse under their own weight, contributing to more than 6 feet of sea-level rise by the end of the century — enough to completely flood Boston and other coastal cities. But now MIT researchers have found that this particular prediction may be overestimated.

    In a paper published today in Geophysical Research Letters, the team reports that in order for a 90-meter ice cliff to collapse entirely, the ice shelves supporting the cliff would have to break apart extremely quickly, within a matter of hours — a rate of ice loss that has not been observed in the modern record.

    “Ice shelves are about a kilometer thick, and some are the size of Texas,” says MIT graduate student Fiona Clerc. “To get into catastrophic failures of really tall ice cliffs, you would have to remove these ice shelves within hours, which seems unlikely no matter what the climate-change scenario.”

    If a supporting ice shelf were to melt away over a longer period of days or weeks, rather than hours, the researchers found that the remaining ice cliff wouldn’t suddenly crack and collapse under its own weight, but instead would slowly flow out, like a mountain of cold honey that’s been released from a dam.

    “The current worst-case scenario of sea-level rise from Antarctica is based on the idea that cliffs higher than 90 meters would fail catastrophically,” says Brent Minchew, assistant professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “We’re saying that scenario, based on cliff failure, is probably not going to play out. That’s something of a silver lining. That said, we have to be careful about breathing a sigh of relief. There are plenty of other ways to get rapid sea-level rise.”

    Clerc is the lead author of the new paper, along with Minchew, and Mark Behn of Boston College.

    Silly putty-like behavior

    In a warming climate, as Antarctica’s ice shelves collapse into the ocean, they expose towering cliffs of grounded ice, or ice over land. Without the buttressing support of ice shelves, scientists have assumed that the continent’s very tall ice cliffs would collapse, calving into the ocean, to expose even taller cliffs further inland, which would themselves fail and collapse, initiating a runaway ice-sheet retreat.

    Today, there are no ice cliffs on Earth that are taller than 90 meters, and scientists assumed this is because cliffs any taller than that would be unable to support their own weight.

    Clerc, Minchew, and Behn took on this assumption, wondering whether and under what conditions ice cliffs 90 meters and taller would physically collapse. To answer this, they developed a simple simulation of a rectangular block of ice to represent an idealized ice sheet (ice over land) supported initially by an equally tall ice shelf (ice over water). They ran the simulation forward by shrinking the ice shelf at different rates and seeing how the exposed ice cliff responds over time.

    In their simulation, they set the mechanical properties, or behavior of ice, according to Maxwell’s model for viscoelasticity, which describes the way a material can transition from an elastic, rubbery response, to a viscous, honey-like behavior depending on whether it is quickly or slowly loaded. A classic example of viscoelasticity is silly putty: If you leave a ball of silly putty on a table, it slowly slumps into a puddle, like a viscous liquid; if you quickly pull it apart, it tears like an elastic solid.

    As it turns out, ice is also a viscoelastic material, and the researchers incorporated Maxwell viscoelasticity into their simulation. They varied the rate at which the buttressing ice shelf was removed, and predicted whether the ice cliff would fracture and collapse like an elastic material or flow like a viscous liquid.
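The rate-dependent behavior described here hinges on the Maxwell relaxation time. The toy classifier below (with illustrative placeholder constants, not material values from the study) shows how an hours-long versus weeks-long shelf removal lands on opposite sides of that threshold:

```python
# Hedged sketch of the Maxwell-model intuition above: the relaxation time
# tau = eta / E separates elastic (fast loading, brittle fracture) from
# viscous (slow loading, honey-like flow) behavior. The constants below are
# illustrative placeholders, not measured properties of glacier ice.

def response_regime(loading_time_s, viscosity_pa_s, youngs_modulus_pa):
    """Classify the response by comparing loading time to relaxation time."""
    tau = viscosity_pa_s / youngs_modulus_pa  # Maxwell relaxation time, seconds
    return "elastic (brittle fracture)" if loading_time_s < tau else "viscous (slow flow)"

ETA = 1e14  # Pa*s, illustrative viscosity
E = 1e9     # Pa, illustrative elastic modulus -> tau = 1e5 s (~28 hours)

print(response_regime(3600, ETA, E))        # shelf removed in an hour -> elastic
print(response_regime(14 * 86400, ETA, E))  # shelf removed over two weeks -> viscous
```

Loading faster than the relaxation time provokes an elastic, fracture-prone response; loading slower than it lets the ice flow viscously, which is the distinction the team's simulations turn on.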

    They modeled the effects of various starting heights, or thicknesses of ice, from 0 to 1,000 meters, along with various timescales of ice shelf collapse. In the end, they found that when a 90-meter cliff is exposed, it will quickly collapse in brittle chunks only if the supporting ice shelf has been removed quickly, over a period of hours. In fact, they found that this behavior holds true for cliffs as tall as 500 meters. If ice shelves are removed over longer periods of days or weeks, ice cliffs as tall as 500 meters will not collapse under their own weight, but instead will slowly slough away, like cold honey.

    A realistic picture

    The results suggest that the Earth’s tallest ice cliffs are unlikely to collapse catastrophically and trigger a runaway ice sheet retreat. That’s because the fastest rate at which ice shelves are disappearing, at least as documented in the modern record, is on the order of weeks, not hours, as scientists observed in 2002, when they captured satellite imagery of the collapse of the Larsen B ice shelf — a chunk of ice as large as Rhode Island that broke away from Antarctica, shattering into thousands of icebergs over the span of two weeks.

    “When Larsen B collapsed, that was quite an extreme event that occurred over two weeks, and that is a tiny ice shelf compared to the ones that we would be particularly worried about,” Clerc says. “So our work shows that cliff failure is probably not the mechanism by which we would get a lot of sea level rise in the near future.”

    This research is supported, in part, by the National Science Foundation.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition


    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.


  • richardmitnick 9:39 am on October 23, 2019 Permalink | Reply

    From National Science Foundation: “NSF statement: New development in quantum computing”

    From National Science Foundation

    In this rendering, a trefoil knot, an iconic topological object, is shown coming out of a tunnel with an image of superconducting qubit chips reflected on its surface. Credit: P. Roushan\Martinis lab\UC Santa Barbara

    October 23, 2019
    Public Affairs, NSF
    (703) 292-7090

    In Quantum supremacy using a programmable superconducting processor, in the Oct. 24 issue of the journal Nature, a team of researchers led by Google present evidence that their quantum computer has accomplished a task that existing computers built from silicon chips cannot. When verified, the result will add credence to the broader promise of quantum computing. In addition to funding a broad portfolio of quantum research, including for other quantum computing systems and approaches, NSF has provided research support to four of the Nature paper’s co-authors: John Martinis of the University of California, Santa Barbara; Fernando Brandao of Caltech; Edward Farhi of the Massachusetts Institute of Technology; and Dave Bacon of the University of Washington.

    Today, Google announced that a quantum computer has accomplished a task not yet possible on a classical device. When verified, this may prove to be a milestone moment, one that builds on more than three decades of continuous NSF investment in the fundamental physics, computer science, materials science, and engineering that underlies many of today’s quantum computing developments — and the researchers behind them — including four of the co-authors who helped create Google’s system. As quantum research continues bridging theory to practice across a range of experimental platforms, it is equally important that NSF, other agencies, and industry invest in the workforce developing quantum technologies and the countless applications that will benefit all of society. Together, we will ensure continuing U.S. leadership in quantum computing.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition
    The National Science Foundation (NSF) is an independent federal agency created by Congress in 1950 “to promote the progress of science; to advance the national health, prosperity, and welfare; to secure the national defense.” We are the funding source for approximately 24 percent of all federally supported basic research conducted by America’s colleges and universities. In many fields, such as mathematics, computer science and the social sciences, NSF is the major source of federal backing.

    We fulfill our mission chiefly by issuing limited-term grants — currently about 12,000 new awards per year, with an average duration of three years — to fund specific research proposals that have been judged the most promising by a rigorous and objective merit-review system. Most of these awards go to individuals or small groups of investigators. Others provide funding for research centers, instruments and facilities that allow scientists, engineers and students to work at the outermost frontiers of knowledge.

    NSF’s goals — discovery, learning, research infrastructure and stewardship — provide an integrated strategy to advance the frontiers of knowledge, cultivate a world-class, broadly inclusive science and engineering workforce and expand the scientific literacy of all citizens, build the nation’s research capability through investments in advanced instrumentation and facilities, and support excellence in science and engineering research and education through a capable and responsive organization. We like to say that NSF is “where discoveries begin.”

    Many of the discoveries and technological advances have been truly revolutionary. In the past few decades, NSF-funded researchers have won some 236 Nobel Prizes as well as other honors too numerous to list. These pioneers have included the scientists or teams that discovered many of the fundamental particles of matter, analyzed the cosmic microwaves left over from the earliest epoch of the universe, developed carbon-14 dating of ancient artifacts, decoded the genetics of viruses, and created an entirely new state of matter called a Bose-Einstein condensate.

    NSF also funds equipment that is needed by scientists and engineers but is often too expensive for any one group or researcher to afford. Examples of such major research equipment include giant optical and radio telescopes, Antarctic research sites, high-end computer facilities and ultra-high-speed connections, ships for ocean research, sensitive detectors of very subtle physical phenomena and gravitational wave observatories.

    Another essential element in NSF’s mission is support for science and engineering education, from pre-K through graduate school and beyond. The research we fund is thoroughly integrated with education to help ensure that there will always be plenty of skilled people available to work in new and emerging scientific, engineering and technological fields, and plenty of capable teachers to educate the next generation.

    No single factor is more important to the intellectual and economic progress of society, and to the enhanced well-being of its citizens, than the continuous acquisition of new knowledge. NSF is proud to be a major part of that process.

    Specifically, the Foundation’s organic legislation authorizes us to engage in the following activities:

    Initiate and support, through grants and contracts, scientific and engineering research and programs to strengthen scientific and engineering research potential, and education programs at all levels, and appraise the impact of research upon industrial development and the general welfare.
    Award graduate fellowships in the sciences and in engineering.
    Foster the interchange of scientific information among scientists and engineers in the United States and foreign countries.
    Foster and support the development and use of computers and other scientific methods and technologies, primarily for research and education in the sciences.
    Evaluate the status and needs of the various sciences and engineering and take into consideration the results of this evaluation in correlating our research and educational programs with other federal and non-federal programs.
    Provide a central clearinghouse for the collection, interpretation and analysis of data on scientific and technical resources in the United States, and provide a source of information for policy formulation by other federal agencies.
    Determine the total amount of federal money received by universities and appropriate organizations for the conduct of scientific and engineering research, including both basic and applied, and construction of facilities where such research is conducted, but excluding development, and report annually thereon to the President and the Congress.
    Initiate and support specific scientific and engineering activities in connection with matters relating to international cooperation, national security and the effects of scientific and technological applications upon society.
    Initiate and support scientific and engineering research, including applied research, at academic and other nonprofit institutions and, at the direction of the President, support applied research at other organizations.
    Recommend and encourage the pursuit of national policies for the promotion of basic research and education in the sciences and engineering. Strengthen research and education innovation in the sciences and engineering, including independent research by individuals, throughout the United States.
    Support activities designed to increase the participation of women and minorities and others underrepresented in science and technology.

    At present, NSF has a total workforce of about 2,100 at its Alexandria, VA, headquarters, including approximately 1,400 career employees, 200 scientists from research institutions on temporary duty, 450 contract workers and the staff of the National Science Board (NSB) office and the Office of the Inspector General.

    NSF is divided into the following seven directorates that support science and engineering research and education: Biological Sciences, Computer and Information Science and Engineering, Engineering, Geosciences, Mathematical and Physical Sciences, Social, Behavioral and Economic Sciences, and Education and Human Resources. Each is headed by an assistant director and each is further subdivided into divisions like materials research, ocean sciences and behavioral and cognitive sciences.

    Within NSF’s Office of the Director, the Office of Integrative Activities also supports research and researchers. Other sections of NSF are devoted to financial management, award processing and monitoring, legal affairs, outreach and other functions. The Office of the Inspector General examines the foundation’s work and reports to the NSB and Congress.

    Each year, NSF supports an average of about 200,000 scientists, engineers, educators and students at universities, laboratories and field sites all over the United States and throughout the world, from Alaska to Alabama to Africa to Antarctica. You could say that NSF support goes “to the ends of the earth” to learn more about the planet and its inhabitants, and to produce fundamental discoveries that further the progress of research and lead to products and services that boost the economy and improve general health and well-being.

    As described in our strategic plan, NSF is the only federal agency whose mission includes support for all fields of fundamental science and engineering, except for medical sciences. NSF is tasked with keeping the United States at the leading edge of discovery in a wide range of scientific areas, from astronomy to geology to zoology. So, in addition to funding research in the traditional academic areas, the agency also supports “high risk, high pay off” ideas, novel collaborations and numerous projects that may seem like science fiction today, but which the public will take for granted tomorrow. And in every case, we ensure that research is fully integrated with education so that today’s revolutionary work will also be training tomorrow’s top scientists and engineers.

    Unlike many other federal agencies, NSF does not hire researchers or directly operate our own laboratories or similar facilities. Instead, we support scientists, engineers and educators directly through their own home institutions (typically universities and colleges). Similarly, we fund facilities and equipment such as telescopes, through cooperative agreements with research consortia that have competed successfully for limited-term management contracts.

    NSF’s job is to determine where the frontiers are, identify the leading U.S. pioneers in these fields and provide money and equipment to help them continue. The results can be transformative. For example, years before most people had heard of “nanotechnology,” NSF was supporting scientists and engineers who were learning how to detect, record and manipulate activity at the scale of individual atoms — the nanoscale. Today, scientists are adept at moving atoms around to create devices and materials with properties that are often more useful than those found in nature.

    Dozens of companies are gearing up to produce nanoscale products. NSF is funding the research projects, state-of-the-art facilities and educational opportunities that will teach new skills to the science and engineering students who will make up the nanotechnology workforce of tomorrow.

    At the same time, we are looking for the next frontier.

    NSF’s task of identifying and funding work at the frontiers of science and engineering is not a “top-down” process. NSF operates from the “bottom up,” keeping close track of research around the United States and the world, maintaining constant contact with the research community to identify ever-moving horizons of inquiry, monitoring which areas are most likely to result in spectacular progress and choosing the most promising people to conduct the research.

    NSF funds research and education in most fields of science and engineering. We do this through grants and cooperative agreements to more than 2,000 colleges, universities, K-12 school systems, businesses, informal science organizations and other research organizations throughout the U.S. The Foundation considers proposals submitted by organizations on behalf of individuals or groups for support in most fields of research. Interdisciplinary proposals also are eligible for consideration. Awardees are chosen from those who send us proposals asking for a specific amount of support for a specific project.

    Proposals may be submitted in response to the various funding opportunities that are announced on the NSF website. These funding opportunities fall into three categories — program descriptions, program announcements and program solicitations — and are the mechanisms NSF uses to generate funding requests. At any time, scientists and engineers are also welcome to send in unsolicited proposals for research and education projects, in any existing or emerging field. The Proposal and Award Policies and Procedures Guide (PAPPG) provides guidance on proposal preparation and submission and award management. At present, NSF receives more than 42,000 proposals per year.

    To ensure that proposals are evaluated in a fair, competitive, transparent and in-depth manner, we use a rigorous system of merit review. Nearly every proposal is evaluated by a minimum of three independent reviewers: scientists, engineers and educators who do not work at NSF or for the institution that employs the proposing researchers. NSF selects the reviewers from among the national pool of experts in each field, and their evaluations are confidential. On average, approximately 40,000 experts, knowledgeable about the current state of their field, give their time to serve as reviewers each year.

    The reviewer’s job is to decide which projects are of the very highest caliber. NSF’s merit review process, considered by some to be the “gold standard” of scientific review, ensures that many voices are heard and that only the best projects make it to the funding stage. An enormous amount of research, deliberation, thought and discussion goes into award decisions.

    The NSF program officer reviews the proposal and analyzes the input received from the external reviewers. After scientific, technical and programmatic review and consideration of appropriate factors, the program officer makes an “award” or “decline” recommendation to the division director. Final programmatic approval for a proposal is generally completed at NSF’s division level. A principal investigator (PI) whose proposal for NSF support has been declined will receive information and an explanation of the reason(s) for declination, along with copies of the reviews considered in making the decision. If that explanation does not satisfy the PI, he/she may request additional information from the cognizant NSF program officer or division director.

    If the program officer makes an award recommendation and the division director concurs, the recommendation is submitted to NSF’s Division of Grants and Agreements (DGA) for award processing. A DGA officer reviews the recommendation from the program division/office for business, financial and policy implications, and the processing and issuance of a grant or cooperative agreement. DGA generally makes awards to academic institutions within 30 days after the program division/office makes its recommendation.

  • richardmitnick 9:09 am on October 23, 2019 Permalink | Reply
    Tags: "Automating collision avoidance", , ESA is preparing to use machine learning to protect satellites from the very real and growing danger of space debris., Space19+, The era of ‘NewSpace’ has begun   

    From European Space Agency: “Automating collision avoidance” 

    ESA Space For Europe Banner

    From European Space Agency


    ESA is preparing to use machine learning to protect satellites from the very real and growing danger of space debris.

    The Agency is developing a collision avoidance system that will automatically assess the risk and likelihood of in-space collisions, improve the decision making process on whether or not a manoeuvre is needed, and may even send the orders to at-risk satellites to get out of the way.


    Such automated decisions could even take place on board satellites, which would directly inform other operators on the ground and satellites in orbit of their intentions. This will be essential to ensuring that automated decisions do not interfere with the manoeuvre plans of others.

    As these intelligent systems gather more data and experience, they will get better and better at predicting how risky situations evolve, meaning errors in decision making would fall as well as the cost of operations.

    “There is an urgent need for proper space traffic management, with clear communication protocols and more automation” says Holger Krag, Head of Space Safety at ESA.

    “This is how air traffic control has worked for many decades, and now space operators need to get together to define automated manoeuvre coordination.”

    The current debris environment

    Flying a space mission isn’t what it used to be. We are now faced with the remnants of past orbital endeavours, which still haunt Earth’s environment today.

    After roughly 5450 launches since the beginning of the space age in 1957, the number of debris objects estimated to be in orbit, as of January 2019, was:

    34,000 objects larger than 10 cm
    900,000 objects between 1 cm and 10 cm
    128 million objects between 1 mm and 1 cm

    ‘Manual’ collision avoidance


    Because of this debris environment, it is now routine for operators in highly-trafficked orbits to spend time protecting their spacecraft from potentially catastrophic collisions with space junk, by performing ‘collision avoidance manoeuvres’ – basically sending the commands to their spacecraft to get out of the way.

    Such manoeuvres depend on validated, accurate and timely space surveillance data, provided for example by the US Space Surveillance Network. These data serve as the basis of ‘conjunction data messages’, or CDMs, which warn of a possible close encounter between a spacecraft and another satellite or space object.

    For a typical satellite in low-Earth orbit, hundreds of alerts are issued every week. For most, the risk of collision decreases as the week goes by and more orbital information is gathered, but for some the risk is deemed high enough that further action is required.
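    The pattern noted above, where the estimated risk of most alerts falls as more orbital information arrives, can be sketched with a toy one-dimensional model. This is an illustration only: the numbers are invented, and operational tools use full 2-D/3-D position covariances rather than a single Gaussian uncertainty. Once the position uncertainty is already smaller than the predicted miss distance, further tracking data drives the estimated collision probability down:

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def collision_prob_1d(miss_m, sigma_m, hard_body_m):
    """Toy 1-D model: probability that the true miss distance lies
    within the combined hard-body radius, given a Gaussian position
    uncertainty sigma_m (all distances in metres)."""
    return (norm_cdf((hard_body_m - miss_m) / sigma_m)
            - norm_cdf((-hard_body_m - miss_m) / sigma_m))

# As more tracking data arrives, sigma shrinks and the estimate sharpens.
for sigma in (500.0, 250.0, 100.0):
    p = collision_prob_1d(miss_m=800.0, sigma_m=sigma, hard_body_m=10.0)
    print(f"sigma = {sigma:5.0f} m  ->  Pc = {p:.2e}")
```

    Here a fixed predicted miss of 800 m and a 10 m combined hard-body radius are assumed; tightening the uncertainty from 500 m to 100 m drops the estimated probability by many orders of magnitude, which is why most alerts can eventually be dismissed.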

    For ESA’s current fleet of spacecraft in these low-altitude orbits, about two alerts per week, per satellite, require detailed follow-up by an analyst. This involves hours of analysis of the distance between the two objects, their likely positions in the future, the uncertainties in the observations (and therefore in the calculations) and, ultimately, the probability of collision.

    If the probability is greater than the typical reaction threshold of 1 in 10,000, the work of several teams is needed to prepare a collision avoidance manoeuvre and upload the commands to the satellite.
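    That screening step can be sketched in a few lines. The field names and object IDs below are hypothetical; real conjunction data messages carry many more fields, including full covariance information:

```python
from dataclasses import dataclass

# Hypothetical, heavily simplified conjunction data message (CDM).
@dataclass
class Cdm:
    object_id: str          # catalogue ID of the other object
    miss_distance_m: float  # predicted closest approach, in metres
    collision_prob: float   # estimated probability of collision

# The reaction threshold described above: a probability above
# roughly 1 in 10,000 triggers manoeuvre planning.
REACTION_THRESHOLD = 1e-4

def screen_alerts(alerts):
    """Return only the alerts that require manoeuvre planning."""
    return [a for a in alerts if a.collision_prob > REACTION_THRESHOLD]

alerts = [
    Cdm("debris-4221", 1200.0, 2e-6),  # low risk: keep monitoring
    Cdm("debris-0097", 150.0, 3e-4),   # above threshold: act
]
print([a.object_id for a in screen_alerts(alerts)])  # ['debris-0097']
```

    In an automated system like the one ESA describes, the flagged alerts would feed manoeuvre planning and coordination with other operators rather than landing on an analyst’s desk.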

    The manoeuvre must be verified to ensure it will have the expected effect, and doesn’t for example bring the spacecraft closer to the object or even in harm’s way of another object.

    On average, ESA needs to perform more than one collision avoidance manoeuvre per satellite per year, the vast majority due to space debris.

    Although such manoeuvres ultimately protect spacecraft, they also disrupt their normal schedule, delaying or interrupting scientific observations or communications, and often use up scarce fuel, decreasing the lifetime of the mission.



    The era of ‘NewSpace’ has begun: the number of small, privately owned satellites in orbit is increasing drastically.

    Many satellites will work on their own but thousands have been announced that will launch in large constellations – huge networks of satellites flying together in relatively low orbits – aiming to provide global, close-range coverage, whether for telecommunications or Earth observation.

    Some companies have started to launch such large constellations into low-Earth orbit in order to provide internet access to regions without the necessary infrastructure. Other companies, such as Amazon and Boeing, have announced similar plans.

    This means we will soon have more active satellites in orbit than have been launched before in the history of spaceflight.

    Such constellations, while bringing great benefits to people on Earth, will seriously disrupt the long-term sustainability of the space environment if we do not change the way we operate.

    As the space highways above Earth get busier and close approaches become more common, the current manual process of calculating collision risks and determining how spacecraft should respond will be far too slow and time consuming to be effective.


    This November, ESA’s Member States will come together to vote on the Agency’s budget for the next few years. As part of its Space Safety activities, this much-needed automated collision avoidance system is one of many projects whose future depends on that decision.

    Stay tuned to find out more about Space19+ in the weeks to come.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    The European Space Agency (ESA), established in 1975, is an intergovernmental organization dedicated to the exploration of space, currently with 22 member states. Headquartered in Paris, ESA has a staff of more than 2,000. ESA’s space flight programme includes human spaceflight, mainly through participation in the International Space Station programme; the launch and operation of unmanned exploration missions to other planets and the Moon; Earth observation, science and telecommunication; maintaining a major spaceport, the Guiana Space Centre at Kourou, French Guiana; and designing launch vehicles. ESA science missions are based at ESTEC in Noordwijk, Netherlands; Earth Observation missions at ESRIN in Frascati, Italy; ESA Mission Control (ESOC) is in Darmstadt, Germany; the European Astronaut Centre (EAC), which trains astronauts for future missions, is situated in Cologne, Germany; and the European Space Astronomy Centre is located in Villanueva de la Cañada, Spain.

    ESA50 Logo large
