Updates from richardmitnick

  • richardmitnick 2:47 pm on September 29, 2020
    Tags: "How big can a fundamental particle be?"

    From Symmetry: “How big can a fundamental particle be?” 

    Symmetry Mag
    From Symmetry

    09/29/20
    Sarah Charley

    Extremely massive fundamental particles could exist, but they would seriously mess with our understanding of quantum mechanics.

    Illustration by Sandbox Studio, Chicago with Steve Shanabruch.

    Fundamental particles are objects that are so small, they have no deeper internal structure.

    There are about a dozen “matter” particles that scientists think are fundamental, and they come in a variety of sizes. For instance, the difference between the masses of the top quark and the electron is equivalent to the difference between the masses of an adult elephant and a mosquito.

    Still, all of these masses are extremely tiny compared to what’s physically possible. The known laws of physics allow for fundamental particles with masses approaching the “Planck mass”: a whopping 22 micrograms, or about the mass of a human eyelash. To go back to our comparisons with currently known particles, if the top quark had the same mass as an elephant, then a fundamental particle at the Planck mass would weigh as much as the moon.
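    For readers who want to check that number, the Planck mass follows directly from three fundamental constants, m_P = sqrt(ħc/G). The short Python snippet below is an illustrative back-of-the-envelope check, not part of the original article; it reproduces the roughly 22-microgram figure.

```python
# Quick check of the Planck mass quoted above (illustrative only):
# m_Planck = sqrt(hbar * c / G)
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c    = 2.99792458e8      # speed of light, m/s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2

m_planck = math.sqrt(hbar * c / G)                       # kilograms
print(f"Planck mass: {m_planck:.3e} kg")                 # ~2.18e-8 kg
print(f"           = {m_planck * 1e9:.1f} micrograms")   # ~21.8 micrograms
```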

    Could such a particle exist? According to CERN Theory Fellow Dorota Grabowska, scientists aren’t completely sure.

    “Particles with a mass below the Planck scale can be elementary,” Grabowska says. “Above that scale, maybe not. But we don’t know.”

    Scientists at particle accelerators such as the Large Hadron Collider at CERN are always on the look-out for undiscovered massive particles that could fill in the gaps of their models. Finding new particles is so important that the global physics community is discussing building larger colliders that could produce even more massive particles. US involvement in the LHC is supported by the US Department of Energy’s Office of Science and the National Science Foundation.

    CERN FCC Future Circular Collider map.

    ILC schematic, being planned for the Kitakami highland, in the Iwate prefecture of northern Japan.

    China Circular Electron Positron Collider (CEPC) map. It would be housed in a hundred-kilometer- (62-mile-) round tunnel at one of three potential sites. The documents work under the assumption that the collider will be located near Qinhuangdao City around 200 miles east of Beijing.

    If scientists found a fundamental particle with a mass above the Planck scale, they would need to revisit how they think about particle sizes. For the kind of research performed at the LHC, fundamental particles are all considered to be the same size—no size at all.

    “When we think about the pure mathematics, elementary particles are, by definition, point-like,” Grabowska says. “They don’t have a size.”

    Treating fundamental particles as points works well in particle physics because their masses are so small that gravity, which would have an effect on more massive objects, is not really a factor. It’s kind of like how truck drivers planning a trip don’t need to consider the effects of special relativity and time dilation. These effects are there, at some level, but they don’t have a noticeable impact on drive time.

    But a fundamental particle above the Planck scale would sit at the threshold between two divergent mathematical models. Quantum mechanics describes objects that are very tiny, and general relativity describes objects that are very massive. But to describe a particle that is both very tiny and very massive, scientists need a new theory called quantum gravity.

    Mathematically, physicists could no longer consider such a massive particle as a volume-less point. Instead, they would need to think about it behaving more like a wave.

    The particle-wave duality concept was born about 100 years ago and states that subatomic particles have both particle-like and wave-like properties. When scientists think about an electron as a particle, they consider that it has no physical volume. But when they think about it as a wave, it extends throughout all the space it’s granted, such as the orbit around the nucleus of an atom. Both interpretations are correct, and scientists typically use the one that best suits their area of research.

    The mass-to-radius ratio of these waves is important because it determines how they feel the effects of gravity. A super massive particle with tons of room to roam would barely feel the force of gravity. But if that same particle were confined to an extremely small space, it could collapse into a miniature black hole. Scientists at the LHC have searched for such tiny black holes—which would evaporate almost immediately—but so far have come up empty-handed.
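    One way to make that mass-to-radius argument concrete is to compare a particle's quantum "size" (its reduced Compton wavelength) with the Schwarzschild radius of a black hole of the same mass; the two scales cross near the Planck mass. The sketch below is only an illustrative order-of-magnitude calculation, not a description of any particular LHC search.

```python
# Compare the reduced Compton wavelength (the quantum "spread" of a particle)
# with the Schwarzschild radius (the size of a black hole of the same mass).
# The two meet near the Planck mass, which is why a Planck-mass particle
# confined to its own Compton wavelength verges on becoming a black hole.
import math

hbar = 1.054571817e-34   # J*s
c    = 2.99792458e8      # m/s
G    = 6.67430e-11       # m^3 kg^-1 s^-2

def compton_wavelength(m_kg):      # reduced Compton wavelength, metres
    return hbar / (m_kg * c)

def schwarzschild_radius(m_kg):    # metres
    return 2 * G * m_kg / c**2

for name, mass_kg in [("electron", 9.109e-31),
                      ("top quark", 3.1e-25),
                      ("Planck mass", 2.18e-8)]:
    print(f"{name:11s}  Compton: {compton_wavelength(mass_kg):.2e} m"
          f"   Schwarzschild: {schwarzschild_radius(mass_kg):.2e} m")
```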

    According to Grabowska, quantum gravity is tricky because there is no way to experimentally test it with today’s existing technology. “We would need a collider 14 orders of magnitude more energetic than the LHC,” she says.

    But thinking about the implications of finding such a particle helps theorists push the known laws of physics.

    “Our model of particle physics breaks down when pushed to certain scales,” says Netta Engelhardt, a quantum gravity theorist at the Massachusetts Institute of Technology. “But that doesn’t mean that our universe doesn’t feature these regimes. If we want to understand massive objects at tiny scales, we need a model of quantum gravity.”

    See the full article here.



    Please help promote STEM in your local schools.


    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.


     
  • richardmitnick 2:02 pm on September 29, 2020
    Tags: "Understanding ghost particle interactions", Scientists often refer to the neutrino as the “ghost particle.”

    From Argonne National Laboratory: “Understanding ghost particle interactions” 

    Argonne Lab
    News from Argonne National Laboratory

    September 28, 2020
    Joseph E. Harmon

    Cross sections of neutrino-nucleus interactions versus energy. The improved agreement between experiment and model calculations is clearly shown for the case of a nucleon pair rather than a single nucleon. The inset shows a neutrino interacting with a nucleus and ejecting a lepton. Credit: Image by Argonne National Laboratory.

    Scientists often refer to the neutrino as the ​“ghost particle.” Neutrinos were one of the most abundant particles at the origin of the universe and remain so today. Fusion reactions in the sun produce vast armies of them, which pour down on the Earth every day. Trillions pass through our bodies every second, then fly through the Earth as though it were not there.

    “While first postulated almost a century ago and first detected 65 years ago, neutrinos remain shrouded in mystery because of their reluctance to interact with matter,” said Alessandro Lovato, a nuclear physicist at the U.S. Department of Energy’s (DOE) Argonne National Laboratory.

    Lovato is a member of a research team from four national laboratories that has constructed a model to address one of the many mysteries about neutrinos — how they interact with atomic nuclei, complicated systems made of protons and neutrons (“nucleons”) bound together by the strong force. This knowledge is essential to unravel an even bigger mystery — why, during their journey through space or matter, neutrinos magically morph from one of three possible types, or flavors, into another.

    To study these oscillations, two sets of experiments have been undertaken at DOE’s Fermi National Accelerator Laboratory (MiniBooNE and NOvA).

    FNAL/MiniBooNE

    FNAL NOvA Near Detector.

    In these experiments, scientists generate an intense stream of neutrinos in a particle accelerator, then send them into particle detectors either over a long period of time (MiniBooNE) or five hundred miles from the source (NOvA).

    FNAL/NOvA experiment map.

    Knowing the original distribution of neutrino flavors, the experimentalists then gather data related to the interactions of the neutrinos with the atomic nuclei in the detectors. From that information, they can calculate any changes in the neutrino flavors over time or distance. In the case of the MiniBooNE and NOvA detectors, the nuclei are from the isotope carbon-12, which has six protons and six neutrons.
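    As a rough illustration of how flavor change depends on distance and energy, the standard two-flavor approximation gives an oscillation probability P = sin²(2θ)·sin²(1.27 Δm² L / E), with Δm² in eV², L in km and E in GeV. The snippet below is only a schematic two-flavor example with representative parameter values; the actual MiniBooNE and NOvA analyses fit the full three-flavor framework to their data.

```python
# Schematic two-flavor neutrino oscillation probability (illustrative only).
import math

def osc_probability(theta, dm2_ev2, L_km, E_GeV):
    """P(nu_a -> nu_b) = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E),
    with dm2 in eV^2, L in km and E in GeV."""
    return math.sin(2 * theta)**2 * math.sin(1.27 * dm2_ev2 * L_km / E_GeV)**2

# Representative values for a NOvA-like baseline (~810 km, ~2 GeV beam);
# the mixing angle and mass splitting below are illustrative assumptions.
theta = math.radians(45)     # near-maximal mixing
dm2   = 2.5e-3               # eV^2
print(f"P(oscillation) ~ {osc_probability(theta, dm2, L_km=810, E_GeV=2.0):.2f}")
```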

    “Our team came into the picture because these experiments require a very accurate model of the interactions of neutrinos with the detector nuclei over a large energy range,” said Noemi Rocco, a postdoc in Argonne’s Physics division and Fermilab. Given the elusiveness of neutrinos, achieving a comprehensive description of these reactions is a formidable challenge.

    The team’s nuclear physics model of neutrino interactions with a single nucleon and a pair of them is the most accurate so far. ​“Ours is the first approach to model these interactions at such a microscopic level,” said Rocco. ​“Earlier approaches were not so fine grained.”

    One of the team’s important findings, based on calculations carried out on the now-retired Mira supercomputer at the Argonne Leadership Computing Facility (ALCF), was that the nucleon pair interaction is crucial to model neutrino interactions with nuclei accurately. The ALCF is a DOE Office of Science User Facility.

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility.

    “The larger the nuclei in the detector, the greater the likelihood the neutrinos will interact with them,” said Lovato. ​“In the future, we plan to extend our model to data from bigger nuclei, namely, those of oxygen and argon, in support of experiments planned in Japan and the U.S.”

    Rocco added, “For those calculations, we will rely on even more powerful ALCF computers: the existing Theta system and the upcoming exascale machine, Aurora.”

    ANL ALCF Theta Cray XC40 supercomputer.

    Depiction of ANL ALCF Cray Intel SC18 Shasta Aurora exascale supercomputer.

    Scientists hope that, eventually, a complete picture will emerge of flavor oscillations for both neutrinos and their antiparticles, called ​“antineutrinos.” That knowledge may shed light on why the universe is built from matter instead of antimatter — one of the fundamental questions about the universe.

    The paper, titled ​“Ab Initio Study of (νℓ,ℓ−) and (¯νℓ,ℓ+) Inclusive Scattering in 12C: Confronting the MiniBooNE and T2K CCQE Data,” is published in Physical Review X. Besides Rocco and Lovato, authors include J. Carlson (Los Alamos National Laboratory), S. Gandolfi (Los Alamos National Laboratory), and R. Schiavilla (Old Dominion University/Jefferson Lab).

    The present research is supported by the DOE Office of Science. The team received ALCF computing time through DOE’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About the Advanced Photon Source

    The U. S. Department of Energy Office of Science’s Advanced Photon Source (APS) at Argonne National Laboratory is one of the world’s most productive X-ray light source facilities. The APS provides high-brightness X-ray beams to a diverse community of researchers in materials science, chemistry, condensed matter physics, the life and environmental sciences, and applied research. These X-rays are ideally suited for explorations of materials and biological structures; elemental distribution; chemical, magnetic, electronic states; and a wide range of technologically important engineering systems from batteries to fuel injector sprays, all of which are the foundations of our nation’s economic, technological, and physical well-being. Each year, more than 5,000 researchers use the APS to produce over 2,000 publications detailing impactful discoveries, and solve more vital biological protein structures than users of any other X-ray light source research facility. APS scientists and engineers innovate technology that is at the heart of advancing accelerator and light-source operations. This includes the insertion devices that produce extreme-brightness X-rays prized by researchers, lenses that focus the X-rays down to a few nanometers, instrumentation that maximizes the way the X-rays interact with samples being studied, and software that gathers and manages the massive quantity of data resulting from discovery research at the APS.

    This research used resources of the Advanced Photon Source, a U.S. DOE Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DE-AC02-06CH11357.
    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 1:38 pm on September 29, 2020
    Tags: SPARC - a compact high-field DT burning tokamak.

    From MIT News: “Validating the physics behind the new MIT-designed fusion experiment” 

    MIT News

    From MIT News

    September 29, 2020
    David L. Chandler

    This image shows a cutaway rendering of SPARC, a compact, high-field, DT burning tokamak, currently under design by a team from the Massachusetts Institute of Technology and Commonwealth Fusion Systems. Its mission is to create and confine a plasma that produces net fusion energy. Credits: Image: CFS/MIT-PSFC — CAD Rendering by T. Henderson.

    Two and a half years ago, MIT entered into a research agreement with startup company Commonwealth Fusion Systems to develop a next-generation fusion research experiment, called SPARC, as a precursor to a practical, emissions-free power plant.

    Now, after many months of intensive research and engineering work, the researchers charged with defining and refining the physics behind the ambitious tokamak design have published a series of papers summarizing the progress they have made and outlining the key research questions SPARC will enable.

    Overall, says Martin Greenwald, deputy director of MIT’s Plasma Science and Fusion Center and one of the project’s lead scientists, the work is progressing smoothly and on track. This series of papers provides a high level of confidence in the plasma physics and the performance predictions for SPARC, he says. No unexpected impediments or surprises have shown up, and the remaining challenges appear to be manageable. This sets a solid basis for the device’s operation once constructed, according to Greenwald.

    Greenwald wrote the introduction for a set of seven research papers authored by 47 researchers from 12 institutions and published today in a special issue of the Journal of Plasma Physics. Together, the papers outline the theoretical and empirical physics basis for the new fusion system, which the consortium expects to start building next year.

    SPARC is planned to be the first experimental device ever to achieve a “burning plasma” — that is, a self-sustaining fusion reaction in which different isotopes of the element hydrogen fuse together to form helium, without the need for any further input of energy. Studying the behavior of this burning plasma — something never before seen on Earth in a controlled fashion — is seen as crucial information for developing the next step, a working prototype of a practical, power-generating power plant.

    Such fusion power plants might significantly reduce greenhouse gas emissions from the power-generation sector, one of the major sources of these emissions globally. The MIT and CFS project is one of the largest privately funded research and development projects ever undertaken in the fusion field.

    “The MIT group is pursuing a very compelling approach to fusion energy,” says Chris Hegna, a professor of engineering physics at the University of Wisconsin at Madison, who was not connected to this work. “They realized the emergence of high-temperature superconducting technology enables a high magnetic field approach to producing net energy gain from a magnetic confinement system. This work is a potential game-changer for the international fusion program.”

    The SPARC design, though about twice the size of MIT’s now-retired Alcator C-Mod experiment and similar to several other research fusion machines currently in operation, would be far more powerful, achieving fusion performance comparable to that expected in the much larger ITER tokamak being built in France by an international consortium.

    Alcator C-Mod tokamak at MIT, no longer in operation.

    ITER experimental tokamak nuclear fusion reactor, being built next to the Cadarache facility in Saint-Paul-lès-Durance, in southern France.

    The high power in a small size is made possible by advances in superconducting magnets that allow for a much stronger magnetic field to confine the hot plasma.

    The SPARC project was launched in early 2018, and work on its first stage, the development of the superconducting magnets that would allow smaller fusion systems to be built, has been proceeding apace. The new set of papers represents the first time that the underlying physics basis for the SPARC machine has been outlined in detail in peer-reviewed publications. The seven papers explore the specific areas of the physics that had to be further refined, and that still require ongoing research to pin down the final elements of the machine design and the operating procedures and tests that will be involved as work progresses toward the power plant.

    The papers also describe the use of calculations and simulation tools for the design of SPARC, which have been tested against many experiments around the world. The authors used cutting-edge simulations, run on powerful supercomputers, that have been developed to aid the design of ITER. The large multi-institutional team of researchers represented in the new set of papers aimed to bring the best consensus tools to the SPARC machine design to increase confidence it will achieve its mission.

    The analysis done so far shows that the planned fusion energy output of the SPARC tokamak should be able to meet the design specifications with a comfortable margin to spare. It is designed to achieve a Q factor — a key parameter denoting the efficiency of a fusion plasma — of at least 2, essentially meaning that twice as much fusion energy is produced as the amount of energy pumped in to generate the reaction. That would be the first time a fusion plasma of any kind has produced more energy than it consumed.
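    In symbols, the gain factor is simply the ratio of the fusion power produced by the plasma to the external heating power supplied to it (a standard definition, restated here for clarity):

```latex
Q \;=\; \frac{P_{\text{fusion}}}{P_{\text{external heating}}},
\qquad
Q = 1 \ \text{(scientific breakeven)}, \qquad
Q \geq 2 \ \text{(SPARC design target)}, \qquad
Q \to \infty \ \text{(ignition: no external heating needed)}.
```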

    The calculations at this point show that SPARC could actually achieve a Q ratio of 10 or more, according to the new papers. While Greenwald cautions that the team wants to be careful not to overpromise, and much work remains, the results so far indicate that the project will at least achieve its goals, and specifically will meet its key objective of producing a burning plasma, wherein the self-heating dominates the energy balance.

    Limitations imposed by the Covid-19 pandemic slowed progress a bit, but not much, he says, and the researchers are back in the labs under new operating guidelines.

    Overall, “we’re still aiming for a start of construction in roughly June of ’21,” Greenwald says. “The physics effort is well-integrated with the engineering design. What we’re trying to do is put the project on the firmest possible physics basis, so that we’re confident about how it’s going to perform, and then to provide guidance and answer questions for the engineering design as it proceeds.”

    Many of the fine details are still being worked out on the machine design, covering the best ways of getting energy and fuel into the device, getting the power out, dealing with any sudden thermal or power transients, and how and where to measure key parameters in order to monitor the machine’s operation.

    So far, there have been only minor changes to the overall design. The diameter of the tokamak has been increased by about 12 percent, but little else has changed, Greenwald says. “There’s always the question of a little more of this, a little less of that, and there’s lots of things that weigh into that, engineering issues, mechanical stresses, thermal stresses, and there’s also the physics — how do you affect the performance of the machine?”

    The publication of this special issue of the journal, he says, “represents a summary, a snapshot of the physics basis as it stands today.” Though members of the team have discussed many aspects of it at physics meetings, “this is our first opportunity to tell our story, get it reviewed, get the stamp of approval, and put it out into the community.”

    Greenwald says there is still much to be learned about the physics of burning plasmas, and once this machine is up and running, key information can be gained that will help pave the way to commercial, power-producing fusion devices, whose fuel — the hydrogen isotopes deuterium and tritium — can be made available in virtually limitless supplies.

    The details of the burning plasma “are really novel and important,” he says. “The big mountain we have to get over is to understand this self-heated state of a plasma.”

    “The analysis presented in these papers will provide the world-wide fusion community with an opportunity to better understand the physics basis of the SPARC device and gauge for itself the remaining challenges that need to be resolved,” says George Tynan, professor of mechanical and aerospace engineering at the University of California at San Diego, who was not connected to this work. “Their publication marks an important milestone on the road to the study of burning plasmas and the first demonstration of net energy production from controlled fusion, and I applaud the authors for putting this work out for all to see.”​

    Overall, Greenwald says, the work that has gone into the analysis presented in this package of papers “helps to validate our confidence that we will achieve the mission. We haven’t run into anything where we say, ‘oh, this is predicting that we won’t get to where we want.’” In short, he says, “one of the conclusions is that things are still looking on-track. We believe it’s going to work.”

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    USPS “Forever” postage stamps celebrating Innovation at MIT.

    MIT Campus

     
  • richardmitnick 12:50 pm on September 29, 2020
    Tags: "Solar storms could be more extreme if they ‘slipstream’ behind each other"

    From Imperial College London: “Solar storms could be more extreme if they ‘slipstream’ behind each other” 


    From Imperial College London

    29 September 2020
    Hayley Dunning

    A previous CME observed by NASA’s Solar Dynamics Observatory (SDO). Modelling of an extreme space weather event that narrowly missed Earth in 2012 shows it could have been even worse if paired with another event.

    NASA/SDO.

    The findings suggest space weather predictions should be updated to include how close events enhance one another.

    Coronal mass ejections (CMEs) are eruptions of vast amounts of magnetised material from the Sun that travel at high speeds, releasing a huge amount of energy in a short time. When they reach Earth, these solar storms trigger amazing auroral displays, but can disrupt power grids, satellites and communications.

    These most extreme of ‘space weather’ events have the potential to be catastrophic, causing power blackouts that would disable anything plugged into a socket and damage to transformers that could take years to repair. Accurate monitoring and predictions are therefore important to minimising damage.

    Now, a research team led by Imperial College London have shown how CMEs could be more extreme than previously thought when two events follow each other. Their results are published today in a special issue of Solar Physics focusing on space weather.

    Technological blackouts

    The team investigated a large CME that occurred on 23 July 2012 and narrowly missed Earth by a couple of days. The CME was estimated to travel at around 2250 kilometres per second, making it comparable to one of the largest events ever recorded, the so-called Carrington event in 1859. Damage estimates for such an event striking Earth today have run into the trillions of dollars.

    Lead author Dr Ravindra Desai, from the Department of Physics at Imperial, said: “The 23 July 2012 event is the most extreme space weather event of the space age, and if this event struck Earth the consequences could cause technological blackouts and severely disrupt society, as we are ever more reliant on modern technologies for our day-to-day lives. We find however that this event could actually have been even more extreme – faster and more intense – if it had been launched several days earlier directly behind another event.”

    The 23 July 2012 event recorded by STEREO.

    NASA/STEREO spacecraft.

    To determine what made the CME so extreme, the team investigated one of the possible causes: the release of another CME on 19 July 2012, just a few days before. It has been suggested that one CME can ‘clear the way’ for another.

    CMEs travel faster than the ambient solar wind, the stream of charged particles constantly flowing from the Sun. This means the solar wind exerts drag on the travelling CME, slowing it down.

    However, if a previous CME has recently passed through, the solar wind will be affected in such a way that it will not slow down the subsequent CME as much. This is similar to how race car drivers ‘slipstream’ behind one another to gain a speed advantage.
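    A simple way to see why this matters quantitatively is the widely used drag-based picture of CME propagation, in which a CME moving faster than the ambient solar wind decelerates at a rate proportional to the square of the speed difference. The toy integration below uses made-up drag and solar-wind parameters purely to illustrate the trend; it is not the Imperial team's actual model, which is a far more detailed simulation.

```python
# Toy drag-based CME propagation: dv/dt = -gamma * (v - v_sw) * |v - v_sw|.
# A preceding CME "pre-conditions" the solar wind (faster wind, lower drag),
# so a following CME is slowed down less. All parameter values are
# illustrative assumptions, not fitted to the 2012 events.

def arrival_speed(v0_kms, v_sw_kms, gamma_per_km, distance_km, dt_s=60.0):
    v, x = float(v0_kms), 0.0
    while x < distance_km:
        v += -gamma_per_km * (v - v_sw_kms) * abs(v - v_sw_kms) * dt_s
        x += v * dt_s
    return v

AU_KM = 1.496e8
quiet   = arrival_speed(2250, v_sw_kms=400, gamma_per_km=2e-9,  distance_km=AU_KM)
precond = arrival_speed(2250, v_sw_kms=700, gamma_per_km=5e-10, distance_km=AU_KM)
print(f"Arrival speed at 1 AU, quiet solar wind:           {quiet:6.0f} km/s")
print(f"Arrival speed at 1 AU, pre-conditioned solar wind: {precond:6.0f} km/s")
```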

    Magnifying extreme space weather events

    The team created a model that accurately represented the characteristics of the 23 July event and then simulated what would happen if it had occurred earlier or later – i.e. closer to or further from the 19 July event.

    They found that by the time of the 23 July event the solar wind had largely recovered from the 19 July event, so the previous event had little impact. However, their model showed that if the latter CME had occurred earlier, closer to the 19 July event, then it would have been even more extreme – perhaps reaching speeds of up to 2750 kilometres per second or more.

    Han Zhang, co-author and student who worked on the development of this modelling capability, said: “We show that the phenomenon of ‘solar wind preconditioning’, where an initial CME causes a subsequent CME to travel faster, is important for magnifying extreme space weather events. Our model results, showing the magnitude of the effect and how long the effect lasts, can contribute to current space weather forecasting efforts.”

    The Sun is now entering its next 11-year cycle of increasing activity, which brings increased chances of Earth-bound solar storms. Emma Davies, co-author and PhD student, said: “There have been previous instances of successive solar storms bombarding the Earth, such as the Halloween Storms of 2003. During this period, the Sun produced many solar flares, with accompanying CMEs of speeds around 2000 km/s.

    “These events damaged satellites and communication systems, caused aircraft to be re-routed, and caused a power outage in Sweden. There is always the possibility of similar or worse scenarios occurring this next solar cycle, therefore accurate models for prediction are vital to help mitigate their effects.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Imperial College London is a science-based university with an international reputation for excellence in teaching and research. Consistently rated amongst the world’s best universities, Imperial is committed to developing the next generation of researchers, scientists and academics through collaboration across disciplines. Located in the heart of London, Imperial is a multidisciplinary space for education, research, translation and commercialisation, harnessing science and innovation to tackle global challenges.

     
  • richardmitnick 12:19 pm on September 29, 2020
    Tags: "Astrophysicist probes cosmic 'dark matter detector'", Magnetar PSR J1745-2900

    From University of Colorado Boulder via phys.org: “Astrophysicist probes cosmic ‘dark matter detector'” 

    U Colorado

    From University of Colorado Boulder

    via


    From phys.org

    September 29, 2020
    Daniel Strain, University of Colorado at Boulder

    The middle of the Milky Way Galaxy showing the location of the supermassive black hole at its center, called Sagittarius A*, and the nearby magnetar PSR J1745-2900. Credits: NASA/CXC/FIT/E

    A University of Colorado Boulder astrophysicist is searching the light coming from a distant and extremely powerful celestial object for what may be the most elusive substance in the universe: dark matter.

    In two recent studies, Jeremy Darling, a professor in the Department of Astrophysical and Planetary Sciences, has taken a deep look at PSR J1745-2900. This body is a magnetar, or a type of collapsed star that generates an incredibly strong magnetic field.

    “It’s the best natural dark matter detector we know about,” said Darling, also of the Center for Astrophysics and Space Astronomy (CASA) at CU Boulder.

    He explained that dark matter is a sort of cosmic glue—an as-yet-unidentified particle that makes up roughly 27% of the mass of the universe and helps to bind together galaxies like our own Milky Way. To date, scientists have mostly led the hunt for this invisible matter using laboratory equipment.

    Darling has taken a different approach in his latest research: Drawing on telescope data, he’s peering at PSR J1745-2900 to see if he can detect the faint signals of one candidate for dark matter—a particle called the axion—transforming into light. So far, the scientist’s search has come up empty. But his results could help physicists working in labs around the world to narrow down their own hunts for the axion.

    The new studies are also a reminder that researchers can still look to the skies to solve some of the toughest questions in science, Darling said. He published his first round of results this month in the Astrophysical Journal Letters and Physical Review Letters.

    “In astrophysics, we find all of these interesting problems like dark matter and dark energy, then we step back and let physicists solve them,” he said. “It’s a shame.”

    Natural experiment

    Darling wants to change that—in this case, with a little help from PSR J1745-2900.

    This magnetar orbits the supermassive black hole at the center of the Milky Way Galaxy from a distance of less than a light-year away. And it’s a force of nature: PSR J1745-2900 generates a magnetic field that is roughly a billion times more powerful than the most powerful magnet on Earth.

    “Magnetars have all of the magnetic field that a star has, but it’s been crunched down into an area about 20 kilometers across,” Darling said.

    And it’s where Darling has gone fishing for dark matter.

    He explained that scientists have yet to locate a single axion, a theoretical particle first proposed in the 1970s. Physicists, however, predict that these ephemeral bits of matter may have been created in monumental numbers during the early life of the universe—and in large enough quantities to explain the cosmos’ extra mass from dark matter. According to theory, axions are billions or even trillions of times lighter than electrons and would interact only rarely with their surroundings.

    That makes them almost impossible to observe, with one big exception: If an axion passes through a strong magnetic field, it can transform into light that researchers could, theoretically, detect.

    Scientists, including a team at JILA on the CU Boulder campus, have used lab-generated magnetic fields to try to capture that transition in action. Darling and other scientists had a different idea: Why not try the same search but on a much bigger scale?

    “Magnetars are the most magnetic objects we know of in the universe,” he said. “There’s no way we could get close to that strength in the lab.”

    Narrowing in

    To make use of that natural magnetic field, Darling drew on observations of PSR J1745-2900 taken by the Karl G. Jansky Very Large Array, an observatory in New Mexico.

    NRAO Karl G Jansky Very Large Array, located in central New Mexico on the Plains of San Agustin, between the towns of Magdalena and Datil, ~50 miles (80 km) west of Socorro. The VLA comprises twenty-eight 25-meter radio telescopes.

    If the magnetar was, indeed, transforming axions into light, that metamorphosis might show up in the radiation emerging from the collapsed star.

    The effort is a bit like looking for a single needle in a really, really big haystack. Darling said that while theorists have put limits on how heavy axions might be, these particles could still have a wide range of possible masses. Each of those masses, in turn, would produce light with a specific wavelength, almost like a fingerprint left behind by dark matter.
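    The "fingerprint" arises because an axion of mass m_a that converts in a magnetic field produces a photon carrying essentially its full rest energy, so the frequency is ν = m_a c² / h. The snippet below simply converts a few example axion masses into the corresponding radio frequencies and wavelengths; the specific masses are illustrative picks, not the range actually constrained in Darling's papers.

```python
# Convert assumed axion masses into the photon frequency/wavelength they
# would produce when converting in a magnetic field (h * nu = m_a * c^2).
# The mass values are illustrative examples only.
h_eV_s = 4.135667696e-15   # Planck constant, eV*s
c      = 2.99792458e8      # speed of light, m/s

for m_a_eV in (1e-6, 5e-6, 2e-5):        # axion masses in eV/c^2
    nu_Hz = m_a_eV / h_eV_s              # photon frequency
    lam_cm = 100 * c / nu_Hz             # wavelength in centimetres
    print(f"m_a = {m_a_eV:.0e} eV  ->  {nu_Hz / 1e9:5.2f} GHz,  "
          f"wavelength ~ {lam_cm:5.1f} cm")
```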

    Darling hasn’t yet spotted any of those distinct wavelengths in the light coming from the magnetar. But he has been able to use the observations to probe the possible existence of axions across the widest range of masses yet—not bad for his first attempt. He added that such surveys can complement the work happening in Earth-based experiments.

    Konrad Lehnert agreed. He’s part of an experiment led by Yale University—called, not surprisingly, HAYSTAC—that is seeking out axions using magnetic fields created in labs across the country.

    Lehnert explained that astrophysical studies like Darling’s could act as a sort of scout in the hunt for axions—identifying interesting signals in the light of magnetars, which laboratory researchers could then dig into with much greater precision.

    “These well-controlled experiments would be able to sort out which of the astrophysical signals might have a dark matter origin,” said Lehnert, a fellow at JILA, a joint research institute between CU Boulder and the National Institute of Standards and Technology (NIST).

    Darling plans to continue his own search, which means looking even closer at the magnetar at the center of our galaxy: “We need to fill in those gaps and go even deeper.”

    Dark Matter Background
    Fritz Zwicky discovered Dark Matter in the 1930s when observing the movement of the Coma Cluster. Vera Rubin, a woman in STEM who was denied the Nobel Prize, did most of the later work on Dark Matter.

    Fritz Zwicky from http://palomarskies.blogspot.com.

    Coma cluster via NASA/ESA Hubble.

    In modern times, it was astronomer Fritz Zwicky, in the 1930s, who made the first observations of what we now call dark matter. His 1933 observations of the Coma Cluster of galaxies seemed to indicate it has a mass 500 times greater than that previously calculated by Edwin Hubble. Furthermore, this extra mass seemed to be completely invisible. Although Zwicky’s observations were initially met with much skepticism, they were later confirmed by other groups of astronomers.

    Thirty years later, astronomer Vera Rubin provided a huge piece of evidence for the existence of dark matter. She discovered that the outer regions of galaxies rotate just as fast as their inner regions, whereas, if only the visible matter were present, the outskirts should rotate more slowly, much as the outer planets of the solar system orbit the Sun more slowly than the inner planets. But that is not what we see. The only way to explain these flat rotation curves is if the visible galaxy is only the central part of some much larger structure, whose unseen mass keeps the rotation speed consistent from center to edge.

    Vera Rubin, following Zwicky, postulated that the missing structure in galaxies is dark matter. Her ideas were met with much resistance from the astronomical community, but her observations have been confirmed and are seen today as pivotal proof of the existence of dark matter.

    Astronomer Vera Rubin at the Lowell Observatory in 1965, worked on Dark Matter (The Carnegie Institution for Science).


    Vera Rubin measuring spectra, worked on Dark Matter (Emilio Segre Visual Archives AIP SPL).


    Vera Rubin, with Department of Terrestrial Magnetism (DTM) image tube spectrograph attached to the Kitt Peak 84-inch telescope, 1970. https://home.dtm.ciw.edu.

    The Vera C. Rubin Observatory, currently under construction on the El Peñón peak of Cerro Pachón, a 2,682-meter-high mountain in the Coquimbo Region of northern Chile, alongside the existing Gemini South and Southern Astrophysical Research Telescopes.

    LSST Data Journey, Illustration by Sandbox Studio, Chicago with Ana Kova.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    U Colorado Campus

    As the flagship university of the state of Colorado, CU-Boulder is a dynamic community of scholars and learners situated on one of the most spectacular college campuses in the country. As one of 34 U.S. public institutions belonging to the prestigious Association of American Universities (AAU) – and the only member in the Rocky Mountain region – we have a proud tradition of academic excellence, with five Nobel laureates and more than 50 members of prestigious academic academies.

    CU-Boulder has blossomed in size and quality since we opened our doors in 1877 – attracting superb faculty, staff, and students and building strong programs in the sciences, engineering, business, law, arts, humanities, education, music, and many other disciplines.

    Today, with our sights set on becoming the standard for the great comprehensive public research universities of the new century, we strive to serve the people of Colorado and to engage with the world through excellence in our teaching, research, creative work, and service.

     
  • richardmitnick 11:40 am on September 29, 2020
    Tags: "Machine learning homes in on catalyst interactions to accelerate materials development"

    From University of Michigan via phys.org: “Machine learning homes in on catalyst interactions to accelerate materials development” 

    U Michigan bloc

    From University of Michigan

    via


    From phys.org

    September 29, 2020

    Credit: CC0 Public Domain

    A machine learning technique rapidly rediscovered rules governing catalysts that took humans years of difficult calculations to reveal—and even explained a deviation. The University of Michigan team that developed the technique believes other researchers will be able to use it to make faster progress in designing materials for a variety of purposes.

    “This opens a new door, not just in understanding catalysis, but also potentially for extracting knowledge about superconductors, enzymes, thermoelectrics, and photovoltaics,” said Bryan Goldsmith, an assistant professor of chemical engineering, who co-led the work with Suljo Linic, a professor of chemical engineering.

    The key to all of these materials is how their electrons behave. Researchers would like to use machine learning techniques to develop recipes for the material properties that they want. For superconductors, the electrons must move without resistance through the material. Enzymes and catalysts need to broker exchanges of electrons, enabling new medicines or cutting chemical waste, for instance. Thermoelectrics and photovoltaics absorb light and generate energetic electrons, thereby generating electricity.

    Machine learning algorithms are typically “black boxes,” meaning that they take in data and spit out a mathematical function that makes predictions based on that data.

    “Many of these models are so complicated that it’s very difficult to extract insights from them,” said Jacques Esterhuizen, a doctoral student in chemical engineering and first author of the paper in the journal Chem. “That’s a problem because we’re not only interested in predicting material properties, we also want to understand how the atomic structure and composition map to the material properties.”

    But a new breed of machine learning algorithm lets researchers see the connections that the algorithm is making, identifying which variables are most important and why. This is critical information for researchers trying to use machine learning to improve material designs, including for catalysts.

    A good catalyst is like a chemical matchmaker. It needs to be able to grab onto the reactants, or the atoms and molecules that we want to react, so that they meet. Yet, it must do so loosely enough that the reactants would rather bind with one another than stick with the catalyst.

    In this particular case, they looked at metal catalysts that have a layer of a different metal just below the surface, known as a subsurface alloy. That subsurface layer changes how the atoms in the top layer are spaced and how available the electrons are for bonding. By tweaking the spacing, and hence the electron availability, chemical engineers can strengthen or weaken the binding between the catalyst and the reactants.

    Esterhuizen started by running quantum mechanical simulations at the National Energy Research Scientific Computing Center. These formed the data set, showing how common subsurface alloy catalysts, including metals such as gold, iridium and platinum, bond with common reactants such as oxygen, hydroxide and chlorine.

    The team used the algorithm to look at eight material properties and conditions that might be important to the binding strength of these reactants. It turned out that three mattered most. The first was whether the atoms on the catalyst surface were pulled apart from one another or compressed together by the different metal beneath. The second was how many electrons were in the electron orbital responsible for bonding, the d-orbital in this case. And the third was the size of that d-electron cloud.
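    The point of using an interpretable model is precisely this ranking of descriptors. As a generic illustration of the workflow (not the Michigan team's actual algorithm, descriptors, or data, all of which are placeholders here), one can train a regressor on candidate features and then ask how much each feature matters, for example with permutation importance in scikit-learn:

```python
# Generic sketch: rank which material descriptors most influence a predicted
# binding energy. The data, feature names and model are all placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["surface_strain", "d_band_filling", "d_band_width",
            "work_function", "coordination", "electronegativity",
            "atomic_radius", "surface_charge"]     # 8 hypothetical descriptors
X = rng.normal(size=(200, len(features)))
# Synthetic "binding energy" dominated by the first three descriptors:
y = 0.8 * X[:, 0] - 0.6 * X[:, 1] + 0.4 * X[:, 2] + 0.05 * rng.normal(size=200)

model = GradientBoostingRegressor().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, score in sorted(zip(features, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:18s} {score:.3f}")
```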

    The resulting predictions for how different alloys bind with different reactants mostly reflected the “d-band” model, which was developed over many years of quantum mechanical calculations and theoretical analysis. However, they also explained a deviation from that model due to strong repulsive interactions, which occur when electron-rich reactants bind on metals with mostly filled electron orbitals.

    See the full article here.



    Please support STEM education in your local school system

    Stem Education Coalition

    U Michigan Campus

    The University of Michigan (U-M, UM, UMich, or U of M), frequently referred to simply as Michigan, is a public research university located in Ann Arbor, Michigan, United States. Originally founded in 1817 in Detroit as the Catholepistemiad, or University of Michigania, 20 years before the Michigan Territory officially became a state, the University of Michigan is the state’s oldest university. The university moved to Ann Arbor in 1837 onto 40 acres (16 ha) of what is now known as Central Campus. Since its establishment in Ann Arbor, the university campus has expanded to include more than 584 major buildings with a combined area of more than 34 million gross square feet (781 acres or 3.16 km²), and has two satellite campuses located in Flint and Dearborn. The University was one of the founding members of the Association of American Universities.

    Considered one of the foremost research universities in the United States, the university has very high research activity and its comprehensive graduate program offers doctoral degrees in the humanities, social sciences, and STEM fields (Science, Technology, Engineering and Mathematics) as well as professional degrees in business, medicine, law, pharmacy, nursing, social work and dentistry. Michigan’s body of living alumni (as of 2012) comprises more than 500,000. Besides academic life, Michigan’s athletic teams compete in Division I of the NCAA and are collectively known as the Wolverines. They are members of the Big Ten Conference.

     
  • richardmitnick 10:58 am on September 29, 2020
    Tags: "Provably exact artificial intelligence for nuclear and particle physics"

    From MIT News: “Provably exact artificial intelligence for nuclear and particle physics” 

    MIT News

    From MIT News

    September 24, 2020
    Sandi Miller | Department of Physics

    MIT-led team uses AI and machine learning to explore fundamental forces.

    Artist’s impression of the machine learning architecture that explicitly encodes gauge symmetry for a 2D lattice field theory. Credits: Image courtesy of the MIT-DeepMind collaboration.

    The Standard Model of particle physics describes all the known elementary particles and three of the four fundamental forces governing the universe: everything except gravity.

    Standard Model of Particle Physics, Quantum Diaries.

    These three forces — electromagnetic, strong, and weak — govern how particles are formed, how they interact, and how the particles decay.

    Studying particle and nuclear physics within this framework, however, is difficult, and relies on large-scale numerical studies. For example, many aspects of the strong force require numerically simulating the dynamics at the scale of 1/10th to 1/100th the size of a proton to answer fundamental questions about the properties of protons, neutrons, and nuclei.

    “Ultimately, we are computationally limited in the study of proton and nuclear structure using lattice field theory,” says assistant professor of physics Phiala Shanahan. “There are a lot of interesting problems that we know how to address in principle, but we just don’t have enough compute, even though we run on the largest supercomputers in the world.”

    To push past these limitations, Shanahan leads a group that combines theoretical physics with machine learning models. In their paper “Equivariant flow-based sampling for lattice gauge theory,” published this month in Physical Review Letters, they show how incorporating the symmetries of physics theories into machine learning and artificial intelligence architectures can provide much faster algorithms for theoretical physics.

    “We are using machine learning not to analyze large amounts of data, but to accelerate first-principles theory in a way which doesn’t compromise the rigor of the approach,” Shanahan says. “This particular work demonstrated that we can build machine learning architectures with some of the symmetries of the Standard Model of particle and nuclear physics built in, and accelerate the sampling problem we are targeting by orders of magnitude.”

    Shanahan launched the project with MIT graduate student Gurtej Kanwar and with Michael Albergo, who is now at NYU. The project expanded to include Center for Theoretical Physics postdocs Daniel Hackett and Denis Boyda, NYU Professor Kyle Cranmer, and physics-savvy machine-learning scientists at Google DeepMind, Sébastien Racanière and Danilo Jimenez Rezende.

    This month’s paper is one in a series aimed at enabling studies in theoretical physics that are currently computationally intractable. “Our aim is to develop new algorithms for a key component of numerical calculations in theoretical physics,” says Kanwar. “These calculations inform us about the inner workings of the Standard Model of particle physics, our most fundamental theory of matter. Such calculations are of vital importance to compare against results from particle physics experiments, such as the Large Hadron Collider at CERN, both to constrain the model more precisely and to discover where the model breaks down and must be extended to something even more fundamental.”

    The only known systematically controllable method of studying the Standard Model of particle physics in the nonperturbative regime is based on a sampling of snapshots of quantum fluctuations in the vacuum. By measuring properties of these fluctuations, one can infer properties of the particles and collisions of interest.

    This technique comes with challenges, Kanwar explains. “This sampling is expensive, and we are looking to use physics-inspired machine learning techniques to draw samples far more efficiently,” he says. “Machine learning has already made great strides on generating images, including, for example, recent work by NVIDIA to generate images of faces ‘dreamed up’ by neural networks. Thinking of these snapshots of the vacuum as images, we think it’s quite natural to turn to similar methods for our problem.”

    Adds Shanahan, “In our approach to sampling these quantum snapshots, we optimize a model that takes us from a space that is easy to sample to the target space: given a trained model, sampling is then efficient since you just need to take independent samples in the easy-to-sample space, and transform them via the learned model.”
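    To make the "map an easy distribution onto the target" idea concrete, here is a deliberately tiny toy: a three-parameter invertible map, trained by minimizing a reverse Kullback-Leibler divergence, that reshapes Gaussian noise into samples from a double-well Boltzmann distribution. It is only a cartoon of generic flow-based sampling; the collaboration's actual models are gauge-equivariant flows over lattice field configurations and are far more elaborate.

```python
# Cartoon of flow-based sampling: learn an invertible map from easy-to-sample
# Gaussian noise z to a 1-D "target" double-well distribution. Illustrative
# only; not the gauge-equivariant flows used for lattice gauge theory.
import torch

def log_target(x):                       # unnormalized log density (double well)
    return -0.5 * (x**2 - 4.0)**2

# Invertible map x = a*z + b + c*tanh(z); monotone as long as a > |c|
# (that constraint is not enforced here; this is only a cartoon).
a = torch.tensor(1.0, requires_grad=True)
b = torch.tensor(0.0, requires_grad=True)
c = torch.tensor(0.5, requires_grad=True)
opt = torch.optim.Adam([a, b, c], lr=0.02)

for step in range(2000):
    z = torch.randn(4096)
    x = a * z + b + c * torch.tanh(z)
    # Change of variables: log q(x) = log N(z; 0, 1) - log |dx/dz|
    log_det = torch.log(torch.abs(a + c * (1.0 - torch.tanh(z)**2)))
    log_q = -0.5 * z**2 - 0.9189385 - log_det
    loss = (log_q - log_target(x)).mean()   # reverse KL, up to a constant
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():                        # cheap sampling once trained
    z = torch.randn(10000)
    samples = a * z + b + c * torch.tanh(z)
```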

    In particular, the group has introduced a framework for building machine-learning models that exactly respect a class of symmetries, called “gauge symmetries,” crucial for studying high-energy physics.

    As a proof of principle, Shanahan and colleagues used their framework to train machine-learning models to simulate a theory in two dimensions, resulting in orders-of-magnitude efficiency gains over state-of-the-art techniques and more precise predictions from the theory. This paves the way for significantly accelerated research into the fundamental forces of nature using physics-informed machine learning.

    The group’s first few papers as a collaboration discussed applying the machine-learning technique to a simple lattice field theory, and developed this class of approaches on compact, connected manifolds which describe the more complicated field theories of the Standard Model. Now they are working to scale the techniques to state-of-the-art calculations.

    “I think we have shown over the past year that there is a lot of promise in combining physics knowledge with machine learning techniques,” says Kanwar. “We are actively thinking about how to tackle the remaining barriers in the way of performing full-scale simulations using our approach. I hope to see the first application of these methods to calculations at scale in the next couple of years. If we are able to overcome the last few obstacles, this promises to extend what we can do with limited resources, and I dream of performing calculations soon that give us novel insights into what lies beyond our best understanding of physics today.”

    This idea of physics-informed machine learning is also known by the team as “ab-initio AI,” a key theme of the recently launched MIT-based National Science Foundation Institute for Artificial Intelligence and Fundamental Interactions (IAIFI), where Shanahan is research coordinator for physics theory.

    Led by the Laboratory for Nuclear Science, the IAIFI comprises both physics and AI researchers at MIT and Harvard, Northeastern, and Tufts universities.

    “Our collaboration is a great example of the spirit of IAIFI, with a team with diverse backgrounds coming together to advance AI and physics simultaneously,” says Shanahan. As well as research like Shanahan’s targeting physics theory, IAIFI researchers are also working to use AI to enhance the scientific potential of various facilities, including the Large Hadron Collider and the Laser Interferometer Gravitational-Wave Observatory, and to advance AI itself.

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    USPS “Forever” postage stamps celebrating Innovation at MIT.

    MIT Campus

     
  • richardmitnick 10:39 am on September 29, 2020
    Tags: "5 NASA Spacecraft That Are Leaving Our Solar System for Good", Pioneer 10 and Pioneer 11, Voyager 1 and Voyager 2

    From Discover Magazine: “5 NASA Spacecraft That Are Leaving Our Solar System for Good” 

    DiscoverMag

    From Discover Magazine

    September 26, 2020
    Eric Betz

    Most of these interstellar spacecraft carry messages intended to introduce ourselves to any aliens that find them along the way.

    For millennia, humans have gazed up at the stars and wondered what it would be like to journey to them. And while sending astronauts beyond the solar system remains a distant dream, humanity has already launched five robotic probes that are on paths to interstellar space.

    Each of these craft was primarily designed to explore worlds in the outer solar system. But when they finished their jobs, their momentum continued to carry them farther from the Sun. Astronomers knew their ultimate fate was to live among the distant stars. And that’s why all but one of these spacecraft carries a message for any extraterrestrial intelligence that might find it along the way.

    NASA Pioneer 10.

    NASA Pioneer 11.

    NASA/Voyager 1.

    NASA/Voyager 2.

    NASA/New Horizons spacecraft.

    Pioneer 10 and Pioneer 11
    Voyager 1 and Voyager 2
    New Horizons

    The Voyager golden record (left) is a 12-inch gold-plated copper disc. It’s covered with aluminum and electroplated with an ultra-pure sample of uranium-238. Credit: NASA.

    Half a century ago, NASA built its two identical Voyager spacecraft to capitalize on a rare alignment of the outermost planets that only happens once every 175 years. Jupiter, Saturn, Uranus and Neptune were perfectly placed, allowing scientists to chart a course that would send the spacecraft by each of these gas giants. That path also meant that, after they’d completed their tour of our solar system, both Voyager 1 and Voyager 2 would continue into interstellar space.

    Voyager 1 launched in 1977, made its flyby of Jupiter in 1979, and passed by Saturn in 1980. But rather than continuing on to Uranus and Neptune, like Voyager 2 did, NASA decided to send Voyager 1 on a detour past Saturn’s moon Titan — the only other known world in the solar system with an atmosphere thick enough to host a rain cycle.

    That choice made Voyager 1 veer off its grand tour of the outer planets and head up and away from the orbital plane of our solar system, putting it on course for interstellar space.

    Meanwhile, Voyager 2 was sent on an even bolder mission to explore the outer planets. Voyager 2 continued on past Saturn and encountered Uranus and Neptune. It still remains the only spacecraft to see those two planets up close.

    To this day, both Voyager 1 and Voyager 2 remain in communication with NASA. And each spacecraft has now passed beyond the heliopause, a region where the Sun’s solar wind loses its sway. On August 25, 2012, Voyager 1 reached the heliopause and entered what some consider interstellar space. Voyager 2 accomplished the same feat on November 5, 2018.

    That milestone was really just their first step on a long journey into the stars.

    The spacecraft may be zipping along at a breathtaking 35,000 mph, but they will still take many millennia to truly leave the solar system. Voyager 1’s course could take it close to another star in some 40,000 years, while Voyager 2 won’t get close to another star for some 300,000 years, according to NASA.
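
    As a rough back-of-the-envelope check on those timescales, here is a minimal Python sketch using the 35,000 mph figure quoted above and the standard miles-per-light-year conversion; NASA’s quoted encounter distances also account for the fact that the target stars are themselves moving.

    # Rough check: how far does a Voyager-speed probe travel in a given time?
    speed_mph = 35_000                    # approximate Voyager speed from the article
    hours_per_year = 24 * 365.25
    miles_per_light_year = 5.879e12       # standard conversion

    def distance_in_light_years(years):
        miles = speed_mph * hours_per_year * years
        return miles / miles_per_light_year

    print(f"40,000 years:  {distance_in_light_years(40_000):.1f} light-years")
    print(f"300,000 years: {distance_in_light_years(300_000):.1f} light-years")

    At roughly two light-years covered in 40,000 years, a close pass of another star really is a matter of deep time rather than decades.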

    However, NASA has prepared for the possibility that someone (or something) stumbles upon them along the way. Both spacecraft contain copies of the Golden Record. And as Carl Sagan noted: “The spacecraft will be encountered and the record played only if there are advanced spacefaring civilizations in interstellar space.”

    We’ll just have to hope record players are popular in other star systems.

    Scientists fought for decades to get a mission to Pluto approved. But months after New Horizons finally launched, Pluto was demoted from planet to dwarf planet. That didn’t make the spacecraft’s findings any less incredible, though.

    At Pluto, New Horizons found signs of ice volcanoes, giant mountains, and even a liquid water ocean. Then, the probe pushed on into the depths of the Kuiper Belt, where it explored 486958 Arrokoth, a primordial world of ice and rock that looks like two pancakes stuck together.

    Now, New Horizons is continuing on in the footsteps of the Pioneer and Voyager missions, as it’s only the fifth spacecraft ever launched on a path that will take it out of the solar system.

    But unlike its interstellar spacecraft kin, New Horizons doesn’t carry a plaque or a golden record designed to teach aliens about the human race. And that was intentional.

    “After we got into the project in 2002, it was suggested we add a plaque,” Alan Stern, New Horizons principal investigator, said in an interview with CollectSPACE.com back in 2008. “I rejected that simply as a matter of focus,” he added. “We had a small team on a tight budget and I knew it would be a big distraction.”

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 10:20 am on September 29, 2020 Permalink | Reply
    Tags: "Salty Lakes Found Beneath Mars' Surface", ,   

    From Discover Magazine: “Salty Lakes Found Beneath Mars’ Surface” 

    DiscoverMag

    From Discover Magazine

    September 28, 2020
    Mark Zastrow

    New research adds fresh evidence for salty lakes below the red planet’s south pole.

    1
    The potential underground salt lake reported by the Mars Express spacecraft in 2018 is located near the planet’s permanent south polar ice cap. (Credit: USGS Astrogeology Science Center, Arizona State University, INAF.)

    Two years ago, planetary scientists were abuzz with the potential discovery of a subsurface lake on Mars — buried deep beneath layers of ice and dust at the planet’s south pole.

    Now, new research adds more weight to that possibility, suggesting there is not just one briny lake but several.

    These aquifers would represent the first known bodies of liquid water on Mars — albeit extremely salty water. Taken together with other recent discoveries, such as lakes beneath the surface of the dwarf planet Ceres, the finding is part of a growing picture that liquid water is more widespread in the solar system than previously thought.

    Looking Salty

    In 2018, an Italian team of researchers announced evidence [Astronomy] of salt water beneath the southern polar cap of Mars: the radar sounder of the ESA Mars Express orbiter had detected unusually bright, reflective patches below the ice. This, the researchers argued, could be a lake of liquid water 12 miles (20 kilometers) across that melted from the ice cap and was trapped beneath it, over a kilometer beneath the surface.
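
    The reason liquid water stands out so clearly in radar soundings is dielectric contrast: liquid water and brine have a much higher permittivity than ice or dry rock, so an ice-over-water interface reflects far more power back to the spacecraft than an ice-over-rock one. A minimal Python sketch of that idea follows; the permittivity values are generic, illustrative assumptions, not numbers from the MARSIS analysis.

    import math

    def reflected_power(eps1, eps2):
        """Normal-incidence Fresnel power reflection coefficient between two
        non-magnetic media with real relative permittivities eps1 and eps2."""
        n1, n2 = math.sqrt(eps1), math.sqrt(eps2)
        return ((n1 - n2) / (n1 + n2)) ** 2

    EPS_ICE = 3.1     # water ice (illustrative value)
    EPS_ROCK = 8.0    # dry basaltic material (illustrative value)
    EPS_BRINE = 60.0  # very salty liquid water (illustrative; pure water is ~80)

    print(f"ice over rock : {reflected_power(EPS_ICE, EPS_ROCK):.3f}")
    print(f"ice over brine: {reflected_power(EPS_ICE, EPS_BRINE):.3f}")

    In this toy comparison the ice-over-brine interface returns several times more power than the ice-over-rock one, which is the sort of anomalous brightness the radar team flagged.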

    On Earth, similar lakes form beneath glaciers, where heat from the ground and the pressure of the glacier above melt some of its ice. And although Mars is too cold for pure water to remain liquid below its glaciers, it could do so if the water were extremely salty, which lowers its freezing point considerably, the team says. This briny mixture might be filled with salts called perchlorates, dissolved from rocks.

    But it wasn’t a slam-dunk case. Mars is not very geologically active, and it’s not clear whether the planet’s interior can supply enough heat to melt a lake of that size.

    Now, the team is back with a new study, published September 28 in Nature Astronomy, that they say bolsters their argument.

    The team returned to data from the Mars Express radar sounder, called MARSIS (Mars Advanced Radar for Subsurface and Ionospheric Sounding).

    This time they analyzed a dataset of 134 radar profiles, compared to 29 in their previous study.

    They also brought a new approach, adapting radar techniques used by Earth-orbiting satellites to image buried geological features. Their analysis looks not just at how bright an area is but at other metrics as well, such as how the signal strength varies, which indicates how smooth the reflecting surface is. This method has previously found subglacial lakes in Antarctica, Greenland, and the Canadian Arctic.

    By running their analysis on sounding data collected over the previously identified bright area and comparing it with the surrounding regions, the team saw major differences in radar characteristics that point to liquid water. That strengthens the evidence that the original bright patch is indeed a salty lake.
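
    As a loose illustration of that kind of multi-metric test (a generic sketch, not the team’s actual pipeline; the statistics and threshold values here are assumptions), one could stack repeated radar passes over the same ground cells and flag cells that are both unusually bright and consistently so:

    import numpy as np

    def flag_liquid_water(profiles, bright_thresh=0.7, var_thresh=0.05):
        """Toy detection criterion on a stack of radar profiles.

        profiles: array of shape (n_passes, n_cells) holding normalized basal
        echo power from repeated passes over the same ground cells. Returns a
        boolean mask of cells that are both bright and stable (smooth reflector).
        """
        mean_power = profiles.mean(axis=0)
        variability = profiles.std(axis=0)
        return (mean_power > bright_thresh) & (variability < var_thresh)

    # Synthetic example: five "dry" cells and one bright, stable "wet" cell.
    rng = np.random.default_rng(0)
    dry = 0.3 + 0.1 * rng.standard_normal((10, 5))
    wet = 0.85 + 0.02 * rng.standard_normal((10, 1))
    print(flag_liquid_water(np.hstack([dry, wet])))   # only the last cell is flagged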

    In addition, they spotted other, smaller areas that met their detection criteria for liquid water — or came close, suggesting they’re ponds or mucky sediments.

    Life Below Mars?

    The prospect of these underground, salty lakes adds an intriguing wrinkle to the debate about whether life could exist on Mars today. The extreme salt content doesn’t sound hospitable for life, but some researchers think it could be possible. A recent paper by a pair of researchers at Harvard University and the Florida Institute of Technology (FIT) also addressed the possibility of life in underground environments on Mars and even the moon.

    “Extremophilic organisms are capable of growth and reproduction at low subzero temperatures,” said Harvard’s Avi Loeb, one of the study authors, in a press release [ https://sciencesprings.wordpress.com/2020/09/23/from-harvard-smithsonian-center-for-astrophysics-could-life-exist-deep-underground-on-mars/ ]. “They are found in places that are permanently cold on Earth, such as the polar regions and the deep sea, and might also exist on the moon or Mars.”

    In their paper, published September 20 in The Astrophysical Journal Letters, they calculate that even without the addition of salt, liquid water is possible on Mars several miles deep. And although any life at those depths would be subjected to crushing pressures from the rock above, some known single-celled organisms can survive them.

    One thing is certain: actually searching for such life will require drilling technology far beyond what we are capable of sending into space at the moment. But, write Loeb and his coauthor Manasvi Lingam of FIT, NASA’s Artemis program could pave the way for such subsurface exploration by returning humans to the moon — beginning as soon as 2024.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 9:36 am on September 29, 2020 Permalink | Reply
    Tags: "W. M. Keck Observatory’s Adaptive Optics System Upgraded to ‘See’ in Infrared", , , , , W.M. Keck Observatory   

    From W.M. Keck Observatory: “W. M. Keck Observatory’s Adaptive Optics System Upgraded to ‘See’ in Infrared” 

    Keck Observatory, operated by Caltech and the University of California, Maunakea Hawaii USA, 4,207 m (13,802 ft).

    From W.M. Keck Observatory

    September 28, 2020

    Mari-Ela Chock, Communications Officer
    W. M. Keck Observatory
    mchock@keck.hawaii.edu

    W. M. Keck Observatory can now provide adaptive optics (AO) correction using light from cosmic objects at wavelengths invisible to the naked eye. Near-infrared AO wavefront sensing, in addition to sensing in visible light, is a new capability on the Keck II telescope thanks to a major upgrade, which involves the installation of an innovative infrared pyramid wavefront sensor.

    1
    The Keck II telescope, showing the 36 hexagonal segments of the primary mirror. A fold mirror in the central tower sends the light to the AO system housed within the enclosure on the right. Credit: Andrew Cooper.

    The paper detailing the project is published in the Journal of Astronomical Telescopes, Instruments, and Systems.

    “This is the first infrared pyramid wavefront sensor available for scientific use. Most observatories use visible-wavelength Shack-Hartmann or pyramid wavefront sensors,” said Peter Wizinowich, chief of technical development at Keck Observatory and principal investigator for this project.

    Adding a pyramid wavefront sensor in the infrared is exceptionally useful in direct imaging and detecting cooler, hard-to-see astronomical objects such as exoplanets and young dwarf stars where planet formation commonly occurs.

    “It’s like adding night-vision goggles to Keck’s AO system,” said lead author Charlotte Bond, a postdoctoral AO scientist who led the project development team. “The infrared pyramid wavefront sensor is especially desirable for the study of baby exoplanets, which are expected to orbit cooler, redder stars or be shrouded in dust, making them faint at visible wavelengths but relatively bright in the infrared.”

    AO technology removes the blurring effect caused by turbulence in the Earth’s atmosphere, resulting in sharper views of the universe. This technique relies on a star, either a real one or an artificial star created by a laser, near the object of interest as a reference point; the wavefront sensor measures the atmospheric blurring of the starlight, then a deformable mirror shapeshifts 1,000 times per second to correct the distortions.

    The new wavefront sensor is highly sensitive; it uses a four-sided pyramid prism to split the incoming starlight and produce four images of the telescope’s primary mirror on a detector placed behind the prism. The distribution of light among the four images provides superior accuracy in measuring atmospheric blurring.
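
    In rough terms, the four pupil images act like a quad-cell at every point of the pupil: a local tilt in the wavefront shifts light between the four images at that position, so normalized intensity differences give local wavefront slopes. The Python sketch below shows the standard textbook form of that slope estimate; the quadrant labeling and sign conventions are assumptions for illustration, not details of the Keck implementation.

    import numpy as np

    def pyramid_slopes(i1, i2, i3, i4):
        """Estimate local wavefront slopes from the four pupil images of a
        pyramid wavefront sensor (i1..i4 are 2D arrays, one per pyramid facet,
        registered pixel-for-pixel on the pupil). Returns the normalized
        x and y slope maps; signs depend on how the quadrants are labeled."""
        total = i1 + i2 + i3 + i4
        total = np.where(total == 0, 1e-12, total)   # avoid dividing by zero off-pupil
        sx = ((i1 + i3) - (i2 + i4)) / total
        sy = ((i1 + i2) - (i3 + i4)) / total
        return sx, sy

    These slope maps are what the control system turns into deformable-mirror commands.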

    2
    Four images of atmospheric turbulence from Keck Observatory’s new pyramid wavefront sensor, with the adaptive optics control loop open (left) and closed (right). Credit: C. Bond.

    The University of Hawaiʻi Institute for Astronomy provided the camera for the pyramid wavefront sensor, which is built around a new technology: a very low-noise infrared avalanche photodiode array.

    U Hawaii Institute for Astronomy

    The Keck II telescope’s AO upgrade also involves a new GPU-based real-time controller (RTC) that analyzes the image coming from the pyramid wavefront sensor then controls the deformable mirror to correct the atmospheric distortions. The software architecture implemented on the RTC is based on code developed by Olivier Guyon at Subaru Telescope.


    NAOJ/Subaru Telescope at Mauna Kea, Hawaii, USA, 4,207 m (13,802 ft) above sea level.
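
    Conceptually, the real-time controller runs a tight loop: read the wavefront sensor, map its slope measurements to actuator commands through a pre-computed reconstructor matrix, and nudge the deformable mirror, on the order of a thousand times per second. The sketch below is a heavily simplified, generic integrator loop, not the Keck RTC’s actual design; the gain value and matrix reconstructor are standard adaptive-optics practice.

    import numpy as np

    def run_ao_loop(read_sensor, apply_dm, reconstructor, n_actuators,
                    gain=0.4, n_iterations=1000):
        """Toy adaptive-optics integrator loop.

        read_sensor()      -> 1D array of wavefront-sensor slope measurements
        apply_dm(commands) -> sends a command vector to the deformable mirror
        reconstructor      -> matrix mapping slopes to actuator commands
        """
        commands = np.zeros(n_actuators)
        for _ in range(n_iterations):          # ~1 kHz in a real system
            slopes = read_sensor()             # measure the residual distortion
            update = reconstructor @ slopes    # map slopes to actuator space
            commands -= gain * update          # integrator update
            apply_dm(commands)                 # reshape the mirror
        return commands

    # Tiny synthetic example: two slopes, two actuators, identity reconstructor,
    # and a static "turbulence" that the mirror learns to cancel.
    state = {"turb": np.array([1.0, -0.5]), "dm": np.zeros(2)}
    read = lambda: state["turb"] + state["dm"]
    apply_cmd = lambda c: state.update(dm=c.copy())
    run_ao_loop(read, apply_cmd, np.eye(2), 2, n_iterations=50)
    print(np.round(read(), 4))   # residual slopes driven toward zero

    Because the sensor only ever sees the residual error left after the mirror’s last correction, a modest gain applied every millisecond or so is enough to keep the distortion small.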

    “This has been one of the most exciting projects I’ve ever worked on,” said co-author Sylvain Cetre, a software engineer at Keck Observatory and one of the lead project developers. “The new RTC performs heavy computations in the shortest time possible at very high speeds, resulting in a dramatic improvement in image quality.”

    The success of the Keck II telescope AO upgrade was most recently proven in May by Jason Wang, a Heising-Simons Foundation 51 Pegasi b Fellow at Caltech, when he and his team tested the new technology and captured remarkable direct images of the birth of a pair of giant exoplanets orbiting the star PDS 70 [ https://sciencesprings.wordpress.com/2020/05/18/from-keck-observatory-astronomers-confirm-existence-of-two-giant-newborn-planets-in-pds-70-system/ ].

    3
    A direct image of the PDS 70 protoplanets b and c (labeled with white arrows), with the circumstellar disk removed. The image was captured using Keck Observatory’s upgraded adaptive optics system on the Keck II telescope. Credit: J. Wang, Caltech.

    The upgrade was also used during the testing phase over the past year to study a nearby binary star whose past orbit was the closest known flyby to our solar system [The Astronomical Journal].

    Keck Observatory has since made the new AO system available for regular science observations, beginning in August of 2020.

    With this upgrade in place, astronomers can now take images with more exquisite detail than ever before. This advanced AO capability will be used to deepen our knowledge of planetary formation, dark matter, dark energy, and more.

    According to Zoran Ninkov, a program director at the National Science Foundation, “NSF’s Advanced Technologies and Instrumentation program and Mid-Scale Innovations Program have supported advances in AO systems like this, which allow ground-based telescopes to attain images as sharp as those obtained from space. This new infrared pyramid system at Keck will provide an exciting new window on faint and cool objects like exoplanets, perhaps even finding some that could support life.”

    “I had been thinking about the potential performance advantages of a near-infrared wavefront sensor for well over a decade,” said Wizinowich. “It is gratifying to see this breakthrough technology finally manifest.”

    3
    Jacques Robert Delorme, postdoctoral AO scientist/engineer (left), and Charlotte Bond, postdoctoral AO scientist (right), assembling the fiber injection unit and pyramid wavefront sensor breadboards. Credit: Nem Jovanovic.

    4
    Installation of the pyramid wavefront sensor and fiber injection unit on the Keck II AO bench. Credit: Charlotte Bond.

    5
    The Keck Observatory project development team celebrating after achieving first light during a successful on-sky test of the new pyramid wavefront sensor on the Keck II telescope. (L-R): Sylvain Cetre, software engineer, Peter Wizinowich, principal investigator, Charlotte Bond, postdoctoral AO scientist, Jacques Robert Delorme, postdoctoral AO scientist/engineer, Sam Ragland, AO scientist, and colleagues from Caltech on the video monitor. Credit: Shui Kwok.

    This project was supported by NSF’s Advanced Technologies and Instrumentation program (award number 1611623) and developed in collaboration with the University of Hawaiʻi (UH), Caltech, Subaru Telescope, Arcetri Observatory, and Marseille Astrophysics Laboratory. Wizinowich served as the Principal Investigator who worked with co-Principal Investigators Dimitri Mawet, astronomy professor at Caltech and senior researcher at JPL, and Mark Chun, specialist/associate director of UH Institute for Astronomy – Hilo.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Mission
    To advance the frontiers of astronomy and share our discoveries with the world.

    The W. M. Keck Observatory operates the largest, most scientifically productive telescopes on Earth. The two 10-meter optical/infrared telescopes on the summit of Mauna Kea on the Island of Hawaii feature a suite of advanced instruments including imagers, multi-object spectrographs, high-resolution spectrographs, an integral-field spectrometer, and world-leading laser guide star adaptive optics systems. Keck Observatory is a private 501(c)(3) non-profit organization and a scientific partnership of the California Institute of Technology, the University of California, and NASA.

    Today Keck Observatory is supported by both public funding sources and private philanthropy. As a 501(c)3, the organization is managed by the California Association for Research in Astronomy (CARA), whose Board of Directors includes representatives from the California Institute of Technology and the University of California, with liaisons to the board from NASA and the Keck Foundation.


    Keck UCal

     