Tagged: Physics

  • richardmitnick 8:14 am on October 19, 2016
    Tags: Particle X, Physics, The lithium problem

    From PI: “Particle X and the Case of the Missing Lithium” 

    Perimeter Institute

    October 11, 2016
    Erin Bow

    Maxim Pospelov

    About two-thirds of the lithium that should be in the early universe is missing. Perimeter researcher Maxim Pospelov thinks a hypothetical new particle – particle X – may have kept it from being formed.

    There’s nothing like a good anomaly. In science, anomalies – places where theory contradicts observation – drive progress.

    As Isaac Asimov once said, “The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’ but ‘That’s funny….’”

    Cosmology, sadly, suffers from a lack of anomalies. Where it makes predictions at all, the standard cosmology tends to be spot on. But there’s an exception: there is not enough lithium-7 in the early universe.

    Perimeter Associate Faculty member Maxim Pospelov has been trying to figure out what happened to that lithium for more than a decade. Now, he and his colleagues have a new idea – one that invokes an unknown particle which they dubbed “particle X.”

    “To me, the lithium problem is quite intriguing,” says Pospelov. “It is a good hard problem that has been around for many years. It’s one of the best cracks through which we might glimpse new ideas.”

    So what exactly is the lithium problem? Using a model called big bang nucleosynthesis, we can calculate the relative amounts of light elements that were produced in the early universe. We can also observe how much of each element was present in the earliest stars. For the most part, the predictions and the observations line up. The model calls for three parts hydrogen to every one part helium, with a dash of deuterium (about 0.01 percent) and even smaller quantities of the two isotopes of lithium, lithium-6 and lithium-7.

    Observations check these ratios and find the predictions to be very accurate. But there is one outlier: lithium-7. That prediction is off by a factor of three. About two-thirds of the lithium-7 predicted by the big bang nucleosynthesis model is missing.

    Is the model of big bang nucleosynthesis simply wrong? That seems unlikely, because its other predictions are so good. Might the lithium be present but somehow hidden? The best efforts to find it have yet to pan out. Might something have happened to the lithium-7 along the way? That’s the line along which Pospelov has been working.

    Pospelov’s research often lies at the intersection of particle physics and cosmology. Working with Josef Pradler – a former Perimeter postdoctoral researcher and long-time collaborator – and Andreas Goudelis of the Institute of High Energy Physics in Austria, Pospelov set out to see if there might be a particle physics solution to the lithium-7 problem.

    As described in the paper “A light particle solution to the cosmic lithium problem,” published in Physical Review Letters, the team reverse-engineered a hypothetical particle that could destroy lithium without affecting the relative abundance of hydrogen, helium, and deuterium. They called it particle X.

    “I have to stress that this is the realm of speculation,” says Pospelov. “Speculation is a bad word in Russian, but in English it’s fine.” A bad word? “In Russia, speculators are the guys who get put in jail,” he explains, “but in physics it is sometimes good to follow speculative ideas carefully. Sometimes something interesting emerges.”

    In this case, something interesting did emerge: a hypothetical particle that could resolve the lithium anomaly.

    Though any new particle would obviously be outside the Standard Model of known particle physics, there are ones that look reasonable and ones that look out-of-whack. Particle X looks reasonable. Indeed, it looks reasonable enough that the next step in the research is to plan experiments to check for its existence.

    To reverse-engineer particle X, the team first supposed a long lifetime: on the order of 1,000 seconds. That’s long enough that it could have been created during the big bang, and survived the hot and violent first three minutes of the universe to reach the relative calm where the light elements were formed. “A thousand seconds is a benchmark – the period where the most critical reactions for lithium occur,” explains Pospelov. “If there were an input of neutrons right at this time, for instance, you would inhibit the production of lithium.”

    Many previous attempts to tackle the lithium abundance problem with ideas from particle physics have posited a hypothetical heavy particle whose decay produces neutrons. Add neutrons to the mix at the 1,000-second mark, and lithium formation is suppressed.

    But that’s not the only thing those neutrons would do. They would also increase the abundance of deuterium – and that’s not a good thing. The problem with increasing the predicted abundance of deuterium in the universe is that we recently learned to measure the real abundance of deuterium in the early universe.

    “These measurements became as good as a few percent in the last years, and they are bang-on the predictions of big bang nucleosynthesis,” says Pospelov. “Therefore this idea of new particles that produce neutrons doesn’t work. When we learned to measure deuterium abundance, a hundred models died in a moment.

    “Our idea is to work around this. The idea was not to introduce new neutrons, but to free up existing neutrons only for a short time.”

    The next assumption the team made was the mass of particle X: about 10 MeV. That’s about 20 times heavier than an electron, and about 100 times lighter than a proton.
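    As a quick sanity check on those comparisons (using standard electron and proton masses, which are not quoted in the article), a couple of lines of Python reproduce the factors of roughly 20 and 100:

```python
# Rough mass comparison for the hypothesized ~10 MeV "particle X".
# The electron and proton masses are standard reference values, not numbers from the paper.
m_x = 10.0          # MeV, hypothesized mass of particle X (from the article)
m_electron = 0.511  # MeV
m_proton = 938.3    # MeV

print(f"particle X / electron: {m_x / m_electron:.0f}x")  # ~20x heavier than an electron
print(f"proton / particle X:   {m_proton / m_x:.0f}x")    # ~94x, i.e. roughly 100x lighter than a proton
```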

    Unlike previous hypothetical particles, it is too light to decay into neutrons. Instead, the team envisions particle X producing neutrons by breaking apart the deuterium, whose nucleus contains one proton and one neutron. “The particle would hit that nucleus and knock the neutron free,” says Pospelov.

    These knocked-free neutrons would enter the soup of the big bang nucleosynthesis, suppressing the formation of lithium. But those neutrons would be quickly swept back up to form deuterium again. As the soup cooled, the ultimate abundance of the deuterium would be unchanged. The team calls the process neutron recycling, and it seems to solve the lithium abundance problem without introducing changes to the otherwise successful model of big bang nucleosynthesis.

    It would be great, lithium-wise, if particle X did exist, but at the moment the idea is still speculative (in the good sense). The team’s next step would be to seek independent lines of evidence for particle X, starting with searches at earthbound particle physics experiments.

    The good news is that the team’s theory is well developed and sophisticated enough to predict experimental signatures. And, at only 10 MeV, particle X should be well within the energy range we can study at particle accelerators. However, it interacts – or “couples,” to use the particle physics language – only weakly, which makes it hard to spot.

    “It’s not like a photon or an electron, or even a pion or kaon,” says Pospelov. “The couplings we’ve introduced are strong enough to have an effect on big bang nucleosynthesis, but that’s not very strong.”

    They might start testing, Pospelov suggests, with the sort of experiments that go by the unpoetic name of beam-dumps. In these experiments, a beam of energetic particles is driven into a thick piece of shielding: the dump. Only fairly long-lived particles can come out the other side. The 1,000-second lifetime for particle X may seem short on the scale of the universe, but on the scale of particle physics, where most exotic particles last for blinks, 1,000 seconds is long enough to qualify as stable or meta-stable.

    “You would look for a particle emerging from the beam dump. It would occasionally interact or decay in the detector.”

    Also, since the particle interacts fairly weakly, it might be good to look for it in experiments that specialize in weakly interacting particles, such as the deep underground ones that look for dark matter or study neutrinos. “All I need is someone willing to let me put a small accelerator next to a big neutrino detector,” says Pospelov, and laughs. It does seem a bit like putting a bandshell in a hospital zone: a hard sell. “These are some small challenges I need to address.”

    Also on the to-do list: building a complete field-theoretical model that would include particle X without making a mess of the Standard Model.

    “Speculations are very nice,” says Pospelov, “but what we need is either a discovery or a firm exclusion.”

    And perhaps, someday, a name for the mysterious particle X.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    About Perimeter

    Perimeter Institute is the world’s largest research hub devoted to theoretical physics. The independent Institute was founded in 1999 to foster breakthroughs in the fundamental understanding of our universe, from the smallest particles to the entire cosmos. Research at Perimeter is motivated by the understanding that fundamental science advances human knowledge and catalyzes innovation, and that today’s theoretical physics is tomorrow’s technology. Located in the Region of Waterloo, the not-for-profit Institute is a unique public-private endeavour, including the Governments of Ontario and Canada, that enables cutting-edge research, trains the next generation of scientific pioneers, and shares the power of physics through award-winning educational outreach and public engagement.

  • richardmitnick 10:19 am on October 17, 2016
    Tags: Cooper Pairs, John A Paulson School of Engineering and Applied Sciences, Physics

    From John A Paulson School of Engineering and Applied Sciences: “A new spin on superconductivity” 

    Harvard John A. Paulson School of Engineering and Applied Sciences

    October 14, 2016
    Leah Burrows


    Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have made a discovery that could lay the foundation for quantum superconducting devices. Their breakthrough solves one of the main challenges to quantum computing: how to transmit spin information through superconducting materials.

    Every electronic device — from a supercomputer to a dishwasher — works by controlling the flow of charged electrons. But electrons can carry so much more information than just charge; electrons also spin, like a gyroscope on its axis.

    Harnessing electron spin is really exciting for quantum information processing because not only can an electron spin up or down — one or zero — but it can also spin any direction between the two poles. Because it follows the rules of quantum mechanics, an electron can occupy all of those positions at once. Imagine the power of a computer that could calculate all of those positions simultaneously.
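    In the standard notation of quantum information (added here for orientation; it is not part of the original article), a spin qubit is written as a superposition of “up” and “down”:

```latex
% A general single-qubit (spin-1/2) state: any orientation between the two poles
|\psi\rangle \;=\; \alpha\,|{\uparrow}\rangle \;+\; \beta\,|{\downarrow}\rangle,
\qquad |\alpha|^{2} + |\beta|^{2} = 1
```

    A measurement still returns only “up” or “down,” with probabilities |α|² and |β|²; the computational power comes from manipulating many such superpositions before the measurement is made.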

    A whole field of applied physics, called spintronics, focuses on how to harness and measure electron spin and build spin equivalents of electronic gates and circuits.

    By using superconducting materials through which electrons can move without any loss of energy, physicists hope to build quantum devices that would require significantly less power.

    But there’s a problem.

    According to a fundamental property of superconductivity, superconductors can’t transmit spin. Any electron pairs that pass through a superconductor will have a combined spin of zero.

    In work published recently in Nature Physics, the Harvard researchers found a way to transmit spin information through superconducting materials.

    “We now have a way to control the spin of the transmitted electrons in simple superconducting devices,” said Amir Yacoby, Professor of Physics and of Applied Physics at SEAS and senior author of the paper.

    It’s easy to think of superconductors as particle superhighways, but a better analogy would be a super carpool lane: only paired electrons can move through a superconductor without resistance.

    These pairs are called Cooper Pairs, and they interact in a very particular way. If the way they move in relation to each other (physicists call this momentum) is symmetric, then the pair’s spin has to be antisymmetric — for example, one spin pointing up and the other pointing down, for a combined spin of zero. When they travel through a conventional superconductor, Cooper Pairs’ momentum has to be zero and their orbit perfectly symmetrical.
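    Behind that statement is the Pauli exclusion principle (stated here for reference; it is not spelled out in the article): the total two-electron wavefunction must be antisymmetric, so a symmetric orbital part forces the spins into the antisymmetric singlet combination, which has total spin zero:

```latex
% Spin-singlet state of a conventional Cooper pair: antisymmetric under exchange, total spin S = 0
|\chi_{\text{singlet}}\rangle \;=\; \frac{1}{\sqrt{2}}\Bigl(|{\uparrow\downarrow}\rangle - |{\downarrow\uparrow}\rangle\Bigr)
```

    Conversely, if the orbital part becomes asymmetric, the spins are allowed to sit in one of the symmetric triplet combinations with nonzero total spin — which is the loophole the next paragraph describes.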

    But if you can change the momentum to asymmetric — leaning toward one direction — then the spin can be symmetric. To do that, you need the help of some exotic (aka weird) physics.

    Superconducting materials can imbue non-superconducting materials with their conductive powers simply by being in close proximity. Using this principle, the researchers built a superconducting sandwich, with superconductors on the outside and mercury telluride in the middle. The atoms in mercury telluride are so heavy, and the electrons move so quickly, that the rules of relativity start to apply.

    “Because the atoms are so heavy, you have electrons that occupy high-speed orbits,” said Hechen Ren, coauthor of the study and graduate student at SEAS. “When an electron is moving this fast, its electric field turns into a magnetic field which then couples with the spin of the electron. This magnetic field acts on the spin and gives one spin a higher energy than another.”

    So, when the Cooper Pairs hit this material, their spin begins to rotate.

    “The Cooper Pairs jump into the mercury telluride and they see this strong spin orbit effect and start to couple differently,” said Ren. “The homogenous breed of zero momentum and zero combined spin is still there but now there is also a breed of pairs that gains momentum, breaking the symmetry of the orbit. The most important part of that is that the spin is now free to be something other than zero.”

    The team could measure the spin at various points as the electron waves moved through the material. By using an external magnet, the researchers could tune the total spin of the pairs.

    “This discovery opens up new possibilities for storing quantum information. Using the underlying physics behind this discovery provides also new possibilities for exploring the underlying nature of superconductivity in novel quantum materials,” said Yacoby.

    This research was coauthored by Sean Hart, Michael Kosowsky, Gilad Ben-Shach, Philipp Leubner, Christoph Brüne, Hartmut Buhmann, Laurens W. Molenkamp and Bertrand I. Halperin.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Through research and scholarship, the Harvard School of Engineering and Applied Sciences (SEAS) will create collaborative bridges across Harvard and educate the next generation of global leaders. By harnessing the power of engineering and applied sciences we will address the greatest challenges facing our society.

    Specifically, that means that SEAS will provide to all Harvard College students an introduction to and familiarity with engineering and technology as this is essential knowledge in the 21st century.

    Moreover, our concentrators will be immersed in the liberal arts environment and be able to understand the societal context for their problem solving, capable of working seamlessly with others, including those in the arts, the sciences, and the professional schools. They will focus on the fundamental engineering and applied science disciplines for the 21st century, as we will not teach legacy 20th century engineering disciplines.

    Instead, our curriculum will be rigorous but inviting to students, and be infused with active learning, interdisciplinary research, entrepreneurship and engineering design experiences. For our concentrators and graduate students, we will educate “T-shaped” individuals – with depth in one discipline but capable of working seamlessly with others, including arts, humanities, natural science and social science.

    To address current and future societal challenges, knowledge from fundamental science, art, and the humanities must all be linked through the application of engineering principles with the professions of law, medicine, public policy, design and business practice.

    In other words, solving important issues requires a multidisciplinary approach.

    With the combined strengths of SEAS, the Faculty of Arts and Sciences, and the professional schools, Harvard is ideally positioned to both broadly educate the next generation of leaders who understand the complexities of technology and society and to use its intellectual resources and innovative thinking to meet the challenges of the 21st century.

    Ultimately, we will provide to our graduates a rigorous quantitative liberal arts education that is an excellent launching point for any career and profession.

  • richardmitnick 10:42 am on October 14, 2016
    Tags: Cuprates, Physics, X-ray photon correlation spectroscopy

    From BNL: “Scientists Find Static “Stripes” of Electrical Charge in Copper-Oxide Superconductor” 

    Brookhaven Lab

    October 14, 2016
    Ariana Tantillo
    (631) 344-2347
    Peter Genzer
    (631) 344-3174

    Fixed arrangement of charges coexists with material’s ability to conduct electricity without resistance

    Members of the Brookhaven Lab research team—(clockwise from left) Stuart Wilkins, Xiaoqian Chen, Mark Dean, Vivek Thampy, and Andi Barbour—at the National Synchrotron Light Source II’s Coherent Soft X-ray Scattering beamline, where they studied the electronic order of “charge stripes” in a copper-oxide superconductor. No image credit.

    Cuprates, or compounds made of copper and oxygen, can conduct electricity without resistance by being “doped” with other chemical elements and cooled to temperatures below minus 210 degrees Fahrenheit. Despite extensive research on this phenomenon—called high-temperature superconductivity—scientists still aren’t sure how it works. Previous experiments have established that ordered arrangements of electrical charges known as “charge stripes” coexist with superconductivity in many forms of cuprates. However, the exact nature of these stripes—specifically, whether they fluctuate over time—and their relationship to superconductivity—whether they work together with or against the electrons that pair up and flow without energy loss—have remained a mystery.

    Now, scientists at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory have demonstrated that static, as opposed to fluctuating, charge stripes coexist with superconductivity in a cuprate when lanthanum and barium are added in certain amounts. Their research, described in a paper published on October 11 in Physical Review Letters, suggests that this static ordering of electrical charges may cooperate rather than compete with superconductivity. If this is the case, then the electrons that periodically bunch together to form the static charge stripes may be separated in space from the free-moving electron pairs required for superconductivity.

    “Understanding the detailed physics of how these compounds work helps us validate or rule out existing theories and should point the way toward a recipe for how to raise the superconducting temperature,” said paper co-author Mark Dean, a physicist in the X-Ray Scattering Group of the Condensed Matter Physics and Materials Science Department at Brookhaven Lab. “Raising this temperature is crucial for the application of superconductivity to lossless power transmission.”

    Charge stripes put to the test of time

    To see whether the charge stripes were static or fluctuating in their compound, the scientists used a technique called x-ray photon correlation spectroscopy. In this technique, a beam of coherent x-rays is fired at a sample, causing the x-ray photons, or light particles, to scatter off the sample’s electrons. These photons fall onto a specialized, high-speed x-ray camera, where they generate electrical signals that are converted to a digital image of the scattering pattern. Based on how the light interacts with the electrons in the sample, the pattern contains grainy dark and bright spots called speckles. By studying this “speckle pattern” over time, scientists can tell if and how the charge stripes change.
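    A rough sketch of how such a speckle series is typically quantified (generic x-ray photon correlation spectroscopy analysis, not the Brookhaven team’s actual code): compute the normalized intensity autocorrelation g2(τ) for each pixel and average over a region of the detector. If g2 stays flat as the lag time grows, the speckle pattern — and hence the charge-stripe order — is static.

```python
import numpy as np

def g2_autocorrelation(frames):
    """Normalized intensity autocorrelation g2(tau), averaged over pixels.

    frames: array of shape (T, H, W) -- a time series of speckle images.
    Returns g2 for lag times tau = 1 .. T//2 (in units of the frame spacing).
    A static speckle pattern gives g2(tau) that is roughly constant in tau.
    """
    frames = np.asarray(frames, dtype=float)
    T = frames.shape[0]
    mean_intensity = frames.mean(axis=0)          # time-averaged intensity per pixel
    norm = mean_intensity**2 + 1e-12              # avoid dividing by zero in dark pixels
    g2 = []
    for tau in range(1, T // 2 + 1):
        corr = (frames[:-tau] * frames[tau:]).mean(axis=0)  # <I(t) I(t + tau)> per pixel
        g2.append(np.mean(corr / norm))                     # normalize and average over pixels
    return np.array(g2)
```

    In practice the average would be restricted to pixels around the charge-stripe scattering peak, and the lag times binned logarithmically over the nearly three-hour run.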

    In this study, the source of the x-rays was the Coherent Soft X-ray Scattering (CSX-1) beamline at the National Synchrotron Light Source II (NSLS-II), a DOE Office of Science User Facility at Brookhaven.

    BNL NSLS-II Interior

    “It would be very difficult to do this experiment anywhere else in the world,” said co-author Stuart Wilkins, manager of the soft x-ray scattering and spectroscopy program at NSLS-II and lead scientist for the CSX-1 beamline. “Only a small fraction of the total electrons in the cuprate participate in the charge stripe order, so the intensity of the scattered x-rays from this cuprate is extremely small. As a result, we need a very intense, highly coherent x-ray beam to see the speckles. NSLS-II’s unprecedented brightness and coherent photon flux allowed us to achieve this beam. Without it, we wouldn’t be able to discern the very subtle electronic order of the charge stripes.”

    The team’s speckle pattern was consistent throughout a nearly three-hour measurement period, suggesting that the compound has a highly static charge stripe order. Previous studies had only been able to confirm this static order up to a timescale of microseconds, so scientists were unsure if any fluctuations would emerge beyond that point.

    X-ray photon correlation spectroscopy is one of the few techniques that scientists can use to test for these fluctuations on very long timescales. The team of Brookhaven scientists—representing a close collaboration between one of Brookhaven’s core departments and one of its user facilities—is the first to apply the technique to study the charge ordering in this particular cuprate. “Combining our expertise in high-temperature superconductivity and x-ray scattering with the capabilities at NSLS-II is a great way to approach these kind of studies,” said Wilkins.

    To make accurate measurements over such a long time, the team had to ensure the experimental setup was incredibly stable. “Maintaining the same x-ray intensity and sample position with respect to the x-ray beam are crucial, but these parameters become more difficult to control as time goes on and eventually impossible,” said Dean. “When the temperature of the building changes or there are vibrations from cars or other experiments, things can move. NSLS-II has been carefully engineered to counteract these factors, but not indefinitely.”

    “The x-ray beam at CSX-1 is stable within a very small fraction of the 10-micron beam size over our almost three-hour practical limit,” added Xiaoqian Chen, co-first author and a postdoc in the X-Ray Scattering Group at Brookhaven. CSX-1’s performance exceeds that of any other soft x-ray beamline currently operational in the United States.

    In part of the experiment, the scientists heated up the compound to test whether thermal energy might cause the charge stripes to fluctuate. They observed no fluctuations, even up to the temperature at which the compound is known to stop behaving as a superconductor.

    “We were surprised that the charge stripes were so remarkably static over such long timescales and temperature ranges,” said co-first author and postdoc Vivek Thampy of the X-Ray Scattering Group. “We thought we may see some fluctuations near the transition temperature where the charge stripe order disappears, but we didn’t.”

    In a final check, the team theoretically calculated the speckle patterns, which were consistent with their experimental data.

    Going forward, the team plans to use this technique to probe the nature of charges in cuprates with different chemical compositions.

    X-ray scattering measurements were supported by the Center for Emergent Superconductivity, an Energy Frontier Research Center funded by DOE’s Office of Science.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition
    BNL Campus

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

  • richardmitnick 7:22 pm on October 13, 2016
    Tags: Multi Mission RTG (MMRTG), Physics, Radioisotope thermoelectric generator (RTG), Skutterudites, Thermocouples

    From JPL-Caltech: “Spacecraft ‘Nuclear Batteries’ Could Get a Boost from New Materials” 

    NASA JPL Banner


    October 13, 2016
    Elizabeth Landau
    Jet Propulsion Laboratory, Pasadena, Calif.

    Samad Firdosy, a materials engineer at JPL, holds a thermoelectric module made of four thermocouples, which are devices that help turn heat into electricity. Thermocouples are used in household heating applications, as well as power systems for spacecraft. Credit: NASA/JPL-Caltech

    Sabah Bux, a technologist at JPL, is shown using a furnace in her work to develop thermoelectric materials, which convert heat into electricity. Credit: NASA/JPL-Caltech

    No extension cord is long enough to reach another planet, and there’s no spacecraft charging station along the way. That’s why researchers are hard at work on ways to make spacecraft power systems more efficient, resilient and long-lasting.

    “NASA needs reliable long-term power systems to advance exploration of the solar system,” said Jean-Pierre Fleurial, supervisor for the thermal energy conversion research and advancement group at NASA’s Jet Propulsion Laboratory, Pasadena, California. “This is particularly important for the outer planets, where the intensity of sunlight is only a few percent as strong as it is in Earth orbit.”

    A cutting-edge development in spacecraft power systems is a class of materials with an unfamiliar name: skutterudites (skut-ta-RU-dites). Researchers are studying the use of these advanced materials in a proposed next-generation power system called an eMMRTG, which stands for Enhanced Multi-Mission Radioisotope Thermoelectric Generator.

    Access the mp4 video here.

    What is an RTG?

    Radioactive substances naturally generate heat as they spontaneously transform into other elements. Radioisotope power systems make use of this heat as fuel to produce useful electricity for use in a spacecraft. The radioisotope power systems on NASA spacecraft today harness heat from the natural radioactive decay of plutonium-238 oxide.

    The United States first launched a radioisotope thermoelectric generator (RTG) into space on a satellite in 1961. RTGs have powered NASA’s twin Voyager probes since their launch in 1977; more than 10 billion miles (16 billion kilometers) away, the Voyagers are the most distant spacecraft from Earth and are still going. RTGs have enabled many other missions that have sent back a wealth of science results, including NASA’s Mars Curiosity rover and the New Horizons mission, which flew by Pluto in 2015.

    The new eMMRTG would provide 25 percent more power than Curiosity’s generator at the start of a mission, according to current analyses. Additionally, since skutterudites naturally degrade more slowly than the current materials in the MMRTG, a spacecraft outfitted with an eMMRTG would have at least 50 percent more power at the end of a 17-year design life than it does today.
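    To make those two numbers concrete, here is a toy compounding calculation (the start-of-mission power and yearly degradation rates below are invented for illustration; only the 25 percent head start and the 17-year design life come from the article):

```python
# Toy illustration of how a head start plus slower degradation compounds over a mission.
# All rates below are made up for illustration -- they are not JPL's figures.
P0_mmrtg  = 100.0             # hypothetical start-of-mission power, arbitrary units
P0_emmrtg = 1.25 * P0_mmrtg   # "25 percent more power ... at the start" (from the article)

years = 17                    # design life quoted in the article
loss_mmrtg  = 0.048           # hypothetical fractional power loss per year (current materials)
loss_emmrtg = 0.035           # hypothetical, slower loss for skutterudite couples

end_mmrtg  = P0_mmrtg  * (1 - loss_mmrtg)  ** years
end_emmrtg = P0_emmrtg * (1 - loss_emmrtg) ** years
print(f"end-of-life power ratio: {end_emmrtg / end_mmrtg:.2f}")  # above 1.5 with these made-up rates
```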

    “Having a more efficient thermoelectric system means we’d need to use less plutonium. We could go farther, for longer and do more,” Bux said.

    What are skutterudites?

    The defining new ingredients in the proposed eMMRTG are materials called skutterudites, which have unique properties that make them especially useful for power systems. These materials conduct electricity like metal, but heat up like glass, and can generate sizable electrical voltages.

    Materials with all of these characteristics are hard to come by. A copper pot, for example, is an excellent conductor of electricity, but gets very hot quickly. Glass, on the other hand, insulates against heat well, but it can’t conduct electricity. Neither material alone offers the right mix of properties for a thermoelectric material, which converts heat into electricity.

    “We needed to design high temperature compounds with the best mix of electrical and heat transfer properties,” said Sabah Bux, a technologist at JPL who works on thermoelectric materials. “Skutterudites, with their complex structures composed of heavy atoms like antimony, allow us to do that.”

    RTGs in space

    A team at JPL is working on turning skutterudites into thermocouples. A thermocouple is a device that generates an electrical voltage from the temperature difference in its components. Compared to other materials, thermocouples made of skutterudites need a smaller temperature difference to produce the same amount of useful power, making them more efficient.

    What is a thermoelectric material?
    Thermoelectric materials are materials that can convert a temperature difference into electricity, or vice versa.

    What is a thermocouple?
    A conventional thermocouple is made of two different thermoelectric materials joined together at one “shoe,” or end, where its temperature is measured. When the joined end is held at a different temperature than the free ends, the two materials respond differently to the heat flowing through them. That temperature difference creates a voltage (the push that makes electrons flow through the material), converting a portion of the transferred heat into electricity.
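    The textbook relation behind this is the Seebeck effect (stated here for reference; it is not spelled out in the article): the open-circuit voltage of a couple made of materials A and B is set by the difference in their Seebeck coefficients times the temperature difference between the hot junction and the cold ends,

```latex
% Open-circuit voltage of a thermocouple (Seebeck effect)
V \;\approx\; \left(S_{A} - S_{B}\right)\left(T_{\text{hot}} - T_{\text{cold}}\right)
```

    A good thermoelectric material therefore needs a large Seebeck coefficient, good electrical conductivity, and poor thermal conductivity — exactly the “conduct electricity like metal, but heat up like glass” combination described above.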

    How do thermocouples work?
    Thermocouples are in every home: They measure the temperature in your oven and control your water heater. Most household thermocouples are inefficient: they produce a voltage so small that it drives almost no electrical current. By contrast, skutterudites are a lot more efficient: They require a smaller temperature difference to produce useful electricity.

    NASA is studying thermocouples made out of skutterudites that have a flat top and two “legs,” somewhat like the iconic Stonehenge stone monuments. Heat transfers across the thermocouple from a high-temperature heat source to a suitable heat ‘sink’ (such as cold water). An electrical current is produced between the hot end (the flat top) and the cold end (the legs) of the thermocouple.

    “It’s as though there are a lot of people in a room where one side is hot and one side is cold,” said JPL’s Sabah Bux. “The people, which represent the electrical charges, will move from the hot side to the cold side. That movement is electricity.”

    The thermocouples are joined end-to-end in one long circuit – the electrical current goes up, over and down each thermocouple, producing useful power. Devices outfitted in this way can take advantage of a variety of heat sources, ranging in temperature from 392 to more than 1832 degrees Fahrenheit (200 to more than 1000 degrees Celsius).

    In Curiosity’s power system, the Multi Mission RTG (MMRTG), 768 thermocouples encircle a central can-like structure, all facing the same direction towards the heat source, at the center of the generator. The enhanced MMRTG (eMMRTG) would have the same number of thermocouples, but all would be made from skutterudite material instead of the alloys of telluride currently used.

    “Only minimal changes to the existing MMRTG design are needed to get these results,” Fleurial said. A group of about two dozen people at JPL is dedicated to working on these advanced materials and testing the resulting thermocouple prototypes.

    The new skutterudite-based thermocouples passed their first major NASA review in late 2015. If they pass further reviews in 2017 and 2018, the first eMMRTG using them could fly aboard NASA’s next New Frontiers-class mission.

    Earth-based applications of skutterudite

    There are many potential applications for these advanced thermoelectric materials here on Earth.

    “In situations where waste heat is emitted, skutterudite materials could be used to improve efficiency and convert that heat into useful electricity,” said Thierry Caillat, project leader for the technology maturation project at JPL.

    For example, exhaust heat from a car could be converted into electricity and fed back into the vehicle, which could be used to charge batteries and reduce fuel use. Industrial processes that require high temperatures, such as ceramic and glass processing, could also use skutterudite materials to make use of waste heat. In 2015, JPL licensed patents on these high-temperature thermoelectric materials to a company called Evident Technologies, Troy, New York.

    “Over the last 20 years, the field of thermoelectrics has come into being and blossomed, especially at JPL,” said Fleurial. “There’s a lot of great science happening in this area. We’re excited to explore the idea of taking these materials to space, and benefitting U.S. industry along the way.”

    JPL’s work to develop higher-efficiency thermoelectric materials is carried out in partnership with the U.S. Department of Energy (DOE), Teledyne Energy Systems and Aerojet Rocketdyne, and is funded by NASA’s Radioisotope Power System program, which is managed by NASA Glenn Research Center in Cleveland. The spaceflight hardware is produced by Teledyne Energy Systems and Aerojet Rocketdyne under a contract held by the DOE, which fuels, completes final assembly and owns the end item.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NASA JPL Campus

    Jet Propulsion Laboratory (JPL) is a federally funded research and development center and NASA field center located in the San Gabriel Valley area of Los Angeles County, California, United States. Although the facility has a Pasadena postal address, it is actually headquartered in the city of La Cañada Flintridge [1], on the northwest border of Pasadena. JPL is managed by the nearby California Institute of Technology (Caltech) for the National Aeronautics and Space Administration. The Laboratory’s primary function is the construction and operation of robotic planetary spacecraft, though it also conducts Earth-orbit and astronomy missions. It is also responsible for operating NASA’s Deep Space Network.

    Caltech Logo

    NASA image

  • richardmitnick 8:43 am on October 12, 2016
    Tags: Physics, Quasiparticles

    From Science Alert: “Physicists just witnessed quasiparticles forming for the first time ever” 


    Science Alert

    11 OCT 2016

    IQOQI/Harald Ritsch

    It’s a glorious moment for science.

    For the first time, scientists have observed the formation of quasiparticles – a strange phenomenon observed in certain solids – in real time, something that physicists have been struggling to do for decades.

    It’s not just a big deal for the physics world – it’s an achievement that could change the way we build ultra-fast electronics, and could lead to the development of quantum processors.

    But what is a quasiparticle? Rather than being a physical particle, it’s a concept used to describe some of the weird phenomena that happen in pretty fancy setups – specifically, many-body quantum systems, or solid-state materials.

    An example is an electron moving through a solid. As the electron travels, it generates polarisation in its environment because of its electrical charge. This ‘polarisation’ cloud follows the electron through the material, and together they can be described as a quasiparticle.

    “You could picture it as a skier on a powder day,” explained one of the researchers, Rudolf Grimm, from the University of Innsbruck in Austria. “The skier is surrounded by a cloud of snow crystals. Together they form a system that has different properties than the skier without the cloud.”

    Quasiparticles and their formation have been extensively described in theoretical models, but actually measuring and observing them in real time has been a real challenge. That’s because not only is the quasiparticle phenomenon happening on a tiny scale – it’s also incredibly short-lived.

    “These processes last only attoseconds, which makes a time-resolved observation of their formation extremely difficult,” said Grimm.

    To put that into perspective, 1 attosecond is one-quintillionth of a second, which means 1 attosecond is to 1 second what 1 second is to about 31.71 billion years – so, yeah, that’s pretty fast.
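    That comparison is easy to verify (standard unit conversion; nothing here comes from the article beyond the numbers quoted):

```python
# If 1 attosecond : 1 second = 1 second : X, then X = (1 s / 1e-18 s) = 1e18 seconds.
seconds = 1e18
years = seconds / (365 * 24 * 3600)        # convert seconds to years
print(f"{years / 1e9:.2f} billion years")  # ~31.71 billion years, matching the figure above
```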

    But the team managed to come up with a way to slow the process down a bit.

    Inside a vacuum chamber, they used laser trapping techniques to create an ultracold quantum gas made up of lithium atoms and a small sample of potassium atoms in the centre.

    They then used a magnetic field to tune interactions of the particles, creating a type of quasiparticle known as a Fermi polaron – which is basically potassium atoms embedded in a lithium cloud.

    The formation of those quasiparticles would have taken on the order of 100 attoseconds in a normal system, but thanks to the ultracold quantum gas, the team was able to slow it down, and witness it happening for the first time ever.

    “We simulated the same physical processes at much lower densities,” said Grimm. “Here, the formation time for polarons is a few microseconds.”

    The goal now is to figure out how to not only observe these quasiparticles, but actually measure them, so that we can find a way to use them to develop quantum processing systems that will bring us the super-fast electronics of the future.

    “We developed a new method for observing the ‘birth’ of a polaron virtually in real time,” said Grimm. “This may turn out to be a very interesting approach to better understand the quantum physical properties of ultrafast electronic devices.”

    The research has been published in Science.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

  • richardmitnick 8:09 am on October 12, 2016
    Tags: Physics

    From Symmetry: “Recruiting team geoneutrino” 

    Symmetry Mag

    Leah Crane

    Illustration by Sandbox Studio, Chicago with Corinne Mucha

    Physicists and geologists are forming a new partnership to study particles from inside the planet.

    The Earth is like a hybrid car.

    Deep under its surface, it has two major fuel tanks. One is powered by dissipating primordial energy left over from the planet’s formation. The other is powered by the heat that comes from radioactive decay.

    We have only a shaky understanding of these heat sources, says William McDonough, a geologist at the University of Maryland. “We don’t have a fuel gauge on either one of them. So we’re trying to unravel that.”

    One way to do it is to study geoneutrinos, a byproduct of the process that burns Earth’s fuel. Neutrinos rarely interact with other matter, so these particles can travel straight from within the Earth to its surface and beyond.

    Geoneutrinos hold clues as to how much radioactive material the Earth contains. Knowing that could lead to insights about how our planet formed and its modern-day dynamics. In addition, the heat from radioactive decay plays a key role in driving plate tectonics.

    The tectonic plates of the world were mapped in 1996, USGS.

    Understanding the composition of the planet and the motion of the plates could help geologists model seismic activity.

    To effectively study geoneutrinos, scientists need knowledge both of elementary particles and of the Earth itself. The problem, McDonough says, is that very few geologists understand particle physics, and very few particle physicists understand geology. That’s why physicists and geologists have begun coming together to build an interdisciplinary community.

    “There’s really a need for a beyond-superficial understanding of the physics for the geologists and likewise a nonsuperficial understanding of the Earth by the physicists,” McDonough says, “and the more that we talk to each other, the better off we are.”

    There are hurdles to overcome in order to get to that conversation, says Livia Ludhova, a neutrino physicist and geologist affiliated with Forschungzentrum Jülich and RWTH Aachen University in Germany. “I think the biggest challenge is to make a common dictionary and common understanding—to get a common language. At the basic level, there are questions on each side which can appear very naïve.”

    In July, McDonough and Gianpaolo Bellini, emeritus scientist of the Italian National Institute of Nuclear Physics and retired physics professor at the University of Milan, led a summer institute for geology and physics graduate students to bridge the divide.

    “In general, geology is more descriptive,” Bellini says. “Physics is more structured.”

    This can be especially troublesome when it comes to numerical results, since most geologists are not used to working with the defined errors that are so important in particle physics.

    At the summer institute, students began with a sort of remedial “preschool,” in which geologists were taught how to interpret physical uncertainty and the basics of elementary particles and physicists were taught about Earth’s interior. Once they gained basic knowledge of one another’s fields, the scientists could begin to work together.

    This is far from the first interdisciplinary community within science or even particle physics. Ludhova likens it to the field of radiology: There is one expert to take an X-ray and another to determine a plan of action once all the information is clear. Similarly, particle physicists know how to take the necessary measurements, and geologists know what kinds of questions they could answer about our planet.

    Right now, only two major experiments are looking for geoneutrinos: KamLAND at the Kamioka Observatory in Japan and Borexino at the Gran Sasso National Laboratory in Italy. Between the two of them, these observatories detect fewer than 20 geoneutrinos a year.

    KamLAND at the Kamioka Observatory in Japan

    INFN/Borexino Solar Neutrino detector, Gran Sasso, Italy

    Because of the limited results, geoneutrino physics is by necessity a small discipline: According to McDonough, there are only about 25 active neutrino researchers with a deep knowledge of both geology and physics.

    Over the next decade, though, several more neutrino detectors are anticipated, some of which will be much larger than KamLAND or Borexino. The Jiangmen Underground Neutrino Observatory (JUNO) in China, for example, should be ready in 2020.

    JUNO Neutrino detector China

    Whereas Borexino’s detector is made up of 300 tons of active material, and KamLAND’s contains 1000, JUNO’s will have 20,000 tons.

    The influx of data over the next decade will allow the community to emerge into the larger scientific scene, Bellini says. “There are some people who say ‘now this is a new era of science’—I think that is exaggerated. But I do think that we have opened a new chapter of science in which we use the methods of particle physics to study the Earth.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.

  • richardmitnick 1:29 pm on October 7, 2016
    Tags: Are The Beginning And End Of The Universe Connected?, Physics

    From Ethan Siegel: “Are The Beginning And End Of The Universe Connected?” 

    Oct 7, 2016
    Ethan Siegel

    The deepest views of the distant Universe show galaxies being pushed away by dark energy. Could this force have a connection to the inflationary phenomena that started everything in the first place? Image credit: NASA, ESA, R. Windhorst and H. Yan.

    The very earliest stages of the Universe as-we-know-it began with the hot Big Bang, where the expanding Universe was filled with high energy particles, antiparticles and radiation. But in order to set that up, we needed a time when the Universe was dominated by energy inherent to space itself, expanded at an exponential rate and eventually decayed, giving rise to a matter, antimatter and radiation-filled Universe. Today, 13.8 billion years after the end of inflation, the matter and radiation in the Universe have become so sparse, so low in density, that they’ve revealed a new component: dark energy. Dark energy appears to be energy inherent to space itself, and is causing the Universe to expand at an exponential rate. Although there are some differences between dark energy and inflation, there are some unique similarities, too. Could these two phenomena be related? And if so, does that mean the beginning and end of our Universe are connected?

    Fluctuations in spacetime itself at the quantum scale get stretched across the Universe during inflation, giving rise to imperfections in both density and gravitational waves. Image credit: E. Siegel, with images derived from ESA/Planck and the DoE/NASA/ NSF interagency task force on CMB research.

    It would seem very strange to us if there were two entirely different forces or mechanisms at play to cause the Universe to expand: one billions of years ago and one today. When it comes to the Universe, however, there’s a lot going on that appears very strange to us. First off, the Universe very, very surely is expanding. But it didn’t need a force of any type in order to do so. In fact, when you take a Universe like our own, a Universe that is:

    governed by Einstein’s General Relativity,
    filled with matter, radiation and any other “stuff” you like,
    and that’s roughly the same, on average, at all locations and in all directions,

    you wind up with a funny, uncomfortable conclusion. That conclusion was first reached by Einstein himself back in the first few years of relativity itself: that such a Universe is inherently unstable against gravitational collapse.

    A nearly uniform Universe, expanding over time and under the influence of gravity, will create a cosmic web of structure. Image credit: Western Washington University, via http://www.wwu.edu/skywise/a101_cosmologyglossary.html.

    In other words, unless you concocted some “magic fix” for the problem, your Universe was going to have to either expand or contract, with both solutions being possibilities. What it couldn’t do, unless you cooked up some new type of force, was remain static.

    Of course, the work of Edwin Hubble hadn’t yet come along. In addition to not knowing that the Universe was expanding, we didn’t even know whether those spiral shapes in the sky were objects within our own Milky Way or whether they were entire galaxies themselves. Because Einstein favored a static Universe at the time (like most), he made such an ad hoc fix to keep the Universe static: he introduced the idea of a cosmological constant.

    The Einstein field equations, with a cosmological constant included as the final term on the left-hand side.
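    For reference, the field equations the caption refers to, with the cosmological constant Λ as the final term on the left-hand side, are:

```latex
R_{\mu\nu} \;-\; \tfrac{1}{2} R\, g_{\mu\nu} \;+\; \Lambda\, g_{\mu\nu}
  \;=\; \frac{8\pi G}{c^{4}}\, T_{\mu\nu}
```

    The left-hand side is the space-and-time (curvature) side and the right-hand side is the matter-and-energy side — the two sides of the equation described in the next paragraph.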

    The central idea of Einstein’s relativity is that there are two sides to the equation: a matter-and-energy side, and a space-and-time side. It says that the presence of matter and energy determines the curvature and evolution of spacetime, and that the way spacetime curves and evolves determines the fate of every individual quantum of matter and energy within it.

    What the addition of a cosmological constant did was say, “there is this new type of energy, inherent to space itself, that causes the fabric of the Universe to expand at a constant rate.” So if you had the force of gravity due to all the matter-and-energy working to collapse the Universe, while you had this cosmological constant working to expand the Universe, you could wind up with a static Universe after all. All you would need was for those two rates to match, and to exactly cancel one another out.

    If the Universe were perfectly uniform, or if everything were perfectly distributed, no large-scale structure would ever form. But any slight imperfection leads to clumps and voids, as the Universe itself displays. Image credit: Adam Block/Mount Lemmon SkyCenter/University of Arizona, via http://skycenter.arizona.edu/gallery/Galaxies/NGC70.

    As it turned out, the Universe is expanding, and there didn’t need to be a cosmological constant there to counteract the force of gravity. Instead, there was an initial condition, that the Universe began expanding very rapidly, that counteracted the force of gravity from all the matter and energy. Instead of contracting, the Universe was expanding, and that expansion rate was slowing down.

    Now, there are two questions that are natural to ask — and in fact were natural to ask since this discovery in the 1920s — in the aftermath of this:

    What caused the Universe to begin expanding at this rapid rate early on?
    What will the fate of the Universe be? Will it expand forever, will it eventually reverse and recollapse, will it be on the border of these two, or something else?

    The different possible fates of the Universe. The actual, accelerating fate is shown at the right; the Big Bang itself offers no explanation for the origin of the Universe itself. Image credit: NASA and ESA, via http://www.spacetelescope.org/images/opo9919k/.

    The first question went unanswered for over half a century, although interestingly enough there was an initial proposal by Willem de Sitter almost immediately that it was a cosmological constant that caused this expansion to begin.

    Previously thought to only occur from a cosmological constant, Alan Guth’s revelation at the end of 1979 led to the birth of cosmic inflation as a way to “blow up” the Universe at its inception. Image credit: Alan Guth’s 1979 notebook, tweeted via @SLAClab, from https://twitter.com/SLAClab/status/445589255792766976.

    Finally, in the early 1980s, it was the theory of cosmological inflation that came about, proposing that there was an early phase of exponential expansion, where the Universe was dominated by something very much like a cosmological constant.

    Now, it couldn’t have been a true cosmological constant — also known as vacuum energy — because the Universe didn’t remain in that state forever. Instead, the Universe could have been in a false vacuum state, where it had some energy inherent to space itself that then decayed to a lower-energy state, resulting in matter and radiation coming out: the hot Big Bang!

    Large-scale structure would form differently in a Universe that came about from inflation and its predictions (L) than in a cosmic string-dominated network (R). Images credit: Andrey Kravtsov (cosmological simulation, L); B. Allen & E.P. Shellard (simulation in a cosmic string Universe, R), via http://www.ctc.cam.ac.uk/outreach/origins/cosmic_structures_four.php.

    There are a number of other predictions that came out of inflation, all but one of which has been confirmed, and hence we accept that this early phase in the Universe did exist.

    Yet, when we turn to the second question — about the Universe’s fate — we find something very strange. While we had expected that there would be a sort of race between the initial, rapid expansion and the force of gravitation acting on all the matter-and-energy in the Universe, what we found was that there was a new form of energy that was quite unexpected: something dubbed dark energy. And wouldn’t you know it? This dark energy, to the best of our knowledge, appears to take on the same form as a cosmological constant.

    The far distant fates of the Universe offer a number of possibilities, but if dark energy is truly a constant, as the data indicates, it will continue to follow the red curve. Image credit: NASA / GSFC.

    Now, these two types of exponential expansion, the early kind and the late kind, are very, very different in detail.

    The early Universe’s inflationary period lasted for an indeterminate amount of time — possibly as short as 10^-33 seconds, possibly as long as near-infinite — while today’s dark energy has been dominant for around six billion years.
    The early inflationary state was incredibly rapid, where the cosmological expansion rate was some 10^50 times what it is today. By contrast, today’s dark energy is responsible for about 70% of what the expansion rate is today.
    The early state must have coupled, somehow, to matter and radiation. At high enough energies, there must be some sort of “inflaton” particle, assuming quantum field theory is correct. The late-time dark energy has no known couplings at all.

    That said, there are some similarities, too.

    The four possible fates of the Universe, with only the last one matching our observations. Image credit: E. Siegel, from his book, Beyond The Galaxy.

    They both have the same (or indistinguishable) equations-of-state, meaning that the relationship between the Universe’s scale and time is identical for both.

    They both have identical relationships between the energy density and pressure they cause in general relativity.

    And they both cause the same type of expansion — exponential expansion — in the Universe.
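    Stated compactly (standard cosmology notation, added here for reference): both behave like a fluid with equation of state w = −1, for which the energy density stays constant as space expands, so a spatially flat universe dominated by such a component grows exponentially:

```latex
p = w\,\rho c^{2}, \quad w = -1
\;\;\Longrightarrow\;\;
\rho = \text{const}, \qquad a(t) \propto e^{H t},
\quad H = \sqrt{\tfrac{8\pi G \rho}{3}}
```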

    The “open funnel” portion of these illustrations represents exponential expansion, which occurs both at the beginning (during inflation) and at the end (when dark energy dominates). Image credit: C. Faucher-Giguère, A. Lidz, and L. Hernquist, Science 319, 5859 (47).

    But are they related? It’s very, very difficult to say. The reason, of course, is that we don’t understand either one very well at all! I like to imagine a 2-liter soda bottle, partway filled, when I think about inflation. I imagine a drop of oil floating on the top of the liquid inside. That high energy state is like the Universe during inflation.

    Then something happens to cause the liquid to drain out of the bottle. The oil sinks to the bottom, of course, in a low-energy state.

    But if that drop winds up not at the very bottom — not at zero, but at some finite, non-zero value (like the Higgs field when its symmetry breaks) — it could be responsible for dark energy. Models that tie these two fields together, the inflationary field and the dark energy field, are known generically as quintessence.

    It’s pretty easy to make a quintessence model that works. The problem is, it’s pretty easy to make two separate models — one for inflation and one for dark energy — that work, too. We have two new phenomena and they require the introduction of at least two new “free parameters” to make the theory work. You can tie them together or not, but in no way are these models distinguishable from each other.

    The models that have dark energy evolving too much (i.e., w ≠ -1 always) can be ruled out with data. Image credit: Pantazis, G. et al. Phys.Rev. D93 (2016) no.10, 103503.

    All we’ve been able to do, to date, is rule out certain classes of models where the early-time or late-time rates of expansion don’t agree with observation. But observations are also consistent with inflation being a thing on its own, and dark energy arising from a completely different source. I hate having to go through the full explanation of what we know, to have one phenomenon (inflation) occurring at an energy scale of around 10^15 GeV, to have another phenomenon (dark energy) at an energy scale of around 1 milli-eV, and then to have to say “we don’t know if they’re related,” but that’s the situation here.

    Unfortunately, even with all the proposed missions we have — James Webb, WFIRST, LISA and the ILC — we don’t anticipate this question being answered from the data anytime soon. Our best hope is for a theoretical breakthrough. And as someone who’s worked on this problem myself, I have no idea how we’ll get there.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

  • richardmitnick 1:08 pm on October 7, 2016 Permalink | Reply
    Tags: , Physics, ,   

    From Symmetry: “Hunting the nearly un-huntable” 

    Symmetry Mag

    Andre Salles

    Artwork by Sandbox Studio, Chicago with Corinne Mucha

    The MINOS and Daya Bay experiments weigh in on the search for sterile neutrinos.

    In the 1990s, the Liquid Scintillator Neutrino Detector (LSND) experiment at Los Alamos National Laboratory saw intriguing hints of an undiscovered type of particle, one that (as of yet) cannot be detected. In 2007, the MiniBooNE experiment at the US Department of Energy’s Fermi National Accelerator Laboratory followed up and found a similar anomaly.

    LSND Experiment University of California


    Today scientists on two more neutrino experiments—the MINOS experiment at Fermilab and the Daya Bay experiment in China—entered the discussion, presenting results that limit the places where these particles, called sterile neutrinos, might be hiding.


    Daya Bay, approximately 52 kilometers northeast of Hong Kong and 45 kilometers east of Shenzhen, China

    “This combined result was a two-year effort between our collaborations,” says MINOS scientist Justin Evans of the University of Manchester. “Together we’ve set what we believe is a very strong limit on a very intriguing possibility.”

    In three separate papers—two published individually by MINOS and Daya Bay and one jointly, all in Physical Review Letters—scientists on the two experiments detail the results of their hunt for sterile neutrinos.

    Both experiments are designed to see evidence of neutrinos changing, or oscillating, from one type to another. Scientists have so far observed three types of neutrinos, and have detected them changing between those three types, a discovery that was awarded the 2015 Nobel Prize in physics.

    What the LSND and MiniBooNE experiments saw—an excess of electron neutrino-like signals—could be explained by a two-step change: muon neutrinos morphing into sterile neutrinos, then into electron neutrinos. MINOS and Daya Bay measured the rate of these steps using different techniques.

    MINOS, which is fed by Fermilab’s neutrino beam—the most powerful in the world—looks for the disappearance of muon neutrinos. MINOS can also calculate how often muon neutrinos should transform into the other two known types and can infer from that how often they could be changing into a fourth type that can’t be observed by the MINOS detector.

    Daya Bay performed a similar observation with electron anti-neutrinos (assumed, for the purposes of this study, to behave in the same way as electron neutrinos).
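
    In the simplest “3+1” framework (a schematic of the logic, not the collaborations’ full analysis), the short-baseline appearance probability that LSND would be sensitive to is approximately

    \[
    P(\nu_\mu \to \nu_e) \approx
    4\,|U_{\mu 4}|^{2}\,|U_{e 4}|^{2}\,
    \sin^{2}\!\left(\frac{\Delta m^{2}_{41} L}{4E}\right),
    \]

    while muon-neutrino disappearance (MINOS) constrains |U_{μ4}|² and electron-antineutrino disappearance (Daya Bay) constrains |U_{e4}|². Bounding each factor separately bounds their product, which is why combining the two experiments squeezes the parameter space available to the LSND signal.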

    The combination of the two experiments’ data (and calculations based thereon) cannot account for the apparent excess of neutrino-like signals observed by LSND. That along with a reanalysis of results from Bugey, an older experiment in France, leaves only a very small region where sterile neutrinos related to the LSND anomaly could be hiding, according to scientists on both projects.

    “There’s a very small parameter space left that the LSND signal could correspond to,” says Alex Sousa of the University of Cincinnati, one of the MINOS scientists who worked on this result. “We can’t say that these light sterile neutrinos don’t exist, but the space where we might find them oscillating into the neutrinos we know is getting narrower.”

    Both Daya Bay and MINOS’ successor experiment, MINOS+, have already taken more data than was used in the analysis here. MINOS+ has completely analyzed only half of its collected data to date, and Daya Bay plans to quadruple its current data set. The potential reach of the final joint effort, says Kam-Biu Luk, co-spokesperson of the Daya Bay experiment, “could be pretty definitive.”

    The IceCube collaboration, which measures atmospheric neutrinos with a detector deep under the Antarctic ice, recently conducted a similar search for sterile neutrinos and also came up empty.

    U Wisconsin ICECUBE neutrino detector at the South Pole
    IceCube neutrino detector interior

    All of this might seem like bad news for fans of sterile neutrinos, but according to theorist André de Gouvea of Northwestern University, the hypothesis is still alive.

    Sterile neutrinos are “still the best new physics explanation for the LSND anomaly that we can probe, even though that explanation doesn’t work very well,” de Gouvea says. “The important thing to remember is that these results from MINOS, Daya Bay, IceCube and others don’t rule out the concept of sterile neutrinos, as they may be found elsewhere.”

    Theorists have predicted the existence of sterile neutrinos based on anomalous results from several different experiments. The results from MINOS and Daya Bay address the sterile neutrinos predicted based on the LSND and MiniBooNE anomalies. Theorists predict other types of sterile neutrinos to explain anomalies in reactor experiments and in experiments using the chemical gallium. Much more massive types of sterile neutrinos would help explain why the neutrinos we know are so very light and how the universe came to be filled with more matter than antimatter.
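
    That last idea is the “seesaw” mechanism. Schematically (a textbook relation, not a result from these papers),

    \[
    m_\nu \sim \frac{m_D^{2}}{M_R},
    \]

    so if the Dirac mass m_D sits near familiar particle-physics scales and the sterile (right-handed) neutrino mass M_R is enormous, the ordinary neutrino masses m_ν come out tiny: the heavier the sterile state, the lighter the neutrinos we observe.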

    Searches for sterile neutrinos have focused on the LSND neutrino excess, de Gouvea says, because it provides a place to look. If that particular anomaly is ruled out as a key to finding these nigh-undetectable particles, then they could be hiding almost anywhere, leaving no clues. “Even if sterile neutrinos do not explain the LSND anomaly, their existence is still a logical possibility, and looking for them is always interesting,” de Gouvea says.

    Scientists around the world are preparing to search for sterile neutrinos in different ways.

    Fermilab is preparing a three-detector suite of short-baseline experiments dedicated to nailing down the cause of both the LSND anomaly and the excess of electron neutrino-like events seen in the MiniBooNE experiment. These liquid-argon detectors will search for the appearance of electron neutrinos, a method de Gouvea says is a more direct way of addressing the LSND anomaly. One of those detectors, MicroBooNE, is specifically chasing down the MiniBooNE excess.


    Scientists at Oak Ridge National Laboratory are preparing the Precision Oscillation and Spectrum Experiment (PROSPECT), which will search for sterile neutrinos generated by a nuclear reactor.

    Yale PROSPECT Neutrino experiment

    CERN’s SHiP experiment, which stands for Search for Hidden Particles, is expected to look for sterile neutrinos with much higher predicted masses.

    CERN SHiP Experiment

    Obtaining a definitive answer to the sterile neutrino question is important, Evans says, because the existence (or non-existence) of these particles might impact how scientists interpret the data collected in other neutrino experiments, including Japan’s T2K, the United States’ NOvA, the forthcoming DUNE, and other future projects.

    T2K map

    FNAL/NOvA experiment


    DUNE in particular will be able to look for sterile neutrinos across a broad spectrum, and evidence of a fourth kind of neutrino would enhance its already rich scientific program.

    “It’s absolutely vital that we get this question resolved,” Evans says. “Whichever way it goes, it will be a crucial part of neutrino experiments in the future.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.

  • richardmitnick 11:52 am on October 7, 2016 Permalink | Reply
    Tags: , , Correlation between galaxy rotation and visible matter puzzles astronomers, , Physics,   

    From physicsworld: “Correlation between galaxy rotation and visible matter puzzles astronomers” 


    Oct 7, 2016
    Keith Cooper

    Strange correlation: why is galaxy rotation defined by visible mass? No image credit.

    A new study of the rotational velocities of stars in galaxies has revealed a strong correlation between the motion of the stars and the amount of visible mass in the galaxies. This result comes as a surprise because it is not predicted by conventional models of dark matter.

    Stars on the outskirts of rotating galaxies orbit just as fast as those nearer the centre. This appears to be in violation of Newton’s laws, which predict that these outer stars would be flung away from their galaxies. The extra gravitational glue provided by dark matter is the conventional explanation for why these galaxies stay together. Today, our most cherished models of galaxy formation and cosmology rely entirely on the presence of dark matter, even though the substance has never been detected directly.
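
    To see why flat rotation curves are surprising (a textbook estimate, not from the new paper): for a star orbiting at radius r outside most of a galaxy’s visible mass M, Newtonian gravity predicts

    \[
    \frac{v^{2}}{r} = \frac{G M}{r^{2}}
    \quad\Rightarrow\quad
    v(r) = \sqrt{\frac{G M}{r}} \propto r^{-1/2},
    \]

    a circular speed that falls off with distance, yet the measured speeds stay roughly constant far out into the disk.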

    These new findings, from Stacy McGaugh and Federico Lelli of Case Western Reserve University, and James Schombert of the University of Oregon, threaten to shake things up. They measured the gravitational acceleration of stars in 153 galaxies with varying sizes, rotations and brightness, and found that the measured accelerations can be expressed as a relatively simple function of the visible matter within the galaxies. Such a correlation does not emerge from conventional dark-matter models.
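
    The “relatively simple function” has roughly the following one-parameter form (a sketch in Python; the functional form and the acceleration scale follow the team’s reported fitting relation as best it can be reproduced here, and the code is illustrative, not the authors’ own):

    import numpy as np

    G_DAGGER = 1.2e-10  # m/s^2, approximate value of the single fitted acceleration scale

    def g_observed(g_baryonic):
        """Observed radial acceleration as a function of the acceleration
        expected from the visible (baryonic) matter alone."""
        g_bar = np.asarray(g_baryonic, dtype=float)
        return g_bar / (1.0 - np.exp(-np.sqrt(g_bar / G_DAGGER)))

    # Inner regions (high acceleration): g_obs is close to g_bar, i.e. Newtonian.
    # Outskirts (low acceleration): g_obs tends to sqrt(g_bar * G_DAGGER),
    # more acceleration than the visible matter alone would predict.
    print(g_observed([1e-9, 1e-12]))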

    Mass and light

    This correlation relies strongly on the calculation of the mass-to-light ratio of the galaxies, from which the distribution of their visible mass and gravity is then determined. McGaugh attempted this measurement in 2002 using visible light data. However, these results were skewed by hot, massive stars that are millions of times more luminous than the Sun. This latest study is based on near-infrared data from the Spitzer Space Telescope.

    NASA/Spitzer Telescope

    Since near-infrared light is emitted by the more common low-mass stars and red giants, it is a more accurate tracer for the overall stellar mass of a galaxy. Meanwhile, the mass of neutral hydrogen gas in the galaxies was provided by 21 cm radio-wavelength observations.
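
    The bookkeeping behind the mass estimate is straightforward (schematic; the numerical value given here is a commonly adopted approximation, not a number quoted from the paper): the stellar mass is computed as

    \[
    M_{*} = \Upsilon_{*} \times L_{[3.6\,\mu\mathrm{m}]},
    \]

    where the near-infrared mass-to-light ratio Υ* is close to constant from galaxy to galaxy (of order 0.5 in solar units), and the gas mass inferred from the 21 cm observations is added to obtain the total visible mass.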

    McGaugh told physicsworld.com that the team was “amazed by what we saw when Federico Lelli plotted the data.”

    The result is confounding because galaxies are supposedly ensconced within dense haloes of dark matter.

    Spherical halo of dark matter. cerncourier.com

    Furthermore, the team found a systematic deviation from Newtonian predictions, implying that some other force is at work beyond simple Newtonian gravity.

    “It’s an impressive demonstration of something, but I don’t know what that something is,” admits James Binney, a theoretical physicist at the University of Oxford, who was not involved in the study.

    This systematic deviation from Newtonian mechanics was predicted more than 30 years ago by an alternate theory of gravity known as modified Newtonian dynamics (MOND). According to MOND’s inventor, Mordehai Milgrom of the Weizmann Institute in Israel, dark matter does not exist, and instead its effects can be explained by modifying how Newton’s laws of gravity operate over large distances.
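
    Concretely, in MOND’s low-acceleration regime (the standard formulation, not anything specific to this paper), the effective acceleration a is boosted relative to the Newtonian value g_N:

    \[
    a \ll a_0:\qquad a \simeq \sqrt{g_N\,a_0}
    \quad\Rightarrow\quad
    \frac{v^{2}}{r} = \sqrt{\frac{G M_b}{r^{2}}\,a_0}
    \quad\Rightarrow\quad
    v^{4} = G M_b\,a_0,
    \]

    with a₀ ≈ 1.2 × 10^-10 m/s². This gives flat rotation curves whose amplitude is set entirely by the visible (baryonic) mass M_b, which is exactly the kind of tight correlation with visible matter that the new data show.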

    “This was predicted in the very first MOND paper of 1983,” says Milgrom. “The MOND prediction is exactly what McGaugh has found, to a tee.”

    However, Milgrom is unhappy that McGaugh hasn’t outright attributed his results to MOND, and suggests that there’s nothing intrinsically new in this latest study. “The data here are much better, which is very important, but this is really the only conceptual novelty in the paper,” says Milgrom.

    No tweaking required

    McGaugh disagrees with Milgrom’s assessment, saying that previous results had incorporated assumptions that tweak the data to get the desired result for MOND, whereas this time the mass-to-light ratio is accurate enough that no tweaking is required.

    Furthermore, McGaugh says he is “trying to be open-minded”, by pointing out that exotic forms of dark matter like superfluid dark matter or even complex galactic dynamics could be consistent with the data. However, he also feels that there is implicit bias against MOND among members of the astronomical community.

    “I have experienced time and again people dismissing the data because they think MOND is wrong, so I am very consciously drawing a red line between the theory and the data.”

    Much of our current understanding of cosmology relies on cold dark matter, so could the result threaten our models of galaxy formation and large-scale structure in the universe? McGaugh thinks it could, but not everyone agrees.

    Way too complex

    Binney points out that dark-matter simulations struggle on the scale of individual galaxies because “the physics of galaxy formation is way too complex to compute properly.” The implication is that it is currently impossible to say whether dark matter can explain these results or not. “It’s unfortunately beyond the powers of humankind at the moment to know,” he says.

    That leaves the battle between dark matter and alternate models of gravitation at an impasse. However, Binney points out that dark matter has an advantage because it can also be studied through observations of galaxy mergers and collisions between galaxy clusters. Also, there are many experiments that are currently searching for evidence of dark-matter particles.

    McGaugh’s next step is to extend the study to elliptical and dwarf spheroidal galaxies, as well as to galaxies at greater distances from the Milky Way.

    The research is to be published in Physical Review Letters and a preprint is available on arXiv.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

  • richardmitnick 11:12 am on October 7, 2016 Permalink | Reply
    Tags: , Brookhaven Lab to Play Major Role in Two DOE Exascale Computing Application Projects, Computational Science Initiative (CSI), Exascale Lattice Gauge Theory Opportunities and Requirements for Nuclear and High Energy Physics, Materials and Biomolecular Challenges in the Exascale Era, NWChemEx: Tackling Chemical, Physics   

    From BNL: “Brookhaven Lab to Play Major Role in Two DOE Exascale Computing Application Projects” 

    Brookhaven Lab

    October 5, 2016
    Ariana Tantillo
    (631) 344-2347
    Peter Genzer
    (631) 344-3174

    BNL NSLS-II Building
    BNL NSLS-II Interior

    Scientists at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory will play major roles in two of the 15 fully funded application development proposals recently selected by the DOE’s Exascale Computing Project (ECP) in its first-round funding of $39.8 million. Seven other proposals received seed funding.

    The ECP’s mission is to maximize the benefits of high-performance computing for U.S. economic competitiveness, national security, and scientific discovery. Specifically, the development efforts will focus on advanced modeling and simulation applications for next-generation supercomputers to enable advances in climate and environmental science, precision medicine, cosmology, materials science, and other fields. Led by teams from national labs, research organizations, and universities, these efforts will help guide DOE’s development of a U.S. exascale computing ecosystem. Exascale computing refers to systems that can perform at least a billion-billion calculations per second, roughly 50 to 100 times faster than the nation’s most powerful supercomputers in use today.
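
    For scale (a restatement of the numbers above with round figures; the comparison point is approximate): a billion-billion is 10^9 × 10^9 = 10^18, so

    \[
    1\ \text{exaFLOPS} = 10^{18}\ \text{operations per second}
    \approx (50\text{--}100) \times (1\text{--}2)\times 10^{16}\ \text{operations per second},
    \]

    where 1–2 × 10^16 operations per second (10–20 petaFLOPS) was roughly the scale of the most powerful U.S. systems in operation in 2016.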

    At Brookhaven Lab, the Computational Science Initiative (CSI) is focused on developing extreme-scale numerical modeling codes that enable new scientific discoveries in collaboration with Brookhaven’s state-of-the-art experimental facilities, including the National Synchrotron Light Source II, the Center for Functional Nanomaterials, and the Relativistic Heavy Ion Collider—all DOE Office of Science User Facilities.

    BNL RHIC Campus

    This initiative brings together computer scientists, applied mathematicians, and computational scientists to develop and extend modeling capabilities in areas such as quantum chromodynamics, materials science, chemistry, biology, and climate science.

    “Founded only in December 2015, CSI has for the first time brought together leading experts across the lab to address the challenges of exascale computing. The two successful DOE Exascale Computing Project proposals demonstrate the strength of this interdisciplinary team,” said CSI Director Kerstin Kleese van Dam.

    Computational physics

    One of the two projects Brookhaven Lab will contribute to is called “Exascale Lattice Gauge Theory Opportunities and Requirements for Nuclear and High Energy Physics,” led by Fermi National Accelerator Laboratory. Collaborators on the project are DOE’s Jefferson Lab, Boston University, Columbia University, University of Utah, Indiana University, University of Illinois Urbana-Champaign, Stony Brook University, and College of William & Mary.

    The team at Brookhaven will develop algorithms, language environments, and application codes that will enable scientists to perform lattice quantum chromodynamics (QCD) calculations on next-generation supercomputers. These calculations, along with experimental data produced by particle collisions at Brookhaven’s Relativistic Heavy Ion Collider and other facilities, help physicists understand the fundamental interactions between elementary particles called quarks and gluons that represent 99% of the mass in the visible universe.
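
    A rough sense of why these calculations need exascale machines (a back-of-envelope sketch in Python; the lattice dimensions are hypothetical, not the project’s actual parameters): lattice QCD replaces continuous spacetime with a four-dimensional grid, and the number of degrees of freedom grows quickly with the grid size.

    # Back-of-envelope count of degrees of freedom for a hypothetical lattice.
    NX = NY = NZ = 128   # spatial sites per dimension (illustrative)
    NT = 256             # temporal sites (illustrative)

    sites = NX * NY * NZ * NT
    # One SU(3) gauge link per site and direction: 4 directions x (3x3 complex matrix)
    # = 4 * 9 * 2 = 72 real numbers per site, stored in double precision (8 bytes each).
    gauge_bytes = sites * 72 * 8

    print(f"lattice sites:      {sites:,}")
    print(f"gauge-field memory: {gauge_bytes / 1e9:.0f} GB in double precision")
    # Quark propagators, extra field copies, and the Monte Carlo sampling needed for
    # physical results multiply this footprint many times over.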

    Brookhaven physicist Chulwoo Jung and Brookhaven collaborator Peter Boyle of the University of Edinburgh will apply their expertise in QCD and lead the efforts to design new algorithms and software frameworks that are crucial for the success of lattice QCD on exascale machines. Barbara Chapman, head of Brookhaven’s Computer Science and Mathematics Group and a professor in the Computer Science Department at Stony Brook University, and Brookhaven computational scientist Meifeng Lin will tackle the challenge of developing high-performance programming models that will enable scientists to create software with portable performance across different exascale architectures.

    Computational chemistry

    Scientists used x-rays at Brookhaven Lab’s National Synchrotron Light Source to determine the structure of the proton-regulated calcium channel (ribbons) that is shown above embedded in a lipid bilayer (spheres). This system will be the focus of one of the science challenges of the NWChemEx exascale computing project. The members of the project team will use the computational chemistry code they are developing—called NWChemEx—to help them understand what mechanisms underlie proton transfer and how to control calcium leakage for improved stress resistance in plants.

    The other project that Brookhaven will contribute to, “NWChemEx: Tackling Chemical, Materials and Biomolecular Challenges in the Exascale Era,” will improve the scalability, performance, extensibility, and portability of the popular computational chemistry code NWChem to take full advantage of exascale computing technologies. Robert Harrison, chief scientist of CSI and director of Stony Brook University’s Institute for Advanced Computational Science, will serve as chief architect, working with project director Thom Dunning of Pacific Northwest National Laboratory (PNNL) and deputy project director Theresa Windus of Ames National Laboratory to oversee a team of computational chemists, computer scientists, and applied mathematicians. Argonne, Lawrence Berkeley, and Oak Ridge national labs and Virginia Tech are partners on the project.

    The team will work to redesign the architecture of NWChem so that it is compatible with the pre-exascale and exascale computers to be deployed at the DOE’s Leadership Computing Facilities and the National Energy Research Scientific Computing Center. This effort will be guided by the requirements of scientific challenges in two application areas related to biomass-based energy production: developing energy crops that are resilient to adverse environmental conditions such as drought and salinity (led by Brookhaven structural biologist Qun Liu) and designing catalytic processes for sustainable biomass-to-fuel conversion (led by PNNL scientists).

    Hubertus van Dam, a computational chemist at Brookhaven, will lead the testing and assessment efforts, which are designed to ensure that the project outcomes optimize societal impact. To achieve this goal, the team’s science challenge domain experts will identify requirements—for example, the ability to build structural models from hundreds of thousands of atoms—that will be translated into computational problems of increasing complexity. As the team develops NWChemEx, it will assess the code’s ability to solve these problems.

    A complete list of the 22 selected projects can be found in the press release issued by DOE.

    The ECP is a collaborative effort of two DOE organizations—the Office of Science and the National Nuclear Security Administration. As part of President Obama’s National Strategic Computing Initiative, ECP was established to develop a capable exascale ecosystem, encompassing applications, system software, hardware technologies and architectures, and workforce development to meet the scientific and national security mission needs of DOE in the mid-2020s timeframe.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition
    BNL Campus

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.
