
  • richardmitnick 4:50 pm on December 1, 2015 Permalink | Reply

    From LBL: “Berkeley Lab Opens State-of-the-Art Facility for Computational Science” 

    Berkeley Logo

    Berkeley Lab

    November 12, 2015 [This just became available]
    Jon Weiner

    A new center for advancing computational science and networking at research institutions and universities across the country opened today at the Department of Energy’s (DOE) Lawrence Berkeley National Laboratory (Berkeley Lab).

    Berkeley Lab’s Shyh Wang Hall

    Named Shyh Wang Hall, the facility will house the National Energy Research Scientific Computing Center, or NERSC, one of the world’s leading supercomputing centers for open science, which serves nearly 6,000 researchers in the U.S. and abroad. Wang Hall will also be the center of operations for DOE’s Energy Sciences Network, or ESnet, the fastest network dedicated to science, which connects tens of thousands of scientists as they collaborate on solving some of the world’s biggest scientific challenges.

    Complementing NERSC and ESnet in the facility will be research programs in applied mathematics and computer science, which develop new methods for advancing scientific discovery. Researchers from UC Berkeley will also share space in Wang Hall as they collaborate with Berkeley Lab staff on computer science programs.

    The ceremonial “connection” marking the opening of Shyh Wang Hall.

    The 149,000-square-foot facility, built on a hillside overlooking the UC Berkeley campus and San Francisco Bay, will house one of the most energy-efficient computing centers anywhere, tapping into the region’s mild climate to cool the supercomputers at the National Energy Research Scientific Computing Center (NERSC) and eliminating the need for mechanical cooling.

    “With over 5,000 computational users each year, Berkeley Lab leads in providing scientific computing to the national energy and science user community, and the dedication of Wang Hall for the Computing program at Berkeley Lab will allow this community to continue to flourish,” said DOE Under Secretary for Science and Energy Lynn Orr.

    Modern science increasingly relies on high performance computing to create models and simulate problems that are otherwise too big, too small, too fast, too slow or too expensive to study. Supercomputers are also used to analyze growing mountains of data generated by experiments at specialized facilities. High speed networks are needed to move the scientific data, as well as allow distributed teams to share and analyze the same datasets.

    Shyh Wang

    Wang Hall is named in honor of Shyh Wang, a professor at UC Berkeley for 34 years who died in 1992. Well-known for his research in semiconductors, magnetic resonances and semiconductor lasers, which laid the foundation for optoelectronics, he supervised a number of students who are now well-known in their own right, and authored two graduate-level textbooks, “Solid State Electronics” and “Fundamentals of Semiconductor Theory and Device Physics.” Dila Wang, Shyh Wang’s widow, was the founding benefactor of the Berkeley Lab Foundation.

    Solid state electronics, semiconductors and optical networks are at the core of the supercomputers at NERSC—which will be located on the second level of Wang Hall—and the networking routers and switches supporting the Energy Sciences Network (ESnet), both of which are managed by Berkeley Lab from Wang Hall. The Computational Research Division (CRD), which develops advanced mathematics and computing methods for research, will also have a presence in the building.

    NERSC’s Cray Cori supercomputer’s graphic panels being installed at Wang Hall.

    “Berkeley Lab is the most open, sharing, networked, and connected National Lab, with over 10,000 visiting scientists using our facilities and leveraging our expertise each year, plus about 1,000 UC graduate students and postdocs actively involved in the Lab’s world-leading research,” said Berkeley Lab Director Paul Alivisatos. “Wang Hall will allow us to serve more scientists in the future, expanding this unique role we play in the national innovation ecosystem. The computational power housed in Wang Hall will be used to advance research that helps us better understand ourselves, our planet, and our universe. When you couple the combined experience and expertise of our staff with leading-edge systems, you unlock amazing potential for solving the biggest scientific challenges.”

    The $143 million structure financed by the University of California provides an open, collaborative environment bringing together nearly 300 staff members from three lab divisions and colleagues from UC Berkeley to encourage new ideas and new approaches to solving some of the nation’s biggest scientific challenges.

    UC President Janet Napolitano at the Shyh Wang Hall opening.

    “All of our University of California campuses rely on high performance computing for their scientific research,” said UC President Janet Napolitano. “The collaboration between UC Berkeley and Berkeley Lab to make this building happen will go a long ways towards advancing our knowledge of the world around us.”

    The building features unique, large, open windows on the lowest level, facing west toward the Pacific Ocean, which will draw in natural air conditioning for the computing systems. Heat captured from those systems will in turn be used to heat the building. The building will house two leading-edge Cray supercomputers – Edison and Cori [pictured above]– which operate around the clock 52 weeks a year to keep up with the computing demands of users.

    Edison supercomputer

    The disassembly of our Edison supercomputer has begun at NERSC. Edison is relocating to Berkeley from Oakland and into our all-new Shyh Wang Hall.

    Wang Hall will be occupied by Berkeley Lab’s Computing Sciences organization, which comprises three divisions:

    NERSC, the DOE Office of Science’s leading supercomputing center for open science. NERSC supports nearly 6,000 researchers at national laboratories and universities across the country. NERSC’s flagship computer is Edison, a Cray XC30 system capable of performing more than two quadrillion calculations per second. The first phase of Cori, a new Cray XC40 supercomputer designed for data-intensive science, has already been installed in Wang Hall.

    ESnet, which links 40 DOE sites across the country and scientists at universities and other research institutions via a 100 gigabit-per-second backbone network. ESnet also connects researchers in the U.S. and Europe over connections with a combined capacity of 340 Gbps. To support the transition of NERSC from its 15-year home in downtown Oakland to Berkeley Lab, NERSC and ESnet have developed and deployed a 400 Gbps link for moving massive datasets. This is the first-ever 400 Gbps production network deployed by a research and education network.
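A rough sense of what those link capacities mean for moving datasets (idealized arithmetic using the article's figures; protocol overhead and real-world throughput limits are ignored):

```python
# Back-of-envelope transfer times over the network links described above.
# The 100/340/400 Gbps figures come from the article; everything is ideal.

def transfer_hours(dataset_bytes, link_gbps):
    """Hours to move a dataset over an ideal link of the given capacity."""
    bits = dataset_bytes * 8
    seconds = bits / (link_gbps * 1e9)
    return seconds / 3600.0

petabyte = 1e15  # bytes
# Moving 1 PB over the 400 Gbps relocation link vs. the 100 Gbps backbone:
print(round(transfer_hours(petabyte, 400), 1))  # 5.6 (hours)
print(round(transfer_hours(petabyte, 100), 1))  # 22.2 (hours)
```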

    The Computational Research Division, the center for one of DOE’s strongest research programs in applied mathematics and computer science, where more efficient computer architectures are developed alongside more effective algorithms and applications that help scientists make the most effective use of supercomputers and networks to tackle problems in energy, the environment and basic science.

    About Berkeley Lab Computing Sciences
    The Berkeley Lab Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy’s research missions. ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities. The Computational Research Division conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal

  • richardmitnick 4:12 pm on December 1, 2015 Permalink | Reply
    Tags: AI

    From Nature: “Artificial intelligence called in to tackle LHC data deluge” 

    Nature Mag

    01 December 2015
    Davide Castelvecchi

    Particle collisions at the Large Hadron Collider produce huge amounts of data, which algorithms are well placed to process.

    The next generation of particle-collider experiments will feature some of the world’s most advanced thinking machines, if links now being forged between particle physicists and artificial intelligence (AI) researchers take off. Such machines could make discoveries with little human input — a prospect that makes some physicists queasy.

    Driven by an eagerness to make discoveries and the knowledge that they will be hit with unmanageable volumes of data in ten years’ time, physicists who work on the Large Hadron Collider (LHC), near Geneva, Switzerland, are enlisting the help of AI experts.

    CERN LHC Map
    CERN LHC Grand Tunnel
    CERN LHC particles
    LHC at CERN

    On 9–13 November, leading lights from both communities attended a workshop — the first of its kind — at which they discussed how advanced AI techniques could speed discoveries at the LHC. Particle physicists have “realized that they cannot do it alone”, says Cécile Germain, a computer scientist at the University of Paris South in Orsay, who spoke at the workshop at CERN, the particle-physics lab that hosts the LHC.

    Computer scientists are responding in droves. Last year, Germain helped to organize a competition to write programs that could ‘discover’ traces of the Higgs boson in a set of simulated data; it attracted submissions from more than 1,700 teams.

    CERN ATLAS Higgs Event
    Higgs Event

    Particle physics is already no stranger to AI. In particular, when ATLAS and CMS, the LHC’s two largest experiments, discovered the Higgs boson in 2012, they did so in part using machine learning — a form of AI that ‘trains’ algorithms to recognize patterns in data.


    CERN CMS Detector

    The algorithms were primed using simulations of the debris from particle collisions, and learned to spot the patterns produced by the decay of rare Higgs particles among millions of more mundane events. They were then set to work on the real thing.
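The train-on-simulation, apply-to-data workflow described above can be sketched with a toy nearest-centroid classifier. Every number and feature name below is invented for illustration; the LHC experiments use far more sophisticated multivariate techniques:

```python
import math

# Toy version of the workflow above: "train" on simulated signal/background
# events, then apply the trained model to unlabeled (real) events.

def centroid(events):
    """Mean feature vector of a list of events."""
    dims = len(events[0])
    return [sum(e[i] for e in events) / len(events) for i in range(dims)]

def classify(event, sig_centroid, bkg_centroid):
    """Label an event by whichever simulated class it sits closer to."""
    d_sig = math.dist(event, sig_centroid)
    d_bkg = math.dist(event, bkg_centroid)
    return "signal" if d_sig < d_bkg else "background"

# Simulated training events: (invariant mass, transverse momentum), made up.
sim_signal = [(125.0, 40.0), (124.5, 42.0), (125.5, 38.0)]
sim_background = [(90.0, 20.0), (160.0, 25.0), (110.0, 15.0)]

sig_c = centroid(sim_signal)
bkg_c = centroid(sim_background)

# "Real" unlabeled events are then classified with the trained model:
print(classify((125.2, 39.0), sig_c, bkg_c))  # signal
print(classify((95.0, 18.0), sig_c, bkg_c))   # background
```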

    But in the near future, the experiments will need to get smarter at collecting their data, not just processing it. CMS and ATLAS each currently produces hundreds of millions of collisions per second, and uses quick and dirty criteria to ignore all but 1 in 1,000 events. Upgrades scheduled for 2025 mean that the number of collisions will grow 20-fold, and that the detectors will have to use more sophisticated methods to choose what they keep, says CMS physicist María Spiropulu of the California Institute of Technology in Pasadena, who helped to organize the CERN workshop. “We’re going into the unknown,” she says.
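The scale of the triggering problem follows from simple rate arithmetic. The "hundreds of millions" figure is taken here as an assumed 600 million collisions per second; the 1-in-1,000 keep rate and 20-fold growth are from the article:

```python
# Event-rate arithmetic for the trigger numbers quoted above. Illustrative
# only: 600 million collisions/s is an assumed stand-in for "hundreds of
# millions"; the keep rate and 20-fold upgrade factor are from the text.

collisions_per_sec = 600e6
keep_one_in = 1000          # quick-and-dirty trigger criteria

kept_now = collisions_per_sec / keep_one_in
kept_after_upgrade = collisions_per_sec * 20 / keep_one_in

print(f"{kept_now:.0f} events/s kept today")
print(f"{kept_after_upgrade:.0f} events/s if the same cut survived the upgrade")
```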

    Inspiration could come from another LHC experiment, LHCb, which is dedicated to studying subtle asymmetries between particles and their antimatter counterparts.

    CERN LHCb New II

    In preparation for the second, higher-energy run of the LHC, which began in April, the LHCb team programmed its detector to use machine learning to decide which data to keep.

    LHCb is sensitive to tiny variations in temperature and pressure, so which data are interesting at any one time changes throughout the experiment — something that machine learning can adapt to in real time. “No one has done this before,” says Vladimir Gligorov, an LHCb physicist at CERN who led the AI project.

    Particle-physics experiments usually take months to recalibrate after an upgrade, says Gligorov. But within two weeks of the energy upgrade, the detector had ‘rediscovered’ a particle called the J/ψ meson — first found in 1974 by two separate US experiments, and later deemed worthy of a Nobel prize.

    In the coming years, CMS and ATLAS are likely to follow in LHCb’s footsteps, say Spiropulu and others, and will make the detector algorithms do more work in real time. “That will revolutionize how we do data analysis,” says Spiropulu.

    An increased reliance on AI decision-making will present new challenges. Unlike LHCb, which focuses mostly on finding known particles so they can be studied in detail, ATLAS and CMS are designed to discover new particles. The idea of throwing away data that could in principle contain huge discoveries, using criteria arrived at by algorithms in a non-transparent way, causes anxiety for many physicists, says Germain. Researchers will want to understand how the algorithms work and to ensure they are based on physics principles, she says. “It’s a nightmare for them.”

    Proponents of the approach will also have to convince their colleagues to abandon tried-and-tested techniques, Gligorov says. “These are huge collaborations, so to get a new method approved, it takes the age of the Universe.” LHCb has about 1,000 members; ATLAS and CMS have some 3,000 each.

    Despite these challenges, the most hotly discussed issue at the workshop was whether and how particle physics should make use of even more sophisticated AI, in the form of a technique called deep learning. Basic machine-learning algorithms are trained with sample data such as images, and ‘told’ what each picture shows — a house versus a cat, say. But in deep learning, used by software such as Google Translate and Apple’s voice-recognition system Siri, the computer typically receives no such supervision, and finds ways to categorize objects on its own.
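The supervised/unsupervised contrast drawn above can be made concrete with a toy example. Here is a minimal sketch of unsupervised grouping: a 1-D two-means clustering that receives no labels at all (all numbers invented):

```python
# Unsupervised learning in miniature: the algorithm is never told which
# group a value belongs to; it discovers the two categories itself.

def two_means(values, iters=20):
    """Split a list of numbers into two clusters without any labels."""
    c1, c2 = min(values), max(values)  # crude initial centers
    for _ in range(iters):
        g1 = [v for v in values if abs(v - c1) <= abs(v - c2)]
        g2 = [v for v in values if abs(v - c1) > abs(v - c2)]
        c1 = sum(g1) / len(g1)
        c2 = sum(g2) / len(g2)
    return sorted([c1, c2])

# Two unlabeled populations (say, a detector observable for two event types):
data = [1.0, 1.2, 0.9, 1.1, 5.0, 5.2, 4.9, 5.1]
print(two_means(data))  # centers near 1.05 and 5.05
```

A supervised learner, by contrast, would be handed the group label of each training value up front, as in the classifier sketch earlier in this post.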

    Although they emphasized that they would not be comfortable handing over this level of control to an algorithm, several speakers at the CERN workshop discussed how deep learning could be applied to physics. Pierre Baldi, an AI researcher at the University of California, Irvine, who has applied machine learning to various branches of science, described how he and his collaborators have done research suggesting that a deep-learning technique known as dark knowledge might aid — fittingly — in the search for dark matter.

    Deep learning could even lead to the discovery of particles that no theorist has yet predicted, says CMS member Maurizio Pierini, a CERN staff physicist who co-hosted the workshop. “It could be an insurance policy, just in case the theorist who made the right prediction isn’t born yet.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Nature is a weekly international journal publishing the finest peer-reviewed research in all fields of science and technology on the basis of its originality, importance, interdisciplinary interest, timeliness, accessibility, elegance and surprising conclusions. Nature also provides rapid, authoritative, insightful and arresting news and interpretation of topical and coming trends affecting science, scientists and the wider public.

  • richardmitnick 3:53 pm on December 1, 2015 Permalink | Reply

    From Berkeley: “Exoplanet kicked into exile” 

    UC Berkeley

    UC Berkeley

    December 01, 2015
    Robert Sanders

    A planet discovered last year sitting at an unusually large distance from its star – 16 times farther than Pluto is from the sun – may have been kicked out of its birthplace close to the star in a process similar to what may have happened early in our own solar system’s history.

    A wide-angle view of the star HD 106906 taken by the Hubble Space Telescope and a close-up view from the Gemini Planet Imager reveal a dynamically disturbed system of comets, suggesting a link between this and the unusually distant planet (upper right), 11 times the mass of Jupiter. Click image for hi-res versions & caption. Paul Kalas image, UC Berkeley.

    Images from the Gemini Planet Imager (GPI) in the Chilean Andes and the Hubble Space Telescope show that the star has a lopsided comet belt indicative of a very disturbed solar system, and hinting that the planet interactions that roiled the comets closer to the star might have sent the exoplanet into exile as well.

    NASA Hubble Telescope
    NASA/ESA Hubble

    The planet may even have its own ring of debris that it dragged along with it.

    “We think that the planet itself could have captured material from the comet belt, and that the planet is surrounded by a large dust ring or dust shroud,” said Paul Kalas, an adjunct professor of astronomy at the University of California, Berkeley. “We conducted three tests and found tentative evidence for a dust cloud, but the jury is still out.”

    “The measurements we made on the planet suggest it may be dustier than comparison objects, and we are making follow-up observations to check if the planet is really encircled by a disk – an exciting possibility,” added Abhi Rajan, a graduate student at Arizona State University who analyzed the planet images.

    Such planets are of interest because in its youth, our own solar system may have had planets that were kicked out of the local neighborhood and are no longer among the eight planets we see today.

    “Is this a picture of our solar system when it was 13 million years old?” asks Kalas. “We know that our own belt of comets, the Kuiper belt, lost a large fraction of its mass as it evolved, but we don’t have a time machine to go back and see how it was decimated.

    Known objects in the Kuiper belt beyond the orbit of Neptune. (Scale in AU; epoch as of January 2015.)

    One of the ways, though, is to study these violent episodes of gravitational disturbance around other young stars that kick out many objects, including planets.”

    The disturbance could have been caused by a passing star that perturbed the inner planets, or a second massive planet in the system. The GPI team looked for another large planet closer to the star that may have interacted with the exoplanet, but found nothing outside of a Uranus-sized orbit.

    Kalas and Rajan will discuss the observations during a Google+ Hangout On Air at 7 a.m. Hawaii time (noon EST) on Dec. 1 during Extreme Solar Systems III, the third in a series of international meetings on exoplanets that this year takes place on the 20th anniversary of the discovery of the first exoplanet in 1995. Viewers without Google+ accounts may participate via YouTube.

    A paper about the results, with Kalas as lead author, was published in The Astrophysical Journal on Nov. 20, 2015.

    Young, 13-million-year-old star

    The star, HD 106906, is located 300 light years away in the direction of the constellation Crux and is similar to the sun, but much younger: about 13 million years old, compared to our sun’s 4.5 billion years. Planets are thought to form early in a star’s history, however, and in 2014 a team led by Vanessa Bailey at the University of Arizona discovered a planet around the star, HD 106906 b, weighing a hefty 11 times Jupiter’s mass and located in the star’s distant suburbs, an astounding 650 AU away (one AU is the average distance between Earth and the sun, or 93 million miles).
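A quick arithmetic check ties the numbers in this article together. The 93-million-mile value for 1 AU is from the text; Pluto's average distance of roughly 39.5 AU is an assumed outside figure, not from the article:

```python
# Consistency check of the distances quoted above.
AU_MILES = 93e6    # 1 AU in miles, as given in the text
planet_au = 650    # HD 106906 b's separation from its star
pluto_au = 39.5    # Pluto's average distance from the sun (assumed value)

print(planet_au * AU_MILES)         # ~6.0e10 miles from its star
print(round(planet_au / pluto_au))  # 16, matching "16 times farther than Pluto"
```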

    The star HD 106906 and the planet HD 106906 b, with Neptune’s orbit for comparison

    The Gemini Planet Imager mounted on the Gemini South telescope in Chile. Courtesy of Gemini Telescopes.

    Planets were not thought to form so far from their star and its surrounding protoplanetary disk, so some suggested that the planet formed much like a star, by condensing from its own swirling cloud of gas and dust. The GPI and Hubble discovery of a lopsided comet belt and possible ring around the planet points instead to a normal formation within the debris disk around the star, but a violent episode that forced it into a more distant orbit.

    Kalas and a multi-institutional team using GPI first targeted the star in search of other planets in May 2015 and discovered that it was surrounded by a ring of dusty material very close to the size of our own solar system’s Kuiper Belt. The emptiness of the central region – an area about 50 AU in radius, slightly larger than the region occupied by planets in our solar system – indicates that a planetary system has formed there, Kalas said.

    He immediately reanalyzed existing images of the star taken earlier by the Hubble Space Telescope and discovered that the ring of dusty material extended much farther away and was extremely lopsided. On the side facing the planet, the dusty material was vertically thin and spanned nearly the huge distance to the known planet, but on the opposite side the dusty material was vertically thick and truncated.

    “These discoveries suggest that the entire planetary system has been recently jostled by an unknown perturbation to its current asymmetric state,” he said. The planet is also unusual in that its orbit is possibly tilted 21 degrees away from the plane of the inner planetary system, whereas most planets typically lie close to a common plane.

    Kalas and collaborators hypothesized that the planet may have originated from a position closer to the comet belt, and may have captured dusty material that still orbits the planet. To test the hypothesis, they carefully analyzed the GPI and Hubble observations, revealing three properties about the planet consistent with a large dusty ring or shroud surrounding it. However, for each of the three properties, alternate explanations are possible.

    The investigators will be pursuing more sensitive observations with the Hubble Space Telescope to determine if HD 106906 b is in fact one of the first exoplanets that resembles Saturn and its ring system.

    The inner belt of dust around the star has been confirmed by an independent team using the planet-finding instrument SPHERE on the ESO’s Very Large Telescope in Chile. The lopsided nature of the debris disk was not evident, however, until Kalas called up archival images from Hubble’s Advanced Camera for Surveys.

    The GPI Exoplanet Survey, operated by a team of astronomers at UC Berkeley and 23 other institutions, is targeting 600 young stars, all less than approximately 100 million years old, to understand how planetary systems evolve over time and what planetary dynamics could shape the final arrangement of planets like those we see in our solar system today. GPI operates on the Gemini South telescope and provides extremely high-resolution, high-contrast direct imaging, integral field spectroscopy and polarimetry of exoplanets.

    Gemini South telescope
    Gemini South

    Among Kalas’s coauthors is UC Berkeley graduate student Jason Wang. The research was supported by the National Science Foundation and NASA’s Nexus for Exoplanet System Science (NExSS) research coordination network sponsored by NASA’s Science Mission Directorate.

    NASA NExSS bloc

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Founded in the wake of the gold rush by leaders of the newly established 31st state, the University of California’s flagship campus at Berkeley has become one of the preeminent universities in the world. Its early guiding lights, charged with providing education (both “practical” and “classical”) for the state’s people, gradually established a distinguished faculty (with 22 Nobel laureates to date), a stellar research library, and more than 350 academic programs.

    UC Berkeley Seal

  • richardmitnick 3:28 pm on December 1, 2015 Permalink | Reply

    From IceCube: “A search for cosmic-ray sources with IceCube, the Pierre Auger Observatory, and the Telescope Array” 

    IceCube South Pole Neutrino Observatory

    01 Dec 2015
    Sílvia Bravo

    High-energy neutrinos are thought to be excellent cosmic messengers when exploring the extreme universe: they don’t bend in magnetic fields as cosmic rays (CRs) do and they are not absorbed by the radiation background as gamma rays are. However, it turns out that the deviation of some CRs, namely protons, is expected to be only a few degrees at energies above 50 EeV. This opens the possibility for investigating common origins of high-energy neutrinos and CRs.

    In a new study by the IceCube, Pierre Auger, and Telescope Array Collaborations, scientists have looked for correlations between the highest energy neutrino candidates in IceCube and the highest energy CRs in these two cosmic-ray observatories.

    Pierre Auger Observatory
    Pierre Auger

    Telescope Array Collaboration

    The results, submitted today to the Journal of Cosmology and Astroparticle Physics, have not found any correlation at discovery level. However, potentially interesting results have been found and will continue to be studied in future joint analyses.

    Maps in Equatorial and Galactic coordinates showing the arrival directions of the IceCube cascades (black dots) and tracks (diamonds), as well as those of the UHECRs detected by the Pierre Auger Observatory (magenta stars) and Telescope Array (orange stars). The circles around the showers indicate angular errors. The black diamonds are the HESE tracks while the blue diamonds stand for the tracks from the through-going muon sample. The blue curve indicates the Super-Galactic plane. Image: IceCube, Pierre Auger and Telescope Array Collaborations.

    The IceCube astrophysical neutrino flux is consistent with an isotropic distribution, which suggests that most neutrinos have an extragalactic origin. The intensity of this flux is also found to be close to the so-called Waxman-Bahcall flux, which is the rate assuming that ultra-high-energy CRs (UHECRs) are mainly protons and have a power-law spectrum. In this scenario, primary cosmic rays collide to a significant extent with photons and neutrons within the source environment, resulting in mainly protons escaping from these sources.

    The UHECRs detected by the Pierre Auger Observatory (Auger) and the Telescope Array (TA) that were used in this study have energies above 50 EeV, since cosmic rays are deflected the least at the highest energies. UHECRs produce neutrinos that carry only 3–5% of the original proton energy, i.e., neutrinos that would have energies of at least several hundred PeV for the CR sample of this work. However, we expect that the sources of these UHECRs will also produce lower energy CRs, which would then produce neutrinos in the energy range—30 TeV to 2 PeV—observed in IceCube. And this is the idea behind this search: to look for a statistical excess of neutrinos in IceCube from the directions of cosmic rays detected by Auger and TA and, thus, from their sources.

    Not a simple search, but definitely worth trying to study since searches for the most obvious potential CR sources using IceCube neutrinos have not been successful so far. The major challenges of this search are: i) CRs do not precisely point to their sources, and our knowledge of the deviations produced by the galactic magnetic fields is limited; ii) cascade neutrino events—mainly produced by electron and tau neutrinos—in IceCube are characterized by large angular uncertainties; and iii) IceCube neutrino candidates include background muon events due to the interaction of CRs with the Earth’s atmosphere.

    Researchers used three different analyses to tackle these challenges. They first searched for cross-correlations, counting the number of CR-neutrino pairs within different angular windows and comparing the counts to the expectations for the null hypothesis of an isotropic UHECR flux. Then, they used a stacking likelihood analysis that looked for the combined contribution from different sources. These two searches used both cascade and track neutrino events from the astrophysical neutrino fluxes measured in IceCube (HESE and high-energy through-going muons). IceCube track signatures are produced by charged-current interactions of muon neutrinos and have an angular uncertainty of less than one degree. Finally, they performed a third study, a stacking search using the neutrino sample from the four-year point-source search in IceCube, which includes track candidates only.
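The pair-counting idea behind the first (cross-correlation) analysis can be sketched in a few lines. The sky directions below are invented for illustration, and the real analysis also estimates the isotropic expectation with scrambled-sky Monte Carlo trials; this shows only the counting step:

```python
import math

# Count neutrino-cosmic-ray pairs whose angular separation falls inside a
# given window. Coordinates are invented (RA, Dec) pairs in degrees.

def ang_sep_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation (degrees) between two sky directions."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (math.sin(dec1) * math.sin(dec2)
               + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))

def count_pairs(neutrinos, cosmic_rays, window_deg):
    """Number of (neutrino, CR) pairs separated by less than window_deg."""
    return sum(1 for nu in neutrinos for cr in cosmic_rays
               if ang_sep_deg(*nu, *cr) < window_deg)

nus = [(120.0, -30.0), (200.0, 10.0)]
crs = [(121.0, -29.5), (300.0, 45.0)]
print(count_pairs(nus, crs, 5.0))  # 1 pair within a 5-degree window
```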

    The results obtained are all below 3.3 sigma. There is a potentially interesting finding in the analyses performed with the set of high-energy cascades: when the observed counts are compared to an isotropic flux of neutrinos, with the cosmic-ray positions held fixed to account for anisotropies in their arrival directions, the cross-correlation analysis yields a significance of 2.4 sigma. These results were obtained with relatively few events, and the collaborations will update the analyses with additional statistics in the future to follow their evolution.
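For readers unused to "sigma" language, these significances translate into Gaussian tail probabilities via the standard one-sided relation p = erfc(z/√2)/2:

```python
import math

# Convert the significances quoted above into one-sided p-values.

def p_value(z_sigma):
    """One-sided Gaussian tail probability for a z-sigma result."""
    return 0.5 * math.erfc(z_sigma / math.sqrt(2))

print(f"{p_value(2.4):.4f}")  # 0.0082: suggestive, not conclusive
print(f"{p_value(3.3):.5f}")  # 0.00048: still far from the 5-sigma discovery bar
```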

    + Info: Search for correlations between the arrival directions of IceCube neutrino events and ultrahigh-energy cosmic rays detected by the Pierre Auger Observatory and the Telescope Array, IceCube, Pierre Auger and Telescope Array Collaborations: M. G. Aartsen et al. Submitted to the Journal of Cosmology and Astroparticle Physics.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ICECUBE neutrino detector
    IceCube is a particle detector at the South Pole that records the interactions of a nearly massless sub-atomic particle called the neutrino. IceCube searches for neutrinos from the most violent astrophysical sources: events like exploding stars, gamma ray bursts, and cataclysmic phenomena involving black holes and neutron stars. The IceCube telescope is a powerful tool to search for dark matter, and could reveal the new physical processes associated with the enigmatic origin of the highest energy particles in nature. In addition, by exploring the background of neutrinos produced in the atmosphere, IceCube studies the neutrinos themselves; their energies far exceed those produced by accelerator beams. IceCube is the world’s largest neutrino detector, encompassing a cubic kilometer of ice.

  • richardmitnick 2:49 pm on December 1, 2015 Permalink | Reply

    From Symmetry: “What could dark matter be?” 


    Laura Dattaro

    Dark matter is an invisible material that emits or absorbs no light but betrays its presence by interacting gravitationally with visible matter. This image from Dark Universe shows the distribution of dark matter in the universe, as simulated with a novel, high-resolution algorithm at the Kavli Institute for Particle Astrophysics and Cosmology at Stanford University and SLAC National Accelerator Laboratory. Credit: © AMNH

    Although nearly a century has passed since an astronomer first used the term dark matter in the 1930s, the elusive substance still defies explanation. Physicists can measure its effects on the movements of galaxies and other celestial objects, but what it’s made of remains a mystery.

    In order to solve it, physicists have come up with myriad possibilities, plus a unique way to find each one. Some ideas for dark matter particles arose out of attempts to solve other problems in physics. Others are pushing the boundaries of what we understand dark matter to be.

    “You don’t know which experiment is going to ultimately show it,” says Neal Weiner, a New York University physics professor. “And if you don’t think of the right experiment, then you might not find it. It’s not just going to hit you in the face, because it’s dark matter.”

    One image of WIMPS


    The term WIMP encompasses many dark matter particles, some of which are discussed in this list.

    Short for weakly interacting massive particles, WIMPs would have about 1 to 1,000 times the mass of a proton and would interact with one another only through the weak force, the force responsible for radioactive decay.

    If dark matter were a pop star, WIMPs would be Beyoncé. “WIMPs are the canonical candidate,” says Manoj Kaplinghat, a professor of physics and astronomy at the University of California, Irvine.

    But a recent surge in data has cast new doubt on their existence. Despite the fact that scientists are hunting for them in experiments in space and on Earth, including ones at the Large Hadron Collider, WIMPs have yet to show themselves, making the restrictions on their mass, interaction strength and other properties ever tighter.

    CERN LHC Map
    CERN LHC Grand Tunnel
    CERN LHC particles
    LHC at CERN

    If WIMPs do fail to appear, the upshot will be a push for creative new solutions to the dark matter mystery—plus a chance to finally cross something off the list.

    “If we don’t see it, it will at least end up closing the chapter on a really dominant paradigm that’s been the guide in the field for many, many years,” says Mariangela Lisanti, a theoretical particle physicist at Princeton University.

    Fermilab is running an experiment called MiniBooNE, whose detector is shown here, to verify the existence of a relatively low-mass ‘sterile’ neutrino (Image: FNAL)

    Sterile neutrinos

    Neutrinos are almost massless particles that shape-shift from one type to another and can stream right through an entire planet without hitting a thing. As strange as they are, they may have an even odder counterpart known as sterile neutrinos.

    These most elusive particles would be so unresponsive to their surroundings that it would take the entire age of the universe for just one to interact with another bit of matter.

    If sterile neutrinos are the stuff of dark matter, their reluctance to interact might seem to spell doom for physicists hoping to detect them. But in a poetic twist, it’s possible that they decay into something we can find quite easily: photons, or particles of light.

    “Photons, we’re pretty good at,” says Stefano Profumo, a physics professor at the University of California, Santa Cruz.

    Last year, physicists using space-based telescopes discovered a steady signal with the energy predicted for decaying sterile neutrinos streaming from the centers of galaxy clusters. But the signal could originate from a different source, such as potassium ions. (Profumo proposed this idea in a paper provocatively titled Dark matter searches going bananas.) A new Japanese telescope known as [JAXA]/ASTRO-H has much better energy resolution and may be able to put an end to the debate.

    JAXA ASTRO-H telescope

    Image including neutralinos


    The canonical example of a WIMP, the neutralino, arises out of the theory of Supersymmetry. Supersymmetry posits that every known particle has a “super” partner and helps to fill some holes in the Standard Model, but its particles have stubbornly eluded observation.

    Supersymmetry standard model
    Standard Model of Supersymmetry

    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    Some of them, like the partners of the photon and the Z boson, have properties akin to dark matter. Dark matter could be a mix of these supersymmetric particles, and the one we’d be most likely to observe is known as the neutralino.

    Discovering a neutralino would help solve two massive physics problems—it would tell us the identity of dark matter and would give us proof of the existence of Supersymmetry. But it would also leave physicists with the conundrum of all those other missing supersymmetric particles.

    “If dark matter is a neutralino, it’s essentially telling us there’s a whole host of other new stuff that’s out there that’s just waiting to be discovered,” Lisanti says. “It opens up a floodgate of really, really interesting and very exciting work to be done.”

    Asymmetric dark matter

    In the beginning of the universe, matter and antimatter collided furiously, annihilating each other on contact until, somehow, only matter was left. But there’s nothing in the Standard Model of particle physics that says this must be so. Antimatter and matter should have existed in equal amounts, wiping each other out and leaving an empty universe.

    That’s clearly not the case, and physicists don’t yet know why. It’s possible the same principle applies to dark matter. In a twist on the standard neutralino theory, which includes the property that neutralinos are their own antiparticle, an idea known as asymmetric dark matter proposes that anti-dark matter particles were wiped out by their dark matter counterparts, leaving behind the dark matter we see today. Finding asymmetric dark matter could help answer not only the question of what dark matter is, but also why we’re here to look for it.

    From Quantum Diaries, depiction of Axions.


    As the search for WIMPs faces challenges, a particle known as the axion is generating new excitement.

    The axion itself is not new. Physicists first imagined its existence in the late 1970s, shortly after physicists Helen Quinn and Roberto Peccei published a landmark paper that helped to solve a problem with the strong nuclear force. While it’s been simmering in the background as a dark matter candidate for decades, experimentalists haven’t been able to search for it—until now.

    “We’re just recently getting to the stage of having experiments that are able to probe the most interesting regions of axion parameter space,” says physics professor Risa Wechsler of the Kavli Institute for Particle Astrophysics and Cosmology, a joint institute of Stanford University and SLAC National Accelerator Laboratory.

    The University of Washington’s Axion Dark Matter Experiment (ADMX) is on the hunt for axions, using a strong magnetic field to try to turn them into detectable photons.

    U Washington ADMX Axion Dark Matter Experiment

    At the same time, theorists are beginning to imagine new types of axions, along with novel ways to search for them.

    “There’s been a renaissance in axion theory, leading to a lot more excitement in axion experiments,” says UCI theoretical physicist Jonathan Feng.

    Mirror world dark matter

    Just like strange objects and creatures inhabited the world beyond Alice’s looking glass, dark matter might exist in an entirely separate world full of its own versions of all the elementary particles. These dark protons and neutrons would never interact with us, save through gravity, exerting a pull on matter in our world without leaving any other trace. “The only reason we know there’s something out there called dark matter is because of gravity,” Feng says. “This embodies that very beautifully.”

    Beautiful as it may be, the theory leaves little hope for ever detecting dark matter. But there are hints that dark photons might be able to morph into regular photons, similar to the way neutrinos oscillate among flavors. This has spawned active research into understanding and finding these mysterious particles.

    Extra dimensional dark matter

    If dark matter doesn’t exist in another world entirely, it might live in a fourth spatial dimension unseen by humans and our experiments. Such a dimension would be too small for us to observe a particle’s movements within it. Instead, we would see multiple particles with the same charge but different masses, an idea proposed by Theodor Kaluza and Oskar Klein in the 1920s. One of these particles could be the dark matter particle, a much more recent concept known as Kaluza-Klein dark matter. These particles wouldn’t shine or reflect any light, explaining why dark matter can’t be seen by anyone in our three dimensions.
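    The mass pattern Kaluza and Klein described can be sketched in a few lines. In this toy example the compactification scale is an arbitrary illustrative choice, not a measured value: a particle free to move in a compact extra dimension of radius R appears in our dimensions as a tower of copies with masses m_n² = m₀² + (n/R)² (in natural units, ħ = c = 1).

```python
# Illustrative Kaluza-Klein tower (the scale below is hypothetical).
m0 = 0.0        # particle is massless in the higher-dimensional theory
inv_R = 500.0   # 1/R in GeV -- an assumed compactification scale

# Each momentum mode n in the extra dimension looks to us like a
# heavier copy of the same particle, with the same charge.
tower = [(m0**2 + (n * inv_R) ** 2) ** 0.5 for n in range(5)]
print(tower)    # evenly spaced masses: 0, 500, 1000, 1500, 2000 GeV
```

Any one rung of such a tower, if stable and neutral, could play the role of the dark matter particle.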

    Confirming that dark matter exists in another dimension could also be seen as support for string theory, which requires extra dimensions to work.

    “You can go out there and map out the extra-dimensional world just like 500 years ago people mapped out the continents,” Feng says.


    SIMPs

    Though physicists have never detected dark matter, they have a pretty good idea of how much of it exists, based on observations of galaxies. But observations of the inner regions of galaxies don’t match up with some dark matter simulations, a puzzle physicists and astronomers are still working to solve.

    Those simulations often assume that dark matter doesn’t interact with itself, but there’s no reason to believe that has to be the case. That realization has led to the concept of strongly interacting massive particles, or SIMPs, the latest newcomer to the crowded field of dark matter candidates. Simulations run with SIMPs seem to eliminate the discrepancy in other models, Feng says, and could even explain the strange photon signal emanating from galaxy clusters, rather than sterile neutrinos.

    Composite dark matter

    Dark matter could be none of these candidates—or it could be more than one.

    “There is no reason for dark matter to be just one particle, not a single one,” Kaplinghat says. “We only assume it is for simplicity.”

    After all, visible matter is made up of a swarm of particles, each with their own properties and behaviors, each able to combine with others in countless ways. Why should dark matter be any different?

    Dark matter could have its own equivalents of quarks and gluons interacting to form dark baryons and other particles. There could be dark atoms, made of several particles linked together.

    Whatever the case, dark matter is likely to keep physicists probing the depths of the universe for decades, revealing new mysteries even as old ones are solved.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.

  • richardmitnick 5:20 pm on November 30, 2015 Permalink | Reply
    Tags: , ,   

    From SDSS: “How SDSS Uses Light to Measure the Mass of Stars in Galaxies” 

    SDSS Telescope

    Sloan Digital Sky Survey

    November 30, 2015
    Karen Masters

    SDSS 3

    Galaxy NGC 3338 imaged by SDSS (the red stars to the right are in our own galaxy). Credit: SDSS

    It might sound relatively simple – astronomers look at a galaxy, count the stars in it, and work out how much mass they contain, but in reality interpreting the total light from a galaxy as a mass of stars is fairly complex.

    If all stars were the same mass and brightness, it would be easy, but stars come in all different brightnesses, colours and masses, with the lowest mass stars over 600 times smaller than the most massive.

    Hertzsprung-Russell (HR) Diagram, which shows the mass, colour, brightness and lifetimes of different types of stars. This version identifies many well known stars in the Milky Way galaxy. Credit: ESO

    And it turns out that most of the light from a galaxy will come from just a small fraction of these stars (those in the upper left of the HR diagram). The most massive stars are so much brighter, ounce for ounce, than dimmer stars that estimating the total mass becomes much more of a guessing game than astronomers would like (while they are 600 times more massive, they are over a million times brighter). So astronomers have to make assumptions about how many stars of low mass are hiding behind the light of their brighter siblings to make the total count.
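    A toy calculation makes the point concrete. The sketch below assumes a Salpeter initial mass function (dN/dM ∝ M^-2.35) and a rough main-sequence mass-luminosity relation (L ∝ M^3.5) — standard textbook approximations, not the inputs of any particular survey pipeline:

```python
import numpy as np

# Relative number of stars per mass bin (Salpeter IMF) and the light
# each bin contributes (number of stars times L ~ M^3.5 per star).
masses = np.linspace(0.1, 60, 100_000)   # stellar masses, solar units
number = masses ** -2.35
light = number * masses ** 3.5

# Compare the rare massive stars (> 8 solar masses) with everything else.
heavy = masses > 8
n_frac = number[heavy].sum() / number.sum()
l_frac = light[heavy].sum() / light.sum()
print(f"stars above 8 solar masses: {n_frac:.2%} of the stars, "
      f"{l_frac:.1%} of the light")
```

Under these assumptions, well under one percent of the stars produce almost all of the light — which is exactly why the faint, low-mass majority has to be inferred rather than counted.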

    One of the first astronomers to suggest trying to decode the light from galaxies in this way was Beatrice Tinsley. British born, raised in New Zealand, and working at Yale University in the USA, Dr. Tinsley had a much larger impact on extragalactic astronomy than her sadly shortened career would suggest (she died of cancer in 1981 aged just 40).

    Stars of different masses have distinctive spectra (and colours), as first famously classified by astronomer Annie Jump Cannon in the late 1890s into the OBAFGKM stellar sequence. O stars (at the top left of the HR diagram) are massive, hot, blue and with very strong emission lines, while M stars (at the lower right) are low mass, red and show absorption features from metallic lines in their atmospheres. With a best guess as to the relative abundance of different stars (something we call the “initial mass function“) a stellar population model can be constructed from individual stellar spectra or colours and fit to the total light from the galaxy. Example optical spectra of different types of stars are shown below (or see the APOGEE View of the IR Stellar Sequence).

    Example optical spectra of different stellar types. Credit: NOAO/AURA/NSF.

    Using data from SDSS (and other surveys) astronomers use this method to decode the galaxy light – in fact we can use either the total light observed through different filters in the SDSS imaging, to match the colours of the stars, or, if we measure the spectrum of the galaxy, we can fit a population of stars to this instead. While in principle the spectrum should give more information, in SDSS (at least before the MaNGA survey) we take spectra through a small fibre aperture (just 2-3″ across), so for nearby galaxies this misses most of the light (e.g. see below), and most galaxies have colour gradients (being redder in the middle than in the outskirts), so the extrapolation can add quite a lot of error to the inferred mass.
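    In outline, such a fit amounts to finding the mix of stellar templates that best reproduces the observed light. The sketch below is purely illustrative: the "spectra" are fake Gaussians standing in for real template libraries, and a plain least-squares solve stands in for the more careful (non-negative, dust- and age-aware) fits real pipelines perform:

```python
import numpy as np

# A mock wavelength grid and three fake stellar "templates".
wave = np.linspace(4000, 7000, 300)          # Angstroms
def template(center, width):
    return np.exp(-0.5 * ((wave - center) / width) ** 2)

templates = np.column_stack([template(4300, 400),   # blue, "O-like"
                             template(5500, 500),   # solar, "G-like"
                             template(6500, 450)])  # red, "M-like"

# Build a mock "galaxy spectrum" dominated by the red population,
# then add a little observational noise.
true_weights = np.array([0.2, 1.0, 3.0])
galaxy = templates @ true_weights
galaxy = galaxy + np.random.default_rng(0).normal(0, 0.01, wave.size)

# Solve for the population mix that best matches the galaxy light.
fit_weights, *_ = np.linalg.lstsq(templates, galaxy, rcond=None)
print(np.round(fit_weights, 2))   # approximately recovers true_weights
```

The recovered weights are the "stellar population" of the toy galaxy; summing each template's contribution weighted by a mass-to-light guess is what turns light into stellar mass.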

    Many astronomers prefer to use models based on the total light through different filters (at least for nearby galaxies). The five filters of the SDSS imaging are an excellent start for this, but extending into the UV with the GALEX survey, and IR with a survey like 2MASS or WISE adds even more information to make sure no stars are being missed.

    NASA Galex telescope

    2MASS Telescope

    NASA Wise Telescope

    However, these fits are still a “best guess” and will still have error – there is often more than one way to fit the galaxy light (e.g. model galaxies with certain combinations of ages and metallicities can have the same integrated colours), so there’s still typically up to 50% error in the inferred mass.

    The SDSS camera filter throughput curves (from left to right ugriz). Credit: SDSS

    But with galaxies spanning more than 3 orders of magnitude in total mass (i.e. the biggest galaxies have more than 1,000 times the stellar mass of the smallest) this is still good enough for many purposes. It gives us an idea of the total mass in stars in a galaxy, which (as you know from an earlier post for IYL2015) is almost always far less than the total mass we estimate from looking at the dynamics (i.e. the “gravitating mass”). And the properties of galaxies correlate extremely well with their stellar masses, so even an estimate is a really useful thing to have.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The Sloan Digital Sky Survey has created the most detailed three-dimensional maps of the Universe ever made, with deep multi-color images of one third of the sky, and spectra for more than three million astronomical objects. Learn and explore all phases and surveys—past, present, and future—of the SDSS.

    The SDSS began regular survey operations in 2000, after a decade of design and construction. It has progressed through several phases, SDSS-I (2000-2005), SDSS-II (2005-2008), SDSS-III (2008-2014), and SDSS-IV (2014-). Each of these phases has involved multiple surveys with interlocking science goals. The three surveys that comprise SDSS-IV are eBOSS, APOGEE-2, and MaNGA, described at the links below. You can find more about the surveys of SDSS I-III by following the Prior Surveys link.

    Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS- IV acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is

    SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik (MPE), National Astronomical Observatory of China, New Mexico State University, New York University, University of Notre Dame, Observatório Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.

  • richardmitnick 4:22 pm on November 30, 2015 Permalink | Reply
    Tags: , , , Hypernova   

    From Discovery: “Turbulent Magnetic ‘Perfect Storm’ Triggers Hypernovas” 

    Discovery News
    Discovery News

    Nov 30, 2015
    Ian O’Neill

    Screenshot of the turbulent conditions inside a core-collapse Type II supernova simulation carried out by the Blue Waters supercomputer. Robert R. Sisneros (NCSA) and Philipp Mösta

    Cray Blue Waters supercomputer

    Although intense magnetic fields have long been assumed to be the driving force behind the most powerful supernovas, astrophysicists have now created a computer model that simulates a dying star’s magnetic guts before it generates a cosmic monster.

    From the computer model, Supernova Plasma Energy

    When massive stars die, they explode. But sometimes these stars really, really explode, becoming the most powerful explosions in the observable universe.

    When a massive star runs out of hydrogen fuel, the intense gravity inside its core will start to fuse progressively more massive elements together. On cosmic timescales, this process happens fast, but as the star starts to try to fuse iron, the process comes to an abrupt stop. Fusion in the core is extinguished, and gravity wants to crush the core into oblivion.

    Over a period of one second, the star’s core will dramatically implode, from around 1,000 miles to 10 miles across, initiating the mother of all shock waves that, ultimately, rip the star to shreds. This is the short story: star runs out of fuel, implodes, shockwaves, massive explosion. All that’s left is a rapidly expanding cloud of super-heated gas and a tiny neutron star rapidly spinning where the star’s core used to live.

    This model is all well and good for explaining how massive stars die, but occasionally astronomers see stellar explosions in the farthest reaches of the cosmos popping off with far more energy than conventional supernova models can explain. These explosions are known as gamma-ray bursts, and it is believed they are the product of a very special breed of supernova — the HYPERnova.

    Besides sounding like the next Marvel Comics movie baddie, a hypernova is the epitome of magnetic intensity. As a massive star’s core begins to collapse, it doesn’t only experience a rapid increase in density; the spin of the star is conserved, and, like an ice-skater who retracts her arms while spinning on the spot, the core of the collapsing star will rapidly “spin up” as it shrinks. Along with all this spinning violence, turbulent flows in the superheated plasma spike and the magnetic field of the star becomes extremely concentrated.
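    The ice-skater analogy can be made quantitative with the rough figures quoted above (the radii are the article's approximate numbers; the scalings are standard back-of-envelope physics, not output from the simulation):

```python
# Conservation of angular momentum: L = I * omega, and I ~ M * r^2 for
# a core of fixed mass, so the spin rate omega scales as 1/r^2.
r_before_km = 1600.0   # ~1,000 miles across, pre-collapse core
r_after_km = 16.0      # ~10 miles across, neutron-star size

spin_up = (r_before_km / r_after_km) ** 2
print(f"spin rate increases ~{spin_up:,.0f}x")

# Magnetic flux freezing gives the same 1/r^2 scaling for the field
# threading the core, so the frozen-in field is also concentrated by
# a factor of ~10,000 before any dynamo amplifies it further.
```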

    Artist’s impression of a hypernova, generating 2 gamma-ray jets. NASA/JPL-Caltech

    These effects of core collapse supernovas were already pretty well understood — though firmly rooted in theory, the picture appears to be supported by observations of supernovae. But the mechanisms behind hypernovae (and gamma-ray bursts) hadn’t been fully explained, until now.

    In a simulation using one of the most powerful supercomputers on the planet, an international team of researchers have created a model of a hypernova’s core, during collapse, over a fraction of a second as it erupts. And what they found could be the Holy Grail behind gamma-ray bursts.

    Gamma-ray bursts are thought to be so energetic because, when a massive star collapses and goes supernova, something happens in the core that blasts matter and energy in opposite directions in two highly concentrated (or collimated) jets from the erupting supernova’s magnetic poles. Because these jets are so intense, should one of the beams from the hypernova be pointing at Earth, the signal gives the impression it was generated by a much more powerful explosion than a typical supernova can muster.

    “We were looking for the basic mechanism, the core engine, behind how a collapsing star could lead to the formation of jets,” said computational scientist Erik Schnetter, of the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, who designed the model to simulate the cores of dying stars.

    One way to imagine why these jets are so powerful would be to take a stick of dynamite and place it on the ground with a cannonball balanced on top. When the dynamite explodes, it makes a loud bang and might leave a small smoking crater in the ground, but the cannonball probably won’t move very far — it will likely jump a foot in the air and roll into the small crater. But place that same stick of dynamite in a metal tube, block one end and roll the cannonball into the open end: when the dynamite explodes, all the energy is focused out of the open end, ejecting the ball hundreds of meters into the air.

    Like our dynamite analogy, most of the hypernova’s energy is concentrated through the two jets — contained inside magnetic “tubes”. So when we see the jet pointing at us, it appears many times brighter (and more powerful) than the sum of its parts if the supernova ejected all of its energy omnidirectionally. This is a gamma-ray burst.
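    Roughly how much brighter a beamed burst looks can be estimated from the geometry alone. In this sketch the jet half-opening angle is an assumed value for illustration, not one from the study: an observer inside the beam, assuming the source radiates in all directions, overestimates its power by the ratio of the full sky to the jets' actual solid angle.

```python
import math

# Assumed half-opening angle of each jet (a few degrees is a
# commonly quoted ballpark for long gamma-ray bursts).
theta = math.radians(5.0)

# Two opposing cones, each subtending 2*pi*(1 - cos(theta)) steradians.
solid_angle = 2 * (2 * math.pi * (1 - math.cos(theta)))

# Apparent (isotropic-equivalent) boost: full sky / jet solid angle.
boost = 4 * math.pi / solid_angle
print(f"apparent brightening: ~{boost:.0f}x")
```

For a 5-degree jet this works out to a boost of a couple of hundred times, which is why an on-axis hypernova can masquerade as something far more powerful than the total energy it actually released.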

    How these jets are formed, however, has largely been a mystery. But the simulation carried out over 2 weeks on the Blue Waters supercomputer, based at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, has revealed an extreme dynamo, driven by turbulence, may be at the center of it all.

    “A dynamo is a way of taking the small-scale magnetic structures inside a massive star and converting them into larger and larger magnetic structures needed to produce hypernovae and long gamma-ray bursts,” said postdoctoral fellow Philipp Mösta, of the University of California, Berkeley, and first author of a study published in the journal Nature. “That kicks off the process.

    “People had believed this process could work out. Now we actually show it.”

    By reconstructing the fine-scale structure inside a dying star’s core as it collapses, the researchers have, for the first time, shown that a mechanism called “magnetorotational instability” may be what triggers the intense magnetic conditions inside the core of a hypernova to generate the powerful jets.

    Different layers of stars are known to rotate at different speeds — indeed, our sun is known to have differential rotation. As a massive star’s core collapses, this differential rotation triggers intense instabilities, creating turbulence that channels the magnetic fields into powerful flux tubes. This rapid alignment accelerates the stellar plasma, which, in turn, revs up the magnetic field a quadrillion (that’s a 1 with 15 zeros) times. This feedback loop will fuel the rapid release of material out of the magnetic poles, triggering a hypernova and gamma-ray burst.

    According to Mösta, this situation is akin to how powerful hurricanes form in the Earth’s atmosphere: small-scale turbulent weather phenomena coalesce to form large-scale cyclones. A hypernova could therefore be imagined as the “perfect storm,” where small-scale turbulence in a collapsing core drives powerful magnetic fields that, if the conditions are right, produce intense jets of exploding matter.

    “What we have done are the first global extremely high-resolution simulations of this that actually show that you create this large global field from a purely turbulent one,” Mösta said. “The simulations also demonstrate a mechanism to form magnetars, neutron stars with an extremely strong magnetic field, which may be driving a particular class of very bright supernovae.”

    Although digging into the guts of the most powerful explosions in the universe is cool in itself, this research may also go some way toward explaining how some of the heaviest elements in our universe formed.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

  • richardmitnick 3:38 pm on November 30, 2015 Permalink | Reply
    Tags: , ,   

    From ANU: “How the Earth’s Pacific plates collapsed” 

    ANU Australian National University Bloc

    Australian National University

    23 November 2015

    Professor Richard Arculus. Image Charles Tambiah and MNF

    Scientists drilling into the ocean floor have for the first time found out what happens when one tectonic plate first gets pushed under another.

    The key principle of plate tectonics is that the lithosphere exists as separate and distinct tectonic plates, which float on the fluid-like (visco-elastic solid) asthenosphere. The relative fluidity of the asthenosphere allows the tectonic plates to undergo motion in different directions. This map shows 15 of the largest plates. Note that the Indo-Australian Plate may be breaking apart into the Indian and Australian plates, which are shown separately on this map.

    The international expedition drilled into the Pacific ocean floor and found distinctive rocks formed when the Pacific tectonic plate changed direction and began to plunge under the Philippine Sea Plate about 50 million years ago.

    “It’s a bit like a rugby scrum, with two rows of forwards pushing on each other. Then one side goes down and the other side goes over the top,” said study leader Professor Richard Arculus, from the Research School of Earth Sciences.

    “But we never knew what started the scrum collapsing,” said Professor Arculus.

    The new knowledge will help scientists understand the huge earthquakes and volcanoes that form where the Earth’s plates collide and one plate gets pushed under the other.

    As part of the International Ocean Discovery Program, the team studied the sea floor in 4,700 metres of water in the Amami Sankaku Basin of the north-western Pacific Ocean, near the Izu-Bonin-Mariana Trench, which forms the deepest parts of the Earth’s oceans.

    Drilling 1,600 metres into the sea floor, the team recovered rock types from the extensive rifts and big volcanoes that were initiated as one plate bored under the other in a process known as subduction.

    “We found rocks low in titanium, but high in scandium and vanadium, so the Earth’s mantle overlying the subducting plate must have been around 1,300 degrees Celsius and perhaps 150 degrees hotter than we expected to find,” Professor Arculus said.

    The team found the tectonic scrum collapsed at the south end first and then the Pacific Plate rapidly collapsed 1,000 kilometres northwards in about one million years.
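    For a sense of how fast that is, the quoted figures work out to about a metre per year — enormous by plate-tectonic standards, where typical plate motions are a few centimetres per year:

```python
# Rate implied by the figures above: 1,000 km of collapse propagating
# northwards in roughly one million years.
distance_km = 1000.0
time_yr = 1.0e6

rate_cm_per_yr = distance_km * 1e5 / time_yr   # 1 km = 1e5 cm
print(f"~{rate_cm_per_yr:.0f} cm/yr")          # ~100 cm/yr
```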

    “It’s quite complex. There’s a scissoring motion going on. You’d need skycam to see the 3D nature of it,” Professor Arculus said.

    Professor Arculus said that the new knowledge could give insights into the formation of copper and gold deposits that are often formed where plates collide.

    The research is published in Nature Geoscience.


    The International Ocean Discovery Program (IODP) is the world’s largest geoscience research program with 26 countries in its membership.

    ANU hosts the office of the Australian and New Zealand IODP Consortium (ANZIC), which includes 15 universities and two government agencies involved in geoscience. The IODP was awarded $10 million over five years in the latest round of Australian Research Council funding.

    More information about ANZIC is available at

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ANU Campus

    ANU is a world-leading university in Australia’s capital city, Canberra. Our location points to our unique history, ties to the Australian Government and special standing as a resource for the Australian people.

    Our focus on research as an asset, and an approach to education, ensures our graduates are in demand the world-over for their abilities to understand, and apply vision and creativity to addressing complex contemporary challenges.

  • richardmitnick 1:08 pm on November 30, 2015 Permalink | Reply
    Tags: , ,   

    From EMSL: “Scarcity Drives Efficiency” 



    September 28, 2015 [This just became available.]
    Tim Scheibe at EMSL
    Younan Xia

    Platinum can be an efficient fuel cell catalyst

    Researchers developed a new class of catalysts by putting essentially all of the platinum atoms on the surface and minimizing the use of atoms in the core, thereby increasing the utilization efficiency of precious metals for fuel cells.

    The Science

    Platinum is an excellent catalyst for reactions in fuel cells, but its scarcity and cost have driven scientists to look for more efficient ways to use the precious metal. In a recent study, researchers developed a new class of catalysts by putting essentially all of the platinum atoms on the surface and minimizing the use of atoms in the core, thereby increasing efficient utilization of platinum for fuel cells.

    The Impact

    The novel nanocage catalyst will help promote the sustainable use of platinum and other precious metals for energy and other industrial applications. The reduced costs associated with the novel nanostructures will encourage commercialization of this technology for the development of zero-emission energy sources.


    Researchers from Georgia Institute of Technology and Emory University, Xiamen University, University of Wisconsin–Madison, Oak Ridge National Laboratory and Arizona State University fabricated cubic and octahedral nanocages by depositing a few atomic layers of platinum on palladium nanocrystals, and then completely etching away the palladium core. Density functional theory (DFT) calculations suggested the etching process was initiated by the formation of vacancies through the removal of palladium atoms incorporated into the outermost layer during the deposition of platinum. Some of the computational work was performed using computer resources at EMSL, the Environmental Molecular Sciences Laboratory, a Department of Energy Office of Biological and Environmental Research user facility. DFT calculations were performed at supercomputing centers at EMSL, Argonne National Laboratory and the National Energy Research Scientific Computing Center.

    Based on the findings, researchers propose that during platinum deposition, some palladium atoms are incorporated into the outermost platinum layers. Upon contact with the etchant—an acid or corrosive chemical—the palladium atoms in the outermost layer of the platinum shell are oxidized to generate vacancies in the surface of the nanostructure. The underlying palladium atoms then diffuse to these vacancies and are continuously etched away, leaving behind atom-wide channels. Over time, the channels grow in size to allow direct corrosion of palladium from the core. This process leads to a nanocage with a few layers of platinum atoms in the shell and a hollow interior.
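    A rough geometric estimate shows why hollowing out the core saves so much metal. The particle size, atomic diameter and shell thickness below are illustrative assumptions of this post, not the paper's measurements: since only surface atoms catalyze the reaction, a thin shell delivers nearly the same active area as a solid particle of the same size while using a fraction of the atoms.

```python
import math

# Assumed geometry (illustrative, not from the study).
r = 10.0               # nm, outer radius of the particle
d_atom = 0.28          # nm, approximate platinum atomic diameter
shell_t = 3 * d_atom   # a shell a few atomic layers thick

# Metal content scales with volume; surface area is the same either way.
v_solid = 4 / 3 * math.pi * r ** 3
v_shell = v_solid - 4 / 3 * math.pi * (r - shell_t) ** 3

fraction = v_shell / v_solid
print(f"nanocage uses ~{fraction:.0%} of the platinum of a solid particle")
```

Under these assumptions the hollow cage needs only around a quarter of the metal, which is the utilization gain the etching strategy is after.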

    Compared to a commercial platinum/carbon catalyst, the nanocages showed enhanced catalytic activity and durability. The findings demonstrate it is possible to design fuel cell catalysts with efficient use of precious metals without sacrificing performance. Moreover, it is possible to tailor the arrangement of atoms or the surface structure of catalytic particles to optimize their catalytic performance for a specific type of chemical reaction. The researchers are testing these catalysts in fuel cell devices to determine how to further improve their design for clean energy applications.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    EMSL campus

    Welcome to EMSL. EMSL is a national scientific user facility that is funded and sponsored by DOE’s Office of Biological & Environmental Research. As a user facility, our scientific capabilities – people, instruments and facilities – are available for use by the global research community. We support BER’s mission to provide innovative solutions to the nation’s environmental and energy production challenges in areas such as atmospheric aerosols, feedstocks, global carbon cycling, biogeochemistry, subsurface science and energy materials.

    A deep understanding of molecular-level processes is critical to gaining a predictive, systems-level understanding of the impacts of aerosols and terrestrial systems on climate change; making clean, affordable, abundant energy; and cleaning up our legacy wastes. Visit our Science page to learn how EMSL leads in these areas, through our Science Themes.

    Team’s in Our DNA. We approach science differently than many institutions. We believe in – and have proven – the value of drawing together members of the scientific community and assembling the people, resources and facilities to solve problems. It’s in our DNA, ever since our founder Dr. Wiley’s initial call to create a user facility that would facilitate “synergism between the physical, mathematical, and life sciences.” We integrate experts across disciplines; we integrate experiment with theory; and we coordinate our user program proposal calls with those of other user facilities.

    We proudly provide an enriched, customized experience that allows users to connect with our people and capabilities in an environment where we focus on solving problems. We collaborate with researchers from academia, government labs and industry, and from nearly all 50 states and from other countries.

  • richardmitnick 12:37 pm on November 30, 2015 Permalink | Reply

    From ORNL via DOE Pulse: “ORNL Wigner Fellow writes the recipe for glowing research” 


    Oak Ridge National Laboratory

    November 30, 2015

    ORNL’s Michael Chance

    In 2013, an inorganic chemistry student at the University of South Carolina conducted neutron experiments at DOE’s Oak Ridge National Laboratory for his Ph.D. work.

    Two years later, Michael Chance is picking right back up on his research at ORNL as a Eugene P. Wigner Fellow, the most prestigious fellowship at ORNL.

    The Wigner Fellowship, established in 1975, was created in honor of Eugene P. Wigner, Nobel Laureate and the first ORNL Director of Research and Development.

    ORNL Wigner Fellows are exceptional early-career scientists; Chance earned the distinction after establishing a new crystal growth technique for his doctoral thesis. As a Wigner Fellow, he has a rare opportunity to pursue research programs, collaborate with distinguished ORNL scientists and staff, and access national laboratory expertise, facilities, and programs.

    “It’s exciting, being a Wigner Fellow and getting this support from a national lab to do real science and solve real problems,” said Chance.

    As a Wigner Fellow, Chance has had the opportunity to work with the Critical Materials Institute (CMI), an Energy Department Innovation Hub led by Ames Laboratory. CMI, supported by the Advanced Manufacturing Office in DOE’s Office of Energy Efficiency and Renewable Energy, is dedicated to reducing the nation’s dependence on vital, yet expensive and critical materials.

    Chance’s CMI research requires him to pull from his training as a solid-state chemist to reimagine and improve an important, everyday technology: fluorescent lighting.

    Flick on a fluorescent light switch, and chemistry happens. Inside the glass tube, an electrical discharge excites mercury vapor, which bombards a luminescent coating, called a phosphor, with UV rays. This causes the phosphor to emit visible light.

    “The problem with the fluorescent lights that are commonly used is that they require phosphors with a large amount of rare earths in them,” said Chance.

    Fluorescent lights are more energy-efficient than incandescent light bulbs, but LED lighting has recently emerged as the front-runner in efficient lighting technology. Even so, the transition to the next-generation technology won’t happen rapidly, Chance explained.

    “LEDs (light emitting diodes) are rapidly growing their market share, but fluorescent lighting is poised to remain relevant for some time,” said Chance.

    Chance is part of an ORNL Materials Science and Technology Division team whose goal is to lessen the dependence on rare earths in the popular fluorescent lighting source.

    “Currently, we’re depending on these materials that are very expensive and at risk for supply disruptions,” Chance said. “DOE targets require us to significantly reduce the amount of rare earths in the phosphor coating to address this.”

    To do so, the team is testing new combinations of chemicals, tweaking the recipe for creating phosphors to find direct substitutes for the current phosphor blend.

    “A lot of solid-state science is like cooking—you’ve got to find the right recipe,” he said. “It’s not just what goes in the recipe, it’s the way you cook it that really makes a difference. Once I make a phosphor, it’s got to stand up to a really tough testing environment. It’s not easy,” said Chance.

    He points to his background in crystal growth methods as a big help in the Edisonian approach of finding the right combination of chemicals.

    “You need to use chemistry knowledge and intuition for the phosphor research. You need to discover trends from what’s in the literature, new and old, to see what to try next,” Chance said.

    He keeps a lab notebook prominently on his desk, full of what he calls “scrupulous notes” on his successes and setbacks in synthesizing manganese-based phosphors.

    “It is paramount to keep track of everything in the process so others can reproduce it.”

    The transition from university studies to the thriving science culture of a national laboratory has been relatively easy for Michael Chance. A native of Marshall County, Kentucky, he recently settled in East Tennessee with his wife, an art teacher at a local elementary school.

    He’s adjusted to life at ORNL with ease.

    “While I was a student at the University of South Carolina, our research group collaborated on a project at ORNL with Lynn Boatner [leader of the Synthesis and Properties of Novel Materials Group in ORNL’s Materials Science and Technology Division],” said Chance.

    See the full article here.


    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.

