Tagged: MIT

  • richardmitnick 11:15 am on April 13, 2018 Permalink | Reply
    Tags: MIT, NASA High Energy Transient Explorer 2 (HETE-2)

    From MIT: “TESS readies for takeoff” 

    MIT News

    April 12, 2018
    Jennifer Chu

    A set of flight camera electronics on one of the TESS cameras, developed by the MIT Kavli Institute for Astrophysics and Space Research (MKI), will transmit exoplanet data from the camera to a computer aboard the spacecraft that will process it before transmitting it back to scientists on Earth. Image: MIT Kavli Institute.

    NASA’s Transiting Exoplanet Survey Satellite (TESS), shown here in a conceptual illustration, will identify exoplanets orbiting the brightest stars just outside our solar system. TESS will search for exoplanets orbiting stars within hundreds of light-years of our solar system. Looking at these close, bright stars will allow large ground-based telescopes and the James Webb Space Telescope to do follow-up observations on the exoplanets TESS finds to characterize their atmospheres. Image: NASA’s Goddard Space Flight Center.

    Satellite developed by MIT aims to discover thousands of nearby exoplanets, including at least 50 Earth-sized ones.

    There are potentially thousands of planets that lie just outside our solar system — galactic neighbors that could be rocky worlds or more tenuous collections of gas and dust. Where are these closest exoplanets located? And which of them might we be able to probe for clues to their composition and even habitability? The Transiting Exoplanet Survey Satellite (TESS) will be the first to seek out these nearby worlds.

    The NASA-funded spacecraft, not much larger than a refrigerator, carries four cameras that were conceived, designed, and built at MIT, with one wide-eyed vision: to survey the nearest, brightest stars in the sky for signs of passing planets.

    Now, more than a decade since MIT scientists first proposed the mission, TESS is about to get off the ground. The spacecraft is scheduled to launch on a SpaceX Falcon 9 rocket from Cape Canaveral Air Force Station in Florida, no earlier than April 16, at 6:32 p.m. EDT.

    The Transiting Exoplanet Survey Satellite (TESS) will discover thousands of exoplanets in orbit around the brightest stars in the sky. In a two-year survey of the solar neighborhood, TESS will monitor more than 200,000 stars for temporary drops in brightness caused by planetary transits. This first-ever spaceborne all-sky transit survey will identify planets ranging from Earth-sized to gas giants, around a wide range of stellar types and orbital distances. No ground-based survey can achieve this feat. (NASA’s Goddard Space Flight Center/CI Lab).

    TESS will spend two years scanning nearly the entire sky — a field of view that can encompass more than 20 million stars. Scientists expect that thousands of these stars will host transiting planets, which they hope to detect through images taken with TESS’s cameras.

    Amid this extrasolar bounty, the TESS science team at MIT aims to measure the masses of at least 50 small planets whose radii are less than four times that of Earth. Many of TESS’s planets should be close enough to our own that, once they are identified by TESS, scientists can zoom in on them using other telescopes, to detect atmospheres, characterize atmospheric conditions, and even look for signs of habitability.

    “TESS is kind of like a scout,” says Natalia Guerrero, deputy manager of TESS Objects of Interest, an MIT-led effort that will catalog objects captured in TESS data that may be potential exoplanets.

    “We’re on this scenic tour of the whole sky, and in some ways we have no idea what we will see,” Guerrero says. “It’s like we’re making a treasure map: Here are all these cool things. Now, go after them.”

    A seed, planted in space

    TESS grew out of an even smaller satellite that was designed and built by MIT and launched into space by NASA on Oct. 9, 2000. The High Energy Transient Explorer 2, or HETE-2, orbited Earth for seven years, on a mission to detect and localize gamma-ray bursts — high-energy explosions that emit massive, fleeting bursts of gamma rays and X-rays.

    The MIT-led NASA High Energy Transient Explorer 2 (HETE-2).

    To detect such extreme, short-lived phenomena, scientists at MIT, led by principal investigator George Ricker, integrated into the satellite a suite of optical and X-ray cameras outfitted with CCDs, or charge-coupled devices, designed to record intensities and positions of light in an electronic format.

    “With the advent of CCDs in the 1970s, you had this fantastic device … which made a lot of things easier for astronomers,” says HETE-2 team member Joel Villasenor, who is now also instrument scientist for TESS. “You just sum up all the pixels on a CCD, which gives you the intensity, or magnitude, of light. So CCDs really broke things open for astronomy.”
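
    As a rough illustration of that pixel-summing idea (a toy sketch, not mission code: the star position, aperture radius, and photometric zero point below are invented):

    ```python
    import numpy as np

    def aperture_flux(image, x0, y0, radius):
        """Sum the CCD pixel values inside a circular aperture around a star."""
        yy, xx = np.indices(image.shape)
        mask = (xx - x0) ** 2 + (yy - y0) ** 2 <= radius ** 2
        return image[mask].sum()

    # Toy frame: flat sky background plus one bright star covering a few pixels.
    rng = np.random.default_rng(0)
    frame = rng.normal(100.0, 5.0, size=(64, 64))   # background counts per pixel
    frame[30:33, 40:43] += 2000.0                   # the star

    sky = 100.0 * np.pi * 4 ** 2                    # approximate background in aperture
    flux = aperture_flux(frame, x0=41, y0=31, radius=4) - sky
    magnitude = -2.5 * np.log10(flux) + 25.0        # arbitrary zero point
    print(f"flux ~ {flux:.0f} counts, instrumental magnitude ~ {magnitude:.2f}")
    ```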

    In 2004, Ricker and the HETE-2 team wondered whether the satellite’s optical cameras could pick out other objects in the sky that had begun to attract the astronomy community: exoplanets. Around this time, fewer than 200 planets outside our solar system had been discovered. A few of these were found with a technique known as the transit method, which involves looking for periodic dips in the light from certain stars, which may signal a planet passing in front of the star.
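
    The size of those dips follows from simple geometry: the fractional drop in starlight is roughly the ratio of the planet’s disk area to the star’s, (Rp/R*)^2. A quick back-of-the-envelope check using this standard formula:

    ```python
    R_SUN_KM, R_EARTH_KM, R_JUPITER_KM = 696_000, 6_371, 69_911

    for name, r_planet in [("Earth", R_EARTH_KM), ("Jupiter", R_JUPITER_KM)]:
        depth = (r_planet / R_SUN_KM) ** 2   # fractional dip in starlight
        print(f"{name}-sized planet, Sun-like star: dip of {depth * 1e6:.0f} ppm")
    # Earth produces a dip of only ~84 parts per million, which suggests why
    # photometry designed for gamma-ray-burst counterparts fell short.
    ```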

    “We were thinking, was the photometry of HETE-2’s cameras sufficient so that we could point to a part of the sky and detect one of these dips? Needless to say, it didn’t exactly work,” Villasenor recalls. “But that was sort of the seed that started us thinking, maybe we should try to fly CCDs with a camera to try and detect these things.”

    A path, cleared

    In 2006, Ricker and his team at MIT proposed a small, low-cost satellite (HETE-S) to NASA as a Discovery-class mission, and later as a privately funded mission for $20 million. But as the cost of, and interest in, an all-sky exoplanet survey grew, they decided instead to seek NASA funding, at a higher level of $120 million. In 2008, they submitted a proposal for a NASA Small Explorer (SMEX) class mission under a new name: TESS.

    At this time, the satellite design included six CCD cameras, and the team proposed that the spacecraft fly in a low-Earth orbit, similar to that of HETE-2. Such an orbit, they reasoned, should keep observing efficiency relatively high, as they had already erected data-receiving ground stations for HETE-2 that could also be put to use for TESS.

    But they soon realized that a low-Earth orbit would have a negative impact on TESS’s much more sensitive cameras. The spacecraft’s reaction to the Earth’s magnetic field, for example, could lead to significant “spacecraft jitter,” producing noise that hides an exoplanet’s telltale dip in starlight.

    NASA bypassed this first proposal, and the team went back to the drawing board, this time emerging with a new plan that hinged on a completely novel orbit. With the help of engineers from Orbital ATK, the Aerospace Corporation, and NASA’s Goddard Space Flight Center, the team identified a never-before-used “lunar-resonant” orbit that would keep the spacecraft extremely stable, while giving it a full-sky view.

    Once TESS reaches this orbit, it will slingshot between the Earth and the moon on a highly elliptical path that could keep TESS orbiting for decades, shepherded by the moon’s gravitational pull.

    “The moon and the satellite are in a sort of dance,” Villasenor says. “The moon pulls the satellite on one side, and by the time TESS completes one orbit, the moon is on the other side tugging in the opposite direction. The overall effect is the moon’s pull is evened out, and it’s a very stable configuration over many years. Nobody’s done this before, and I suspect other programs will try to use this orbit later on.”

    In its current planned trajectory, TESS will swing out toward the moon for less than two weeks, gathering data, then swing back toward the Earth where, on its closest approach, it will transmit the data back to ground stations from 67,000 miles above the surface before swinging back out. Ultimately, this orbit will save TESS a huge amount of fuel, as it won’t need to burn its thrusters on a regular basis to keep on its path.
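
    For scale, the mission’s planned orbit is in a 2:1 resonance with the moon, a period of roughly 13.7 days, or half a lunar month (a figure from mission documentation, not this article). A quick Kepler’s-third-law sketch recovers the size of the orbit described above:

    ```python
    import math

    GM_EARTH = 3.986004e5            # km^3 / s^2
    period_s = 13.7 * 86400          # ~half the lunar month, in seconds

    # Kepler's third law: a^3 = GM * T^2 / (4 * pi^2)
    a_km = (GM_EARTH * period_s ** 2 / (4 * math.pi ** 2)) ** (1 / 3)
    print(f"semi-major axis ~ {a_km:,.0f} km")
    # ~240,000 km: consistent with a perigee near 67,000 miles (~108,000 km)
    # and an apogee that swings out toward the moon's distance.
    ```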

    With this revamped orbit, the TESS team submitted a second proposal in 2010, this time as an Explorer class mission, which NASA approved in 2013. It was around this time that the Kepler Space Telescope ended its original survey for exoplanets. The observatory, which was launched in 2009, stared at one specific patch of the sky for four years, to monitor the light from distant stars for signs of transiting planets.

    By 2013, two of Kepler’s four reaction wheels had worn out, preventing the spacecraft from continuing its original survey. At this point, the telescope’s measurements had enabled the discovery of nearly 1,000 confirmed exoplanets. Kepler, designed to study far-off stars, paved the way for TESS, a mission with a much wider view, to scan the nearest stars to Earth.

    “Kepler went up, and was this huge success, and researchers said, ‘We can do this kind of science, and there are planets everywhere,’” says TESS member Jennifer Burt, an MIT-Kavli postdoc. “And I think that was really the scientific check box that we needed for NASA to say, ‘Okay, TESS makes a lot of sense now.’ It’ll enable not just detecting planets, but finding planets that we can thoroughly characterize after the fact.”

    Stripes in the sky

    With the selection by NASA, the TESS team set up facilities on campus and in MIT’s Lincoln Laboratory to build and test the spacecraft’s cameras. The engineers designed “deep depletion” CCDs specifically for TESS, meaning that the cameras can detect light over a wide range of wavelengths, up to the near-infrared. This is important, as many of the nearby stars TESS will monitor are red dwarfs — small, cool stars that shine less brightly than the sun and emit mostly in the infrared part of the electromagnetic spectrum.

    If scientists can detect periodic dips in the light from such stars, this may signal the presence of planets with significantly tighter orbits than that of Earth. Nevertheless, there is a chance that some of these planets may be within the “habitable zone,” as they would circle much cooler stars, compared with the sun. Since these stars are relatively close by, scientists can do follow-up observations with ground-based telescopes to help identify whether conditions might indeed be suitable for life.

    TESS’s cameras are mounted on the top of the satellite and surrounded by a protective cone to shield them from other forms of electromagnetic radiation. Each camera has a 24 by 24 degree view of the sky, large enough to encompass the Orion constellation. The satellite will start its observations in the Southern Hemisphere and will divide the sky into 13 stripes, monitoring each segment for 27 days before pivoting to the next. TESS should be able to observe nearly the entire sky in the Southern Hemisphere in its first year, before moving on to the Northern Hemisphere in its second year.

    While TESS points at one stripe of the sky, its cameras will take pictures of the stars in that portion. Ricker and his colleagues have made a list of 200,000 nearby, bright stars that they would particularly want to observe. The satellite’s cameras will create “postage stamp” images that include pixels around each of these stars. These images will be taken every two minutes, in order to maximize the chance of catching the moment that a planet crosses in front of its star. The cameras will also take full-frame images of all the stars in a particular stripe of the sky, every 30 minutes.
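
    Here is a minimal sketch of the postage-stamp idea: cutting small pixel windows around catalog stars out of a full-frame image (the stamp size and star positions below are invented, not mission values):

    ```python
    import numpy as np

    def postage_stamp(full_frame, row, col, half_size=5):
        """Cut a small pixel window around a target star out of a full frame."""
        return full_frame[row - half_size: row + half_size + 1,
                          col - half_size: col + half_size + 1]

    full_frame = np.zeros((2048, 2048))       # one CCD-sized image
    targets = [(1024, 512), (300, 1700)]      # pixel positions of catalog stars
    stamps = [postage_stamp(full_frame, r, c) for r, c in targets]
    # Stamps like these would be recorded every 2 minutes for the 200,000
    # target stars; the full frames themselves every 30 minutes.
    ```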

    “With the two-minute pictures, you can get a movie-like image of what the starlight is doing as the planet is crossing in front of its host star,” Guerrero says. “For the 30-minute images, people are excited about maybe seeing supernovae, asteroids, or counterparts to gravitational waves. We have no idea what we’re going to see at that timescale.”

    Are we alone?

    After TESS launches, the team expects that the satellite will establish contact within the first week, during which it will turn on all its instruments and cameras. Then, there will be a 60-day commissioning phase, as engineers and scientists at Orbital ATK, NASA, and MIT calibrate the instruments and monitor the satellite’s trajectory and performance. After that, TESS will begin to collect and downlink images of the sky. Scientists at MIT and NASA will take the raw data and convert it into light curves that indicate the changing brightness of a star over time.

    From there, the TESS Science Team, including Sara Seager, the Class of 1941 Professor of Earth, Atmospheric and Planetary Sciences, and deputy director of science for TESS, will look through thousands of light curves, for at least two similar dips in starlight, indicating that a planet may have passed twice in front of its star. Seager and her colleagues will then employ a battery of methods to determine the mass of a potential planet.
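
    A toy version of that dip search, run on a synthetic 27-day light curve (the two-minute cadence matches the article; the planet’s period, transit depth, and detection threshold are made up):

    ```python
    import numpy as np

    def count_dips(flux, depth_sigma=5.0):
        """Count separate runs of points dipping well below the median flux."""
        thresh = np.median(flux) - depth_sigma * np.std(flux)
        below = flux < thresh
        return int(np.sum(below[1:] & ~below[:-1]) + below[0])

    t = np.arange(0, 27.0, 2 / 1440)                  # 27 days at 2-min cadence
    flux = 1 + np.random.default_rng(1).normal(0, 1e-4, t.size)
    period, duration, depth = 6.3, 0.1, 0.001         # made-up planet
    flux[(t % period) < duration] -= depth            # inject transits
    print("dips found:", count_dips(flux))            # two or more similar dips
    ```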

    “Mass is a defining planetary characteristic,” Seager says. “If you just know that a planet is twice the size of Earth, it could be a lot of things: a rocky world with a thin atmosphere, or what we call a ‘mini-Neptune’ — a rocky world with a giant gas envelope, where it would be a huge greenhouse blanket, and there would be no life on the surface. So mass and size together give us an average planet density, which tells us a huge amount about what the planet is.”
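
    The arithmetic behind Seager’s point is short: bulk density is mass divided by volume. A worked example with a hypothetical planet of five Earth masses and twice Earth’s radius:

    ```python
    import math

    M_EARTH_KG, R_EARTH_M = 5.972e24, 6.371e6

    def bulk_density(mass_kg, radius_m):
        return mass_kg / ((4 / 3) * math.pi * radius_m ** 3)

    rho_earth = bulk_density(M_EARTH_KG, R_EARTH_M)            # ~5,500 kg/m^3
    # Doubling the radius multiplies the volume by 8, so a 5-Earth-mass planet
    # at twice Earth's radius is much less dense: likely volatile-rich.
    rho_candidate = bulk_density(5 * M_EARTH_KG, 2 * R_EARTH_M)
    print(f"Earth: {rho_earth:.0f} kg/m^3, candidate: {rho_candidate:.0f} kg/m^3")
    ```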

    During TESS’s two-year mission, Seager and her colleagues aim to measure the masses of 50 planets with radii less than four times that of Earth — dimensions small enough to warrant follow-up observations for signs of habitability. Meanwhile, the whole scientific community and public will get a chance to search through TESS data for their own exoplanets. Once the data are calibrated, the team will make them publicly available. Anyone — high school students, armchair astronomers, other research institutions — will be able to download the data and draw their own interpretations.

    With so many eyes on TESS’s data, Seager says there’s a chance that, some day, a nearby planet discovered by TESS might be found to have signs of life.

    “There’s no science that will tell us life is out there right now, except that small rocky planets appear to be incredibly common,” Seager says. “They appear to be everywhere we look. So it’s got to be there somewhere.”

    TESS is a NASA Astrophysics Explorer mission led and operated by MIT in Cambridge, Massachusetts, and managed by NASA’s Goddard Space Flight Center in Greenbelt, Maryland. George Ricker of MIT’s Kavli Institute for Astrophysics and Space Research serves as principal investigator for the mission. Additional partners include Orbital ATK, NASA’s Ames Research Center, the Harvard-Smithsonian Center for Astrophysics, and the Space Telescope Science Institute. More than a dozen universities, research institutes, and observatories worldwide are participants in the mission.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition


    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

     
  • richardmitnick 5:24 pm on April 1, 2018 Permalink | Reply
    Tags: Computer searches telescope data for evidence of distant planets, MIT

    From MIT: “Computer searches telescope data for evidence of distant planets” 

    MIT News

    March 29, 2018
    Larry Hardesty

    A young sun-like star encircled by its planet-forming disk of gas and dust.
    Image: NASA/JPL-Caltech

    Machine-learning system uses physics principles to augment data from NASA crowdsourcing project.

    As part of an effort to identify distant planets hospitable to life, NASA has established a crowdsourcing project in which volunteers search telescopic images for evidence of debris disks around stars, which are good indicators of exoplanets.

    Using the results of that project, researchers at MIT have now trained a machine-learning system to search for debris disks itself. The scale of the search demands automation: There are nearly 750 million possible light sources in the data accumulated through NASA’s Wide-Field Infrared Survey Explorer (WISE) mission alone.

    NASA/WISE Telescope

    In tests, the machine-learning system agreed with human identifications of debris disks 97 percent of the time. The researchers also trained their system to rate debris disks according to their likelihood of containing detectable exoplanets. In a paper describing the new work in the journal Astronomy and Computing, the MIT researchers report that their system identified 367 previously unexamined celestial objects as particularly promising candidates for further study.

    The work represents an unusual approach to machine learning, which has been championed by one of the paper’s coauthors, Victor Pankratius, a principal research scientist at MIT’s Haystack Observatory. Typically, a machine-learning system will comb through a wealth of training data, looking for consistent correlations between features of the data and some label applied by a human analyst — in this case, stars circled by debris disks.
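
    A minimal sketch of that standard supervised setup, using scikit-learn with synthetic stand-ins for both the image features and the volunteers’ labels (the feature count and label rule are invented):

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic stand-ins: one feature vector per light source (e.g. brightness
    # in several bands, shape metrics) and a human label from the crowdsourcing
    # project (1 = debris disk, 0 = not).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 8))
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, 5000) > 1).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    print(f"agreement with held-out human labels: {clf.score(X_test, y_test):.1%}")
    ```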

    But Pankratius argues that in the sciences, machine-learning systems would be more useful if they explicitly incorporated a little bit of scientific understanding, to help guide their searches for correlations or identify deviations from the norm that could be of scientific interest.

    “The main vision is to go beyond what A.I. is focusing on today,” Pankratius says. “Today, we’re collecting data, and we’re trying to find features in the data. You end up with billions and billions of features. So what are you doing with them? What you want to know as a scientist is not that the computer tells you that certain pixels are certain features. You want to know ‘Oh, this is a physically relevant thing, and here are the physics parameters of the thing.’”

    Classroom conception

    The new paper grew out of an MIT seminar that Pankratius co-taught with Sara Seager, the Class of 1941 Professor of Earth, Atmospheric, and Planetary Sciences, who is well-known for her exoplanet research. The seminar, Astroinformatics for Exoplanets, introduced students to data science techniques that could be useful for interpreting the flood of data generated by new astronomical instruments. After mastering the techniques, the students were asked to apply them to outstanding astronomical questions.

    For her final project, Tam Nguyen, a graduate student in aeronautics and astronautics, chose the problem of training a machine-learning system to identify debris disks, and the new paper is an outgrowth of that work. Nguyen is first author on the paper, and she’s joined by Seager, Pankratius, and Laura Eckman, an undergraduate majoring in electrical engineering and computer science.

    From the NASA crowdsourcing project, the researchers had the celestial coordinates of the light sources that human volunteers had identified as featuring debris disks. The disks are recognizable as ellipses of light with slightly brighter ellipses at their centers. The researchers also used the raw astronomical data generated by the WISE mission.

    To prepare the data for the machine-learning system, Nguyen carved it up into small chunks, then used standard signal-processing techniques to filter out artifacts caused by the imaging instruments or by ambient light. Next, she identified those chunks with light sources at their centers, and used existing image-segmentation algorithms to remove any additional sources of light. These types of procedures are typical in any computer-vision machine-learning project.
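
    A hedged sketch of what one such preprocessing step might look like (the filter size, detection threshold, and centering tolerance are invented; the paper’s actual pipeline may differ):

    ```python
    import numpy as np
    from scipy import ndimage

    def preprocess_chunk(chunk):
        """Background-subtract a chunk, then keep it only if a single light
        source sits near its center (all tuning values are illustrative)."""
        background = ndimage.median_filter(chunk, size=15)      # smooth artifacts
        cleaned = chunk - background
        labels, n = ndimage.label(cleaned > 3 * cleaned.std())  # segment sources
        if n == 0:
            return None
        sums = ndimage.sum(cleaned, labels, range(1, n + 1))
        brightest = int(np.argmax(sums)) + 1
        cy, cx = ndimage.center_of_mass(cleaned, labels, brightest)
        h, w = chunk.shape
        if abs(cy - h / 2) < 3 and abs(cx - w / 2) < 3:
            # Zero out every other source, keeping only the central one.
            return np.where(labels == brightest, cleaned, 0.0)
        return None
    ```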

    Coded intuitions

    But Nguyen used basic principles of physics to prune the data further. For one thing, she looked at the variation in the intensity of the light emitted by the light sources across four different frequency bands. She also used standard metrics to evaluate the position, symmetry, and scale of the light sources, establishing thresholds for inclusion in her data set.
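
    In the same spirit, a toy version of the physics-motivated pruning (every threshold below is invented for illustration; WISE’s four bands, W1 through W4, span 3.4 to 22 microns):

    ```python
    def passes_physics_cuts(w1, w2, w3, w4, symmetry, extent_arcsec):
        """Illustrative physics-motivated cuts (all thresholds invented).
        A debris disk re-radiates absorbed starlight at long wavelengths,
        so the flux should not fall off too steeply from W1 to W4."""
        infrared_excess = (w3 + w4) > 0.1 * (w1 + w2)   # long-wavelength excess
        compact = extent_arcsec < 30                    # plausibly one system
        round_enough = symmetry > 0.7                   # near-elliptical shape
        return infrared_excess and compact and round_enough
    ```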

    In addition to the tagged debris disks from NASA’s crowdsourcing project, the researchers also had a short list of stars that astronomers had identified as probably hosting exoplanets. From that information, their system also inferred characteristics of debris disks that were correlated with the presence of exoplanets, to select the 367 candidates for further study.

    “Given the scalability challenges with big data, leveraging crowdsourcing and citizen science to develop training data sets for machine-learning classifiers for astronomical observations and associated objects is an innovative way to address challenges not only in astronomy but also several different data-intensive science areas,” says Dan Crichton, who leads the Center for Data Science and Technology at NASA’s Jet Propulsion Laboratory. “The use of the computer-aided discovery pipeline described to automate the extraction, classification, and validation process is going to be helpful for systematizing how these capabilities can be brought together. The paper does a nice job of discussing the effectiveness of this approach as applied to debris disk candidates. The lessons learned are going to be important for generalizing the techniques to other astronomy and different discipline applications.”

    “The Disk Detective science team has been working on its own machine-learning project, and now that this paper is out, we’re going to have to get together and compare notes,” says Marc Kuchner, a senior astrophysicist at NASA’s Goddard Space Flight Center and leader of the crowdsourcing disk-detection project known as Disk Detective. “I’m really glad that Nguyen is looking into this because I really think that this kind of machine-human cooperation is going to be crucial for analyzing the big data sets of the future.”

    See the full article here.

     
  • richardmitnick 12:53 pm on March 29, 2018 Permalink | Reply
    Tags: Gran Sasso National Laboratories (LNGS), Is the neutrino its own antiparticle?, MIT

    From MIT News: “Scientists report first results from CUORE neutrino experiment” 

    MIT News

    March 26, 2018
    Jennifer Chu

    Researchers working on the cryostat. Image: CUORE Collaboration

    Data could shed light on why the universe has more matter than antimatter.

    This week, an international team of physicists, including researchers at MIT, is reporting the first results from an underground experiment designed to answer one of physics’ most fundamental questions: Why is our universe made mostly of matter?

    According to theory, the Big Bang should have produced equal amounts of matter and antimatter — the latter consisting of “antiparticles” that are essentially mirror images of matter, only bearing charges opposite to those of protons, electrons, neutrons, and other particle counterparts. And yet, we live in a decidedly material universe, made mostly of galaxies, stars, planets, and everything we see around us — and very little antimatter.

    Physicists posit that some process must have tilted the balance in favor of matter during the first moments following the Big Bang. One such theoretical process involves the neutrino — a particle that, despite having almost no mass and interacting very little with other matter, is thought to permeate the universe, with trillions of the ghostlike particles streaming harmlessly through our bodies every second.

    There is a possibility that the neutrino may be its own antiparticle, meaning that it may have the ability to transform between a matter and antimatter version of itself. If that is the case, physicists believe this might explain the universe’s imbalance, as heavier neutrinos, produced immediately after the Big Bang, would have decayed asymmetrically, producing more matter, rather than antimatter, versions of themselves.

    One way to confirm that the neutrino is its own antiparticle is to detect an exceedingly rare process known as a “neutrinoless double-beta decay,” in which a stable isotope, such as tellurium or xenon, gives off certain particles, including electrons and antineutrinos, as it naturally decays. If the neutrino is indeed its own antiparticle, then according to the rules of physics the antineutrinos should cancel each other out, and this decay process should be “neutrinoless.” Any measurement of this process should record only the electrons escaping from the isotope.

    The underground experiment known as CUORE, for the Cryogenic Underground Observatory for Rare Events, is designed to detect a neutrinoless double-beta decay from the natural decay of 988 crystals of tellurium dioxide.

    The CUORE experiment, at the Italian National Institute for Nuclear Physics’ (INFN) Gran Sasso National Laboratories (LNGS) in Italy, searches for neutrinoless double-beta decay.

    In a paper published this week in Physical Review Letters, researchers, including physicists at MIT, report on the first two months of data collected by CUORE (Italian for “heart”). And while they have not yet detected the telltale process, they have been able to set the most stringent limits yet on the amount of time that such a process should take, if it exists at all. Based on their results, they estimate that a single atom of tellurium should undergo a neutrinoless double-beta decay, at most, once every 10 septillion (1 followed by 25 zeros) years.

    Taking into account the massive number of atoms within the experiment’s 988 crystals, the researchers predict that within the next five years they should be able to detect at least five atoms undergoing this process, if it exists, providing definitive proof that the neutrino is its own antiparticle.
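
    Turning a half-life limit into an expected number of decays is simple arithmetic; a sketch with round numbers taken from this article (the team’s figure of at least five detections presumably also folds in isotopic abundance, detector efficiency, and analysis cuts):

    ```python
    import math

    N_ATOMS = 1e26          # ~100 septillion atoms of the tellurium isotope
    T_HALF_YEARS = 1e25     # half-life at the limit quoted above
    EXPOSURE_YEARS = 5

    # For times much shorter than the half-life,
    # expected decays ~ N * ln(2) * t / T_half
    expected = N_ATOMS * math.log(2) * EXPOSURE_YEARS / T_HALF_YEARS
    print(f"expected decays in {EXPOSURE_YEARS} years: {expected:.0f}")
    ```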

    “It’s a very rare process — if observed, it would be the slowest thing that has ever been measured,” says CUORE member Lindley Winslow, a member of the Laboratory for Nuclear Science, and the Jerrold R. Zacharias Career Development Assistant Professor of Physics at MIT, who led the analysis. “The big excitement here is that we were able to run 988 crystals together, and now we’re on a path to try and see something.”

    The CUORE collaboration includes some 150 scientists primarily from Italy and the U.S., including Winslow and a small team of postdocs and graduate students from MIT.

    Coldest cube in the universe

    The CUORE experiment is housed underground, in the Italian National Institute for Nuclear Physics’ (INFN) Gran Sasso National Laboratories, buried deep within a mountain in central Italy, in order to shield it from external stimuli such as the constant bombardment of radiation from sources in the universe.

    Laboratori Nazionali del Gran Sasso, located in the Abruzzo region of central Italy.

    The heart of the experiment is a detector consisting of 19 towers, each containing 52 cube-shaped crystals of tellurium dioxide, totaling 988 crystals in all, with a mass of about 742 kilograms, or 1,600 pounds. Scientists estimate that this amount of crystals embodies around 100 septillion atoms of the particular tellurium isotope. Electronics and temperature sensors are attached to each crystal to monitor signs of their decay.

    The entire detector resides within an ultracold refrigerator, about the size of a vending machine, which maintains a steady temperature of 6 millikelvin, or -459.6 degrees Fahrenheit. Researchers in the collaboration have previously calculated that this refrigerator is the coldest cubic meter that exists in the universe.

    The experiment needs to be kept exceedingly cold in order to detect minute changes in temperature generated by the decay of a single tellurium atom. In a normal double-beta decay process, a tellurium atom gives off two electrons, as well as two antineutrinos, which amount to a certain energy in the form of heat. In the event of a neutrinoless double-beta decay, the two antineutrinos should cancel each other out, and only the energy released by the two electrons would be generated. Physicists have previously calculated that this energy must be around 2.5 megaelectronvolts (MeV).

    In the first two months of CUORE’s operation, scientists have essentially been taking the temperature of the 988 tellurium crystals, looking for any minuscule spike in energy around that 2.5 MeV mark.

    “CUORE is like a gigantic thermometer,” Winslow says. “Whenever you see a heat deposit on a crystal, you end up seeing a pulse that you can digitize. Then you go through and look at these pulses, and the height and width of the pulse corresponds to how much energy was there. Then you zoom in and count how many events were at 2.5 MeV, and we basically saw nothing. Which is probably good because we weren’t expecting to see anything in the first two months of data.”
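
    A toy version of that counting step (the region-of-interest width is invented; the 2.528 MeV value is the published decay energy of tellurium-130, consistent with the figure of around 2.5 MeV above):

    ```python
    import numpy as np

    def count_events_near_q_value(pulse_energies_mev, q_value=2.528, window=0.05):
        """Count digitized pulses whose reconstructed energy lands in a small
        region of interest around the expected decay energy."""
        e = np.asarray(pulse_energies_mev)
        return int(np.sum(np.abs(e - q_value) < window))

    # Toy spectrum: background pulses spread over 0-3 MeV, no signal peak.
    background = np.random.default_rng(0).uniform(0.0, 3.0, size=10_000)
    print("events in region of interest:", count_events_near_q_value(background))
    ```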

    The heart will go on

    The results more or less indicate that, within the short window in which CUORE has so far operated, not one of the 1,000 septillion tellurium atoms in the detector underwent a neutrinoless double-beta decay. Statistically speaking, this means that it would take at least 10 septillion (10^25) years for a single atom to undergo this process, if the neutrino is in fact its own antiparticle.

    “For tellurium dioxide, this is the best limit for the lifetime of this process that we’ve ever gotten,” Winslow says.

    CUORE will continue to monitor the crystals for the next five years, and researchers are now designing the experiment’s next generation, which they have dubbed CUPID — a detector that will look for the same process within an even greater number of atoms. Beyond CUPID, Winslow says there is just one more, bigger iteration that would be possible, before scientists can make a definitive conclusion.

    “If we don’t see it within 10 to 15 years, then, unless nature chose something really weird, the neutrino is most likely not its own antiparticle,” Winslow says. “Particle physics tells you there’s not much more wiggle room for the neutrino to still be its own antiparticle, and for you not to have seen it. There’s not that many places to hide.”

    This research is supported by the National Institute for Nuclear Physics (INFN) in Italy, the National Science Foundation, the Alfred P. Sloan Foundation, and the U.S. Department of Energy.

    See the full article here.

     
  • richardmitnick 1:12 pm on March 11, 2018 Permalink | Reply
    Tags: Eni, MIT

    From MIT: “A new era in fusion research at MIT” 

    MIT News

    March 9, 2018
    Francesca McCaffrey | MIT Energy Initiative

    MIT Energy Initiative founding member Eni announces support for key research through MIT Laboratory for Innovation in Fusion Technologies.

    A new chapter is beginning for fusion energy research at MIT.

    This week the Italian energy company Eni, a founding member of the MIT Energy Initiative (MITEI), announced it has reached an agreement with MIT to fund fusion research projects run out of the MIT Plasma Science and Fusion Center (PSFC)’s newly created Laboratory for Innovation in Fusion Technologies (LIFT). The expected investment in these research projects will amount to about $2 million over the coming years.

    This is part of a broader engagement with fusion research and the Institute as a whole: Eni also announced a commitment of $50 million to a new private company with roots at MIT, Commonwealth Fusion Systems (CFS), which aims to make affordable, scalable fusion power a reality.

    “This support of LIFT is a continuation of Eni’s commitment to meeting growing global energy demand while tackling the challenge of climate change through its research portfolio at MIT,” says Robert C. Armstrong, MITEI’s director and the Chevron Professor of Chemical Engineering at MIT. “Fusion is unique in that it is a zero-carbon, dispatchable, baseload technology, with a limitless supply of fuel, no risk of runaway reaction, and no generation of long-term waste. It also produces thermal energy, so it can be used for heat as well as power.”

    Still, there is much more to do along the way to perfecting the design and economics of compact fusion power plants. Eni will fund research projects at LIFT that are a continuation of this research and focus on fusion-specific solutions. “We are thrilled at PSFC to have these projects funded by Eni, who has made a clear commitment to developing fusion energy,” says Dennis Whyte, the director of PSFC and the Hitachi America Professor of Engineering at MIT. “LIFT will focus on cutting-edge technology advancements for fusion, and will significantly engage our MIT students who are so adept at innovation.”

    Tackling fusion’s challenges

    The inside of a fusion device is an extreme environment. The creation of fusion energy requires the smashing together of light elements, such as hydrogen, to form heavier elements such as helium, a process that releases immense amounts of energy. The temperature at which this process takes place is too hot for solid materials, necessitating the use of magnets to hold the hot plasma in place.

    One of the projects PSFC and Eni intend to carry out will study the effects of high magnetic fields on molten salt fluid dynamics. One of the key elements of the fusion pilot plant currently being studied at LIFT is the liquid immersion blanket, essentially a flowing pool of molten salt that completely surrounds the fusion energy core. The purpose of this blanket is threefold: to convert the kinetic energy of fusion neutrons to heat for eventual electricity production; to produce tritium — a main component of the fusion fuel; and to prevent the neutrons from reaching other parts of the machine and causing material damage.

    It’s critical for researchers to be able to predict how the molten salt in such an immersion blanket would move when subjected to high magnetic fields such as those found within a fusion plant. As such, the researchers and their respective teams plan to study the effects of these magnetohydrodynamic forces on the salt’s fluid dynamics.
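
    A standard way to gauge how strongly a magnetic field grips a conducting fluid is the dimensionless Hartmann number, a textbook magnetohydrodynamics quantity (the property values below are illustrative, not LIFT design numbers):

    ```python
    import math

    def hartmann_number(B_tesla, length_m, sigma_S_per_m, mu_Pa_s):
        """Ha = B * L * sqrt(sigma / mu): magnetic vs. viscous forces."""
        return B_tesla * length_m * math.sqrt(sigma_S_per_m / mu_Pa_s)

    # Illustrative inputs: a ~1 m salt channel in a ~10 T field, with
    # molten-salt-like electrical conductivity and viscosity.
    Ha = hartmann_number(B_tesla=10.0, length_m=1.0,
                         sigma_S_per_m=200.0, mu_Pa_s=0.006)
    print(f"Hartmann number ~ {Ha:,.0f}")   # >> 1: the field reshapes the flow
    ```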

    A history of innovation

    During the 23 years MIT’s Alcator C-Mod tokamak fusion experiment was in operation, it repeatedly advanced records for plasma pressure in a magnetic confinement device. Its compact, high-magnetic-field fusion design confined superheated plasma in a small donut-shaped chamber.

    “The key to this success was the innovations pursued more than 20 years ago at PSFC in developing copper magnets that could access fields well in excess of other fusion experiments. The coupling between innovative technology development and advancing fusion science is in the DNA of the Plasma Science and Fusion Center,” says PSFC Deputy Director Martin Greenwald.

    In its final run in 2016, Alcator C-Mod set a new world record for plasma pressure, the key ingredient to producing net energy from fusion. Since then, PSFC researchers have used data from these decades of C-Mod experiments to continue to advance fusion research. Just last year, they used C-Mod data to create a new method of heating fusion plasmas in tokamaks, which could result in the heating of ions to energies an order of magnitude greater than previously reached.

    A commitment to low-carbon energy

    MITEI’s mission is to advance low-carbon and no-carbon emissions solutions to efficiently meet growing global energy needs. Critical to this mission are collaborations between academia, industry, and government — connections MITEI helps to develop in its role as MIT’s hub for multidisciplinary energy research, education, and outreach.

    Eni is an inaugural, founding member of the MIT Energy Initiative, and it was through their engagement with MITEI that they became aware of the fusion technology commercialization being pursued by CFS and its immense potential for revolutionizing the energy system. It was through these discussions, as well, that Eni investors learned of the high-potential fusion research projects taking place through LIFT at MIT, spurring them to support the future of fusion at the Institute itself.

    Eni CEO Claudio Descalzi said, “Today is a very important day for us. Thanks to this agreement, Eni takes a significant step forward toward the development of alternative energy sources with an ever lower environmental impact. Fusion is the true energy source of the future, as it is completely sustainable, does not release emissions or waste, and is potentially inexhaustible. It is a goal that we are determined to reach quickly.” He added, “We are pleased and excited to pursue such a challenging goal with a collaborator like MIT, with unparalleled experience in the field and a long-standing and fruitful alliance with Eni.”

    These fusion projects are the latest in a line of MIT-Eni collaborations on low- and no-carbon energy projects. One of the earliest of these was the Eni-MIT Solar Frontiers Center, established in 2010 at MIT. Through its mission to develop competitive solar technologies, the center’s research has yielded the thinnest, lightest solar cells ever produced, effectively able to turn any surface, from fabric to paper, into a functioning solar cell. The researchers at the center have also developed new, luminescent materials that could allow windows to efficiently collect solar power.

    Other fruits of MIT-Eni collaborations include research into carbon capture systems to be installed in cars, wearable technologies to improve workplace safety, energy storage, and the conversion of carbon dioxide into fuel.

    See the full article here.

     
  • richardmitnick 4:32 pm on March 9, 2018 Permalink | Reply
    Tags: MIT

    From MIT: “MIT and newly formed company launch novel approach to fusion power” 

    MIT News

    March 9, 2018
    David Chandler

    Visualization of the proposed SPARC tokamak experiment. Using high-field magnets built with newly available high-temperature superconductors, this experiment would produce the first controlled fusion plasma with net energy output. Visualization by Ken Filar, PSFC research affiliate.

    Goal is for research to produce a working pilot plant within 15 years.

    Progress toward the long-sought dream of fusion power — potentially an inexhaustible and zero-carbon source of energy — could be about to take a dramatic leap forward.

    Development of this carbon-free, combustion-free source of energy is now on a faster track toward realization, thanks to a collaboration between MIT and a new private company, Commonwealth Fusion Systems. CFS will join with MIT to carry out rapid, staged research leading to a new generation of fusion experiments and power plants based on advances in high-temperature superconductors — work made possible by decades of federal government funding for basic research.

    CFS is announcing today that it has attracted an investment of $50 million in support of this effort from the Italian energy company Eni. In addition, CFS continues to seek the support of additional investors. CFS will fund fusion research at MIT as part of this collaboration, with an ultimate goal of rapidly commercializing fusion energy and establishing a new industry.

    “This is an important historical moment: Advances in superconducting magnets have put fusion energy potentially within reach, offering the prospect of a safe, carbon-free energy future,” says MIT President L. Rafael Reif. “As humanity confronts the rising risks of climate disruption, I am thrilled that MIT is joining with industrial allies, both longstanding and new, to run full-speed toward this transformative vision for our shared future on Earth.”

    “Everyone agrees on the eventual impact and the commercial potential of fusion power, but then the question is: How do you get there?” adds Commonwealth Fusion Systems CEO Robert Mumgaard SM ’15, PhD ’15. “We get there by leveraging the science that’s already developed, collaborating with the right partners, and tackling the problems step by step.”

    MIT Vice President for Research Maria Zuber, who has written an op-ed on the importance of this news that appears in today’s Boston Globe, notes that MIT’s collaboration with CFS required concerted effort among people and offices at MIT that support innovation: “We are grateful for the MIT team that worked tirelessly to form this collaboration. Associate Provost Karen Gleason’s leadership was instrumental — as was the creativity, diligence, and care of the Office of the General Counsel, the Office of Sponsored Programs, the Technology Licensing Office, and the MIT Energy Initiative. A great job by all.”

    Superconducting magnets are key

    Fusion, the process that powers the sun and stars, involves light elements, such as hydrogen, smashing together to form heavier elements, such as helium — releasing prodigious amounts of energy in the process. This process produces net energy only at extreme temperatures of hundreds of millions of degrees Celsius, too hot for any solid material to withstand. To get around that, fusion researchers use magnetic fields to hold in place the hot plasma — a kind of gaseous soup of subatomic particles — keeping it from coming into contact with any part of the donut-shaped chamber.

    The new effort aims to build a compact device capable of generating 100 million watts, or 100 megawatts (MW), of fusion power. This device will, if all goes according to plan, demonstrate key technical milestones needed to ultimately achieve a full-scale prototype of a fusion power plant that could set the world on a path to low-carbon energy. If widely disseminated, such fusion power plants could meet a substantial fraction of the world’s growing energy needs while drastically curbing the greenhouse gas emissions that are causing global climate change.

    “Today is a very important day for us,” says Eni CEO Claudio Descalzi. “Thanks to this agreement, Eni takes a significant step forward toward the development of alternative energy sources with an ever-lower environmental impact. Fusion is the true energy source of the future, as it is completely sustainable, does not release emissions or long-term waste, and is potentially inexhaustible. It is a goal that we are increasingly determined to reach quickly.”

    CFS will support more than $30 million of MIT research over the next three years through investments by Eni and others. This work will aim to develop the world’s most powerful large-bore superconducting electromagnets — the key component that will enable construction of a much more compact version of a fusion device called a tokamak. The magnets, based on a superconducting material that has only recently become available commercially, will produce a magnetic field four times as strong as that employed in any existing fusion experiment, enabling a more than tenfold increase in the power produced by a tokamak of a given size.
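
    For context, a commonly cited tokamak rule of thumb (not stated in the article) is that fusion power density at fixed machine size and plasma pressure ratio scales roughly as the fourth power of the magnetic field, so even modest field gains pay off steeply:

    ```python
    def relative_power_density(field_ratio, exponent=4):
        """Rule-of-thumb scaling: fusion power density ~ B^4 at fixed size."""
        return field_ratio ** exponent

    for ratio in (1.5, 2.0):
        print(f"{ratio}x field -> ~{relative_power_density(ratio):.0f}x power density")
    ```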

    Conceived at PSFC

    The project was conceived by researchers from MIT’s Plasma Science and Fusion Center, led by PSFC Director Dennis Whyte, Deputy Director Martin Greenwald, and a team that grew to include representatives from across MIT, involving disciplines from engineering to physics to architecture to economics. The core PSFC team included Mumgaard, Dan Brunner PhD ’13, and Brandon Sorbom PhD ’17 — all now leading CFS — as well as Zach Hartwig PhD ’14, now an assistant professor of nuclear science and engineering at MIT.

    Once the superconducting electromagnets are developed by researchers at MIT and CFS — expected to occur within three years — MIT and CFS will design and build a compact and powerful fusion experiment, called SPARC, using those magnets. The experiment will be used for what is expected to be a final round of research enabling design of the world’s first commercial power-producing fusion plants.

    SPARC is designed to produce about 100 MW of heat. While it will not turn that heat into electricity, it will produce, in pulses of about 10 seconds, as much power as is used by a small city. That output would be more than twice the power used to heat the plasma, achieving the ultimate technical milestone: positive net energy from fusion.

    This demonstration would establish that a new power plant of about twice SPARC’s diameter, capable of producing commercially viable net power output, could go ahead toward final design and construction. Such a plant would become the world’s first true fusion power plant, with a capacity of 200 MW of electricity, comparable to that of most modern commercial electric power plants. At that point, its implementation could proceed rapidly and with little risk, and such power plants could be demonstrated within 15 years, say Whyte, Greenwald, and Hartwig.

    Complementary to ITER

    The project is expected to complement the research planned for a large international collaboration called ITER, currently under construction as the world’s largest fusion experiment at a site in southern France.

    The ITER tokamak in Saint-Paul-lès-Durance, in southern France.

    If successful, ITER is expected to begin producing fusion energy around 2035.

    “Fusion is way too important for only one track,” says Greenwald, who is a senior research scientist at PSFC.

    By using magnets made from the newly available superconducting material — a steel tape coated with a compound called yttrium-barium-copper oxide (YBCO) — SPARC is designed to produce a fusion power output about a fifth that of ITER, but in a device that is only about 1/65 the volume, Hartwig says. The ultimate benefit of the YBCO tape, he adds, is that it drastically reduces the cost, timeline, and organizational complexity required to build net fusion energy devices, enabling new players and new approaches to fusion energy at university and private company scale.

    The way these high-field magnets slash the size of plants needed to achieve a given level of power has repercussions that reverberate through every aspect of the design. Components that would otherwise be so large that they would have to be manufactured on-site could instead be factory-built and trucked in; ancillary systems for cooling and other functions would all be scaled back proportionately; and the total cost and time for design and construction would be drastically reduced.

    “What you’re looking for is power production technologies that are going to play nicely within the mix that’s going to be integrated on the grid in 10 to 20 years,” Hartwig says. “The grid right now is moving away from these two- or three-gigawatt monolithic coal or fission power plants. A large fraction of power production facilities in the U.S. is now in the 100 to 500 megawatt range. Your technology has to be amenable with what sells to compete robustly in a brutal marketplace.”

    Because the magnets are the key technology for the new fusion reactor, and because their development carries the greatest uncertainties, Whyte explains, work on the magnets will be the initial three-year phase of the project — building upon the strong foundation of federally funded research conducted at MIT and elsewhere. Once the magnet technology is proven, the next step of designing the SPARC tokamak is based on a relatively straightforward evolution from existing tokamak experiments, he says.

    “By putting the magnet development up front,” says Whyte, the Hitachi America Professor of Engineering and head of MIT’s Department of Nuclear Science and Engineering, “we think that this gives you a really solid answer in three years, and gives you a great amount of confidence moving forward that you’re giving yourself the best possible chance of answering the key question, which is: Can you make net energy from a magnetically confined plasma?”

    The research project aims to leverage the scientific knowledge and expertise built up over decades of government-funded research — including MIT’s work, from 1971 to 2016, with its Alcator C-Mod experiment, as well as its predecessors — in combination with the intensity of a well-funded startup company. Whyte, Greenwald, and Hartwig say that this approach could greatly shorten the time to bring fusion technology to the marketplace — while there’s still time for fusion to make a real difference in climate change.

    MITEI participation

    Commonwealth Fusion Systems is a private company and will join the MIT Energy Initiative (MITEI) as part of a new university-industry partnership built to carry out this plan. The collaboration between MITEI and CFS is expected to bolster MIT research and teaching on the science of fusion, while at the same time building a strong industrial partner that ultimately could be positioned to bring fusion power to real-world use.

    “MITEI has created a new membership specifically for energy startups, and CFS is the first company to become a member through this new program,” says MITEI Director Robert Armstrong, the Chevron Professor of Chemical Engineering at MIT. “In addition to providing access to the significant resources and capabilities of the Institute, the membership is designed to expose startups to incumbent energy companies and their vast knowledge of the energy system. It was through their engagement with MITEI that Eni, one of MITEI’s founding members, became aware of SPARC’s tremendous potential for revolutionizing the energy system.”

    Energy startups often require significant research funding to further their technology to the point where new clean energy solutions can be brought to market. Traditional forms of early-stage funding are often incompatible with the long lead times and capital intensity that are well-known to energy investors.

    “Because of the nature of the conditions required to produce fusion reactions, you have to start at scale,” Greenwald says. “That’s why this kind of academic-industry collaboration was essential to enable the technology to move forward quickly. This is not like three engineers building a new app in a garage.”

    Most of the initial round of funding from CFS will support collaborative research and development at MIT to demonstrate the new superconducting magnets. The team is confident that the magnets can be successfully developed to meet the needs of the task. Still, Greenwald adds, “that doesn’t mean it’s a trivial task,” and it will require substantial work by a large team of researchers. But, he points out, others have built magnets using this material, for other purposes, which had twice the magnetic field strength that will be required for this reactor. Though these high-field magnets were small, they do validate the basic feasibility of the concept.

    In addition to its support of CFS, Eni has also announced an agreement with MITEI to fund fusion research projects run out of PSFC’s Laboratory for Innovation in Fusion Technologies. The expected investment in these research projects amounts to about $2 million in the coming years.

    “Conservative physics”

    SPARC is an evolution of a tokamak design that has been studied and refined for decades. This included work at MIT that began in the 1970s, led by professors Bruno Coppi and Ron Parker, who developed the kind of high-magnetic-field fusion experiments that have been operated at MIT ever since, setting numerous fusion records.

    “Our strategy is to use conservative physics, based on decades of work at MIT and elsewhere,” Greenwald says. “If SPARC does achieve its expected performance, my sense is that’s sort of a Kitty Hawk moment for fusion, by robustly demonstrating net power, in a device that scales to a real power plant.”

    See the full article here.

     
  • richardmitnick 8:42 am on March 9, 2018 Permalink | Reply
    Tags: MIT, MIT’s interdisciplinary Quantum Engineering Group (QEG), Scientists gain new visibility into quantum information transfer

    From MIT: “Scientists gain new visibility into quantum information transfer” 

    MIT News

    March 8, 2018
    Peter Dunn | Department of Nuclear Science and Engineering

    The NMR spectrometer in the Quantum Engineering Group (QEG) lab. Image: Paola Cappellaro.

    Quantum many-body correlations in a spin chain grow from an initial localized state in the absence of disorder, but are restricted to a finite size by disorder, as measured by the average correlation length. Image: Paola Cappellaro.

    Advance holds promise for “wiring” of quantum computers and other systems, and opens new avenues for understanding basic workings of the quantum realm.

    When we talk about “information technology,” we generally mean the technology part, like computers, networks, and software. But information itself, and its behavior in quantum systems, is a central focus for MIT’s interdisciplinary Quantum Engineering Group (QEG) as it seeks to develop quantum computing and other applications of quantum technology.

    A QEG team has provided unprecedented visibility into the spread of information in large quantum mechanical systems, via a novel measurement methodology and metric described in a new article in Physical Review Letters. The team has been able, for the first time, to measure the spread of correlations among quantum spins in fluorapatite crystal, using an adaptation of room-temperature solid-state nuclear magnetic resonance (NMR) techniques.

    Researchers increasingly believe that a clearer understanding of information spreading is not only essential to understanding the workings of the quantum realm, where classical laws of physics often do not apply, but could also help engineer the internal “wiring” of quantum computers, sensors, and other devices.

    One key quantum phenomenon is nonclassical correlation, or entanglement, in which pairs or groups of particles interact such that their physical properties cannot be described independently, even when the particles are widely separated.

    That relationship is central to a rapidly advancing field in physics, quantum information theory. It posits a new thermodynamic perspective in which information and energy are linked — in other words, that information is physical, and that quantum-level sharing of information underlies the universal tendency toward entropy and thermal equilibrium, known in quantum systems as thermalization.

    QEG head Paola Cappellaro, the Esther and Harold E. Edgerton Associate Professor of Nuclear Science and Engineering, co-authored the new paper with physics graduate student Ken Xuan Wei and longtime collaborator Chandrasekhar Ramanathan of Dartmouth College.

    Cappellaro explains that a primary aim of the research was measuring the quantum-level competition between two behaviors: thermalization and localization, a state in which information transfer is restricted and the tendency toward higher entropy is somehow resisted through disorder. The QEG team’s work centered on the complex problem of many-body localization (MBL), where the role of spin-spin interactions is critical.

    The ability to gather this data experimentally in a lab is a breakthrough, in part because simulation of quantum systems and localization-thermalization transitions is extremely difficult even for today’s most powerful computers. “The size of the problem becomes intractable very quickly, when you have interactions,” says Cappellaro. “You can simulate perhaps 12 spins using brute force, but that’s about it — far fewer than the experimental system is capable of exploring.”
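
    To make that scaling concrete, here is a minimal back-of-the-envelope sketch (not the team’s code): the memory needed for brute-force simulation doubles with every spin added, which is why a dozen or so interacting spins is a practical ceiling.

```python
# Minimal sketch: why brute-force simulation of interacting spins is
# intractable. An N-spin state vector has 2**N complex amplitudes, and a
# dense Hamiltonian has (2**N)**2 entries; both grow exponentially with N.
for n_spins in (8, 12, 16, 20, 24):
    dim = 2 ** n_spins            # Hilbert-space dimension
    state_mb = dim * 16 / 1e6     # complex128 state vector, in MB
    ham_gb = dim ** 2 * 16 / 1e9  # dense Hamiltonian matrix, in GB
    print(f"{n_spins:2d} spins: state {state_mb:10.1f} MB, dense H {ham_gb:14.1f} GB")
```

    At 24 spins the dense Hamiltonian alone would occupy several petabytes, so exact simulation stalls far below the chain lengths the NMR experiment can probe.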

    NMR techniques can reveal the existence of correlations among spins, as correlated spins rotate faster under applied magnetic fields than isolated spins. However, traditional NMR experiments can only extract partial information about correlations. The QEG researchers combined those techniques with their knowledge of the spin dynamics in their crystal, whose geometry approximately confines the evolution to linear spin chains.

    “That approach allowed us to figure out a metric, average correlation length, for how many spins are connected to each other in a chain,” says Cappellaro. “If the correlation is growing, it tells you that interaction is winning against the disorder that’s causing localization. If the correlation length stops growing, disorder is winning and keeping the system in a more quantum localized state.”
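
    As a rough illustration of how such a metric behaves, the sketch below fits an exponential decay to mock two-point correlations along a chain and reads off the decay constant as a correlation length. The paper’s actual metric is extracted from multiple-quantum NMR intensities, so the data, decay form, and fit here are stand-in assumptions.

```python
# Illustrative sketch only (mock data, not the paper's NMR-based metric):
# fit C(d) ~ exp(-d / xi) to two-point correlations at separation d along
# the chain; xi plays the role of an average correlation length in spins.
import numpy as np

d = np.arange(1, 11)                    # spin-spin separations along the chain
C = np.exp(-d / 3.0)                    # mock correlation data with xi = 3
slope, _ = np.polyfit(d, np.log(C), 1)  # linear fit of log C(d) vs d
xi = -1.0 / slope
print(f"average correlation length ~ {xi:.2f} spins")
```

    In the experiment’s terms: if xi keeps growing as the spins evolve, interactions are beating disorder; if it saturates, the system is localizing.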

    In addition to being able to distinguish between different types of localization (such as MBL and the simpler Anderson localization), the method also represents a possible advance toward the ability to control these systems through the introduction of disorder, which promotes localization, Cappellaro adds. Because MBL preserves information and prevents it from becoming scrambled, it has potential for memory applications.

    The research’s focus “addresses a very fundamental question about the foundation of thermodynamics, the question of why systems thermalize and even why the notion of temperature exists at all,” says former MIT postdoc Iman Marvian, who is now an assistant professor in Duke University’s departments of Physics and Electrical and Computer Engineering. “Over the last 10 years or so there’s been mounting evidence, from analytical arguments to numerical simulations, that even though different parts of the system are interacting with each other, in the MBL phase systems don’t thermalize. And it is very exciting that we can now observe this in an actual experiment.”

    “People have proposed different ways to detect this phase of matter, but they’re difficult to measure in a lab,” Marvian explains. “Paola’s group studied it from a new point of view and introduced quantities that can be measured. I’m really impressed at how they’ve been able to extract useful information about MBL from these NMR experiments. It’s great progress, because it makes it possible to experiment with MBL on a natural crystal.”

    The research was able to leverage NMR-related capabilities developed under a previous grant from the US Air Force, says Cappellaro, and some additional funding from the National Science Foundation. Prospects for this research area are promising, she adds. “For a long time, most many-body quantum research was focused on equilibrium properties. Now, because we can do many more experiments and would like to engineer quantum systems, there’s much more interest in dynamics, and new programs devoted to this general area. So hopefully we can get more funding and continue the work.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

  • richardmitnick 10:12 am on March 1, 2018 Permalink | Reply
    Tags: MIT, MIT physicists observe electroweak production of same-sign W boson pairs

    From MIT: “MIT physicists observe electroweak production of same-sign W boson pairs” 

    MIT News

    February 27, 2018
    Scott Morley | Laboratory for Nuclear Science

    Vector-boson scattering processes are characterized by two highly energetic jets in the forward regions of the detector. The figure shows a significant excess of events in the distribution of the mass of the two tagging jets, in yellow, labelled EW WW. Image: Markus Klute

    In research conducted by a group led by Markus Klute, MIT Laboratory for Nuclear Science researcher and associate professor of physics, the electroweak production of same-sign W boson pairs was observed. This is the first observation of its kind and a milestone toward precision testing of vector boson (W and Z boson) scattering at the Large Hadron Collider (LHC).

    The LHC at CERN in Geneva, Switzerland, was proposed in the 1980s as a machine to either find the Higgs boson or discover yet unknown particles or interactions.

    This idea — that the LHC would be able to make a discovery, whatever that might be — is what theorists call the “no-lose theorem,” and is connected to probing the scattering of W boson pairs at energies above 1 teraelectronvolt (TeV). In 2012, only two years after the first high-energy collisions at the LHC, this proposal paid huge dividends when the Higgs boson was discovered by the ATLAS and Compact Muon Solenoid (CMS) collaborations.

    According to CERN, the CMS detector at the LHC utilizes a massive solenoid magnet to study everything from the Higgs boson to dark matter to the Standard Model.

    The Standard Model of elementary particles, with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    CMS is capable of generating a magnetic field that is approximately 100,000 times that of Earth. It resides in an underground cavern near Cessy, France, which is northwest of Geneva.

    The main goal of a recent measurement by CMS was to identify W boson pairs with the same sign (W+W+ or W-W-) produced purely via the electroweak interaction, and thereby to probe the scattering of W bosons. The result does not unveil physics beyond the Standard Model, but this first observation of the process marks a starting point for a field of study that can independently test whether the discovered Higgs boson is or is not the particle predicted by Robert Brout, François Englert, and Peter Higgs. The rapidly growing data sets available at the LHC are expected to further knowledge along these lines, and studies show that the high-luminosity LHC will likely allow the direct study of longitudinal W boson scattering.
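
    For readers unfamiliar with what “observation” means statistically, the sketch below shows the standard counting-experiment significance estimate used in particle physics. The event counts are invented for illustration; they are not the CMS measurement.

```python
# Hedged illustration with made-up numbers (not CMS data): an "observation"
# requires an excess over background of at least 5 sigma. For a simple
# counting experiment, the asymptotic significance of observing n events
# over an expected background b is Z = sqrt(2 * (n*ln(n/b) - (n - b))).
import math

n_obs, b = 60, 30.0  # hypothetical observed events and expected background
Z = math.sqrt(2 * (n_obs * math.log(n_obs / b) - (n_obs - b)))
print(f"significance of the excess ~ {Z:.1f} sigma")  # ~4.8 sigma here
```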

    “The measurement of vector-boson scattering processes, like the one studied in this paper, is an important test bench of the nature of the Higgs boson, as small deviations from the Standard Model expectation can have a large impact on event rates,” Klute says. “While challenging new physics models, these processes also allow a unique model-independent measurement of Higgs boson couplings to the W and Z boson at the LHC.”

    “The observation of this vector-boson scattering process is an important milestone toward future precision measurements,” Klute says. “These measurements are very challenging experimentally and require theoretical predictions with high precision. Both areas are pushed forward by the published results.”

    The work, while within CMS, was performed by MIT and included Klute, his students Andrew Levin and Xinmei Nui, and research scientist Guillelmo Gomez-Ceballos, along with University of Antwerp colleague Xavier Janssen and his student Jasper Lauwers.

    The work has been published in Physical Review Letters.

    This research was funded with support from the U.S. Department of Energy.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

  • richardmitnick 1:13 pm on February 19, 2018 Permalink | Reply
    Tags: Automating materials design, MIT

    From MIT: “Automating materials design” 

    MIT News

    February 2, 2018 [Just showed up in social media.]
    Larry Hardesty

    New software identified five different families of microstructures, each defined by a shared “skeleton” (blue), that optimally traded off three mechanical properties. Courtesy of the researchers.

    With new approach, researchers specify desired properties of a material, and a computer system generates a structure accordingly.

    For decades, materials scientists have taken inspiration from the natural world. They’ll identify a biological material that has some desirable trait — such as the toughness of bones or conch shells — and reverse-engineer it. Then, once they’ve determined the material’s “microstructure,” they’ll try to approximate it in human-made materials.

    Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory have developed a new system that puts the design of microstructures on a much more secure empirical footing. With their system, designers numerically specify the properties they want their materials to have, and the system generates a microstructure that matches the specification.

    The researchers have reported their results in Science Advances. In their paper, they describe using the system to produce microstructures with optimal trade-offs between three different mechanical properties. But according to associate professor of electrical engineering and computer science Wojciech Matusik, whose group developed the new system, the researchers’ approach could be adapted to any combination of properties.

    “We did it for relatively simple mechanical properties, but you can apply it to more complex mechanical properties, or you could apply it to combinations of thermal, mechanical, optical, and electromagnetic properties,” Matusik says. “Basically, this is a completely automated process for discovering optimal structure families for metamaterials.”

    Joining Matusik on the paper are first author Desai Chen, a graduate student in electrical engineering and computer science; and Mélina Skouras and Bo Zhu, both postdocs in Matusik’s group.

    Finding the formula

    The new work builds on research reported last summer, in which the same quartet of researchers generated computer models of microstructures and used simulation software to score them according to measurements of three or four mechanical properties. Each score defines a point in a three- or four-dimensional space, and through a combination of sampling and local exploration, the researchers constructed a cloud of points, each of which corresponded to a specific microstructure.

    Once the cloud was dense enough, the researchers computed a bounding surface that contained it. Points near the surface represented optimal trade-offs between the mechanical properties; for those points, it was impossible to increase the score on one property without lowering the score on another.
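
    The bounding-surface step can be approximated with a standard Pareto, or non-dominated, filter: keep a point only if no other point beats it on every property at once. The sketch below uses mock data and is a simplified stand-in for the paper’s actual surface construction.

```python
# Hedged sketch: extract the optimal trade-off points of a score cloud with
# a Pareto (non-dominated) filter; higher scores are better on every axis.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.random((5000, 3))  # mock cloud: 5000 microstructures x 3 properties

def pareto_mask(points):
    """Boolean mask of points not dominated by any other point."""
    mask = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        if mask[i]:
            # A point is dominated by p if it is <= p everywhere and < p somewhere.
            dominated = np.all(points <= p, axis=1) & np.any(points < p, axis=1)
            mask[dominated] = False
    return mask

front = scores[pareto_mask(scores)]
print(f"{len(front)} of {len(scores)} points lie on the trade-off front")
```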

    That’s where the new paper picks up. First, the researchers used some standard measures to evaluate the geometric similarities of the microstructures corresponding to the points along the boundaries. On the basis of those measures, the researchers’ software clusters together microstructures with similar geometries.

    For every cluster, the software extracts a “skeleton” — a rudimentary shape that all the microstructures share. Then it tries to reproduce each of the microstructures by making fine adjustments to the skeleton and constructing boxes around each of its segments. Both of these operations — modifying the skeleton and determining the size, locations, and orientations of the boxes — are controlled by a manageable number of variables. Essentially, the researchers’ system deduces a mathematical formula for reconstructing each of the microstructures in a cluster.

    Next, the researchers use machine-learning techniques to determine correlations between specific values for the variables in the formulae and the measured properties of the resulting microstructures. This gives the system a rigorous way to translate back and forth between microstructures and their properties.
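
    A minimal sketch of that back-and-forth translation, with hypothetical variables and mock data (the paper does not specify these features or this model choice): fit a regressor from the formula variables to the simulated properties, giving a fast forward map that an inverse-design search can then query.

```python
# Hedged sketch: learn a map from the variables that reconstruct a
# microstructure (skeleton offsets, box sizes/orientations -- all mock here)
# to its simulated properties, as a stand-in for the paper's learned model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.random((2000, 8))   # 8 hypothetical skeleton/box variables per structure
y = X @ rng.random((8, 3))  # 3 mock mechanical properties per structure

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Forward map: variables -> predicted properties. Inverse design searches
# this cheap surrogate for variable settings that hit a property target.
print(model.predict(rng.random((1, 8))))
```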

    On automatic

    Every step in this process, Matusik emphasizes, is completely automated, including the measurement of similarities, the clustering, the skeleton extraction, the formula derivation, and the correlation of geometries and properties. As such, the approach would apply as well to any collection of microstructures evaluated according to any criteria.

    By the same token, Matusik explains, the MIT researchers’ system could be used in conjunction with existing approaches to materials design. Besides taking inspiration from biological materials, he says, researchers will also attempt to design microstructures by hand. But either approach could be used as the starting point for the sort of principled exploration of design possibilities that the researchers’ system affords.

    “You can throw this into the bucket for your sampler,” Matusik says. “So we guarantee that we are at least as good as anything else that has been done before.”

    In the new paper, the researchers do report one aspect of their analysis that was not automated: the identification of the physical mechanisms that determine the microstructures’ properties. Once they had the skeletons of several different families of microstructures, they could determine how those skeletons would respond to physical forces applied at different angles and locations.

    But even this analysis is subject to automation, Chen says. The simulation software that determines the microstructures’ properties can also identify the structural elements that deform most under physical pressure, a good indication that they play an important functional role.

    The work was supported by the U.S. Defense Advanced Research Projects Agency’s Simplifying Complexity in Scientific Discovery program.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

  • richardmitnick 1:10 pm on February 17, 2018 Permalink | Reply
    Tags: A new approach to rechargeable batteries, MIT

    From MIT: “A new approach to rechargeable batteries” 

    MIT News

    January 22, 2018 [Just now in social media.]
    David L. Chandler


    A type of battery first invented nearly five decades ago could catapult to the forefront of energy storage technologies, thanks to a new finding by researchers at MIT. Illustration modified from an original image by Felice Frankel

    A type of battery first invented nearly five decades ago could catapult to the forefront of energy storage technologies, thanks to a new finding by researchers at MIT. The battery, based on electrodes made of sodium and nickel chloride and using a new type of metal mesh membrane, could be used for grid-scale installations to make intermittent power sources such as wind and solar capable of delivering reliable baseload electricity.

    The findings are being reported today in the journal Nature Energy, by a team led by MIT professor Donald Sadoway, postdocs Huayi Yin and Brice Chung, and four others.

    Although the basic battery chemistry the team used, based on a liquid sodium electrode material, was first described in 1968, the concept never caught on as a practical approach because of one significant drawback: It required the use of a thin membrane to separate its molten components, and the only known material with the needed properties for that membrane was a brittle and fragile ceramic. These paper-thin membranes made the batteries too easily damaged in real-world operating conditions, so apart from a few specialized industrial applications, the system has never been widely implemented.

    But Sadoway and his team took a different approach, realizing that the functions of that membrane could instead be performed by a specially coated metal mesh, a much stronger and more flexible material that could stand up to the rigors of use in industrial-scale storage systems.

    “I consider this a breakthrough,” Sadoway says, because for the first time in five decades, this type of battery — whose advantages include cheap, abundant raw materials, very safe operational characteristics, and an ability to go through many charge-discharge cycles without degradation — could finally become practical.

    While some companies have continued to make liquid-sodium batteries for specialized uses, “the cost was kept high because of the fragility of the ceramic membranes,” says Sadoway, the John F. Elliott Professor of Materials Chemistry. “Nobody’s really been able to make that process work,” including GE, which spent nearly 10 years working on the technology before abandoning the project.

    As Sadoway and his team explored various options for the different components in a molten-metal-based battery, they were surprised by the results of one of their tests using lead compounds. “We opened the cell and found droplets” inside the test chamber, which “would have to have been droplets of molten lead,” he says. But instead of acting as a membrane, as expected, the compound material “was acting as an electrode,” actively taking part in the battery’s electrochemical reaction.

    “That really opened our eyes to a completely different technology,” he says. The membrane had performed its role — selectively allowing certain molecules to pass through while blocking others — in an entirely different way, using its electrical properties rather than the typical mechanical sorting based on the sizes of pores in the material.

    In the end, after experimenting with various compounds, the team found that an ordinary steel mesh coated with a solution of titanium nitride could perform all the functions of the previously used ceramic membranes, but without the brittleness and fragility. The results could make possible a whole family of inexpensive and durable materials practical for large-scale rechargeable batteries.

    The new type of membrane can be applied to a wide variety of molten-electrode battery chemistries, he says, and opens up new avenues for battery design. “The fact that you can build a sodium-sulfur type of battery, or a sodium/nickel-chloride type of battery, without resorting to the use of fragile, brittle ceramic — that changes everything,” he says.

    The work could lead to inexpensive batteries large enough to make intermittent, renewable power sources practical for grid-scale storage, and the same underlying technology could have other applications as well, such as for some kinds of metal production, Sadoway says.

    Sadoway cautions that such batteries would not be suitable for some major uses, such as cars or phones. Their strong point is in large, fixed installations where cost is paramount, but size and weight are not, such as utility-scale load leveling. In those applications, inexpensive battery technology could potentially enable a much greater percentage of intermittent renewable energy sources to take the place of baseload, always-available power sources, which are now dominated by fossil fuels.

    The research team included Fei Chen, a visiting scientist from Wuhan University of Technology; Nobuyuki Tanaka, a visiting scientist from the Japan Atomic Energy Agency; MIT research scientist Takanari Ouchi; and postdocs Huayi Yin, Brice Chung, and Ji Zhao. The work was supported by the French oil company Total S.A. through the MIT Energy Initiative.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

  • richardmitnick 2:17 pm on February 16, 2018 Permalink | Reply
    Tags: MIT, PRIMA

    From MIT: “Integrated simulations answer 20-year-old question in fusion research” 

    MIT News

    February 16, 2018
    Leda Zimmerman

    To make fusion energy a reality, scientists must harness fusion plasma, a fiery gaseous maelstrom in which atomic nuclei react to generate heat for electricity. But the turbulence of fusion plasma can confront researchers with unruly behaviors that confound attempts to make predictions and develop models. In experiments over the past two decades, an especially vexing problem has emerged: In response to deliberate cooling at its edges, fusion plasma inexplicably undergoes abrupt increases in central temperature.

    These counterintuitive temperature spikes, which fly against the physics of heat transport models, have not found an explanation — until now.

    A team led by Anne White, the Cecil and Ida Green Associate Professor in the Department of Nuclear Science and Engineering, and Pablo Rodriguez Fernandez, a graduate student in the department, has conducted studies that offer a new take on the complex physics of plasma heat transport and point toward more robust models of fusion plasma behavior. The results of their work appear this week in the journal Physical Review Letters. Rodriguez Fernandez is first author on the paper.

    In experiments using MIT’s Alcator C-Mod tokamak (a toroidal-shaped device that deploys a magnetic field to contain the star-furnace heat of plasma), the White team focused on the problem of turbulence and its impact on heating and cooling.

    Alcator C-Mod tokamak at MIT, no longer in operation

    In tokamaks, heat transport is typically dominated by turbulent movement of plasma, driven by gradients in plasma pressure.

    Hot and cold

    Scientists have a good grasp of turbulent transport of heat when the plasma is held at steady-state conditions. But when the plasma is intentionally perturbed, standard models of heat transport simply cannot capture plasma’s dynamic response.

    In one such case, the cold-pulse experiment, researchers perturb the plasma near its edge by injecting an impurity, which results in a rapid cooling of the edge.

    “Now, if I told you we cooled the edge of hot plasma, and I asked you what will happen at the center of the plasma, you would probably say that the center should cool down too,” says White. “But when scientists first did this experiment 20 years ago, they saw that edge cooling led to core heating in low-density plasmas, with the temperature in the core rising, and much faster than any standard transport model would predict.” Further mystifying researchers was the fact that at higher densities, the plasma core would cool down.
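
    To see why that result is so jarring, consider a purely local diffusive model, sketched below. This is an illustration only, not the PRIMA framework: under plain heat diffusion, cooling applied at the edge can only propagate inward, so the core can cool but never heat up.

```python
# Illustrative sketch (not PRIMA): explicit 1-D heat diffusion with a cold
# pulse applied at the edge. In this purely local model the core temperature
# can only fall; the observed core *heating* is what breaks this picture.
import numpy as np

nx, nt, dx, dt, chi = 100, 20000, 0.01, 1e-5, 1.0  # grid, steps, spacing, timestep, diffusivity
T = 1.0 - 0.8 * np.linspace(0.0, 1.0, nx) ** 2     # peaked initial temperature profile
T[-10:] *= 0.5                                     # sudden edge cooling: the "cold pulse"
core0 = T[0]

for _ in range(nt):                                # stable: chi*dt/dx**2 = 0.1 < 0.5
    T[1:-1] += chi * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[0] = T[1]                                    # zero-gradient boundary at the core
    T[-1] = 0.5 * T[-2]                            # edge held cold after the pulse

print(f"core temperature: {core0:.3f} -> {T[0]:.3f} (a local model only cools the core)")
```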

    Replicated many times, these cold-pulse experiments with their unlikely results defy what is called the standard local model for the turbulent transport of heat and particles in fusion devices. They also represent a major barrier to predictive modeling in high-performance fusion experiments such as ITER, the international nuclear fusion project, and MIT’s own proposed smaller-scale fusion reactor, ARC.

    MIT ARC Fusion Reactor

    ITER Tokamak in Saint-Paul-lès-Durance, which is in southern France

    To achieve a new perspective on heat transport during cold-pulse experiments, White’s team developed a unique twist.

    “We knew that the plasma rotation, that is, how fast the plasma was spinning in the toroidal direction, would change during these cold-pulse experiments, which complicates the analysis quite a bit,” White notes. “This is because the coupling between momentum transport and heat transport in fusion plasmas is still not fully understood,” she explains. “We needed to unambiguously isolate one effect from the other.”

    As a first step, the team developed a new experiment that conclusively demonstrated how the cold-pulse phenomena associated with heat transport would occur irrespective of the plasma rotation state. With Rodriguez Fernandez as first author, White’s group reported this key result in the journal Nuclear Fusion in 2017.

    A new integrated simulation

    From there, a tour de force of modeling was needed to recreate the cold-pulse dynamics seen in the experiments. To tackle the problem, Rodriguez Fernandez built a new framework, called PRIMA, which allowed him to introduce cold-pulses in time-dependent simulations. Using special software that factored in the turbulence, radiation and heat transport physics inside a tokamak, PRIMA could model cold-pulse phenomena consistent with experimental measurements.

    “I spent a long time simulating the propagation of cold pulses by only using an increase in radiated power, which is the most intuitive effect of a cold-pulse injection,” Rodriguez Fernandez says.

    Because experimental data showed that the electron density increased with every cold pulse injection, Rodriguez Fernandez implemented an analogous effect in his simulations. He observed a very good match in amplitude and time-scales of the core temperature behavior. “That was an ‘aha!’ moment,” he recalls.

    Using PRIMA, Rodriguez Fernandez discovered that a competition between types of turbulent modes in the plasma could explain the cold-pulse experiments. These different modes, explains White, compete to become the dominant cause of the heat transport. “Whichever one wins will determine the temperature profile response, and determine whether the center heats up or cools down after the edge cooling,” she says.

    By determining the factors behind the center-heating phenomenon (the so-called nonlocal response) in cold-pulse experiments, White’s team has removed a central concern about limitations in the standard, predictive (local) model of plasma behavior. This means, says White, that “we are more confident that the local model can be used to predict plasma behavior in future high performance fusion plasma experiments — and eventually, in reactors.”

    “This work is of great significance for validating fundamental assumptions underpinning the standard model of core tokamak turbulence,” says Jonathan Citrin, Integrated Modelling and Transport Group leader at the Dutch Institute for Fundamental Energy Research (DIFFER), who was not involved in the research. “The work also validated the use of reduced models, which can be run without the need for supercomputers, allowing to predict plasma evolution over longer timescales compared to full-physics simulations,” says Citrin. “This was key to deciphering the challenging experimental observations discussed in the paper.”

    The work isn’t over for the team. As part of a separate collaboration between MIT and General Atomics, Plasma Science and Fusion Center scientists are installing a new laser ablation system to facilitate cold-pulse experiments at the DIII-D tokamak in San Diego, California, with first data expected soon. Rodriguez Fernandez has used the integrated simulation tool PRIMA to predict the cold-pulse behavior at DIII-D, and he will perform an experimental test of the predictions later this year to complete his PhD research.

    The research team included Brian Grierson and Xingqiu Yuan, research scientists at Princeton Plasma Physics Laboratory; Gary Staebler, research scientist at General Atomics; Martin Greenwald, Nathan Howard, Amanda Hubbard, Jerry Hughes, Jim Irby and John Rice, research scientists from the MIT Plasma Science and Fusion Center; and MIT grad students Norman Cao, Alex Creely, and Francesco Sciortino. The work was supported by the US DOE Fusion Energy Sciences.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition