Updates from August, 2017

  • richardmitnick 12:32 pm on August 19, 2017
    Tags: Cosmic Magnifying Lens Reveals Inner Jets of Black Holes, Gravitational lensing system discovered by OVRO, Much more distant galaxy containing a jet-spewing supermassive black hole, OVRO-Caltech's Owens Valley Radio Observatory

    From Caltech: “Cosmic Magnifying Lens Reveals Inner Jets of Black Holes” 


    08/15/2017

    Whitney Clavin
    (626) 395-1856
    wclavin@caltech.edu

    Illustration shows the likely configuration of a gravitational lensing system discovered by OVRO. The “milli-lens” is located in or near the intervening spiral galaxy. The lens is magnifying blobs of jet material within the active galaxy PKS 1413+135, but the blobs are too small to be seen in the radio image (top left), taken by MOJAVE. Only when the blobs move far away from the yellow core do they expand and become visible as the pink blobs in the image.
    Credit: Anthony Readhead/Caltech/MOJAVE

    Caltech Owens Valley Radio Observatory, Owens Valley, California

    Image of the 40-meter telescope of the Owens Valley Radio Observatory (OVRO), located near Bishop, California.
    Credit: Anthony Readhead/Caltech

    Astronomers using Caltech’s Owens Valley Radio Observatory (OVRO) have found evidence for a bizarre lensing system in space, in which a large assemblage of stars is magnifying a much more distant galaxy containing a jet-spewing supermassive black hole. The discovery provides the best view yet of blobs of hot gas that shoot out from supermassive black holes.

    “We have known about the existence of these clumps of material streaming along black hole jets, and that they move close to the speed of light, but not much is known about their internal structure or how they are launched,” says Harish Vedantham, a Caltech Millikan Postdoctoral Scholar. “With lensing systems like this one, we can see the clumps closer to the central engine of the black hole and in much more detail than before.” Vedantham is lead author of two new studies describing the results in the Aug. 15 issue of The Astrophysical Journal. The international project is led by Anthony Readhead, the Robinson Professor of Astronomy, Emeritus, and director of the OVRO.

    Many supermassive black holes at the centers of galaxies blast out jets of gas traveling near the speed of light. The gravity of black holes pulls material toward them, but some of that material ends up ejected away from the black hole in jets. The jets are active for one to 10 million years—every few years, they spit out additional clumps of hot material. With the new gravitational lensing system, these clumps can be seen at scales about 100 times smaller than before.

    “The clumps we’re seeing are very close to the central black hole and are tiny—only a few light-days across. We think these tiny components moving at close to the speed of light are being magnified by a gravitational lens in the foreground spiral galaxy,” says Readhead. “This provides exquisite resolution of a millionth of a second of arc, which is equivalent to viewing a grain of salt on the moon from Earth.”
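
    The salt-grain comparison is easy to sanity-check with the small-angle relation θ ≈ size/distance. Below is a minimal sketch; the grain size and the Earth-moon distance are illustrative assumptions, not figures from the release.

    ```python
    # Back-of-envelope check of the "grain of salt on the moon" comparison.
    import math

    grain_size_m = 0.5e-3        # ~0.5 mm salt grain (assumed)
    moon_distance_m = 3.844e8    # mean Earth-moon distance in meters

    angle_rad = grain_size_m / moon_distance_m       # small-angle formula
    angle_arcsec = math.degrees(angle_rad) * 3600    # radians -> arcseconds

    # Prints ~2.7e-7 arcsec, i.e. a few tenths of a microarcsecond --
    # the same order as the resolution quoted by Readhead.
    print(f"{angle_arcsec:.2e} arcsec")
    ```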

    A critical element of this lensing system is the lens itself. The scientists think that this could be the first lens of intermediate mass—which means that it is bigger than previously observed “micro” lenses consisting of single stars and smaller than the well-studied massive lenses as big as galaxies. The lens described in the new paper, dubbed a “milli-lens,” is thought to be about 10,000 solar masses, and most likely consists of a cluster of stars. An advantage of a milli-sized lens is that it is small enough not to block the entire source, which allows the jet clumps to be magnified and viewed as they travel, one by one, behind the lens. What’s more, the researchers say the lens itself is of scientific interest because not much is known about objects of this intermediate-mass range.
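
    The "milli" scale can be motivated with the standard Einstein-radius formula, θ_E = sqrt((4GM/c²)·D_LS/(D_L·D_S)). The sketch below uses placeholder cosmological distances, since the release does not give the actual lens and source distances; only the ~10,000 solar-mass figure comes from the article.

    ```python
    # Rough Einstein-radius estimate for a ~10,000 solar-mass "milli-lens".
    import math

    G, c = 6.674e-11, 2.998e8    # SI units
    M_sun = 1.989e30             # kg
    Gpc = 3.086e25               # meters

    M = 1e4 * M_sun                   # lens mass from the article
    D_l, D_s = 1.0 * Gpc, 2.0 * Gpc   # assumed lens and source distances
    D_ls = D_s - D_l                  # crude shorthand, fine for an estimate

    theta_rad = math.sqrt(4 * G * M / c**2 * D_ls / (D_l * D_s))
    theta_mas = math.degrees(theta_rad) * 3600 * 1e3  # milliarcseconds

    # Prints ~0.2 mas: between single-star "micro" lenses and
    # galaxy-scale lenses, hence the name "milli-lens".
    print(f"{theta_mas:.2f} mas")
    ```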

    “This system could provide a superb cosmic laboratory for both the study of gravitational milli-lensing and the inner workings of the nuclear jet in an active galaxy,” says Readhead.

    The new findings are part of an OVRO program to obtain twice-weekly observations of 1,800 active supermassive black holes and their host galaxies, using OVRO’s 40-meter telescope, which detects radio emissions from celestial objects. The program has been running since 2008 in support of NASA’s Fermi mission, which observes the same galaxies in higher-energy gamma rays.

    In 2010, the OVRO researchers noticed something unusual happening with the galaxy in the study, an active galaxy called PKS 1413+135. Its radio emission had brightened, faded, and then brightened again in a very symmetrical fashion over the course of a year. The same type of event happened again in 2015. After a careful analysis that ruled out other scenarios, the researchers concluded that the overall brightening of the galaxy is most likely due to two successive high-speed clumps ejected by the galaxy’s black hole a few years apart. The clumps traveled along the jet and became magnified when they passed behind the milli-lens.

    “It has taken observations of a huge number of galaxies to find this one object with the symmetrical dips in brightness that point to the presence of a gravitational lens,” says coauthor Timothy Pearson, a senior research scientist at Caltech who helped discover in 1981 that the jet clumps travel at close to the speed of light. “We are now looking hard at all our other data to try to find similar objects that can give a magnified view of galactic nuclei.”

    The next step to confirm the PKS 1413+135 results is to observe the galaxy with a technique called very-long-baseline interferometry (VLBI), in which radio telescopes across the globe work together to image cosmic objects in detail. The researchers plan to use this technique beginning this fall to look at the galaxy and its supermassive black hole, which is expected to shoot out another clump of jet material in the next few years. With the VLBI technique, they should be able to see the clump smeared out into an arc across the sky via the light-bending effects of the milli-lens. Identifying an arc would confirm that indeed a milli-lens is magnifying the ultra-fast jet clumps spewing from a supermassive black hole.

    “We couldn’t do studies like these without a university observatory like the Owens Valley Radio Observatory, where we have the time to dedicate a large telescope exclusively to a single program,” said Readhead.

    Additional authors of The Astrophysical Journal studies are: Vikram Ravi of Caltech; Walter Max-Moerbeck (MS ’08, PhD ’13) and Anton Zensus of the Max Planck Institute for Radio Astronomy; Talvikki Hovatta of University of Turku and the Aalto University Metsähovi Radio Observatory; Anne Lähteenmäki and Merja Tornikoski of the Aalto University Metsähovi Radio Observatory; Mark Gurwell (MS ’92, PhD ’96) of the Smithsonian Astrophysical Observatory; Roger Blandford of Stanford University; Rodrigo Reeves of the University of Concepción; and Vasiliki Pavlidou of the University of Crete.

    The two studies, titled Symmetric Achromatic Variability in Active Galaxies: A Powerful New Gravitational Lensing Probe? and The Peculiar Light Curve of J1415+1320: A Case Study in Extreme Scattering Events, were funded by NASA, the National Science Foundation, the Smithsonian Institution, the Academia Sinica, the Academy of Finland, and the Chilean Centro de Excelencia en Astrofísica y Tecnologías Afines (CATA).

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    The California Institute of Technology (commonly referred to as Caltech) is a private research university located in Pasadena, California, United States. Caltech has six academic divisions with strong emphases on science and engineering. Its 124-acre (50 ha) primary campus is located approximately 11 mi (18 km) northeast of downtown Los Angeles. “The mission of the California Institute of Technology is to expand human knowledge and benefit society through research integrated with education. We investigate the most challenging, fundamental problems in science and technology in a singularly collegial, interdisciplinary atmosphere, while educating outstanding students to become creative members of society.”

    Caltech campus

     
  • richardmitnick 3:49 pm on August 18, 2017
    Tags: The Origin of Binary Stars

    From CfA: “The Origin of Binary Stars” 

    Harvard-Smithsonian Center for Astrophysics


    An image taken at submillimeter wavelengths of a star-forming core, showing that it contains two young stellar embryos. Astronomers have concluded from a systematic study of very young cores that most embryonic stars form in multiple systems, and later some of them separate.
    Sadavoy and Stahler

    The origin of binary stars has long been one of the central problems of astronomy. One of the main questions is how stellar mass affects the tendency to be multiple. There have been numerous studies of young stars in molecular clouds looking for variations in binary frequency with stellar mass, but so many other effects can influence the outcome that the results have been inconclusive. These complicating factors include dynamical interactions between stars, which can eject one member of a multiple system or, under the right circumstances, capture a passing star. Some studies, for example, found that younger stars are more likely to be found in binary pairs. One issue with much of the previous observational work, however, has been the small sample sizes.

    CfA astronomer Sarah Sadavoy and her colleague combined observations from a large radio-wavelength survey of young stars in the Perseus cloud with submillimeter observations of the natal dense core material around these stars to identify twenty-four multiple systems. The scientists then used the submillimeter data to identify and characterize the dust cores in which the stars are buried. They found that most of the embedded binaries are located near the centers of their dust cores, an indication that they are still young enough not to have drifted away. About half of the binaries are in elongated core structures, leading the team to conclude that the initial cores were also elongated. After modeling their findings, they argue that the most likely scenarios are those predicting that all stars, both single and binary, form in widely separated binary systems, but that most of these break apart, either through ejection or through the core itself breaking apart, while a few become more tightly bound. Although other studies have suggested this idea as well, this is the first study to do so based on observations of very young, still-embedded stars. One of their most significant conclusions is that each dusty core of material is likely to be the birthplace of two stars, not the single star usually modeled. This means that there are probably twice as many stars being formed per core as is generally believed.

    Reference(s):

    Embedded Binaries and Their Dense Cores, Sarah I. Sadavoy and Steven W. Stahler, MNRAS

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    The Center for Astrophysics combines the resources and research facilities of the Harvard College Observatory and the Smithsonian Astrophysical Observatory under a single director to pursue studies of those basic physical processes that determine the nature and evolution of the universe. The Smithsonian Astrophysical Observatory (SAO) is a bureau of the Smithsonian Institution, founded in 1890. The Harvard College Observatory (HCO), founded in 1839, is a research institution of the Faculty of Arts and Sciences, Harvard University, and provides facilities and substantial other support for teaching activities of the Department of Astronomy.

     
  • richardmitnick 3:30 pm on August 18, 2017
    Tags: Gliese 832b and Gliese 832c were discovered by the radial velocity technique, Star system Gliese 832

    From U Texas Arlington: “UTA astrophysicists predict Earth-like planet may exist in star system only 16 light years away” 

    University of Texas at Arlington

    August 17, 2017
    Louisa Kellie
    Office: 817‑272‑0864
    Cell: 817-524-8926
    louisa.kellie@uta.edu

    Astrophysicists at the University of Texas at Arlington have predicted that an Earth-like planet may be lurking in a star system just 16 light years away.

    The team investigated the star system Gliese 832 for additional exoplanets residing between the two currently known alien worlds in this system. Their computations revealed that an additional Earth-like planet with a dynamically stable configuration may be residing at a distance ranging from 0.25 to 2.0 astronomical units (AU) from the star.

    “According to our calculations, this hypothetical alien world would probably have a mass between 1 and 15 Earth masses,” said lead author Suman Satyal, UTA physics researcher, lecturer and laboratory supervisor. The paper is co-authored by John Griffith, a UTA undergraduate student, and long-time UTA physics professor Zdzislaw Musielak.

    The astrophysicists published their findings this week as Dynamics of a Probable Earth-like Planet in the GJ 832 System in The Astrophysical Journal.

    UTA Physics Chair Alexander Weiss congratulated the researchers on their work, which underscores the University’s commitment to data-driven discovery within its Strategic Plan 2020: Bold Solutions | Global Impact.

    “This is an important breakthrough demonstrating the possible existence of a potential new planet orbiting a star close to our own,” Weiss said. “The fact that Dr. Satyal was able to demonstrate that the planet could maintain a stable orbit in the habitable zone of a red dwarf for more than 1 billion years is extremely impressive and demonstrates the world class capabilities of our department’s astrophysics group.”

    Gliese 832 is a red dwarf with just under half the mass and radius of our sun. The star is orbited by a giant Jupiter-like exoplanet designated Gliese 832b and by a super-Earth planet Gliese 832c. The gas giant, with 0.64 Jupiter masses, orbits the star at a distance of 3.53 AU, while the other planet is potentially a rocky world, around five times more massive than the Earth, residing very close to its host star—at about 0.16 AU.

    For this research, the team analyzed simulated data with an injected Earth-mass planet in this nearby planetary system, hoping to find a stable orbital configuration for a planet that may be located in the vast space between the two known planets.
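
    The release does not name the team's integration tools; as a hedged illustration, the same kind of injected-planet test can be set up with the open-source REBOUND N-body package, using the parameters quoted in this article (a star of roughly 0.45 solar masses, Gliese 832b at 3.53 AU, Gliese 832c at 0.16 AU, and a trial Earth-mass planet at 1 AU).

    ```python
    # Sketch of an injected-planet stability test using REBOUND.
    # The 0.45 Msun stellar mass is an assumption ("just under half").
    import rebound

    sim = rebound.Simulation()
    sim.units = ('yr', 'AU', 'Msun')

    M_jup, M_earth = 9.55e-4, 3.0e-6  # planet masses in solar masses

    sim.add(m=0.45)                   # Gliese 832 (assumed mass)
    sim.add(m=5 * M_earth, a=0.16)    # Gliese 832c, inner super-Earth
    sim.add(m=1 * M_earth, a=1.0)     # injected trial Earth-mass planet
    sim.add(m=0.64 * M_jup, a=3.53)   # Gliese 832b, outer gas giant
    sim.move_to_com()

    sim.integrate(1e5)  # short demo run; the study tested >1 billion years

    # A bounded eccentricity for the trial planet marks a candidate
    # stable configuration; the real analysis scans mass and distance.
    print(sim.particles[2].e)
    ```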

    Gliese 832b and Gliese 832c were discovered by the radial velocity technique, which detects variations in the velocity of the central star, due to the changing direction of the gravitational pull from an unseen exoplanet as it orbits the star. By regularly looking at the spectrum of a star – and so, measuring its velocity – one can see if it moves periodically due to the influence of a companion.

    “We also used the integrated data from the time evolution of orbital parameters to generate the synthetic radial velocity curves of the known and the Earth-like planets in the system,” said Satyal, who earned his Ph.D. in Astrophysics from UTA in 2014. “We obtained several radial velocity curves for varying masses and distances indicating a possible new middle planet,” the astrophysicist noted.

    For instance, if the new planet is located around 1 AU from the star, it has an upper mass limit of 10 Earth masses and a generated radial velocity signal of 1.4 meters per second. A planet with about the mass of the Earth at the same location would have a radial velocity signal of only 0.14 m/s, much smaller and harder to detect with current technology.
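
    Those two signal sizes follow from the standard radial-velocity semi-amplitude formula, K = (2πG/P)^(1/3) · m_p·sin(i) / M_*^(2/3) for a circular orbit with m_p ≪ M_*. A quick check, again assuming roughly 0.45 solar masses for Gliese 832:

    ```python
    # Reproduce the quoted RV signals for a planet 1 AU from Gliese 832.
    # Assumes a circular, edge-on orbit and a 0.45 Msun star (assumption).
    import math

    G = 6.674e-11
    M_sun, M_earth, AU = 1.989e30, 5.972e24, 1.496e11

    M_star = 0.45 * M_sun
    a = 1.0 * AU
    P = 2 * math.pi * math.sqrt(a**3 / (G * M_star))  # Kepler's third law

    for m_p in (10 * M_earth, 1 * M_earth):
        K = (2 * math.pi * G / P)**(1 / 3) * m_p / M_star**(2 / 3)
        print(f"{K:.2f} m/s")  # ~1.3-1.4 m/s and ~0.13-0.14 m/s
    ```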

    “The existence of this possible planet is supported by long-term orbital stability of the system, orbital dynamics and the synthetic radial velocity signal analysis,” Satyal said. “At the same time, a significantly large number of radial velocity observations, transit method studies, as well as direct imaging are still needed to confirm the presence of possible new planets in the Gliese 832 system.”

    In 2014, Noyola, Satyal and Musielak published findings in The Astrophysical Journal related to radio emissions that could indicate an exomoon orbiting an exoplanet; they suggested that interactions between Jupiter’s magnetic field and its moon Io may be used to detect exomoons in distant exoplanetary systems.

    Zdzislaw Musielak joined the UTA physics faculty in 1998 following his doctoral program at the University of Gdansk in Poland and appointments at the University of Heidelberg in Germany; Massachusetts Institute of Technology, NASA Marshall Space Flight Center and the University of Alabama in Huntsville.

    Suman Satyal is a research assistant, laboratory supervisor and physics lecturer at UTA; his research areas include the detection of exoplanets and exomoons and the orbital stability analysis of exoplanets in single and binary star systems. He previously worked at the National Synchrotron Light Source at Brookhaven National Laboratory in New York, where he measured the background in Auger-photoemission coincidence spectra associated with multi-electron valence-band photoemission processes.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    U Texas Arlington Campus

    The University of Texas at Arlington is a growing research powerhouse committed to life-enhancing discovery, innovative instruction, and caring community engagement. An educational leader in the heart of the thriving North Texas region, UT Arlington nurtures minds within an environment that values excellence, ingenuity, and diversity.

    Guided by world-class faculty members, the University’s more than 48,000 students in Texas and around the world represent 120 countries and pursue more than 180 bachelor’s, master’s, and doctoral degrees in a broad range of disciplines. UT Arlington is dedicated to producing the lifelong learners and critical thinkers our region and nation demand. More than 60 percent of the University’s 190,000 alumni live in North Texas and contribute to our annual economic impact of $12.8 billion in the region.

    With a growing number of campus residents, UT Arlington has become a first-choice university for students seeking a vibrant college experience. In addition to receiving a first-rate education, our students participate in a robust slate of co-curricular activities that prepare them to become the next generation of leaders.

     
  • richardmitnick 2:19 pm on August 18, 2017
    Tags: Brown dwarf binaries, Goal was to measure the masses of the objects in these binaries

    From CFHT: “Astronomers prove what separates true stars from wannabes” 

    Canada-France-Hawaii Telescope

    June 5, 2017 [Just presented in social media.]

    Dr. Roy Gal
    University of Hawaii at Manoa
    +1 301-728-8637
    rgal@ifa.hawaii.edu

    Dr. Trent Dupuy
    The University of Texas at Austin
    +1 318-344-0975
    tdupuy@astro.as.utexas.edu

    Dr. Michael Liu
    University of Hawaii at Manoa
    +1 808-956-6666
    mliu@ifa.hawaii.edu

    “When we look up and see the stars shining at night, we are seeing only part of the story,” said Trent Dupuy of the University of Texas at Austin and a graduate of the Institute for Astronomy at the University of Hawaii at Manoa. “Not everything that could be a star ‘makes it,’ and figuring out why this process sometimes fails is just as important as understanding when it succeeds.”

    Professor Michael Liu stands in front of WIRCam, CFHT’s infrared camera, which was used for this decade-long study.

    Dupuy is the lead author of the study and will present his research today in a news conference at the semi-annual meeting of the American Astronomical Society in Austin.

    Stars form when a cloud of gas and dust collapses due to gravity, and the resulting ball of matter becomes hot enough and dense enough to sustain nuclear fusion at its core. Fusion produces huge amounts of energy — it’s what makes stars shine. In the Sun’s case, it’s what makes most life on Earth possible.

    But not all collapsing gas clouds are created equal. Sometimes, the collapsing cloud makes a ball that isn’t dense enough to ignite fusion. These ‘failed stars’ are known as brown dwarfs.

    This simple division between stars and brown dwarfs has been used for a long time. In fact, astronomers have had theories about how massive the collapsing ball has to be in order to form a star (or not) for over 50 years. However, the dividing line in mass has never been confirmed by experiment.

    Now, astronomers Dupuy and Michael Liu of the University of Hawaii, who is a co-author of the study, have done just that. They found that an object must weigh at least 70 Jupiters in order to start hydrogen fusion. If it weighs less, the object does not ignite as a star and instead becomes a brown dwarf.

    How did they reach that conclusion? For a decade, the two studied 31 faint brown dwarf binaries (pairs of these objects that orbit each other) using two powerful telescopes in Hawaii — the W. M. Keck Observatory and Canada-France-Hawaii telescopes — as well as data from the Hubble Space Telescope.


    Keck Observatory, Maunakea, Hawaii, USA

    NASA/ESA Hubble Telescope

    Their goal was to measure the masses of the objects in these binaries, since mass defines the boundary between stars and brown dwarfs. Astronomers have been using binaries to measure masses of stars for more than a century. To determine the masses of a binary, one measures the size and speed of the stars’ orbits around an invisible point between them where the pull of gravity is equal (known as the “center of mass”). However, binary brown dwarfs orbit much more slowly than binary stars, due to their lower masses. And because brown dwarfs are dimmer than stars, they can only be well studied with the world’s most powerful telescopes.
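
    At its core the mass measurement is Kepler's third law: with the relative orbit's semi-major axis a in AU and the period P in years, the total system mass in solar masses is M = a³/P². The numbers below are made up but representative of a brown-dwarf pair; they are not values from the study.

    ```python
    # Kepler's third law for a binary: M_total [Msun] = a^3 / P^2,
    # with a in AU and P in years. Illustrative numbers only.
    a_au = 3.0    # semi-major axis of the relative orbit (assumed)
    p_yr = 20.0   # orbital period (assumed)

    m_total_msun = a_au**3 / p_yr**2
    m_total_mjup = m_total_msun * 1047.6  # 1 Msun is ~1047.6 Jupiter masses

    # Prints ~71 Jupiter masses: right at the boundary the study measured.
    print(f"{m_total_mjup:.0f} Jupiter masses")
    ```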

    To measure masses, Dupuy and Liu collected images of the brown-dwarf binaries over several years, tracking their orbital motions using high-precision observations. They used the 10-meter Keck Observatory telescope, along with its laser guide star adaptive optics system, and the Hubble Space Telescope, to obtain the extremely sharp images needed to distinguish the light from each object in the pair.

    However, the price of such zoomed-in, high-resolution images is that there is no reference frame to identify the center of mass. Wide-field images from the Canada-France-Hawaii Telescope containing hundreds of stars provided the reference grid needed to measure the center of mass for every binary. The precise positions needed to make these measurements are one of the specialties of WIRCam, the wide-field infrared camera at CFHT. “Working with Trent Dupuy and Mike Liu over the last decade has not only benefited their work but our understanding of what is possible with WIRCam as well,” says Daniel Devost, director of science operations at CFHT. “This is one of the first programs I worked on when I started at CFHT, so this makes the discovery even more exciting.”

    The result of the decade-long observing program is the first large sample of brown dwarf masses. The information they have assembled has allowed them to draw a number of conclusions about what distinguishes stars from brown dwarfs.

    Objects heavier than 70 Jupiter masses are not cold enough to be brown dwarfs, implying that they are all stars powered by nuclear fusion. Therefore 70 Jupiters is the critical mass below which objects are fated to be brown dwarfs. This minimum mass is somewhat lower than theories had predicted but still consistent with the latest models of brown dwarf evolution.

    In addition to the mass cutoff, they discovered a surface temperature cutoff. Any object cooler than 1,600 Kelvin (about 2,400 degrees Fahrenheit) is not a star, but a brown dwarf.

    This new work will help astronomers understand the conditions under which stars form and evolve — or sometimes fail. In turn, the success or failure of star formation has an impact on how, where, and why solar systems form.

    “As they say, good things come to those who wait. While we’ve had many interesting brown dwarf results over the past 10 years, this large sample of masses is the big payoff. These measurements will be fundamental to understanding both brown dwarfs and stars for a very long time,” concludes Liu.

    This research will be published in The Astrophysical Journal Supplement.

    See the full article here.
    See the U Hawaii press release here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    The CFH observatory hosts a world-class, 3.6 meter optical/infrared telescope. The observatory is located atop the summit of Mauna Kea, a 4200 meter, dormant volcano located on the island of Hawaii. The CFH Telescope became operational in 1979. The mission of CFHT is to provide for its user community a versatile and state-of-the-art astronomical observing facility which is well matched to the scientific goals of that community and which fully exploits the potential of the Mauna Kea site.

    CFHT Telescope
    CFHT Interior
    CFHT

     
  • richardmitnick 1:52 pm on August 18, 2017
    Tags: "Sneak peek of Gaia's sky in colour, DPAC-Gaia Data Processing and Analysis Consortium, , While surveying the positions of over a billion stars ESA's Gaia mission is also measuring their colour a key diagnostic to study the physical properties of stars   

    From ESA: “Sneak peek of Gaia’s sky in colour” 

    European Space Agency

    While surveying the positions of over a billion stars, ESA’s Gaia mission is also measuring their colour, a key diagnostic to study the physical properties of stars. A new image provides a preview of Gaia’s first full-colour all-sky map, which will be unleashed in its highest resolution with the next data release in 2018.

    ESA/GAIA satellite

    Preliminary map of Gaia’s sky in colour. Credit: ESA/Gaia/DPAC/CU5/DPCI/CU8/F. De Angeli, D.W. Evans, M. Riello, M. Fouesneau, R. Andrae, C.A.L. Bailer-Jones

    Stars come in a variety of colours that depend on their surface temperature, which is, in turn, determined by their mass and evolutionary stage.

    Massive stars are hotter and therefore shine most brightly in blue or white light, unless they are approaching the end of their life and have puffed up into a red supergiant. Lower-mass stars, instead, are cooler and tend to appear red.

    Measuring stellar colours is important to solve a variety of questions, ranging from the internal structure and chemical composition of stars to their evolution.

    Gaia, ESA’s astrometry mission to compile the largest and most precise catalogue of stellar positions and motions to date, has also been recording the colour of the stars it observes. The satellite was launched in December 2013 and has been collecting scientific data since July 2014.

    A special effort in the Gaia Data Processing and Analysis Consortium (DPAC) is dedicated to the challenging endeavour of extracting stellar colours from the satellite data. While these measurements will be published with Gaia’s second data release in April 2018, a preview of the Gaia sky map in colour demonstrates that the ongoing work is progressing well.

    The new map, based on preliminary data from 18.6 million bright stars taken between July 2014 and May 2016 [1], shows the middle value (median) of the colours of all stars that are observed in each pixel. It is helpful to look at it next to its companion map, showing the density of stars in each pixel, which is higher along the Galactic Plane – the roughly horizontal structure that extends across the image, corresponding to the most densely populated region of our Milky Way galaxy – and lower towards the poles.

    Star density map. Credit: ESA/Gaia/DPAC/CU5/DPCI/CU8/F. De Angeli, D.W. Evans, M. Riello, M. Fouesneau, R. Andrae, C.A.L. Bailer-Jones
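
    Conceptually the colour map is simple to build: assign each star to a sky pixel, then take the median colour of the stars in each pixel. A minimal sketch using numpy and the healpy HEALPix library follows; the pixelization, resolution and random placeholder catalogue are assumptions, not the DPAC pipeline.

    ```python
    # Schematic per-pixel median-colour map (not the actual DPAC pipeline).
    import numpy as np
    import healpy as hp

    nside = 64  # map resolution (assumed)

    # Placeholder catalogue: galactic lon/lat in degrees plus a colour.
    rng = np.random.default_rng(0)
    lon = rng.uniform(0, 360, 100_000)
    lat = np.degrees(np.arcsin(rng.uniform(-1, 1, 100_000)))
    colour = rng.normal(1.0, 0.3, 100_000)

    pix = hp.ang2pix(nside, lon, lat, lonlat=True)  # star -> pixel index

    median_map = np.full(hp.nside2npix(nside), hp.UNSEEN)
    for p in np.unique(pix):
        median_map[p] = np.median(colour[pix == p])  # median per pixel
    ```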

    Even though this map is only meant as an appetizer to the full treat of next year’s release, which will include roughly a hundred times more stars, it is already possible to spot some interesting features.

    The reddest regions in the map, mainly found near the Galactic Centre, correspond to dark areas in the density map: these are clouds of dust that obscure part of the starlight, especially at blue wavelengths, making it appear redder – a phenomenon known as reddening.

    It is also possible to see the two Magellanic Clouds – small satellite galaxies of our Milky Way – in the lower part of the map.

    The task of measuring colours is performed by the photometric instrument on Gaia. This instrument contains two prisms that split the starlight into its constituent wavelengths, providing two low-resolution spectra for each star: one for the short, or blue, wavelengths (330-680 nm) and the other for the long, or red, ones (640-1050 nm). Scientists then compare the total amount of light in the blue and red spectra to estimate stellar colours.
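
    Comparing the total light in the two channels amounts to forming a colour index, a magnitude difference between the blue and red fluxes. A minimal illustration, with the calibration zero-points omitted for brevity and the flux values made up:

    ```python
    # Colour index from integrated blue and red fluxes:
    # colour = -2.5 * log10(F_blue / F_red); zero-points omitted.
    import math

    f_blue = 3.2e4  # integrated blue-channel flux, arbitrary units (assumed)
    f_red = 5.1e4   # integrated red-channel flux, arbitrary units (assumed)

    colour = -2.5 * math.log10(f_blue / f_red)
    print(f"{colour:+.2f} mag")  # positive means redder, negative bluer
    ```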

    To precisely calibrate these spectra, however, it is necessary to know the position of each source on Gaia’s focal plane to very high accuracy – in fact, to an accuracy that only Gaia itself can provide.

    As part of the effort to extract physical parameters from the data sent back by the satellite, scientists feed them to an iterative algorithm that compares the recorded images of stars to models of how such images should look: as a result, the algorithm provides a first estimate of the star’s parameters, such as its position, brightness, or colour. By collecting more data and feeding them to the algorithm, the models are constantly improved and so are the estimated parameters for each star.

    The first Gaia data release, published in September 2016, was based on less than a quarter of the total amount of data that will be collected by the satellite over its entire five-year mission, which is expected to observe each star an average of 70 times. This first release, listing unprecedentedly accurate positions on the sky for 1.142 billion stars, along with their brightness, contained no information on stellar colours: by then, it had not been possible to run enough iterations of the algorithm to accurately estimate additional parameters.

    As the satellite continues to observe more stars, scientists have now had more time to feed data to the iterative algorithm to obtain estimates of stellar colours, like the ones shown in the new map. These estimates will be validated, over the coming months, as part of the overall data processing effort leading to the second Gaia data release.

    Since the first data release, scientists across the world have been using Gaia’s brightness measurements – which are obtained over the full G-band, from 330 to 1050 nm – along with datasets from other missions to estimate stellar colours. These studies have been applied to a variety of subjects, from variable stars and stellar clusters in our Galaxy to the characterisation of stars in the Magellanic Clouds.

    Next year, the second release of Gaia data will include not only the position and G-band brightness, but also the blue and red colour for over a billion stars – in addition to the long-awaited estimates of stellar parallaxes and proper motions based on Gaia measurements for all the observed stars [2]. This extraordinary dataset will allow scientists to delve into the secrets of our Galaxy, investigating its composition, formation and evolution to an unparalleled degree of detail.

    Notes

    [1] The preliminary colour map shows a sample of stars that have been selected randomly from all Gaia stars with G-band magnitudes brighter than 17 and for which both colour measurements (from the blue and the red channels of Gaia’s photometric instrument) are available.

    [2] Gaia’s goal is to measure the parallax (a small, periodic change in the apparent position of a star caused by Earth’s yearly revolution around the Sun, which depends on the star’s distance from us) and proper motion (the motion of stars across the plane of the sky caused by their physical movement through the Galaxy) for over one billion stars. In the process, Gaia will measure also the brightness and colour of these stars, take spectra for a subset of them, and observe a variety of other celestial objects, from asteroids in our own Solar System to distant galaxies beyond the Milky Way.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    The European Space Agency (ESA), established in 1975, is an intergovernmental organization dedicated to the exploration of space, currently with 19 member states. Headquartered in Paris, ESA has a staff of more than 2,000. ESA’s space flight program includes human spaceflight, mainly through the participation in the International Space Station program, the launch and operations of unmanned exploration missions to other planets and the Moon, Earth observation, science, telecommunication as well as maintaining a major spaceport, the Guiana Space Centre at Kourou, French Guiana, and designing launch vehicles. ESA science missions are based at ESTEC in Noordwijk, Netherlands, Earth Observation missions at ESRIN in Frascati, Italy, ESA Mission Control (ESOC) is in Darmstadt, Germany, the European Astronaut Centre (EAC) that trains astronauts for future missions is situated in Cologne, Germany, and the European Space Astronomy Centre is located in Villanueva de la Cañada, Spain.


     
  • richardmitnick 1:29 pm on August 18, 2017
    Tags: Different Triggers Same Shaking, Fault types differ between the two regions, Quakes Pack More Punch in Eastern Than in Central United States

    From Eos: “Quakes Pack More Punch in Eastern Than in Central United States” 

    AGU

    Eos

    8.18.17
    Kimberly M. S. Cartier

    A new finding rests on the recognition that fault types differ between the two regions. It helps explain prior evidence that human-induced quakes and natural ones behave the same in the nation’s center.

    A broken angel statue lies among other damage on the roof of the Washington National Cathedral, Washington, D. C., after a magnitude 5.8 earthquake that impacted the eastern United States and Canada on 23 August 2011. Credit: AP Photo/J. Scott Applewhite

    Earthquakes in the eastern United States and Canada are many times more severe than central U.S. earthquakes of human or natural origin, earthquake scientists have found, highlighting a crucial need to separate the two regions when designing future earthquake hazard maps. The study divided the two regions along a line running from the Mississippi-Alabama border up to the base of Lake Michigan, at approximately 87°W.

    “People have never really compared these two regions very carefully,” said Yihe Huang, assistant professor of Earth and environmental sciences at the University of Michigan, Ann Arbor, and lead author of a study published in Science Advances on 2 August.

    Because earthquakes have occurred rarely in the central and eastern United States until recently, seismologists have not studied those areas as closely as they have more high-risk ones like the U.S. West Coast. “They are always taken as one region in the hazard models, but…if you look closely, they actually [are] very different,” she said. “We didn’t really think about this before.”

    Huang’s research shows that there is a fundamental and important difference in the stress released, and therefore in the hazard level, of central U.S. quakes compared with those in the eastern United States and Canada, said Gail Atkinson, professor of Earth sciences and Industrial Research Chair in Hazards from Induced Seismicity at Western University in London, Ontario, Canada.

    Different Triggers, Same Shaking

    Huang and her coauthors began their investigation questioning whether seismologists can use existing earthquake hazard models—developed using data from naturally occurring tectonic earthquakes—to accurately predict the severity of quakes induced by human activity.

    They expected the trigger mechanism to be a major source of uncertainty in hazard prediction models, but they found instead that the biggest difference was geography. Earthquakes they analyzed from the eastern United States and Canada along the Appalachians released 5–6 times more energy than their central counterparts. Consequently, Huang argued that “we should treat the central and eastern U.S. tectonic earthquakes differently in our hazard prediction.”
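
    To put a 5-6 times energy difference in more familiar terms, the standard Gutenberg-Richter energy-magnitude relation, log10(E) = 1.5·M + 4.8 (E in joules), converts an energy ratio into an equivalent magnitude offset. This conversion is an illustration, not a calculation from the paper.

    ```python
    # Convert an energy ratio to a magnitude difference via the
    # Gutenberg-Richter relation log10(E) = 1.5*M + 4.8, which gives
    # delta_M = log10(E_ratio) / 1.5. Illustrative, not from the paper.
    import math

    for e_ratio in (5.0, 6.0):
        delta_m = math.log10(e_ratio) / 1.5
        print(f"{e_ratio:.0f}x energy ~ +{delta_m:.2f} magnitude units")
    # Prints ~0.47 and ~0.52: roughly half a magnitude unit.
    ```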

    Their study confirmed that earthquakes in the central United States released similar amounts of energy and shook the ground the same way whether they were induced or natural. So seismologists can use the same models to study them all, report Huang and her colleagues.

    “Within the central U.S., all of the earthquakes appear to be the same, and we’re really comparing apples and apples,” said William Ellsworth, professor of geophysics at Stanford University in Stanford, Calif., and a coauthor on the paper.

    “We don’t need to discriminate why the earthquake occurred to describe its shaking,” he said.

    Different Types of Stress Relief

    Why do the two regions produce earthquakes of such different severity? The reason, the researchers explained, is that the central and eastern regions release underground stress using different mechanisms. The way that ground layers shift and slide against each other to dissipate energy determines the violence of the stress release and strength of high-frequency motion aboveground, the shaking most relevant for engineering safety and seismic hazard assessment.

    Huang explained that in the central United States, seven of nine earthquakes they examined happened when chunks of Earth’s crust slid horizontally against each other along strike-slip faults. All eight of the eastern earthquakes they analyzed occurred at reverse faults, where the ground shifts vertically against the pull of gravity. Separating by region, Huang said, equates to separating by fault type.

    A comparison of earthquake magnitudes in eastern and central regions underscores the greater power of eastern temblors, according to Huang. The team’s list of natural events, reaching back more than 15 years, contains only one earthquake stronger than magnitude 5 in the central United States but three from the eastern United States. The strongest, an M5.8 quake in Mineral, Va., on 23 August 2011, caused significant property damage but only minor injuries.

    Ellsworth explained that industrial processes in the central and eastern United States, like the disposal of wastewater from oil production and hydraulic fracturing, may simply be speeding up the normal geologic processes nearby by releasing underground pressure that builds up naturally. “We might be speeding up the processes by hundreds of thousands of years,” he said.

    The researchers noted in their paper that wastewater injection is likely acting as a trigger for stress release but that subsequent shaking follows natural tectonic physics. Because the shaking is similar, Huang said, existing ground motion prediction equations can actually be used to predict the severity of induced earthquakes as long as they first account for the fault type at work.

    Improving Hazard Predictions Nationwide

    Now that this new work has revealed a significant difference in the types of earthquake-producing faults prevalent in the central and eastern regions, Huang said that she wants to conduct a broader investigation into seismic events nationwide to see if there are other overlooked patterns related to earthquake strength.

    In the meantime, the new recognition of an eastern versus central difference in typical fault type should help improve future hazard prediction maps and guide the construction of earthquake-safe structures, Ellsworth said.

    “The more accurate we can make that forecast,” he said, “the more it actually reduces the cost of ensuring seismic safety.”

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    Eos is the leading source for trustworthy news and perspectives about the Earth and space sciences and their impact. Its namesake is Eos, the Greek goddess of the dawn, who represents the light shed on understanding our planet and its environment in space by the Earth and space sciences.

     
  • richardmitnick 1:03 pm on August 18, 2017
    Tags: Hot spot at Hawaii? Not so fast, Hot spots around the globe can be used to determine how fast tectonic plates move, Mantle plumes, Paleogeography, Seamounts, The Pacific Plate moves relative to the hot spots at about 100 millimeters per year

    From Rice: “Hot spot at Hawaii? Not so fast” 

    Rice University

    August 18, 2017
    Mike Williams

    Rice University scientists’ model shows global mantle plumes don’t move as quickly as thought

    Through analysis of volcanic tracks, Rice University geophysicists have concluded that hot spots like those that formed the Hawaiian Islands aren’t moving as fast as recently thought.

    Hot spots are areas where magma pushes up from deep Earth to form volcanoes. New results from geophysicist Richard Gordon and his team confirm that groups of hot spots around the globe can be used to determine how fast tectonic plates move.

    Rice University geophysicists have developed a method that uses the average motion of hot-spot groups by plate to determine that the spots aren’t moving as fast as geologists thought. For example, the Juan Fernandez Chain (outlined by the white rectangle) on the Nazca Plate west of Chile was formed by a hot spot now at the western end of the chain as the Nazca Plate moved east-northeast relative to the hot spot, forming the chain that includes Alejandro Selkirk and Robinson Crusoe islands. The white arrow shows the direction of motion of the Nazca Plate relative to the hot spot, and it is nearly indistinguishable from the direction predicted from global plate motions relative to all the hot spots on the planet (green arrow). The similarity in direction indicates that very little motion of the Juan Fernandez hot spot relative to other hot spots is needed to explain its trend. Illustration by Chengzu Wang.

    Gordon, lead author Chengzu Wang and co-author Tuo Zhang developed a method to analyze the relative motion of 56 hot spots grouped by tectonic plates. They concluded that the hot-spot groups move slowly enough to be used as a global reference frame for how plates move relative to the deep mantle. This confirmed the method is useful for viewing not only current plate motion but also plate motion in the geologic past.

    The study appears in Geophysical Research Letters.

    Hot spots offer a window into the depths of Earth, as they mark the tops of mantle plumes that carry hot, buoyant rock from deep Earth to near the surface and produce volcanoes. These mantle plumes were once thought to be straight and stationary, but recent results suggested they can also shift laterally in the convective mantle over geological time.

    The primary evidence of plate movement relative to the deep mantle comes from volcanic activity that forms mountains on land, islands in the ocean or seamounts, mountain-like features on the ocean floor. A volcano forms on a tectonic plate above a mantle plume. As the plate moves, the plume gives birth to a series of volcanoes. One such series is the Hawaiian Islands and the Emperor Seamount Chain; the youngest volcanoes become islands while the older ones submerge. The series stretches for thousands of miles and was formed as the Pacific Plate moved over a mantle plume for 80 million years.

    The Rice researchers compared the observed hot-spot tracks with their calculated global hot-spot trends and determined the motions of hot spots that would account for the differences they saw. Their method demonstrated that most hot-spot groups appear to be fixed and the remainder appear to move slower than expected.

    “Averaging the motions of hot-spot groups for individual plates avoids misfits in data due to noise,” Gordon said. “The results allowed us to say that these hot-spot groups, relative to other hot-spot groups, are moving at about 4 millimeters or less a year.

    “We used a method of analysis that’s new for hot-spot tracks,” he said. “Fortunately, we now have a data set of hot-spot tracks that is large enough for us to apply it.”

    For seven of the 10 plates they analyzed with the new method, average hot-spot motion measured was essentially zero, which countered findings from other studies that spots move as much as 33 millimeters a year. Top speed for the remaining hot-spot groups — those beneath the Eurasia, Nubia and North America plates — was between 4 and 6 millimeters a year but could be as small as 1 millimeter per year. That’s much slower than most plates move relative to the hot spots. For example, the Pacific Plate moves relative to the hot spots at about 100 millimeters per year.
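
    These rates are easy to put in perspective with simple rate-times-time arithmetic, using the 80-million-year age of the Hawaiian-Emperor chain mentioned above:

    ```python
    # Distance covered at a steady rate: mm/yr times years, in km.
    def distance_km(rate_mm_per_yr: float, years: float) -> float:
        return rate_mm_per_yr * years / 1e6  # 1 km = 1e6 mm

    years = 80e6  # age span of the Hawaiian-Emperor chain (from the article)

    print(distance_km(100, years))  # Pacific Plate: ~8,000 km of track
    print(distance_km(4, years))    # 4 mm/yr hot-spot drift: only ~320 km
    ```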

    Gordon said those interested in paleogeography should be able to make use of the model. “If hot spots don’t move much, they can use them to study prehistorical geography. People who are interested in circum-Pacific tectonics, like how western North America was assembled, need to know that history of plate motion.

    “Others who will be interested are geodynamicists,” he said. “The motions of hot spots reflect the behavior of mantle. If the hot spots move slowly, it may indicate that the viscosity of mantle is higher than models that predict fast movement.”

    “Modelers, especially those who study mantle convection, need to have something on the surface of Earth to constrain their models, or to check if their models are correct,” Wang said. “Then they can use their models to predict something. Hot-spot motion is one of the things that can be used to test their models.”

    Gordon is the W.M. Keck Professor of Earth Science. Wang and Zhang are Rice graduate students. The National Science Foundation supported the research.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    Rice U campus

    In his 1912 inaugural address, Rice University president Edgar Odell Lovett set forth an ambitious vision for a great research university in Houston, Texas; one dedicated to excellence across the range of human endeavor. With this bold beginning in mind, and with Rice’s centennial approaching, it is time to ask again what we aspire to in a dynamic and shrinking world in which education and the production of knowledge will play an even greater role. What shall our vision be for Rice as we prepare for its second century, and how ought we to advance over the next decade?

    This was the fundamental question posed in the Call to Conversation, a document released to the Rice community in summer 2005. The Call to Conversation asked us to reexamine many aspects of our enterprise, from our fundamental mission and aspirations to the manner in which we define and achieve excellence. It identified the pressures of a constantly changing and increasingly competitive landscape; it asked us to assess honestly Rice’s comparative strengths and weaknesses; and it called on us to define strategic priorities for the future, an effort that will be a focus of the next phase of this process.

     
  • richardmitnick 12:34 pm on August 18, 2017
    Tags: A bioengineering class helped Stanford researchers understand coral bleaching and more, Aiptasia, Team Traptasia

    From Stanford: “A bioengineering class helped Stanford researchers understand coral bleaching and more” 

    Stanford University

    August 16, 2017
    Nathan Collins

    Polly Fordyce (left), assistant professor of bioengineering and of genetics, and graduate students Louai Labanieh, Sarah Lensch and Diego Oyarzun discuss the design of a microfluidic device built to study coral bleaching. The device was designed and built as part of Fordyce’s graduate-level microfluidics course. (Image credit: Courtesy Polly Fordyce)

    Team Traptasia had a problem: The tiny baby sea anemones they were trying to ensnare are, unlike their adult forms, surprisingly powerful swimmers. They are also, as team member and chemical engineering graduate student Daniel Hunt put it, “pretty squishy little deformable things.” Previous attempts to trap the anemones, called Aiptasia, while keeping them alive long enough to study under a microscope had ended in gruesome, if teensy, failure.

    But Traptasia had to make it work. Cawa Tran, then a postdoctoral fellow, was depending on them, and so was her research into climate change’s effects on coral bleaching. (Sea anemones, it turns out, are close relatives of corals, but easier to study.)

    And then there was the matter of the team’s grades to consider, along with the outcome of an experiment in the “democratization” of a powerful set of tools known as microfluidics.

    Democratizing science

    Team Traptasia was part of a microfluidics course dreamed up by Polly Fordyce, an assistant professor of genetics and of bioengineering and a Stanford ChEM-H faculty fellow.

    At the time, she was feeling a bit frustrated.

    “Microfluidics has the potential to be this really awesome tool,” Fordyce said. That’s because microfluidic devices shrink equipment that would normally fill a chemistry or biology lab bench down to the size of a large wristwatch, saving space and materials, not to mention time and money. They also open up entirely new ways to conduct biological research – trapping baby sea anemones and watching them under a microscope, for example. But making high-quality devices takes expertise and resources most labs don’t have.

    “There’s this big chasm between the bioengineers that develop devices and the biologists that want to use them,” Fordyce said. Bioengineers know how to design sophisticated devices and biologists have important questions to answer, but there is little overlap between the two.

    To bridge the gap, Fordyce invited biology labs to propose projects to students in her graduate-level microfluidics course. The idea, she said, was to give students real-world experience while giving labs access to technology they might not have the time, money or expertise to pursue otherwise.

    In fact, the desire to break down disciplinary boundaries was something that attracted her to Stanford and to ChEM-H in the first place. “One of the reasons that I came to Stanford and ChEM-H was that I really love the idea of having interdisciplinary institutes that attempt to cross the boundaries between disciplines,” she said.

    Ultimately, researchers from four labs took part, including Tran, who was working in the lab of John Pringle, a professor of genetics. Fordyce will be describing her experiences teaching that class in an upcoming paper, which she hopes will provide a blueprint for people eager to help others make use of microfluidics tools.

    Shrinky Dinks vs. Aiptasia

    Before linking up with Fordyce’s class, Tran had been working with Heather Cartwright, core imaging director at the Carnegie Institution for Science’s Department of Plant Biology. Together they tried a more do-it-yourself method involving the children’s toy Shrinky Dinks, an approach first proposed by Michelle Khine at the University of California, Irvine.

    The effort did not work. “We got some movies. They were mostly end-of-life movies,” Cartwright said.

    Even when Tran and Cartwright managed to trap Aiptasia, their Shrinky Dink device crushed or twisted the sea anemones apart. So when Fordyce approached them to work with what would become Team Traptasia – graduate students Salil Bhate, Daniel Hunt, Louai Labanieh, Sarah Lensch and Will Van Treuren – and Stanford’s Microfluidics Foundry, they jumped at the chance.

    A non-smashing success

    Team Traptasia, Tran said, solved her problem “completely.”

    After several rounds of design, troubleshooting and testing, Team Traptasia built a microfluidic device that kept Aiptasia alive and healthy long enough to study. As a result, the researchers could actually watch the effects of rising water temperature and pollution on living sea anemones and their symbiotic algae – something that has never been done before. Tran, Cartwright and Team Traptasia will publish their findings soon, Tran said.

    Other teams helped labs design devices to study how the parasite that causes toxoplasmosis infects human cells, to trap and study placental cells, and to isolate single cells in tiny reaction chambers for detailed molecular biology studies.

    Tran said the device Team Traptasia came up with could provide opportunities for the Pringle lab, as well as in education. Now an assistant professor at California State University, Chico, Tran said she’ll be using the device with undergraduates there. “Basically, this device has given me the opportunity to train the next generation of biologists” in a new, research-focused way, she said.

    Hunt, the chemical engineering student in Team Traptasia, said that his own research on intestinal biology could benefit from microfluidics. “I’m hoping to take the expertise that I gained in the microfluidics design process to my own research,” he said. Hunt is working in the lab of Sarah Heilshorn, an associate professor of materials science and engineering.

    Those are exactly the kinds of results Fordyce had hoped for.

    “This year was successful beyond my dreams, and the reason is that the students in the course were incredibly creative and talented and driven,” Fordyce said. She also credits her graduate student and teaching assistant Kara Brower, who won a teaching award for her efforts. “She went way above and beyond what would be required of a TA and really helped imagine and develop the course,” Fordyce said.

    “If you put this forward as a model for people at other schools, that could actually make a difference,” both for students and the labs that could benefit from microfluidics, she said.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Leland and Jane Stanford founded the University to “promote the public welfare by exercising an influence on behalf of humanity and civilization.” Stanford opened its doors in 1891, and more than a century later, it remains dedicated to finding solutions to the great challenges of the day and to preparing our students for leadership in today’s complex world. Stanford is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto. Since 1952, more than 54 Stanford faculty, staff, and alumni have won the Nobel Prize, including 19 current faculty members.

    Stanford University Seal

     
  • richardmitnick 11:46 am on August 18, 2017
    Tags: CBETA, Cornell University, The overall mission for CBETA is to develop a prototype for eRHIC a 2.4 mile-long electron-ion collider proposed to be built at BNL on Long Island New York

    From Cornell: “Energy-efficient accelerator was 50 years in the making” 

    Cornell University

    July 5, 2017 [I never saw this in social media. I got it from a BNL article.]
    Rick Ryan

    Main linac cryomodule being placed into its final position by Cornell engineers at Wilson Lab.

    With the introduction of CBETA, the Cornell-Brookhaven ERL Test Accelerator, Cornell University and Brookhaven National Laboratory scientists are following up on the concept of energy-recovering particle accelerators first introduced by physicist Maury Tigner at Cornell more than 50 years ago.

    CBETA tests two energy-saving technologies for accelerators: energy recovery and permanent magnets. An energy recovery linac (ERL) like CBETA reclaims the energy of a used electron beam instead of dumping it after the experiment. The recovered energy is used to accelerate the next beam of particles, creating a beam of electrons that can be used for many areas of research. The beams are accelerated by Superconducting Radio Frequency (SRF) units, another energy-efficient technology pioneered at Cornell.
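A back-of-the-envelope calculation shows why recovering the beam’s energy matters: beam power is simply energy times current, and a conventional linac has to supply all of it through the RF system before the beam is dumped. The short Python sketch below compares the two cases using purely illustrative numbers and an assumed recovery fraction; none of these figures are actual CBETA parameters.

```python
# Rough RF power budget: conventional linac vs. energy recovery linac (ERL).
# All numbers are illustrative assumptions, not CBETA parameters.

E_MEV = 150.0    # assumed final beam energy, MeV
I_MA = 100.0     # assumed average beam current, mA
RECOVERY = 0.99  # assumed fraction of beam energy recovered from the spent beam

# Beam power P = E * I; MeV * mA gives kilowatts, so divide by 1000 for MW.
p_beam_mw = E_MEV * I_MA / 1000.0

p_conventional = p_beam_mw            # beam dumped: RF must supply the full power
p_erl = p_beam_mw * (1.0 - RECOVERY)  # RF only replaces the unrecovered fraction

print(f"beam power:                 {p_beam_mw:.1f} MW")
print(f"RF power needed, no ERL:    {p_conventional:.1f} MW")
print(f"RF power needed, with ERL:  {p_erl:.2f} MW")
```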

Using permanent magnets eliminates the power that is usually needed to steer the beam with electromagnets. While energy recovery linacs and fixed magnets are being used elsewhere, never before has a group been able to steer four particle beams of different energies simultaneously through an ERL using fixed magnets.

Imagine four cars traveling at different speeds around a turn. The physics differs for each car: the faster a car is moving, the harder it must turn to follow the same curve. The same holds true for particles of different energies in the beam pipe. Permanent magnets with alternating gradients make it possible to steer each particle of different energy within the same 120 mm-wide chamber, as the sketch below illustrates.
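To make that concrete, the following sketch (with assumed numbers, not the published CBETA lattice values) computes where each energy’s orbit sits in a magnet whose field strength grows linearly across the aperture. For ultra-relativistic electrons the field required at a given bending radius scales with energy, so each beam settles at the transverse position where the local field matches its rigidity.

```python
# Illustrative only: assumed field, gradient, and bending radius, not CBETA's.
# For ultra-relativistic electrons, magnetic rigidity B*rho [T*m] ~ E [MeV] / 299.79,
# so the field needed at bending radius rho is B = E / (299.79 * rho).

energies_mev = [42.0, 78.0, 114.0, 150.0]  # assumed four recirculation energies
rho = 1.0    # assumed local bending radius, m
b0 = 0.32    # assumed field at the pipe center, tesla
g = 3.5      # assumed transverse field gradient, tesla per meter

for e in energies_mev:
    b_needed = e / (299.79 * rho)        # field this beam needs, tesla
    x_mm = (b_needed - b0) / g * 1000.0  # offset where B(x) = b0 + g*x matches it
    print(f"{e:6.1f} MeV -> B = {b_needed:.3f} T at x = {x_mm:+6.1f} mm")

# With these numbers all four orbits land within about +/-52 mm, inside the
# +/-60 mm half-width of a 120 mm-wide chamber.
```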

    While this method recycles energy, it also creates beams that are much more powerful: They are more tightly bound, can produce brighter and more coherent radiation, can have higher currents, and can produce higher luminosity in colliding-beam experiments.

    “The ERL process was invented at Cornell University 50 years ago, and having its first demonstration in a multi-turn SRF ERL shows Cornell’s strong and continuing tradition in this research field,” said Georg Hoffstaetter, Cornell professor of physics and CBETA principal investigator.

Combining world-record-holding accelerator components constructed by Cornell with the permanent magnet technology developed by the U.S. Department of Energy’s Brookhaven National Laboratory (BNL), the CBETA collaboration aims to revolutionize the way in which accelerators are built.

    2
    Artist’s rendering of the main accelerator components in Wilson Lab.

The overall mission for CBETA is to develop a prototype for eRHIC, a 2.4-mile-long electron-ion collider proposed to be built at BNL on Long Island, New York.

    Roughly two dozen scientists from BNL and Cornell’s Laboratory for Accelerator-based Sciences and Education (CLASSE) are collaborating on the project. They are running initial tests and expect to complete installation of CBETA by summer 2019. They will test and commission the prototype for eRHIC by spring 2020.

    More than 30,000 accelerators are in operation around the world. This prototype ERL has far-reaching implications for biology, chemistry and a host of other disciplines. ERLs are not only envisioned for nuclear and elementary particle physics colliders, as in eRHIC and the LHeC at CERN in Switzerland, but also as coherent X-ray sources for basic research, industrial and medical purposes.

“Existing linear accelerators have superior beam quality when compared to large circular accelerators,” Hoffstaetter said. “However, they are exceedingly wasteful due to the beam being discarded after use and can therefore only have an extremely low current compared to ring accelerators. This limits the amount of data collected during an experiment. An ERL like CBETA solves the problem of low beam quality in rings and of low beam-current in linear accelerators, all while conserving energy compared to their predecessors.”

The most complex components of CBETA already exist at Wilson Lab: the DC electron source, the superconducting radio-frequency (SRF) injector linac, the main ERL cryomodule and the high-power beam stop. They were designed, constructed and commissioned over 10 years of National Science Foundation funding.

Said Karl Smolenski, lead engineer for Cornell ERL development: “If we are successful, it will be a great thing for science and industry. So many different departments and scientists will be able to use this technology. It will also put us way ahead in the competitive world.”

    Principal funding for CBETA comes from the New York State Energy Research and Development Authority.

See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition
    Once called “the first American university” by educational historian Frederick Rudolph, Cornell University represents a distinctive mix of eminent scholarship and democratic ideals. Adding practical subjects to the classics and admitting qualified students regardless of nationality, race, social circumstance, gender, or religion was quite a departure when Cornell was founded in 1865.

    Today’s Cornell reflects this heritage of egalitarian excellence. It is home to the nation’s first colleges devoted to hotel administration, industrial and labor relations, and veterinary medicine. Both a private university and the land-grant institution of New York State, Cornell University is the most educationally diverse member of the Ivy League.

    On the Ithaca campus alone nearly 20,000 students representing every state and 120 countries choose from among 4,000 courses in 11 undergraduate, graduate, and professional schools. Many undergraduates participate in a wide range of interdisciplinary programs, play meaningful roles in original research, and study in Cornell programs in Washington, New York City, and the world over.

     
  • richardmitnick 11:32 am on August 18, 2017 Permalink | Reply
    Tags: , , , Successful Test of Small-Scale Accelerator with Big Potential Impacts for Science and Medicine   

    From BNL: “Successful Test of Small-Scale Accelerator with Big Potential Impacts for Science and Medicine” 

    Brookhaven Lab

    August 16, 2017
    Karen McNulty Walsh
    kmcnulty@bnl.gov

    “Fixed-field” accelerator transports multiple particle beams at a wide range of energies through a single beam pipe.

    1
    Members of the team testing a fixed-field, alternating-gradient beam transport line made with permanent magnets at Brookhaven Lab’s Accelerator Test Facility (ATF), left to right: Mark Palmer (Director of ATF), Dejan Trbojevic, Stephen Brooks, George Mahler, Steven Trabocchi, Thomas Roser, and Mikhail Fedurin (ATF operator and experimental liaison).

    An advanced particle accelerator designed at the U.S. Department of Energy’s Brookhaven National Laboratory could reduce the cost and increase the versatility of facilities for physics research and cancer treatment. It uses lightweight, 3D-printed frames to hold blocks of permanent magnets and an innovative method for fine-tuning the magnetic field to steer multiple beams at different energies through a single beam pipe.

    With this design, physicists could accelerate particles through multiple stages to higher and higher energies within a single ring of magnets, instead of requiring more than one ring to achieve these energies. In a medical setting, where the energy of particle beams determines how far they penetrate into the body, doctors could more easily deliver a range of energies to zap a tumor throughout its depth.
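For the medical case, the link between beam energy and treatment depth is well established for protons. As a rough guide, a commonly quoted Bragg-Kleeman fit gives the range of a proton beam in water as R ≈ αE^p, with α ≈ 0.0022 cm/MeV^p and p ≈ 1.77; the short sketch below tabulates a few typical therapy energies. This is a textbook approximation for illustration, not part of the Brookhaven design work.

```python
# Approximate Bragg-Kleeman fit for proton range in water: R = alpha * E**p.
# Textbook illustration only; real treatment planning uses measured depth-dose data.

ALPHA = 0.0022  # cm / MeV**P, approximate fit constant for water
P = 1.77        # approximate fit exponent

def proton_range_cm(energy_mev):
    """Approximate penetration depth of a proton beam in water, in cm."""
    return ALPHA * energy_mev ** P

for e in (70, 120, 180, 230):  # typical proton-therapy energies, MeV
    print(f"{e:3d} MeV -> roughly {proton_range_cm(e):4.1f} cm depth in water")
```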

    Scientists testing a prototype of the compact, cost-effective design at Brookhaven’s Accelerator Test Facility (ATF)—a DOE Office of Science User Facility—say it came through with flying colors. Color-coded images show how a series of electron beams accelerated to five different energies successfully passed through the five-foot-long curve of magnets, with each beam tracing a different pathway within the same two-inch-diameter beam pipe.

    2
Brooks’ proof-of-principle experiment showed that electron beams of five different energies could make their way through the arc of permanent magnets, each taking a somewhat different, color-coded path: dark green (18 million electron volts, or MeV), light green (24 MeV), yellow (36 MeV), red (54 MeV), and purple (70 MeV).

    “For each of five energy levels, we injected the beam at the ‘ideal’ trajectory for that energy and scanned to see what happens when it is slightly off the ideal orbit,” said Brookhaven Lab physicist Stephen Brooks, lead architect of the design. Christina Swinson, a physicist at the ATF, steered the beam through the ATF line and Brooks’ magnet assembly and played an essential role in running the experiments.

    “We designed these experiments to test our predictions and see how far away you can go from the ideal incoming trajectory and still get the beam through. For the most part, all the beam that went in came out at the other end,” Brooks said.

The beams reached energies more than 3.5 times what had previously been achieved in a similar accelerator made from significantly larger electromagnets, and the span between the highest and lowest energy beams (70 MeV versus 18 MeV, a ratio of nearly 4) doubled what had been demonstrated before.

    “These tests give us confidence that this accelerator technology can be used to carry beams at a wide range of energies,” Brooks said.

    No wires required

    Most particle accelerators use electromagnets to generate the powerful magnetic fields required to steer a beam of charged particles. To transport particles of different energies, scientists change the strength of the magnetic field by ramping up or down the electrical current passing through the magnets.

    Brooks’ design instead uses permanent magnets, the kind that stay magnetic without an electrical current—like the ones that stick to your refrigerator, only stronger. By arranging differently shaped magnet blocks to form a circle, Brooks creates a fixed magnetic field that varies in strength across different positions within the central aperture of each donut-shaped magnet array.

    When the magnets are lined up end-to-end like beads on a necklace to form a curved arc—as they were in the ATF experiment with assistance from Brookhaven’s surveying team to achieve precision alignment—higher energy particles move to the stronger part of the field. Alternating the field directions of sequential magnets keeps particles oscillating along their preferred trajectory as they move through the arc, with no power needed to accommodate particles of different energies.
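The stabilizing effect of alternating the field direction can be seen in miniature with the standard thin-lens model of strong focusing: a particle passing through an alternating sequence of focusing and defocusing elements oscillates stably as long as the one-period transfer matrix has a trace between -2 and 2. The sketch below checks that condition for one assumed set of focal lengths and spacings; the numbers are illustrative, not measurements from the ATF magnets.

```python
# Thin-lens model of alternating-gradient ("strong") focusing.
# Stability criterion for one lattice period: |trace(M)| < 2.
# Focal length and spacing below are illustrative assumptions.
import numpy as np

f = 0.5   # assumed focal length of each magnet, m (sign alternates)
L = 0.25  # assumed drift length between magnets, m

def thin_lens(focal):
    """2x2 transfer matrix of a thin lens acting on (position, angle)."""
    return np.array([[1.0, 0.0], [-1.0 / focal, 1.0]])

def drift(length):
    """2x2 transfer matrix of a field-free drift."""
    return np.array([[1.0, length], [0.0, 1.0]])

# One period: focus, drift, defocus, drift (rightmost matrix acts first).
m_period = drift(L) @ thin_lens(-f) @ drift(L) @ thin_lens(f)
trace = np.trace(m_period)
print(f"trace = {trace:.2f} -> {'stable oscillation' if abs(trace) < 2 else 'unstable'}")
```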

No electricity means less supporting infrastructure and easier operation, all of which contributes to the significant cost-savings potential of this non-scaling, fixed-field, alternating-gradient accelerator technology.

    Simplified design

    4
    Brooks’ successful test lays the foundation for the CBETA accelerator, in which bunches of electrons will be accelerated to four different energies and travel simultaneously within the same beampipe, as shown in this simulation.

    Brooks worked with George Mahler and Steven Trabocchi, engineers in Brookhaven’s Collider-Accelerator Department, to assemble the deceptively simple yet powerful magnets.

    First they used a 3D printer to create plastic frames to hold the shaped magnetic blocks, like pieces in a puzzle, around the central aperture. “Different sizes, or block thicknesses, and directions of magnetism allow a customized field within the aperture,” Brooks said.

After the blocks were tapped into the frames with a mallet to create a coarse assembly, John Cintorino, a technician in the Lab’s magnet division, measured the strength of the field. The team then fine-tuned each assembly by inserting different lengths of iron rods into as many as 64 positions around a second 3D-printed cartridge that fits within the ring of magnets. A computational program Brooks wrote uses the coarse-assembly field-strength measurements to determine exactly how much iron goes into each slot. He’s also currently working on a robot to custom-cut and insert the rods.
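The article doesn’t spell out how Brooks’ program works, but one natural way to pose this kind of shimming problem is as linear least squares: measure the field error at many points around the aperture, model how a unit of iron in each slot changes the field at each point, then solve for the combination of shims that best cancels the error. The sketch below illustrates that framing with stand-in random data; a real tuner would use measured response data and constrain shim lengths to be non-negative.

```python
# Hedged illustration of shim tuning as linear least squares (not Brooks' code).
import numpy as np

rng = np.random.default_rng(0)
n_points, n_slots = 128, 64  # field probe points; iron-rod slots per cartridge

# Stand-in data: response[i, j] = field change at point i per mm of iron in slot j,
# and field_error[i] = measured deviation from the design field at point i.
response = rng.normal(size=(n_points, n_slots)) * 1e-4  # tesla per mm (made up)
field_error = rng.normal(size=n_points) * 1e-3          # tesla (made up)

# Choose shim lengths so that response @ shims cancels the error as well as possible.
shims, *_ = np.linalg.lstsq(response, -field_error, rcond=None)

residual = field_error + response @ shims
rms = lambda v: np.sqrt(np.mean(v**2))
print(f"rms field error before shimming: {rms(field_error):.2e} T")
print(f"rms field error after shimming:  {rms(residual):.2e} T")
```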

    The end-stage fine-tuning “compensates for any errors in machining and positioning of the magnet blocks,” Brooks said, improving the quality of the field 10-fold over the coarse assembly. The final magnets’ properties match or even surpass those of sophisticated electromagnets, which require much more precise engineering and machining to create each individual piece of metal.

    “The only high-tech equipment in our setup is the rotating coil we use to do the precision measurements,” he said.

    Applications and next steps

    The lightweight, compact components and simplified operation of Brooks’ permanent magnet beam transport line would be “a dramatic improvement from what is currently on the market for delivering particle beams in cancer treatment centers,” said Dejan Trbojevic, Brooks’ supervisor, who holds several patents on designs for particle therapy gantries.

    A gantry is the arced beamline that delivers cancer-killing particles from an accelerator to a patient. In some particle therapy facilities the gantry and supporting infrastructure can weigh 50 tons or more, often occupying a specially constructed wing of a hospital. Trbojevic estimates that a gantry using Brooks’ compact design would weigh just one ton. That would bring down the cost of constructing such facilities.

    “Plus with no need for electricity [to the magnets] to change field strengths, it would be much easier to operate,” Trbojevic said.

    The ability to accelerate particles rapidly to higher and higher energy levels within a single accelerator ring could also bring down the cost of proposed future physics experiments, including a muon collider, a neutrino factory, and an electron-ion collider (EIC). In these cases, additional accelerator components would boost the beams to higher energy.

    For example, Brookhaven physicists have been collaborating with physicists at Cornell University on a similar fixed-field design called CBETA. That project, developed with funding from the New York State Energy Research and Development Authority (NYSERDA), is a slightly larger version of Brooks’ machine and includes all the accelerator components for bringing electron beams up to the energies required for an EIC. CBETA also decelerates electrons once they’ve been used for experiments to recover and reuse most of the energy. It will also test beams of multiple energies at the same time, something Brooks’ proof-of-principle experiment at the ATF did not do. But Brooks’ successful test strengthens confidence that the CBETA design is sound.

    “Everyone in Brookhaven’s Collider-Accelerator Department has been very supportive of this project,” said Trbojevic, Brookhaven’s Principal Investigator on CBETA.

    As Collider-Accelerator Department Chair Thomas Roser noted, “All these efforts are working toward advanced accelerator concepts that will ultimately benefit science and society as a whole. We’re looking forward to the next chapter in the evolution of this technology.”

    The magnets for Brooks’ experiment were built with Brookhaven’s Laboratory Directed Research and Development funds for the CBETA project as part of the R&D effort for an early version of Brookhaven’s proposed design for an EIC, known as eRHIC. Operation of the ATF is supported by the DOE Office of Science.

See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition
    BNL Campus

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

     