Tagged: Large Synoptic Survey Telescope (LSST)

  • richardmitnick 1:40 pm on June 1, 2017 Permalink | Reply
    Tags: Large Synoptic Survey Telescope (LSST), Pan-STARRS1

    From Universe Today: “What Exactly Should We See When a Star Splashes into a Black Hole Event Horizon?” 


    Universe Today

    1 June 2017
    Evan Gough

    This artist’s impression shows the surroundings of the supermassive black hole at the heart of the active galaxy NGC 3783. Credit: ESO/M. Kornmesser

    At the center of our Milky Way galaxy dwells a behemoth.

    Sag A* NASA Chandra X-Ray Observatory 23 July 2014, the supermassive black hole at the center of the Milky Way

    An object so massive that nothing can escape its gravitational pull, not even light. In fact, we think most galaxies have one of them. They are, of course, supermassive black holes.

    Supermassive black holes are objects whose mass has collapsed into a singularity. Einstein’s General Theory of Relativity predicted their existence. These black holes are surrounded by what’s known as an event horizon, which is essentially the point of no return for anything that gets too close. But nobody has actually proven the existence of the event horizon yet.

    Some theorists think that something else might lie at the center of galaxies, a supermassive object even stranger than a supermassive black hole. These objects, they suggest, have somehow avoided a black hole’s fate and have not collapsed into a singularity. They would have no event horizon, and would have a solid surface instead.

    “Our whole point here is to turn this idea of an event horizon into an experimental science, and find out if event horizons really do exist or not,” – Pawan Kumar, Professor of Astrophysics, University of Texas at Austin.

    A team of researchers at the University of Texas at Austin and Harvard University has tackled the problem. Wenbin Lu, Pawan Kumar, and Ramesh Narayan wanted to shed some light on the event horizon problem.

    They asked what would happen when an object like a star collided with one of these hypothetical hard-surfaced objects. They published their results in the Monthly Notices of the Royal Astronomical Society.

    The trio predicted that in the 3.5-year time-frame captured by the Pan-STARRS1 survey, 10 of these collisions would occur and should be represented in the data.

    Pan-STARRS1 located on Haleakala, Maui, HI, USA

    The team found none of the flare-ups they would expect to see if the hard-surface theory were true.
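    The strength of that null result can be gauged with a quick back-of-the-envelope check. The sketch below (plain Python, using only the expectation of roughly 10 events quoted above; the paper’s own statistical treatment is more careful) computes how unlikely it is to see zero flares if ten were genuinely expected.

```python
# Back-of-the-envelope check, assuming ~10 expected flares as quoted above.
# Under a simple Poisson model, the chance of observing zero events when
# ten are expected is e^(-10), i.e. roughly 1 in 22,000.
import math

expected_flares = 10   # figure quoted in the article for the 3.5-year window
p_zero = math.exp(-expected_flares)
print(f"P(no flares | {expected_flares} expected) = {p_zero:.1e}")  # ~4.5e-05
```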

    Artist’s conception of the event horizon of a black hole. Credit: Victor de Schwanberg/Science Photo Library

    They’re hoping to improve their test with the upcoming Large Synoptic Survey Telescope (LSST) being built in Chile.


    LSST Camera, built at SLAC



    LSST telescope, currently under construction on Cerro Pachón, a 2,682-meter-high mountain in Chile’s Coquimbo Region, alongside the existing Gemini South and Southern Astrophysical Research telescopes.

    The LSST is a wide-field telescope that will capture images of the night sky every 20 seconds over a ten-year span. Every few nights, the LSST will give us an image of the entire available night sky. This will make the study of transient objects much easier and more effective.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    STEM Education Coalition

     
  • richardmitnick 7:50 am on May 27, 2017 Permalink | Reply
    Tags: Azimuth Advancements, Large Synoptic Survey Telescope (LSST)

    From LSST: “Azimuth Advancements” 

    LSST

    Large Synoptic Survey Telescope

    The telescope mount is rapidly being assembled in the factory at Asturfeito. No image credit.

    May 26, 2017

    Work on the Telescope Mount Assembly (TMA) by subcontractor Asturfeito in Spain is progressing rapidly. An auxiliary second level platform has been installed, providing access to the azimuth floor. In addition, azimuth radial bearings have been placed and vertical seismic stops have been trial fitted. Currently, the plan is to float the azimuth structure on the hydrostatic bearings in July, which will be the first time the base of the TMA supports the full structural load. Achieving this milestone will allow the Telescope & Site team to start doing balance calculations and planning rotational tests of the TMA.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    STEM Education Coalition

    LSST telescope, currently under construction at Cerro Pachón, Chile
    LSST Interior

    LSST/Camera, built at SLAC

    The LSST is a new kind of telescope. Currently under construction in Chile, it is being built to rapidly survey the night-time sky. Compact and nimble, the LSST will move quickly between images, yet its large mirror and large field of view—almost 10 square degrees of sky, or 40 times the size of the full moon—work together to deliver more light from faint astronomical objects than any optical telescope in the world.

    From its mountaintop site in the foothills of the Andes, the LSST will take more than 800 panoramic images each night with its 3.2 billion-pixel camera, recording the entire visible sky twice each week. Each patch of sky it images will be visited 1000 times during the survey. With a light-gathering power equal to that of a 6.7-m diameter primary mirror, each of its 30-second observations will be able to detect objects 10 million times fainter than those visible to the human eye. A powerful data system will compare new with previous images to detect changes in the brightness and position of objects as big as far-distant galaxy clusters and as small as nearby asteroids.
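    A few of the numbers above can be sanity-checked with simple arithmetic. The sketch below is a rough check, not an official LSST figure; the field of view of 9.6 square degrees and the naked-eye limit of magnitude 6 are assumptions used only for illustration.

```python
# Rough sanity checks on the quoted LSST numbers (illustrative only).
import math

eff_diameter_m = 6.7                          # effective aperture quoted above
area_m2 = math.pi * (eff_diameter_m / 2) ** 2
fov_deg2 = 9.6                                # "almost 10 square degrees"
print(f"collecting area ~ {area_m2:.0f} m^2")              # ~35 m^2
print(f"etendue ~ {area_m2 * fov_deg2:.0f} m^2 deg^2")     # ~340 m^2 deg^2

# "10 million times fainter than the eye": every factor of 100 in flux is
# 5 magnitudes, so 1e7x fainter means 2.5*log10(1e7) = 17.5 mag deeper.
eye_limit_mag = 6.0                           # assumed naked-eye limit
print(f"single-visit depth ~ mag {eye_limit_mag + 2.5 * math.log10(1e7):.1f}")  # ~23.5
```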

    The LSST’s combination of telescope, mirror, camera, data processing, and survey will capture changes in billions of faint objects, and the data it provides will be used to create an animated, three-dimensional cosmic map with unprecedented depth and detail, giving us an entirely new way to look at the Universe. This map will serve a myriad of purposes, from locating that mysterious substance called dark matter and characterizing the properties of the even more mysterious dark energy, to tracking transient objects, to studying our own Milky Way Galaxy in depth. It will even be used to detect and track potentially hazardous asteroids—asteroids that might impact the Earth and cause significant damage.

    As with past technological advances that opened new windows of discovery, such a powerful system for exploring the faint and transient Universe will undoubtedly serve up surprises.

    Plans for sharing the data from LSST with the public are as ambitious as the telescope itself. Anyone with a computer will be able to view the moving map of the Universe created by the LSST, including objects a hundred million times fainter than can be observed with the unaided eye. The LSST project will provide analysis tools to enable both students and the public to participate in the process of scientific discovery. We invite you to learn more about LSST science.

    The LSST will be unique: no existing telescope or proposed camera could be retrofitted or re-designed to cover ten square degrees of sky with a collecting area of forty square meters. Named the highest priority for ground-based astronomy in the 2010 Decadal Survey, the LSST project formally began construction in July 2014.

     
  • richardmitnick 3:50 pm on May 16, 2017 Permalink | Reply
    Tags: Blind studies, Large Synoptic Survey Telescope (LSST)

    From Symmetry: “The facts and nothing but the facts” 


    Symmetry

    Artwork by Corinne Mucha

    05/16/17
    Manuel Gnida

    At a recent workshop on blind analysis, researchers discussed how to keep their expectations out of their results.

    Scientific experiments are designed to determine facts about our world. But in complicated analyses, there’s a risk that researchers will unintentionally skew their results to match what they were expecting to find. To reduce or eliminate this potential bias, scientists apply a method known as “blind analysis.”

    Blind studies are probably best known from their use in clinical drug trials, in which patients are kept in the dark about—or blind to—whether they’re receiving an actual drug or a placebo. This approach helps researchers judge whether their results stem from the treatment itself or from the patients’ belief that they are receiving it.

    Particle physicists and astrophysicists do blind studies, too. The approach is particularly valuable when scientists search for extremely small effects hidden among background noise that point to the existence of something new, not accounted for in the current model. Examples include the much-publicized discoveries of the Higgs boson by experiments at CERN’s Large Hadron Collider and of gravitational waves by the Advanced LIGO detector.

    LHC

    CERN/LHC Map

    CERN LHC Tunnel

    CERN LHC particles


    Caltech/MIT Advanced aLigo Hanford, WA, USA installation


    Caltech/MIT Advanced aLigo detector installation Livingston, LA, USA

    Cornell SXS, the Simulating eXtreme Spacetimes (SXS) project


    Gravitational waves. Credit: MPI for Gravitational Physics/W.Benger-Zib

    ESA/eLISA the future of gravitational wave research

    “Scientific analyses are iterative processes, in which we make a series of small adjustments to theoretical models until the models accurately describe the experimental data,” says Elisabeth Krause, a postdoc at the Kavli Institute for Particle Astrophysics and Cosmology, which is jointly operated by Stanford University and the Department of Energy’s SLAC National Accelerator Laboratory. “At each step of an analysis, there is the danger that prior knowledge guides the way we make adjustments. Blind analyses help us make independent and better decisions.”

    Krause was the main organizer of a recent workshop at KIPAC that looked into how blind analyses could be incorporated into next-generation astronomical surveys that aim to determine more precisely than ever what the universe is made of and how its components have driven cosmic evolution.

    Black boxes and salt

    One outcome of the workshop was a finding that there is no one-size-fits-all approach, says KIPAC postdoc Kyle Story, one of the event organizers. “Blind analyses need to be designed individually for each experiment.”

    The way the blinding is done needs to leave researchers with enough information to allow a meaningful analysis, and it depends on the type of data coming out of a specific experiment.

    A common approach is to base the analysis on only some of the data, excluding the part in which an anomaly is thought to be hiding. The excluded data is said to be in a “black box” or “hidden signal box.”

    Take the search for the Higgs boson. Using data collected with the Large Hadron Collider until the end of 2011, researchers saw hints of a bump as a potential sign of a new particle with a mass of about 125 gigaelectronvolts. So when they looked at new data, they deliberately quarantined the mass range around this bump and focused on the remaining data instead.

    They used that data to make sure they were working with a sufficiently accurate model. Then they “opened the box” and applied that same model to the untouched region. The bump turned out to be the long-sought Higgs particle.
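    As a concrete illustration of the “hidden signal box” idea, the toy sketch below (Python with NumPy; invented numbers, not the actual LHC analysis) tunes a background estimate on the sidebands while the quarantined mass window stays untouched, and only then “opens the box”.

```python
# Toy "hidden signal box" blinding (illustrative numbers only).
import numpy as np

rng = np.random.default_rng(0)
masses = rng.uniform(100, 160, size=100_000)  # toy invariant-mass values in GeV

# 1. Blind: quarantine the window around the hint (~125 GeV) and estimate the
#    background using only the sidebands.
in_box = (masses > 120) & (masses < 130)
sideband_rate = (~in_box).sum() / (60 - 10)   # events per GeV outside the box

# 2. Freeze the model, then unblind: open the box and compare counts.
expected_in_box = sideband_rate * 10          # box is 10 GeV wide
observed_in_box = in_box.sum()
print(f"expected {expected_in_box:.0f}, observed {observed_in_box}")
```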

    That worked well for the Higgs researchers. However, as scientists involved with the Large Underground Xenon experiment reported at the workshop, the “black box” method of blind analysis can cause problems if the data you’re expressly not looking at contains rare events crucial to figuring out your model in the first place.

    LUX has recently completed one of the world’s most sensitive searches for WIMPs—hypothetical particles of dark matter, an invisible form of matter that is five times more prevalent than regular matter.

    LUX/Dark matter experiment at SURF

    LUX scientists have done a lot of work to guard LUX against background particles—building the detector in a cleanroom, filling it with thoroughly purified liquid, surrounding it with shielding and installing it under a mile of rock. But a few stray particles make it through nonetheless, and the scientists need to look at all of their data to find and eliminate them.

    For that reason, LUX researchers chose a different blinding approach for their analyses. Instead of using a “black box,” they use a process called “salting.”

    LUX scientists not involved in the most recent LUX analysis added fake events to the data—simulated signals that look just like real ones. Just like the patients in a blind drug trial, the LUX analysts didn’t know whether they were analyzing real or placebo data. Once they completed their analysis, the scientists who did the “salting” revealed which events were false.
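    The logic of salting can be sketched in a few lines. In the toy example below (invented numbers; the real LUX procedure is far more elaborate), one team injects fake events and keeps the bookkeeping secret, and only after the analysis is finished are the injected entries removed.

```python
# Toy "salting" sketch: inject fakes, analyze blind, then unblind.
import numpy as np

rng = np.random.default_rng(1)
n_real, n_salt = 200, 5
real_events = rng.exponential(scale=5.0, size=n_real)       # toy recoil energies (keV)
salt_events = rng.normal(loc=8.0, scale=1.0, size=n_salt)   # injected WIMP-like fakes

combined = np.concatenate([real_events, salt_events])
order = rng.permutation(len(combined))
blinded_data = combined[order]     # this is all the analysts ever see
is_salt = order >= n_real          # kept secret by the salting team

# ... the blind analysis runs on blinded_data ...

# Unblinding: the salting team reveals which entries were injected.
unsalted = blinded_data[~is_salt]
print(len(blinded_data), "->", len(unsalted))   # 205 -> 200
```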

    A similar technique was used by LIGO scientists, who eventually made the first detection of extremely tiny ripples in space-time called gravitational waves.

    High-stakes astronomical surveys

    The Blind Analysis workshop at KIPAC focused on future sky surveys that will make unprecedented measurements of dark energy and the Cosmic Microwave Background—observations that will help cosmologists better understand the evolution of our universe.

    CMB per ESA/Planck

    ESA/Planck

    Dark energy is thought to be a force that is causing the universe to expand faster and faster as time goes by. The CMB is a faint microwave glow spread out over the entire sky. It is the oldest light in the universe, left over from the time the cosmos was only 380,000 years old.

    To shed light on the mysterious properties of dark energy, the Dark Energy Science Collaboration is preparing to use data from the Large Synoptic Survey Telescope, which is under construction in Chile. With its unique 3.2-gigapixel camera, LSST will image billions of galaxies, the distribution of which is thought to be strongly influenced by dark energy.


    Dark Energy Camera [DECam], built at FNAL


    NOAO/CTIO Victor M. Blanco 4m Telescope, which houses DECam, at Cerro Tololo, Chile


    LSST Camera, built at SLAC



    LSST telescope, currently under construction on Cerro Pachón, a 2,682-meter-high mountain in Chile’s Coquimbo Region, alongside the existing Gemini South and Southern Astrophysical Research telescopes.

    “Blinding will help us look at the properties of galaxies picked for this analysis independent of the well-known cosmological implications of preceding studies,” DESC member Krause says. One way the collaboration plans on blinding its members to this prior knowledge is to distort the images of galaxies before they enter the analysis pipeline.
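    One simple way to hide prior knowledge at the catalog level is sketched below. This is a hypothetical scheme for illustration only; DESC’s actual blinding operates on the images themselves, as Krause describes, and is more sophisticated. The idea is that all measured shears are rescaled by a secret factor derived from a passphrase, so intermediate results cannot be compared against published cosmology until the factor is revealed.

```python
# Hypothetical catalog-level blinding sketch: rescale shears by a secret factor.
import hashlib
import numpy as np

def blinding_factor(passphrase: str, low: float = 0.9, high: float = 1.1) -> float:
    """Deterministic factor in [low, high), unknown to anyone without the passphrase."""
    digest = hashlib.sha256(passphrase.encode()).digest()
    u = int.from_bytes(digest[:8], "big") / 2**64   # uniform in [0, 1)
    return low + u * (high - low)

rng = np.random.default_rng(2)
measured_shear = rng.normal(0.0, 0.03, size=10_000)     # toy shear catalog

factor = blinding_factor("kept-by-the-blinding-team")   # hypothetical secret
blinded_shear = measured_shear * factor                 # what the analysts work with
```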

    Not everyone in the scientific community is convinced that blinding is necessary. Blind analyses are more complicated to design than non-blind analyses and take more time to complete. Some scientists participating in blind analyses inevitably spend time looking at fake data, which can feel like a waste.

    Yet others strongly advocate for going blind. KIPAC researcher Aaron Roodman, a particle-physicist-turned-astrophysicist, has been using blinding methods for the past 20 years.

    “Blind analyses have already become pretty standard in the particle physics world,” he says. “They’ll also be crucial for taking bias out of next-generation cosmological surveys, particularly when the stakes are high. We’ll only build one LSST, for example, to provide us with unprecedented views of the sky.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    STEM Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.


     
  • richardmitnick 1:56 pm on April 28, 2017 Permalink | Reply
    Tags: Large Synoptic Survey Telescope (LSST)

    From Kavli: “Delving Into the ‘Dark Universe’ with the Large Synoptic Survey Telescope” 


    The Kavli Foundation

    Two astrophysicists and a theoretical physicist discuss how the Large Synoptic Survey Telescope will probe the nature of dark matter and dark energy by taking an unprecedentedly enormous scan of the sky.


    LSST Camera, built at SLAC



    LSST telescope, currently under construction on Cerro Pachón, a 2,682-meter-high mountain in Chile’s Coquimbo Region, alongside the existing Gemini South and Southern Astrophysical Research telescopes.

    AT A MOUNTAINTOP CEREMONY IN CHILE, on April 14th, scientists and diplomats laid the first stone for the Large Synoptic Survey Telescope (LSST). This ambitious international astrophysics project is slated to start scanning the heavens in 2022. When it does, LSST should open up the “dark universe” of dark matter and dark energy—the unseen substance and force, respectively, composing 95 percent of the universe’s mass and energy—as never before.

    The “large” in LSST’s name is a bit of an understatement. The telescope will feature an 8.4-meter diameter mirror and a 3.2 gigapixel camera, the biggest digital camera ever built. The telescope will survey the entire Southern Hemisphere’s sky every few days, hauling in 30 terabytes of data nightly. After just its first month of operations, LSST’s camera will have observed more of the universe than all previous astronomical surveys combined.

    On April 2, 2015, two astrophysicists and a theoretical physicist spoke with The Kavli Foundation about how LSST’s sweeping search for dark matter and dark energy will answer fundamental questions about our universe’s make-up.

    Steven Kahn is the Director of LSST and the Cassius Lamb Kirk Professor in the Natural Sciences at the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC) at Stanford University.

    Sarah Bridle is a professor of astrophysics in the Extragalactic Astronomy and Cosmology research group of the Jodrell Bank Centre for Astrophysics in the School of Physics and Astronomy at The University of Manchester.

    Hitoshi Murayama is the Director of the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU) at the University of Tokyo and a professor at the Berkeley Center for Theoretical Physics at the University of California, Berkeley.

    The following is an edited transcript of their roundtable discussion. The participants have been provided the opportunity to amend or edit their remarks.

    THE KAVLI FOUNDATION (TKF): Steven, when the LSST takes its first look at the universe seven years from now, why will this be so exciting to you?

    STEVEN KAHN: In terms of how much light it will collect and its field of view, LSST is about ten times bigger than any other survey telescope either planned or existing. This is important because it will allow us to survey a very large part of the sky relatively quickly and to do many repeated observations of every part of the Southern Hemisphere over ten years. By doing this, the LSST will gather information on an enormous number of galaxies. We’ll detect something like 20 billion galaxies.

    SARAH BRIDLE: That’s a hundred times as many as we’re going to get with the current generation of telescopes, so it’s a huge increase. With the data, we’re going to be able to make a three-dimensional map of the dark matter in the universe using gravitational lensing.

    Gravitational Lensing NASA/ESA

    Gravitational microlensing, S. Liebes, Physical Review 133, B835 (1964)

    Then we’re going to use that to tell us about how the “clumpiness” of the universe is changing with time, which is going to tell us about dark energy.

    TKF: How does gathering information on billions of galaxies help us learn more about dark energy?

    HITOSHI MURAYAMA: Dark energy is accelerating the expansion of the universe and ripping it apart. The questions we are asking are: Where is the universe going? What is its fate? Is it getting completely ripped apart at some point? Does the universe end? Or does it go forever? Does the universe slow down at some point? To understand these questions, it’s like trying to understand how quickly the population of a given country is aging. You can’t understand the trend of where the country is going just by looking at a small number of people. You have to do a census of the entire population. In a similar way, you need to really look at a vast amount of galaxies so you can understand the trend of where the universe is going. We are taking a cosmic census with LSST.

    A diagram explaining the phenomenon of gravitational lensing. Foreground clumps of dark matter in galaxy clusters gravitationally bend the light from background galaxies on its way to Earth. Note that the image is not to scale. Credit: NASA, ESA, L. Calçada

    This phenomenon occurs when foreground matter and dark matter contained in galaxy clusters bend the light from background galaxies—sort of like looking through the bottom of a wine glass. Measuring the amount of the distortion of the background galaxies indirectly reveals the amount of dark matter that has clumped together in the foreground object. Measuring the rate of this dark matter clumping across different eras in the universe’s history speaks to how much dark energy is stretching the universe at given times, thus revealing the mysterious, pervasive force’s strength and properties.

    TKF: The main technique the LSST will use to learn more about dark energy will be gravitational lensing. Dark energy is the mysterious, invisible force that is pushing open and shaping the universe. Can you elaborate on why this is important and how LSST will help realize its full potential?

    BRIDLE: It’s extremely difficult to detect the dark energy that seems to be causing our universe to accelerate.


    Dark Energy Camera [DECam], built at FNAL


    NOAO/CTIO Victor M. Blanco 4m Telescope, which houses DECam, at Cerro Tololo, Chile

    Through gravitational lensing, however, it’s possible to trace it indirectly, by observing how much dark matter is being pulled together by gravity.

    Dark matter cosmic web and the large-scale structure it forms. The Millennium Simulation, V. Springel et al.

    And by looking at how much this dark matter clumps up early and later on in the universe, we can see how much the universe is being stretched apart at different times. With LSST, there will be a huge increase in the number of galaxies that we can detect and observe. LSST will also let us identify how far away the galaxies are. This is important. If we want to see how fast the universe is clumping together at different times, we need to know at what time and how far away we’re looking.

    KAHN: With LSST, we’re trying to measure the subtle distortion of the appearance of galaxies caused by clumps of dark matter. We do this by looking for correlations in galaxies’ shapes depending on their position with respect to one another. Of course, there’s uncertainty associated with that kind of measurement on the relatively small scales of individual galaxies, and the dominant source of that uncertainty is that galaxies have intrinsic shapes—some are spiral-shaped, some are round, and so on, and we are seeing them at different viewing angles, too. Increasing the number of galaxies with LSST makes doing this a far more statistically powerful and thus precise measurement of the effect of gravitational lensing caused by dark matter and how the clumping of dark matter has changed over the universe’s history.

    LSST will also help address something called cosmic variance. This happens when we’re making comparisons of what we see against a statistical prediction of what an ensemble of possible universes might look like. We only live in one universe, so there’s an inherent error associated with how good those statistical predictions are of what our universe should look like when applied to the largest scales of great fields of galaxies. The only way to try and statistically beat that cosmic variance down is to survey as much of the sky as possible, and that’s the other area where LSST is breaking new ground.
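    Both arguments Kahn makes here boil down to standard square-root-of-N statistics. The sketch below uses illustrative numbers only (the ellipticity dispersion, galaxy counts, and sky fractions are assumptions) to show how the shear error shrinks with the number of galaxies and how the cosmic-variance error shrinks with the fraction of sky surveyed.

```python
# Rough statistical scalings behind the two arguments above (illustrative).
import math

# Shape noise: the shear uncertainty on a patch scales as sigma_e / sqrt(N).
sigma_e = 0.3                       # typical intrinsic ellipticity dispersion
for n_gal in (2e8, 2e10):           # roughly "current surveys" vs ~20 billion for LSST
    print(f"N = {n_gal:.0e}: statistical shear error ~ {sigma_e / math.sqrt(n_gal):.1e}")

# Cosmic variance: with only one observable universe, the error on large-scale
# statistics scales roughly as 1/sqrt(f_sky), the fraction of sky surveyed.
for f_sky in (0.1, 0.5):
    print(f"f_sky = {f_sky}: cosmic-variance error ~ {1 / math.sqrt(f_sky):.2f}x baseline")
```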

    TKF: Will the gravitational lensing observations by LSST be more accurate than anything before?

    KAHN: One of the reasons I personally got motivated to work on LSST was because of the difficulty in making the sort of weak lensing measurements that Sarah described.

    BRIDLE: Typically, telescopes distort the images of galaxies by more than the gravitational lensing effect we are trying to measure. And in order to learn about dark matter and dark energy from gravitational lensing, we’ve got to not just detect the gravitational lensing signal but measure it to about one-percent accuracy. So we’ve got to get rid of these effects from the optics in the telescope before we can do anything to learn about cosmology.

    KAHN: A lot of the initial work in this field has been plagued by issues associated with the basic telescopes and cameras used. It was hard to separate out the cosmic signals that people were looking for from spurious effects that were introduced by the instrumentation. LSST is actually the first telescope that will have ever been built with the notion of doing weak lensing in mind. We have taken great care to model in detail the whole system, from the telescope to the camera to the atmosphere that we are looking through, to understand the particular issues in the system that could compromise weak lensing measurements. That approach has been a clear driver in how we design the facility and how we calibrate it. It’s been a big motivation for me personally and for the entire LSST team.

    TKF: As LSST reveals the universe’s past, will it also help us predict the future of the universe?

    MURAYAMA: Yes, it will. Because LSST will survey the sky so quickly and repeatedly, it will show how the universe is changing over time. For example, we will be able to see how a supernova changes from one time period to another. This kind of information should prove extremely useful in deciphering the nature of dark energy, for instance.

    KAHN: This is one way LSST will observe changes in the universe and gather information on dark energy beyond gravitational lensing. In fact, the way the acceleration of the universe by dark energy was first discovered in 1998 was through the measurement of what are called Type Ia supernovae.


    These are exploding stars where we believe we understand the typical intrinsic brightness of the explosion. Therefore, the apparent brightness of a supernova—how faint the supernova appears when we see it—is a clear measure of how far away the object is. That is because objects that are farther away are dimmer than closer objects. By measuring a population of Type Ia supernovae, we can figure out their true distances from us and how those distances have increased over time. Put those two pieces of information together, and that’s a way of determining the expansion rate of the universe.

    This analysis was done for the initial discovery of the accelerating cosmic expansion with a relatively small number of supernovae—just tens. LSST will measure an enormous number of supernovae, something like 250,000 per year. Only a fraction of those will be very well characterized, but that number is still in the tens of thousands per year. That will be very useful for understanding how our universe has evolved.
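    The standard-candle argument Kahn describes reduces to the distance modulus relation m - M = 5 log10(d / 10 pc). The sketch below is illustrative only: M = -19.3 is an assumed typical peak absolute magnitude for a Type Ia supernova, and K-corrections and dust extinction are ignored.

```python
# Standard-candle sketch: distance from a Type Ia supernova's apparent magnitude.
import math

def distance_mpc(apparent_mag: float, absolute_mag: float = -19.3) -> float:
    """m - M = 5*log10(d/10 pc); M = -19.3 is an assumed typical Ia peak value."""
    d_pc = 10 ** ((apparent_mag - absolute_mag + 5) / 5)
    return d_pc / 1e6

# A supernova peaking at apparent magnitude 24 (deep in LSST's range) sits at
# roughly 4,600 Mpc by this simple estimate.
print(f"{distance_mpc(24.0):.0f} Mpc")
```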

    TKF: LSST will gather a prodigious amount of data. How will this information be made available to scientists and the public alike for parsing?

    KAHN: Dealing with the enormous size of the database LSST will produce is a challenge. Over its ten-year run, LSST will generate something like a couple hundred petabytes of data, where a petabyte is 10 to the 15th bytes. That’s more data, by a lot, than everything that’s ever been written in any language in human history.

    The data will be made public to the scientific community largely in the form of catalogs of objects and their properties. But those catalogs can be trillions of lines long. So one of the challenges is not so much how you acquire and store the data, but how do you actually find anything in something that big? It’s the needle in the haystack problem. That’s where there need to be advances because the current techniques that we use to query catalogs, or to say “find me such and such,” they don’t scale very well to this size of data. So a lot of new computer science ideas have to be invoked to make that work.
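    The usual shape of a solution to this “needle in a haystack” problem is spatial indexing: give every catalog row a sky-pixel key so a query only touches the pixels it needs. The sketch below is purely illustrative, with a crude equal-angle pixelization and hypothetical helper functions; LSST’s real data management system uses its own spatial sharding and query services, and production systems use schemes such as HEALPix or HTM.

```python
# Illustrative sketch of spatially indexed catalog lookups (not the LSST stack).
from collections import defaultdict

CELL_DEG = 0.1   # crude equal-angle cell size; real systems use HEALPix/HTM

def pixel_key(ra_deg: float, dec_deg: float) -> tuple:
    """Map a sky position to a coarse pixel key."""
    return (int(ra_deg // CELL_DEG), int(dec_deg // CELL_DEG))

index = defaultdict(list)            # pixel key -> list of catalog row ids

def ingest(row_id: int, ra_deg: float, dec_deg: float) -> None:
    index[pixel_key(ra_deg, dec_deg)].append(row_id)

def cone_search(ra0: float, dec0: float, radius_deg: float) -> list:
    """Return candidate rows near (ra0, dec0); an exact distance cut comes later."""
    steps = int(radius_deg // CELL_DEG) + 1
    kx, ky = pixel_key(ra0, dec0)
    hits = []
    for dx in range(-steps, steps + 1):
        for dy in range(-steps, steps + 1):
            hits.extend(index.get((kx + dx, ky + dy), []))
    return hits
```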

    ___________________________________________________________________________________

    “With the data, we’re going to be able to make a three-dimensional map of the dark matter in the universe using gravitational lensing. Then we’re going to use that to tell us about how the “clumpiness” of the universe is changing with time, which is going to tell us about dark energy.” –Sarah Bridle
    ___________________________________________________________________________________

    MURAYAMA: One thing that we at Kavli IPMU are pursuing right now is a sort of precursor project to LSST called Hyper Suprime-Cam, using the Subaru Telescope.

    NAOJ Subaru Hyper Suprime Camera

    NAOJ/Subaru Telescope at Mauna Kea Hawaii, USA

    It’s smaller than LSST, but it’s trying to do many of the things that LSST is after, like looking for weak gravitational lensing and trying to understand dark energy. We already are facing the challenge of dealing with a large data set. One aspect we would like to pursue at Kavli IPMU, and of course LSST is already doing it, is to get a lot of people in computer science and statistics involved in this. I believe a new area of statistics will be created by the needs of handling these large data sets. It’s a sort of fusion, the interdisciplinary aspect of this project. It’s a large astronomy survey that will influence other areas of science.

    TKF: Are any “citizen science” projects envisioned for LSST, like Galaxy Zoo, a website where astronomy buffs classify the shapes of millions of galaxies imaged by the Sloan Digital Sky Survey?

    KAHN: Data will be made available right away. So LSST will in some sense bring the universe home to anybody with a personal computer, who can log on and look at any part of the southern hemisphere’s sky at any given time. So there’s a tremendous potential there to engage the public not only in learning about science, but actually in doing science and interacting directly with the universe.

    We have people involved in LSST who are intimately tied into Galaxy Zoo. We’re looking into how to incorporate citizens and crowdsource the science investigations of LSST. One of these investigations is strong gravitational lensing. Sarah has talked about weak gravitational lensing, which is a very subtle distortion to the appearance of the background galaxies. But it turns out if you put a galaxy right behind a concentration of dark matter found in a massive foreground galaxy cluster, then the distortions can get very significant. You can actually see multiple images of the background galaxy in a single image, bent all the way around the foreground galaxy cluster. The detection of those strong gravitational lenses and the analysis of the light patterns you see within them also yields complementary scientific information about fundamental cosmological parameters. But it requires sort of recognizing what is in fact a strong gravitational lensing event, as well as modeling the distribution of dark matter that gives rise to the strength of that particular lensing. Colleagues of Hitoshi and myself have already created a tool to help with this effort, called SpaceWarps (www.spacewarps.org). The tool lets the public look for strong gravitational lenses using data from the Sloan Digital Sky Survey and play around with dark matter modeling to see if they can get something that looks like the real data.

    ___________________________________________________________________________

    “Over its ten-year run, LSST will generate something like a couple hundred petabytes of data, where a petabyte is 10 to the 15th bytes. That’s more data, by a lot, than everything that’s ever been written in any language in human history.” –Steven Kahn
    ___________________________________________________________________________

    MURAYAMA: This has been incredibly successful. Scientists have developed computer programs to automatically look for these strongly lensed galaxies, but even an algorithm written by the best scientists can still miss some of these strong gravitationally lensed objects. Regular citizens, however, often manage to find some candidates for the strongly lensed galaxies that the computer algorithm has missed. Not only will this be great fun for people to get involved, it can even help the science as well, especially with a project as large as LSST.

    TKF: In the hunt for dark energy’s signature on the cosmos, LSST is just one of many current and planned efforts. Sarah, how will LSST observations tie in with the Dark Energy Survey you’re working on, and Hitoshi, how will LSST complement the Hyper Suprime-Cam?

    BRIDLE: So the Dark Energy Survey is going to image one-eighth of the whole sky and have 300 million galaxy images. About two years of data have been taken so far, with about three more years to go. We’ll be doing maps of dark matter and measurements of dark energy. The preparation for LSST that we are doing via DES will be essential.

    MURAYAMA: Hyper Suprime-Cam is similar to the Dark Energy Survey. It’s a nearly billion pixel camera looking for nearly 10 million galaxies. Following up on the Hyper Suprime-Cam imaging surveys, we would like to measure what we call spectra from a couple million galaxies.

    KAHN: The measurement of spectra as an addition to imaging tells us not only about the structure of matter in the universe but also how much the matter is moving with respect to the overall, accelerating cosmic expansion due to dark energy. Spectra are an additional, very important piece of information in constraining cosmological models.

    MURAYAMA: We will identify spectra with an instrument called the Prime Focus Spectrograph, which is scheduled to start operations in 2017 also on the Subaru telescope.

    NAOJ Subaru Prime Focus Spectrograph

    We will do very deep exposures to get the spectra on some of these interesting objects, such as galaxies where lensing is taking place and supernovae, which will also allow us to do much more precise measurements on dark energy.

    This image from a pilot project, the Deep Lens Survey (DLS), offers up an example of what the sky will look like when observed by LSST. The images from LSST will have twice DLS’s depth and resolution, while also covering 50,000 times the area of this particular image, and in six different optical colors. Credit: Deep Lens Survey / UC Davis / NOAO

    Like the Hyper Suprime-Cam, LSST can only do imaging. So I’m hoping when LSST comes online in the 2020s, we will already have the Prime Focus Spectrograph operational, and we will be able to help each other. LSST’s huge amount of data will contain many interesting objects we would like to study with this Prime Focus Spectrograph.

    KAHN: All these dark matter and dark energy telescope projects are very complementary to each other. It’s because of the scientific importance of these really fundamental pressing questions—what is the nature of dark matter and dark energy?—that the various different funding institutions around the world have been eager to invest in such an array of different complementary projects. I think that’s great, and it just shows how important this general problem is.

    TKF: Hitoshi, you mentioned earlier the interdisciplinary approach fostered by LSST and projects like it, and you’ve spoken before about how having different scientific disciplines and perspectives together leads to breakthrough thinking—a major goal of Kavli IPMU. Your primary expertise is in particle physics, but you work on many other areas of physics. Could you describe how observations of the very biggest scales of the dark universe with LSST will inform work on the very smallest, subatomic scales, and vice versa?

    MURAYAMA: It’s really incredible to think about this point. The biggest thing we can observe in the universe has to have something to do with the smallest things we can think of and all the matter we see around us.

    BRIDLE: It is amazing that you can look at the largest scales and find out about the smallest things.

    MURAYAMA: For more than a hundred years, particle physicists have been trying to understand what everything around us is made of. We made huge progress by building a theory called the standard model of particle physics in the 20th century, which is really a milestone of science. Discovering the Higgs boson at the Large Hadron Collider at CERN in 2012 really nailed that the standard model is the right theory about the origin of everything around us. But it turns out that what we see around us is actually making up only five percent of the universe.

    CERN CMS Higgs Event

    CERN ATLAS Higgs Event

    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    So there is this feeling among particle physicists of “what have we been doing for a hundred years?” We only have five percent of the universe! We still need to understand the remaining 95 percent of the universe, which is dark matter and dark energy. It’s a huge problem and we have no idea what they are really.

    ______________________________________________________________________________________

    “The biggest thing we can observe in the universe has to have something to do with the smallest things we can think of.” –Hitoshi Murayama
    ______________________________________________________________________________________

    A way I explain what dark matter is: It’s the mother from whom we got separated at birth. What I mean by this is without dark matter, there’s no structure to the universe—no galaxies, no stars—and we wouldn’t be here. Dark matter, like a mother, is the reason we exist, but we haven’t met her and have never managed to thank her. So that’s the reason why we would like to know who she is, how she came to exist and how she shaped us. That’s the connection between the science of looking for the fundamental constituents of the universe, which is namely what particle physicists are after, and this largest scale of observation done with LSST.

    TKF: Given LSST’s vast vista on the Universe, it is frankly expected that the project will turn up the unexpected. Any ideas or speculations on what tracking such a huge portion of the universe might newly reveal?

    KAHN: That’s sort of like asking, “what are the unknown unknowns?” [laughter]

    TKF: Yes—good luck figuring those out!

    KAHN: Let me just say, one of the great things about astrophysics is that we have explicit theoretical predictions we’re trying to test out by taking measurements of the universe. That approach is more akin to many other areas of experimental physics, like searching for the Higgs boson with the Large Hadron Collider, as Hitoshi mentioned earlier.

    CERN/LHC Map


    CERN LHC Tunnel



    LHC at CERN

    But there’s also this wonderful history in astronomy that every time we build a bigger and better facility, we always find all kinds of new things we never envisioned.

    If you go back—unfortunately I’m old enough to remember these days—to the period before the launch of the Hubble Space Telescope, it’s interesting to see what people had thought were going to be the most exciting things to do with Hubble. Many of those things were done and they were definitely exciting. But I think what many people felt was the most exciting was the stuff we didn’t even think to ask about, like the discovery of dark energy Hubble helped make. So I think a lot of us have expectations of similar kinds of discoveries for facilities like LSST. We will make the measurement we’re intending to make, but there will be a whole bunch of other exciting stuff that we never even dreamed of that’ll come for free on top.

    BRIDLE: I’m a cosmologist and I’m very excited for what LSST is going to do for cosmology, but I’m even more excited that it’s going to be taking very, very short 15-second exposures of the sky. LSST is going to be able to discover all these changing, fleeting objects like supernovae that Hitoshi talked about, but it’s a whole new phase of discovery. It’s inevitable we’re going to discover a whole load of new stuff that we’ve never even thought of.

    MURAYAMA: I’m sure there will be surprises!

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    STEM Education Coalition
    The Kavli Foundation, based in Oxnard, California, is dedicated to the goals of advancing science for the benefit of humanity and promoting increased public understanding and support for scientists and their work.

    The Foundation’s mission is implemented through an international program of research institutes, professorships, and symposia in the fields of astrophysics, nanoscience, neuroscience, and theoretical physics as well as prizes in the fields of astrophysics, nanoscience, and neuroscience.

     
  • richardmitnick 12:36 pm on April 6, 2017 Permalink | Reply
    Tags: Gemini Observatory OCTOCAM, Large Synoptic Survey Telescope (LSST)

    From Gemini Observatory: “OCTOCAM Looks Toward a New Era of Discovery” 


    Gemini Observatory

    April 5, 2017
    Science/Technical Contacts:

    Stephen Goodsell
    Gemini Program Manager
    Durham University, Durham, UK
    Email: sgoodsell”at”gemini.edu
    Cell: +44 7539256513

    Scot Kleinman
    Gemini Head of Development
    Gemini Observatory, Hilo, Hawai‘i
    Email: kleinman”at”gemini.edu
    Office: 808 074-2618

    Media Contact:

    Peter Michaud
    Public Information and Outreach Manager
    Gemini Observatory, Hilo, Hawai‘i
    Email: pmichaud”at”gemini.edu
    Desk: 808 974-2510
    Cell: 808 936-6643

    OCTOCAM’s near-infrared optical bench. The near-infrared section is cryogenically cooled in a vacuum to operate at a temperature below 80 Kelvin.

    OCTOCAM’s visible optical bench. The visible section is kept at about the temperature of the outside telescope environment.

    Gemini Observatory announces the development of a major new facility-class broadband optical and near-infrared imager and spectrograph named OCTOCAM.

    “OCTOCAM provides Gemini with a unique capability as we look ahead to the Large Synoptic Survey Telescope era,” says Stephen Goodsell who manages the instrument program for Gemini. “The instrument will be able to rapidly acquire transient objects and simultaneously obtain eight images or spectral bands from each target,” according to Goodsell. “This is important because it provides a much greater level of information and detail, which will undoubtedly lead to transformational scientific discoveries.”

    The power of the instrument comes from its ability to simultaneously observe over an extremely wide swath of the optical and infrared spectrum. It is expected that when the instrument begins commissioning and observations in 2022 it will serve as an ideal complement to the discoveries made with the Large Synoptic Survey Telescope (LSST) by providing rapid follow-up capabilities.


    LSST Camera, built at SLAC



    LSST telescope, currently under construction on Cerro Pachón, a 2,682-meter-high mountain in Chile’s Coquimbo Region, alongside the existing Gemini South and Southern Astrophysical Research telescopes.

    Once completed, the instrument is slated for installation on the 8-meter Gemini South telescope, which is located adjacent to the current construction site of the LSST on Cerro Pachón in Chile.

    “Two core tenets of Gemini’s future vision are to lead as a premier facility for taking advantage of the upcoming discoveries from the LSST; while offering broad, flexible capabilities that enable a wide-range of individual PI-driven science,” says Scot Kleinman, Associate Director of Development at Gemini. “OCTOCAM, with its eight simultaneous channels, including both imaging and spectroscopic capabilities, moves Gemini a giant step closer to this vision. We are thrilled to make this transformative instrument available to our community in early 2022.”

    Chris Davis, Program Officer at the U.S. National Science Foundation (NSF, which also funds the LSST), notes that because Gemini has international funding participants that include the U.S., Canada, Brazil, Argentina, and Chile, this project spans the globe. “All of the Gemini participants contribute to the development of instruments like OCTOCAM,” says Davis. However, he adds, “OCTOCAM really captures the spirit of international cooperation with the global network of researchers that are designing and building this instrument.”

    In late March, the Association of Universities for Research in Astronomy (AURA, which operates Gemini on behalf of the NSF) and the Southwest Research Institute (SwRI) signed a contract to build and commission the instrument. With the contract signed, work began immediately on the conceptual design of the instrument. Please see the SwRI press release here.

    “Using eight state-of-the-art detectors, OCTOCAM will simultaneously observe visible and invisible light spectra almost instantaneously, in tens of milliseconds,” said Dr. Peter Roming, a staff scientist at SwRI who will serve as project manager and co-principal investigator. SwRI will oversee systems engineering, providing detectors, electronics, and software development for this refrigerator-sized, ground-based apparatus. The Institute will also lead the integration and testing of the device.

    “It’s really exciting to be working on an 8-meter class instrument that will be used to observe the whole Universe, from the oldest stars to nearby exoplanets,” Roming said. “The imaging, spectral analysis, and temporal resolution combined with exceptional sensitivity make OCTOCAM a unique, unparalleled instrument.”

    “OCTOCAM has been designed to revolutionize the research in many fields of astrophysics. To achieve this, a large, international group of scientists determined the key science questions to be addressed in the coming decade, and those were used subsequently to define the technical characteristics that will allow OCTOCAM to answer them,” says Antonio de Ugarte Postigo, scientist at the Instituto de Astrofísica de Andalucía (IAA-CSIC) in Granada, Spain and principal investigator of the project.

    “We look forward to work that will involve the full scientific community of Gemini. OCTOCAM will open a new window of research by occupying a region in the spectral coverage-spectral resolution-time resolution diagram not covered by any other instrument in the world,” says Christina Thöne, scientist at IAA, Granada and Deputy Project Manager of OCTOCAM.

    “I am very excited about the science that we will be able to do with OCTOCAM,” said Dr. Alexander van der Horst, an assistant professor of astrophysics at the George Washington University in Washington, DC, and the project scientist for OCTOCAM. “The capabilities of OCTOCAM make it a unique instrument, and it will provide a wealth of information on a very broad range of objects, from rocks of ice in our own solar system to the most massive stars exploding at the edge of our Universe.”

    Please visit the OCTOCAM page for the list of team members and more.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    STEM Education Coalition

    Gemini/North telescope at Mauna Kea, Hawaii, USA

    Gemini South
    Gemini South telescope, Cerro Tololo Inter-American Observatory (CTIO) campus near La Serena, Chile

    AURA Icon

    Gemini’s mission is to advance our knowledge of the Universe by providing the international Gemini Community with forefront access to the entire sky.

    The Gemini Observatory is an international collaboration with two identical 8-meter telescopes. The Frederick C. Gillett Gemini Telescope is located on Mauna Kea, Hawai’i (Gemini North) and the other telescope on Cerro Pachón in central Chile (Gemini South); together the twin telescopes provide full coverage over both hemispheres of the sky. The telescopes incorporate technologies that allow large, relatively thin mirrors, under active control, to collect and focus both visible and infrared radiation from space.

    The Gemini Observatory provides the astronomical communities in six partner countries with state-of-the-art astronomical facilities that allocate observing time in proportion to each country’s contribution. In addition to financial support, each country also contributes significant scientific and technical resources. The national research agencies that form the Gemini partnership include: the US National Science Foundation (NSF), the Canadian National Research Council (NRC), the Chilean Comisión Nacional de Investigación Cientifica y Tecnológica (CONICYT), the Australian Research Council (ARC), the Argentinean Ministerio de Ciencia, Tecnología e Innovación Productiva, and the Brazilian Ministério da Ciência, Tecnologia e Inovação. The observatory is managed by the Association of Universities for Research in Astronomy, Inc. (AURA) under a cooperative agreement with the NSF. The NSF also serves as the executive agency for the international partnership.

     
  • richardmitnick 10:27 pm on March 3, 2017 Permalink | Reply
    Tags: Large Synoptic Survey Telescope (LSST)

    From Universe Today: “Rise of the Super Telescopes”: LSST 


    Universe Today

    3 March 2017
    Evan Gough

    LSST
    LSST/Camera, built at SLAC
    LSST Interior
    LSST telescope, currently under construction on Cerro Pachón, a 2,682-meter-high mountain in Chile’s Coquimbo Region, alongside the existing Gemini South and Southern Astrophysical Research telescopes

    Gemini South telescope, Cerro Tololo Inter-American Observatory (CTIO) campus near La Serena, Chile

    NOAO/Southern Astrophysical Research Telescope (SOAR), situated on Cerro Pachón, IV Región, Chile, at 2,700 meters (8,775 feet)

    While the world’s other Super Telescopes rely on huge mirrors to do their work, the LSST is different. It’s a huge panoramic camera that will create an enormous moving image of the Universe. And its work will be guided by three words: wide, deep, and fast.

    While other telescopes capture static images, the LSST will capture richly detailed images of the entire available night sky, over and over. This will allow astronomers to basically “watch” the movement of objects in the sky, night after night. And the imagery will be available to anyone.

    The LSST is being built by a group of institutions in the US, and even got some money from Bill Gates. It will be situated atop Cerro Pachón, a peak in northern Chile. The Gemini South and Southern Astrophysical Research telescopes are also situated there.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    STEM Education Coalition

     
  • richardmitnick 2:28 pm on January 14, 2017 Permalink | Reply
    Tags: Large Synoptic Survey Telescope (LSST), Twinkles

    From Symmetry: “Twinkle, twinkle, little supernova” 

    Symmetry

    01/12/17
    Ricarda Laasch

    Phil Marshall, SLAC

    Using Twinkles, the new simulation of images of our night sky, scientists get ready for a gigantic cosmological survey unlike any before.

    Almost every worthwhile performance is preceded by a rehearsal, and scientific performances are no exception. Engineers test a car’s airbag deployment using crash test dummies before incorporating them into the newest model. Space scientists fire a rocket booster in a test environment before attaching it to a spacecraft in flight.

    One of the newest “training grounds” for astrophysicists is called Twinkles. The Twinkles dataset, which has not yet been released, consists of thousands of simulated, highly realistic images of the night sky, full of supernovae and quasars. The simulated-image database will help scientists rehearse a future giant cosmological survey called LSST.

    LSST
    LSST/Camera, built at SLAC
    LSST Interior
    LSST telescope, currently under construction on Cerro Pachón, a 2,682-meter-high mountain in Chile’s Coquimbo Region, alongside the existing Gemini South and Southern Astrophysical Research telescopes.

    LSST, short for the Large Synoptic Survey Telescope, is under construction in Chile and will conduct a 10-year survey of our universe, covering the entire southern sky once a year. Scientists will use LSST images to explore our galaxy to learn more about supernovae and to shine a light on the mysterious dark energy that is responsible for the expansion of our universe.

    It’s a tall order, and it needs a well-prepared team. Scientists designed LSST using simulations and predictions for its scientific capabilities. But Twinkles’ thousands of images will give them an even better chance to see how accurately their LSST analysis tools can measure the changing brightness of supernovae and quasars. That’s the advantage of using simulated data. Scientists don’t know about all the objects in the sky above our heads, but they do know their simulated sky—there, they already know the answers. If the analysis tools make a calculation error, they’ll see it.
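    Because the injected brightness of every simulated object is known, the core validation step can be as simple as comparing recovered values against truth. The sketch below is a minimal illustration with hypothetical inputs and loader functions; the real Twinkles validation is considerably richer.

```python
# Minimal truth-vs-recovered check for simulated objects (illustrative).
import numpy as np

def validate(truth_mags, measured_mags, tol_mag: float = 0.02):
    """Report the pipeline's overall bias and scatter, and flag large misestimates."""
    residuals = np.asarray(measured_mags) - np.asarray(truth_mags)
    bias = residuals.mean()
    scatter = residuals.std()
    outliers = np.flatnonzero(np.abs(residuals) > tol_mag)
    return bias, scatter, outliers

# Usage sketch (hypothetical loaders):
#   truth = load_twinkles_truth(); measured = run_lsst_pipeline(images)
#   bias, scatter, outliers = validate(truth, measured)
```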

    The findings will be a critical addition to LSST’s measurements of certain cosmological parameters, where a small deviation can have a huge impact on the outcome.

    “We want to understand the whole path of the light: From other galaxies through space to our solar system and our planet, then through our atmosphere to the telescope – and from there through our data-taking system and image processing,” says Phil Marshall, a scientist at the US Department of Energy’s SLAC National Accelerator Laboratory who leads the Twinkles project. “Twinkles is our way to go all the way back and study the whole picture instead of one single aspect.”

    Scientists simulate the images as realistically as possible to figure out if some systematic errors add up or intertwine with each other. If they do, it could create unforeseen problems, and scientists of course want to deal with them before LSST starts.

    Twinkles also lets scientists practice sorting out a different kind of problem: a large collaboration spread across the whole globe that will perform numerous scientific searches simultaneously on the same massive amounts of data.

    Richard Dubois, senior scientist at SLAC and co-leader of the software infrastructure team, works with his team of computing experts to create methods and plans for handling the data coherently across the whole collaboration, and to advise the scientists on specific tools that will make their lives easier.

    “Chaos is a real danger; so we need to keep it in check,” Dubois says. “So with Twinkles, we test software solutions and databases that help us to keep our heads above water.”

    The first test analysis using Twinkles images will start toward the end of the year. During the first go, scientists will extract Type Ia supernovae and quasars and learn how to interpret the automated LSST measurements.

    “We hid both types of objects in the Twinkles data,” Marshall says. “Now we can see whether they look the way they’re supposed to.”

    LSST will start up in 2022, and the first LSST data will be released at the end of 2023.

    “High accuracy cosmology will be hard,” Marshall says. “So we want to be ready to start learning more about our universe right away!”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.


     
  • richardmitnick 2:40 pm on November 25, 2016 Permalink | Reply
    Tags: , , GridPP, Large Synoptic Survey Telescope (LSST), Shear brilliance: computing tackles the mystery of the dark universe,   

    From U Manchester: “Shear brilliance: computing tackles the mystery of the dark universe” 

    U Manchester bloc

    University of Manchester

    24 November 2016
    No writer credit found

    Scientists from The University of Manchester working on a revolutionary telescope project have harnessed the power of distributed computing from the UK’s GridPP collaboration to tackle one of the Universe’s biggest mysteries – the nature of dark matter and dark energy.

    Researchers at The University of Manchester have used resources provided by GridPP – who represent the UK’s contribution to the computing grid used to find the Higgs boson at CERN – to run image processing and machine learning algorithms on thousands of images of galaxies from the international Dark Energy Survey.

    Dark Energy Icon

    The Manchester team are part of the collaborative project to build the Large Synoptic Survey Telescope (LSST), a new kind of telescope currently under construction in Chile and designed to conduct a 10-year survey of the dynamic Universe. LSST will be able to map the entire visible sky.

    LSST/Camera, built at SLAC
    LSST Interior
    LSST telescope, currently under construction at Cerro Pachón Chile

    In preparation for the LSST starting its revolutionary scanning, a pilot research project has helped researchers detect and map out the cosmic shear seen across the night sky, one of the tell-tale signs of the dark matter and dark energy thought to make up some 95 per cent of the content of the Universe. This in turn will help prepare for the analysis of the expected 200 petabytes of data the LSST will collect when it starts operating in 2023.

    The pilot research team based at The University of Manchester was led by Dr Joe Zuntz, a cosmologist originally at Manchester’s Jodrell Bank Observatory and now a researcher at the Royal Observatory in Edinburgh.

    “Our overall aim is to tackle the mystery of the dark universe – and this pilot project has been hugely significant. When the LSST is fully operating researchers will face a galactic data deluge – and our work will prepare us for the analytical challenge ahead.”
    Sarah Bridle, Professor of Astrophysics

    Dr George Beckett, the LSST-UK Science Centre Project Manager based at The University of Edinburgh, added: “The pilot has been a great success. Having completed the work, Joe and his colleagues are able to carry out shear analysis on vast image sets much faster than was previously the case. Thanks are due to the members of the GridPP community for their assistance and support throughout.”

    The LSST will produce images of galaxies in a wide variety of frequency bands of the visible electromagnetic spectrum, with each image giving different information about the galaxy’s nature and history. In times gone by, the measurements needed to determine properties like cosmic shear might have been done by hand, or at least with human-supervised computer processing.

    With the billions of galaxies expected to be observed by LSST, such approaches are unfeasible. Specialised image processing and machine learning software (Zuntz 2013) has therefore been developed for use with galaxy images from telescopes like LSST and its predecessors. This can be used to produce cosmic shear maps like those shown in the figure below. The challenge then becomes one of processing and managing the data for hundreds of thousands of galaxies and extracting scientific results required by LSST researchers and the wider astrophysics community.

    As each galaxy is essentially independent of other galaxies in the catalogue, the image processing workflow itself is highly parallelisable. This makes it an ideal problem to tackle with the kind of High-Throughput Computing (HTC) resources and infrastructure offered by GridPP. In many ways, the data from CERN’s Large Hadron Collider particle collision events is like that produced by a digital camera (indeed, pixel-based detectors are used near the interaction points) – and GridPP regularly processes billions of such events as part of the Worldwide LHC Computing Grid (WLCG).
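
    Because every galaxy can be measured on its own, the workflow is a textbook "embarrassingly parallel" job. Below is a minimal single-machine sketch of that pattern in Python; the measurement function is a trivial stand-in, not the IM3SHAPE code, and on GridPP the same idea is spread over thousands of grid worker nodes rather than local processes.

```python
# Single-machine analogue of the high-throughput pattern used on the grid:
# each galaxy cutout is processed independently by a pool of workers.
# measure_galaxy() is a placeholder, not the real shear-measurement code.
from multiprocessing import Pool
import numpy as np

def measure_galaxy(image):
    """Placeholder measurement: total flux and a crude size estimate."""
    img = np.asarray(image, dtype=float)
    return float(img.sum()), float(img.std())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cutouts = [rng.random((32, 32)) for _ in range(1000)]  # fake postage stamps
    with Pool() as pool:
        results = pool.map(measure_galaxy, cutouts)        # one task per galaxy
    print(len(results), "galaxies measured")
```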

    A pilot exercise, led by Dr Joe Zuntz while at The University of Manchester and supported by one of the longest serving and most experienced GridPP experts, Senior System Administrator Alessandra Forti, saw the porting of the image analysis workflow to GridPP’s distributed computing infrastructure. Data from the Dark Energy Survey (DES) was used for the pilot.

    After transferring this data from the US to GridPP Storage Elements, and enabling the LSST Virtual Organisation on a number of GridPP Tier-2 sites, the IM3SHAPE analysis software package (Zuntz, 2013) was tested on local, grid-friendly client machines to ensure smooth running on the grid. Analysis jobs were then submitted and managed using the Ganga software suite, which is able to coordinate the thousands of individual analyses associated with each batch of galaxies. Initial runs were submitted using Ganga to local grid sites, but the pilot progressed to submission to multiple sites via the GridPP DIRAC (Distributed Infrastructure with Remote Agent Control) service. The flexibility of Ganga allows both types of submission, which made the transition from local to distributed running significantly easier.
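
    For a sense of what that submission layer looks like, here is a heavily hedged sketch in the style of a Ganga session script. The class names (Job, Executable, GenericSplitter, Dirac) follow Ganga's documented interface, but the wrapper script, the batch files and the exact options are assumptions made for illustration, not the pilot's actual configuration.

```python
# Sketch of a Ganga-style submission, meant to be run inside a Ganga
# session where Job, Executable, GenericSplitter and Dirac are provided.
# File names and the wrapper script are hypothetical.
batches = ["des_batch_%04d.txt" % i for i in range(1000)]   # lists of galaxy cutouts (assumed)

j = Job(name="im3shape-des-pilot")
j.application = Executable(exe="run_im3shape.sh")            # hypothetical wrapper around IM3SHAPE
j.splitter = GenericSplitter()
j.splitter.attribute = "application.args"
j.splitter.values = [[b] for b in batches]                   # one subjob per batch of galaxies
j.backend = Dirac()                                          # route subjobs through the GridPP DIRAC service
j.submit()
```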

    By the end of the pilot, Dr Zuntz was able to run the image processing workflow on multiple GridPP sites, regularly submitting thousands of analysis jobs on DES images.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    U Manchester campus

    The University of Manchester (UoM) is a public research university in the city of Manchester, England, formed in 2004 by the merger of the University of Manchester Institute of Science and Technology (renamed in 1966, est. 1956 as Manchester College of Science and Technology) which had its ultimate origins in the Mechanics’ Institute established in the city in 1824 and the Victoria University of Manchester founded by charter in 1904 after the dissolution of the federal Victoria University (which also had members in Leeds and Liverpool), but originating in Owens College, founded in Manchester in 1851. The University of Manchester is regarded as a red brick university, and was a product of the civic university movement of the late 19th century. It formed a constituent part of the federal Victoria University between 1880, when it received its royal charter, and 1903–1904, when it was dissolved.

    The University of Manchester is ranked 33rd in the world by QS World University Rankings 2015-16. In the 2015 Academic Ranking of World Universities, Manchester is ranked 41st in the world and 5th in the UK. In an employability ranking published by Emerging in 2015, where CEOs and chairmen were asked to select the top universities which they recruited from, Manchester placed 24th in the world and 5th nationally. The Global Employability University Ranking conducted by THE places Manchester at 27th world-wide and 10th in Europe, ahead of academic powerhouses such as Cornell, UPenn and LSE. It is ranked joint 56th in the world and 18th in Europe in the 2015-16 Times Higher Education World University Rankings. In the 2014 Research Excellence Framework, Manchester came fifth in terms of research power and seventeenth for grade point average quality when including specialist institutions. More students try to gain entry to the University of Manchester than to any other university in the country, with more than 55,000 applications for undergraduate courses in 2014 resulting in 6.5 applicants for every place available. According to the 2015 High Fliers Report, Manchester is the most targeted university by the largest number of leading graduate employers in the UK.

    The university owns and operates major cultural assets such as the Manchester Museum, Whitworth Art Gallery, John Rylands Library and Jodrell Bank Observatory which includes the Grade I listed Lovell Telescope.

     
  • richardmitnick 9:32 am on October 26, 2016 Permalink | Reply
    Tags: , Large Synoptic Survey Telescope (LSST), Scheduling algorithm for LSST   

    From Harvard John A Paulson School of Engineering and Applied Sciences: “Eye on the sky” 

    Harvard School of Engineering and Applied Sciences
    harvard John A Paulson School of Engineering and Applied Sciences

    October 26, 2016
    Adam Zewe

    Student uses computer science to chart a course for massive telescope

    When it begins operating in 2022, the $500 million Large Synoptic Survey Telescope (LSST) will capture some of the sharpest night sky images ever produced, giving scientists an unprecedented view of near-Earth asteroids, supernovae, and the Milky Way galaxy.

    But the telescope, under construction atop a peak in Chile’s northern Andes, also presents an unprecedented challenge for astrophysicists—it will require a complicated scheduling algorithm to determine where to point the telescope as it traces the sky. To Harvard student Daniel Rothchild, that sounded like a puzzle he could solve.

    “This is not a well-studied problem in astrophysics because there has never been a telescope that behaved like this,” said Rothchild, A.B. ’17, a physics concentrator who is pursuing a secondary in computer science at the John A. Paulson School of Engineering and Applied Sciences. “But scheduling is a well-studied problem in computer science. It is very important that the scheduler be effective, or the telescope is not going to be looking at the places that will yield the best data.”

    Working with Christopher Stubbs, Samuel C. Moncher Professor of Physics and Astronomy, who is a contributor to the LSST project, Rothchild launched an independent research project to develop a scheduling algorithm that would be effective in this unique situation.

    LSST/Camera, built at SLAC
    LSST Interior
    LSST telescope, currently under construction at Cerro Pachón Chile

    The LSST, which will image the entire night sky every three days, will stop at each point for 30 seconds before moving on to a new field. A more complicated algorithm needs more calculation time, and that could easily bog down the telescope’s progress. The algorithm must also overcome the challenge of determining the “best” place for the telescope to look, when there are literally 10 billion possibilities.

    “How do you decide if Milky Way astronomy is more important than asteroid science on this particular 30-second exposure?” Rothchild asked. “It’s very difficult for scientists to say, here’s an exact quantification of how important these different areas are.”

    Rather than using machine-learning or mathematical merit functions to determine the ideal next field, Rothchild is writing code that will give the telescope a baseline optimal path to follow, along with instructions for how to respond when faced with adverse weather and unexpected downtime.

    Programming a set path for the entire 10-year span of the project allows scientists to explicitly optimize global properties of the telescope’s data, instead of hoping the merit functions or machine-learning algorithms will perform those optimizations themselves, he said. It also eliminates the headaches of trying to determine why the computer pointed the telescope at a certain location, or troubleshooting a machine-learning algorithm that seems to be aiming the telescope far off the best course.

    “There are certain astronomical elements that are fixed, even 10 years out. We know the moon will be moving a certain way and the stars will appear in specific patterns and locations, and we also know the meridian is generally the best place to point the telescope because there is the least amount of air overhead,” he said. “By programming these considerations into the scheduler explicitly, I hope to create an algorithm that will produce better schedules than those produced with existing methods.”

    His code lays out a path for the telescope to follow using a combination of astronomical data and meteorological predictions. Rothchild’s method involves much faster calculations than other scheduler algorithms because there are no machine-learning elements.
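
    To make the flavour of such a precomputed, rule-based scorer concrete, here is a rough sketch in Python. The weights and scoring functions are invented for illustration and are not Rothchild's algorithm; they simply encode two of the fixed considerations mentioned above, staying near the meridian and away from the Moon.

```python
# Toy field scorer: rank candidate fields for the next 30-second visit
# using fixed, precomputable considerations. Weights are invented.
def score_field(hour_angle_deg, moon_separation_deg, w_meridian=1.0, w_moon=0.5):
    # Pointing near the meridian minimises the amount of air overhead.
    meridian_term = -w_meridian * abs(hour_angle_deg) / 90.0
    # Penalise fields within ~60 degrees of the Moon.
    moon_term = -w_moon * max(0.0, 1.0 - moon_separation_deg / 60.0)
    return meridian_term + moon_term

def next_field(candidates):
    """candidates: list of (field_id, hour_angle_deg, moon_separation_deg)."""
    return max(candidates, key=lambda c: score_field(c[1], c[2]))[0]

fields = [("A", 5.0, 80.0), ("B", 40.0, 120.0), ("C", 2.0, 15.0)]
print(next_field(fields))  # "A": close to the meridian and well clear of the Moon
```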

    Several other researchers are working on schedulers, and all have taken a slightly different approach. Once the telescope hardware is complete, the LSST leadership team will test each scheduler and select the one to use.

    Though he still has six years to wait before the LSST has its eye on the sky, Rothchild is excited for the opportunity to contribute to such a significant astrophysics project.

    “The LSST will produce about 15 terabytes of data each night for 10 years. By comparison, the Hubble telescope produces 10 terabytes of data in one year,” he said. “This project is going to enable scientists to take precision measurements of the universe in an unprecedented way. It is very cool to be a part of that.”

    1
    Currently under construction in Chile, the LSST will incorporate the world’s largest digital camera. (Photo credit: LSST.)

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Through research and scholarship, the Harvard School of Engineering and Applied Sciences (SEAS) will create collaborative bridges across Harvard and educate the next generation of global leaders. By harnessing the power of engineering and applied sciences we will address the greatest challenges facing our society.

    Specifically, that means that SEAS will provide to all Harvard College students an introduction to and familiarity with engineering and technology as this is essential knowledge in the 21st century.

    Moreover, our concentrators will be immersed in the liberal arts environment and be able to understand the societal context for their problem solving, capable of working seamlessly with others, including those in the arts, the sciences, and the professional schools. They will focus on the fundamental engineering and applied science disciplines for the 21st century; as we will not teach legacy 20th century engineering disciplines.

    Instead, our curriculum will be rigorous but inviting to students, and be infused with active learning, interdisciplinary research, entrepreneurship and engineering design experiences. For our concentrators and graduate students, we will educate “T-shaped” individuals – with depth in one discipline but capable of working seamlessly with others, including arts, humanities, natural science and social science.

    To address current and future societal challenges, knowledge from fundamental science, art, and the humanities must all be linked through the application of engineering principles with the professions of law, medicine, public policy, design and business practice.

    In other words, solving important issues requires a multidisciplinary approach.

    With the combined strengths of SEAS, the Faculty of Arts and Sciences, and the professional schools, Harvard is ideally positioned to both broadly educate the next generation of leaders who understand the complexities of technology and society and to use its intellectual resources and innovative thinking to meet the challenges of the 21st century.

    Ultimately, we will provide to our graduates a rigorous quantitative liberal arts education that is an excellent launching point for any career and profession.

     
  • richardmitnick 3:20 pm on September 5, 2014 Permalink | Reply
    Tags: , , , , Large Synoptic Survey Telescope (LSST),   

    From Quanta via FNAL: “A Digital Copy of the Universe, Encrypted” 2013 

    Quanta Magazine

    October 2, 2013
    Natalie Wolchover

    Even as he installed the landmark camera that would capture the first convincing evidence of dark energy in the 1990s, Tony Tyson, an experimental cosmologist now at the University of California, Davis, knew it could be better. The camera’s power lay in its ability to collect more data than any other. But digital image sensors and computer processors were progressing so rapidly that the amount of data they could collect and store would soon be limited only by the size of the telescopes delivering light to them, and those were growing too. Confident that engineering trends would hold, Tyson envisioned a telescope project on a truly grand scale, one that could survey hundreds of attributes of billions of cosmological objects as they changed over time.

    It would record, Tyson said, “a digital, color movie of the universe.”

    Tyson’s vision has come to life as the Large Synoptic Survey Telescope (LSST) project, a joint endeavor of more than 40 research institutions and national laboratories that has been ranked by the National Academy of Sciences as its top priority for the next ground-based astronomical facility. Set on a Chilean mountaintop, and slated for completion by the early 2020s, the 8.4-meter LSST will be equipped with a 3.2-billion-pixel digital camera that will scan 20 billion cosmological objects 800 times apiece over the course of a decade. That will generate well over 100 petabytes of data that anyone in the United States or Chile will be able to peruse at will. Displaying just one of the LSST’s full-sky images would require 1,500 high-definition TV screens.
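
    The 1,500-screen figure is simple pixel counting, assuming "high definition" means a 1920 × 1080 display:

```python
# Back-of-envelope check of the "1,500 HD screens" figure.
camera_pixels = 3.2e9               # LSST camera: 3.2 billion pixels
hd_screen_pixels = 1920 * 1080      # ~2.07 million pixels per screen
print(camera_pixels / hd_screen_pixels)   # ~1,540 screens
```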

    LSST

    LSST Exterior, Camera, Interior

    The LSST epitomizes the new era of big data in physics and astronomy. Less than 20 years ago, Tyson’s cutting-edge digital camera filled 5 gigabytes of disk space per night with revelatory information about the cosmos. When the LSST begins its work, it will collect that amount every few seconds — literally more data than scientists know what to do with.

    Tony Tyson, an experimental cosmologist at the University of California, Davis, with a small test camera for the Large Synoptic Survey Telescope project, which he is helping to launch.
    Peter DaSilva for Quanta Magazine

    “The data volumes we [will get] out of LSST are so large that the limitation on our ability to do science isn’t the ability to collect the data, it’s the ability to understand the systematic uncertainties in the data,” said Andrew Connolly, an astronomer at the University of Washington.

    Typical of today’s costly scientific endeavors, hundreds of scientists from different fields are involved in designing and developing the LSST, with Tyson as chief scientist. “It’s sort of like a federation,” said Kirk Borne, an astrophysicist and data scientist at George Mason University. The group comprises nearly 700 astronomers, cosmologists, physicists, engineers and data scientists.

    Much of the scientists’ time and about one-half of the $1 billion cost of the project are being spent on developing software rather than hardware, reflecting the exponential growth of data since the astronomy projects of the 1990s. For the telescope to be useful, the scientists must answer a single question. As Borne put it: “How do you turn petabytes of data into scientific knowledge?”

    Physics has been grappling with huge databases longer than any other field of science because of its reliance on high-energy machines and enormous telescopes to probe beyond the known laws of nature. This has given researchers a steady succession of models upon which to structure and organize each next big project, in addition to providing a starter kit of computational tools that must be modified for use with ever larger and more complex data sets.

    Even backed by this tradition, the LSST tests the limits of scientists’ data-handling abilities. It will be capable of tracking the effects of dark energy, which is thought to make up a whopping 68 percent of the total contents of the universe, and mapping the distribution of dark matter, an invisible substance that accounts for an additional 27 percent. And the telescope will cast such a wide and deep net that scientists say it is bound to snag unforeseen objects and phenomena too. But many of the tools for disentangling them from the rest of the data don’t yet exist.

    New Dimensions

    Particle physics is the elder statesman of big data science. For decades, high-energy particle accelerators have been bashing particles together millions of times per second in hopes of generating exotic, never-before-seen particles. These facilities, such as the Large Hadron Collider (LHC) at CERN laboratory in Switzerland, generate so much data that only a tiny fraction (deemed interesting by an automatic selection process) can be kept. A network of hundreds of thousands of computers spread across 36 countries called the Worldwide LHC Computing Grid stores and processes the 25 petabytes of LHC data that were archived in a year’s worth of collisions. The work of thousands of physicists went into finding the bump in that data that last summer was deemed representative of a new subatomic particle, the Higgs boson.

    CERN, the organization that operates the LHC, is sharing its wisdom by working with other research organizations “so they can benefit from the knowledge and experience that has been gathered in data acquisition, processing and storage,” said Bob Jones, head of CERN openlab, which develops new IT technologies and techniques for the LHC. Scientists at the European Space Agency, the European Molecular Biology Laboratory, other physics facilities and even collaborations in the social sciences and humanities have taken cues from the LHC on data handling, Jones said.

    When the LHC turns back on in 2014 or 2015 after an upgrade, higher energies will mean more interesting collisions, and the amount of data collected will grow by a significant factor. But even though the LHC will continue to possess the biggest data set in physics, its data is much simpler than those obtained from astronomical surveys such as the Sloan Digital Sky Survey and Dark Energy Survey and — to an even greater extent — those that will be obtained from future sky surveys such as the Square Kilometer Array, a radio telescope project set to begin construction in 2016, and the LSST.

    Sloan Digital Sky Survey Telescope
    SDSS Telescope

    DECam

    SKA Square Kilometer Array

    “The LHC generates a lot more data right at the beginning, but they’re only looking for certain events in that data and there’s no correlation between events in that data,” said Jeff Kantor, the LSST data management project manager. “Over time, they still build up large sets, but each one can be individually analyzed.”

    In combining repeat exposures of the same cosmological objects and logging hundreds rather than a handful of attributes of each one, the LSST will have a whole new set of problems to solve. “It’s the complexity of the LSST data that’s a challenge,” Tyson said. “You’re swimming around in this 500-dimensional space.”

    From color to shape, roughly 500 attributes will be recorded for every one of the 20 billion objects surveyed, and each attribute is treated as a separate dimension in the database. Merely cataloguing these attributes consistently from one exposure of a patch of the sky to the next poses a huge challenge. “In one exposure, the scene might be clear enough that you could resolve two different galaxies in the same spot, but in another one, they might be blurred together,” Kantor said. “You have to figure out if it’s one galaxy or two or N.”

    Beyond N-Squared

    To tease scientific discoveries out of the vast trove of data gathered by the LSST and other sky surveys, scientists will need to pinpoint unexpected relationships between attributes, which is extremely difficult in 500 dimensions. Finding correlations is easy with a two-dimensional data set: If two attributes are correlated, then there will be a one-dimensional curve connecting the data points on a two-dimensional plot of one attribute versus the other. But additional attributes plotted as extra dimensions obscure such curves. “Finding the unexpected in a higher-dimensional space is impossible using the human brain,” Tyson said. “We have to design future computers that can in some sense think for themselves.”

    Algorithms exist for “reducing the dimensionality” of data, or finding surfaces on which the data points lie (like that 1-D curve in the 2-D plot), in order to find correlated dimensions and eliminate “nuisance” ones. For example, an algorithm might identify a 3-D surface of data points coursing through a database, indicating that three attributes, such as the type, size and rotation speed of galaxies, are related. But when swamped with petabytes of data, the algorithms take practically forever to run.
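
    As a toy illustration of what "reducing the dimensionality" means in practice, the sketch below plants one hidden correlation among three of 500 mock attributes and recovers it with principal component analysis. PCA is just one standard technique, chosen here because it is simple; it is not necessarily what LSST analyses will use, and at LSST scale even this step becomes expensive.

```python
# Toy dimensionality reduction: find three correlated attributes hidden
# in a 500-dimensional mock catalogue using PCA (scikit-learn).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
n_objects, n_attributes = 20_000, 500
catalogue = rng.normal(size=(n_objects, n_attributes))

# Hide a correlated structure: attributes 10, 11 and 12 all track one hidden factor.
hidden = rng.normal(size=n_objects)
catalogue[:, 10] += 3 * hidden
catalogue[:, 11] += 3 * hidden
catalogue[:, 12] -= 3 * hidden

pca = PCA(n_components=5).fit(catalogue)
# The strongest component should load almost entirely on attributes 10-12.
top = np.argsort(np.abs(pca.components_[0]))[-3:]
print("attributes driving the strongest component:", sorted(top.tolist()))
```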

    Identifying correlated dimensions is exponentially more difficult than looking for a needle in a haystack. “That’s a linear problem,” said Alex Szalay, a professor of astronomy and computer science at Johns Hopkins University. “You search through the haystack and whatever looks like a needle you throw in one bucket and you throw everything else away.” When you don’t know what correlations you’re looking for, however, you must compare each of the N pieces of hay with every other piece, which takes N-squared operations.

    Adding to the challenge is the fact that the amount of data is doubling every year. “Imagine we are working with an algorithm that if my data doubles, I have to do four times as much computing and then the following year, I have to do 16 times as much computing,” Szalay said. “But by next year, my computers will only be twice as fast, and in two years from today, my computers will only be four times as fast, so I’m falling farther and farther behind in my ability to do this.”

    A huge amount of research has gone into developing scalable algorithms, with techniques such as compressed sensing, topological analysis and the maximal information coefficient emerging as especially promising tools of big data science. But more work remains to be done before astronomers, cosmologists and physicists will be ready to fully exploit the multi-petabyte digital movie of the universe that premiers next decade. Progress is hampered by the fact that researchers in the physical sciences get scant academic credit for developing algorithms — a problem that the community widely recognizes but has yet to solve.

    “It’s always been the case that the people who build the instrumentation don’t get as much credit as the people who use the instruments to do the cutting-edge science,” Connolly said. “Ten years ago, it was people who built physical instruments — the cameras that observe the sky — and today, it’s the people who build the computational instruments who don’t get enough credit. There has to be a career path for someone who wants to work on the software — because they can go get jobs at Google. So if we lose these people, it’s the science that loses.”

    Coffee and Kudos

    In December 2010, in an effort to encourage the development of better algorithms, an international group of astronomers issued a challenge to computer geeks everywhere: What is the best way to measure gravitational lensing, or the distorting effect that dark matter has on the light from distant galaxies? David Kirkby read about the GREAT10 (GRavitational lEnsing Accuracy Testing 2010) Challenge on Wired.com and decided to give it a go.

    David Kirkby, a physicist at the University of California, Irvine, holds an observing plate designed to capture data for a specific circular patch of the sky. Peter DaSilva for Quanta Magazine

    Kirkby, a physicist at the University of California, Irvine, and his graduate student won the contest using a modified version of a neural network algorithm that he had previously developed for the BABAR experiment, a large physics collaboration investigating the asymmetry of matter and antimatter. The victory earned Kirkby a co-author credit on the recent paper detailing the contest, easing his switch from the field of particle physics to astrophysics. Also, with the prize money, “we bought a top of the line espresso machine for the lab,” he said.

    GREAT10 was one of a growing number of “data challenges” designed to find solutions to specific problems faced in creating and analyzing large physics and astronomy databases, such as the best way to reconstruct the shapes of two galaxies that are aligned relative to Earth and so appear blended together.

    “One group produces a set of data — it could be blended galaxies — and then anybody can go out and try and estimate the shape of the galaxies using their best algorithm,” explained Connolly, who is involved in generating simulations of future LSST images that are used to test the performance of algorithms. “It’s quite a lot of kudos to the person who comes out on top.”

    Many of the data challenges, including the GREAT series, focus on teasing out the effects of dark matter. When light from a distant galaxy travels to Earth, it is bent, or “lensed,” by the gravity of the dark matter it passes through. “It’s a bit like looking at wallpaper through a bathroom window with a rough surface,” Kirkby said. “You determine what the wallpaper would look like if you were looking at it directly, and you use that information to figure out what the shape of the glass is.”
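
    At the core of any such shape-measurement algorithm is an estimate of each galaxy's ellipticity, since the tiny coherent distortion from lensing only emerges when the randomly oriented intrinsic shapes of many galaxies are averaged away. The sketch below shows the simplest moments-based version of that idea; real codes such as IM3SHAPE fit PSF-convolved galaxy models and handle noise far more carefully.

```python
# Toy shape measurement: ellipticity from weighted second moments of a
# postage-stamp image. Real lensing codes do much more than this.
import numpy as np

def ellipticity(stamp):
    img = np.clip(np.asarray(stamp, dtype=float), 0, None)
    total = img.sum()
    y, x = np.indices(img.shape)
    xc, yc = (x * img).sum() / total, (y * img).sum() / total
    qxx = ((x - xc) ** 2 * img).sum() / total
    qyy = ((y - yc) ** 2 * img).sum() / total
    qxy = ((x - xc) * (y - yc) * img).sum() / total
    return (qxx - qyy) / (qxx + qyy), 2 * qxy / (qxx + qyy)   # (e1, e2)

# Example: a blob elongated along the x-axis gives e1 > 0.
yy, xx = np.indices((64, 64))
blob = np.exp(-(((xx - 32) / 8.0) ** 2 + ((yy - 32) / 4.0) ** 2) / 2)
print(ellipticity(blob))
```

    Averaging these (e1, e2) values over patches of sky is what turns individual galaxy shapes into a cosmic shear map.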

    Each new data challenge in a series includes an extra complication — additional distortions caused by atmospheric turbulence or a faulty amplifier in one of the detectors, for example — moving the goal posts of the challenge closer and closer to reality.

    Data challenges are “a great way of crowd-sourcing problems in data science, but I think it would be good if software development was just recognized as part of your productivity as an academic,” Kirkby said. “At career reviews, you measure people based on their scientific contributions even though software packages could have a much broader impact.”

    The culture is slowly changing, the scientists said, as the ability to analyze data becomes an ever-tightening bottleneck in research. “In the past, it was usually some post-doc or grad student poring over data who would find something interesting or something that doesn’t seem to work and stumble across some new effect,” Tyson said. “But increasingly, the amount of data is so large that you have to have machines with algorithms to do this.”

    Dark Side of the Universe

    Assuming that physicists can solve the computing problems they face with the LSST, the results could be transformative. There are many reasons to want a 100-petabyte digital copy of the universe. For one, it would help map the expansion of space and time caused by the still-mysterious dark energy, discovered with the help of the LSST’s predecessor, the Big Throughput Camera, which Tyson and a collaborator built in 1996.

    When that camera, which could cover a patch of the sky the size of a full moon in a single exposure, was installed on the Blanco Telescope in Chile, astrophysicists immediately discovered dozens of exploding stars called Type Ia supernovae strewn across the sky that revealed that most stuff in the universe is unknown. Light from nearby supernovae appeared to have stretched more than it should have during its journey through the expanding cosmos compared with light from faraway ones. This suggested that the expansion of the universe had recently sped up, driven by dark energy.

    CTIO Victor M Blanco 4m Telescope
    CTIO Victor M Blanco 4m Telescope interior

    With the LSST, scientists hope to precisely track the accelerating expansion of the universe and thus to better define the nature of dark energy. They aim to do this by mapping a sort of cosmic yardstick called baryon acoustic oscillations. The yardstick was created from sound waves that rippled through the universe when it was young and hot and became imprinted in the distribution of galaxies as it cooled and expanded. The oscillations indicate the size of space at every distance away from Earth — and thus at any point back in time.

    Baryon acoustic oscillations are so enormous that a truly vast astronomical survey is needed to make them a convenient measuring tool. By cataloguing billions of galaxies, the LSST promises to measure the size of these resonances more accurately than any other existing or planned astronomical survey. “The idea is that with the LSST, we will have onion shells of galaxies at different distances and we can look for this pattern and trace the size of the resonant patterns as a function of time,” Szalay said. “This will be beautiful.”

    But, Szalay added, “it will be a nontrivial task to actually milk the information out of the data.”
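
    At its most naive, "milking" the BAO signal out of a galaxy catalogue means histogramming pair separations and looking for a preferred scale near 150 megaparsecs (comoving). The sketch below shows that brute-force idea on a mock catalogue; real analyses use the Landy-Szalay estimator with random comparison catalogues and tree-based pair counting, precisely because the naive approach scales as N-squared.

```python
# Brute-force pair-separation histogram for a mock galaxy catalogue.
# A BAO feature would appear as a bump near ~150 Mpc; this toy catalogue
# is uniform, so no bump is expected.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
galaxies = rng.uniform(0, 1000, size=(2000, 3))   # comoving positions in a 1 Gpc box
bins = np.linspace(50, 250, 21)                    # 10 Mpc bins around the BAO scale

separations = pdist(galaxies)                      # all unique pair distances (N-squared pairs)
counts, _ = np.histogram(separations, bins=bins)
print(counts)
```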

    See the full article here.

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell

     