Updates from richardmitnick

  • richardmitnick 2:49 pm on September 27, 2016 Permalink | Reply
    Tags: LBT-Large Binocular Telescope, N6946-BH1, This star was so massive it ate itself before it could go supernova

    From astronomy.com: “This star was so massive it ate itself before it could go supernova” 

    Astronomy magazine


    September 27, 2016
    Eugene Myers

    X-ray: NASA/CXC/MSSL/R.Soria et al, Optical: AURA/Gemini OBs

    A team of scientists may have confirmed the first failed supernova — and in the process witnessed the birth of a black hole.

    Caltech astronomer Scott Adams and his colleagues Christopher Kochanek, Jill Gerke, and Krzystof Stanek of Ohio State University, and Xinyu Dai of the University of Oklahoma, devised a novel observation technique that uses the Large Binocular Telescope (LBT) to identify candidates for failed supernovae, that is, massive stars that have died without the typical spectacular explosion.

    Large Binocular Telescope, Mount Graham, Arizona, USA

    Using the first four years of data from the LBT survey, the team zeroed in on a star in the NGC 6946 galaxy.

    This star, N6946-BH1, caught their attention because in 2009 it flared up to more than one million times the brightness of the Sun, then gradually faded — and vanished. But not without leaving a tantalizing clue to what happened: a faint trace of near-infrared radiation (IR) that is consistent with energy emitted from matter as it spirals into a black hole.

    “If this event really was a failed supernova, it means that we have observed the birth of a new black hole for the first time, which is really quite exciting,” says Adams. “But this finding also has wider implications.”

    Since 2003, many astronomers have accepted the idea that some stars are too massive to go supernova. Because they can’t overcome their own gravity enough to explode, these stars simply extinguish and collapse into black holes. But there hasn’t been any direct evidence that it happens — until now.

    Adams and his team reasoned that if failed supernovae occur in some red supergiants, the largest stars in the universe by volume, the collapse would release a burst of gravitational energy visible as light — like the bright flash that the Hubble Space Telescope (HST) recorded in N6946-BH1 from March to May 2009. They submitted this finding to the Monthly Notices of the Royal Astronomical Society and published a preprint online early this month.

    “It implies that failed supernovae are the solution to the mystery of why higher-mass red supergiants have not been seen as supernova progenitors, and why there is a gap between the mass distributions of neutron stars and black holes,” says Adams.

    There are several other possible, but unlikely, explanations for what they’ve observed in N6946-BH1. Analysis of new and archival photometry data from HST and the Spitzer Space Telescope allows the team to all but rule out the possibility that the star is hidden behind a mask of ejected dust.

    NASA/ESA Hubble Telescope

    NASA/Spitzer Telescope

    They are waiting for more data from the orbiting Chandra X-ray Observatory.

    NASA/Chandra Telescope

    X-ray emissions at the star would confirm the presence of a black hole; however, if X-rays are not detected, the team will continue to monitor it and employ a more powerful telescope to be certain the star has vanished.

    “This unique survey is still ongoing,” says Adams of his team’s work with the LBT. “In an upcoming paper we will present the results of the first seven years of the survey and our constraint on the fraction of core-collapses of massive stars that result in failed supernovae.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    STEM Education Coalition

  • richardmitnick 2:31 pm on September 27, 2016 Permalink | Reply

    From tucson.com via LSST: “Tucson has a major piece of the action on giant telescope” 


    Large Synoptic Survey Telescope



    Tom Beal, Arizona Daily Star


    The Large Synoptic Survey Telescope, the next big thing in astronomy, is a worldwide effort, with a telescope mount from Spain, a dome from Italy, a coating chamber from Germany and a site in the Andean foothills of Chile.

    The heart of the project remains in Tucson.

    The project, known by its initials LSST, was dreamed up in Tucson and retains its headquarters here.

    A variety of local vendors, from the heavy-metal experts at CAID Industries to the electronics assemblers at Beacon Group, have a piece of the financial action.

    The project began with an idea for a novel mirror that combines the primary and tertiary surfaces of the light path on a single piece of glass. That task, conceived and perfected by the astronomical mirror builders and polishers at Steward Observatory and the College of Optical Sciences, is the $20 million heart of a plan to take a 10-year movie of the cosmos.

    Coupled with the largest camera in the world — a $165-million, 3.2-billion-pixel marvel being developed at SLAC National Accelerator Laboratory in Menlo Park, California — it will spend 10 years imaging the entire night sky visible from Chile every three nights and recording all that moves, brightens, darkens, changes or remains constant.

    It will track the orbits of asteroids and watch explosions of supernovae. It will provide a map of the galaxies and clues to the mysterious phenomena of dark matter and dark energy. It will find some things we’ve never seen before.

    Its giant (27.6-foot diameter) mirror, cast in what is now known as the Richard F. Caris Mirror Lab beneath the bleachers of Arizona Stadium, needs to be handled with care and for that, LSST turned to Tucson’s CAID Industries.

    At first, LSST asked CAID to build a box for it — a metal box that would delicately support the mirror’s size and weight without breaking it, as it makes its way from Tucson to a mountaintop in Chile by crane, truck and ship.

    CAID had experience in building shipping boxes for similar-sized mirrors manufactured at the University of Arizona’s mirror lab.

    LSST decided to add to its CAID contract and have it build a mirror cell that will become a permanent part of the telescope. It is currently milling and machining that 57,000-pound piece of weathering steel.

    Fabricating large metal structures is no big challenge for CAID, but working within the precision tolerances demanded by astronomy is, said Keath Beifus, CAID project engineer.

    “We work within these tolerances all the time on things you can lift,” he said.

    The top surface of the cell, a structure 7 feet tall and 30 feet on each side, can’t vary by more than the width of a piece of paper, he said.

    The design requires “4,440 machined features,” said Beifus — precision-drilled holes for wiring and plumbing to accommodate actuators that perfect the mirror’s shape and the fans and coolant-filled tubes that keep its temperature constant. Those are currently being drilled.

    The entire cell must also be airtight. It has a round metal rim that, when mated with the coating-chamber dome being fabricated in Dresden, Germany, will create a vacuum needed for coating the glass with a reflective surface.

    That meant extra attention paid to all the welded joints to prevent air pockets and installation of a submarine-style access door, Beifus said.

    CAID is also building the cart that will hold the mirror and cell when it needs to be detached from the mirror mount and rolled on rails to the mirror-coating chamber.

    It is also building a surrogate mirror made of metal that will be used to test the cell’s ability to mold the mirror into shape before the cell is trusted with the fragile glass surface.

    CAID’s current contracts with LSST total more than $3.4 million, and it has been asked to do another phase of the project — the integration of the parts that will turn the “big empty behemoth” of a cell into a functioning platform for the mirror, said Bill Gressler, telescope and site manager for LSST.

    About a dozen other local vendors are contributing $600,000 worth of parts and pieces to the telescope, with assembly of a variety of components being done in Tucson, where 72 LSST employees now work in the North Cherry Avenue building that also houses the National Optical Astronomy Observatory.

    The $400,000 contract for remodeling the space to accommodate LSST went to Tucson-based Division II Construction Co.

    LSST is managed for the National Science Foundation (NSF) by the Association of Universities for Research in Astronomy (AURA).

    NSF has committed $473 million for construction of the telescope, scheduled for completion by the end of 2019.

    The $165 million camera is funded by the U.S. Department of Energy. Arizona Optical Systems is building mirrors for the camera, and sensors are being provided by the UA’s Imaging Technology Lab.

    Some of the assembly work being done at the AURA center on North Cherry has been subcontracted to the Beacon Group, a Tucson nonprofit that provides work for people with significant disabilities.

    Greg Natvig, vice president for business operations at Beacon, said it does a lot of assembly work for local high-tech companies. “We do that sort of thing every day,” he said.

    Gressler said LSST has used the Beacon Group for most of its Inner Loop Controller work.

    “It’s been a great relationship where many times we’ve had critical orders that needed short response times,” he said.

    Gressler said he enjoys working with local companies and the project benefits from that proximity. A lot of Gressler’s site visits involve long airline flights.

    “The nicest thing, aside from our really good relationship with CAID, is going there in your car and coming home to your office the same day,” he said.

    Contracting locally also saves a lot in shipping costs, he said. LSST has already had to move its main mirror from the UA’s mirror lab to a hangar near the airport. It will bring it to CAID, just a few blocks away, for integration with the cell built there.

    It will be tested again back at the mirror lab before the cell and mirror are shipped in separate containers to Chile, where an observatory is under construction on a peak called Cerro Pachón, near the NOAO-managed Cerro Tololo Observatory, 50 miles east of the port of La Serena.

    When the cell, mirror, camera and mount are all put together, the telescope will weigh 350 tons and will move more quickly and more often on its axes than any telescope ever built.

    CAID’s cell will look a lot nicer before then, said Beifus.

    It oxidized to a rusting tanker color after being heated and water-cooled to prevent any “relaxation” of its shape. It will be painted teal blue before it is shipped to Chile, said Beifus.

    See the full article here.


    LSST telescope, currently under construction at Cerro Pachón, Chile
    LSST Interior

    LSST/Camera, built at SLAC

    The LSST is a new kind of telescope. Currently under construction in Chile, it is being built to rapidly survey the night-time sky. Compact and nimble, the LSST will move quickly between images, yet its large mirror and large field of view—almost 10 square degrees of sky, or 40 times the size of the full moon—work together to deliver more light from faint astronomical objects than any optical telescope in the world.

    From its mountaintop site in the foothills of the Andes, the LSST will take more than 800 panoramic images each night with its 3.2-billion-pixel camera, recording the entire visible sky twice each week. Each patch of sky it images will be visited 1000 times during the survey. With a light-gathering power equal to that of a 6.7-m-diameter primary mirror, each of its 30-second observations will be able to detect objects 10 million times fainter than those visible to the human eye. A powerful data system will compare new with previous images to detect changes in brightness and position of objects as big as far-distant galaxy clusters and as small as nearby asteroids.

    The LSST’s combination of telescope, mirror, camera, data processing, and survey will capture changes in billions of faint objects and the data it provides will be used to create an animated, three-dimensional cosmic map with unprecedented depth and detail, giving us an entirely new way to look at the Universe. This map will serve a myriad of purposes, from locating that mysterious substance called dark matter and characterizing the properties of the even more mysterious dark energy, to tracking transient objects, to studying our own Milky Way Galaxy in depth. It will even be used to detect and track potentially hazardous asteroids—asteroids that might impact the Earth and cause significant damage.

    As with past technological advances that opened new windows of discovery, such a powerful system for exploring the faint and transient Universe will undoubtedly serve up surprises.

    Plans for sharing the data from LSST with the public are as ambitious as the telescope itself. Anyone with a computer will be able to view the moving map of the Universe created by the LSST, including objects a hundred million times fainter than can be observed with the unaided eye. The LSST project will provide analysis tools to enable both students and the public to participate in the process of scientific discovery. We invite you to learn more about LSST science.

    The LSST will be unique: no existing telescope or proposed camera could be retrofitted or re-designed to cover ten square degrees of sky with a collecting area of forty square meters. Named the highest priority for ground-based astronomy in the 2010 Decadal Survey, the LSST project formally began construction in July 2014.

  • richardmitnick 2:08 pm on September 27, 2016 Permalink | Reply
    Tags: Physics slang

    From Symmetry: “You keep using that physics word” 

    Symmetry Mag

    Lauren Biron

    Illustration by Sandbox Studio, Chicago with Corinne Mucha

    I do not think it means what you think it means.

    Physics can often seem inconceivable. It’s a field of strange concepts and special terms. Language often fails to capture what’s really going on within the math and theories. And to make things even more complicated, physics has repurposed a number of familiar English words.

    Much like Americans in England, folks from beyond the realm of physics may enter to find themselves in a dream within a dream, surrounded by a sea of words that sound familiar but are still somehow completely foreign.

    Not to worry! Symmetry is here to help guide you with this list of words that acquire a new meaning when spoken by physicists.



    Quench

    The physics version of quench has nothing to do with Gatorade products or slaking thirst. Instead, a quench is what happens when superconducting materials lose their ability to superconduct (that is, to carry electricity with no resistance). During a quench, the electric current heats up the superconducting wire, and the liquid coolant meant to keep the wire at its cool, superconducting temperature warms and turns into a gas that escapes through vents. Quenches are fairly common and an important part of training the magnets that will focus and guide beams through particle accelerators. They also take place in superconducting accelerating cavities.


    Cannibalism, strangulation and suffocation

    These gruesome words take on a new, slightly kinder meaning in astrophysics lingo. They are different ways that a galaxy’s shape or star formation rate can be changed when it is in a crowded environment such as a galaxy cluster. Galactic cannibalism, for example, is what happens when a large galaxy merges with a companion galaxy through gravity, resulting in a larger galaxy.



    Chicane

    Depending on how much you know about racecars and driving, you may or may not have heard of a chicane. In the driving world, a chicane is an extra turn or two in the road, designed to force vehicles to slow down. This isn’t so different from chicanes in accelerator physics, where a set of four dipole magnets compresses a particle beam, squeezing the bunch so that the particles in the head (the high-momentum particles at the front of the group) are closer to those in the tail (the particles in the rear).



    Cooling

    A beam cooler won’t be of much use at your next picnic. Beam cooling makes particle accelerators more efficient by keeping the particles in a beam all headed the same direction. Most beams have a tendency to spread out as they travel (something related to the random motion, or “heat,” of the particles), so beam cooling helps kick rogue particles back onto the right path—staying on the ideal trajectory as they race through the accelerator.



    House

    In particle physics, a house is a place for magnets to reside in a particle accelerator. House is also used as a collective noun for a group of magnets. Fermilab’s Tevatron particle accelerator, for example, had six sectors, each of which had four houses of magnets.



    Barn

    A barn is a unit of measurement used in nuclear and particle physics that indicates the target area (“cross section”) a particle represents. The meaning of the science term was originally classified, owing to the secretive nature of efforts to better understand the atomic nucleus in the 1940s. Now you can know: One barn is equal to 10⁻²⁴ cm². In the subatomic world, a particle with that size is quite large—and hitting it with another particle is practically like hitting the broad side of a barn.
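    To get a feel for what a cross section in barns means in practice, here is a minimal Python sketch of the standard thin-target estimate. The foil thickness and nuclear number density below are illustrative round numbers of my own choosing, not values from the article:

    ```python
    # A barn is a unit of area: 1 barn = 1e-24 cm^2.
    BARN_CM2 = 1e-24

    # Thin-target approximation: the probability that an incoming particle
    # interacts is roughly P = n * sigma * x, where n is the number density
    # of target nuclei, sigma the cross section, and x the target thickness.
    n_nuclei = 6e22          # nuclei per cm^3 (typical solid, assumed value)
    sigma = 1.0 * BARN_CM2   # a 1-barn cross section
    thickness = 0.1          # cm (a 1 mm foil, assumed value)

    prob = n_nuclei * sigma * thickness
    print(f"Interaction probability per particle: {prob:.1%}")
    ```

    Even a "huge" 1-barn target gives each particle well under a 1% chance of hitting anything in a millimeter of material, which is why the barn counts as large only by subatomic standards.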



    Cavity

    Most people dread cavities, but not in particle physics. A cavity is the name for a common accelerator part. These metal chambers shape the accelerator’s electric field and propel particles, pushing them closer to the speed of light. The electromagnetic field within a radio-frequency cavity changes back and forth rapidly, kicking the particles along. The cavities also keep the particles bunched together in tight groups, increasing the beam’s intensity.



    Doping

    Most people associate doping with drug use and sports. But doping can be so much more! It’s a process of introducing additional materials (often considered impurities) into a metal to change its conducting properties. Doped superconductors can be far more efficient than their pure counterparts. Some accelerator cavities made of niobium are doped with atoms of argon or nitrogen. This is being investigated for use in designing superconducting magnets as well.



    Injection

    In particle physics, injections don’t deliver a vaccine through a needle into your arm. Instead, injections are a way to transfer particle beams from one accelerator into another. Particle beams can be injected from a linear accelerator into a circular accelerator, or from a smaller circular accelerator (a booster) into a larger one.



    Decay

    Most people associate decay with things that are rotting. But a particle decay is the process through which one particle changes into other particles. Most particles in the Standard Model are unstable, which means that they decay almost immediately after coming into being. When a particle decays, its energy is divided among less massive particles, which may then decay as well.

    See the full article here.


    Symmetry is a joint Fermilab/SLAC publication.

  • richardmitnick 12:23 pm on September 27, 2016 Permalink | Reply
    Tags: How did we make sense of the cosmic abyss?

    From Ethan Siegel: “How did we make sense of the cosmic abyss?” 

    Ethan Siegel


    Galaxy cluster LCDCS-0829, as observed by the Hubble Space Telescope. This galaxy cluster is speeding away from us, and in only a few billion years will become unreachable, even at the speed of light. Image credit: ESA/Hubble & NASA.

    Looking into the great, dark unknown was a mystery for thousands of years. No longer!

    “Science cannot tell theology how to construct a doctrine of creation, but you can’t construct a doctrine of creation without taking account of the age of the universe and the evolutionary character of cosmic history.” -John Polkinghorne

    A look out into the night sky raises a slew of questions that any intelligent, curious person might wonder about:

    What are those points of light in the sky?
    Are there other Suns like our own, and if so, do they have planets like we do?
    How far away are the stars, and how long do they live?
    What lies beyond our Milky Way galaxy?
    What does the entire Universe look like?
    And how did it come to be this way?

    For thousands of years, these were questions for poets, philosophers and theologians. But scientifically, we’ve not only discovered the answers to all of these questions, but the answers have raised some even bigger ones we never could have anticipated.

    A standard cosmic timeline of our Universe’s history. Image credit: NASA/CXC/M.Weiss.

    With the exception of a few bodies in our Solar System that reflect our Sun’s light back at us, every point of shining light we see in the night sky is a star. They come in different colors, from red to orange to yellow to white to blue, and they come in different brightnesses, from only about 0.1% as bright as our Sun to literally millions of times the Sun’s brightness. They are so far away that they appear to be in the same position not only night after night, but year after year as well. The very first attempt to measure their distances rested on a single assumption: that the stars were identical to the Sun, so that their apparent brightness revealed their distance. Using the inverse-square law, which describes how brightness falls off with distance, the night sky’s brightest star, Sirius, was estimated to be 0.4 light years away, a tremendous distance. Had astronomers in the 1600s known how many times brighter than the Sun Sirius really is, their distance estimate would have been off by less than 10%.
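    The inverse-square reasoning behind that first estimate can be checked with modern numbers. This Python sketch is my own illustration, not a calculation from the article; note that with today's precise flux measurements the equal-luminosity assumption gives about 1.8 light years rather than 0.4, because 17th-century brightness comparisons were much cruder:

    ```python
    import math

    AU_PER_LY = 63241.0  # astronomical units in one light year

    # Apparent magnitudes (modern values).
    m_sun, m_sirius = -26.74, -1.46

    # Flux ratio F_sun / F_sirius from the magnitude difference.
    flux_ratio = 10 ** (0.4 * (m_sirius - m_sun))

    # Inverse-square law: flux F = L / (4 pi d^2), so d scales as sqrt(L / F).
    # Assumption 1: Sirius has exactly the Sun's luminosity.
    d_equal = math.sqrt(flux_ratio) / AU_PER_LY  # distance in light years
    print(f"Assuming Sirius = Sun: {d_equal:.1f} ly")

    # Assumption 2: fold in Sirius's true luminosity (~25 times the Sun's).
    d_true = math.sqrt(flux_ratio * 25) / AU_PER_LY
    print(f"With true luminosity:  {d_true:.1f} ly (measured: 8.6 ly)")
    ```

    Correcting for Sirius's actual luminosity lands within 10% of the measured 8.6 light years, exactly the point made above.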

    Our sun is a G-class star. Although the larger, brighter ones are more impressive, they are far fewer in number. Sirius, an A-class star, is 20–25 times brighter than our Sun, yet O, B and A stars represent only 1% of stars *total* in the galaxy. Image credit: Wikimedia Commons user LucasVB.

    That the other stars are Suns like our own wasn’t proven until the invention of spectroscopy, where we could break light up into individual wavelengths and see the signatures of what atoms and molecules were present. About 90% of stars are smaller and fainter than our own, about 5% are more massive and brighter, and about 5% are Sun-like in their mass, size and brightness. Over the past 25 years, we’ve discovered that planets are the norm around stars, having confirmed more than 3,000 planets beyond our own Solar System. NASA’s Kepler spacecraft is by far the greatest planet-finding tool we’ve ever employed, discovering around 90% of the exoplanets we know today.

    The 21 Kepler planets discovered in the habitable zones of their stars, no larger than twice the Earth’s diameter. (Proxima b, not discovered with Kepler, will bring the count up to 22.) Most of these worlds orbit red dwarfs, closer to the “bottom” of the graph. Image credit: NASA Ames/N. Batalha and W. Stenzel.

    By measuring how a star moves due to the gravitational tug of its planets, we can infer their masses and orbital periods. By measuring how much a star’s light dims due to a planet passing in front of it, we can measure both its period and its physical size. So far, more than 20 rocky, roughly Earth-sized worlds have been found in the “potentially habitable” zones around their stars, meaning that if these worlds have Earth-like atmospheres, they’ll have the right temperatures and pressures for liquid water on their surface. Most recently, Proxima Centauri, the closest star to our Sun, has been found to house perhaps the most Earth-like planet yet, at just 4.2 light years away.
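    The transit measurement described above rests on simple geometry: the fractional dip in starlight equals the ratio of the planet's disk area to the star's. A minimal sketch of that arithmetic, using standard Earth and Sun radii (my illustration, not the article's):

    ```python
    # Transit depth = (R_planet / R_star)^2: the fraction of the stellar
    # disk that the planet's silhouette blocks during a transit.
    R_EARTH_KM = 6_371.0
    R_SUN_KM = 696_000.0

    depth = (R_EARTH_KM / R_SUN_KM) ** 2
    print(f"An Earth transiting a Sun-like star dims it by {depth:.1e}")
    print(f"That is about {depth * 1e6:.0f} parts per million")
    ```

    A dip of under 100 parts per million is why finding Earth-sized worlds demands the photometric precision of a spacecraft like Kepler.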

    An artist’s rendition of Proxima Centauri as seen from the “ring” portion of the world, Proxima b. It would be over 3 times the diameter and 10 times the area that our Sun takes up. Alpha Centauri A and B (shown) would be visible during the day. Image credit: ESO/M. Kornmesser.

    To accurately measure the distances to the stars, the best technique is to measure their positions as precisely as possible over the course of an entire year. As Earth moves in its orbit around the Sun, traveling as far as 300 million kilometers from its location six months prior, the nearest stars will appear to shift, the same way your thumb appears to shift if you hold it at arm’s length and close one eye at first, then open it and close the other.

    The parallax method, employed by GAIA, involves noting the apparent change in position of a nearby star relative to the more distant, background ones. Image credit: ESA/ATG medialab.

    This phenomenon, known as parallax, wasn’t first accurately measured until the mid-19th century, giving us the distance to the nearest stars. Once you know how far away a star is and you measure its other properties, you can use that information to identify other stars just like it, and hence determine how far away anything you can see in the Universe is. We can step from the nearest stars to all stars in our galaxy to stars in galaxies beyond our own to the most distant galaxies observable.
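    The parallax geometry above reduces to one formula: a star with an annual parallax of p arcseconds lies at 1/p parsecs. Here is a minimal sketch, using Proxima Centauri's measured parallax (about 0.7685 arcseconds) to recover the 4.2-light-year distance quoted earlier:

    ```python
    LY_PER_PARSEC = 3.26156  # light years in one parsec

    def parallax_distance_ly(parallax_arcsec: float) -> float:
        """Distance in light years from an annual parallax in arcseconds."""
        return (1.0 / parallax_arcsec) * LY_PER_PARSEC

    # Proxima Centauri has the largest parallax of any star,
    # because it is the nearest.
    d = parallax_distance_ly(0.7685)
    print(f"Proxima Centauri: {d:.2f} light years")
    ```

    The same function works for any star on the first rung of the distance ladder; the smaller the parallax, the farther the star, which is why measuring millions of tiny angles is GAIA's whole job.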

    The Hubble eXtreme Deep Field (XDF), which revealed approximately 50% more galaxies-per-square-degree than the previous Ultra-Deep Field. Image credit: NASA; ESA; G. Illingworth, D. Magee, and P. Oesch, University of California, Santa Cruz; R. Bouwens, Leiden University; and the HUDF09 Team.

    This works just like a ladder, where you step on the first rung and use that step to get to the next rung, each time getting a little bit further on your journey. The European Space Agency’s GAIA satellite, launched in 2013, seeks to measure the parallax positions of more than a billion stars, giving us the most secure “first rung” on the cosmic distance ladder of all time.

    A map of star density in the Milky Way and surrounding sky, clearly showing the Milky Way, large and small Magellanic Clouds, and if you look more closely, NGC 104 to the left of the SMC, NGC 6205 slightly above and to the left of the galactic core, and NGC 7078 slightly below. Image credit: ESA/GAIA.

    Stars burn through their fuel just like the Sun does: by converting hydrogen into helium in their cores. This process of nuclear fusion emits a tremendous amount of energy by Einstein’s E = mc^2, as each helium nucleus that you produce from four hydrogen nuclei is 0.7% lighter than what you started with. Over the 4.5 billion year history of our Sun, it’s lost approximately the mass of Saturn in the process of shining the way it does. But at some point, the Sun and every star in the Universe will run out of fuel in its core.
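    That "mass of Saturn" figure can be checked directly from E = mc² and the Sun's luminosity. This back-of-the-envelope Python sketch uses standard values (solar luminosity about 3.8 × 10²⁶ W, Saturn's mass about 5.7 × 10²⁶ kg); it is my own arithmetic, not the article's:

    ```python
    L_SUN = 3.828e26           # watts, total solar luminosity
    C = 2.998e8                # m/s, speed of light
    SECONDS_PER_YEAR = 3.156e7
    AGE_YEARS = 4.5e9          # the Sun's age so far
    M_SATURN = 5.68e26         # kg

    # Total energy radiated over the Sun's lifetime so far...
    energy = L_SUN * AGE_YEARS * SECONDS_PER_YEAR
    # ...corresponds to this much mass, via E = m c^2.
    mass_lost = energy / C**2

    print(f"Mass radiated away: {mass_lost:.2e} kg")
    print(f"That is {mass_lost / M_SATURN:.2f} Saturn masses")
    ```

    The answer comes out within a few percent of one Saturn mass, confirming the comparison in the paragraph above.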

    The anatomy of the Sun, including the inner core, which is the only place where fusion occurs. Image credit: NASA/Jenny Mottar.

    When it does, it will expand and turn into a red giant, fusing helium into carbon. Even more massive stars will fuse carbon into oxygen, oxygen into silicon, sulphur and magnesium, and the most massive stars will fuse silicon into iron, cobalt and nickel. Stars like our Sun will die softly, blowing off their outer layers in a planetary nebula, while the most massive stars will die in a catastrophic supernova explosion, with both recycling the heavy elements formed within back into the interstellar medium.

    Our Sun will have a total lifetime of about 12 billion years, while the lowest-mass stars (at about 8% the mass of our Sun) will burn through their fuel the most slowly, living for more than 10 trillion years: many times the present age of the Universe. But the most massive stars burn through their fuel more quickly, with some stars only living a few million years before they die and expel their heavy elements back into the Universe.

    The supernova remnant N 49, found within our own Milky Way. Image credit: NASA/ESA and The Hubble Heritage Team (STScI/AURA).

    These heavy elements like carbon, oxygen, nitrogen, phosphorous, silicon, copper and iron are not only essential to life-as-we-know-it, but for creating rocky planets in the first place. It takes multiple generations of stars living, burning through their fuel, dying and recycling those ingredients back into space, where they help form the next generations of stars, to give rise to a world like Earth. And here, from our perspective, we’ve been able to look out into the Universe, not only across the great cosmic distances, but back into the Universe’s past.

    The galaxy NGC 7331, with more distant galaxies and closer, foreground stars also in frame. Image credit: Adam Block/Mount Lemmon SkyCenter/University of Arizona.

    The fact that the speed of light is finite and constant, at 299,792,458 m/s, doesn’t just mean that there’s a delay in sending signals across very large distances. It means that as we look at objects that are far away, we’re seeing them not as they are today, but as they were back in the Universe’s distant past. Look at a star 20 light years away, and you’re seeing it as it was 20 years ago. Look at a galaxy that’s 20 million light years away, and you’re seeing it 20 million years ago.

    Galaxies similar to the Milky Way as they were at earlier times in the Universe. Image credit: NASA, ESA, P. van Dokkum (Yale University), S. Patel (Leiden University), and the 3D-HST Team.

    We’ve been able to look so far back, thanks to powerful telescopes like Hubble, that we’ve been able to view galaxies in the Universe as they were billions of years ago, back when the Universe was just a few percent of its current age. We see that galaxies in the past were smaller, less massive, bluer in intrinsic color, forming stars more rapidly, and were less rich in these heavy elements that we need to form planets. We also see that, over time, these galaxies merge together to form larger structures. We can put this whole picture together, and visualize how the Universe has evolved to get to be the way it is at the present.

    The entire Universe is a vast cosmic web, where galaxies and clusters of galaxies form at the intersections of cosmic filaments. In between lie vast cosmic voids nearly devoid of stars and galaxies, their matter having been pulled away by the gravity of the denser regions. We’re seeing that happen on our local scale today, as the galaxies in the local group move toward one another. At some point, four to seven billion years in the future, our nearest large neighbor, Andromeda, will merge with our Milky Way, creating a giant elliptical galaxy: Milkdromeda.

    A series of stills showing the Milky Way-Andromeda merger, and how the sky will appear different from Earth as it happens. Image credit: NASA; Z. Levay and R. van der Marel, STScI; T. Hallas; and A. Mellinger.

And all the while, the Universe continues to expand, towards a colder, emptier, more distant fate. Galaxies beyond our local group recede from our own and from each other. The things that are gravitationally bound together — planets, stars, solar systems, galaxies and galaxy clusters — will remain bound for as long as the stars burn in our Universe. But each individual galaxy group or cluster will recede from all the others, and the Universe will grow colder and lonelier as time goes on.

    The four possible fates of the Universe with only matter, radiation, curvature and a cosmological constant allowed. The bottom “fate” is supported by the evidence. Image credit: E. Siegel, from his book, Beyond The Galaxy.

    Which means, if we go back to the very beginning, and ask how it all came to be, we have:

    an observable Universe that began with a hot, dense, mostly uniform state known as the Big Bang;
    that cooled, enabling matter and antimatter to annihilate, leaving only a tiny amount of matter left over;
    that cooled further, allowing protons and neutrons to fuse together into helium without getting blasted apart;
    that cooled even further, allowing the creation of stable, neutral atoms;
    where the gravitational imperfections grew and grew, leading to gas clumping together in some regions, that became dense enough to form the first stars;
    where the most massive stars burned through their fuel, died and recycled their heavier elements back into the interstellar medium;
where small star clusters and galaxies merged together and grew, triggering new waves of star formation;
where, after billions of years, new stars form with rocky planets around them and the ingredients for life;
where the galaxies that house them grew into the spiral and elliptical giants that we have today;
where, 9.2 billion years after the Big Bang, a run-of-the-mill star cluster formed in an isolated spiral galaxy, in which 2% of the matter is now in elements heavier than hydrogen and helium;
    one of which happens to be our Sun;
    and where, after an additional 4.54 (or so) billion years, an intelligent species arises that can begin putting the pieces of our cosmic history together, understanding where we come from for the first time.

    There are more things that we’ve learned, and there’s more depth to explore on all of these issues. (My first book, Beyond The Galaxy, does exactly this.) Yes, there are questions we’re still working on, such as how the matter/antimatter asymmetry came to be, how the Big Bang got set up and started, and how, exactly, the Universe will meet its ultimate fate. But the questions of what the Universe looks like, how it came to be this way and what it’s physically doing have been answered: not by philosophers, poets or theologians, but by the scientific endeavor. And if the new big questions are to be answered — the ones that the answers to the previous big questions raised — it will, again, be science that shows us the way.

See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

“Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan.

  • richardmitnick 11:43 am on September 27, 2016 Permalink | Reply
Tags: Book Anna Garry Thomas Feurer A Journey into Time in Powers of Ten, Expanding the horizons of experience, NCCR MUST, Ultrafast processes, Ultrashort research

    From ETH Zürich: “Ultrafast processes in the blink of an eye” 


    ETH Zürich

    Florian Meyer

    Ultrafast processes beyond the human imagination occur in nature, but basic research has been able to measure and explore them only since the turn of the millennium. A book and an exhibition by the National Centre of Competence in Research Molecular Ultrafast Science and Technology (NCCR MUST) now connect this to everyday life by inviting you on a journey into time.

In recent years, ultrafast laser physics – such as in Ursula Keller’s Attoline laboratory – has ventured into ever smaller units of time: today, measurements are possible in the attosecond, or quintillionth-of-a-second, range. (Photo: ETH Zurich/H. Hostettler)

    The exhibition, which is located in the entrance of Campus Info at ETH Hönggerberg until mid-December, can be enjoyed in no time at all. The name says it all – “Fast”. Several posters present image sequences with processes that take place very quickly – or slowly – and at the same time provide an insight into “ultrafast research”.

Some processes occur so quickly in nature that the blink of an eye is very slow by comparison. A light wave, for example, is ultrafast, rising and falling again in a mere two quadrillionths of a second. In scientific terms, this is two femtoseconds or, written as a power of ten, 2·10^−15 seconds.

Even faster are elementary particles such as electrons or photons: when they move in molecules, it takes an ultrafast 100 quintillionths of a second, which is 100 attoseconds or 10^−16 s. No one can see such ultrafast processes with the naked eye, and even for basic research they are comparatively uncharted territory.
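As a quick check on those figures, here is the arithmetic in powers of ten (the `exponent` helper is our own illustration, not anything from NCCR MUST):

```python
# The article's two examples, written out as powers of ten.
FEMTO = 1e-15    # one quadrillionth of a second
ATTO = 1e-18     # one quintillionth of a second

light_wave_cycle_s = 2 * FEMTO    # one rise and fall of a light wave: 2 fs
electron_motion_s = 100 * ATTO    # electron motion in a molecule: 100 as = 1e-16 s

def exponent(seconds: float) -> int:
    """Decimal exponent of a timescale, e.g. 2e-15 s -> -15."""
    return int(f"{seconds:e}".split("e")[1])
```

So 100 attoseconds really does sit one power of ten below the two-femtosecond light wave: exponent(light_wave_cycle_s) is −15, while exponent(electron_motion_s) is −16.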

    Expanding the horizons of experience

Basic and essential physical, chemical and biological reactions take place on ultrafast time scales between nanoseconds (10^−9 s) and attoseconds (10^−18 s). Understanding them can help in the development of alternative sources of energy, new data storage and medical applications, such as artificial blood.

“The human senses perceive light and speed in only a very limited manner,” says Ursula Keller. “This is why we are attempting to expand the horizons of experience through research.” Throughout her career, the ETH Professor of Experimental Physics has contributed to research venturing into ever smaller units of time.

Only since the turn of the millennium has basic research been able to measure and investigate processes in the attosecond range. The breakthrough came thanks to the development of new light sources based on ultrashort light and laser pulses, such as the new SwissFEL X-ray free-electron laser at the Paul Scherrer Institute.

    Today, “ultrafast research” comprises subjects such as physics, chemistry, biology and materials science, which are linked together at NCCR MUST. Keller is the co-director of MUST and her Ultrafast Laser Physics group has contributed to the rapid development.

    Linking ultrashort research and everyday life

    Uncharted territory for research poses a challenge for public communication and the dissemination of knowledge to schools, as neither ultrafast processes nor the research technologies and mathematical notations and models used to explore them are really tangible to lay people.

    To establish a link between “ultrashort research” and everyday life, Thomas Feurer, co-director of NCCR MUST and a physics professor at the University of Bern, and Anna Garry, responsible for public communication at NCCR MUST, pursued an idea that had come to Jürg Osterwalder, Professor of Surface Physics at the University of Zurich, while he was on the train.

    The idea was to invite non-specialists on a journey into ultrafast processes. The result was the book A Journey into Time in Powers of Ten, as well as exhibitions at Scientifica 2015, the Festival de Science 2016 in Neuchâtel and now at Hönggerberg. The material is also being used in exchanges with schools.

    “We would like to make schools and the public excited about our research, inspire them to think about time and the duration of essential processes, and show them how these processes can be expressed numerically,” says Garry, relating how her six-year-old nephew was fascinated by the series of images and immediately asked questions.

The starting point of the journey into time – in both the book and the exhibition – is the blink of an eye: this takes one second, or 10^0 s. In everyday life, the blink is the smallest felt unit in which a human perceives events in their environment. Proceeding from this, the journey travels ten steps from blinking to the attosecond.

    Each step corresponds to a power of ten and is connected to one aspect of ultrafast research by a short story. In addition to scientific facts, excerpts from the daily lives of the researchers also appear, rendering numerical reflections about essential processes clear and comprehensible.

    Insight into fast and slow processes

A person has a thought in a tenth of a second, or 10^−1 s. Walking 100 metres takes about ten seconds, or 10^1 s. Someone travelling by train from Zurich to Geneva requires 2¾ hours, or 10^4 s. A lightning strike, by contrast, takes only 10^−4 s.

A doctoral student in their fourth year has put in about 10^8 s towards their education, whereas a jellyfish in the sea needs only 10^−8 s to glow green. “We can easily imagine the step from blinking to sprinting. So when we take these ten steps ten times, we arrive at our research in a way that can be understood,” says Feurer.

“As many people find it much more difficult to imagine negative powers of ten than positive ones, we have juxtaposed the slow processes with the fast.” The slowest process presented in the book and exhibition is the formation of the Milky Way, which has been dragging on for more than 13 billion years, of order 10^18 s.

See the full article here.


    ETH Zurich campus
    ETH Zurich is one of the leading international universities for technology and the natural sciences. It is well known for its excellent education, ground-breaking fundamental research and for implementing its results directly into practice.

Founded in 1855, ETH Zurich today has more than 18,500 students from over 110 countries, including 4,000 doctoral students. To researchers, it offers an inspiring working environment; to students, a comprehensive education.

    Twenty-one Nobel Laureates have studied, taught or conducted research at ETH Zurich, underlining the excellent reputation of the university.

  • richardmitnick 10:18 am on September 27, 2016 Permalink | Reply
Tags: astronomy.com, Rossiter-McLaughlin effect

    From astronomy.com: “Sun spots may be tricking scientists” 

    Astronomy magazine


    September 27, 2016
    Shannon Stirone

Illustration of the Rossiter-McLaughlin effect. Ricardo Cardoso Reis (IA/UPorto)

For millennia, scientists and observers have learned from our Sun and from the other stars that move across the night sky. The lives and deaths of stars teach us about star formation and solar system formation, and occasionally fill in those odds-and-ends questions about how we came to be. Depending on what they’re searching for, stars like our Sun can help modern-day scientists discover other planetary bodies, like exoplanets. One of the main ways they do this is with telescopes like Kepler, which look for a dimming of the light curve as a planet transits across the face of its star.

Planet transit. NASA/Ames

The bane of the exoplanet hunter’s existence is the sunspot. Our Sun goes through cycles, and sunspots can easily be seen with a solar telescope or even solar-eclipse glasses. But a sunspot can be there one day, and gone the next. If a telescope like Kepler is examining a star for planets and a spot is present on the surface, it will affect the number of photons hitting the photometer, and could lead to false positives.
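To see why a spot can masquerade as a planet, note that the fractional dimming a transit produces is, to first order, just the ratio of the disk areas, (Rp/Rs)². A back-of-the-envelope sketch with standard radii (the function name is ours):

```python
# Fractional flux drop during a transit: the ratio of disk areas.
def transit_depth(r_planet_m: float, r_star_m: float) -> float:
    """Fraction of starlight blocked by a transiting planet: (Rp/Rs)**2."""
    return (r_planet_m / r_star_m) ** 2

R_SUN = 6.957e8       # metres
R_EARTH = 6.371e6     # metres
R_JUPITER = 6.9911e7  # metres

earth_depth = transit_depth(R_EARTH, R_SUN)       # ~8e-5, i.e. ~0.008%
jupiter_depth = transit_depth(R_JUPITER, R_SUN)   # ~0.01, i.e. ~1%
```

A large spot group can cover a comparable fraction of the stellar disk, so as it rotates into and out of view it modulates the light curve at the same level as a genuine transit.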

However, understanding these strange planetary bodies at such extreme distances can prove challenging. Recently, researchers at the University of Porto in Portugal and the Institute for Astrophysics at Georg-August-University Göttingen published a paper in the journal Astronomy & Astrophysics showing that the normal activity of stars could be the reason why some exoplanets’ orbits appear extremely misaligned.

Using the Rossiter-McLaughlin effect, or RM effect, exoplanet researchers measure the spin-orbit tilt of an exoplanet, which helps determine which planetary migration model applies. They want to know whether the planet formed cleanly inside the protoplanetary disk, or migrated inward after forming in the outer reaches, which would account for a misaligned arrangement.

The RM effect measures the Doppler-shifted light coming from the star. Using spectroscopy, researchers measure the amounts of redshifted and blueshifted photons coming from the host star. Looking head-on at a star like our Sun, one limb rotates away from us and looks slightly redshifted, while the limb rotating towards us looks slightly blueshifted. When a planet passes in front of its star, it blocks out either more red or more blue light, depending on its size and the angle at which it orbits.

Planet formation is a tricky business and can happen in myriad ways. Many planets orbit at a relatively ordinary angle, but some can be tilted a full 90 degrees, orbiting pole-on around their star. These extreme tilts tell scientists a lot about how these planets formed, and how their host solar systems formed.

    “This angle is very important because it is a window into the past,” says co-author Pedro Figueira from the University of Porto. “It lets us know if the planet is likely to have been formed in a disk and migrated in smoothly through gravitational interaction, or formed violently around other planets or stars. In the first case we have a small misalignment angle, in the second one a large one.”

One of the biggest issues scientists face when searching for these exoplanets is the stellar activity of the planet’s host star. As we know from our Sun, stars can be quite active: creating sunspots, ejecting plasma out into space, and displaying brighter spots on the surface called “plages,” convecting areas that appear redshifted as material moves deeper into the star. All of these normal stellar behaviors alter the balance of red and blue light reaching the observer, and in turn can contaminate the spectral data being collected by the telescope.

One way around this is to study the stars of interest for months or even years to predict their surface patterns. “This study is a part of a growing body of work pointing to how crucial the understanding of solar activity is when it comes to interpreting exoplanet results,” says Dr. Sarah Ballard, exoplanetary researcher at MIT. “It has a context of a growing movement within exoplanet research of how stellar activity bears upon radial velocity observations, transits and RM measurements.”

The team hopes that their findings will inform future research and the design of instruments on upcoming observatories. “These results will make us rethink a bit the way we derive these angles, and will be more conservative on the errors assigned to each spinning angle,” says Figueira. While this likely won’t negate any previous results, it could help explain the anomalies that still exist in the catalog of wonky exoplanets, and hopefully help scientists better understand how they form in the first place.

See the full article here.


  • richardmitnick 9:18 am on September 27, 2016 Permalink | Reply

    From Eos: “The Gravity of Volcanic Eruptions” 

    Eos news bloc


    Wudan Yan

    A view of Kīlauea’s summit lava lake on 7 September 2016, when the lake was just 8 meters below the floor of Halemaʻumaʻu Crater. Credit: USGS

    In the United States, nearly a quarter of the country’s 169 active volcanoes could pose a threat to public safety as more communities settle and grow in areas adjacent to them. Monitoring volcanoes in real time allows scientists to understand potential future volcanic activity, which helps public officials and emergency managers to make decisions and minimize losses during an eruption. Volcanoes can erupt with little to no warning, so finding ways to continually monitor small changes in activity could provide important information before potential future eruptions.

    Part of the challenge of forecasting eruptions is that scientists can’t look directly inside a volcano to gauge what’s happening. Currently, scientists track seismic activity, gas emissions, and surface deformation for clues. Researchers can also look at changes in gravity, which reflect small variations in activity beneath the surface. Since anything with mass has a gravity field, Earth’s gravitational pull is stronger in areas with more mass and weaker in areas with less mass. Therefore, a change toward stronger gravity in a certain area—from, for instance, more magma—can potentially be indicative of future volcanic activity.

    In a new study [Journal of Geophysical Research: Solid Earth], Poland and Carbone continually monitored gravity changes at Kīlauea Volcano in Hawaii from 2011 to 2015 to understand how gravity varied with volcanic activity. Using data from a gravimeter located within the volcano’s caldera, they noted that gravity change correlated strongly with both deformation and the depth of a lava lake inside the summit’s eruptive vent. This allowed the scientists to assess the density of the lava lake over time.
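The study’s own inversion is more involved, but the first-order physics can be sketched with the standard infinite-slab (Bouguer) approximation, in which a sheet of material of density ρ and thickness Δh changes surface gravity by Δg = 2πGρΔh. Given a measured gravity change and an observed change in lava-lake level, one can solve for an apparent density; the numbers below are purely illustrative, not values from the paper:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def slab_density(delta_g: float, delta_h: float) -> float:
    """Apparent density (kg/m^3) of an infinite slab of thickness delta_h (m)
    producing a surface gravity change delta_g (m/s^2), from the Bouguer
    formula delta_g = 2*pi*G*rho*delta_h solved for rho."""
    return delta_g / (2 * math.pi * G * delta_h)

# Illustrative numbers: a ~92 microGal gravity increase (9.2e-7 m/s^2)
# accompanying a 2 m rise in lava level implies a density close to that
# of water (1000 kg/m^3), consistent with a very gas-rich lava lake.
rho = slab_density(9.2e-7, 2.0)
```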

    Using this 5-year set of observations, the researchers found that the density was relatively low overall—slightly more than that of water—which reflects the large amount of gas in the lava lake. A few spikes in density over time were also recorded, possibly indicating accumulation of a small amount of magma beneath the surface that was not accompanied by surface deformation—a process that would not have been recognized by any other monitoring method. Some of these spikes were associated with changes in seismicity that had been interpreted as possible magmatic intrusions, emphasizing the importance of gravity in mapping transient and potentially hazardous volcanic activity.

    Continually tracking changes in gravity near highly active and accessible volcanoes, the authors say, could have great potential for sensing previously overlooked or underappreciated volcanic signals. (Journal of Geophysical Research: Solid Earth, doi:10.1002/2016JB013057, 2016)

See the full article here.


    Eos is the leading source for trustworthy news and perspectives about the Earth and space sciences and their impact. Its namesake is Eos, the Greek goddess of the dawn, who represents the light shed on understanding our planet and its environment in space by the Earth and space sciences.

  • richardmitnick 9:01 am on September 27, 2016 Permalink | Reply
Tags: Lyman-alpha blobs, Mysterious 'Blobs' Can be Closer Than We Thought

    From Gemini: “Mysterious ‘Blobs’ Can be Closer Than We Thought” 


Gemini Observatory

    September 20, 2016

    Gemini/GMOS images of the LAB host galaxies, taken in gri filters. The images reveal a broad variety of gaseous outflows driven by the AGN. The green color is caused by [OIII] narrow-line emission in the gas.

    Astronomers studying a mysterious phenomenon known as Lyman-alpha blobs (LABs) have discovered several of these high-energy objects in galaxies that are much closer than previously known. The discovery is significant because these closer specimens are much easier to study, and because they live at a time when the Universe was much older and more mature, allowing astronomers to study their evolution with cosmic time.

    The observations are of a rare type of relatively nearby galaxy, engulfed in large clouds of ionized gas as a result of violent, energetic activity in their cores. These closer specimens were first described in 2013, catching the astronomers’ attention with their luminosities and sizes. Data taken with the Gemini Observatory quickly revealed that these galaxies are unparalleled by any other objects known in the nearby Universe. To understand their nature and formation, observations with the Chandra X-ray observatory, the GALEX UV satellite, and the mid-infrared WISE satellite were included to augment the Gemini data.

NASA/Chandra Telescope

NASA/Galex telescope

NASA/WISE Telescope

    “Looking at the far ultraviolet images taken with GALEX, we realized that these huge ionized clouds of gas are similar to the Lyman-alpha blobs, or LABs,” says Mischa Schirmer of the Gemini Observatory. “So far, LABs were only known in the young Universe, at a time when galaxies were forming much more vigorously than today. It’s an exciting discovery that LABs may still exist 4-7 billion years later in the Universe, albeit in much lower numbers.” Schirmer adds that the existence of these objects has been postulated, but due to their scarcity they were difficult to find.

    LABs have puzzled astronomers since they were first discovered in 1999. They emit copious amounts of energetic far-ultraviolet radiation, yet their power sources often remained unknown. Hai Fu of the University of Iowa, and a co-author of the study, says that various explanations exist, “yet, taken together, they could still not explain all the data at hand.” The main problems are the LABs’ great distances, making them very dim. “Furthermore, their high redshifts make it difficult to access these parts of the spectrum from which we gain most information about their physical state”, adds Fu. “Having identified LABs at our cosmic doorstep makes our analysis so much easier.”

One of the team’s surprising results is that the active galactic nuclei (AGN) in their sample are weak. AGN are supermassive black holes at the centers of galaxies, actively accreting material from their immediate surroundings. This process can release enormous amounts of energy and radiation, making AGN amongst the most luminous objects in the Universe. “Given the luminosity of the ionized gas in the LABs in our study, we expected the most powerful AGN in their centers. However, when we directly measured the energy output of the AGN with the Chandra X-ray telescope, we found the AGN 10 to 1000 times less powerful than required,” says Nancy Levenson of the Gemini Observatory. This means that the AGN must have rapidly faded within the last few thousand to ten thousand years. The ionizing radiation from their previous high state is still propagating through the galaxy, powering the gaseous nebula. Several such “ionization echoes” have been found by the Galaxy Zoo project in nearby galaxies, albeit none of them as powerful as in the objects of this study.

“The most exciting result about our research is that the fading AGN naturally explain the absence of powerful sources in many LABs,” says Sangeeta Malhotra from Arizona State University. “The ultraviolet Lyman-alpha photons cannot leave the cloud of gas in a straight line like most other photons. As they perform a random walk in the gas, the LAB can easily trap them for hundreds of thousands of years.” By the time the photons manage to escape, the central AGN may have long faded from the astronomers’ view, causing the apparent energy deficits.
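The random-walk argument can be made semi-quantitative with a standard scaling estimate (ours, not the paper’s radiative-transfer calculation): a photon that scatters with mean free path ℓ needs of order (R/ℓ)² scatterings to diffuse out of a cloud of radius R, so its total path, and hence its trapping time, exceeds the straight-line crossing time by a factor of roughly R/ℓ.

```python
# Order-of-magnitude estimate of Lyman-alpha trapping (a sketch, not the
# paper's calculation). Diffusing a distance R with step length l takes
# ~(R/l)**2 steps, so the total path ~ (R/l)**2 * l = R * (R/l): longer
# than the straight-line crossing by a factor of R/l.

def trapping_factor(cloud_radius_ly: float, mean_free_path_ly: float) -> float:
    """Rough factor by which random-walk escape time exceeds free streaming."""
    return cloud_radius_ly / mean_free_path_ly

# Illustrative numbers: a 10,000-light-year cloud with an effective mean
# free path of 300 light-years.
factor = trapping_factor(1.0e4, 3.0e2)        # a few tens
straight_crossing_yr = 1.0e4                  # light crosses 10,000 ly in 10,000 yr
trapped_yr = factor * straight_crossing_yr    # a few hundred thousand years
```

With these (assumed) numbers, a crossing that would take ten thousand years at light speed stretches to a few hundred thousand years, the same order as the trapping time quoted by Malhotra.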

    “It’s amazing that we could finally identify this missing piece of the puzzle,” says Schirmer. “However, there is still a tremendous amount of work to be done, now that we can embark on the details and the bigger picture with further observations.”

    The research included imaging and spectroscopic observations taken with the Gemini Multi-Object Spectrographs (GMOS) at both of the Gemini telescopes.


Nearby LABs are extremely rare, with only about one found for every 1000 square degrees of sky. Once identified, the Gemini observations were straightforward, because these LABs are very bright despite their light-travel-time distance of three billion light years. For comparison, the high-redshift LABs known so far are typically 100 times dimmer and 2-3 times smaller. To unlock the LABs’ mysteries, the astronomers had to include further observations at X-ray, UV and infrared wavelengths, using the Chandra, GALEX and WISE satellites, respectively. The core team includes Mischa Schirmer (Gemini Observatory), Sangeeta Malhotra (Arizona State University), Nancy Levenson (Gemini Observatory), Hai Fu (University of Iowa), Rebecca Davies (Max Planck Institute for Extraterrestrial Physics), William Keel (University of Alabama), Paul Torrey (Harvard-Smithsonian Center for Astrophysics), and James Turner (Gemini Observatory). The research has been published in Monthly Notices of the Royal Astronomical Society.

See the full article here.


Gemini North, Hawai’i

Gemini South, Chile

    Gemini’s mission is to advance our knowledge of the Universe by providing the international Gemini Community with forefront access to the entire sky.

    The Gemini Observatory is an international collaboration with two identical 8-meter telescopes. The Frederick C. Gillett Gemini Telescope is located on Mauna Kea, Hawai’i (Gemini North) and the other telescope on Cerro Pachón in central Chile (Gemini South); together the twin telescopes provide full coverage over both hemispheres of the sky. The telescopes incorporate technologies that allow large, relatively thin mirrors, under active control, to collect and focus both visible and infrared radiation from space.

    The Gemini Observatory provides the astronomical communities in six partner countries with state-of-the-art astronomical facilities that allocate observing time in proportion to each country’s contribution. In addition to financial support, each country also contributes significant scientific and technical resources. The national research agencies that form the Gemini partnership include: the US National Science Foundation (NSF), the Canadian National Research Council (NRC), the Chilean Comisión Nacional de Investigación Cientifica y Tecnológica (CONICYT), the Australian Research Council (ARC), the Argentinean Ministerio de Ciencia, Tecnología e Innovación Productiva, and the Brazilian Ministério da Ciência, Tecnologia e Inovação. The observatory is managed by the Association of Universities for Research in Astronomy, Inc. (AURA) under a cooperative agreement with the NSF. The NSF also serves as the executive agency for the international partnership.

  • richardmitnick 8:16 am on September 27, 2016 Permalink | Reply
Tags: Stopped light

    From COSMOS: “Stopped light means go for quantum computers (eventually)” 



    27 September 2016
    Cathal O’Connell

    A cold cloud of atoms (red smear in the centre) holds light in place. Ben Buchler / ANU

    Australian physicists have brought quantum computing a step closer by bringing light to a standstill. This kind of system, reported in Nature Physics, could be used to store light in a quantum memory or build optical gates – two vital components in the futuristic goal of assembling a light-based quantum computer.

The researchers from the Australian National University in Canberra liken the experiment to a scene from the 2015 film Star Wars: The Force Awakens, in which the character Kylo Ren uses the Force to stop a laser blast mid-air.

    “Of course, we’re not using the Force, we’re using a light-matter interaction,” says study co-author Geoff Campbell, adding that the movie scene does give an intuitive idea about what the experiment was about.

The work follows 20 years of research into slowing or stopping the fastest phenomenon in the universe. Light barrels along through a vacuum at nearly three hundred million metres per second.

    In 1999 physicists managed to slow it to 17 metres per second in a cloud of cold gas.

    And by 2013, scientists at the University of Darmstadt in Germany stopped it entirely, for a full minute, inside an opaque crystal.

    But what physicists call ‘stopped light’ is not quite what you might imagine from that Star Wars scene.

    When physicists stop light, it’s actually only the light’s information that’s held in place – imprinted on surrounding atoms as light is absorbed. They can then retrieve this information by setting it in motion again as another light wave, for instance.

    This storage and retrieval of light information could be vital for building light-based quantum computers.

The new experiment is a new variation on the stopped-light technique, called ‘stationary light’. To pull it off, the Australian team shone infrared lasers into an ultracold cloud of rubidium atoms, exciting atoms in two locations.

    The two excited groups of atoms then exchanged photons in a self-sustaining interaction – a bit like two groups of excited supporters exchanging chants at a football game.

    This optical chanting is called stationary light, because it preserves the information of the original light sent into the cloud – although only for a fraction of a second.

While light has been brought to a halt before, the Australian team managed to create a self-correcting arrangement — something that had never been done, and that makes preparation a lot easier. They were also able to image the cloud of atoms side-on and show the light exchange in action.

    The physicists see the experiment as an important step towards building a quantum logic gate, a critical element of optical quantum computers.

    Although some quantum logic gates have been built, they have been probabilistic, meaning they only work some of the time and can’t be scaled up. Building more reliable quantum gates hinges on finding a way to get two particles of light to interact.

    “The problem is photons tend not to talk to one another,” says Ben Buchler, who led the research.

    Using this technique, holding the light in place could give it more of a chance to interact, he adds: “That’s the building block for a quantum gate which is essential to a quantum computer.”

See the full article here.


  • richardmitnick 7:58 am on September 27, 2016 Permalink | Reply
Tags: UCLA researchers help design wearable microscope that can measure fluorescent dyes through skin

    From UCLA: “UCLA researchers help design wearable microscope that can measure fluorescent dyes through skin” 



    September 26, 2016
    Meghan Steele Horan

    Researchers can detect spatial frequencies of a fluorescent image…

UCLA researchers working with a team at Verily Life Sciences have designed a mobile microscope that can detect and monitor fluorescent biomarkers inside the skin with a high level of sensitivity, an important tool in tracking various biochemical reactions for medical diagnostics and therapy.

This new system weighs less than one-tenth of a pound, making it small and light enough for a person to wear around the bicep, among other parts of the body. In the future, technology like this could be used for continuous patient monitoring at home or in point-of-care settings.

    The research, which was published in the journal ACS Nano, was led by Aydogan Ozcan, UCLA's Chancellor's Professor of Electrical Engineering and Bioengineering and associate director of the California NanoSystems Institute, and by Vasiliki Demas of Verily Life Sciences (formerly Google Life Sciences).

    Fluorescent biomarkers are routinely used in cancer detection and in drug delivery and release, among other medical therapies. Recently, biocompatible fluorescent dyes have emerged, creating new opportunities for noninvasive sensing and measuring of biomarkers through the skin.

    However, detecting artificially added fluorescent objects under the skin is challenging: collagen, melanin and other biological structures emit their own light in a process called autofluorescence. Various sensing systems have been tried to address this problem, but most are expensive and difficult to make small and cost-effective enough for a wearable imaging system.

    To test the mobile microscope, researchers first designed a tissue phantom — an artificially created material that mimics the optical properties of human skin, such as autofluorescence, absorption and scattering. The target fluorescent dye solution was injected into a micro-well about one-hundredth of a microliter in volume and thinner than a human hair, which was then implanted into the tissue phantom half a millimeter to 2 millimeters from the surface — in practice, deep enough to reach blood and other tissue fluids.

    This microscope can monitor fluorescent biomarkers inside the skin. Ozcan Research Group/UCLA

    To measure the fluorescent dye, the wearable microscope created by Ozcan and his team used a laser to hit the skin at an angle. The fluorescent image at the surface of the skin was captured via the wearable microscope. The image was then uploaded to a computer where it was processed using a custom-designed algorithm, digitally separating the target fluorescent signal from the autofluorescence of the skin, at a very sensitive parts-per-billion level of detection.
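    The article doesn't reproduce the team's custom algorithm. As a hedged toy illustration of the general idea — with invented numbers and a simple Fourier high-pass filter standing in for the published method — the sketch below separates a dim but tightly localized dye signal from bright, slowly varying autofluorescence by exploiting their different spatial-frequency content:

```python
import numpy as np

# Toy sketch, NOT the paper's algorithm: all numbers here are invented.
# A sharp implant signal and a smooth autofluorescence background occupy
# different parts of the spatial-frequency spectrum.
rng = np.random.default_rng(0)
x = np.arange(128)
xx, yy = np.meshgrid(x, x)

# Diffuse autofluorescence: bright but slowly varying across the field
background = 50.0 * np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / (2 * 60.0 ** 2))
# Implanted dye: ten times dimmer, but tightly localized
target = 5.0 * np.exp(-((xx - 40) ** 2 + (yy - 80) ** 2) / (2 * 2.0 ** 2))
image = background + target + rng.normal(0.0, 0.2, (128, 128))

# High-pass filter in the Fourier domain: the smooth background lives
# almost entirely at low spatial frequencies, the sharp target does not
F = np.fft.fftshift(np.fft.fft2(image))
r = np.hypot(xx - 64, yy - 64)
F[r < 6] = 0.0
filtered = np.real(np.fft.ifft2(np.fft.ifftshift(F)))

# The brightest pixel of the filtered image recovers the implant location
peak = np.unravel_index(np.argmax(filtered), filtered.shape)
print(peak)  # near (row=80, col=40), where the target was placed
```

The real system reaches parts-per-billion sensitivity with a far more careful separation; the sketch only shows why spatial structure makes the two signals distinguishable at all.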

    “We can place various tiny bio-sensors inside the skin next to each other, and through our imaging system, we can tell them apart,” Ozcan said. “We can monitor all these embedded sensors inside the skin in parallel, even understand potential misalignments of the wearable imager and correct it to continuously quantify a panel of biomarkers.”

    This computational imaging framework might also be used in the future to continuously monitor various chronic diseases through the skin using an implantable or injectable fluorescent dye.

    Other authors of the manuscript include UCLA postdoctoral researchers Zoltan Gorocs, Yair Rivenson, Hatice Koydemir, UCLA development engineer Derek Tseng, and Tamara Troy of Verily Life Sciences.

    This project was supported by Verily Life Sciences. Ozcan’s research group is supported by a Presidential Early Career Award for Scientists and Engineers, and by the Army Research Office Life Sciences Division, the National Science Foundation’s CBET Division Biophotonics Program, a National Science Foundation Emerging Frontiers in Research and Innovation award, an NSF EAGER award, an NSF INSPIRE award, the NSF Partnerships for Innovation: Building Innovation Capacity program, the Office of Naval Research, the Howard Hughes Medical Institute, the Vodafone Americas Foundation, and King Abdullah University of Science and Technology.

    See the full article here.


    UCLA Campus

    For nearly 100 years, UCLA has been a pioneer, persevering through impossibility, turning the futile into the attainable.

    We doubt the critics, reject the status quo and see opportunity in dissatisfaction. Our campus, faculty and students are driven by optimism. It is not naïve; it is essential. And it has fueled every accomplishment, allowing us to redefine what’s possible, time after time.

    This can-do perspective has brought us 12 Nobel Prizes, 12 Rhodes Scholarships, more NCAA titles than any university and more Olympic medals than most nations. Our faculty and alumni helped create the Internet and pioneered reverse osmosis. And more than 100 companies have been created based on technology developed at UCLA.
