Tagged: Interferometry

  • richardmitnick 12:51 pm on December 18, 2019 Permalink | Reply
    Tags: "Remote Quantum Systems Produce Interfering Photons", Interferometry

    From Joint Quantum Institute: “Remote Quantum Systems Produce Interfering Photons” 


    From Joint Quantum Institute

    December 17, 2019

    Research Contact
    Steve Rolston

    Story by Jillian Kunze

    A schematic showing the paths taken by photons from two different sources in neighboring buildings. (Credit: S. Kelley/NIST)

    Scientists at the Joint Quantum Institute (JQI) have observed, for the first time, interference between particles of light created using a trapped ion and a collection of neutral atoms. Their results could be an essential step toward the realization of a distributed network of quantum computers capable of processing information in novel ways.

    In the new experiment, atoms in neighboring buildings produced photons—the quantum particles of light—in two distinct ways. Several hundred feet of optical cables then brought the photons together, and the research team, which included scientists from JQI as well as the Army Research Lab, measured a telltale interference pattern. It was the first time that photons from these two particular quantum systems were manipulated into having the same wavelength, energy and polarization—a feat that made the particles indistinguishable. The result, which may prove vital for communicating over quantum networks of the future, was published recently in the journal Physical Review Letters.

    “If we want to build a quantum internet, we need to be able to connect nodes of different types and functions,” says JQI Fellow Steve Rolston, a co-author of the paper and a professor of physics at the University of Maryland. “Quantum interference between photons generated by the different systems is necessary to eventually entangle the nodes, making the network truly quantum.”

    The first source of photons was a single trapped ion—an atom that is missing an electron—held in place by electric fields. Collections of these ions, trapped in a chain, are leading candidates for the construction of quantum computers due to their long lifetimes and ease of control. The second source of photons was a collection of very cold atoms, still in possession of all their electrons. These uncharged, or neutral, atomic ensembles are excellent interfaces between light and matter, as they easily convert photons into atomic excitations and vice versa. The photons produced by each of these two systems are typically different, limiting their ability to work together.

    In one building, researchers used a laser to excite a trapped barium ion to a higher energy. When it transitioned back to a lower energy, it emitted a photon at a known wavelength but in a random direction. When scientists captured a photon, they stretched its wavelength to match photons from the other source.

    In an adjacent building, a cloud of tens of thousands of neutral rubidium atoms generated the photons. Lasers were again used to pump up the energy of these atoms, and that procedure imprinted a single excitation across the whole cloud through a phenomenon called the Rydberg blockade. When the excitation shed its energy as photons, they traveled in a well-defined direction, making it easy for researchers to collect them.

    The team used an interferometer to measure the degree to which two photons were identical. A single photon entering the interferometer is equally likely to take either of two possible exits. And two distinguishable photons entering the interferometer at the same time don’t notice each other, acting like two independent single photons.

    But when researchers brought together the photons from their two sources, they almost always took the same exit—a result of quantum interference and an indication that they were nearly identical. This was precisely what the research team had hoped for: the first demonstration of interference between photons from these two very different quantum systems.
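    The two-photon interference described above is known as the Hong-Ou-Mandel effect. The article doesn't give the math, but a minimal sketch is possible: for a 50:50 beam splitter, the probability that the two photons take different exits depends only on how well their wave packets overlap. The function name and the 0.98 overlap value below are illustrative assumptions, not figures from the paper.

```python
def coincidence_probability(overlap):
    """Probability that two photons exit DIFFERENT ports of a 50:50
    beam splitter, given the overlap |<a|b>| of their wave packets.
    Distinguishable photons behave independently (probability 1/2);
    identical photons always exit together (the Hong-Ou-Mandel dip)."""
    return 0.5 * (1.0 - abs(overlap) ** 2)

# Fully distinguishable photons act like two independent particles.
print(coincidence_probability(0.0))  # 0.5

# Nearly identical photons, as in the JQI experiment, almost always
# take the same exit, so coincidences are strongly suppressed.
print(coincidence_probability(0.98))
```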

    In this experiment, photons traveled from the first building to the second via hundreds of feet of optical fiber. Due to this distance, sending photons from both systems to meet at the interferometer simultaneously was a feat of precise timing. Detectors were placed at the exits of the interferometer to detect where the photons came out, but the team often had to wait—gathering all the data took 24 hours over a period of 3 days.

    Further experimental upgrades could be used to generate a special quantum connection called entanglement between the ion and the neutral atoms. In entanglement, two quantum objects become so closely linked that the results from measuring one are correlated with the results from measuring the other, even if the objects are separated by a huge distance. Entanglement is necessary for the speedy algorithms that scientists hope to run on quantum computers in the future.

    Generating entanglement between different quantum systems usually requires identical photons, which the researchers were able to create. Unfortunately, trapped ions emit photons in a random direction, making the probability of catching them low. This meant that only about eight photons from the trapped ion made it to the interferometer each second. If the researchers attempted to perform more intricate experiments with that rate, the data could take months to collect. However, future work may increase how frequently the ion emits photons and allow for a useful rate of entanglement production.

    “This is a stepping-stone on the way to being able to entangle these two systems,” says Alexander Craddock, a graduate student at JQI and the lead author of this study. “And that would be fantastic, because you can then take advantage of all the different weird and wonderful properties of both of them.”

    In addition to Rolston and Craddock, co-authors of the paper include JQI graduate students John Hannegan, Dalia Ornelas-Huerta, and Andrew Hachtel, JQI postdoctoral researcher James Siverns, Army Research Laboratory scientists and JQI Affiliates Elizabeth Goldschmidt (now an Assistant Professor of Physics at the University of Illinois) and Qudsia Quraishi, and JQI Fellow Trey Porto.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    JQI supported by Gordon and Betty Moore Foundation

    We are on the verge of a new technological revolution as the strange and unique properties of quantum physics become relevant and exploitable in the context of information science and technology.

    The Joint Quantum Institute (JQI) is pursuing that goal through the work of leading quantum scientists from the Department of Physics of the University of Maryland (UMD), the National Institute of Standards and Technology (NIST) and the Laboratory for Physical Sciences (LPS). Each institution brings to JQI major experimental and theoretical research programs that are dedicated to the goals of controlling and exploiting quantum systems.

  • richardmitnick 4:14 pm on December 6, 2019 Permalink | Reply
    Tags: "More Than Just Astronomy: Radio Telescopes for Geophysics", European Space Agency’s Sentinel-1 satellite constellation, InSAR, Interferometry, International VLBI Service for Geodesy and Astrometry

    From Eos: “More Than Just Astronomy: Radio Telescopes for Geophysics” 


    From Eos

    Katherine Kornei

    Linking an existing network of radio telescopes with satellite radar would make it possible to measure ground displacements in a globally consistent way, scientists propose.

    A radio telescope, part of the Goldstone Deep Space Communications Complex, looms over California’s Mojave Desert. Credit: NASA/JPL-Caltech

    Radio telescopes reveal distant solar systems and bubbles of gas near our galaxy’s center. But they’re useful for more than just astronomy—a subset of the world’s radio telescopes could also play an important role in geophysics research. A team of scientists has now demonstrated how radio telescopes could be linked to satellites that measure ground deformation, the first step toward studying changes on Earth’s surface on a global scale.

    Wanted: A Global View

    “The height of Earth’s surface is changing all of the time,” said Amy Parker, a satellite radar specialist at Curtin University in Perth, Australia. These displacements occur for a myriad of reasons, some natural and some anthropogenic: earthquakes, mining, and groundwater extraction, for example.

    But accurately monitoring these changes on intercontinental scales—important for determining how land movements affect calculations of sea level rise and fall, for instance—is currently impossible: Interferometric synthetic aperture radar (InSAR), which involves bouncing microwaves off Earth’s surface and measuring their travel time and phase to trace ground deformation, works only over contiguous swaths of land. (That’s because water scatters microwaves inconsistently.) InSAR is “pretty amazing,” said Parker, but it measures ground displacement only relative to an arbitrary reference like the mean value in an image. It doesn’t measure changes relative to an absolute reference frame, and it can’t be used to study global-scale processes, said Parker. “We need to tie measurements on different continents into a consistent reference frame.”
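    The phase measurement at the heart of InSAR converts directly into line-of-sight ground motion. The sketch below assumes the standard two-way phase-to-displacement relation and Sentinel-1's C-band wavelength of roughly 5.55 cm; the function name and sign convention are illustrative, not from the study.

```python
import math

def los_displacement(delta_phase_rad, wavelength_m=0.0555):
    """Convert an unwrapped interferometric phase change (radians)
    into line-of-sight ground displacement (metres). The factor 4*pi
    accounts for the radar signal's two-way travel. The default
    wavelength is roughly Sentinel-1's C-band (~5.55 cm)."""
    return delta_phase_rad * wavelength_m / (4.0 * math.pi)

# One full phase cycle (2*pi) corresponds to half a wavelength of
# line-of-sight motion: about 2.8 cm for Sentinel-1.
print(los_displacement(2 * math.pi))
```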

    One way of doing so, Parker and her colleagues suggest, is to connect two existing networks: InSAR satellites and radio telescopes capable of very long baseline interferometry (VLBI).

    Global mm-VLBI Array

    Here Come the Telescopes

    Astronomical observations often involve resolving fine details, like separating two objects that appear close together in the sky. Physically larger telescopes have better angular resolution, but there’s a practical limit to how large a single telescope can be.

    That’s where interferometry comes in. By carefully combining the light gathered by multiple telescopes linked together by precise timing, astronomers can, in a sense, build a much larger telescope: They can achieve an angular resolution equal to that of a telescope with a diameter that’s the distance between the linked telescopes. Very long baseline interferometry refers to interferometry done over very large distances (“baselines”), even across continents. (Astronomers used VLBI to create the Event Horizon Telescope, a network of telescopes that obtained the first image of a black hole.)

    The first image of a black hole, at the center of Messier 87. (Credit: Event Horizon Telescope Collaboration, via NSF and ERC, 4.10.19)

    When a network of VLBI telescopes accurately measures the arrival of light from a distant galaxy, researchers can compare the time stamps of the observations to determine the telescopes’ positions relative to one another. Thanks to precise timing, the distances between telescopes can be measured to within a few millimeters.
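    The millimetre-level precision quoted above follows directly from timing: a delay measurement converts to a distance by multiplying by the speed of light. A minimal sketch, with an assumed (illustrative) 10-picosecond timing error:

```python
def baseline_precision_mm(delay_error_s):
    """Baseline-length precision implied by a VLBI delay-measurement
    error, via distance = c * delay, returned in millimetres."""
    c = 299_792_458.0  # speed of light, m/s
    return c * delay_error_s * 1000.0

# A ~10 picosecond timing error corresponds to ~3 mm of baseline,
# consistent with the few-millimetre precision quoted above.
print(baseline_precision_mm(10e-12))
```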

    Because telescopes don’t move relative to Earth’s surface, these measurements reflect changes in the planet’s crust and can be used to trace the motion of tectonic plates, for instance. The International VLBI Service for Geodesy and Astrometry coordinates these geodetic measurements from NASA’s Goddard Space Flight Center in Greenbelt, Md. Currently, there are about 40 VLBI telescopes worldwide that can do this sort of geodetic monitoring.

    Tests on Two Continents

    Connecting the capabilities of InSAR satellites and geodetic VLBI telescopes would open up new observing opportunities, Parker said. “We get a connection between what the satellite is measuring and the reference frame that the telescope is measuring.”

    To test the feasibility of this idea, the researchers focused on four geodetic VLBI telescopes, three in Australia and one in Sweden. They showed that the telescopes could be tied to the European Space Agency’s Sentinel-1 satellite constellation used for InSAR by simply pointing the telescopes statically toward the location of an overpassing satellite.

    ESA Sentinel-1B

    Microwaves emitted by the satellites were readily picked up by the telescopes and reflected back, even when the telescopes didn’t track a satellite’s overpass. “It’s the easiest solution for an operator to implement, and it’s as good as steering the telescope,” said Parker.

    These observations can be completed in only a minute or two, Parker and her colleagues showed, and they don’t require any new instruments or infrastructure. However, it might be necessary to protect telescopes’ sensitive electronics from the satellites’ relatively strong signals, the researchers found. One option is to install metallic foil—impervious to radar frequencies—around a telescope’s low-noise amplifier. Another possibility, which Parker and her team tested, was to simply point the telescope slightly away from a satellite’s position.

    “The international network of Very Long Baseline Interferometry telescopes provides an existing, yet unexploited, link to unify satellite-radar measurements on a global scale,” the researchers concluded in their study, which was published in Geophysical Research Letters in November.

    “It’s a really nice piece of work,” said John Gipson, a physicist at NASA Goddard Space Flight Center and an International VLBI Service for Geodesy and Astrometry team member not involved in this research. “It’s very practical.”

    Parker and her colleagues are optimistic that the scientific community will see the advantages of using radio telescopes for geophysics applications. They hope to see a sizeable number of telescopes and InSAR satellites linked within the next year or two.


    Eos is the leading source for trustworthy news and perspectives about the Earth and space sciences and their impact. Its namesake is Eos, the Greek goddess of the dawn, who represents the light shed on understanding our planet and its environment in space by the Earth and space sciences.

  • richardmitnick 4:16 pm on May 11, 2019 Permalink | Reply
    Tags: Interferometry

    From Ethan Siegel: “How Does The Event Horizon Telescope Act Like One Giant Mirror?” 

    From Ethan Siegel
    May 11, 2019

    The Allen Telescope Array is potentially capable of detecting a strong radio signal from Proxima b, or any other star system with strong enough radio transmissions. It has successfully worked in concert with other radio telescopes across extremely long baselines to resolve the event horizon of a black hole: arguably its crowning achievement. (WIKIMEDIA COMMONS / COLBY GUTIERREZ-KRAYBILL)

    If you want to observe the Universe more deeply and at higher resolution than ever before, there’s one tactic that everyone agrees is ideal: build as big a telescope as possible. But the highest resolution image we’ve ever constructed in astronomy doesn’t come from the biggest telescope, but rather from an enormous array of modestly-sized telescopes: the Event Horizon Telescope. How is that possible? That’s what our Ask Ethan questioner for this week, Dieter, wants to know, stating:

    “I’m having difficulty understanding why the EHT array is considered as ONE telescope (which has the diameter of the earth).
    When you consider the EHT as ONE radio telescope, I do understand that the angular resolution is very high due to the wavelength of the incoming signal and earth’s diameter. I also understand that time syncing is critical.
    But it would help very much to explain why the diameter of the EHT is considered as ONE telescope, considering there are about 10 individual telescopes in the array.”

    It’s made up of scores of telescopes at many different sites across the world. But it acts like one giant telescope. Here’s how.

    Event Horizon Telescope Array

    • Arizona Radio Observatory/Submillimeter-wave Astronomy (ARO/SMT)
    • Atacama Pathfinder EXperiment (APEX)
    • Combined Array for Research in Millimeter-wave Astronomy (CARMA) (no longer in service)
    • Atacama Submillimeter Telescope Experiment (ASTE)
    • Caltech Submillimeter Observatory (CSO)
    • Institut de Radioastronomie Millimetrique (IRAM) 30 m radio telescope, on Pico Veleta in the Spanish Sierra Nevada, altitude 2,850 m (9,350 ft)
    • James Clerk Maxwell Telescope, Mauna Kea, Hawaii, USA
    • Large Millimeter Telescope Alfonso Serrano
    • CfA Submillimeter Array, Mauna Kea, Hawaii, USA, altitude 4,080 m (13,390 ft)
    • ESO/NRAO/NAOJ ALMA Array, Chile
    • South Pole Telescope (SPT)

    Future arrays and telescopes

    • IRAM NOEMA interferometer, on the wide and isolated Plateau de Bure in the French Alps at an elevation of 2,550 m; currently ten antennas, each 15 m in diameter
    • NSF/CfA Greenland Telescope
    • ARO 12 m Radio Telescope, Kitt Peak National Observatory, Arizona, USA, altitude 1,914 m (6,280 ft)
    Constructing an image of the black hole at the center of Messier 87 is one of the most remarkable achievements we’ve ever made. Here’s what made it possible.

    The brightness distance relationship, and how the flux from a light source falls off as one over the distance squared. The Earth has the temperature that it does because of its distance from the Sun, which determines how much energy-per-unit-area is incident on our planet. Distant stars or galaxies have the apparent brightness they do because of this relationship, which is demanded by energy conservation. Note that the light also spreads out in area as it leaves the source. (E. SIEGEL / BEYOND THE GALAXY)

    The first thing you need to understand is how light works. When you have any light-emitting object in the Universe, the light it emits will spread out in a sphere upon leaving the source. If all you had was a photo-detector that was a single point, you could still detect that distant, light-emitting object.

    But you wouldn’t be able to resolve it.

    When light (i.e., a photon) strikes your point-like detector, you can register that the light arrived; you can measure the light’s energy and wavelength; you can know what direction the light came from. But you wouldn’t be able to know anything about that object’s physical properties. You wouldn’t know its size, shape, physical extent, or whether different parts were different colors or brightnesses. This is because you’re only receiving information at a single point.

    Nebula NGC 246 is better known as the Skull Nebula, for the presence of its two glowing eyes. The central eye is actually a pair of binary stars, and the smaller, fainter one is responsible for the nebula itself, as it blows off its outer layers. It’s only 1,600 light-years away, in the constellation of Cetus. Seeing this as more than a single object requires the ability to resolve these features, dependent on the size of the telescope and the number of wavelengths of light that fit across its primary mirror. (GEMINI SOUTH GMOS, TRAVIS RECTOR (UNIV. ALASKA))

    Gemini Observatory GMOS on Gemini South

    Gemini/South telescope, Cerro Tololo Inter-American Observatory (CTIO) campus near La Serena, Chile, at an altitude of 7200 feet

    What would it take to know whether you were looking at a single point of light, such as a star like our Sun, or multiple points of light, like you’d find in a binary star system? For that, you’d need to receive light at multiple points. Instead of a point-like detector, you could have a dish-like detector, like the primary mirror on a reflecting telescope.

    When the light comes in, it’s not striking a point anymore, but rather an area. The light that had spread out in a sphere now gets reflected off of the mirror and focused to a point. And light that comes from two different sources, even if they’re close together, will be focused to two different locations.

    Any reflecting telescope is based on the principle of reflecting incoming light rays via a large primary mirror which focuses that light to a point, where it’s then either broken down into data and recorded or used to construct an image. This specific diagram illustrates the light-paths for a Herschel-Lomonosov telescope system. Note that two distinct sources will have their light focused to two distinct locations (blue and green paths), but only if the telescope has sufficient capabilities. (WIKIMEDIA COMMONS USER EUDJINNIUS)

    If your telescope mirror is large enough compared to the separation of the two objects, and your optics are good enough, you’ll be able to resolve them. If you build your apparatus right, you’ll be able to tell that there are multiple objects. The two sources of light will appear to be distinct from one another. Technically, there’s a relationship between three quantities:

    the angular resolution you can achieve,
    the diameter of your mirror,
    and the wavelength of light you’re looking in.

    If your sources are closer together, or your telescope mirror is smaller, or you look using a longer wavelength of light, it becomes more and more challenging to resolve whatever you’re looking at. It makes it harder to resolve whether there are multiple objects or not, or whether the object you’re viewing has bright-and-dark features. If your resolution is insufficient, everything appears as nothing more than a blurry, unresolved single spot.
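    The relationship among these three quantities is the Rayleigh criterion. A minimal sketch (the function name and example mirror sizes are illustrative choices, not from the article):

```python
def rayleigh_limit_arcsec(wavelength_m, diameter_m):
    """Diffraction-limited angular resolution from the Rayleigh
    criterion, theta = 1.22 * lambda / D, in arcseconds."""
    theta_rad = 1.22 * wavelength_m / diameter_m
    return theta_rad * 206_265.0  # radians -> arcseconds

# Visible light (550 nm) through a 2.4 m mirror (Hubble-sized):
print(rayleigh_limit_arcsec(550e-9, 2.4))   # ~0.058 arcsec

# The same light through a 10 cm amateur telescope:
print(rayleigh_limit_arcsec(550e-9, 0.10))  # ~1.4 arcsec
```

    A larger mirror or a shorter wavelength shrinks the limit; a longer wavelength, as in radio astronomy, makes it dramatically worse, which is exactly why radio interferometers need such long baselines.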

    The limits of resolution are determined by three factors: the diameter of your telescope, the wavelength of light you’re viewing in, and the quality of your optics. If you have perfect optics, you can resolve all the way down to the Rayleigh limit, the highest resolution allowed by physics. (SPENCER BLIVEN / PUBLIC DOMAIN)

    So that’s the basics of how any large, single-dish telescope works. The light comes in from the source, with every point in space — even different points originating from the same object — emitting its own light with its own unique properties. The resolution is determined by the number of wavelengths of light that can fit across our primary mirror.

    If our detectors are sensitive enough, we’ll be able to resolve all sorts of features on an object. Hot-and-cold regions of a star, like sunspots, can appear. We can make out features like volcanoes, geysers, icecaps and basins on planets and moons. And the extent of light-emitting gas or plasma, along with their temperatures and densities, can be imaged as well. It’s a fantastic achievement that only depends on the physical and optical properties of your telescope.

    The second-largest black hole as seen from Earth, the one at the center of the galaxy Messier 87, is shown in three views here. At the top is optical from Hubble, at the lower-left is radio from NRAO, and at the lower-right is X-ray from Chandra. These differing views have different resolutions dependent on the optical sensitivity, wavelength of light used, and size of the telescope mirrors used to observe them. The Chandra X-ray observations provide exquisite resolution despite having an effective 8-inch (20 cm) diameter mirror, owing to the extremely short-wavelength nature of the X-rays it observes. (TOP, OPTICAL, HUBBLE SPACE TELESCOPE / NASA / WIKISKY; LOWER LEFT, RADIO, NRAO / VERY LARGE ARRAY (VLA); LOWER RIGHT, X-RAY, NASA / CHANDRA X-RAY TELESCOPE)

    NASA/ESA Hubble Telescope

    NRAO/Karl V Jansky Expanded Very Large Array, on the Plains of San Agustin fifty miles west of Socorro, NM, USA, at an elevation of 6970 ft (2124 m)

    NASA/Chandra X-ray Telescope

    But maybe you don’t need the entire telescope. Building a giant telescope is expensive and resource-intensive, and making one so large actually serves two distinct purposes:

    The larger your telescope, the better your resolution, based on the number of wavelengths of light that fit across your primary mirror.
    The larger your telescope’s collecting area, the more light you can gather, which means you can observe fainter objects and finer details than you could with a lower-area telescope.

    If you took your large telescope mirror and started darkening out some spots, as if applying a mask to the mirror, you’d no longer receive light from those locations. As a result, your sensitivity to faint objects would drop in proportion to the lost light-gathering area. But the resolution would still be set by the maximum separation between the unmasked portions of the mirror.

    ESO/NRAO/NAOJ ALMA Array in Chile in the Atacama at Chajnantor plateau, at 5,000 metres

    ALMA, perhaps the most advanced and most complex array of radio telescopes in the world, is capable of imaging unprecedented details in protoplanetary disks, and is also an integral part of the Event Horizon Telescope.

    This is the principle on which arrays of telescopes are based. There are many sources out there, particularly in the radio portion of the spectrum, that are extremely bright, so you don’t need all that collecting area that comes with building an enormous, single dish.

    Instead, you can build an array of dishes. Because the light from a distant source will spread out, you want to collect light over as large an area as possible. You don’t need to invest all your resources in constructing an enormous dish with supreme light-gathering power, but you still need that same superior resolution. And that’s where the idea of using a giant array of radio telescopes comes from. With a linked array of telescopes all over the world, we can resolve some of the radio-brightest but smallest angular-size objects out there.

    EHT map

    This diagram shows the location of all of the telescopes and telescope arrays used in the 2017 Event Horizon Telescope observations of M87. Only the South Pole Telescope was unable to image M87, as it is located on the wrong part of the Earth to ever view that galaxy’s center. Every one of these locations is outfitted with an atomic clock, among other pieces of equipment. (NRAO)

    Functionally, there is no difference between the following two scenarios.

    1. The Event Horizon Telescope is a single mirror with a lot of masking tape over portions of it. The light gets collected and focused from all these disparate locations across the Earth into a single point, and then synthesized together into an image that reveals the differing brightnesses and properties of your target in space, up to your maximal resolution.
    2. The Event Horizon Telescope is itself an array of many different individual telescopes and individual telescope arrays. The light gets collected, timestamped with an atomic clock (for syncing purposes), and recorded as data at each individual site. That data is then stitched and processed together appropriately to create an image that reveals the brightnesses and properties of whatever you’re looking at in space.

    The only difference is in the techniques you have to use to make it happen, but that’s why we have the science of VLBI: very long-baseline interferometry.

    In VLBI, the radio signals are recorded at each of the individual telescopes before being shipped to a central location. Each data point that’s received is stamped with an extremely accurate, high-frequency atomic clock alongside the data in order to help scientists get the synchronization of the observations correct. (PUBLIC DOMAIN / WIKIPEDIA USER RNT20)

    You might immediately start thinking of wild ideas, like launching a radio telescope into deep space and using that, networked with the telescopes on Earth, to extend your baseline. It’s a great plan, but you must understand that there’s a reason we didn’t just build the Event Horizon Telescope with two well-separated sites: we want that incredible resolution in all directions.

    We want to get full two-dimensional coverage of the sky, which means ideally we’d have our telescopes arranged in a large ring to get those enormous separations. That’s not feasible, of course, on a world with continents and oceans and cities and nations and other borders, boundaries and constraints. But with eight independent sites across the world (seven of which were useful for the M87 image), we were able to do incredibly well.

    The Event Horizon Telescope’s first released image achieved resolutions of 22.5 microarcseconds, enabling the array to resolve the event horizon of the black hole at the center of M87. A single-dish telescope would have to be 12,000 km in diameter to achieve this same sharpness. Note the differing appearances between the April 5/6 images and the April 10/11 images, which show that the features around the black hole are changing over time. This helps demonstrate the importance of syncing the different observations, rather than just time-averaging them. (EVENT HORIZON TELESCOPE COLLABORATION)
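    As a rough consistency check on the numbers above, an interferometer's resolution goes as the wavelength divided by the longest baseline. Assuming the EHT's ~1.3 mm observing wavelength and a ~12,000 km baseline (the function name is an illustrative choice):

```python
def vlbi_resolution_uas(wavelength_m, baseline_m):
    """Approximate angular resolution theta ~ lambda / B of an
    interferometer with maximum baseline B, in microarcseconds."""
    theta_rad = wavelength_m / baseline_m
    return theta_rad * 206_265.0 * 1e6  # radians -> microarcseconds

# EHT: ~1.3 mm wavelength over a ~12,000 km Earth-sized baseline,
# close to the 22.5 microarcseconds achieved in the M87 image.
print(vlbi_resolution_uas(1.3e-3, 1.2e7))
```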

    Right now, the Event Horizon Telescope is limited to Earth, limited to the dishes that are presently networked together, and limited by the particular wavelengths it can measure. If it could be modified to observe at shorter wavelengths, and could overcome the atmospheric opacity at those wavelengths, we could achieve higher resolutions with the same equipment. In principle, we might be able to see features three-to-five times as sharp without needing a single new dish.

    By making these simultaneous observations all across the world, the Event Horizon Telescope really does behave as a single telescope. It only has the light-gathering power of the individual dishes added together, but can achieve the resolution of the distance between the dishes in the direction that the dishes are separated.

    By spanning the diameter of Earth with many different telescopes (or telescope arrays) simultaneously, we were able to obtain the data necessary to resolve the event horizon.

    The Event Horizon Telescope behaves like a single telescope because of the incredible advances in the techniques we use and the increases in computational power and novel algorithms that enable us to synthesize this data into a single image. It’s not an easy feat, and took a team of over 100 scientists working for many years to make it happen.

    But optically, the principles are the same as using a single mirror. We have light coming in from different spots on a single source, all spreading out, and all arriving at the various telescopes in the array. It’s just as though they’re arriving at different locations along an extremely large mirror. The key is in how we synthesize that data together, and use it to reconstruct an image of what’s actually occurring.

    Now that the Event Horizon Telescope team has successfully done exactly that, it’s time to set our sights on the next target: learning as much as we can about every black hole we’re capable of viewing. Like all of you, I can hardly wait.


    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

  • richardmitnick 9:49 am on May 8, 2019 Permalink | Reply
    Tags: Interferometry, Persistent gravitational wave observables, When two massive objects such as neutron stars or black holes collide they send shockwaves through the Universe rippling the very fabric of space-time itself.

    From Cornell University via Science Alert: “Gravitational Waves Could Be Leaving Some Weird Lasting Effects in Their Wake” 

    From Cornell University



    Science Alert

    8 MAY 2019


    The faint, flickering distortions of space-time we call gravitational waves are tricky to detect, and we’ve only managed to do so in recent years. But now scientists have calculated that these waves may leave more persistent traces of their passing – traces we may also be able to detect.

    Such traces are called ‘persistent gravitational wave observables’, and in a new paper [Physical Review D], an international team of researchers [see paper for science team authors] has refined the mathematical framework for defining them. In the process, they give three examples of what these observables could be.

    Here’s the quick lowdown on gravitational waves: When two massive objects such as neutron stars or black holes collide, they send shockwaves through the Universe, rippling the very fabric of space-time itself. This effect was predicted by Einstein in his theory of general relativity in 1916, but it wasn’t until 2015 that we finally had equipment sensitive enough to detect the ripples.

    That equipment is an interferometer that sends laser beams down two arms several kilometres in length. The beams are set up to interfere destructively, cancelling each other out, so, normally, no light hits the instrument's photodetectors.

    VIRGO Gravitational Wave interferometer, near Pisa, Italy

    Caltech/MIT Advanced aLigo Hanford, WA, USA installation

    Caltech/MIT Advanced aLigo detector installation Livingston, LA, USA

    Cornell SXS, the Simulating eXtreme Spacetimes (SXS) project

    Gravitational waves. Credit: MPI for Gravitational Physics/W.Benger

    Gravity is talking. Lisa will listen. Dialogos of Eide

    ESA/eLISA the future of gravitational wave research

    Localizations of gravitational-wave signals detected by LIGO (GW150914, LVT151012, GW151226, GW170104) and, more recently, by the LIGO-Virgo network (GW170814, GW170817) after Virgo came online in August 2017.

    Skymap showing how adding Virgo to LIGO helps in reducing the size of the source-likely region in the sky. (Credit: Giuseppe Greco, Virgo Urbino group)

    But when a gravitational wave hits, the warping of space-time stretches one arm while shrinking the other. This disrupts the interference pattern so that the beams no longer cancel each other out – and the laser light hits the photodetector. The pattern of the light that arrives can tell scientists about the event that created the wave.
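    The size of that effect can be sketched with an idealized two-beam model held on a dark fringe. The numbers below (a 1064 nm laser, 4 km arms, a strain of 1e-21) are representative of LIGO-class detectors, and the model deliberately ignores the Fabry-Perot arm cavities and power recycling that real instruments use to amplify this minuscule signal:

```python
import math

wavelength = 1.064e-6   # laser wavelength, metres
arm_length = 4.0e3      # interferometer arm length, metres
strain = 1e-21          # typical gravitational-wave strain amplitude

# A passing wave stretches one arm and shrinks the other by h*L/2 each,
# changing the path-length difference by roughly h*L.
delta_L = strain * arm_length

# Output intensity relative to input for a two-beam interferometer
# held on a dark fringe: I/I0 = sin^2(2*pi*delta_L / wavelength)
phase = 2 * math.pi * delta_L / wavelength
fraction = math.sin(phase) ** 2   # nonzero, but fantastically small
```

The fraction that leaks through is of order 1e-21 of the input power, which is why the real detectors need resonant cavities and heroic noise suppression.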

    But that shrinking and stretching and warping of space-time, according to astrophysicist Éanna Flanagan of Cornell University and colleagues, could be having a much longer-lasting effect.

    As the ripples in space-time propagate, they can change the velocity, acceleration, trajectories and relative positions of objects and particles in their way – and these features don’t immediately return to normal afterwards, making them potentially observable.

    Particles disturbed by a burst of gravitational waves, for instance, could show lasting changes. In their new framework, the research team mathematically detailed changes that could occur in the rotation rate of a spinning particle, as well as in its acceleration and velocity.

    Another of these persistent gravitational wave observables involves an effect similar to time dilation, whereby a strong gravitational field slows time.

    Because gravitational waves warp both space and time, two extremely precise and synchronised clocks in different locations, such as atomic clocks, could be affected by gravitational waves, showing different times after the waves have passed.

    Finally, the gravitational waves could permanently shift the relative positions of the mirrors in a gravitational wave interferometer – not by much, but enough to be detectable.

    Between the first detection in 2015 and last year, the LIGO-Virgo gravitational wave collaboration detected a handful of events before the detectors were taken offline for upgrades.

    At the moment, there are not enough detections in the bank for a meaningful statistical database to test these observables.

    But LIGO-Virgo was switched back on on 1 April, and it has since been detecting at least one gravitational wave event per week.

    The field of gravitational wave astronomy is heating up, space scientists are itching to test new mathematical calculations and frameworks, and it won’t be long before we’re positively swimming in data.

    This is an incredibly exciting time for space science – it really is.

    See the full article here.



    Once called “the first American university” by educational historian Frederick Rudolph, Cornell University represents a distinctive mix of eminent scholarship and democratic ideals. Adding practical subjects to the classics and admitting qualified students regardless of nationality, race, social circumstance, gender, or religion was quite a departure when Cornell was founded in 1865.

    Today’s Cornell reflects this heritage of egalitarian excellence. It is home to the nation’s first colleges devoted to hotel administration, industrial and labor relations, and veterinary medicine. Both a private university and the land-grant institution of New York State, Cornell University is the most educationally diverse member of the Ivy League.

    On the Ithaca campus alone nearly 20,000 students representing every state and 120 countries choose from among 4,000 courses in 11 undergraduate, graduate, and professional schools. Many undergraduates participate in a wide range of interdisciplinary programs, play meaningful roles in original research, and study in Cornell programs in Washington, New York City, and the world over.

  • richardmitnick 12:02 pm on April 26, 2019 Permalink | Reply
    Tags: ALMA- Atacama Large Millimeter/submillimeter Array, Automated Imaging Routine for Compact Arrays for the Radio Sun (AIRCARS), Interferometry, SKA Pathfinder (ASKAP) is a radio telescope array located at Murchison Radio-astronomy Observatory (MRO) in the Australian Mid West

    From AAS NOVA: “Prepping for Even Bigger Data in the Era of Interferometry” 


    From AAS NOVA

    26 April 2019
    Kerry Hensley

    ESO/NRAO/NAOJ ALMA Array in Chile in the Atacama at Chajnantor plateau, at 5,000 metres

    The Atacama Large Millimeter/submillimeter Array (ALMA), with its collection of 66 radio dishes, is one of several telescope arrays capable of capturing images of the Sun at radio wavelengths. [Y. Beletsky (LCO)/ESO]

    Interferometric arrays collect massive amounts of information, leaving astronomers with a happy problem: too much data! How can we handle mountains of data in an efficient way?

    SKA Murchison Widefield Array, Boolardy station in outback Western Australia, at the Murchison Radio-astronomy Observatory (MRO)

    One of many tiles comprising the Murchison Widefield Array (MWA). Radio interferometric arrays like MWA generate vast amounts of data. [Dr. John Goldsmith/Celestial Visions]

    Too Much of a Good Thing?

    Astronomers have come a long way from the early days of manually cataloging stars and sketching sunspots by hand. Even though today’s data sets are larger and more complex, many astronomers still manually calibrate and process their data.

    This hands-on data processing won’t always be feasible, though; interferometry — the process of linking together tens to thousands of telescopes or antennae to produce images with ever-finer angular resolution — generates far more data than humans could hope to handle manually. Just one minute’s worth of data from the Murchison Widefield Array (MWA), a radio interferometer made up of 4,096 antennae, yields roughly 10,000 images!

    With the number of interferometers increasing, we'll need to be smart about how we process all that data to minimize computing hours while maximizing the quality of the output. Among the many detectors requiring novel data-processing techniques is the planned Square Kilometre Array (SKA), which will comprise a million antennae and 2,000 radio telescopes. How can we get a handle on all this data without getting too hands-on?

    Australian Square Kilometre Array Pathfinder (ASKAP) is a radio telescope array located at Murchison Radio-astronomy Observatory (MRO) in the Australian Mid West. ASKAP consists of 36 identical parabolic antennas, each 12 metres in diameter, working together as a single instrument with a total collecting area of approximately 4,000 square metres.

    SKA Hera at SKA South Africa

    SKA Meerkat telescope(s), 90 km outside the small Northern Cape town of Carnarvon, SA

    An illustration of how increasing numbers of detectors are included in the model of the target for self-calibration. The first step includes only the blue detectors near the center, and subsequent steps add the red, teal, black, and yellow detectors to increase the complexity of the model. [Mondal et al. 2019]

    Dealing with Data Pileup

    To tackle this problem, a team led by Surajit Mondal (Tata Institute of Fundamental Research, India) developed an automated processing pipeline for interferometric data — the Automated Imaging Routine for Compact Arrays for the Radio Sun (AIRCARS). They focused on processing solar radio images, which need to capture a huge dynamic range — from extremely bright active regions to faint, wispy filaments.

    One of the challenges in radio interferometry is removing the effects of instrumental artifacts and the plasma in Earth’s atmosphere. Most radio interferometry data are corrected with a self-calibration process that treats the instrumental artifacts and the brightness of the target as free parameters and iteratively minimizes the difference between the observations and a model of the target.

    AIRCARS works especially well when applied to a compact array — one with many detectors clustered in the center and fewer near the outskirts. This configuration allows the pipeline to start with relatively little information about the target from just a few central detectors and gradually build a complex model of the target to be used in its self-calibration routine.
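    A toy version of that iterative loop can be sketched for the simplest possible case, a point source at the phase centre, where every model visibility equals the source flux. To be clear, this is not the AIRCARS pipeline itself, just the core self-calibration idea: solve for per-antenna complex gains that reconcile the observed visibilities with the model, damping each update (in the style of algorithms like StEFCal) so the iteration converges:

```python
import numpy as np

rng = np.random.default_rng(42)
n_ant = 6
flux = 1.0  # point-source model: every model visibility equals the flux

# Unknown complex antenna gains corrupt the observed visibilities:
# V_obs[i, j] = g_i * conj(g_j) * flux   (noise-free toy data)
g_true = (1 + 0.2 * rng.standard_normal(n_ant)) * np.exp(
    1j * 0.3 * rng.standard_normal(n_ant)
)
V_obs = np.outer(g_true, g_true.conj()) * flux

# Iteratively re-estimate the gains, averaging each step for stability
g = np.ones(n_ant, dtype=complex)
for _ in range(50):
    update = (V_obs @ g) / (flux * (np.abs(g) ** 2).sum())
    g = 0.5 * (g + update)

# Dividing out the solved gains should recover the model on every baseline
V_corrected = V_obs / np.outer(g, g.conj())
residual = np.max(np.abs(V_corrected - flux))
```

Real pipelines repeat this with progressively more detailed sky models and many more antennas, which is exactly the "start simple, add detectors, build complexity" loop described above.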

    An example of the improvement of the dynamic range of MWA images through the self-calibration process. The number of iterations increases from top to bottom and left to right. The dashed circle indicates the location of the Sun’s disk. [Mondal et al. 2019]

    AIRCARS in Our Future

    In their tests on MWA data, the authors find that AIRCARS is capable of capturing a dynamic range up to 100,000:1 — a huge improvement over previous processing methods.
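    Dynamic range here is simply the ratio of the brightest feature in an image to the residual off-source noise. A hypothetical illustration of the metric (the numbers are invented for the sketch, not MWA data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic image: faint background noise plus one bright compact source
image = rng.normal(0.0, 1e-5, size=(512, 512))  # off-source rms ~ 1e-5
image[256, 256] += 1.0                          # peak brightness ~ 1.0

peak = image.max()
rms = image[:100, :100].std()   # measure the noise in an empty corner
dynamic_range = peak / rms      # ~100,000:1 for these numbers
```

Reaching that ratio in practice is the hard part: any uncorrected calibration error smears power from the bright source across the field and raises the effective noise floor.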

    Mondal and collaborators note that AIRCARS can be configured to attain the maximum possible dynamic range without constraints on computing time, or to accept user-imposed time limits to rapidly process large amounts of data, depending on the user’s computational requirements.

    Because the pipeline needs no human supervision, astronomers can take a step back from processing the vast amount of incoming data and focus instead on the exciting science we can do with interferometry.


    “Unsupervised Generation of High Dynamic Range Solar Images: A Novel Algorithm for Self-calibration of Interferometry Data,” Surajit Mondal et al. 2019 ApJ 875 97.

    See the full article here.




    AAS Mission and Vision Statement

    The mission of the American Astronomical Society is to enhance and share humanity’s scientific understanding of the Universe.

    The Society, through its publications, disseminates and archives the results of astronomical research. The Society also communicates and explains our understanding of the universe to the public.
    The Society facilitates and strengthens the interactions among members through professional meetings and other means. The Society supports member divisions representing specialized research and astronomical interests.
    The Society represents the goals of its community of members to the nation and the world. The Society also works with other scientific and educational societies to promote the advancement of science.
    The Society, through its members, trains, mentors and supports the next generation of astronomers. The Society supports and promotes increased participation of historically underrepresented groups in astronomy.
    The Society assists its members to develop their skills in the fields of education and public outreach at all levels. The Society promotes broad interest in astronomy, which enhances science literacy and leads many to careers in science and engineering.

    Adopted June 7, 2009

  • richardmitnick 11:00 am on February 7, 2019 Permalink | Reply
    Tags: Abraham (Avi) Loeb, Black Hole Initiative, Black Hole Institute, Infrared results beautifully complemented by observations at radio wavelengths, Interferometry, S-02, The development of high-resolution infrared cameras revealed a dense cluster of stars at the center of the Milky Way

    From Nautilus: “How Supermassive Black Holes Were Discovered” 


    From Nautilus

    February 7, 2019
    Mark J. Reid, CfA SAO

    Astronomers turned a fantastic concept into reality.

    An Introduction to the Black Hole Institute

    Fittingly, the Black Hole Initiative (BHI) was founded 100 years after Karl Schwarzschild solved Einstein’s equations for general relativity—a solution that described a black hole decades before the first astronomical evidence that they exist. As exotic structures of spacetime, black holes continue to fascinate astronomers, physicists, mathematicians, philosophers, and the general public, following on a century of research into their mysterious nature.

    Pictor A Blast from Black Hole in a Galaxy Far, Far Away

    This computer-simulated image shows a supermassive black hole at the core of a galaxy. Credit: NASA, ESA, and D. Coe, J. Anderson

    The mission of the BHI is interdisciplinary and, to that end, we sponsor many events that create an environment supporting interaction between researchers of different disciplines. Philosophers speak with mathematicians, physicists, and astronomers; theorists speak with observers; and a series of scheduled events creates the venue for people to regularly come together.

    As an example, for a problem we care about, consider the singularities at the centers of black holes, which mark the breakdown of Einstein’s theory of gravity. What would a singularity look like in the quantum mechanical context? Most likely, it would appear as an extreme concentration of a huge mass (more than a few solar masses for astrophysical black holes) within a tiny volume. The size of the reservoir that drains all matter that fell into an astrophysical black hole is unknown and constitutes one of the unsolved problems on which BHI scholars work.

    We are delighted to present a collection of essays which were carefully selected by our senior faculty out of many applications to the first essay competition of the BHI. The winning essays will be published here on Nautilus over the next five weeks, beginning with the fifth-place finisher and working up to the first-place finisher. We hope that you will enjoy them as much as we did.

    —Abraham (Avi) Loeb
    Frank B. Baird, Jr. Professor of Science, Harvard University
    Chair, Harvard Astronomy Department
    Founding Director, Black Hole Initiative (BHI)

    In the 1700s, John Michell in England and Pierre-Simon Laplace in France independently thought “way out of the box” and imagined what would happen if a huge mass were placed in an incredibly small volume. Pushing this thought experiment to the limit, they conjectured that gravitational forces might not allow anything, even light, to escape. Michell and Laplace were imagining what we now call a black hole.

    Astronomers are now convinced that when massive stars burn through their nuclear fuel, they collapse to near nothingness and form a black hole. While the concept of a star collapsing to a black hole is astounding, the possibility that material from millions and even billions of stars can condense into a single supermassive black hole is even more fantastic.

    Cornell SXS, the Simulating eXtreme Spacetimes (SXS) project

    Yet astronomers are now confident that supermassive black holes exist and are found in the centers of most of the 100 billion galaxies in the universe.

    How did we come to this astonishing conclusion? The story begins in the mid-1900s when astronomers expanded their horizons beyond the very narrow range of wavelengths to which our eyes are sensitive. Very strong sources of radio waves were discovered and, when accurate positions were determined, many were found to be centered on distant galaxies. Shortly thereafter, radio antennas were linked together to greatly improve angular resolution.

    NRAO/Karl V Jansky Expanded Very Large Array, on the Plains of San Agustin fifty miles west of Socorro, NM, USA, at an elevation of 6970 ft (2124 m)

    ESO/NRAO/NAOJ ALMA Array in Chile in the Atacama at Chajnantor plateau, at 5,000 metres

    CfA Submillimeter Array Mauna Kea, Hawaii, USA,4,207 m (13,802 ft) above sea level

    These new “interferometers” revealed a totally unexpected picture of the radio emission from galaxies—the radio waves did not appear to come from the galaxy itself, but from two huge “lobes” symmetrically placed about the galaxy. Figure One shows an example of such a “radio galaxy,” named Cygnus A. Radio lobes can be among the largest structures in the universe, upward of a hundred times the size of the galaxy itself.

    Figure One: Radio image of the galaxy Cygnus A. Dominating the image are two huge “lobes” of radio emitting plasma. An optical image of the host galaxy would be smaller than the gap between the lobes. The minimum energy needed to power some radio lobes can be equivalent to the total conversion of 10 million stars to energy! Note the thin trails of radio emission that connect the lobes with the bright spot at the center, where all of the energy originates. NRAO/AUI

    How are immense radio lobes energized? Their symmetrical placement about a galaxy clearly suggested a close relationship. In the 1960s, sensitive radio interferometers confirmed the circumstantial case for a relationship by discovering faint trails, or “jets,” tracing radio emission from the lobes back to a very compact source at the precise center of the galaxy. These findings motivated radio astronomers to increase the sizes of their interferometers in order to better resolve these emissions. Ultimately this led to the technique of Very Long Baseline Interferometry (VLBI), in which radio signals from antennas across the Earth are combined to obtain the angular resolution of a telescope the size of our planet!

    GMVA The Global VLBI Array

    Radio images made from VLBI observations soon revealed that the sources at the centers of radio galaxies are “microscopic” by galaxy standards, even smaller than the distance between the sun and our nearest star.

    When astronomers calculated the energy needed to power radio lobes they were astounded. It required 10 million stars to be “vaporized,” totally converting their mass to energy using Einstein’s famous equation E = mc2! Nuclear reactions, which power stars, cannot even convert 1 percent of a star’s mass to energy. So trying to explain the energy in radio lobes with nuclear power would require more than 1 billion stars, and these stars would have to live within the “microscopic” volume indicated by the VLBI observations. Because of these findings, astronomers began considering alternative energy sources: supermassive black holes.
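    The arithmetic behind that astonishment is short. Converting ten million solar masses entirely to energy via E = mc², and comparing against nuclear fusion's roughly 0.7 percent mass-to-energy efficiency (the standard figure for hydrogen burning), gives:

```python
M_SUN = 1.989e30           # solar mass, kg
C = 2.998e8                # speed of light, m/s
FUSION_EFFICIENCY = 0.007  # hydrogen fusion converts ~0.7% of mass to energy

# Total conversion of 10 million stars (the minimum lobe energy quoted above)
E_lobes = 1e7 * M_SUN * C**2   # joules, ~2e54 J

# Number of stars nuclear power would need to supply the same energy
stars_needed = 1e7 / FUSION_EFFICIENCY   # over a billion
```

That factor of more than a billion stars, crammed into a region smaller than the distance to the nearest star, is what drove astronomers toward black holes.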

    Given that the centers of galaxies might harbor supermassive black holes, it was natural to check the center of our Milky Way galaxy for such a monster. In 1974, a very compact radio source, smaller than 1 second of arc (1/3600 of a degree) was discovered there. The compact source was named Sagittarius A*, or Sgr A* for short, and is shown at the center of the right panel of Figure Two. Early VLBI observations established that Sgr A* was far more compact than the size of our solar system. However, no obvious optical, infrared, or even X-ray emitting source could be confidently identified with it, and its nature remained mysterious.

    Figure Two: Images of the central region of the Milky Way. The left panel shows an infrared image. The orbital track of star S2 is overlaid, magnified by a factor of 100. The orbit has a period of 16 years, requires an unseen mass of 4 million times that of the sun, and its gravitational center is indicated by the arrow. The right panel shows a radio image. The point-like radio source Sgr A* (just below the middle of the image) is precisely at the gravitational center of the orbiting stars. Sgr A* is intrinsically motionless at the galactic center and, therefore, must be extremely massive. Left panel: R. Genzel; Right panel: J.-H. Zhao

    Star S0-2 Andrea Ghez Keck/UCLA Galactic Center Group

    Andrea’s Favorite star SO-2

    Andrea Ghez, astrophysicist and professor at the University of California, Los Angeles, who leads a team of scientists observing S2 for evidence of a supermassive black hole UCLA Galactic Center Group

    SGR A and SGR A* from Penn State and NASA/Chandra

    SGR A* , the supermassive black hole at the center of the Milky Way. NASA’s Chandra X-Ray Observatory

    Meanwhile, the development of high-resolution infrared cameras revealed a dense cluster of stars at the center of the Milky Way. These stars cannot be seen at optical wavelengths, because visible light is totally absorbed by intervening dust. However, at infrared wavelengths 10 percent of their starlight makes its way to our telescopes, and astronomers have been measuring the positions of these stars for more than two decades. These observations culminated with the important discovery that stars are moving along elliptical paths, which are a unique characteristic of gravitational orbits. One of these stars has now been traced over a complete orbit, as shown in the left panel of Figure Two.

    Many stars have been followed along partial orbits, and all are consistent with orbits about a single object. Two stars have been observed to approach the center to within the size of our solar system, which by galaxy standards is very small. At this point, gravity is so strong that stars are orbiting at nearly 10,000 kilometers per second—fast enough to cross the Earth in one second! These measurements leave no doubt that the stars are responding to an unseen mass of 4 million times that of the sun. Combining this mass with the (astronomically) small volume indicated by the stellar orbits implies an extraordinarily high density. At this density it is hard to imagine how any type of matter would not collapse to form a black hole.
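    That mass follows directly from Kepler's third law, M = 4π²a³/(GT²). Plugging in round numbers for the star S2's orbit (a 16-year period and a semi-major axis of roughly 970 astronomical units, both close to the published values):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
AU = 1.496e11      # m
YEAR = 3.156e7     # s

a = 970 * AU       # semi-major axis of S2's orbit (approximate)
T = 16 * YEAR      # orbital period

# Kepler's third law for the mass enclosed by the orbit
M = 4 * math.pi**2 * a**3 / (G * T**2)
M_solar = M / M_SUN   # comes out near 4 million solar masses
```

A single stellar orbit, a formula from the 1600s, and out drops the mass of the monster at the galactic center.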

    The infrared results just described are beautifully complemented by observations at radio wavelengths. In order to identify an infrared counterpart for Sgr A*, the position of the radio source needed to be precisely transferred to infrared images. An ingenious method to do this uses sources visible at both radio and infrared wavelengths to tie the reference frames together. Ideal sources are giant red stars, which are bright in the infrared and have strong emission at radio wavelengths from molecules surrounding them. By matching the positions of these stars at the two wavebands, the radio position of Sgr A* can be transferred to infrared images with an accuracy of 0.001 seconds of arc. This technique placed Sgr A* precisely at the position of the gravitational center of the orbiting stars.

    How much of the dark mass within the stellar orbits can be directly associated with the radio source Sgr A*? Were Sgr A* a star, it would be moving at over 10,000 kilometers per second in the strong gravitational field as other stars are observed to do. Only if Sgr A* is extremely massive would it move slowly. The position of Sgr A* has been monitored with VLBI techniques for over two decades, revealing that it is essentially stationary at the dynamical center of the Milky Way. Specifically, the component of Sgr A*’s intrinsic motion perpendicular to the plane of the Milky Way is less than one kilometer per second. By comparison, this is 30 times slower than the Earth orbits the sun. The discovery that Sgr A* is essentially stationary and anchors the galactic center requires that Sgr A* contains over 400,000 times the mass of the sun.

    Recent VLBI observations have shown that the size of the radio emission of Sgr A* is less than that contained within the orbit of Mercury. Combining this volume available to Sgr A* with the lower limit to its mass yields a staggeringly high density. This density is within a factor of less than 10 of the ultimate limit for a black hole. At such an extreme density, the evidence is overwhelming that Sgr A* is a supermassive black hole.
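    How extreme is that? The ultimate limit is the Schwarzschild radius, r_s = 2GM/c². For 4 million solar masses it works out to roughly a tenth of an astronomical unit, comfortably inside the Mercury-orbit-sized bound from VLBI:

```python
G = 6.674e-11      # m^3 kg^-1 s^-2
C = 2.998e8        # m/s
M_SUN = 1.989e30   # kg
AU = 1.496e11      # m

M = 4e6 * M_SUN            # mass of Sgr A*
r_s = 2 * G * M / C**2     # Schwarzschild radius, metres
r_s_au = r_s / AU          # ~0.08 AU

mercury_orbit_au = 0.39    # Mercury's semi-major axis, AU
ratio = mercury_orbit_au / r_s_au   # VLBI size limit vs. the ultimate limit
```

The measured emission region is only a few times larger than the event horizon itself, leaving essentially no room for any explanation other than a black hole.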

    These discoveries are elegant for their directness and simplicity. Orbits of stars provide an absolutely clear and unequivocal proof of a great unseen mass concentration. Finding that the compact radio source Sgr A* is at the precise location of the unseen mass and is motionless provides even more compelling evidence for a supermassive black hole. Together they form a simple, unique demonstration that the fantastic concept of a supermassive black hole is indeed a reality. John Michell and Pierre-Simon Laplace would be astounded to learn that their conjectures about black holes not only turned out to be correct, but were far grander than they ever could have imagined.

    Mark J. Reid is a senior astronomer at the Center for Astrophysics, Harvard & Smithsonian. He uses radio telescopes across the globe simultaneously to obtain the highest resolution images of newborn and dying stars, as well as black holes.

    See the full article here.



    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

  • richardmitnick 8:54 am on June 22, 2017 Permalink | Reply
    Tags: Center for High Angular Resolution Astronomy (CHARA) Array, Interferometry, Michigan Infra-Red Combiner (MIRC), Mount Wilson Observatory perched atop the San Gabriel Mountains outside Los Angeles CA USA, Mt Wilson 100 inch Hooker Telescope

    From aeon: “How the face of a distant star reveals our place in the cosmos” 27 July, 2016, but worth a look 



    Rachael Roettenbacher
    27 July, 2016

    Courtesy Dr Rachael Roettenbacher, University of Michigan

    Mount Wilson Observatory, perched atop the San Gabriel Mountains outside Los Angeles, has been the site of some of the greatest expansions in human knowledge of the cosmos.

    Mt Wilson 100 inch Hooker Telescope, perched atop the San Gabriel Mountains outside Los Angeles, CA, USA

    It is here that, in 1924, Edwin Hubble proved the existence of galaxies beyond our own, and here that he also collected the clinching evidence that the Universe is expanding. Now Mount Wilson is home to another observational leap: bringing the stars into view, not as points of light but as evolving, dynamic suns every bit as tangible as our own.

    The essential tool for this breakthrough is interferometry, in which astronomers combine light from widely separated telescopes to create a virtual telescope with a diameter as large as that separation. This technique makes it possible to resolve details far too small to discern using a standard telescope. The first such observations of stars took place at Mount Wilson in the 1920s. Using a 20-foot interferometer (two small mirrors mounted 20 feet apart on the 100-inch Hooker reflector to effectively make a 20-foot telescope), Albert A Michelson and Francis G Pease managed to measure the angular size of stars other than the Sun for the first time. Their interferometer was powerful enough to measure only a few of the closest stars. Building a much larger device was beyond the practical engineering capabilities of the time.
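    The principle Michelson and Pease exploited is that as the mirror separation grows, the fringes from a uniform stellar disk wash out, vanishing entirely at a baseline of about 1.22λ/θ. Plugging in the angular diameter they reported for Betelgeuse in 1920 (about 0.047 arcseconds) at a visual wavelength of roughly 570 nm recovers the roughly 3-metre separation at which the fringes disappear:

```python
import math

wavelength = 5.7e-7    # observing wavelength, metres (visual band)
theta_arcsec = 0.047   # Betelgeuse's measured angular diameter
theta_rad = theta_arcsec / 3600 * math.pi / 180

# Baseline at which the fringe visibility of a uniform disk first hits zero
baseline = 1.22 * wavelength / theta_rad   # about 3 metres
```

Only the largest, closest stars have angular sizes reachable with a 20-foot beam, which is why the field then stalled until separate-telescope interferometers arrived.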

    After that, the field fell dormant for many decades. In 1950, the astronomer Gerald E Kron mused on the possibility of resolving the surfaces of other stars but concluded that they are ‘too distant to be observed as resolved disks with optical equipment now available, or, probably, with optical equipment that will ever be available to us’. (He later managed to infer the presence of dark surface features on other stars, albeit indirectly.) With the recent rebirth of optical interferometry using distinct telescopes, though, the technology has progressed far beyond what Kron could imagine.

    Several such facilities are now operating, but Mount Wilson boasts the world’s longest optical interferometer: the Center for High Angular Resolution Astronomy (CHARA) Array. The CHARA Array is resolving the surfaces of nearby stars, providing unprecedented glimpses of the Sun’s neighbours.


    The CHARA Array consists of six one-metre telescopes that are in a Y-shaped configuration, having baselines of various lengths up to 331 metres. Those six telescopes can be combined into 15 telescope pairs that fill in unique parts of the virtual 331-metre telescope with each observation. An instrument called the Michigan Infra-Red Combiner (MIRC), developed by John D Monnier and his group at the University of Michigan, can simultaneously combine light from all six telescopes to take full advantage of the Array. MIRC has previously been used to image the oblate (flattened) surfaces of rapidly rotating stars, circumstellar disks, and the expanding shell of a nova explosion.
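    Two quick numbers fall out of that description. Six telescopes taken in pairs give n(n−1)/2 = 15 baselines, and the longest 331-metre baseline at MIRC's near-infrared observing wavelength (taken here as 1.65 μm in the H band, an assumption on my part) corresponds to a resolution of about one milliarcsecond:

```python
import math
from itertools import combinations

telescopes = ["S1", "S2", "E1", "E2", "W1", "W2"]  # CHARA's six stations
pairs = list(combinations(telescopes, 2))
n_baselines = len(pairs)   # 15 unique telescope pairs

wavelength = 1.65e-6       # H band, metres (typical for MIRC)
baseline = 331.0           # longest CHARA baseline, metres

theta_rad = wavelength / baseline
theta_mas = theta_rad * (180 / math.pi) * 3600 * 1e3   # ~1 milliarcsecond
```

Each of those 15 pairs samples a different piece of the virtual 331-metre aperture, which is why observing through a full stellar rotation matters: it fills in the coverage needed to reconstruct an image.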

    Using the CHARA Array and MIRC together, it is now possible to do what Kron thought impossible: directly image the spotted, active surface of distant stars. The job is still hugely taxing. Most stars are too small to resolve even with the current state-of-the-art technology. Creating a resolved image requires selecting the right targets. The stars have to appear bright and relatively large in the sky. They must have starspots – regions of magnetic activity, analogous to sunspots on the Sun – so that there are dark features for us to observe. Finally, the stars must spin quickly enough that we can watch them through a full rotation without the spots evolving too much.

    I was excited to take on the challenge as part of my doctoral dissertation. I chose as my target the primary member of the binary system zeta Andromedae, a star dimly visible to the naked eye in the autumn sky. Zeta And (as it is commonly called) is fairly nearby (181 light-years away) and is 16 times the radius of the Sun. It has an approximately prolate spheroid shape, akin to an American football, caused by the gravity of its close companion; it has also been shown via indirect imaging to host dark spots, including one on its visible pole, so I knew it would be a perfect target for my dissertation work. Getting a clear look at zeta And required a group of 14 collaborators, including my advisor (and MIRC's creator), Monnier. We observed the star for as many nights as possible through a single stellar rotation, spanning 18 nights, in September 2013 at the CHARA Array. Combining all the data and mapping it onto the rotating surface required a great deal of additional time and effort.

    This May, we published [Nature] our triumphant result: the highest-resolution image ever of a star outside of our Solar System. We were able to detect the spot on the pole of zeta And, along with starspots that form with seemingly no pattern on the surface. The star’s behaviour is quite unlike that of the Sun, which forms sunspots only at very specific latitudes on its surface. Part of the reason for the difference is that zeta And is an older, evolved star with a different internal structure. Theoretical models suggest that much of zeta And’s interior outside of its core is convective, with hotter material rising and cooler material falling like a boiling pot of water on a stove; in contrast, only the Sun’s outermost layers behave that way. Zeta And’s 18-day spin is also significantly faster than the 27-day rotation period of the Sun.

    Our study of zeta And constrains theories attempting to link solar magnetism to that of other stars. It also offers an intriguing glimpse into the past. Evolutionary models indicate that the young Sun similarly had a thick convective layer and rotated more rapidly than it does today. By examining the spotted surface of zeta And, we get new insights into the solar activity that could have influenced the formation of the solar system 4.5 billion years ago, and also the subsequent development of life on Earth.

    Best of all, our mapping of zeta And is only the beginning. Planned upgrades to the CHARA Array and MIRC will make it possible to observe the surfaces of fainter stars, including ‘young solar analogs’ – that is, infant stars surrounded by disks that are currently adding mass to the star and forming new planets. By resolving a variety of types of stars and their features, we can constrain our theories on stellar structure, magnetism, formation and evolution.

    The power of interferometry is just beginning to be harnessed, and the images of zeta And demonstrate the great potential of this under-utilised technique.

    ESO VLT Interferometer image, Cerro Paranal, with an elevation of 2,635 metres (8,645 ft) above sea level

    Four centuries ago, Galileo, an avid observer of sunspots, realised that the Milky Way is composed of ‘a mass of innumerable stars planted together in clusters’. Today, we are at last beginning to find out what those other stars really look like.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

  • richardmitnick 11:47 am on November 10, 2015 Permalink | Reply
    Tags: , , , Interferometry, ,   

    From ALMA: “ALMA Links with Other Observatories to Create Earth-size Telescope” 

    ESO ALMA Array

    10 November 2015
    Valeria Foncea

    Education and Public Outreach Officer

    Joint ALMA Observatory

    Santiago, Chile

    Tel: +56 2 467 6258

    Cell: +56 9 75871963
    Email: vfoncea@alma.cl

    Charles E. Blue
    Public Information Officer
    National Radio Astronomy Observatory
    Charlottesville, Virginia, USA
    Tel: +1 434 296 0314
    Cell: +1 434.242.9559
    E-mail: cblue@nrao.edu

    Richard Hook
    Public Information Officer, ESO

    Garching bei München, Germany

    Tel: +49 89 3200 6655

    Cell: +49 151 1537 3591
    Email: rhook@eso.org

    Masaaki Hiramatsu

    Education and Public Outreach Officer, NAOJ Chile
Tokyo, Japan

    Tel: +81 422 34 3630

    E-mail: hiramatsu.masaaki@nao.ac.jp

    ALMA combined its power with IRAM and the VLBA in separate VLBI observations. Credit: A. Angelich (NRAO/AUI/NSF)

    The Atacama Large Millimeter/submillimeter Array (ALMA) continues to expand its power and capabilities by linking with other millimeter-wavelength telescopes in Europe and North America in a series of very long baseline interferometry (VLBI) observations.

    In VLBI, data from two or more telescopes are combined to form a single virtual telescope that spans the geographic distance between them. The most recent of these experiments with ALMA formed an Earth-size telescope with extraordinarily fine resolution.
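    The resolution of such a virtual telescope is set by the diffraction limit: the observing wavelength divided by the longest baseline. A rough sketch of the Earth-size case (the 1.3 mm wavelength and Earth-diameter baseline are assumed values for illustration):

```python
import math

wavelength = 1.3e-3   # assumed observing wavelength in meters (1.3 mm band)
baseline = 1.274e7    # Earth's diameter in meters, the maximum possible baseline

theta_rad = wavelength / baseline          # diffraction-limited resolution in radians
theta_uas = theta_rad * 206265 * 1e6       # radians -> arcseconds -> microarcseconds
print(f"{theta_uas:.0f} microarcseconds")  # on the order of 20 microarcseconds
```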

    These experiments are an essential step in including ALMA in the Event Horizon Telescope (EHT), a global network of millimeter-wavelength telescopes that will have the power to study the supermassive black hole at the center of the Milky Way in unprecedented detail.

    Event Horizon Telescope map
    EHT map

    Before ALMA could participate in VLBI observations, it first had to be upgraded with a new capability known as a phased array [1]. This upgrade allows its 66 antennas to function as a single radio dish 85 meters in diameter, which then becomes one element in a much larger VLBI telescope.

    The first test of ALMA’s VLBI capabilities occurred on 13 January 2015, when ALMA successfully linked with the Atacama Pathfinder Experiment Telescope (APEX), which is about two kilometers from the center of the ALMA array.


    On 30 March 2015, ALMA reached out much further by linking with the Institut de Radioastronomie Millimetrique’s (IRAM) 30-meter radio telescope in the Sierra Nevada of southern Spain.

    IRAM 30m Radio telescope
    IRAM 30 meter telescope

    Together they simultaneously observed [2] the bright quasar 3C 273. Data from this observation were combined into a single observation with a resolution of 34 microarcseconds. This is equivalent to distinguishing an object less than ten centimeters across on the Moon, as seen from Earth.
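    The lunar comparison can be checked directly: at the Moon's average distance, an angle of 34 microarcseconds subtends only a few centimeters.

```python
resolution_uas = 34.0        # reported resolution in microarcseconds
moon_distance = 3.844e8      # average Earth-Moon distance in meters

theta_rad = resolution_uas * 1e-6 / 206265  # microarcseconds -> radians
size_cm = theta_rad * moon_distance * 100   # size subtended at lunar distance, in cm
print(f"{size_cm:.1f} cm")                  # about 6 cm, under the ten centimeters quoted
```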

    The most recent VLBI observing run was performed on 1–3 August 2015 with six of the National Radio Astronomy Observatory’s (NRAO) Very Long Baseline Array (VLBA) antennas [3].


    This combined instrument formed a virtual Earth-size telescope and observed the quasar 3C 454.3, which is one of the brightest radio beacons in the sky, despite lying at a distance of 7.8 billion light-years. These data were first processed at NRAO and MIT-Haystack in the United States, and further post-processing analysis is being performed at the Max Planck Institute for Radio Astronomy (MPIfR) in Bonn, Germany.

    The new observations are a further step towards global interferometric observations with ALMA in the framework of the Global mm-VLBI Array and the Event Horizon Telescope, with ALMA as the largest and the most sensitive element. The addition of ALMA to millimeter VLBI will boost the imaging sensitivity and capabilities of the existing VLBI arrays by an order of magnitude.


    [1] The following groups and institutions participated in the ALMA Phasing Project: National Radio Astronomy Observatory, Academia Sinica Institute of Astronomy and Astrophysics, National Astronomical Observatory of Japan, Smithsonian Astrophysical Observatory, MIT Haystack, MPIfR-Bonn, Onsala Space Observatory, Universidad de Concepción in Chile, and the Joint ALMA Observatory.

    [2] The March observations were made during an observing campaign of the EHT at a wavelength of 1.3 mm.

    [3] The VLBA is an array of 10 antennas spread across the United States from Hawaii to St. Croix. For this observation, six antennas were used: North Liberty, IA; Fort Davis, TX; Los Alamos, NM; Owens Valley, CA; Brewster, WA; and Mauna Kea, HI. The observing wavelength was 3 mm.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition

    The Atacama Large Millimeter/submillimeter Array (ALMA), an international astronomy facility, is a partnership of Europe, North America and East Asia in cooperation with the Republic of Chile. ALMA is funded in Europe by the European Organization for Astronomical Research in the Southern Hemisphere (ESO), in North America by the U.S. National Science Foundation (NSF) in cooperation with the National Research Council of Canada (NRC) and the National Science Council of Taiwan (NSC) and in East Asia by the National Institutes of Natural Sciences (NINS) of Japan in cooperation with the Academia Sinica (AS) in Taiwan.

    ALMA construction and operations are led on behalf of Europe by ESO, on behalf of North America by the National Radio Astronomy Observatory (NRAO), which is managed by Associated Universities, Inc. (AUI) and on behalf of East Asia by the National Astronomical Observatory of Japan (NAOJ). The Joint ALMA Observatory (JAO) provides the unified leadership and management of the construction, commissioning and operation of ALMA.

    NRAO Small

    ESO 50


  • richardmitnick 3:01 pm on December 2, 2014 Permalink | Reply
    Tags: , , , , , Interferometry,   

    From Keck: “Scientists Accurately Quantify Dust Around Planets in Search for Life” 

    Keck Observatory

    Keck Observatory

    Keck Observatory

    December 2, 2014
    Bertrand Mennesson, PhD
    Jet Propulsion Laboratory

    Steve Jefferson
    Communications Officer
    W. M. Keck Observatory

    A new study from the Keck Interferometer, a former NASA project that combined the power of the twin W. M. Keck Observatory telescopes atop Mauna Kea, Hawaii, has brought exciting news to planet hunters. After surveying nearly 50 stars from 2008 to 2011, scientists have been able to determine with remarkable precision how much dust surrounds distant stars – a big step toward finding planets that might harbor life. The study will be published online in the Astrophysical Journal on December 8.

    Credit: NASA/JPL-Caltech
    A dusty planetary system (left) is compared to another system with little dust in this artist’s conception. Dust can make it difficult for telescopes to image planets because light from the dust can outshine that of the planets. Dust reflects visible light and shines with its own infrared, or thermal, glow. As the illustration shows, planets appear more readily in the planetary system shown at right with less dust. Research with the NASA-funded Keck Interferometer, a former NASA key science project that combined the power of the twin telescopes of the W. M. Keck Observatory atop Mauna Kea, Hawaii, shows that mature, sun-like stars appear to be, on average, not all that dusty. This is good news for future space missions wanting to take detailed pictures of planets like Earth and seek out possible signs of life.

    “This was really a mathematical tour de force,” said Peter Wizinowich, Interferometer Project Manager for Keck Observatory. “This team did something that we seldom see in terms of using all the available statistical techniques to evaluate the combined data set. They were able to dramatically reduce all the error bars, by a factor of 10, to really understand the amount of dust around these systems.”

    The Keck Interferometer was built to seek out this dust, and to ultimately help select targets for future NASA Earth-like planet-finding missions.

    Like planets, dust near a star is also hard to see. Interferometry is a high-resolution imaging technique that can be used to block out a star’s light, making the region nearby easier to observe. Light waves from the precise location of a star, collected separately by the twin 10-meter Keck Observatory telescopes, are combined and canceled out in a process called nulling.

    “If you don’t turn off the star, you are blinded and can’t see dust or planets,” said co-author Rafael Millan-Gabet of the NASA Exoplanet Science Institute at the California Institute of Technology in Pasadena, California, who led the Keck Interferometer’s science operations system.

    “Dust is a double-edged sword when it comes to imaging distant planets,” explained Bertrand Mennesson, lead author of the study who works at NASA’s Jet Propulsion Laboratory, Pasadena, California. “The presence of dust is a signpost for the planet formation process, but too much dust can block our view.” Mennesson has been involved in the Keck Interferometer project since its inception more than 10 years ago, both as a scientist and as the optics lead for one of its instruments.

    “Using the two Keck telescopes in concert and interfering their light beams, it is possible to distinguish astronomical objects much closer to each other than when using a single Keck telescope,” Mennesson said. “However, there is an additional difficulty when searching for warm dust in the immediate stellar environment: it generally contributes very little emission compared to the star, and that is when nulling interferometry comes into play.”
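    A sketch of the nulling idea: light from the on-axis star is recombined with a half-wave phase shift so it cancels exactly, while an off-axis source at angle θ picks up an extra path delay across the baseline and partially survives. In an idealized two-element nuller the transmission is sin²(πBθ/λ). The baseline and wavelength below are illustrative assumptions, not the instrument's documented values:

```python
import math

baseline = 85.0      # separation of the two apertures in meters (illustrative)
wavelength = 10e-6   # mid-infrared observing wavelength in meters (illustrative)

def null_transmission(theta_rad):
    """Fraction of light transmitted by an idealized two-element nuller
    for a source at off-axis angle theta; the star at theta = 0 is cancelled."""
    return math.sin(math.pi * baseline * theta_rad / wavelength) ** 2

on_axis = null_transmission(0.0)        # starlight: fully nulled
mas = 1e-3 / 206265                     # one milliarcsecond in radians
off_axis = null_transmission(10 * mas)  # emission 10 mas from the star largely survives
print(on_axis, off_axis)
```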

    In addition to requiring high performance from a large number of hardware and software subsystems, the nuller mode requires them to work smoothly together as a single, integrated system, according to Mark Colavita, the Keck Interferometer System Architect. “The nulling mode of the interferometer uses starlight across a wide range of wavelengths, including visible light for the adaptive optics to correct the telescope wave-fronts, near-infrared light to stabilize the path-lengths, and mid-infrared light for the nulling science measurements.”

    Planet Hunting

    Ground- and space-based telescopes have already captured images of exoplanets, or planets orbiting stars beyond our sun. These early images, which show giant planets in cool orbits far from the glow of their stars, represent a huge technological leap. The glare from stars can overwhelm the light of planets, like a firefly buzzing across the sun. So, researchers have developed complex instruments to block the starlight, allowing information about a planet’s shine to be obtained.

    The next challenge is to image smaller planets in the “habitable” zone around stars where possible life-bearing Earth-like planets outside the solar system could reside. Such a lofty goal may take decades, but researchers are already on the path to get there, developing new instruments and analyzing the dust kicked up around stars to better understand how to snap crisp planetary portraits. Scientists want to find out: Which stars have the most dust? And how dusty are the habitable zones of sun-like stars?

    In the latest study, nearly 50 mature, sun-like stars were analyzed with high precision to search for warm, room-temperature dust in their habitable zones. Roughly half of the stars selected for the study had previously shown no signs of cool dust circling in their outer reaches. This outer dust is easier to see than the inner, warm dust due to its greater distance from the star. Of this first group of stars, none were found to host the warm dust, making them good targets for planet imaging, and a good indication that other relatively dust-free stars are out there.

    The other stars in the study were already known to have significant amounts of distant cold dust orbiting them. In this group, many of the stars were found to also have the room-temperature dust. This is the first time a link between the cold and warm dust has been established. In other words, if a star is observed to have a cold belt of dust, astronomers can make an educated guess that its warm habitable zone is also riddled with dust, making it a poor target for imaging smaller, Earth-like planets – exo-Earths – in that zone.

    “We want to avoid planets that are buried in dust,” said Mennesson.

    Like a busy construction site, the process of building planets is messy. It’s common for young, developing star systems to be covered in dust. Proto-planets collide, scattering dust. But eventually, the chaos settles and the dust clears – except in some older stars. Why are these mature stars still laden with warm dust in their habitable zones?

    The newfound link between cold and warm dust belts helps answer this question.

    “The outer belt is somehow feeding material into the inner warm belt,” said Geoff Bryden of JPL, a co-author of the study. “This transport of material could be accomplished as dust smoothly flows inward, or there could be larger cometary bodies thrown directly into the inner system.”

    The Keck Interferometer began construction in 1997, and finished its mission in 2012. It was developed by JPL, the Keck Observatory and the NASA Exoplanet Science Institute at Caltech. It was funded by NASA as a part of the Exoplanet Exploration Program with telescope and instrument operations managed by the W. M. Keck Observatory.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    To advance the frontiers of astronomy and share our discoveries with the world.

    The W. M. Keck Observatory operates the largest, most scientifically productive telescopes on Earth. The two, 10-meter optical/infrared telescopes on the summit of Mauna Kea on the Island of Hawaii feature a suite of advanced instruments including imagers, multi-object spectrographs, high-resolution spectrographs, integral-field spectrometer and world-leading laser guide star adaptive optics systems. Keck Observatory is a private 501(c) 3 non-profit organization and a scientific partnership of the California Institute of Technology, the University of California and NASA.

    Today Keck Observatory is supported by both public funding sources and private philanthropy. As a 501(c)3, the organization is managed by the California Association for Research in Astronomy (CARA), whose Board of Directors includes representatives from the California Institute of Technology and the University of California, with liaisons to the board from NASA and the Keck Foundation.
    Keck UCal

    Keck NASA

    Keck Caltech

  • richardmitnick 2:15 pm on April 8, 2014 Permalink | Reply
    Tags: , Interferometry, ,   

    From Symmetry: “Searching for the holographic universe” 

    April 08, 2014
    Leah Hesla, Fermilab

    Physicist Aaron Chou keeps the Holometer experiment—which looks for a phenomenon whose implications border on the unreal—grounded in the realities of day-to-day operations.

    The beauty of the small operation—the mom-and-pop restaurant or the do-it-yourself home repair—is that pragmatism begets creativity. The industrious individual who makes do with limited resources is compelled onto paths of ingenuity, inventing rather than following rules to address the project’s peculiarities.

    As project manager for the Holometer experiment at Fermilab, physicist Aaron Chou runs a show that, though grandiose in goal, is remarkably humble in setup. Operated out of a trailer by a small team with a small budget, it has the feel more of a scrappy startup than of an undertaking that could make humanity completely rethink our universe.

    During an exceptionally snowy winter, Aaron Chou and Vanderbilt University student Brittany Kamai make their way to the Holometer’s modest home base, a relatively isolated trailer on the Fermilab prairie. Photo by: Reidar Hahn, Fermilab

    The experiment is based on the proposition that our familiar, three-dimensional universe is a manifestation of a two-dimensional, digitized space-time. In other words, all that we see around us is no more than a hologram of a more fundamental, lower-dimensional reality.

    If this were the case, then space-time would not be smooth; instead, if you zoomed in on it far enough, you would begin to see the smallest quantum bits—much as a digital photo eventually reveals its fundamental pixels.

    In 2009, the GEO600 experiment, which searches for gravitational waves emanating from black holes, was plagued by unaccountable noise. This noise could, in theory, be a telltale sign of the universe’s smallest quantum bits. The Holometer experiment seeks to measure space-time with far more precision than any experiment before—and potentially observe effects from those fundamental bits.

    Such an endeavor is thrilling—but also risky. Discovery would change the most basic assumptions we make about the universe. But there also might not be any holographic noise to find. So for Chou, managing the Holometer means building and operating the apparatus on the cheap—not shoddily, but with utmost economy.

    Thus Chou and his team take every opportunity to make rather than purchase, to pick up rather than wait for delivery, to seize the opportunity and take that measurement when all the right people are available.

    Some of the Holometer’s parts are ordered custom, and some are homemade. Chou makes sure all of them work together in harmony.
    Photo by: Reidar Hahn, Fermilab

    “It’s kind of like solving a Rubik’s cube,” Chou says. “You have an overview of every aspect of the measurement that you’re trying to make. You have to be able to tell the instant something doesn’t look right, and tell that it conflicts with some other assumption you had. And the instant you have a conflict, you have to figure out a way to resolve it. It’s a lot of fun.”

    Chou is one of the experiment’s 1.5 full-time staff members; a complement of students rounds out a team of 10. Although Chou is essentially the overseer, he runs the experiment from down in the trenches.

    Aaron Chou, project manager for Fermilab’s Holometer, tests the experiment’s instrumentation.
    Photo by: Reidar Hahn, Fermilab

    The Holometer experimental area, for example, is a couple of aboveground, dirt-covered tunnels whose walls don’t altogether keep out the water after a heavy rain. So any time the area needs the attention of a wet-dry vacuum, he and his team are down on the ground, cheerfully squeegeeing, mopping and vacuuming away.

    Research takes place as much in the trailer as in the Holometer tunnel, where the instrument itself sits.
    Photo by: Reidar Hahn, Fermilab

    “That’s why I wear such shabby clothes,” he says. “This is not the type of experiment where you sit behind the computer and analyze data or control things remotely all day long. It’s really crawling-around-on-the-floor kind of work, which I actually find to be kind of a relief, because I spent more than a decade sitting in front of a computer for more well-established experiments where the installation took 10 years and most of the resulting experiment is done from behind a keyboard.”

    As a graduate student at Stanford University, Chou worked on the SLD experiment at SLAC National Accelerator Laboratory, writing software to help look for parity violation in Z bosons. As a Fermilab postdoc on the Pierre Auger experiment, he analyzed data on ultra-high-energy cosmic rays.

    Now Chou and his team are down in the dirt, hunting for the universe’s quantum bits. In length terms, these bits are expected to be on the smallest scale of the universe, the Planck scale: 1.6 × 10⁻³⁵ meters. That’s roughly 10 trillion trillion times smaller than an atom; no existing instrument can directly probe objects that small. If humanity could build a particle collider the size of the Milky Way, we might be able to investigate Planck-scale bits directly.
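    The "10 trillion trillion" comparison follows from the numbers: dividing a typical atomic radius (here an assumed round value of 10⁻¹⁰ meters) by the Planck length gives a ratio on the order of 10²⁵.

```python
planck_length = 1.6e-35  # Planck length in meters, from the text
atom_radius = 1e-10      # typical atomic size in meters (assumed round value)

ratio = atom_radius / planck_length
print(f"{ratio:.1e}")    # ~6e24, i.e. roughly ten trillion trillion times smaller
```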

    The Holometer instead will look for a jitter arising from the cosmos’ minuscule quanta. In the experiment’s dimly lit tunnels, the team built two interferometers, L-shaped configurations of tubes. Beginning at the L’s vertex, a laser beam travels down each of the L’s 40-meter arms simultaneously, bounces off the mirrors at the ends and recombines at the starting point. Since the laser beam’s paths down each arm of the L are the same length, absent a holographic jitter, the beam should cancel itself out as it recombines. If it doesn’t, it could be evidence of the jitter, a disruption in the laser beam’s flight.

    The light path through a Michelson interferometer. The two light rays with a common source combine at the half-silvered mirror to reach the detector. They may either interfere constructively (strengthening in intensity) if their light waves arrive in phase, or interfere destructively (weakening in intensity) if they arrive out of phase, depending on the exact distances between the three mirrors. No image credit.

    And why are there two interferometers? A real holographic signal would appear in both instruments at once, so the brightening and dimming of the two beam spots must match.

    “Real signals have to be in sync,” Chou says. “Random fluctuations won’t be heard by both instruments.”
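    The cross-correlation trick behind this can be sketched numerically: a small signal common to both interferometers survives averaging, while noise independent to each instrument averages toward zero. The amplitudes below are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # number of samples to average over

common = 0.1 * rng.standard_normal(n)  # tiny jitter seen by both instruments
out1 = common + rng.standard_normal(n) # interferometer 1: jitter plus its own noise
out2 = common + rng.standard_normal(n) # interferometer 2: jitter plus its own noise

# Cross-correlating the two outputs converges to the jitter variance (0.01 here),
# while the product of two independent noise streams averages toward zero.
cross = np.mean(out1 * out2)
independent = np.mean(rng.standard_normal(n) * rng.standard_normal(n))
print(cross, independent)
```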

    Should the humble Holometer find a jitter when it looks for the signal—researchers will soon begin the initial search and expect results by 2015—the reward to physics would be extraordinarily high, especially given the scrimping behind the experiment and the fact that no one had to build an impossibly high-energy, Milky Way-sized collider. The data would support the idea that the universe we see around us is only a hologram. It would also help bring together the two thus-far-irreconcilable principles of quantum mechanics and relativity.

    “Right now, so little experimental data exists about this high-energy scale that theorists are unable to construct any meaningful models other than those based on speculation,” Chou says. “Our experiment is really a mission of exploration—to obtain data about an extremely high-energy scale that is otherwise inaccessible.”

    In the Holometer trailer, University of Michigan scientist Dick Gustafson checks a signal from the Holometer during a test.
    Photo by: Reidar Hahn, Fermilab

    What’s more, when the Holometer is up and running, it will be able to look for other phenomena that manifest themselves in the form of high-frequency gravitational waves, including topological defects in our cosmos—areas of tension between large regions in space-time that were formed by the big bang.

    “Whenever you design a new apparatus, what you’re doing is building something that’s more sensitive to some aspect of nature than anything that has ever been built before,” Chou says. “We may discover evidence of holographic jitter. But even if we don’t, if we’re smart about how we use our newly built apparatus, we may still be able to discover new aspects of our universe.”

    See the full article here.

    Symmetry is a joint Fermilab/SLAC publication.

    ScienceSprings is powered by MAINGEAR computers
