Tagged: Optics & Photonics

  • richardmitnick 4:11 pm on October 8, 2018
    Tags: Laser guide star systems, Laser Guide Stars Measure Geomagnetism, Optics & Photonics

    From Optics & Photonics: “Laser Guide Stars Measure Geomagnetism” 

    08 October 2018
    Stewart Wills

    Polar mesospheric clouds. [Image: NASA]

    Laser-created “guide stars” form a key part of the adaptive-optics (AO) techniques that have revolutionized astronomy, by setting up ways for ground-based telescopes to see through atmospheric distortions.

    ESO VLT 4 lasers on Yepun

    ESO VLT AOF new laser at Cerro Paranal, with an elevation of 2,635 metres (8,645 ft) above sea level

    In work published this year, several research groups have found another use for these laser-induced artificial lanterns: pinning down the shape and intensity of Earth’s magnetic field at a scientifically crucial—and, previously, largely inaccessible—range.

    Wobbling beacons

    In AO techniques using guide stars, a powerful laser adjacent to a ground-based telescope zaps a small piece of Earth’s middle atmosphere, or mesosphere, 85 to 100 km above the telescope. The consequent excitation of sodium atoms in the mesosphere creates an artificial point of light at a precise, known location and elevation. That winking light source, in turn, gives the telescope operators on the ground a known point to grab onto, and lets them computationally cancel out (via wavefront-shaping techniques) the turbulence in the lower atmosphere. The result? A sharper view of the stars beyond.

    In 2011, scientists led by James Higbie of Bucknell University, USA, suggested that laser guide stars for AO might also function as tiny, remote magnetometers for measuring Earth’s mesospheric magnetic field [PNAS]. The approach would work through measurements of the precession, or wobbling, of the laser-excited sodium atoms in the magnetic field.

    More specifically, as the circularly polarized laser beam excites the sodium atoms, it also spin-polarizes them, which causes them to wobble like spinning tops in the magnetic field. The frequency of that precession, which depends on the local field strength, can be read via changes in the fluorescence signal captured by a ground-based detector.
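The frequency in question is the Larmor precession frequency, which scales linearly with the local field strength. A minimal back-of-the-envelope sketch, assuming the g-factor of sodium's F = 2 ground state (g_F = 1/2) and a typical mid-latitude geomagnetic field of roughly 50 microtesla (both values are illustrative assumptions, not figures from the article):

```python
# Larmor precession frequency of spin-polarized sodium in a magnetic field.
# g_F = 1/2 for the Na F = 2 ground state; the 50 uT field is an assumed,
# illustrative value for the mesospheric geomagnetic field.

MU_B_OVER_H = 13.996e9  # Bohr magneton / Planck constant, in Hz per tesla
G_F = 0.5               # Lande g-factor of the sodium F = 2 ground state

def larmor_frequency_hz(b_field_tesla: float) -> float:
    """Precession frequency nu = g_F * (mu_B / h) * B."""
    return G_F * MU_B_OVER_H * b_field_tesla

print(larmor_frequency_hz(50e-6))  # ~3.5e5 Hz
```

At mesospheric field strengths, the precession therefore lands in the few-hundred-kilohertz range, which is the modulation the ground-based detector looks for in the fluorescence signal.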

    Such measurements, if they could be made to work, would prove a nice win for geophysics. That’s because the mesosphere occupies a difficult-to-access middle zone between space-based and ground-based measurements of the magnetic field. Yet understanding that elusive part of the field is crucial to a complete picture of overall geomagnetism, for scientific applications ranging from plate tectonics to ocean circulation to space weather.

    In May of this year, a group of U.S. researchers finally put the idea of using wobbling laser guide stars as magnetometers to the test (Journal of Geophysical Research). The team fired a 1.33-W laser to excite mesospheric sodium atoms, and then gathered the backscattered guide star light with a 1.55-m-aperture telescope. By measuring the precession, they were able to obtain a value of the field consistent with several models “within a fraction of a percent.” But the method’s sensitivity, at 162 nT/Hz½, fell considerably short of the level of around 1 nT/Hz½ thought to be necessary for useful measurements of mesospheric magnetic-field variations.

    In work published at the end of September, scientists from Germany, Italy, Canada and the United States reported a significant improvement on that sensitivity (Nature Communications). They aimed a continuous-wave laser from the European Southern Observatory’s laser guide star unit, adjacent to the William Herschel Telescope on La Palma, Canary Islands, at the mesosphere, delivering roughly 2 W of laser power to the sky.

    Glistening against the awesome backdrop of the night sky above ESO’s Paranal Observatory, four laser beams project out into the darkness from Unit Telescope 4 UT4 of the VLT.

    The team then captured the received light with a 40-cm-aperture telescope mounted on the AO system’s receiver control unit. The received light, after passing through a photomultiplier tube, was then sent to a digital signal-processing stack for backing out the precession from the fluorescence signal, and for tuning the laser to reduce scintillation noise from the atmosphere.

    A team of scientists from Germany, Italy, Canada and the U.S. used a ground-based laser developed for astronomical adaptive optics, equipped with an acousto-optical modulator (AOM), to excite sodium atoms in the mesosphere. Ground-based measurement of the optical signal was used to estimate the precession of the spin-polarized sodium atoms and, thus, of the Earth’s magnetic field in that area. [Image: F.P. Bustos et al., Nature Communications]

    Boosting sensitivity

    By focusing on a narrower resonance frequency in the precession signal, the international team was able to achieve a sensitivity of 0.28 mG/Hz½ (28 nT/Hz½)—“an order of magnitude better sensitivity” than the U.S. team’s result earlier in the year, albeit still short of the ultimate 1-nT/Hz½ target. The group suggested that the results could be improved still further; using higher laser powers, for example, would allow researchers to boost the number of atoms interrogated by the method, sharpening sensitivity.
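As a sanity check on the units and the remaining gap: 1 gauss is 10⁻⁴ tesla, so 0.28 mG/Hz½ is indeed 28 nT/Hz½. And with the usual white-noise assumption (measurement uncertainty falling as the square root of integration time, an assumption on my part rather than a claim from the paper), one can estimate how long such a magnetometer must average to reach a given precision:

```python
# Unit conversion and averaging-time estimate for the reported sensitivities.
# Assumes white-noise scaling: uncertainty after t seconds ~ sensitivity / sqrt(t).

def milligauss_to_nanotesla(mg: float) -> float:
    # 1 gauss = 1e-4 tesla, so 1 mG = 1e-7 T = 100 nT
    return mg * 100.0

def integration_time_s(sensitivity_nt_rthz: float, target_nt: float) -> float:
    return (sensitivity_nt_rthz / target_nt) ** 2

print(milligauss_to_nanotesla(0.28))  # ~28, matching the paper's nT figure
print(integration_time_s(28.0, 1.0))  # 784.0 seconds
```

At 28 nT/Hz½, averaging down to the 1-nT level would take on the order of 13 minutes of integration, which is why further sensitivity gains matter.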

    The researchers also noted an interesting side-effect of the recent work. By digging into the details of resonance frequencies tied to the excited sodium atoms’ spin-relaxation rate, they were able to suss out information on the rates of atomic collisions in the mesosphere. Getting such quantitative information on collisional dynamics, the team wrote, is “important for the optimization of sodium laser guide stars and mesospheric magnetometers”—and could thus further improve the potential usefulness of these beacons for astronomy and geophysics alike.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Optics & Photonics News (OPN) is The Optical Society’s monthly news magazine. It provides in-depth coverage of recent developments in the field of optics and offers busy professionals the tools they need to succeed in the optics industry, as well as informative pieces on a variety of topics such as science and society, education, technology and business. OPN strives to make the various facets of this diverse field accessible to researchers, engineers, businesspeople and students. Contributors include scientists and journalists who specialize in the field of optics. We welcome your submissions.

     
  • richardmitnick 2:16 pm on April 20, 2018
    Tags: Optics & Photonics, Tracking Entanglement in a Quantum Simulator

    From Optics & Photonics: “Tracking Entanglement in a Quantum Simulator” 

    19 April 2018
    Stewart Wills

    In an experiment involving a linear array of ions, the IQOQI Innsbruck team tracked the evolution of entanglement as the system evolved in time. [Image: IQOQI Innsbruck/Harald Ritsch]

    Using a system of 20 trapped calcium ions, physicists in Austria have demonstrated what they describe as the largest register of entangled, individually controllable quantum bits (qubits) shown to date [Phys. Rev. X]. The team was able both to encode a complex initial state in the 20-ion ensemble and to demonstrate entanglement among those individual qubits at various stages of the system’s evolution. Moreover, by using both analytical and numerical “entanglement witnesses” developed in the work, the team found that it could detect not only entanglement among pairs of neighboring ions, but “genuine multipartite entanglement” among neighboring groups of three, four and even five ions.

    The result, according to Ben Lanyon, a co-leader of the research at the Institute of Quantum Optics and Quantum Information (IQOQI), Innsbruck, provides evidence that ion-based quantum simulators—which some view as a way station on the road to full-fledged quantum computers—do indeed provide a window into otherwise inscrutable quantum behavior. “We’re building confidence,” he says, “that we are accessing complex quantum states” in these simulators.

    Toward individual control

    Quantum simulators use systems of atoms, ions or other qubits to model quantum behavior beyond the abilities of classical computing algorithms. They’ve come a long way in the past year, with several groups successfully building simulators involving 50 or more qubits. But Lanyon says these experiments with larger ensembles haven’t yet demonstrated the level of control of each individual particle in the system, and of particle entanglement, shown in the 20-qubit Innsbruck experiments.

    To achieve that control, the IQOQI team employed a radio-frequency Paul trap to set up a register of 20 calcium ions in a linear array. Next, the researchers used laser fields to flip every second qubit, preparing a specific, well-defined initial state for the system. They then let the system evolve, quenched that evolution at various time steps, and used quantum-state-dependent resonance fluorescence imaging, via a single-ion-resolving CCD camera, to measure the state of each qubit.

    In the experiment, the Austrian group (a) captured an array of 20 ions in a Paul trap, (b and c) used laser fields to initialize the ions, (d) allowed the system to evolve in time to an unknown quantum state, and then (e and f) quenched the evolution and used resonance fluorescence imaging to read out the quantum state of each qubit. [Image: N. Friis et al., “Observation of entangled states of a fully controlled 20-qubit system,” Phys. Rev. X, doi: 10.1103/PhysRevX.8.021012 (link is above)]

    From twins to quintuplets

    The IQOQI team next dug into measuring the level of entanglement among the qubits. Estimating the full quantum state density matrix for the 20-qubit ensemble would have required assessing billions of measurement bases. So the researchers focused on the set of bases required to reconstruct the density matrices of all neighboring 3-qubit groups in the linear array—a much more tractable problem involving only 27 bases.
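The scaling that makes this shortcut necessary is easy to see: full state tomography with local Pauli measurements requires 3^n measurement settings for n qubits. A quick sketch (the choice of local Pauli bases is the standard counting for this kind of tomography, though the paper's exact accounting may differ in detail):

```python
# Number of local Pauli measurement settings for n-qubit state tomography:
# each qubit is measured in one of three bases (X, Y or Z), so 3^n settings.

def pauli_settings(n_qubits: int) -> int:
    return 3 ** n_qubits

print(pauli_settings(20))  # 3486784401 -- billions of settings for the full register
print(pauli_settings(3))   # 27 -- enough to reconstruct any neighboring triplet
```

Reconstructing every neighboring 3-qubit group thus costs 27 settings rather than roughly 3.5 billion, which is what made the entanglement analysis tractable.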

    Using those density matrices, and a parameter called the genuine multipartite negativity, they were able to establish that all adjacent pairs of the linear array became entangled very quickly as the system evolved. But what about multipartite entanglement, among larger numbers of qubits? To assess that, Lanyon and his co-lead investigator, Rainer Blatt, turned for help to two groups of theorists, Marcus Huber’s team at IQOQI’s Vienna branch, and Martin Plenio’s research group at the University of Ulm, Germany.

    The theoretical groups devised two complementary entanglement witnesses that could be used to establish higher orders of entanglement. Huber’s group offered a relatively simple, analytical method that was effective at inferring entangled triplets from the two-qubit observables already established by the Innsbruck team. For detecting multipartite entanglement among collections of four and five qubits, Plenio’s team provided a computationally intense, “brute force” numerical search technique.

    Putting all of the techniques together, the Innsbruck team was able to establish that adjacent pairs of particles in the linear system quickly became entangled, and that quantum correlations built up among triplets, most quadruplets and some quintuplets as the system evolved in time.

    Accessing true quantum behavior

    Establishing multipartite entanglement for groups of more than five qubits outstripped the computational power that the team could bring to bear, and “remains an open challenge,” according to the paper. But even that five-qubit limit, according to Lanyon, might be a feature rather than a bug, since it underscores that the simulator is indeed demonstrating quantum behavior beyond the reach of classical algorithms.

    “What we’re interested in doing with these controlled quantum simulators is accessing quantum dynamics that we can’t otherwise access using conventional means,” says Lanyon. “So we’re kind of encouraged that we’re seeing this behavior—that our simulator goes beyond the point where we can follow it with existing methods at the moment.” As simulator systems become larger and larger, he maintains, it’s important to have methods to verify that the simulator is “behaving the way we think it is”; verifying lower-order entanglement is one way to “get feedback in the lab that we’re doing the right thing.”

    Meanwhile, Lanyon notes that while the techniques don’t scale well in order of entanglement, they do scale efficiently in system size. That opens up the prospect for practical simulators involving hundreds or even thousands of particles. And the paper concludes that the ability to individually control qubit–qubit interactions means that the system “has the capability to perform universal quantum simulation and quantum computation.”

    See the full article here.


     
  • richardmitnick 3:35 pm on December 28, 2017
    Tags: Femtosecond X-ray lasers, Inelastic X-ray scattering, Optics & Photonics

    From Optics & Photonics: “X-Ray Studies Probe Water’s Elusive Properties” 

    28 December 2017
    Stewart Wills

    Unlike most substances, liquid water is denser than its solid phase, ice. [Image: Stockholm University]

    In two different X-ray investigations, researchers have dug into some of the exotic properties of that most familiar of substances—water.

    In one study, researchers from Sweden, Japan and South Korea used a femtosecond X-ray laser to investigate the behavior of evaporatively supercooled liquid water, and to confirm the long-suspected view that water at low temperatures can exist in two different liquid phases (Science). In the other, a U.S.-Japanese team used high-resolution inelastic X-ray scattering to probe the dynamics of water molecules and how the liquid’s hydrogen bonds contribute to its unusual characteristics (Science Advances).

    Burst pipes and floating cubes

    Anyone who has confronted a burst water pipe on a frozen winter morning has firsthand knowledge of one of H₂O’s unusual characteristics. Whereas most substances increase in density as they go from a liquid to a solid state, water reaches its maximum density at 4°C, above its nominal freezing point of 0°C. That’s also the reason ice cubes float at the top of your water glass rather than sinking to the bottom.

    Grappling with this anomalous behavior, a research team at Boston University suggested around 25 years ago, based on computer simulations, that in a metastable, supercooled state, water might actually coexist in two liquid phases—a low-density liquid and a high-density liquid. Those two phases, the researchers proposed, merged into a single phase at a critical point in water’s phase diagram at around –44°C (analogous to the better-known critical point at a higher temperature between water’s liquid and gas phases).

    Experiments using femtosecond X-ray free-electron lasers illuminated fluctuations between two different phases of liquid water—a high-density liquid (red) and a low-density liquid (blue)—as a function of temperature in the supercooled regime. [Image: Stockholm University]

    Actually getting liquid water to that frigid point has, however, seemed a bit of a pipe dream. While very pure liquid water can be rapidly supercooled to temperatures moderately below 0°C relatively easily, the proposed critical point lies far below that temperature range, in what researchers have dubbed a “no-man’s land” in which ice crystallizes much faster than the timescale of conventional lab measurements.

    Leveraging ultrafast lasers

    To move past that barrier, a research team led by Anders Nilsson of Stockholm University, Sweden, turned to the rapid timescales enabled by femtosecond X-ray free-electron lasers (XFELs). At XFEL facilities in Korea and Japan [un-named], the team sent a stream of tiny water droplets (approximately 14 microns in diameter) into a vacuum chamber, and fired the XFEL at the droplets at varying distances from the water-dispensing nozzle to obtain ultrafast X-ray scattering data.

    The tiny size of the droplets meant that as they traveled through the vacuum they rapidly cooled by evaporation—with the amount of cooling related, by a well-established formula, to the time they spent in vacuum. Thus, by taking X-ray measurements at varying distances from the nozzle, the researchers could examine the structural behavior of the liquid water at multiple temperatures in the deep-supercooling regime, near the hypothesized critical point. “We were able to X-ray unimaginably fast before the ice froze,” Nilsson said in a press release, “and could observe how it fluctuated” between the two hypothesized metastable phases of liquid water.
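The distance-to-temperature mapping starts from a simple time-of-flight relation, onto which the evaporative-cooling formula is then applied. A sketch of that first step, with an assumed droplet speed (the article quotes no value, so the 10 m/s used here is purely illustrative):

```python
# Mapping probe distance from the nozzle to time spent cooling in vacuum.
# The droplet speed is an assumed, illustrative value, not from the article;
# the evaporative-cooling model that converts time to temperature is omitted.

DROPLET_SPEED_M_S = 10.0  # assumption, for illustration only

def flight_time_ms(distance_from_nozzle_m: float) -> float:
    """Time of flight (ms) for a droplet probed at the given distance."""
    return distance_from_nozzle_m / DROPLET_SPEED_M_S * 1e3

print(flight_time_ms(0.05))  # probing 5 cm downstream: ~5 ms of cooling
```

Firing the XFEL at a sequence of such distances then samples the droplets at a sequence of progressively colder temperatures.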

    The experiments allowed the team to flesh out the phase diagram of liquid water in a supercooled region previously thought to be inaccessible to experiment. And the researchers believe that the use of femtosecond XFELs to probe thermodynamic functions and structural changes at extreme states “can be generalized to many supercooled liquids.”

    Illuminating water’s dynamics

    A team led by scientists at the U.S. Oak Ridge National Laboratory used inelastic X-ray scattering to visualize and quantify the movement of water molecules in space and time. [Image: Jason Richards/Oak Ridge National Laboratory, US Dept. of Energy]

    A second set of experiments, from researchers at the U.S. Oak Ridge National Laboratory, the University of Tennessee, and the SPring-8 synchrotron laboratory in Japan, looked at water’s dynamics at room temperature, using inelastic X-ray scattering (IXS).

    SPring-8 synchrotron, located in Hyōgo Prefecture, Japan

    The researchers illuminated these dynamics through a series of experiments in which they trained radiation from the SPring-8 facility’s high-resolution IXS beamline, BL35XU, onto a 2-mm-thick sample of liquid water. Through multiple scattering measurements across a range of momentum and energy-transfer values, the team was able to build a detailed picture of the so-called Van Hove function, which describes the probability of interactions between a molecule and its nearest neighbors as a function of distance and time.
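For reference, the Van Hove function has a standard textbook definition (this is the general form, not a formula quoted in the article): for N molecules with positions $\mathbf{r}_i$,

```latex
G(\mathbf{r}, t) = \frac{1}{N}\left\langle \sum_{i=1}^{N}\sum_{j=1}^{N}
  \delta\!\bigl(\mathbf{r} + \mathbf{r}_i(0) - \mathbf{r}_j(t)\bigr)\right\rangle
```

Roughly speaking, it gives the probability of finding a molecule at distance r and time t, given a molecule at the origin at time zero; at t = 0 it reduces (apart from a self term) to the familiar pair-distribution function, and its decay in time encodes how quickly a molecule's neighborhood rearranges.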

    The team found that water’s hydrogen bonds behave in a highly correlated fashion with respect to one another, which gives liquid water its high stability and explains its viscosity characteristics. And, in a press release, the researchers further speculated that the techniques used here could be extended to studying the dynamics and viscosity of a variety of other liquids. Some of those studies, they suggested, could prove useful in “the development of new types of semiconductor devices with liquid electrolyte insulating layers, better batteries and improved lubricants.”

    Here, the research team was interested in sussing out how water molecules interact in real time, and how the strongly directional hydrogen bonds of water molecules work together to determine properties such as the liquid’s viscosity.

    See the full article here.


     
  • richardmitnick 11:01 am on November 6, 2017
    Tags: Nailing Down Four Fundamental Constants, Optics & Photonics

    From Optics & Photonics: “Nailing Down Four Fundamental Constants” 

    06 November 2017
    Stewart Wills

    A NIST wallet card displays the fundamental constants and physical values that will define the revised system of SI units. [Image: Stoughton/NIST]

    An international task force of metrologists has updated the values of four fundamental constants—Planck’s constant (h), the elementary charge (e), Boltzmann’s constant (k), and Avogadro’s number, NA (Metrologia, doi: 10.1088/1681-7575/aa950a).

    The new values for these constants, which rest on an analysis of state-of-the-art measurements from a worldwide assemblage of metrology labs, won’t, alas, change the morning reading on your bathroom scale. But they’re a big deal for metrologists, as they set up a comprehensive reassessment of the International System of Units (SI), or metric system, slated for November 2018—when the metrology community is expected to redefine all seven basic SI units solely in terms of fundamental constants and invariant properties of atoms.

    Moving away from physical artifacts

    The redefinition of these fundamental constants represents the latest step in a long, slow march away from physical, “artifact-based” SI standards and toward standards based on exact values of fundamental constants. Perhaps the most celebrated physical standard was the platinum-iridium bar located in Paris that, for decades, denoted the precise dimensions of the basic SI unit of length, the meter.

    After a long process—during which the scientific community tried, and failed, to replace the bar with a universally accepted standard based on a wavelength of light—the metrology community eventually simply turned the problem around. Metrologists defined a fundamental constant, the speed of light in vacuum (c), as an exact value (299,792,458 m/s) and used that fundamental constant to define the exact length of a meter. (For more on the standard-meter story, see “Mercury-198 and the Standard Meter,” OPN, September 2017.)
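The logic of that inversion is worth spelling out, because it is the same move now planned for the kilogram: fix the constant's numerical value exactly, and the unit becomes a derived quantity. A minimal sketch:

```python
# An exact-constant definition in miniature: c is fixed by definition, so any
# length is simply the distance light travels in a measured time interval.

C_M_PER_S = 299_792_458  # speed of light in vacuum, exact by definition

def length_m(light_travel_time_s: float) -> float:
    return C_M_PER_S * light_travel_time_s

# One meter is the distance light covers in 1/299,792,458 of a second.
print(length_m(1 / C_M_PER_S))
```

In practice this makes length metrology a matter of time metrology, which is handled superbly by cesium atomic clocks.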

    Dethroning “Le Grand K”

    Metrologists would like to achieve a similar fundamental-constant-based standard for all seven basic SI units: the meter, the second, the mole, the ampere, the kelvin, the candela, and the kilogram. This would enable researchers worldwide to make authoritative measurements using precisely the same standard units anywhere on the planet, and on any scale of measurement.

    The kilogram constitutes a particular thorn in the metrology community’s side; it is the last remaining SI unit that still is defined by a physical artifact—“Le Grand K,” a platinum-iridium cylinder stored in France that has represented the standard kilogram since 1879. In principle, that means that local standards for the kilogram elsewhere in the world must be calibrated directly against that physical original.

    In other cases, standard SI units have been defined by theoretical ideals difficult to realize in practice. For example, temperature has been defined in terms of the triple point of pure water in a sealed glass cell—raising the question of how to make the water sufficiently pure, and of potential measurement inaccuracies as one gets farther and farther from the triple point.

    Hammering down uncertainties

    In principle, defining the SI measurements in terms of the exact value of fundamental constants avoids these local and practical difficulties—but it requires a highly precise, international consensus definition of the values of the constants themselves. The work of creating those consensus values for h, e, k, and NA falls to the Task Group on Fundamental Constants of the international Committee on Data for Science and Technology (CODATA), which periodically reviews fundamental-constant values based on the best available experimental evidence. The team proceeded by collecting measurements from multiple techniques and labs, and applying a range of statistical methods to harmonize the data and minimize uncertainties.

    For the redefinition of Planck’s constant and Avogadro’s number, for example, the CODATA task group relied on a suite of measurements using a so-called Kibble balance and X-ray crystal-density measurements of a specific sphere of ultrapure silicon-28. As a result, the task group was able to hammer down uncertainties in these constants to just four parts per billion.

    Relevant to precise measurements

    The four new constant definitions join three other constants—the speed of light, the hyperfine transition frequency of cesium (ΔνCs), and the luminous efficacy constant (Kcd)—that have previously been exactly defined, and have been used to provide definitions of units such as the meter, the second and the candela. The new definition of Planck’s constant, which has units of kg·m²/s, will be used to provide a worldwide, invariant definition of the kilogram, replacing “Le Grand K”; the new standard Boltzmann’s constant will underlie a constant-based definition of the kelvin temperature unit, superseding the definition based on the triple point of water.

    The task group members stress that the changes to these constants will have little relevance day to day, in the lab or elsewhere. “The whole thing,” said Peter Mohr, a member of the task group who works at the U.S. National Institute of Standards and Technology, “is geared to not have any impact on the average person.” Yet the shift will have considerable relevance to contemporary metrology researchers, whose work increasingly involves measurements at precisions undreamed of in earlier eras.

    For this reason, according to CODATA, while the shift to the full suite of SI units based on the new values of these fundamental constants will be decided in November 2018, its official rollout won’t come until 20 May 2019—“World Metrology Day”—to give the community time to adapt.

    See the full article here.


     
  • richardmitnick 1:22 pm on October 23, 2017
    Tags: ART - Anomalous radiative trapping, Exawatt Center for Extreme Light Studies, Optics & Photonics, QED’s double-edged sword, The ART of gamma-ray creation, Toward a Better Gamma-Ray Source?

    From Optics & Photonics: “Toward a Better Gamma-Ray Source?” 

    23 October 2017
    Stewart Wills

    In the Chalmers team’s concept, electrons and positrons (green) trapped in a petawatt-power laser field (surfaces in red, orange and yellow) are oscillated to produce cascades of high-energy gamma-ray photons (pink). The team’s concept relies on careful control of laser pulse duration and peak power, as well as density of charged particles, to maximize gamma-ray production and energy. [Image: Arkady Gonoskov]

    The advent of lasers of petawatt peak powers, at facilities such as those of the European Extreme Light Infrastructure (ELI), has physicists licking their chops for a previously unavailable, extremely bright source of high-energy gamma-ray photons for new kinds of experiments. But just how “high” can “high-energy” be?

    European Extreme Light Infrastructure (ELI)

    Previous simulations have suggested that as laser peak powers reach lofty petawatt levels, the laser field itself can start to run into fundamental limits. Those limits are tied to strong-field quantum electrodynamic (QED) effects, which can, through complex feedbacks, eventually sap the energy of the laser field driving them. As a result, it’s generally been assumed that efficient gamma-ray production from these new petawatt-peak-power lasers would be limited to energies well under a billion electron volts (GeV).
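To get a feel for the regime in question, consider the intensity obtained by focusing a multi-petawatt pulse to a micrometer-scale spot (the spot size here is an assumed, illustrative value, not one from the article):

```python
# Rough focused-intensity estimate for a petawatt-class pulse.
# The 1-um spot radius is an assumed, illustrative value.

import math

def focused_intensity_w_cm2(peak_power_w: float, spot_radius_m: float) -> float:
    area_cm2 = math.pi * spot_radius_m**2 * 1e4  # convert m^2 to cm^2
    return peak_power_w / area_cm2

# 10 PW into a 1-um-radius spot.
print(focused_intensity_w_cm2(10e15, 1e-6))
```

The result is a few times 10²³ W/cm², deep in the territory where strong-field QED effects such as pair-production cascades come into play.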

    Now, researchers from Sweden, Russia and the United Kingdom have re-crunched the numbers, and suggested that this fundamental limit might not be so fundamental after all (Phys. Rev. X, doi: 10.1103/PhysRevX.7.041003). The team’s modeling suggests that, by tweaking the laser pulse intensity and duration in the right way, it’s possible to tune the system to minimize the energy-depleting effects and maximize the creation of gamma rays. This, says the team, would allow the radiation from the high-power laser to be “converted into a well-collimated flash of GeV photons.”

    Thus far, the scenario, requiring lasers with peak powers on the order of 10 PW, has been proved out only on the computer. But the authors hope to see it verified in practice as such powerful lasers start to come online with the maturing of the ELI and other projects—a development that, they maintain, “could enable a new era of experiments in photonuclear and quark-nuclear physics.”

    QED’s double-edged sword

    One reason for doubts about maximum attainable energy has to do with the previously inaccessible physics of strong-field QED that petawatt-peak-power lasers will suddenly put on the table. On the plus side, the strong fields of 10-PW-plus lasers, interacting with and accelerating particles in an electron–positron plasma, can cause those particles to radiate a large fraction of their energy as energetic gamma-ray photons. That, in turn, has raised considerable anticipation that these soon-to-be-launched high-peak-power lasers could provide a source for high-energy gamma rays for new kinds of experiments.

    But there’s a catch. As the flux of gamma-ray photons produced by these light–matter interactions increases, a significant share of those high-energy photons would themselves interact with the laser field to create a cascade of electron–positron pairs, through the QED process of pair production. The result would be an increasingly dense plasma cloud in the laser field that would rapidly pull energy out of the field itself, quickly erasing its ability to create additional gamma-ray photons and preventing its use as a sustainable gamma-ray source above a certain energy threshold.

    The ART of gamma-ray creation

    The team behind the new research—led by physicist Arkady Gonoskov of Chalmers University of Technology, Sweden, along with colleagues at Chalmers, the Russian Academy of Sciences, Lobachevsky State University in Russia, and the University of Plymouth in the U.K.—sought to get around that limit. To do so, they looked in detail at the interaction of the electron–positron cascade with another process in these high-energy laser fields, so-called anomalous radiative trapping (ART).

    In ART, a complex set of parabolic mirrors focuses 12 laser pulses into a dipole standing wave that traps electrons and positrons. The trapped particles then oscillate in the wave in such a way that they gain substantial energy and have a high probability of emitting a large fraction of that energy in a single gamma-ray photon.

    As with other approaches to gamma-ray creation, the increasing gamma-ray flux from ART leads to a pair-production cascade and a growing plasma cloud of electrons and positrons. But using advanced 3-D QED particle-in-cell (PIC) numerical simulations, the Gonoskov team was able to establish that, at laser powers above around 7 PW, it’s possible to keep that cascade from putting a lid on the laser field’s energy for gamma-ray production.

    The trick, according to the team, is to tune the ART setup’s pulse duration, peak power and initial particle density to maximize the field intensity, and thus the gamma-ray production, just before the plasma effects from the cascade start to reduce the energy of the generated photons. This, according to the researchers, allows “a maximal number of particles to interact with the most intense part of the laser pulses, and emit a large number of high-energy photons.”

    From simulation to reality?

    In their comprehensive PIC simulation, the researchers found that an experiment using 12 laser pulses with a total peak power of 40 PW could result in a well-collimated gamma-ray beam with an energy greater than 2 GeV, and “the unique capability of achieving high peak brilliance in an energy range unachievable for conventional sources.” As such, it could offer “a powerful tool for studying fundamental electromagnetic processes, and will open qualitatively new possibilities for studying photonuclear processes.”

    Putting that promise to the test outside of numerical experiments, of course, must await the full production implementation of petawatt-scale lasers in ELI and elsewhere. In a press release accompanying the study, Gonoskov noted that the team’s concept “is already part of the experimental program proposed for one such facility: the Exawatt Center for Extreme Light Studies in Russia,” currently under construction.

    Exawatt Center for Extreme Light Studies, Russia

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Optics & Photonics News (OPN) is The Optical Society’s monthly news magazine. It provides in-depth coverage of recent developments in the field of optics and offers busy professionals the tools they need to succeed in the optics industry, as well as informative pieces on a variety of topics such as science and society, education, technology and business. OPN strives to make the various facets of this diverse field accessible to researchers, engineers, businesspeople and students. Contributors include scientists and journalists who specialize in the field of optics. We welcome your submissions.

     
  • richardmitnick 2:52 pm on July 16, 2017 Permalink | Reply
    Tags: , , , , How to Weigh a Star, Optics & Photonics   

    From Optics & Photonics: “How to Weigh a Star” 

    Optics & Photonics

    08 June 2017
    Stewart Wills

    Astronomers used the superior angular resolution of the Hubble Space Telescope’s Wide Field Camera 3 to assess the gravitational bending of light from a background star (small object in picture), about 5,000 light years away, by the much closer and brighter white-dwarf star Stein 2051B (larger object), 17 light years away. [Image: NASA, ESA, and K. Sahu (STScI)]

    NASA/ESA Hubble Telescope

    NASA/ESA Hubble WFC3

    With excitement continuing to build over next year’s planned launch of the James Webb Space Telescope, it’s easy to forget that its predecessor—the Hubble Space Telescope (HST), originally put into Earth orbit more than 27 years ago—still has some great science left to do.

    NASA/ESA/CSA Webb Telescope annotated

    The latest evidence: researchers have used the superior angular resolution of the HST’s Wide Field Camera 3 to directly determine, through the gravitational bending of light, the mass of a white-dwarf star 17 light years away (http://science.sciencemag.org/content/early/2017/06/06/science.aal2879).

    Radio galaxies gravitationally lensed by a very large foreground galaxy cluster Hubble

    Stellar lenses and “Einstein rings”

    What is now called “gravitational microlensing” was one of the most famous predictions of Einstein’s general theory of relativity. Under that theory of gravity, massive bodies such as stars actually deform the space around them; as a result, the theory predicts that the spatial warping could deflect light from distant stars around such a body. That prediction, made in 1915, was borne out four years later in a celebrated experiment by the British astronomer Arthur Eddington, who measured the deflection of a background star’s light by the sun’s gravity during a total solar eclipse.

    In a 1936 gloss [Science] on his original theory in the journal Science, Einstein noted “the results of a little calculation which I had made” on the lens-like effects of stars. If two stars were lined up precisely, he suggested, the observer would in principle perceive, because of the bending of the light from the far star by the nearer one, a “luminius [sic] circle” of a predictable angular radius around the closer star—a phenomenon that came to be called an “Einstein ring.”

    Einstein himself blithely noted that “of course, there is no hope of observing this phenomenon directly,” given that the chances of such alignment are remote in themselves, and because the angular deflection of light by a star outside of our solar system “will defy the resolving power of our instruments.” The most one could hope to observe, he wrote, would be an increase in the apparent brightness of the closer of the two stars.

    The gravity of a white dwarf star warps space and bends the light of a distant star behind it. [Image: NASA, ESA, and A. Feild (STScI)]

    Subtle signal

    Some eighty years later, scientists did have an instrument with potentially sufficient resolving power, the HST. But they still needed to find a pair of stars with the right alignment at the right time.

    To do so, researchers in the United States, Canada, and the United Kingdom—led by Kailash Sahu of the Space Telescope Science Institute (STScI) in Baltimore, Md., USA—computationally scoured a catalog of more than 5,000 nearby stars. They looked for candidates that had particularly rapid apparent motions in the sky, and that thus might have a better shot at lining up in the right way with a more distant star. They settled on the white-dwarf star called Stein 2051B, around 17 light years from Earth. They then observed Stein 2051B (using the HST’s Wide Field Camera 3) seven times over the course of two years to assess its gravitational effect on light from a specific, much more distant background star around 5,000 light years away.

    Even with the HST’s superior resolving power, that effect was tough to see; Stein 2051B appears about 400 times brighter to an Earth observer than the background star, and the lensing effect was expected to be around three orders of magnitude smaller than the one observed by Eddington during the 1919 eclipse. Nonetheless, the team did indeed manage to tease out the gravitational lensing of the distant star’s light by the closer one—the first measurement of such a deflection by a star other than the sun.

    Weighing a star with light

    The STScI-led team went further, however, using the angular deflection they observed from the light of the background star to get at the mass of the closer Stein 2051B. Under Einstein’s equations, the radius of an Einstein ring relates directly to the square root of the mass of the closer (lensing) object. While, even with the HST’s sensitive instruments, the research team did not directly observe an Einstein ring, they were able to infer the ring’s radius over the series of measurements through the slightly asymmetric apparent offsets of the distant star and their impacts on the closer star’s brightness.

    As a result, the researchers were able to put the mass of Stein 2051B at 0.675±0.051 solar masses—right in line with the theoretical expectations for a white dwarf of its radius. That observation was interesting in itself, since Stein 2051B in particular has attracted some controversy, with suggestions that it might represent an exotic “iron-core” white dwarf with an anomalously large mass. The new observations suggest that the star actually lies right in the white-dwarf mainstream.
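    The mass-from-lensing arithmetic is easy to reproduce. For a point-mass lens, the Einstein ring’s angular radius is θ_E = √[(4GM/c²)·(D_S − D_L)/(D_L·D_S)], where D_L and D_S are the distances to the lens and the background source. The short Python sketch below (rounded constants; the helper name is invented for illustration) evaluates it for the measured mass of Stein 2051B and the two stars’ distances:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
LY = 9.461e15        # light year, m
MAS = math.degrees(1) * 3600e3   # milliarcseconds per radian

def einstein_ring_radius(mass_kg, d_lens_m, d_source_m):
    """Angular Einstein ring radius (radians) for a point-mass lens."""
    return math.sqrt(4 * G * mass_kg / C**2
                     * (d_source_m - d_lens_m) / (d_lens_m * d_source_m))

# Stein 2051B: measured mass 0.675 M_sun, 17 ly away;
# background star roughly 5,000 ly away
theta = einstein_ring_radius(0.675 * M_SUN, 17 * LY, 5000 * LY)
print(f"Einstein ring radius ~ {theta * MAS:.1f} mas")
```

    Plugging in the measured 0.675 solar masses gives a ring radius of a few tens of milliarcseconds, which is why resolving the effect took Hubble-class astrometry. Because θ_E scales as the square root of the mass, inverting the relation turns a measured ring radius directly into a stellar mass.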

    More broadly, the “astrometric lensing” technique that the new research lays out offers a nice additional arrow in the quiver of astronomers seeking to suss out stellar masses across the sky. And the catalog of stars open for such analysis could expand significantly in coming years as new instruments come online, conducting even more massive sky surveys. (The Large Synoptic Survey Telescope, for example, slated to go online in 2023, will undertake a 10-year sky-survey campaign that’s expected to produce a 200-petabyte data set.)

    LSST


    LSST Camera, built at SLAC



    LSST telescope, currently under construction at Cerro Pachón Chile, a 2,682-meter-high mountain in Coquimbo Region, in northern Chile, alongside the existing Gemini South and Southern Astrophysical Research Telescopes.

    “This microlensing method is a very independent and direct way to determine the mass of a star,” team leader Sahu said in a press release. “It’s like placing the star on a scale.”

    See the full article here.


     
  • richardmitnick 9:28 pm on July 15, 2017 Permalink | Reply
    Tags: , , Leveraging existing tools, , Optics & Photonics, Passing through a satellite, QKD-message-encryption technique known as quantum key distribution, QUESS-Quantum Experiment at Space Scale also known as Micius or Mozi,   

    From Optics & Photonics: “Quantum Key Distribution Takes Flight” 

    Optics & Photonics

    June 15, 2017
    Patricia Daukantas

    Three research teams—in Canada, in China, and in Germany—have lifted the message-encryption technique known as quantum key distribution (QKD) out of optical fibers and into literal new heights: an airplane in flight and satellites orbiting Earth.

    Preparing for a proposed Canadian quantum-communications spacecraft, researchers from the University of Waterloo, Ontario, uplinked secure quantum keys from a ground-based transmitter to a receiver that was mounted on an aircraft passing overhead (Quantum Sci. Technol., doi:10.1088/2058-9565/aa701f).

    Thanks to new research from three separate, global teams, QKD may head up toward the sky and stars. [Image: iStock]

    Across the globe, a team from the Chinese Academy of Sciences sent entangled photon pairs from the country’s quantum-technology satellite to two different ground stations (Science, doi:10.1126/science.aan3211).

    And researchers at the Max Planck Institute for the Science of Light, Germany, were able to demonstrate ground-based measurements of quantum states sent by a laser from a satellite 38,000 kilometers above Earth’s surface—using components not even designed for quantum communication (Optica, doi:10.1364/OPTICA.4.000611).

    It’s a bird, it’s a plane, it’s QKD

    Scientists have been investigating QKD as an unbreakable encryption scheme for more than three decades, but transmitting the keys over optical fiber doesn’t work for distances greater than a few hundred kilometers, due to exponentially scaling losses. Short-range QKD has been demonstrated for a prototype handheld device, as well as key transmissions from aircraft to ground bases. However, until the Waterloo experiments, no one had sent quantum keys from a terrestrial transmitter to a moving aircraft, even though the uplink mode requires simpler airborne equipment than the downlink scheme.

    The team from the University of Waterloo’s Institute for Quantum Computing, led by professor Thomas Jennewein and doctoral student Christopher Pugh, used many space-rated electronic components for its QKD receiver in anticipation of use in future satellites. Its ground transmitter, which was situated near a general-aviation airport in southern Ontario, employed two infrared lasers and the standard BB84 photon-polarization protocol (the technique of QKD was proposed by Charles H. Bennett and Gilles Brassard in 1984). The receiver, carried aboard a research aircraft, consisted of a 10-cm-aperture refractive telescope hitched to custom-designed sensors and controllers, including a dichroic mirror that separated the quantum and beacon signals. Both the transmitter and receiver used beacon lasers and tracking mechanisms to help find each other.
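    The BB84 sifting logic at the heart of the protocol is simple enough to simulate in a few lines. The toy model below is deliberately idealized (no noise, loss, or eavesdropper, and the function name is invented for illustration): both parties choose random bases for each photon, and only the positions where the bases happen to match contribute to the sifted key.

```python
import secrets

def bb84_sift(n_photons: int):
    """Toy BB84 sifting: ideal channel, no eavesdropper, no noise."""
    # Alice: random bit + random basis (0 = rectilinear, 1 = diagonal) per photon
    alice_bits  = [secrets.randbelow(2) for _ in range(n_photons)]
    alice_bases = [secrets.randbelow(2) for _ in range(n_photons)]
    # Bob: random measurement basis per photon
    bob_bases   = [secrets.randbelow(2) for _ in range(n_photons)]
    # Matching basis -> Bob recovers Alice's bit; otherwise his result is
    # random, and the position is discarded during public basis comparison.
    bob_bits = [a if ab == bb else secrets.randbelow(2)
                for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    sifted = [(a, b)
              for a, b, ab, bb in zip(alice_bits, bob_bits, alice_bases, bob_bases)
              if ab == bb]
    return sifted

key_pairs = bb84_sift(10_000)
print(len(key_pairs), all(a == b for a, b in key_pairs))
```

    On average half the photons survive sifting; in a real link, the parties would then publicly compare a sample of the sifted bits to estimate the error rate and detect eavesdropping before distilling the final key.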

    The aircraft made 14 passes at approximately 1.6 km above sea level, with line-of-sight distances to the transmitter of 3 to 10 km and the plane flying at up to 259 km/h. The team registered a signal on seven of the 14 passes and extracted a secret key, up to 868 kilobits long, from six of those seven. According to the Canadian team, the equipment maintained milli-degree pointing precision while the receiver was moving at an angular speed simulating that of a low-Earth-orbit spacecraft. The experiments lay a foundation for Canada’s future Quantum Encryption and Science Satellite mission.

    Passing through a satellite

    Last August, China launched the world’s first satellite for quantum optics experiments.

    China’s 600-kilogram quantum satellite contains a crystal that produces entangled photons. Cai Yang/Xinhua via ZUMA Wire.

    Now researchers from multiple Chinese academic institutions have transmitted entangled photons from two widely separated ground stations via the orbiting satellite, officially named Quantum Experiment at Space Scale (QUESS) but informally dubbed Micius or Mozi after an ancient Chinese philosopher.

    The team sent the transmission between two ground stations separated by 1203 km; the path lengths between QUESS and the stations, Lijiang in southwestern China and Delingha in the northern province of Qinghai, varied from 500 to 2000 km. One of the corresponding authors, Jian-Wei Pan of the University of Science and Technology of China, Shanghai, likens the satellite-borne message exchange to seeing a single human hair at a distance of 300 m, or detecting from Earth a single photon that came from a match’s flame on the moon.

    Most of the photon loss and turbulence effects that plague free-space QKD occur in the lowest 10 km of the atmosphere; the majority of the photons’ path is through a near vacuum. The Chinese researchers developed stable, bright two-photon entanglement sources with advanced pointing and tracking for both the satellite and the ground. Analysis of the received signals showed that the photons remained entangled and violated the Bell inequality. The researchers estimated that the link was 12 to 17 orders of magnitude more efficient than an equivalent long-distance connection along optical fibers.
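    That efficiency gap follows directly from how losses scale: fiber attenuation grows exponentially with distance, while a free-space link suffers mostly diffraction spreading plus the thin turbulent layer. A back-of-envelope comparison in Python (assuming a typical 0.2 dB/km telecom-fiber figure; the 70 dB satellite channel loss is an illustrative stand-in, not a number taken from the paper):

```python
import math

FIBER_ATTEN_DB_PER_KM = 0.2   # typical telecom fiber near 1550 nm (assumed)

def transmission_fiber(length_km: float) -> float:
    """Fraction of photons surviving a fiber span of the given length."""
    return 10 ** (-FIBER_ATTEN_DB_PER_KM * length_km / 10)

fiber = transmission_fiber(1200)     # ground-to-ground, 1200 km of fiber
satellite_loss_db = 70               # rough total downlink loss (assumed)
satellite = 10 ** (-satellite_loss_db / 10)

advantage = math.log10(satellite / fiber)
print(f"fiber: {fiber:.1e}, satellite: {satellite:.1e}, "
      f"advantage: ~{advantage:.0f} orders of magnitude")
```

    Twelve hundred kilometers of fiber transmits only about 10⁻²⁴ of the photons, so even a heavily attenuated satellite channel comes out many orders of magnitude ahead, consistent with the team’s 12-to-17-orders estimate.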

    Pan had wanted to experiment with space-borne quantum communications since 2003, when quantum-optics experiments usually happened on a well-shielded optical table. The following year, he participated in a distribution of entangled photon pairs through the noisy ground-level atmosphere over a 13-km path. In 2010 and 2012, the group extended the ground-based teleportation range to 16 km and 100 km. “Through these ground-based feasibility studies, we gradually developed the necessary tool box for the quantum science satellite, for example, high-precision and high-bandwidth acquiring, pointing, and tracking,” Pan says.

    And, according to Pan, the Chinese team will continue its quantum optical experiments at longer distances and also plans preliminary tests of quantum behavior under zero-gravity conditions.

    Leveraging existing tools

    A third set of experiments—conducted by a team led by OSA Member Christoph Marquardt, working in the research group of OSA Fellow Gerd Leuchs at the Max Planck Institute in Erlangen, Germany—built on efforts toward satellite-to-Earth optical communications by the German government, operating in partnership with the firm Tesat-Spacecom GmbH. And, notably, the experiments leveraged components not originally built for quantum communications.

    In the German experiments, coherent beams from a 1065-nm Nd:YAG laser communications terminal on the geostationary Earth orbiting satellite Alphasat I-XL, originally lofted into space in July 2013, were received at a transportable optical terminal then located at the Teide Observatory in Tenerife, Spain.

    ESA/geostationary Earth orbiting satellite Alphasat I-XL

    The terminal was equipped with an adaptive-optics setup that corrected for phase distortions and piped the signal into a single-mode fiber, and used homodyne detection to pull out the quantum signature.

    To show that a true quantum link between satellite and ground, through the turbulent atmosphere, was possible, the Max Planck team used a phase modulator in the satellite equipment to encode a number of binary phase-modulated coherent states on the light field—states known to be compatible with quantum communication. With amplification and processing of the signal, the researchers were able to reliably pick up those quantum states at the ground station, from a beam that had “propagated 38,600 km through Earth’s gravitational potential, as well as its turbulent atmosphere.”
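    The detection step can be pictured as a classical signal-discrimination problem. In the toy model below (not the Tesat terminal’s actual processing chain, and quadrature conventions vary), each binary phase-modulated coherent state |±α⟩ yields a Gaussian-distributed homodyne reading centered at ±√2·α with vacuum-noise variance 1/2, and the receiver decides by sign:

```python
import math
import random

def homodyne_ber(alpha: float, n_trials: int = 100_000) -> float:
    """Toy homodyne discrimination of |+alpha> vs |-alpha> coherent states.
    Convention: quadrature mean = ±sqrt(2)*alpha, vacuum variance = 1/2."""
    sigma = math.sqrt(0.5)
    rng = random.Random(42)          # fixed seed for reproducibility
    errors = 0
    for _ in range(n_trials):
        bit = rng.randint(0, 1)                       # sender's phase bit
        mean = math.sqrt(2) * alpha * (1 if bit else -1)
        x = rng.gauss(mean, sigma)                    # homodyne quadrature sample
        if (x > 0) != bool(bit):                      # decide by the sign of x
            errors += 1
    return errors / n_trials

# Even a weak coherent state (mean photon number |alpha|^2 = 1) is
# distinguished most of the time; brighter states do correspondingly better.
print(homodyne_ber(1.0))
```

    Under these assumptions, a one-photon-average state is already identified correctly roughly 98 percent of the time, which helps explain why the phase-encoded quantum signatures could be recovered even after a 38,600-km trip through the atmosphere.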

    “We were quite surprised by how well the quantum states survived traveling through the atmospheric turbulence to a ground station,” Marquardt noted in a press release. And, he said, the experiments suggested that the light beamed from a satellite to Earth could be “very well suited to be operated as a quantum key distribution network”—a surprising finding, he says, because the system was not built for quantum communication. In light of the work, he predicted that such a network “could be possible” in as little as five years.

    See the full article here.


     