Tagged: Optics & Photonics

  • richardmitnick 1:14 pm on August 30, 2021
    Tags: "Toward a Graphene Laser", , Because graphene is a zero-band-gap material it’s been hard to find a route to graphene lasers., , , Optics & Photonics, Researchers in Singapore have now taken an intriguing step toward graphene lasers.   

    From Optics & Photonics: “Toward a Graphene Laser”

    From Optics & Photonics

    8.30.21
    Stewart Wills

    Artist’s rendering of nanopillars in strained graphene, which can give rise to large pseudo-magnetic fields in the material, changing its optical properties. [Image: Courtesy of D.-H. Kang]

    Graphene—atomically thin, 2D sheets of carbon, whose discovery captured the 2010 Nobel Prize in physics—has some remarkable properties. It’s strong, yet super-light; it’s hard, yet flexible; and it boasts extremely high electron mobility and a brisk photoelectric response. Thus the material is finding its way into designs for a variety of next-gen optoelectronic devices, in components such as modulators, photodetectors, low-loss waveguides and more.

    One possibility, however, has remained elusive. Because graphene is a zero-band-gap material it’s been hard to find a route to graphene lasers.

    Researchers in Singapore have now taken an intriguing step toward that goal, by tweaking the strains in the material’s ultrathin sheets [Nature Communications]. Specifically, the team has shown that engineering a periodic structure of nanoscale pillars into a graphene sheet can produce localized tensile strains that give rise to extremely strong pseudo-magnetic fields. Those fields, in turn, can affect electron transport, open up energy gaps and significantly modify the material’s optical transitions. The result, the authors argue, could presage “a new class of graphene-based optoelectronic devices,” including lasers.

    Faking magnetic fields

    It’s been known for some time that subjecting graphene to an external magnetic field can create energy gaps in the 2D sheets, by affecting charge-carrier motion and relaxation times. The problem has been that pulling off the feat requires rather intense fields—on the order of those produced by laboratory-scale superconducting magnets. That’s hardly a practical approach for creating integrated electronic and photonic devices on the chip scale.

    One alternative explored in the past decade has been creating a pseudo-magnetic field within the graphene itself, through judicious strain engineering. A variety of studies have shown that straining graphene flakes at the nanoscale can generate strong localized gauge fields, effectively giving rise to enormous pseudo-magnetic fields—as beefy as 800 T in one recent study [Science Advances]. Those fields, in turn, should in principle allow development of so-called Landau quantization (discrete energy levels tied to electron motion in a magnetic field) in the graphene, thereby building an energy-gap structure in an otherwise zero-band-gap material.
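
    For readers who want the underlying relations, the textbook picture (not spelled out in the news piece, and with numerical prefactors that depend on convention) is that a slowly varying strain field enters graphene’s low-energy Dirac Hamiltonian as a pseudo-vector potential whose curl is the pseudo-magnetic field, and that field reorganizes the spectrum into unequally spaced Landau levels:

    \[
    \mathbf{A}_{\mathrm{ps}} \simeq \frac{\hbar\beta}{2ea}
    \begin{pmatrix} \varepsilon_{xx}-\varepsilon_{yy} \\ -2\,\varepsilon_{xy} \end{pmatrix},
    \qquad
    B_{\mathrm{ps}} = \partial_x A_{\mathrm{ps},y} - \partial_y A_{\mathrm{ps},x},
    \]
    \[
    E_n = \operatorname{sgn}(n)\, v_F \sqrt{2 e \hbar B_{\mathrm{ps}}\,|n|},
    \qquad n = 0, \pm 1, \pm 2, \dots
    \]

    Here \(\beta \approx 2\text{–}3\) is the electron–phonon (Grüneisen) coupling, \(a \approx 0.142\) nm the carbon–carbon distance and \(v_F \approx 10^6\) m/s the Fermi velocity. The \(\sqrt{B_{\mathrm{ps}}|n|}\) spacing of the levels is what opens usable energy gaps in a material that has no band gap at zero field.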

    Nanopillar array

    Top: Nonuniform tensile strain at the edges of graphene nanopillars induces pseudo-magnetic fields of opposite signs. Bottom: SEM image of the nanopillar array (scale bars: 2 µm in main image, 1 µm in inset). [Image: D.-H. Kang et al., Nature Communications (above).]

    What’s been missing from the picture has been experimental testing of how these giant pseudo-magnetic fields actually affect graphene’s optical properties. To take a step in that direction, Dong-Ho Kang, a postdoctoral fellow on the research team of Dongkuk Nam at Nanyang Technological University, Singapore, and colleagues drilled down into the dynamics of “hot” charge carriers—electrons and holes—in strained graphene.

    To create a platform for the experiments, the team started out with a rectilinear array of nanopillars, chemically etched into an SiO2/Si substrate and then topped with a 20-nm-thick Al2O3 layer. A graphene layer was then wet-transferred onto the top of the nanopillar array, and muscled into conforming to the array topography via capillary forces. Graphene’s exceptional mechanical properties allowed the accumulation of large, localized tensile strains—and, thus, the potential of strong pseudo-magnetic fields—at the nanopillar boundaries.

    The team then used scanning electron microscopy and Raman spectroscopy to characterize the strain distribution in detail. Numerical modeling using the measured strain values suggested that the nanopillar-deformed graphene should host pseudo-magnetic fields as high as 100 T near the points of strongest deformation.

    Measuring carrier relaxation

    Finally, to see how such deformation-induced fields might affect the graphene’s optical properties, the Singapore researchers dug down into the strained graphene’s charge-carrier dynamics. Specifically, they fired femtosecond pump pulses into the material to excite charge carriers, followed by probe pulses at varying delay times to suss out how long it took the carriers to relax down to their original energy level.
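
    As a rough illustration of how such a delay scan is typically reduced to a single relaxation time (a generic single-exponential fit on made-up numbers, not the team’s actual analysis), the procedure amounts to the following:

        import numpy as np
        from scipy.optimize import curve_fit

        # Synthetic pump-probe trace: differential transmission vs. pump-probe delay.
        # The "true" relaxation time used to generate the fake data is 12 ps.
        delays_ps = np.linspace(0, 60, 121)                       # probe delay, ps
        signal = np.exp(-delays_ps / 12.0)
        signal += np.random.default_rng(1).normal(scale=0.02, size=signal.size)

        def decay(t, amplitude, tau, offset):
            """Single-exponential model for hot-carrier relaxation."""
            return amplitude * np.exp(-t / tau) + offset

        popt, _ = curve_fit(decay, delays_ps, signal, p0=(1.0, 5.0, 0.0))
        print(f"fitted carrier relaxation time: {popt[1]:.1f} ps")

    A longer fitted time constant for the strained sample than for the unstrained one is the signature reported in the next paragraph.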

    The team found that the strained graphene sported carrier-relaxation times more than an order of magnitude longer than in unstrained graphene. Further experiments and numerical modeling suggested that the electron behavior was consistent with the formation of pseudo-Landau levels—and, thus, the creation of an energy-gap structure—in the graphene.

    Graphene lasers ahead?

    Pump–probe experiments (top) showed a significant lengthening of the charge-carrier relaxation time in strained versus unstrained graphene, consistent with the development of Landau quantization in the material. The behavior could, according to the authors, enable the creation of pseudo-Landau-level lasers and other new optoelectronic devices in graphene. [Image: D.-H. Kang et al., Nature Communications (above).]

    Lead author Dong-Ho Kang told OPN that under this system, the pseudo-magnetic field “can be easily tuned by varying the size of the nanopillars.” That, he maintains, makes it possible to have “an infinite number of unique graphene devices with different band gaps.”

    More intriguing still, Kang says, is the possibility of leveraging these techniques to create graphene-based lasers. In a theoretical study published early this year in Optics Express, a number of authors on the new study argued that Landau quantization due to pseudo-magnetic fields in strained graphene could make the material “an excellent gain medium” that would support the building of chip-scale graphene lasers.

    Thus the team’s demonstration and analysis of the optical properties in the strained 2D material should, Kang maintains, help with the project of “realizing the world’s strongest, thinnest lasers.” Such graphene lasers, he says, would “complete the last missing link toward the realization of all-graphene electronic-photonic integrated circuits … which is anticipated to revolutionize the way computer chips work.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Optics and Photonics News (OPN) is The Optical Society’s monthly news magazine. It provides in-depth coverage of recent developments in the field of optics and offers busy professionals the tools they need to succeed in the optics industry, as well as informative pieces on a variety of topics such as science and society, education, technology and business. OPN strives to make the various facets of this diverse field accessible to researchers, engineers, businesspeople and students. Contributors include scientists and journalists who specialize in the field of optics. We welcome your submissions.

     
  • richardmitnick 9:26 am on February 12, 2021
    Tags: "Shaping Light Pulses with Deep Learning", , Directly shaping arbitrary THz input pulses into a variety of desired waveforms., Optics & Photonics, Ozcan Lab, ,   

    From UCLA via Optics & Photonics: “Shaping Light Pulses with Deep Learning” 


    From UCLA

    via

    From Optics & Photonics

    11 February 2021
    William G. Schulz

    Illustration of an optical diffractive network, trained with deep learning techniques, to directly shape pulses of light. Credit: Ozcan Lab/UCLA.

    Direct engineering and control of terahertz pulses could boost access to those wavelengths for many powerful applications in spectroscopy, imaging, optical communications and more. But wrangling the phase and amplitude values of a continuum of frequencies in the THz band has proved challenging.

    Now, researchers at the University of California, Los Angeles, led by OSA Fellows Aydogan Ozcan and Mona Jarrahi, say they have used deep learning and a 3D printer to create a passive network device that can directly shape arbitrary THz input pulses into a variety of desired waveforms [Nature Communications]. The team writes that these results further motivate “the development of all-optical machine learning and information processing platforms that can better harness the 4D spatiotemporal information carried by light.”

    Shaping any terahertz pulse

    The team’s method, Ozcan says, can directly shape any input THz pulse through passive light diffraction via deep-learning-designed, 3D-printed polymer wafers. It is fundamentally different, he says, from previous approaches that indirectly synthesize a desired THz pulse through optical-to-terahertz converters or shaping of the optical pump that interacts with THz sources.

    What is more, Ozcan adds, the deep-learning framework is flexible and versatile; it can be used to engineer THz pulses regardless of polarization state, beam shape, beam quality or aberrations of the specific generation mechanism.

    Diffractive optical networks

    In 2018, Ozcan’s group reported development of the first all-optical diffractive deep neural network using 3D-printed polymer wafers with uneven surfaces for light diffraction. That work was primarily about machine learning by way of light propagated through the trained diffractive layers to execute an image-classification task, he says.

    But deep-learning-designed diffractive networks can also tackle inverse design problems in optics and photonics, Ozcan says, and the team’s new work in THz pulse shaping “highlights this unique opportunity.” They used diffractive optical networks—four wafers in a precisely stacked and spaced arrangement—to shape pulses by simultaneously controlling the relative phase and amplitude of each spectral component across a continuous and wide range of frequencies, the researchers write.
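
    Conceptually, what the stacked diffractive layers accomplish can be pictured in the frequency domain as multiplying each spectral component of the input pulse by a complex (amplitude and phase) coefficient. The sketch below illustrates that picture with an arbitrary, made-up transfer function; it is not the learned wafer design itself, which emerges from training the diffractive layers.

        import numpy as np

        # Time grid and a band-limited input pulse (all values illustrative).
        n = 2048
        dt = 10e-15                                   # 10-fs sampling step
        t = (np.arange(n) - n // 2) * dt
        pulse_in = np.exp(-(t / 200e-15) ** 2) * np.cos(2 * np.pi * 1.0e12 * t)

        # Per-frequency amplitude and phase, standing in for the net effect of
        # the trained diffractive layers on each spectral component.
        freqs = np.fft.fftfreq(n, dt)
        amplitude = np.exp(-((np.abs(freqs) - 1.0e12) / 0.3e12) ** 2)   # band-pass-like
        phase = 0.5 * (freqs * 1e-12) ** 2                              # quadratic (chirp) phase
        transfer = amplitude * np.exp(1j * phase)

        # Shaped output pulse = inverse FFT of (input spectrum x transfer function).
        pulse_out = np.fft.ifft(np.fft.fft(pulse_in) * transfer).real

    In the real device, of course, no explicit Fourier transform is computed; free-space diffraction through the 3D-printed layers implements the equivalent spectral filtering passively.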

    On-demand synthesis of new pulses

    A 3D-printed optical diffractive network that is used to engineer THz pulses. Credit: Ozcan Lab/UCLA.

    For on-demand synthesis of new pulses, Ozcan says, the team used a Lego-like physical transfer learning approach. That is, by training with deep learning a new layer or layers to replace part of an existing network model, the team found new pulses can be synthesized.

    In terms of its footprint, the pulse-shaping framework has a compact design, with an axial length of approximately 250 wavelengths, Ozcan says. Moreover, he adds, it does not use any conventional optical components such as spatial light modulators, which makes it ideal for pulse shaping in the THz band—where high-resolution spatiotemporal modulation and control of complex wavefronts over a broad bandwidth represent a significant challenge.

    Improving efficiency

    To improve the efficiency of the network, Ozcan says, a switch to low-absorption polymers for the 3D-printing material could be beneficial. To further improve output efficiency, he says, antireflective coatings over diffractive surfaces could be used to reduce back reflections.

    Altogether, the capabilities of the deep-learning-designed diffractive network approach to pulse shaping enable a variety of new opportunities, Ozcan says. When merged with appropriate fabrication methods and materials, he adds, the approach can be used to directly engineer THz pulses generated through quantum cascade lasers, solid-state circuits and particle accelerators.

    “There is already commercial interest in licensing diffractive-network–related intellectual property,” Ozcan says, “and we expect this to accelerate as we continue demonstrating some of the unique advantages of this framework for various applications in machine learning, computer vision and optical design.”

    The team is also working on visible diffractive networks, which could benefit various applications in computer vision and computational imaging fields, says Ozcan, calling it “work in progress.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Optics and Photonics News (OPN) is The Optical Society’s monthly news magazine. It provides in-depth coverage of recent developments in the field of optics and offers busy professionals the tools they need to succeed in the optics industry, as well as informative pieces on a variety of topics such as science and society, education, technology and business. OPN strives to make the various facets of this diverse field accessible to researchers, engineers, businesspeople and students. Contributors include scientists and journalists who specialize in the field of optics. We welcome your submissions.


    For nearly 100 years, UCLA has been a pioneer, persevering through impossibility, turning the futile into the attainable.

    We doubt the critics, reject the status quo and see opportunity in dissatisfaction. Our campus, faculty and students are driven by optimism. It is not naïve; it is essential. And it has fueled every accomplishment, allowing us to redefine what’s possible, time after time.

    This can-do perspective has brought us 12 Nobel Prizes, 12 Rhodes Scholarships, more NCAA titles than any university and more Olympic medals than most nations. Our faculty and alumni helped create the Internet and pioneered reverse osmosis. And more than 100 companies have been created based on technology developed at UCLA.

     
  • richardmitnick 3:34 pm on December 23, 2020
    Tags: "Controlled Timing of Light Echoes", , Control on picosecond timescales, , , Optics & Photonics, , , , Quantum memory applications and beyond, Studying photon echoes   

    From Optics & Photonics: “Controlled Timing of Light Echoes” 

    From Optics & Photonics

    23 December 2020
    Patricia Daukantas

    Mountaineers who shout into a canyon have no way of controlling the time it takes for the acoustic echo of their yell to return to them. Researchers at two German universities have devised a laser-pulse method for precise control of optical echoes from semiconductor quantum dots [Communications Physics].

    Symbolic photo shows the control of photon echoes using laser pulses. Credit: Bezim Mazhiqi/Paderborn University.

    Studying photon echoes

    Quantum dots and their excitons can be modeled as simple two-level quantum systems, which makes them useful for investigating nanoscale phenomena. Photon echoes are the optical-frequency analog to spin echoes, in which a pulse of radio-frequency radiation resets the spin of a simple system such as a hydrogen atom—the technology behind magnetic resonance imaging in medicine.

    “The main idea is to stop and freeze the dephasing in an inhomogeneous ensemble of oscillators—an intrinsic process that usually happens automatically and cannot be avoided if one does not act actively against it,” says Ilja A. Akimov, a physicist at the Technical University of Dortmund (DE).

    Control on picosecond timescales

    First, the group, led by Akimov and Torsten Meier of Paderborn University, Germany, mathematically modeled the effect light pulses would have on the phases of the exciton ensemble of a quantum dot. Next, the team set up an experiment involving a single layer of indium/gallium arsenide quantum dots sandwiched between layers of aluminum gallium arsenide. The substrate layers, ranging in thickness from 68 nm to 82 nm, formed a Bragg mirror with a resonator mode in the spectral range of 910 nm to 923 nm.

    With the sandwich cooled to 2 K, the scientists hit it with 2.5-ps excitation and control pulses from a mode-locked Ti:sapphire laser, with mechanical translation stages varying the delay times between the two types of pulses. These pulses create the photon echo. A third resonant pulse—coming from the same laser—either slows down or speeds up the emission time of the photon echo, depending on its arrival time. This delay or advance was up to 5 ps.
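
    For orientation, the timing follows the standard two-pulse photon-echo relation (a textbook result, not specific to this experiment): if the excitation pulse and the first control pulse are separated by a delay \(\tau_{12}\), rephasing of the inhomogeneously broadened ensemble produces the echo at

    \[
    t_{\mathrm{echo}} \approx 2\,\tau_{12}
    \]

    measured from the excitation pulse. The new ingredient here is the additional resonant pulse, which shifts that emission time away from \(2\tau_{12}\) by up to about 5 ps in either direction.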

    “Previous studies of photon echoes in atomic ensembles and rare-earth crystals used optical pulses with durations of 100 ns and longer,” Akimov says. The short duration of the photon echo pulses in these semiconductor quantum dots enabled the researchers to extend the timing control into the picosecond range.

    Quantum memory applications and beyond

    This type of control over photon echoes could be important in future plans to manipulate light emission from quantum dots and other nanoscale photonic systems. Akimov and the Paderborn University researchers intend to apply this work to high-bandwidth optical quantum memories based on semiconductor quantum dots.

    “In general, an accurate control of timing of short optical signals is required in optoelectronic and nanophotonic circuits,” Akimov says. “In particular, our results could be used for timing corrections in quantum optical memory protocols where non-classical optical signals are stored in the QD [quantum dot] ensemble and can be retrieved in the form of photon echoes at any desired time.”

    “At the same time, our studies open plenty of other possibilities such as quantum interferometry of electronic excitations in solid-state systems,” Akimov adds. “In this case the control pulse will be used to split the photon echo in two pulses in a similar way as it is done in conventional Michelson or Mach-Zehnder interferometers.”

    Meier and his colleagues led the theoretical portion of the study, while Akimov and collaborators implemented the experimental work. Researchers from two other German universities (University of Würzburg and University of Oldenburg) and two Russian institutions (St. Petersburg State University and the Ioffe Institute of the Russian Academy of Sciences) also participated in the study.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Optics and Photonics News (OPN) is The Optical Society’s monthly news magazine. It provides in-depth coverage of recent developments in the field of optics and offers busy professionals the tools they need to succeed in the optics industry, as well as informative pieces on a variety of topics such as science and society, education, technology and business. OPN strives to make the various facets of this diverse field accessible to researchers, engineers, businesspeople and students. Contributors include scientists and journalists who specialize in the field of optics. We welcome your submissions.

     
  • richardmitnick 8:33 am on November 3, 2020
    Tags: "Flexible and Accessible Multifocus Microscopy", A low-cost light-efficient z-splitter prism, , Optics & Photonics   

    From Optics & Photonics: “Flexible, Accessible Multifocus Microscopy” 

    From Optics & Photonics

    02 November 2020
    Molly Moser

    Researchers developed a new multifocus technique that uses a z-splitter prism (right) to split detected light in a standard microscope. Credit: Sheng Xiao, Boston University.

    Optical microscopy is an essential tool for producing sharp, in-focus 2D biological images, but it can be tricky to achieve that same sharpness over extended depth ranges with a standard camera-based microscope. Now, researchers at Boston University, USA, have demonstrated a high-speed, large field-of-view (FOV) multifocus microscopy technique based on a z-splitter prism that can be readily applied to a range of biomedical and biological imaging applications [Optica].

    The researchers say their method is versatile, flexible and fast, with a basic design that can be assembled using off-the-shelf components and easily added to most camera-based microscopes.

    Going deeper

    The most common optical-microscopy technique is to use a digital camera to record images at a single focal plane. It’s a simple, affordable, low-noise solution that—thanks to modern sCMOS sensors—offers lightning-fast speeds and high pixel counts. Standard camera-based optical microscopes, however, only provide a sharp image over a very thin 2D plane. Acquiring images at different focal depths usually requires axial scans, which sacrifice either imaging speed or system complexity.

    The Boston team wanted to find a simple and fast way to obtain 3D information with standard microscopy. At the crux of the researchers’ multifocus technique, says coauthor Sheng Xiao, is a low-cost, light-efficient z-splitter prism. As its name implies, the z-splitter prism splits the microscope’s detection path into multiple paths of increasing length that are directed onto a single camera. Each path therefore corresponds to a different focal plane, and all of the planes are imaged simultaneously in a single camera frame.
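
    The reason a fixed extra path length maps onto a fixed focal-plane shift is the ordinary longitudinal-magnification relation (a general imaging result, not something specific to this paper): for a microscope with lateral magnification \(M\), an axial offset \(\Delta z\) in the sample refocuses in image space at roughly

    \[
    \Delta z_{\mathrm{image}} \approx M^{2}\,\Delta z_{\mathrm{sample}},
    \]

    so millimeter- to centimeter-scale path differences between the prism’s outputs translate into micrometer-scale focal steps in the sample at typical magnifications.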

    Compared with traditional multifocus techniques that use beam-splitting optics, says Xiao, the team’s z-splitter-based design is simpler, more achromatic and more versatile, so it can be easily applied to a variety of imaging modalities and can be assembled entirely from off-the-shelf parts. “Our system is also able to provide much larger FOV,” says Xiao, “allowing for imaging hundreds of neurons across volumes for brain function studies, or imaging freely moving organisms in their natural state for animal behavioral studies.”

    Shifting focus

    Another aspect of the volumetric imaging strategy is a clever deconvolution algorithm, which tackles the common problem of low image contrast caused by out-of-focus backgrounds when imaging thick fluorescent samples. Traditional algorithms, according to Xiao, only account for out-of-focus background contributions in the volume being imaged, failing to remove the far-out-of-focus background outside of that volume.

    The researchers’ extended-volume 3D (EV-3D) deconvolution strategy, on the other hand, explicitly extrapolates fluorescent signals beyond the imaging volume, allowing the team to more accurately estimate and remove such far-out-of-focus background, and to ultimately improve the image contrast and signal-to-noise ratio. “This is particularly beneficial in imaging applications involving thick samples,” Xiao says, “where the fluorescence labeling is dense, as is often encountered, for example, when imaging brain tissue.”

    In experiments, the team applied its versatile method to fluorescent, phase-contrast and dark-field imaging, capturing large-FOV brain images and monitoring freely moving organisms in 3D and at video rate. The researchers swapped in three different z-splitter configurations to a standard widefield microscope to prioritize either speed or imaging volume, depending on the application.

    More modalities

    Down the road, Xiao expects that the team’s z-splitter method could be used for brain imaging to help scientists understand brain function and cure neurological diseases, as well as for small-animal behavioral studies.

    Currently, Xiao and his colleagues, including lead researcher Jerome Mertz, are working to expand the system to even more imaging modalities and contrast mechanisms. The goal, Xiao says, is to “image an even wider range of samples, making our technique a more general platform for high-speed 3D imaging for biological and biomedical researches.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Optics and Photonics News (OPN) is The Optical Society’s monthly news magazine. It provides in-depth coverage of recent developments in the field of optics and offers busy professionals the tools they need to succeed in the optics industry, as well as informative pieces on a variety of topics such as science and society, education, technology and business. OPN strives to make the various facets of this diverse field accessible to researchers, engineers, businesspeople and students. Contributors include scientists and journalists who specialize in the field of optics. We welcome your submissions.

     
  • richardmitnick 12:48 pm on October 6, 2020
    Tags: "Infrared Imaging, Adaptive Optics Spurred Nobel-Worthy Discovery", Andrea Ghez - University of California Los Angeles USA, Optics & Photonics, Reinhard Genzel - Max Planck Institute for Extraterrestrial Physics in Germany and UC Berkeley USA, Roger Penrose - Oxford University UK

    From Optics & Photonics: “Infrared Imaging, Adaptive Optics Spurred Nobel-Worthy Discovery” 

    From Optics & Photonics

    06 October 2020
    Stewart Wills

    Roger Penrose, Reinhard Genzel and Andrea Ghez have been awarded the 2020 Nobel Prize in physics for their work on the theory and observation of black holes. [Image: ©Nobel Media. Ill. Niklas Elmehed]

    The 2020 Nobel Prize in Physics has been awarded to Roger Penrose, Oxford University, U.K., for his mathematical proof that black holes are an inevitable consequence of general relativity; and to Reinhard Genzel, Max Planck Institute for Extraterrestrial Physics, Germany, and Andrea Ghez, University of California, Los Angeles, USA, for their discovery of the black hole at the center of the Milky Way galaxy. Penrose will receive half of the prize of 10 million Swedish kronor (more than US$1.1 million); Genzel and Ghez will share the other half.

    The accomplishments of the observational teams led by Genzel and Ghez were enabled by infrared telescopes—and, in particular, by the emergence of adaptive optics.

    ESO VLT 4 lasers on Yepun, a major asset of the Adaptive Optics system.

    The latter technique dramatically sped up the process of making the intricate observations of stellar orbits needed to infer the presence of the supermassive black hole at the galactic center.

    Sgr A* from ESO VLT.

    Piercing interstellar clouds

    After the classic papers by Penrose in the mid-1960s that led to his award of half of this year’s Nobel Prize, scientists began to ponder the role of supermassive black holes in galactic nuclei as a potential answer to a number of astrophysical mysteries. But while such objects had been inferred theoretically, no observations had been made, as the telescopes of the time lacked sufficient angular resolution. The only way to observe such an object was indirectly—through inferences on mass density drawn from the motion of stars near the galactic center.

    Such measurements could be made only in the near-infrared, as clouds of interstellar gas obscure observations at optical wavelengths. And the need to track the motions of stars over long periods meant that the observations had to be undertaken using Earth-based telescopes.

    In the 1990s, teams led by Genzel, using telescopes in Chile operated by the European Southern Observatory, and Ghez, using the Keck Telescope in Hawaii, began employing improved optical instruments and techniques to stare at the galactic center over long periods and tease out the required observations.

    ESO VLT at Cerro Paranal in the Atacama Desert, elevation 2,635 m (8,645 ft), comprising the unit telescopes ANTU (UT1, “The Sun”), KUEYEN (UT2, “The Moon”), MELIPAL (UT3, “The Southern Cross”) and YEPUN (UT4, “Venus, as evening star”). Credit: J.L. Dauvergne & G. Hüdepohl (atacama photo).

    Keck Observatory, two 10-meter telescopes operated by Caltech and the University of California, Maunakea, Hawaii, USA, altitude 4,207 m (13,802 ft).


    Sharpening the picture

    One big issue for these Earth-based scopes was the one that has dogged astronomers since the days of Galileo: the obscuring effect of atmospheric turbulence. As a first approach to resolving that problem, the teams led by both Genzel and Ghez developed a technique called speckle imaging. The technique involved taking numerous short exposures of the target star with high sensitivity, and then stacking the data to sharpen the image.

    The result was an impressive increase in the sharpness of the observations by the two teams. That was important, as the teams needed to track stellar motions at a relatively fine scale (at least in astronomical terms) to make the requisite calculations. But the brief exposure times of speckle imaging meant that it could be used to sharpen up only the brightest stars in the galactic center. And the technique was slow, requiring surveys stretching over years to extract velocity information for only a handful of stars.

    An adaptive-optics speedup

    The advent of adaptive optics at the end of the 1990s changed the game and allowed both research teams to speed things up considerably.

    Adaptive optics works by first taking an observation of a “guidestar”—either a nearby natural star, or an artificially created point source made by exciting sodium atoms in the upper atmosphere with a powerful laser. Then, the known position and brightness of that point source is used to calculate the effect of atmospheric turbulence at that instant. Using that information, the wavefront of the light from the astronomical target actually being observed can in turn be reshaped in real time, via equipment such as rapidly deformable mirrors, to compensate for those atmospheric distortions.
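
    A minimal numerical caricature of that closed loop (toy turbulence and an idealized modal wavefront sensor, nothing like a real Keck or VLT adaptive-optics system) looks like this:

        import numpy as np

        rng = np.random.default_rng(0)
        n_modes = 50                     # number of corrected wavefront modes
        gain = 0.5                       # integrator gain of the control loop
        dm_command = np.zeros(n_modes)   # deformable-mirror shape, same modal basis

        residual_rms = []
        for step in range(200):
            # Atmosphere: slowly evolving random modal coefficients (radians).
            atmosphere = (np.sin(0.05 * step + np.arange(n_modes))
                          + 0.1 * rng.normal(size=n_modes))

            # Residual wavefront on the guide star after the current correction.
            residual = atmosphere - dm_command

            # Idealized sensor measures the residual with a little noise ...
            measurement = residual + 0.05 * rng.normal(size=n_modes)

            # ... and a simple integrator updates the mirror for the next frame.
            dm_command += gain * measurement

            residual_rms.append(np.sqrt(np.mean(residual ** 2)))

        print(f"residual wavefront error: {residual_rms[0]:.2f} rad rms at start, "
              f"{residual_rms[-1]:.2f} rad rms once the loop has converged")

    Real systems add a reconstruction step from sensor slopes to mirror actuators and run at kilohertz rates, but the gain-and-integrate structure of the loop is the same.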

    Some adaptive-optics systems use lasers to excite sodium atoms in the upper atmosphere. The emissions from those excited atoms form a “guidestar”—an artificial point source of light that can be used to computationally back out the effect of turbulence in Earth’s atmosphere. [Image: Getty Images]

    Glistening against the awesome backdrop of the night sky above ESO’s Paranal Observatory, four laser beams project out into the darkness from Unit Telescope 4 (UT4) of the VLT, a major asset of the Adaptive Optics system.

    While the idea of adaptive optics had been proposed in the 1950s, it wasn’t until the late 1990s that such systems started to become available on large ground-based telescopes. These included the Keck Telescope, where Ghez’s team was working, and ESO’s Very Large Telescope (VLT), one site of the Genzel team’s effort.

    A key advantage of this new image-sharpening technique was that it permitted long exposures, expanding the number of stars that could be observed and the imaging depth that was possible. It also opened up the prospect of using sensitive spectroscopes that could probe (through the Doppler effect) the radial velocities of the stars, limning out a more complete picture of their motion. And it allowed monitoring of stellar motion over a much shorter timescale than was possible with speckle imaging.
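
    The radial-velocity part of that picture comes from the ordinary (non-relativistic) Doppler relation applied to stellar spectral lines, quoted here just for reference:

    \[
    v_r \approx c\,\frac{\Delta\lambda}{\lambda_0},
    \]

    where \(\Delta\lambda\) is the measured shift of a line of rest wavelength \(\lambda_0\). Combined with the proper motions obtained from the sharpened images, this yields the full three-dimensional velocity of each star orbiting the galactic center.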

    “Still important”

    These optical advances allowed both teams to image and analyze a crucial short-orbital-period star near the galactic center. And the data from the two teams’ observations showed excellent agreement, effectively nailing the case that the object at the galactic center was indeed a supermassive black hole.

    Since then, the exploits of these bizarre dark objects have made many a scientific headline. Some high points have included the collisions of black holes now detected almost routinely by the LIGO and Virgo gravitational-wave observatories, and the stunning first “picture” of a black hole by the Event Horizon Telescope, released in 2019.

    Caltech/MIT Advanced LIGO detector installation, Livingston, LA, USA.

    Caltech/MIT Advanced LIGO detector installation, Hanford, WA, USA.

    VIRGO gravitational-wave interferometer, near Pisa, Italy.

    Messier 87*, the first image of a black hole: the supermassive black hole at the center of the galaxy Messier 87. Image via JPL/Event Horizon Telescope Collaboration.

    EHT map.

    At the press conference announcing this year’s Nobel physics prize, co-laureate Andrea Ghez, reached by phone, noted in response to a question that the prize for her groundbreaking work underscored the importance of science, at a time when some have sensed more than a whiff of anti-science sentiment in the zeitgeist.

    “Science is still important, and pursuing the reality of our physical world is critical to human beings,” Ghez said. “I think today I feel more passionate about the teaching side of my job … It’s so important to teach the younger generation that their ability to question, and to think, is crucial to the future of the world.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Optics and Photonics News (OPN) is The Optical Society’s monthly news magazine. It provides in-depth coverage of recent developments in the field of optics and offers busy professionals the tools they need to succeed in the optics industry, as well as informative pieces on a variety of topics such as science and society, education, technology and business. OPN strives to make the various facets of this diverse field accessible to researchers, engineers, businesspeople and students. Contributors include scientists and journalists who specialize in the field of optics. We welcome your submissions.

     
  • richardmitnick 10:51 am on September 16, 2020
    Tags: "Supreme or Unproven?", A look at optics, Classical shortcuts, Dots, Erring on the side of caution, , ions and photons, Long-sought milestone?, Optics & Photonics, , , Quantum’s power   

    From Optics & Photonics: “Supreme or Unproven?” 

    From Optics & Photonics

    01 March 2020 [Missed this very important article. Making amends here.]
    Edwin Cartlidge

    Despite much recent fanfare, quantum computers still need to show that they can do something useful.

    Google 54-qubit Sycamore superconducting processor quantum computer.

    Judging by the cover of Nature that day, 24 October 2019 marked a turning point in the decades-long effort to harness the strange laws of quantum mechanics in the service of computing.


    The words “quantum supremacy,” emblazoned in large capital letters on the front of the prestigious journal, announced to the world that a quantum computer had, for the first time, performed a computation impossible to carry out on a classical supercomputer in any reasonable amount of time—despite having vastly less in the way of processors, memory and software to draw on.

    The quantum computer in question, Sycamore, comprised a mere 53 superconducting quantum bits, or qubits. It was built by a group of scientists at Google led by physicist John Martinis, who used it to execute an algorithm that generated a semi-random series of numbers. Those researchers then worked out how long they would have needed to simulate that operation on the IBM-built Summit supercomputer at Oak Ridge National Laboratory in Tennessee, USA, the processors of which include tens of trillions of transistors and which has 250,000 terabytes of storage.

    ORNL IBM AC922 Summit supercomputer, which was No. 1 on the TOP500. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy.


    The IBM-built Summit supercomputer at the Oak Ridge National Laboratory, USA, contains tens of trillions of transistors and can carry out about 200,000 trillion operations a second. Credit:ORNL.

    Amazingly, Martinis and colleagues concluded that what Sycamore could do in a little over three minutes, Summit would take 10,000 years to simulate.

    Google CEO Sundar Pichai next to the company’s quantum computer. Credit: Google.

    Long-sought milestone?

    For many scientists, Sycamore’s result represents a major milestone on the road to a real-world, general-purpose quantum computer. Having invested millions of dollars in the field over the course of more than 30 years, governments and, increasingly, industry have bet that the exponential speed-up in processing power offered by quantum states in theory can be realized practically.

    _________________________________________________

    Google’s Sycamore processor. Credit: Erik Lucero, Google.
    Sycamore—A quantum chip bearing fruit

    Google’s Sycamore processor consists of a 1-cm^2 piece of aluminum containing a 2D array of 53 qubits—each acting as a tiny superconducting resonator that encodes the values 0 and 1 in its two lowest energy levels, and coupled to its four nearest neighbors. Cooled to below 20 mK to minimize thermal interference, the qubits are subject to “gate” operations—having their coupling turned on and off, as well as absorbing microwaves and experiencing variations in magnetic flux.

    The Google team executed a series of cycles, each involving a random selection of one-qubit gates and a specific two-qubit gate. After completing the last cycle, they then read out the value of each qubit to yield a 53-bit-long string of 0s and 1s. That sequence appears random, but quantum entanglement and interference dictate that some of the 2^53 permutations are much more likely to occur than others. Repeating the process a million times builds up a statistically significant number of bit strings that can be compared with the theoretical distribution calculated using a classical computer.

    Measuring Sycamore’s “fidelity” to the theoretical distribution over 14 cycles, Martinis and coworkers found that the figure, 0.8%, agreed with calculations based on the fidelities of individual gates—and used that fact to estimate that after 20 cycles, the fidelity would have been about 0.1% (as the fidelity is gradually eroded by gate errors). At this level of complexity and fidelity, the team calculated, the classical Summit supercomputer would require a whopping 10,000 years to simulate the quantum wave function—whereas Sycamore needed a mere 200 seconds to take its 1 million samples.
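
    As a sanity check on those numbers (a deliberately simple per-cycle error model, used here only for illustration), suppose each cycle multiplies the overall fidelity by roughly the same factor \(f\). Then

    \[
    f \approx 0.008^{1/14} \approx 0.71,
    \qquad
    F(20) \approx 0.71^{20} \approx 1 \times 10^{-3},
    \]

    consistent with the roughly 0.1% fidelity quoted for the 20-cycle circuits.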
    _________________________________________________

    Winning that bet, however, depends on being able to protect a quantum computer’s delicate superposition states from even the smallest amounts of noise, such as tiny temperature fluctuations or minuscule electric fields. The Google result shows that noise can be controlled sufficiently to enable the execution of a classically difficult algorithm, according to Greg Kuperberg, a mathematician at the University of California, Davis, USA. “This advance is a major blow against arguments that quantum computers are impossible,” he says. “It is a tremendous confidence builder for the future.”

    Not everyone, however, is convinced by the research. A number of experts, including several at IBM, believe that the Google group has seriously underestimated the capacity of traditional digital computers to simulate the kind of algorithms that could be run on Sycamore. More fundamentally, it still remains to be seen whether scientists can develop a quantum algorithm that is resilient to noise and that does something people are willing to pay for—given how little practical utility the current algorithm is likely to have.

    “For me, the biggest value in the Google research is the technical achievement,” says Lieven Vandersypen, who works on rival quantum-dot qubits at the Delft University of Technology in the Netherlands. He points out that the previous best superconducting computer featured just 20 quite poorly controlled qubits. “But what we in the field are after is a computer that can solve useful problems, and we are still far from that.”

    Quantum’s power

    Quantum computers offer the possibility of carrying out certain tasks far more quickly than is possible with classical devices, owing to a number of bizarre properties of the quantum world. Whereas a classical computer processes data sequentially, a quantum computer should operate as a massively parallel processor. It does so thanks to the fact that each qubit—encoded in quantum particles such as atoms, electrons or photons—can exist in a superposition of the “0” and “1” states, rather than simply one or the other, and because the qubits are linked together through entanglement.

    For N qubits, each of the 2^N possible states that can be represented has an associated amplitude. The idea is to carry out a series of operations on the qubits, specified by a quantum algorithm, such that the system’s wave function evolves in a predetermined way, causing the amplitudes to change at each step. When the computer’s output is then obtained by measuring the value of each qubit, the wave function collapses to yield the result.

    The Google experiment, carried out in company labs in Santa Barbara, CA, USA, was designed to execute an algorithm whose answer could only be found classically by simulating the system’s wave function. So while running the algorithm on a quantum computer would only take as long as is needed to execute its limited number of steps, simulating that algorithm classically would involve tracking the 2^N probability amplitudes. Even with just 53 qubits that is an enormous number—9×10^15, or 9,000 trillion.
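
    To put that count in storage terms (a back-of-the-envelope estimate, not a figure from either paper): holding one double-precision complex amplitude per basis state requires about

    \[
    2^{53} \times 16\ \text{bytes} \approx 1.4 \times 10^{17}\ \text{bytes} \approx 140\ \text{petabytes},
    \]

    which is comparable to Summit’s entire 250,000-terabyte storage and helps explain why the debate described below turned on how cleverly that disk space could be exploited.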

    Sycamore is not the first processor to have harnessed quantum interference to perform a calculation considered very difficult, if not impossible, to do using a classical computer. In 2017, two groups in the U.S. each used about 50 interacting, individually controllable qubits to simulate collections of quantum spins. Christopher Monroe and colleagues at the University of Maryland, College Park, manipulated electrically trapped ions using laser pulses, while OSA Fellow Mikhail Lukin of Harvard University and coworkers used a laser to excite neutral atoms. Both groups used their devices to determine the critical point at which a magnetic-phase transition occurs.

    However, these systems were designed to carry out very specific tasks, somewhat akin to early classical analog computers. Google’s processor, in contrast, is a programmable digital machine. By employing a handful of different logic gates—specific operations applied either to one or two qubits—it in principle can execute many types of quantum algorithms.

    Martinis and colleagues showed that they could use these gates to reliably generate a sample of numbers from the semi-random algorithm. Crucially, they found that they could prevent errors in the gates from building up and generating garbage at the output—leading them to declare that they had achieved quantum supremacy.

    “We are thrilled,” says Martinis, who is also a professor at the University of California, Santa Barbara. “We have been trying to do this for quite a few years and have been talking about it, but of course there is a bit of pressure on you to make good on your claims.”

    Classical shortcuts

    When the Google team published its results—a preliminary version of which had been accidentally posted online at NASA a month earlier—rivals lost little time in criticizing them. In particular, researchers at IBM, which itself works on superconducting qubits, posted a paper on the arXiv server arguing that Summit could in fact simulate Sycamore’s operations in just 2.5 days (and at higher fidelity). Google’s oversight, they said, was to not have considered how much more efficiently the supercomputer could track the system’s wave function if it fully exploited all of its hard disk space.

    Kuperberg argues that Sycamore’s performance still merits the label “supremacy” given the disparity in resources available to the two computers. (In fact, the IBM researchers didn’t actually carry out the simulation, possibly because it would have been too expensive.) Kuperberg adds that with just a dozen or so more qubits, the simulation time would climb from days to centuries. “If this is what passes as refutation, then this is still a quantum David versus a classical Goliath,” he says. “This is supremacy enough as far as I am concerned.”

    Indeed, in their paper Martinis and colleagues write that while they expect classical simulation techniques to improve, they also expect that “they will be consistently outpaced by hardware improvements on larger quantum processors.” Others, however, suggest that quantum computers might struggle to deliver any meaningful speed-up over classical devices. In particular, argue critics, it remains to be seen just how “quantum mechanical” future quantum computers will be—and therefore how easy it might be to imitate them.

    To make classical simulation more competitive, the IBM researchers, as well as counterparts at the Chinese tech company Alibaba, are looking to make better use of supercomputer hardware. But Graeme Smith, a theoretical physicist at the University of Colorado and the JILA research institute in Boulder, USA, thinks that more radical improvement might be possible. He argues that the noise in Google’s gates, low as it is, could still swamp much of the system’s quantum information after multiple cycles. As such, he reckons it may be possible to develop a classical algorithm that sidesteps the need to calculate the 53-qubit wave function. “There is nothing to suggest that you have to do that to sample from [Google’s] circuit,” he says.

    Indeed, Itay Hen, a numerical physicist at the University of Southern California in Los Angeles, USA, is trying to devise a classical algorithm that directly samples from the distribution output by Google’s circuit. Although too early to know whether the scheme will work, he says it would involve calculating easy bits of the wave function and interfering them to generate a succession of individual data strings very quickly. “I am guessing that lots of other people are doing a similar thing,” he adds.

    As Hen explains, Martinis and colleagues had to make a compromise when designing their quantum-supremacy experiment—making the circuit complex enough to be classically hard, but not so complex that its output ended up being pure noise. And he says that the same compromise faces all developers of what is hoped will become the first generation of useful quantum computers—a technology known as “noisy intermediate-scale quantum,” or NISQ.

    Such devices might consist of several hundred qubits, perhaps allowing them to simulate molecules and other small quantum systems. This is how Richard Feynman, back in the early 1980s, originally envisaged quantum computers being used—conceivably allowing scientists to design new materials or develop new drugs. But as their name suggests, these devices, too, would be limited by noise. The question, says Hen, is whether they can be built with enough qubits and processor cycles to do something that a classical computer can’t.

    Dots, ions and photons

    To try and meet the challenge, physicists are working on a number of competing technologies—superconducting circuits, qubits encoded in nuclear or electronic spins, trapped atoms or ions—each of which has its strengths and weaknesses (see OPN, October 2016, Quantum Computing: How Close Are We?). Vandersypen, for instance, is hopeful that spin qubits made from quantum dots—essentially artificial atoms—can be scaled up. He points out that such qubits have been fabricated in an industrial clean room at the U.S. chip giant Intel, which has teamed up with him and his colleagues at the Delft University of Technology to develop the technology. “We have done measurements [on the qubits],” he adds, “but not yet gotten to the point of qubit manipulation.”

    Collaborating scientists from Intel and QuTech at the Delft University of Technology with Intel’s 17-qubit superconducting test chip. [Courtesy of Intel Corp.]

    Trapped-ion qubits, meanwhile, are relatively slow, but have higher fidelities and can operate more cycles than their superconducting rivals. Monroe is confident that by linking up multiple ion traps, perhaps optically, it should be possible to make NISQ devices with hundreds of qubits. Indeed, he cofounded the company IonQ with OSA Fellow Jungsang Kim from Duke University, USA, to commercialize the technology.

    A completely different approach is to encode quantum information in light rather than matter. Photonic qubits are naturally resistant to certain types of noise, but because they are harder to manipulate, they may ultimately be more suited to communication and sensing than to computing (see “A look at optics,” below).

    _________________________________________________
    Xanadu’s quantum chip. Credit: Xanadu Quantum Technologies Inc.

    A look at optics

    As qubits, photons have several virtues. Because they usually don’t interact with one another they are immune to stray electromagnetic fields, while their high energies at visible wavelengths make them robust against thermal fluctuations—removing the need for refrigeration. But their isolation makes them tricky to manipulate and process.

    Two startups are working to get around this problem—and raising tens of millions of dollars in the process. PsiQuantum in Palo Alto, CA, USA, aims to make a chip with around 1 million qubits. Because photons are bosons and tend to stick together, their paths combine after entering 50-50 beam splitters from opposite sides, effectively interacting. Xanadu in Toronto, Canada, instead relies on the uncertainty principle, generating beams of “squeezed light” that have lower uncertainty in one quantum property at the expense of greater uncertainty in another. In theory, interfering these beams and counting photons at the output might enable quantum computation.

    Both Xanadu and PsiQuantum have major, if different, technical hurdles to overcome before their computers become reality, according to OSA Fellow Michael Raymer, an optical physicist at the University of Oregon, USA, and a driving force behind the U.S. National Quantum Initiative.

    Raymer adds that photons might also interact not directly, but via matter intermediaries, potentially enabling quantum-logic operations between single photons. Or they might be used to link superconducting processors to slower but longer-lived trapped-ion qubits (acting as memory). Alternatively, photon–matter interactions could be exploited in the quantum repeaters needed to ensure entanglement between distant particles—potentially a boost for both communication and sensing.

    “Whether or not optics will be used to create free-standing quantum computers,” says Raymer, “I will defer prediction on that.”
    _________________________________________________

    Yet turning NISQ computers into practical devices will need more than just improvements in hardware, according to William Oliver, an electrical engineer and physicist at the Massachusetts Institute of Technology, USA. Also essential, he says, will be developing new algorithms that can exploit these devices for commercial ends—be those ends optimizing investment portfolios or simulating new materials. “The most important thing,” Oliver says, “is to find commercial applications that gain advantage from the qubits we have today.”

    According to Hen, though, it remains to be seen whether any suitable algorithms can be found. For simulation of chemical systems, he says, it is not clear if even hundreds of qubits would be enough to reproduce the interactions of just 40 electrons—the current classical limit—given the inaccuracies introduced by noise. Indeed, Smith is pessimistic about NISQ computers being able to do anything useful. “There is a lot of hope,” he says, “but not a lot of good science to substantiate that hope.”

    Erring on the side of caution

    The only realistic aim, Hen argues—and one that all experts see as the ultimate goal of quantum computing—is to build large, fault-tolerant machines. These would rely on error correction, which involves spreading the value of a single “logical qubit” over multiple physical qubits to make computations robust against errors on any specific bit (since quantum information cannot simply be copied). But implementing error correction will require that the error rate on individual qubits and logic gates is low enough that adding the error-correcting qubits doesn’t introduce more noise into the system than it removes.

    Vandersypen reckons that this milestone could be achieved in as little as a year or two. The real challenge, he argues, will be scaling up—given how many qubits are likely to be needed for full-scale fault-tolerant computers. Particularly challenging will be making a machine that can find the prime factors of huge numbers, an application put forward by mathematician Peter Shor in 1994 that could famously threaten internet encryption. Martinis himself estimates that a device capable of finding the prime factors of a 2000-bit number in a day would need about 20 million physical qubits, given a two-qubit error probability of about 0.1%.

    Despite the huge challenges that lie ahead, Martinis is optimistic about future progress. He says that he and his colleagues at Google are aiming to get two-qubit error rates down to 0.1% by increasing the coherence time of their qubits—doubling their current value of 10–20 microseconds within six months, and then quadrupling it in two years. They then hope to build a computer with 1,000 logical qubits within 10 years—a device that he says wouldn’t be big enough to threaten internet security but could solve problems in quantum chemistry. “We are putting together a plan and a timeline and we are going to try to stick to that,” he says.

    However, Oliver is skeptical that such an ambitious timeframe can be met, estimating that a full-scale fault-tolerant computer is likely to take “a couple of decades” to build. Indeed, he urges his fellow scientists not to overstate quantum computers’ near-term potential. Otherwise, he fears, the field could enter a “quantum winter” in which enthusiasm gives way to pessimism and the withdrawal of funding. “A better approach,” according to Oliver, “is to be realistic about the promise and the challenges of quantum computing so that progress remains steady.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Optics and Photonics News (OPN) is The Optical Society’s monthly news magazine. It provides in-depth coverage of recent developments in the field of optics and offers busy professionals the tools they need to succeed in the optics industry, as well as informative pieces on a variety of topics such as science and society, education, technology and business. OPN strives to make the various facets of this diverse field accessible to researchers, engineers, businesspeople and students. Contributors include scientists and journalists who specialize in the field of optics. We welcome your submissions.

     
  • richardmitnick 10:09 am on September 16, 2020
    Tags: "Under the Hood of Google’s Quantum Computer", Checking the uncheckable, Finding the right problem, Google's 54-qubit Sycamore superconducting processor quantum computer, Optics & Photonics, Scoping out a prototype, The “tricky stuff”: Building the quantum machine, What is a “useful quantum computer”?   

    From Optics & Photonics: “Under the Hood of Google’s Quantum Computer” 

    From Optics & Photonics

    9.14.20
    Stewart Wills

    Google’s 54-qubit Sycamore superconducting processor quantum computer.

    In a paper published in the 24 October 2019 Nature, Google created quite a stir in the world of quantum information. In that study, a group of the web behemoth’s scientists, led by physicist John Martinis, reported that its 53-qubit superconducting quantum computer, dubbed Sycamore, had succeeded in carrying out a task that would have been impossible for a classical supercomputer to perform in a reasonable time.

    For the keynote address to kick off OSA’s 2020 Quantum 2.0 conference—a brand-new meeting focused on developments in quantum science and technology—a contributor to the Sycamore effort, Marissa Giustina of Google AI Quantum, took the meeting’s online attendees on a whirlwind trip through the process that led to the company’s blockbuster 2019 announcement.

    What is a “useful quantum computer”?

    Giustina began by noting that Google’s interest in quantum computing, and particularly its relevance to optimization problems, began with the work of computer scientist Hartmut Neven in 2012. That interest quickly expanded in 2014 with the addition of the quantum computing effort of researcher John Martinis at the University of California, Santa Barbara, USA. The project has since mushroomed to a 70-person team, with the application effort centered in Los Angeles, hardware research in Santa Barbara, and the cloud interface—which “has the job of letting us talk to each other”—in Seattle, Washington.

    As the name implies, building a “useful quantum computer” requires three things, Giustina said. One is, of course, a computer—a machine that performs a computational task. Another is a controllable quantum system, with many quantum bits, or qubits—which means, she added, that “you’re going to have to harness zillions of amplitudes” to control these qubits. And the quantum computer needs to be useful, which Giustina stressed is “not a trivial addition,” requiring sufficiently evolved error correction to take things in the direction of a universal programmable quantum device.

    Scoping out a prototype

    That goal is something unlikely to be reached anytime soon. So Google’s effort started with a prototype, drawing on the superconducting-circuit flavor of quantum computing being worked on in the Martinis lab. The goal was to try to develop a prototype system that could “enter a space where no classical computer can go” (or, at least, go in a reasonable amount of time). The team’s calculations and reasoning suggested that a size of “around 50 qubits” was the threshold for that computational frontier, and the Sycamore system that the team built in fact sported 53 superconducting qubits.

    Those qubits were to be laid out in a 2D array, to allow the team to track errors by performing parity checks on pairs of qubits. Giustina admits that this rests on the assumption that errors at the system level can be tracked by looking at discrete errors between qubits—something the team would need to test. Making the system “good for something,” she added, needed “a good handshake between algorithm and hardware developers.” (Indeed, Google has developed an open-source Python library, Cirq, to help enable that collaboration.)
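
    To give a flavor of that algorithm–hardware handshake, here is a minimal Cirq sketch of a single parity check between a pair of qubits on a grid, run on a simulator rather than real hardware; the layout and gate sequence are illustrative assumptions, not Google's production code:

    import cirq

    # Two data qubits and one ancilla laid out on a small 2D grid (illustrative).
    data = [cirq.GridQubit(0, 0), cirq.GridQubit(0, 2)]
    ancilla = cirq.GridQubit(0, 1)

    # ZZ parity check: the ancilla ends up in |1> when the data qubits disagree.
    circuit = cirq.Circuit(
        cirq.X(data[0]),                  # prepare the example state |10>
        cirq.CNOT(data[0], ancilla),
        cirq.CNOT(data[1], ancilla),
        cirq.measure(ancilla, key="parity"),
    )

    result = cirq.Simulator().run(circuit, repetitions=100)
    print(result.histogram(key="parity"))  # expect all 1s for the |10> input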

    The “tricky stuff”: Building the quantum machine

    2
    The Google Sycamore chip (top) involves an architecture constructed of control circuitry, superconducting qubits (in aluminum-on-silicon), and microwave resonators for measurement. [Image: Erik Lucero, Google (top); Google AI Quantum (bottom)].

    With that as background, Giustina moved on to what she called “the tricky stuff”—how actually to make such a prototype quantum computer. The superconducting microwave qubits themselves were constructed of capacitors and SQUIDs (pairs of Josephson junctions), fashioned in a cleanroom from aluminum-on-silicon in a 2D pattern. Transmission-line LC resonators were added to each qubit, with the resonator length for each qubit slightly different from the others, so that the individual qubits could read out at different microwave frequencies.
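
    A simplified numerical sketch of why giving each resonator its own frequency works: many readout tones can share one line and still be told apart by demodulating at each frequency. The frequencies, amplitudes and sample rate below are illustrative assumptions, not the actual device parameters:

    import numpy as np

    fs = 10e9                              # sample rate, samples/s (illustrative)
    t = np.arange(0, 2e-6, 1 / fs)         # 2 microseconds of readout signal

    # Assumed readout-resonator frequencies for three qubits, 50 MHz apart.
    readout_freqs = [4.10e9, 4.15e9, 4.20e9]
    amplitudes = [1.0, 0.4, 0.7]           # stand-ins for state-dependent response

    # All tones travel on one transmission line...
    line_signal = sum(a * np.cos(2 * np.pi * f * t)
                      for a, f in zip(amplitudes, readout_freqs))

    # ...and are separated again by demodulating at each resonator frequency.
    for f in readout_freqs:
        i = np.mean(line_signal * np.cos(2 * np.pi * f * t))
        q = np.mean(line_signal * np.sin(2 * np.pi * f * t))
        print(f"{f / 1e9:.2f} GHz tone: recovered amplitude ~ {2 * np.hypot(i, q):.2f}")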

    Each processor chip thus packages together control, qubit and measurement in a single structure. To make a bigger chip, the team used a 3D integration, cementing 2D arrays together with indium bumps. Then the entire package was put on a dilution refrigerator, which chilled it to a temperature of 20 mK.

    Of calibration and music

    A major next challenge was calibrating the machine. Giustina drew on a musical metaphor to explain the calibration. “A musical instrument generates a sound that the listener hears as a note—but the musician must be good enough to produce that note,” she said. “Similarly, calibration is the process of producing the right electronic wave that will produce the quantum gate that we want. The qubit is our listener, and the control electronics is our instrument.”

    Such calibration isn't trivial, Giustina noted. Indeed, Google has written computer programs to handle it, so that the calibration isn't limited to one or two qubits. “We need all the qubits to work,” she said. “We can't just rely on ‘hero’ performance for one or two.” The team also ran a battery of input–output tests to gauge the system fidelity, assessing the system error by looking at the errors between each qubit and its four nearest neighbors. The result was a readout error of 3.8%. “We're pretty proud of these error rates,” Giustina said.
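
    As a rough illustration of how such component error rates set the overall performance, the sketch below multiplies per-operation fidelities together, broadly the kind of bookkeeping this sort of benchmarking relies on; apart from the 3.8% readout error quoted above, the error rates and operation counts are assumed placeholders:

    # Rough fidelity estimate: multiply (1 - error) over every operation.
    # Gate error rates and operation counts are assumed placeholders;
    # only the 3.8% readout error is taken from the article.
    single_qubit_error = 0.0016
    two_qubit_error = 0.0062
    readout_error = 0.038

    n_single, n_two, n_readout = 1000, 400, 53   # assumed operation counts

    fidelity = ((1 - single_qubit_error) ** n_single
                * (1 - two_qubit_error) ** n_two
                * (1 - readout_error) ** n_readout)
    print(f"Estimated circuit fidelity: {fidelity:.4f}")  # small, but nonzero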

    Finding the right problem

    With the machine built and calibrated, the Google team next needed to find “a small milestone computational task”—something that the Sycamore unit could handle easily, but that would take “too long” for a classical machine to do. “There’s not much difference between a classical computer and an abacus,” Giustina noted. “They can both simulate each other. A quantum computer uses a different kind of computation altogether.”

    Ideally, such a demonstration would involve some classic problems on which it’s believed quantum computers would excel, such as factoring of large numbers or function inversion. These are problems that tend to hang up classical computers—but, interestingly, that would be easy to check by running them in the other direction with a classical machine. Unfortunately, though, Giustina said that current quantum hardware is not nearly sophisticated enough to solve these problems.

    So the team settled on a specific problem—related to the mapping of quantum logic gates to specific bit strings—that’s known to have high computational complexity. The quantum computer could run the problem directly; a classical computer could solve it only by simulating the quantum machine’s wave function. “We could then determine the classical cost of the quantum machine’s labor,” Giustina said. “If it’s ‘too high’, we have achieved our goal.”
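
    The classical cost climbs so quickly because simulating the quantum machine's wave function means storing 2^n complex amplitudes for n qubits. A small illustrative calculation, assuming brute-force state-vector simulation with double-precision complex numbers:

    # Memory needed for a full n-qubit state vector,
    # assuming 16 bytes per complex amplitude (complex128).
    for n in (20, 30, 40, 53):
        amplitudes = 2 ** n
        gigabytes = amplitudes * 16 / 1e9
        print(f"{n:2d} qubits: 2^{n} amplitudes ~ {gigabytes:,.2f} GB")
    # At 53 qubits the state vector alone needs on the order of a hundred
    # petabytes, which is why only indirect checks are possible.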

    Checking the uncheckable

    One problem, of course, is that when the number of qubits becomes large enough for the quantum device to outperform the classical computer, it's no longer possible to use the latter machine to check the performance of the former. Giustina walked the audience through the complex verification regime that the team used, which included both modeling of the problem and test runs with simplified circuits that could be checked classically, to amass a picture of the machine's performance indirectly. The two lines of evidence, she said, mutually reinforced one another and provided significant confidence in the quantum machine's advantage.

    “The take-home message,” according to Giustina, was that “Sycamore can run 53 qubit circuits with nonzero fidelity,” and that achieving the same fidelity on a classical machine would require “a ridiculous amount of resources.” The next big question, she believes, will be to determine whether quantum mechanics, and quantum error correction, can hold for a very large, highly complex system. “For a system of this size, at least, it’s possible,” she said. “But the work done here is the tip of the iceberg. To realize error correction, significant work will be necessary.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Optics and Photonics News (OPN) is The Optical Society’s monthly news magazine. It provides in-depth coverage of recent developments in the field of optics and offers busy professionals the tools they need to succeed in the optics industry, as well as informative pieces on a variety of topics such as science and society, education, technology and business. OPN strives to make the various facets of this diverse field accessible to researchers, engineers, businesspeople and students. Contributors include scientists and journalists who specialize in the field of optics. We welcome your submissions.

     
  • richardmitnick 4:10 pm on August 20, 2020 Permalink | Reply
    Tags: "Photoacoustic Microscopy System Boosts Imaging Speed", , Optics & Photonics, Pulsed laser technology   

    From Optics & Photonics: “Photoacoustic Microscopy System Boosts Imaging Speed” 

    From Optics & Photonics

    20 August 2020
    Meeri Kim

    1
    Multifocal optical-resolution photoacoustic microscopy through an ergodic relay (MFOR-PAMER) shortens the scanning time while maintaining a simple and economic setup. [Image: Yang Li, Terence T. W. Wong, Junhui Shi, Hsun-Chia Hsu and Lihong V. Wang]

    Because light scatters so strongly in biological tissues, purely optical imaging techniques have a short leash when it comes to probing depths beneath the surface. Sound scatters a thousand times less than light in these situations, which has led scientists to develop hybrid optical-acoustic imaging methods.

    Researchers at the California Institute of Technology, USA, report on one of these hybrid methods, a new variation of a photoacoustic microscopy system that is faster, smaller, and cheaper than others of its kind [Light: Science & Applications].

    It can reduce the imaging time of a histology sample from several hours to less than a minute, paving the way for rapid, label-free diagnoses of cancer and other diseases.

    Low complexity, high resolution

    Photoacoustic microscopy uses a pulsed laser to illuminate a sample, which heats up the molecules inside. The rise in temperature leads to thermoelastic expansion of the tissue, generating acoustic waves that can be detected by ultrasonic transducers. The result is a map of optical absorption within the sample, which depends on the concentrations of things like hemoglobin, water or lipids.
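
    The strength of that signal is often summarized, in textbook form rather than anything specific to this paper, by the initial pressure rise p0 = Γ·μa·F, with Γ the Grüneisen parameter, μa the optical absorption coefficient and F the local laser fluence. A minimal sketch with assumed, order-of-magnitude values:

    # Textbook photoacoustic relation: p0 = Grueneisen * mu_a * fluence.
    # All values are assumed, order-of-magnitude numbers for soft tissue.
    grueneisen = 0.2      # dimensionless Grueneisen parameter (assumed)
    mu_a = 20.0           # optical absorption coefficient, 1/m (assumed)
    fluence = 100.0       # laser fluence, J/m^2 (i.e., 10 mJ/cm^2, assumed)

    p0 = grueneisen * mu_a * fluence   # initial pressure rise, in pascals
    print(f"Initial photoacoustic pressure: {p0:.0f} Pa")
    # Stronger absorbers produce a larger pressure rise, which is why the
    # reconstructed image maps optical absorption.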

    For the current study, OSA Fellow Lihong V. Wang and his colleagues wanted to develop a new type of photoacoustic microscopy system that combined a fast imaging speed with low complexity and cost. A method previously created by Wang’s group, called multifocal optical-resolution photoacoustic tomography (MFOR-PACT), boosted imaging speed by adding a microlens array with multiple optical foci and an ultrasonic transducer array to a traditional setup. These modifications got rid of the bottleneck of slow mechanical scanning across the sample to form an image.

    “Applications of the MFOR-PACT system were limited because of the size and complexity of the array-based photoacoustic tomography system,” said Yang Li, the study’s first author.

    Improving scanning time

    The solution was to replace the array-based design with one that used a single-element ultrasonic transducer through an ergodic relay, which scrambles acoustic pulses based on their origin. A light-transparent, right-angle prism served as the ergodic relay that could then collect photoacoustic signals from the entire field-of-view with a single laser shot.
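
    A schematic sketch of the decoding idea, simplified from the published reconstruction and using synthetic data: if each point in the field of view produces its own characteristic, pre-calibrated signature after passing through the relay, a single scrambled measurement can be unmixed by correlating it against those stored signatures:

    import numpy as np

    rng = np.random.default_rng(0)
    n_pixels, n_samples = 64, 2048

    # Pre-calibrated impulse responses: one characteristic time trace per pixel,
    # measured beforehand by exciting each point individually (synthetic here).
    signatures = rng.standard_normal((n_pixels, n_samples))
    signatures /= np.linalg.norm(signatures, axis=1, keepdims=True)

    # A sample with photoacoustic sources at a few pixels of known strength.
    true_image = np.zeros(n_pixels)
    true_image[[5, 20, 41]] = [1.0, 0.6, 0.3]

    # One laser shot: the relay delivers the sum of the excited signatures
    # to the single-element transducer, plus a little noise.
    measurement = signatures.T @ true_image + 0.01 * rng.standard_normal(n_samples)

    # Decode by correlating the measurement against each calibrated signature.
    recovered = signatures @ measurement
    for idx in np.argsort(recovered)[::-1][:3]:
        print(f"pixel {idx:2d}: recovered amplitude {recovered[idx]:.2f}")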

    Li and his colleagues validated the new technique, called multifocal optical-resolution photoacoustic microscopy through an ergodic relay (MFOR-PAMER), with in vitro and in vivo experiments. For example, they successfully imaged blood vessels in a mouse ear with an optical resolution of 13 microns. In addition, MFOR-PAMER achieved a 400-fold improvement in scanning time compared with a traditional photoacoustic microscopy system at the same resolution.

    “One of the useful applications that we envisioned is using UV illumination for high-speed, label-free histological study of biological tissues,” said Li. “A conventional UV-based optical-resolution photoacoustic microscopy system required several hours to image a histology sample. Our system can potentially reduce the imaging time to less than a minute, which will be a significant improvement in efficiency in clinical settings.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Optics and Photonics News (OPN) is The Optical Society’s monthly news magazine. It provides in-depth coverage of recent developments in the field of optics and offers busy professionals the tools they need to succeed in the optics industry, as well as informative pieces on a variety of topics such as science and society, education, technology and business. OPN strives to make the various facets of this diverse field accessible to researchers, engineers, businesspeople and students. Contributors include scientists and journalists who specialize in the field of optics. We welcome your submissions.

     
  • richardmitnick 1:08 pm on August 17, 2020 Permalink | Reply
    Tags: "Toward Brighter Nanopixels Using Quantum Dots", A 3D-printing method devised by researchers in Korea and Hong Kong builds up nanoscale pixels of red green and blue colors by using a glass “nanopipet”., Optics & Photonics, The 3D-printing can produce 620-nm-wide pixels of pure color that are twice as bright as those made from 2D patterning.   

    From Optics & Photonics: “Toward Brighter Nanopixels Using Quantum Dots” 

    From Optics & Photonics

    17 August 2020
    Stewart Wills

    1
    A 3D-printing method devised by researchers in Korea and Hong Kong builds up nanoscale pixels of red, green and blue colors by using a glass “nanopipet” to deposit pillars of polystyrene ink doped with quantum dots. The team believes that the nanopillars, only 620 nm in diameter and 2 to 10 µm in height, can be packed together to achieve display resolutions more than five times greater than those achievable with current commercial technology. [Image: Courtesy of J. Pyo.]

    Display resolutions have improved incredibly in recent years—so much so that some wonder whether the newest generation of high-definition TVs, so-called 8K displays, has reached the limits of resolution enhancement that the human eye can distinguish (see Television Goes 8K, OPN, May 2020). But for some applications, such as microprojection and augmented and virtual reality (AR/VR), researchers continue to chase ever-smaller pixel sizes and higher pixel densities. And these efforts eventually run up against some practical limitations, particularly on pixel brightness, that are imposed by the 2D-patterning approaches commonly used to create display pixels.

    Now, researchers from the Republic of Korea and Hong Kong have proposed a novel 3D-printing approach to manufacturing pixels for display—one, they maintain, that could allow much smaller, nanoscale pixels to be crowded in at far higher densities than is possible with 2D-patterning methods (ACS Nano).

    The team’s approach rests on building up pixels vertically on free-standing nanopillars of a polymer that’s been spiked with luminescent nanoparticles, also known as quantum dots (QDs).

    The 3D-printing process, the research team reports, can produce 620-nm-wide pixels of pure color that are twice as bright as those made from 2D patterning. And the researchers claim that the method can lay down these nanoscale pixels at a “super-high density” some five times higher than limits of current commercial technology—a characteristic that could give the method potential not only in display technology but also in some niches in data storage, cryptography and other applications.

    Brightness limitations

    In addition to photolithography, common 2D methods for printing display pixels include inkjet and electrohydrodynamic-jet (e-jet) printing and transfer printing. These methods lay the light-emitting material down in a programmable pattern on a film layer that’s then used to build the display or other technology.

    The problem with these methods is that, the smaller the pixel gets, the less light-emitting material it can contain—and, thus, the dimmer it will be. The brightness can be pumped up somewhat by repeating the printing process to vertically thicken the pixel; however, this tends to smear out the pixel laterally in practice, reducing the achievable resolution.

    A 3D-printing approach

    To try out a different approach, the team behind the recently published research—led by Jaeyeon Pyo and Seung Kwon Seol of the Korea Electrotechnology Research Institute (KERI)—adapted a 3D-nanoprinting method that the researchers had developed four years earlier for fabricating nanophotonic waveguides [Advanced Optical Materials].

    The team began by selecting QDs, noted for high quantum efficiency and long-term stability, as the luminescent agents to be used in the pixels. The researchers then doped samples of a polystyrene polymer solution with these luminescent nanoparticles, creating three different polymer inks, each with dots emitting at a different wavelength—650 nm (red), 540 nm (green) or 480 nm (blue).

    3
    The quantum-dot-doped “nanopillars” are deposited by a glass pipette with a nozzle around 630 nm in diameter, which builds up each nanopillar before moving to the next, as shown in a video from the research team. [Image: Courtesy of J. Pyo]

    Next, the team got down to the business of manufacturing pixels with these QD-doped inks. To do so, they used a tapered glass “nanopipet,” with an opening diameter of only around 630 nm, to squirt out femtoliter quantities of the ink. As it moves upward away from the substrate, the nozzle builds up a pillar of polymer, around 620 nm in diameter and 2 to 10 µm in height, that rapidly cures in air.

    At the end of the run to build each pixel, the pipette is suddenly yanked upward to terminate the nanopillar. A motorized stage then adjusts the substrate position and the process is repeated to build the next red, green or blue pixel. (A video provided by the researchers shows the nanofabrication process in action.)

    Bright and controllable

    The result of the 3D-printing process is an array of slender, free-standing vertical polymer nanowires, each packed with quantum dots that emit light at a different color when stimulated by light or electricity.

    A variety of tests of the pixel emissions showed, not surprisingly, that the higher the pillar, the brighter the pixel. The tests also showed, however, that building the nanowires higher did not significantly affect the spot size of the pixel—suggesting that the approach offers the prospect of significantly brightening these tiny pixels, without sacrificing resolution. (Another way of pumping up the pixel brightness, the team found, was by simply increasing the density of quantum dots in the polymer solution.)
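
    A simple geometric illustration, using only the dimensions quoted in the article rather than the team's measurements, of why taller pillars hold more quantum dots without widening the pixel footprint:

    import math

    diameter_nm = 620              # pillar diameter reported in the article
    heights_um = [2, 4, 6, 8, 10]  # height range reported in the article

    radius_m = diameter_nm * 1e-9 / 2
    for h in heights_um:
        volume_um3 = math.pi * radius_m**2 * (h * 1e-6) * 1e18
        # QD content scales with volume; the footprint stays fixed at ~620 nm.
        print(f"height {h:2d} um -> volume {volume_um3:.2f} um^3 "
              f"({h / heights_um[0]:.0f}x the 2-um pillar)")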

    The researchers also tested out the pixels in a triangular “delta-type” pixel array, with the triangle consisting of one red-, one green- and one blue-emitting nanopillar as sub-pixels. By selectively tickling the sub-pixels with laser light, the team was able to nudge the QDs into emitting other apparent colors by mixing the red, green and blue components.

    Fivefold resolution increase?

    The team says its method can produce pixels twice as bright as with conventional thin-film-based methods, “with a lateral dimension of 620 nm and a pitch of 3 µm for each of the red, green and blue colors.” When laid out as subpixels in the delta-type geometry, the researchers say, the nanopillar emitters allow for a resolution of as high as 5600 pixels per inch (ppi). That is, according to a press release accompanying the research, more than five times the 1000-ppi limit of current commercial technology.

    At those resolutions, the team believes that the method could find use in future super-high-resolution displays, particularly for “nanodisplay”-type applications in microprojection and AR/VR, as well as certain types of wearable technology. But the researchers see some other intriguing applications as well—for example, in certain anti-forgery, information storage and cryptography applications. Getting to any of these on a commercial basis, the team acknowledges, will require “additional efforts” to scale up the femtoliter-nozzle system for manufacturing.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Optics and Photonics News (OPN) is The Optical Society’s monthly news magazine. It provides in-depth coverage of recent developments in the field of optics and offers busy professionals the tools they need to succeed in the optics industry, as well as informative pieces on a variety of topics such as science and society, education, technology and business. OPN strives to make the various facets of this diverse field accessible to researchers, engineers, businesspeople and students. Contributors include scientists and journalists who specialize in the field of optics. We welcome your submissions.

     
  • richardmitnick 12:55 pm on August 15, 2020 Permalink | Reply
    Tags: "Monitoring Volcanoes with Tough Optical Seismometers", Fumaroles-vents near the top of a volcano- are a crucial but untapped source of information about its state of play., Optics & Photonics, Researchers report an optical seismometer able to track critical seismic activity near an active fumarole despite the summit’s extreme heat and other dangers., The cable ends in a sealed passive geophone with a mirrored surface., , The geophone sensors do not contain any electronics that would be destroyed in the high heat and acidity of the summit environment., The instrument reportedly consists of a 1500-m fiber optic cable that- from an interrogator that houses the electronics- winds up the mountain to an intermediary connection.   

    From Optics & Photonics: “Monitoring Volcanoes with Tough Optical Seismometers” 

    From Optics & Photonics

    13 August 2020
    William G. Schulz

    1
    A researcher lays fiber optic cable close to the fumarole at the summit of La Soufrière de Guadeloupe. [Image: Romain Feron]

    Atop the Caribbean’s fiery La Soufrière de Guadeloupe Volcano, researchers report an optical seismometer able to track critical seismic activity near an active fumarole, despite the summit’s extreme heat and other dangers (Seismological Research Letters).

    The optomechanical instrument—under development for more than a decade, and installed in September 2019—was “built to withstand extreme conditions that would rapidly destroy conventional instruments,” according to the authors, members of an interdisciplinary team representing several French institutions.

    “So far, it is working well,” the team writes of the optical seismometer, which is based on Fabry-Pérot interferometry. “It is, to our knowledge, the first high-resolution optical seismometer ever installed on an active volcano or other active hazardous zone.”

    Untapped information sources

    Fumaroles, or vents, near the top of a volcano are a crucial but untapped source of information about its state of play, the researchers say. By monitoring seismic activity there, researchers can infer changes to the vents’ internal structures, gain insight about deep magma flows, and more. Studying and monitoring such internal structures and activities is essential, the authors stress, as knowledge about them can reveal why and how volcanoes erupt, and can help researchers better anticipate potential volcanic disasters.

    Seismologist and geophysicist Pascal Bernard, one of the paper’s co-authors at the Institut de Physique du Globe, Paris (IPGP), says he began the project “with the idea to improve the measurement capabilities of our seismological and volcanological observatories. I investigated the possibilities of optical velocimetry [Doppler effect] but I realized that its accuracy was far from what I needed for [recording] very small amplitude ground-motion signals.”

    A choice interferometer

    The Fabry-Pérot interferometer design was chosen by the research team in 2008, Bernard says, but lack of funding delayed progress.

    2
    http://courses.washington.edu/phys331/fabry-perot/fabry-perot.php

    The instrument reportedly consists of a 1500-m fiber optic cable that, from an interrogator that houses the electronics, winds up the mountain to an intermediary connection. From there, a shorter optical cable, sheathed for better protection in the harsh environs of the summit, connects with a geophone sensor, which monitors ground motions.

    The fiber optic cable comprises eight fibers up to the intermediary connection, Bernard says. Only four fibers continue to connect with a geophone at the fumarole; four are held back for possible use with other optics-based instruments—a pressure meter, long-base tiltmeter or hydrophones, for example.

    The cable ends in a sealed, passive geophone with a mirrored surface. The short gap between the terminus of the fiber optic cable and the mirrored surface of the geophone—a 10-Hz oscillator on springs—is the interferometer cavity. As motions of the geophone change the gap between the geophone and the various optical fibers for different measurement axes, the motion is read as a change in the interference signal. Light reflected off the geophone is sent back down the same cable to the interrogator at its safe distance down the mountain.
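
    As a generic illustration of that readout principle, the sketch below evaluates a standard Fabry-Pérot (Airy) response as the gap changes by a few hundred nanometers; the wavelength, nominal gap and cavity finesse are assumed values, not the team's specifications:

    import numpy as np

    wavelength = 1550e-9   # assumed telecom-band laser wavelength, m
    finesse_coeff = 10.0   # assumed coefficient of finesse (low-finesse cavity)
    nominal_gap = 50e-6    # assumed fiber-to-mirror gap, m

    # Ground motion modulates the gap by tens to hundreds of nanometers.
    displacement = np.linspace(-200e-9, 200e-9, 5)
    gap = nominal_gap + displacement

    # Airy-type Fabry-Perot response as a function of the round-trip phase.
    phase = 4 * np.pi * gap / wavelength
    intensity = 1 / (1 + finesse_coeff * np.sin(phase / 2) ** 2)

    for d, i in zip(displacement, intensity):
        print(f"gap change {d * 1e9:+6.1f} nm -> normalized intensity {i:.3f}")
    # Inverting this curve (together with a suitable modulation scheme)
    # converts intensity changes back into nanometer-scale geophone motion.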

    No electronics here

    The geophone sensors do not contain any electronics that would be destroyed in the high heat and acidity of the summit environment, the researchers say. They are encased entirely in polytetrafluoroethylene (PTFE), or Teflon. As a result, the researchers characterize the seismometer as “a very sturdy instrument” that’s “essentially oblivious to any natural assault short of a flood of acidic fluid, slope-collapse landslide or major explosion.”

    Innovations unique to this setup, Bernard says, include “construction of the relevant modulations of the optical signals, and the algorithms for the real-time processing of the optical signal. We presently use standard laser diodes and photodiodes [for generating and receiving the laser light].”

    3
    The journey to install the optical seismometer on the volcano at La Soufrière de Guadeloupe took researchers into some perilous territory. [Image: Romain Feron.]

    Bernard sees other potential applications for the optical seismometer, including oil and geothermal well assessment, where heat can compromise electronic approaches; natural areas prone to lightning, which can play havoc with electrical systems; industrial and nuclear power and waste plants; particle accelerators, where radiation can damage electrical sensors; and even the deep ocean, in cabled ocean-bottom observatories. Replacing a failed electrical sensor at 3000 m depth, Bernard points out, can cost as much as US $1 million; optical sensors, he argues, have a much lower chance of failure.

    A hazardous yet promising journey

    In the paper, the team describes a hazard-filled journey, carrying 40-kg segments of optical cable on their backs up a rough mountain slope, laying the long cable as they went. The cable needed to be placed off a footpath, in vegetation, occasionally braced with metal bars and conduits for protection. At the summit, where the team dug trenches to lay cable and buried the geophone sensors, they wore gas masks to tolerate toxic fumes from the fumaroles.

    “Our work at La Soufrière is just started,” the researchers write. They hope similar stations placed elsewhere on the volcano can provide sustained information during a crisis. What’s more, they say that they’re also pursuing “the design of many other optical geophysical sensors” using the same interrogator, such as hydrophones, microphones, strain-meters and gravimeters.

    The team includes researchers from France’s ESEO Group, Le Mans University, and IPGP’s Volcanological and Seismological Observatory of Guadeloupe.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Optics and Photonics News (OPN) is The Optical Society’s monthly news magazine. It provides in-depth coverage of recent developments in the field of optics and offers busy professionals the tools they need to succeed in the optics industry, as well as informative pieces on a variety of topics such as science and society, education, technology and business. OPN strives to make the various facets of this diverse field accessible to researchers, engineers, businesspeople and students. Contributors include scientists and journalists who specialize in the field of optics. We welcome your submissions.

     