Tagged: Optics

  • richardmitnick 10:22 am on February 23, 2019 Permalink | Reply
    Tags: Optics, Semiconductor quantum dots

    From University of Cambridge: “Physicists get thousands of semiconductor nuclei to do ‘quantum dances’ in unison” 


    From University of Cambridge

    22 Feb 2019
    Communications office

    Theoretical ESR spectrum buildup as a function of two-photon detuning δ and drive time τ, for a Rabi frequency of Ω = 3.3 MHz on the central transition. Credit: University of Cambridge.

    A team of Cambridge researchers have found a way to control the sea of nuclei in semiconductor quantum dots so they can operate as a quantum memory device.

    Quantum dots are crystals made up of thousands of atoms, and each of these atoms interacts magnetically with the trapped electron. Left to its own devices, this interaction between the electron and the nuclear spins limits the usefulness of the electron as a quantum bit, or qubit.

    Led by Professor Mete Atatüre from Cambridge’s Cavendish Laboratory, the researchers are exploiting the laws of quantum physics and optics to investigate computing, sensing or communication applications.

    “Quantum dots offer an ideal interface, as mediated by light, to a system where the dynamics of individual interacting spins could be controlled and exploited,” said Atatüre, who is a Fellow of St John’s College. “Because the nuclei randomly ‘steal’ information from the electron they have traditionally been an annoyance, but we have shown we can harness them as a resource.”

    The Cambridge team found a way to exploit the interaction between the electron and the thousands of nuclei, using lasers to ‘cool’ the nuclei to less than 1 millikelvin, a thousandth of a degree above absolute zero. They then showed they could control and manipulate the thousands of nuclei as if they formed a single body in unison, like a second qubit. This proves that the nuclei in the quantum dot can exchange information with the electron qubit and can be used to store quantum information as a memory device. The results are reported in the journal Science.

    Quantum computing aims to harness fundamental concepts of quantum physics, such as entanglement and superposition, to outperform current approaches to computing, and could revolutionise technology, business and research. Just like classical computers, quantum computers need a processor, memory, and a bus to transport information back and forth. The processor is a qubit, which can be an electron trapped in a quantum dot; the bus is a single photon, which these quantum dots generate and which is ideal for exchanging information. But the missing link for quantum dots has been quantum memory.

    Atatüre said: “Instead of talking to individual nuclear spins, we worked on accessing collective spin waves by lasers. This is like a stadium where you don’t need to worry about who raises their hands in the Mexican wave going round, as long as there is one collective wave because they all dance in unison.

    “We then went on to show that these spin waves have quantum coherence. This was the missing piece of the jigsaw and we now have everything needed to build a dedicated quantum memory for every qubit.”

    In quantum technologies, the photon, the qubit and the memory need to interact with each other in a controlled way. This is mostly realised by interfacing different physical systems to form a single hybrid unit which can be inefficient. The researchers have been able to show that in quantum dots, the memory element is automatically there with every single qubit.

    Dr Dorian Gangloff, one of the first authors of the paper [Science] and a Fellow at St John’s, said the discovery will renew interest in these types of semiconductor quantum dots. Dr Gangloff explained: “This is a Holy Grail breakthrough for quantum dot research – both for quantum memory and fundamental research; we now have the tools to study dynamics of complex systems in the spirit of quantum simulation.”

    The long-term opportunities of this work could be seen in the field of quantum computing. Last month, IBM launched the world’s first commercial quantum computer, and the Chief Executive of Microsoft has said quantum computing has the potential to ‘radically reshape the world’.

    Gangloff said: “The impact of the qubit could be half a century away but the power of disruptive technology is that it is hard to conceive of the problems we might open up – you can try to think of it as known unknowns but at some point you get into new territory. We don’t yet know the kind of problems it will help to solve which is very exciting.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    U Cambridge Campus

    The University of Cambridge (abbreviated as Cantab in post-nominal letters) is a collegiate public research university in Cambridge, England. Founded in 1209, Cambridge is the second-oldest university in the English-speaking world and the world’s fourth-oldest surviving university. It grew out of an association of scholars who left the University of Oxford after a dispute with townsfolk. The two ancient universities share many common features and are often jointly referred to as “Oxbridge”.

    Cambridge is formed from a variety of institutions which include 31 constituent colleges and over 100 academic departments organised into six schools. The university occupies buildings throughout the town, many of which are of historical importance. The colleges are self-governing institutions founded as integral parts of the university. In the year ended 31 July 2014, the university had a total income of £1.51 billion, of which £371 million was from research grants and contracts. The central university and colleges have a combined endowment of around £4.9 billion, the largest of any university outside the United States. Cambridge is a member of many associations and forms part of the “golden triangle” of leading English universities and Cambridge University Health Partners, an academic health science centre. The university is closely linked with the development of the high-tech business cluster known as “Silicon Fen”.

  • richardmitnick 4:03 pm on November 5, 2018 Permalink | Reply
    Tags: 'Folded' Optical Devices Manipulate Light in a New Way, Compact spectrometer, Metasurface optics, Optics

    From Caltech: “‘Folded’ Optical Devices Manipulate Light in a New Way” 


    From Caltech


    Robert Perkins
    (626) 395-1862

    An array of 11 metasurface-based optical spectrometers, pictured here before the final fabrication step. Each spectrometer is composed of three metasurfaces that disperse and focus light with different wavelengths to different points. Credit: Faraon Lab/Caltech

    The future of optics

    The next generation of electronic devices, ranging from personal health monitors and augmented reality headsets to sensitive scientific instruments that would only be found in a laboratory, will likely incorporate components that use metasurface optics, according to Andrei Faraon, professor of applied physics in Caltech’s Division of Engineering and Applied Science. Metasurface optics manipulate light similarly to how a lens might—bending, focusing, or reflecting it—but do so in a finely controllable way using carefully designed microscopic structures on an otherwise flat surface. That makes them both compact and finely tunable, attractive qualities for electronic devices. However, engineers will need to overcome several challenges to make them widespread.

    The problem

    Most optical systems require more than a single metasurface to function properly. In metasurface-based optical systems, most of the total volume inside the device is just free space through which light propagates between different elements. The need for this free space makes the overall device difficult to scale down, while integrating and aligning multiple metasurfaces into a single device can be complicated and expensive.

    The invention

    To overcome this limitation, the Faraon group has introduced a technology called “folded metasurface optics,” which is a way of printing multiple types of metasurfaces onto either side of a substrate, like glass. In this way, the substrate itself becomes the propagation space for the light. As a proof of concept, the team used the technique to build a spectrometer, which is a scientific instrument for splitting light into different colors, or wavelengths, and measuring their corresponding intensities. (Spectrometers are used in a variety of fields; for example, in astronomy they are used to determine the chemical makeup of stars based on the light they emit.) The spectrometer built by Faraon’s team is 1 millimeter thick and is composed of three reflective metasurfaces placed next to each other that split and reflect light, and ultimately focus it onto a detector array. It was fabricated at the Kavli Nanoscience Institute, and its design is described in a paper published by Nature Communications on October 10.
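    To get a feel for what the ‘folding’ buys, consider the geometry: light bouncing between the two faces of the substrate traverses far more optical path than the device is thick. A rough sketch of that trade-off, in which the substrate thickness comes from the article but the internal angle and number of passes are illustrative assumptions, not the published design:

```python
import math

# Hypothetical folded-optics geometry: light zig-zags between the two
# faces of the substrate. Thickness is from the article; the internal
# angle and number of passes are illustrative assumptions.
t = 1e-3                        # substrate thickness: 1 mm
theta = math.radians(30)        # assumed internal propagation angle
passes = 3                      # three metasurfaces -> three internal passes
path = passes * t / math.cos(theta)
print(f"optical path folded into a 1 mm-thick device: {path * 1e3:.2f} mm")  # ~3.46 mm
```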

    What it could be used for

    A compact spectrometer like the one developed by Faraon’s group has a variety of uses, including as a noninvasive blood-glucose measuring system that could be invaluable for diabetes patients. The platform uses multiple metasurface elements that are fabricated in a single step, so, in general, it provides a potential path toward complex but inexpensive optical systems.

    The details

    The paper is titled “Compact folded metasurface spectrometer.” Co-authors include Caltech graduate students MohammadSadegh Faraji-Dana (MS ’18), Ehsan Arbabi (MS ’17), Seyedeh Mahsa Kamali (MS ’17), and Hyounghan Kwon (MS ’18), and Amir Arbabi of the University of Massachusetts Amherst. This research was supported by Samsung Electronics, the National Sciences and Engineering Research Council of Canada, and the U.S. Department of Energy.

    See the full article here.


    The California Institute of Technology (commonly referred to as Caltech) is a private research university located in Pasadena, California, United States. Caltech has six academic divisions with strong emphases on science and engineering. Its 124-acre (50 ha) primary campus is located approximately 11 mi (18 km) northeast of downtown Los Angeles. “The mission of the California Institute of Technology is to expand human knowledge and benefit society through research integrated with education. We investigate the most challenging, fundamental problems in science and technology in a singularly collegial, interdisciplinary atmosphere, while educating outstanding students to become creative members of society.”

    Caltech campus

  • richardmitnick 1:02 pm on October 14, 2018 Permalink | Reply
    Tags: Optics, The World's Fastest Camera Can 'Freeze Time' Show Beams of Light in Slow Motion, University of Quebec

    From Science Alert: “The World’s Fastest Camera Can ‘Freeze Time’, Show Beams of Light in Slow Motion” 


    From Science Alert

    14 OCT 2018

    (Adobe Stock)

    When you push the button on a laser pointer, its entire beam seems to appear instantaneously. In reality, though, the photons shoot out like water from a hose, just at a speed too fast to see.

    Too fast for the human eye to see, anyway.

    Researchers at Caltech and the University of Quebec have invented what is now the world’s fastest camera. It takes a mind-boggling 10 trillion frames per second — enough to record footage of a pulse of light as it travels through space.

    The extraordinary camera, which the researchers describe in a paper published Monday in the journal Light: Science & Applications, builds on a technology called compressed ultrafast photography (CUP).

    Figure 1. The trillion-frame-per-second compressed ultrafast photography system. INRS

    CUP can capture an impressive 100 billion frames per second, but by simultaneously recording a static image and performing some tricky math, the researchers were able to reconstruct footage at 10 trillion frames per second.

    They call the new technique T-CUP, and while they don’t say what the “T” stands for, our money is on “trillion.”
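    The article doesn’t spell out the ‘tricky math’, but published CUP-style systems lean on compressed sensing: recovering a sparse signal from far fewer measurements than its length. As a toy sketch of that idea — a generic ISTA recovery on synthetic data, not the T-CUP algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 80, 5                 # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true                       # compressed measurements (m << n)

# ISTA: iterative shrinkage-thresholding for the l1-regularised problem
L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
lam = 0.01                           # sparsity weight (illustrative)
x = np.zeros(n)
for _ in range(500):
    x = x - (A.T @ (A @ x - y)) / L                      # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0)  # soft threshold

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # small relative error
```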

    Ludicrous Speed

    The camera more than doubles the speed record set in 2015 by a camera that took 4.4 trillion shots per second. Its inventors hope it’ll be useful in biomedical and materials research.

    But they’ve already turned their attention to smashing their newly set record.

    “It’s an achievement in itself,” said lead author Jinyang Liang in a press release, “but we already see possibilities for increasing the speed to up to one quadrillion frames per second!”

    See the full article here.



  • richardmitnick 7:35 am on September 7, 2018 Permalink | Reply
    Tags: Fish-eye lens may entangle pairs of atoms, Optics

    From MIT News: “Fish-eye lens may entangle pairs of atoms” 


    From MIT News

    September 5, 2018
    Jennifer Chu

    James Maxwell was the first to realize that light is able to travel in perfect circles within the fish-eye lens because the density of the lens changes, with material being thickest at the middle and gradually thinning out toward the edges. No image credit.

    Scientists find a theoretical optical device may have uses in quantum computing.

    Nearly 150 years ago, the physicist James Maxwell proposed that a circular lens that is thickest at its center, and that gradually thins out at its edges, should exhibit some fascinating optical behavior. Namely, when light is shone through such a lens, it should travel around in perfect circles, creating highly unusual, curved paths of light.

    He also noted that such a lens, at least broadly speaking, resembles the eye of a fish. The lens configuration he devised has since been known in physics as Maxwell’s fish-eye lens — a theoretical construct that is only slightly similar to commercially available fish-eye lenses for cameras and telescopes.

    Now scientists at MIT and Harvard University have for the first time studied this unique, theoretical lens from a quantum mechanical perspective, to see how individual atoms and photons may behave within the lens. In a study published Wednesday in Physical Review A, they report that the unique configuration of the fish-eye lens enables it to guide single photons through the lens, in such a way as to entangle pairs of atoms, even over relatively long distances.

    Entanglement is a quantum phenomenon in which the properties of one particle are linked, or correlated, with those of another particle, even over vast distances. The team’s findings suggest that fish-eye lenses may be a promising vehicle for entangling atoms and other quantum bits, which are the necessary building blocks for designing quantum computers.

    “We found that the fish-eye lens has something that no other two-dimensional device has, which is maintaining this entangling ability over large distances, not just for two atoms, but for multiple pairs of distant atoms,” says first author Janos Perczel, a graduate student in MIT’s Department of Physics. “Entanglement and connecting these various quantum bits can be really the name of the game in making a push forward and trying to find applications of quantum mechanics.”

    The team also found that the fish-eye lens, contrary to recent claims, does not produce a perfect image. Scientists have thought that Maxwell’s fish-eye may be a candidate for a “perfect lens” — a lens that can go beyond the diffraction limit, meaning that it can focus light to a point that is smaller than the light’s own wavelength. This perfect imaging, scientists predict, should produce an image with essentially unlimited resolution and extreme clarity.

    However, by modeling the behavior of photons through a simulated fish-eye lens, at the quantum level, Perczel and his colleagues concluded that it cannot produce a perfect image, as originally predicted.

    “This tells you that there are these limits in physics that are really difficult to break,” Perczel says. “Even in this system, which seemed to be a perfect candidate, this limit seems to be obeyed. Perhaps perfect imaging may still be possible with the fish eye in some other, more complicated way, but not as originally proposed.”

    Perczel’s co-authors on the paper are Peter Komar and Mikhail Lukin from Harvard University.

    A circular path

    Maxwell was the first to realize that light is able to travel in perfect circles within the fish-eye lens because the density of the lens changes, with material being thickest at the middle and gradually thinning out toward the edges. The denser a material, the slower light moves through it. This explains the optical effect when a straw is placed in a glass half full of water. Because the water is so much denser than the air above it, light suddenly moves more slowly, bending as it travels through water and creating an image that looks as if the straw is disjointed.

    In the theoretical fish-eye lens, the differences in density are much more gradual and are distributed in a circular pattern, in such a way that the lens curves rather than bends light, guiding it in perfect circles within the lens.
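    Maxwell’s construction has a standard closed form: the refractive index falls off as n(r) = n0 / (1 + (r/R)²), densest at the centre and gradually thinning toward the edge. A quick sketch of that profile (the values of n0 and R here are illustrative, not taken from the paper):

```python
import numpy as np

# Standard Maxwell fish-eye index profile: n(r) = n0 / (1 + (r/R)^2).
# n0 and R are illustrative values, not taken from the paper.
def fisheye_index(r, n0=2.0, R=1.0):
    return n0 / (1.0 + (r / R) ** 2)

r = np.linspace(0.0, 1.0, 5)
print(fisheye_index(r))   # densest at the centre, thinning toward the edge
```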

    In 2009, Ulf Leonhardt, a physicist at the Weizmann Institute of Science in Israel, was studying the optical properties of Maxwell’s fish-eye lens and observed that, when photons are released through the lens from a single point source, the light travels in perfect circles through the lens and collects at a single point at the opposite end, with very little loss of light.

    “None of the light rays wander off in unwanted directions,” Perczel says. “Everything follows a perfect trajectory, and all the light will meet at the same time at the same spot.”

    Leonhardt, in reporting his results, made a brief mention as to whether the fish-eye lens’ single-point focus might be useful in precisely entangling pairs of atoms at opposite ends of the lens.

    “Mikhail [Lukin] asked him whether he had worked out the answer, and he said he hadn’t,” Perczel says. “That’s how we started this project and started digging deeper into how well this entangling operation works within the fish-eye lens.”

    Playing photon ping-pong

    To investigate the quantum potential of the fish-eye lens, the researchers modeled the lens as the simplest possible system, consisting of two atoms, one at either end of a two-dimensional fish-eye lens, and a single photon, aimed at the first atom. Using established equations of quantum mechanics, the team tracked the photon at any given point in time as it traveled through the lens, and calculated the state of both atoms and their energy levels through time.

    They found that when a single photon is shone through the lens, it is temporarily absorbed by an atom at one end of the lens. It then circles through the lens, to the second atom at the precise opposite end of the lens. This second atom momentarily absorbs the photon before sending it back through the lens, where the light collects precisely back on the first atom.

    “The photon is bounced back and forth, and the atoms are basically playing ping pong,” Perczel says. “Initially only one of the atoms has the photon, and then the other one. But between these two extremes, there’s a point where both of them kind of have it. It’s this mind-blowing quantum mechanics idea of entanglement, where the photon is completely shared equally between the two atoms.”
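    The ‘ping-pong’ picture maps onto a textbook two-state model: a single excitation oscillating between the two atoms with some effective coupling g. A toy sketch of that oscillation — the coupling strength and timescale are assumptions, since the real exchange is mediated by the photon circulating through the lens:

```python
import numpy as np

# Toy two-state model of the photon "ping-pong": one excitation
# oscillating between atom 1 and atom 2 with effective coupling g.
# g and the timescale are illustrative assumptions.
g = 1.0
t = np.linspace(0.0, np.pi / g, 201)
p1 = np.cos(g * t) ** 2    # probability the excitation sits on atom 1
p2 = np.sin(g * t) ** 2    # probability it sits on atom 2

# Halfway through one exchange the excitation is shared 50/50 --
# the maximally entangled point described in the quote.
t_half = np.pi / (4.0 * g)
print(np.cos(g * t_half) ** 2, np.sin(g * t_half) ** 2)
```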

    Perczel says that the photon is able to entangle the atoms because of the unique geometry of the fish-eye lens. The lens’ density is distributed in such a way that it guides light in a perfectly circular pattern and can cause even a single photon to bounce back and forth between two precise points along a circular path.

    “If the photon just flew away in all directions, there wouldn’t be any entanglement,” Perczel says. “But the fish-eye gives this total control over the light rays, so you have an entangled system over long distances, which is a precious quantum system that you can use.”

    As they increased the size of the fish-eye lens in their model, the atoms remained entangled, even over relatively large distances of tens of microns. They also observed that, even if some light escaped the lens, the atoms were able to share enough of a photon’s energy to remain entangled. Finally, as they placed more pairs of atoms in the lens, opposite to one another, along with corresponding photons, these atoms also became simultaneously entangled.

    “You can use the fish eye to entangle multiple pairs of atoms at a time, which is what makes it useful and promising,” Perczel says.

    Fishy secrets

    In modeling the behavior of photons and atoms in the fish-eye lens, the researchers also found that, as light collected on the opposite end of the lens, it did so within an area that was larger than the wavelength of the photon’s light, meaning that the lens likely cannot produce a perfect image.

    “We can precisely ask the question during this photon exchange, what’s the size of the spot to which the photon gets recollected? And we found that it’s comparable to the wavelength of the photon, and not smaller,” Perczel says. “Perfect imaging would imply it would focus on an infinitely sharp spot. However, that is not what our quantum mechanical calculations showed us.”

    Going forward, the team hopes to work with experimentalists to test the quantum behaviors they observed in their modeling. In fact, in their paper, the team also briefly proposes a way to design a fish-eye lens for quantum entanglement experiments.

    “The fish-eye lens still has its secrets, and remarkable physics buried in it,” Perczel says. “But now it’s making an appearance in quantum technologies where it turns out this lens could be really useful for entangling distant quantum bits, which is the basic building block for building any useful quantum computer or quantum information processing device.”

    See the full article here.



    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

  • richardmitnick 8:09 pm on August 23, 2018 Permalink | Reply
    Tags: 3-D x-ray imaging that can visualize bulky materials in great detail, Multilayer Laue lenses (MLLs), HXN’s special optics, Novel X-Ray Optics Boost Imaging Capabilities at NSLS-II, Optics

    From Brookhaven National Lab: “Novel X-Ray Optics Boost Imaging Capabilities at NSLS-II” 

    From Brookhaven National Lab

    August 23, 2018
    Rebecca Wilkin

    Brookhaven Lab scientists capture high-resolution, 3-D images of thick materials more efficiently than ever before.

    NSLS-II scientist Hande Öztürk stands next to the Hard X-ray Nanoprobe (HXN) beamline, where her research team developed the new x-ray imaging technique. No photo credit.

    Scientists at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory have developed a new approach to 3-D x-ray imaging that can visualize bulky materials in great detail—an impossible task with conventional imaging methods. The novel technique could help scientists unlock clues about the structural information of countless materials, from batteries to biological systems.

    The scientists developed their approach at Brookhaven’s National Synchrotron Light Source II (NSLS-II)—a DOE Office of Science User Facility where scientists use ultra-bright x-rays to reveal details at the nanoscale. The team is located at NSLS-II’s Hard X-ray Nanoprobe (HXN) beamline, an experimental station that uses advanced lenses to offer world-leading resolution, all the way down to 10 nanometers—about one ten-thousandth the diameter of a human hair.

    HXN produces remarkably high-resolution images that can provide scientists with a comprehensive view of different material properties in 2-D and 3-D. The beamline also has a unique combination of in situ and operando capabilities—methods of studying materials in real-life operating conditions. However, scientists who use x-ray microscopes have been restricted by the size and thickness of the materials they can study.

    “The x-ray imaging community is still facing major challenges in fully exploiting the potential of beamlines like HXN, especially for obtaining high-resolution details from thick samples,” said Yong Chu, lead beamline scientist at HXN. “Obtaining quality, high-resolution images can become challenging when a material is thick—that is, thicker than the x-ray optics’ depth of focus.”

    Now, scientists at HXN have developed an efficient approach to studying thick samples without sacrificing the excellent resolution that HXN provides. They describe their approach in a paper published in the journal Optica.

    “The ultimate goal of our research is to break the technical barrier imposed on sample thickness and develop a new way of performing 3-D imaging—one that involves mathematically slicing through the sample,” said Xiaojing Huang, a scientist at HXN and a co-author of the paper.

    The research team is pictured at the HXN workstation. Standing, from left to right, are Xiaojing Huang, Hanfei Yan, Evgeny Nazaretski, Yong Chu, Mingyuan Ge, and Zhihua Dong. Sitting, from left to right, are Hande Öztürk and Meifeng Lin. Not pictured: Ian Robinson.

    The conventional method of obtaining a 3-D image involves collecting and combining a series of 2-D images. To obtain these 2-D images, the scientists typically rotate the sample 180 degrees; however, large samples cannot easily rotate within the limited space of typical x-ray microscopes. This limitation, in addition to the challenge of imaging thick samples, makes it nearly impossible to reconstruct a 3-D image with high resolution.

    “Instead of collecting a series of 2-D projections by rotating the sample, we simply ‘slice’ the thick material into a series of thin layers,” said lead author Hande Öztürk. “This slicing process is carried out mathematically without physically modifying the sample.”
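    The numerical slicing rests on being able to propagate the x-ray wavefield between thin layers, for which the angular-spectrum method is the standard numerical tool. A minimal sketch of one such propagation step, with illustrative parameters rather than the beamline’s actual ones:

```python
import numpy as np

# Angular-spectrum free-space propagation between two thin slices.
# Wavelength, pixel size, and slice spacing are illustrative only.
def propagate(field, wavelength, pixel, dz):
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0))
    H = np.exp(1j * kz * dz) * (arg > 0)     # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

n = 128
field = np.zeros((n, n), complex)
field[n//2-8:n//2+8, n//2-8:n//2+8] = 1.0    # small square aperture
# propagate across an assumed 10-micron gap between "slices"
out = propagate(field, wavelength=1e-10, pixel=1e-8, dz=10e-6)
print(abs(out).max())   # the field has diffracted and spread out
```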

    Their technique benefits from HXN’s special optics, called Multilayer Laue lenses (MLLs), which are engineered to focus x-rays into a tiny point. These lenses create favorable conditions for studying thinner slices of thick materials, while also reducing the measurement time.

    “HXN’s unique MLLs have a high focusing efficiency, so we can spend much less time collecting the signal we need,” said Hanfei Yan, a scientist at HXN and a co-author of the paper.

    By combining the MLL optics and the multi-slice approach, the HXN scientists were able to visualize two layers of nanoparticles separated by only 10 microns—about one tenth the diameter of a human hair—and with a resolution 100 times smaller. Additionally, the method significantly cut down the time needed to obtain a single image.

    “This development provides an exciting opportunity to perform 3-D imaging on samples that are very difficult to image with conventional methods—for example, a battery with a complicated electrochemical cell,” said Chu. He added that this approach could be very useful for a wide variety of future research applications.

    This study was supported by Brookhaven Lab’s Laboratory Directed Research and Development program. Operations at NSLS-II are supported by DOE’s Office of Science.

    See the full article here.



    BNL Campus





    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

  • richardmitnick 8:14 am on August 20, 2018 Permalink | Reply
    Tags: Lincoln Laboratory undersea optical communications, Optics

    From MIT News: “Advancing undersea optical communications” 


    From MIT News

    A remotely operated vehicle and undersea terminal emits a coarse acquisition stabilized beam after locking onto another lasercom terminal. Photo: Nicole Fandel

    Staff performed tests with the undersea optical communications system at the Boston Sports Club pool in Lexington, proving that two underwater vehicles could efficiently search and locate each other. After detecting the remote terminal’s beacon, the local terminal is able to lock on and pull into coarse track in less than one second. Photo courtesy of the research team.

    Lincoln Laboratory researchers are applying narrow-beam laser technology to enable communications between underwater vehicles.

    Nearly five years ago, NASA and Lincoln Laboratory made history when the Lunar Laser Communication Demonstration (LLCD) used a pulsed laser beam to transmit data from a satellite orbiting the moon to Earth — more than 239,000 miles — at a record-breaking download speed of 622 megabits per second.
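    The quoted figures are easy to sanity-check. Taking only the article’s distance and data rate, the rest is arithmetic:

```python
# Sanity-checking the LLCD numbers quoted above. Distance and data rate
# are from the article; the rest is arithmetic.
C = 299_792_458                       # speed of light, m/s
distance_m = 239_000 * 1609.344       # ~239,000 miles in metres
rate_bps = 622e6                      # 622 megabits per second

one_way_delay = distance_m / C
print(f"one-way light time: {one_way_delay:.2f} s")    # ~1.28 s
print(f"time to send 1 GB:  {8e9 / rate_bps:.1f} s")   # ~12.9 s
```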

    MIT Lincoln Laboratory

    Now, researchers at Lincoln Laboratory are aiming to once again break new ground by applying the laser beam technology used in LLCD to underwater communications.

    “Both our undersea effort and LLCD take advantage of very narrow laser beams to deliver the necessary energy to the partner terminal for high-rate communication,” says Stephen Conrad, a staff member in the Control and Autonomous Systems Engineering Group, who developed the pointing, acquisition, and tracking (PAT) algorithm for LLCD. “In regard to using narrow-beam technology, there is a great deal of similarity between the undersea effort and LLCD.”

    However, undersea laser communication (lasercom) presents its own set of challenges. In the ocean, laser beams are hampered by significant absorption and scattering, which restrict both the distance the beam can travel and the data signaling rate. To address these problems, the Laboratory is developing narrow-beam optical communications that use a beam from one underwater vehicle pointed precisely at the receive terminal of a second underwater vehicle.

    This technique contrasts with the more common undersea communication approach that sends the transmit beam over a wide angle but reduces the achievable range and data rate. “By demonstrating that we can successfully acquire and track narrow optical beams between two mobile vehicles, we have taken an important step toward proving the feasibility of the laboratory’s approach to achieving undersea communication that is 10,000 times more efficient than other modern approaches,” says Scott Hamilton, leader of the Optical Communications Technology Group, which is directing this R&D into undersea communication.

    Most above-ground autonomous systems rely on the use of GPS for positioning and timing data; however, because GPS signals do not penetrate the surface of water, submerged vehicles must find other ways to obtain these important data. “Underwater vehicles rely on large, costly inertial navigation systems, which combine accelerometer, gyroscope, and compass data, as well as other data streams when available, to calculate position,” says Thomas Howe of the research team. “The position calculation is noise sensitive and can quickly accumulate errors of hundreds of meters when a vehicle is submerged for significant periods of time.”
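
    The drift Howe describes can be sketched with a toy dead-reckoning loop, in which a slowly wandering sensor bias gets integrated into the position estimate. The noise figures below are made up for illustration and are not the team's navigation model:

```python
import random

def dead_reckon(steps, dt=1.0, speed=1.5, noise=0.01):
    """Toy 1-D dead reckoning: integrate a velocity estimate whose
    error is a slowly drifting random-walk bias, loosely mimicking
    an inertial navigation system. Returns accumulated error (m)."""
    position, bias = 0.0, 0.0
    for _ in range(steps):
        bias += random.gauss(0.0, noise)   # sensor bias wanders over time
        measured_speed = speed + bias      # biased velocity estimate
        position += measured_speed * dt    # integrate velocity to position
    true_position = speed * steps * dt
    return abs(position - true_position)

random.seed(0)
# Error grows rapidly with submerged time: a 10-minute run vs. a 6-hour run.
print(dead_reckon(600), dead_reckon(6 * 3600))
```

    Because the bias itself is integrated, the error grows faster than linearly with time, which is why long submerged runs can accumulate errors of hundreds of meters.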

    This positional uncertainty can make it difficult for an undersea terminal to locate and establish a link with incoming narrow optical beams. For this reason, “We implemented an acquisition scanning function that is used to quickly translate the beam over the uncertain region so that the companion terminal is able to detect the beam and actively lock on to keep it centered on the lasercom terminal’s acquisition and communications detector,” researcher Nicolas Hardy explains. Using this methodology, two vehicles can locate, track, and effectively establish a link, despite the independent movement of each vehicle underwater.
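
    One common way to implement such an acquisition scan — offered here purely as a hypothetical sketch, since the Laboratory has not published its algorithm — is an outward spiral whose rings are spaced slightly less than one beam width apart, so the beam tiles the entire uncertainty region without gaps:

```python
import math

def spiral_scan(uncertainty_radius, beam_width):
    """Pointing offsets (arbitrary angular units) for an Archimedean
    spiral that sweeps a circular uncertainty region. Rings are spaced
    at 80% of the beam width so successive passes overlap."""
    spacing = 0.8 * beam_width
    points, theta = [], 0.0
    while spacing * theta / (2 * math.pi) <= uncertainty_radius:
        r = spacing * theta / (2 * math.pi)   # radius grows with angle
        points.append((r * math.cos(theta), r * math.sin(theta)))
        # advance roughly one beam width along the arc
        theta += beam_width / max(r, beam_width)
    return points

offsets = spiral_scan(uncertainty_radius=50.0, beam_width=2.0)
print(len(offsets), offsets[0])   # starts at the center of the region
```

    The companion terminal watches for the sweeping beam and locks on when it is detected, as Hardy describes above.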

    Once the two lasercom terminals have locked onto each other and are communicating, the relative position between the two vehicles can be determined very precisely by using wide bandwidth signaling features in the communications waveform. With this method, the relative bearing and range between vehicles can be known precisely, to within a few centimeters, explains Howe, who worked on the undersea vehicles’ controls.
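
    As a rough sanity check on that centimetre figure: timing resolution scales inversely with signaling bandwidth, and distance is timing multiplied by the speed of light in water. This back-of-envelope calculation is our illustration, not the team's published method:

```python
C_VACUUM = 2.998e8   # speed of light in vacuum, m/s
N_WATER = 1.33       # approximate refractive index of seawater

def range_resolution(bandwidth_hz):
    """Rough one-way range resolution from a wideband timing waveform:
    timing resolution ~ 1 / bandwidth, converted to distance at the
    speed of light in water. Returns metres."""
    dt = 1.0 / bandwidth_hz          # achievable timing resolution (s)
    return (C_VACUUM / N_WATER) * dt

# A signaling bandwidth of a few gigahertz corresponds to a few
# centimetres of range resolution, consistent with the precision above.
print(range_resolution(5e9))
```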

    To test their underwater optical communications capability, six members of the team recently completed a demonstration of precision beam pointing and fast acquisition between two moving vehicles in the Boston Sports Club pool in Lexington, Massachusetts. Their tests proved that two underwater vehicles could search for and locate each other in the pool within one second. Once linked, the vehicles could potentially use their established link to transmit hundreds of gigabytes of data in one session.

    This summer, the team is traveling to regional field sites to demonstrate this new optical communications capability to U.S. Navy stakeholders. One demonstration will involve underwater communications between two vehicles in an ocean environment — similar to prior testing that the Laboratory undertook at the Naval Undersea Warfare Center in Newport, Rhode Island, in 2016. The team is planning a second exercise to demonstrate communications from above the surface of the water to an underwater vehicle — a proposition that has previously proven to be nearly impossible.

    The undersea communication effort could tap into innovative work conducted by other groups at the Laboratory. For example, integrated blue-green optoelectronic technologies, including gallium nitride laser arrays and silicon Geiger-mode avalanche photodiode arrays, could lead to terminals with lower size, weight, and power, as well as enhanced communication functionality.

    In addition, the ability to move data at megabit- to gigabit-per-second transfer rates over distances that vary from tens of meters in turbid waters to hundreds of meters in clear ocean waters will enable undersea system applications that the Laboratory is exploring.

    Howe, who has done a significant amount of work with underwater vehicles, both before and after coming to the Laboratory, says the team’s work could transform undersea communications and operations. “High-rate, reliable communications could completely change underwater vehicle operations and take a lot of the uncertainty and stress out of the current operation methods.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

  • richardmitnick 2:30 pm on December 21, 2017 Permalink | Reply
    Tags: An easy-to-build camera that produces 3D images from a single 2D image without any lenses, DiffuserCam, New Lensless Camera Creates Detailed 3D Images Without Scanning, Optics, OSA- The Optical Society, , The project is funded by DARPA’s Neural Engineering System Design program   

    From OSA: “New Lensless Camera Creates Detailed 3D Images Without Scanning” 

    The Optical Society

    21 December 2017

    Innovative computational imaging approach could advance applications from brain research to self-driving cars.

    Researchers have developed an easy-to-build camera that produces 3D images from a single 2D image without any lenses. In an initial application of the technology, the researchers plan to use the new camera, which they call DiffuserCam, to watch microscopic neuron activity in living mice without a microscope. Ultimately, it could prove useful for a wide range of applications involving 3D capture.

    The camera is compact and inexpensive to construct because it consists of only a diffuser – essentially a bumpy piece of plastic – placed on top of an image sensor. Although the hardware is simple, the software it uses to reconstruct high resolution 3D images is very complex.

    “The DiffuserCam can, in a single shot, capture 3D information in a large volume with high resolution,” said research team leader Laura Waller of the University of California, Berkeley. “We think the camera could be useful for self-driving cars, where the 3D information can offer a sense of scale, or it could be used with machine learning algorithms to perform face detection, track people or automatically classify objects.”

    In Optica, The Optical Society’s journal for high impact research, the researchers show that the DiffuserCam can be used to reconstruct 100 million voxels, or 3D pixels, from a 1.3-megapixel (1.3 million pixels) image without any scanning. For comparison, the iPhone X camera takes 12-megapixel photos. The researchers used the camera to capture the 3D structure of leaves from a small plant.

    The lensless DiffuserCam consists of a diffuser placed in front of a sensor (bumps on the diffuser are exaggerated for illustration). The system turns a 3D scene into a 2D image on the sensor. After a one-time calibration, an algorithm is used to reconstruct 3D images computationally. The result is a 3D image reconstructed from a single 2D measurement. Image Credit: Laura Waller, University of California, Berkeley.

    “Our new camera is a great example of what can be accomplished with computational imaging — an approach that examines how hardware and software can be used together to design imaging systems,” said Waller. “We made a concerted effort to keep the hardware extremely simple and inexpensive. Although the software is very complicated, it can also be easily replicated or distributed, allowing others to create this type of camera at home.”

    A DiffuserCam can be created using any type of image sensor and can image objects that range from microscopic in scale all the way up to the size of a person. It offers a resolution in the tens of microns range when imaging objects close to the sensor. Although the resolution decreases when imaging a scene farther away from the sensor, it is still high enough to distinguish that one person is standing several feet closer to the camera than another person, for example.

    A simple approach to complex imaging

    The DiffuserCam is a relative of the light field camera, which captures how much light is striking a pixel on the image sensor as well as the angle from which the light hits that pixel. In a typical light field camera, an array of tiny lenses placed in front of the sensor is used to capture the direction of the incoming light, allowing computational approaches to refocus the image and create 3D images without the scanning steps typically required to obtain 3D information.

    Until now, light field cameras have been limited in spatial resolution because some spatial information is lost while collecting the directional information. Another drawback of these cameras is that the microlens arrays are expensive and must be customized for a particular camera or optical components used for imaging.

    “I wanted to see if we could achieve the same imaging capabilities using simple and cheap hardware,” said Waller. “If we have better algorithms, could the carefully designed, expensive microlens arrays be replaced with a plastic surface with a random pattern such as a bumpy piece of plastic?”

    After experimenting with various types of diffusers and developing the complex algorithms, Nick Antipa and Grace Kuo, students in Waller’s lab, discovered that Waller’s idea for a simple light field camera was possible. In fact, using the random bumps in privacy glass stickers, Scotch tape or plastic conference badge holders allowed the researchers to improve on traditional light field camera capabilities, using compressed sensing to avoid the loss of resolution that typically comes with microlens arrays.
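
    The compressed-sensing idea — recovering far more unknowns than measurements by exploiting sparsity — can be shown in miniature with an iterative shrinkage-thresholding (ISTA) solver. This is a generic textbook example, not the DiffuserCam reconstruction code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: recover a 5-sparse, 200-element signal from only 80
# random measurements -- fewer measurements than unknowns, the same
# principle that lets a 1.3 MP sensor yield far more than 1.3 M voxels.
n, m, k = 200, 80, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)

A = rng.normal(0, 1.0 / np.sqrt(m), (m, n))   # random sensing matrix
y = A @ x_true                                # compressed measurements

# ISTA for min 0.5*||Ax - y||^2 + lam*||x||_1
lam = 0.02
step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / (largest singular value)^2
x = np.zeros(n)
for _ in range(2000):
    x = x - step * A.T @ (A @ x - y)                         # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0)   # soft threshold

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)   # small: the sparse signal is recovered
```

    The DiffuserCam applies the same principle at much larger scale, with the calibrated diffuser response playing the role of the sensing matrix.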

    Although other light field cameras use lens arrays that are precisely designed and aligned, the exact size and shape of the bumps in the new camera’s diffuser are unknown. This means that a few images of a moving point of light must be acquired to calibrate the software prior to imaging. The researchers are working on a way to eliminate this calibration step by using the raw data for calibration. They also want to improve the accuracy of the software and make the 3D reconstruction faster.

    No microscope required

    The new camera will be used in a project at University of California Berkeley that aims to watch a million individual neurons while stimulating 1,000 of them with single-cell accuracy. The project is funded by DARPA’s Neural Engineering System Design program – part of the federal government’s BRAIN Initiative – to develop implantable, biocompatible neural interfaces that could eventually compensate for visual or hearing deficits.

    As a first step, the researchers want to create what they call a cortical modem that will “read” and “write” to the brains of animal models, much like the input-output activity of internet modems. The DiffuserCam will be the heart of the reading device for this project, which will also use special proteins that allow scientists to control neuronal activity with light.

    “Using this to watch neurons fire in a mouse brain could in the future help us understand more about sensory perception and provide knowledge that could be used to cure diseases like Alzheimer’s or mental disorders,” said Waller.

    Although newly developed imaging techniques can capture hundreds of neurons firing, how the brain works on larger scales is not fully understood. The DiffuserCam has the potential to provide that insight by imaging millions of neurons in one shot. Because the camera is lightweight and requires no microscope or objective lens, it can be attached to a transparent window in a mouse’s skull, allowing neuronal activity to be linked with behavior. Several arrays with overlying diffusers could be tiled to image large areas.

    A need for interdisciplinary designers

    “Our work shows that computational imaging can be a creative process that examines all parts of the optical design and algorithm design to create optical systems that accomplish things that couldn’t be done before or to use a simpler approach to something that could be done before,” Waller said. “This is a very powerful direction for imaging, but requires designers with optical and physics expertise as well as computational knowledge.”

    The new Berkeley Center for Computational Imaging, headed by Waller, is working to train more scientists in this interdisciplinary field. Scientists from the center also meet weekly with bioengineers, physicists and electrical engineers as well as experts in signal processing and machine learning to exchange ideas and to better understand the imaging needs of other fields.

    The open source software for the DiffuserCam is available on the project page: DiffuserCam: Lensless Single-exposure 3D Imaging.

    See the full article here.


  • richardmitnick 12:20 pm on December 18, 2017 Permalink | Reply
    Tags: , , Dartmouth engineers produce breakthrough sensor for photography and life sciences and security, Optics, Thayer School of Engineering   

    From Dartmouth College: “Dartmouth engineers produce breakthrough sensor for photography, life sciences, security” 

    Dartmouth College bloc

    Dartmouth College


    December 18, 2017

    The Quanta Image Sensor enables new imaging capability in accessible, inexpensive process.

    This is a sample photo taken with the 1Megapixel Quanta Image Sensor operating at 1,040 frames per second, with total power consumption as low as 17mW. It is a binary single-photon image, so if the pixel was hit by one or more photons, it is white; if not, it is black. Figure 4 shows how an image in grayscale was created by summing up eight frames of binary images taken continuously. This process is where the innovative image processing of the QIS can be applied. (Courtesy of Jiaju Ma)

    Engineers from Dartmouth’s Thayer School of Engineering have produced a new imaging technology that may revolutionize medical and life sciences research, security, photography, cinematography and other applications that rely on high quality, low light imaging.

    Called the Quanta Image Sensor, or QIS, this next generation of light sensing technology enables highly sensitive, more easily manipulated and higher quality digital imaging than is currently available, even in low light situations, according to co-inventor Eric R. Fossum, professor of engineering at Dartmouth. Fossum also invented the CMOS image sensor found in nearly all smartphones and cameras across the world today.

    Documented in the Dec. 20 issue of The Optical Society’s OSA Optica, the new QIS technology is able to reliably capture and count the lowest level of light, single photons, with resolution as high as one megapixel, or one million pixels, and as fast as thousands of frames per second. Plus, the QIS can accomplish this in low light, at room temperature and while using mainstream image sensor technology, according to the Optica article. Previous technology required large pixels or cooling to low temperatures or both.
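
    A quick back-of-envelope using the figures quoted above shows what those binary frames imply for data rate and bit depth:

```python
import math

# Raw read-out of a binary quanta image sensor: one bit per jot per frame.
jots = 1_000_000          # one megapixel of jots
frames_per_s = 1_040      # frame rate quoted for the sample photo above
bits_per_s = jots * frames_per_s
print(bits_per_s / 1e9)   # ~1.04 gigabits of binary frames per second

# Summing N binary frames yields log2(N + 1) distinct levels per jot,
# e.g. the eight-frame grayscale sum described in the image caption:
frames_summed = 8
print(math.log2(frames_summed + 1))   # ~3.17 bits of grayscale
```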

    What does this mean for industry? For cinematographers, the QIS will enable IMAX-quality video in an easily edited digital format while still providing many of the same characteristics of film. For astrophysicists, the QIS will allow for the detection and capture of better signals from distant objects in space. And for life science researchers, the QIS will provide improved visualization of cells under a microscope, which is critical for determining the effectiveness of therapies.

    Building this new imaging capability in a commercially accessible, inexpensive process is important, said Fossum, so he and his team made it compatible with the low cost and mass production of today’s CMOS image sensor technology. They also made it readily scalable for higher resolution, with as many as hundreds of megapixels per chip.

    “That way it’s easier for industry to adopt it and mass produce it,” said Fossum, who was recognized earlier this month at Buckingham Palace for his role in developing the CMOS image sensor. On Dec. 6, Charles, Prince of Wales, awarded Fossum the engineering equivalent of the Nobel Prize, the Queen Elizabeth Prize for Engineering.

    “The QIS is a revolutionary change in the way we collect images in a camera,” said Jiaju Ma, who co-authored this month’s Optica paper with Fossum, Saleh Masoodian and researcher Dakota Starkey, who is currently pursuing his PhD at Thayer. Ma and Masoodian received their PhDs in electrical and electronics engineering from Thayer and are co-inventors of the QIS with Fossum.

    The QIS platform technology is unique, according to Ma, because the sensor incorporates:

    “Jots,” named by the research team for very small pixels, which are sensitive enough to detect a single photon of light
    Ultra-fast scanning of the jots

    With this combination, the QIS captures data from every single photon, or particle of light, enabling extremely high quality, easily manipulated digital imaging, as well as computer vision and 3-D sensing, even in low light conditions.
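
    The frame-summing scheme described in the image caption above can be sketched as follows: each jot reports a single bit per frame, and summing N frames jot-by-jot yields a grayscale value between 0 and N. This is a toy simulation, not Dartmouth's actual read-out pipeline:

```python
import math
import random

def jot_frames(mean_photons, n_frames, n_jots, seed=1):
    """Simulate binary jot read-outs: each jot reports 1 if it catches
    one or more photons during a frame (Poisson arrivals), else 0."""
    rng = random.Random(seed)
    p_hit = 1 - math.exp(-mean_photons)   # P(at least one photon)
    return [[1 if rng.random() < p_hit else 0 for _ in range(n_jots)]
            for _ in range(n_frames)]

def grayscale(frames):
    """Sum binary frames jot-by-jot into a multi-bit grayscale value,
    as in the eight-frame summation shown in the sample photo."""
    return [sum(col) for col in zip(*frames)]

frames = jot_frames(mean_photons=0.5, n_frames=8, n_jots=10)
print(grayscale(frames))   # ten values, each between 0 and 8
```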

    While the current QIS resolution is one megapixel, the team’s goal is for the QIS to contain hundreds of millions to billions of these jots, all scanned at a very fast rate, said Ma.

    See the full article here.


    Dartmouth College campus

    Dartmouth College is a private Ivy League research university in Hanover, New Hampshire, United States. Incorporated as the “Trustees of Dartmouth College”, it is one of the nine Colonial Colleges founded before the American Revolution. Dartmouth College was established in 1769 by Eleazar Wheelock, a Congregational minister. After a long period of financial and political struggles, Dartmouth emerged in the early 20th century from relative obscurity into national prominence.

    Comprising an undergraduate population of 4,307 and a total student enrollment of 6,350 (as of 2016), Dartmouth is the smallest university in the Ivy League. Its undergraduate program, which reported an acceptance rate around 10 percent for the class of 2020, is characterized by the Carnegie Foundation and U.S. News & World Report as “most selective”. Dartmouth offers a broad range of academic departments, an extensive research enterprise, numerous community outreach and public service programs, and the highest rate of study abroad participation in the Ivy League.

  • richardmitnick 2:04 pm on December 2, 2017 Permalink | Reply
    Tags: and Cameras Will Never Be the Same, , Lenses Are Being Reinvented, , Optics,   

    From MIT Tech Review: “Lenses Are Being Reinvented, and Cameras Will Never Be the Same” 

    MIT Technology Review

    December 1, 2017
    No writer credit

    “Metalenses” created with photolithography could change the nature of imaging and optical processing.

    Lenses are almost as old as civilization itself. The ancient Egyptians, Greeks, and Babylonians all developed lenses made from polished quartz and used them for simple magnification. Later, 17th-century scientists combined lenses to make telescopes and microscopes, instruments that changed our view of the universe and our position within it.

    Now lenses are being reinvented by the process of photolithography, which carves subwavelength features onto flat sheets of glass. Today, Alan She and pals at Harvard University in Massachusetts show how to arrange these features in ways that scatter light with greater control than has ever been possible. They say the resulting “metalenses” are set to revolutionize imaging and usher in a new era of optical processing.

    Lens making has always been a tricky business. It is generally done by pouring molten glass, or silicon dioxide, into a mold and allowing it to set before grinding and polishing it into the required shape. This is a time-consuming process that differs significantly from the manufacturing processes for light-sensing components on microchips.

    Metalenses are carved onto wafers of silicon dioxide in a process like that used to make silicon chips. No image credit.

    So a way of making lenses on chips in the same way would be hugely useful. It would allow lenses to be fabricated in the same plants as other microelectronic components, even at the same time.

    She and co show how this process is now possible. The key idea is that tiny features, smaller than the wavelength of light, can manipulate it. For example, white light can be broken into its component colors by reflecting it off a surface into which are carved a set of parallel trenches that have the same scale as the wavelength of light.

    Metalenses can produce high quality images

    Physicists have played with so-called diffraction gratings for centuries. But photolithography makes it possible to take the idea much further by creating a wider range of features and varying their shape and orientation.

    Since the 1960s, photolithography has produced ever smaller features on silicon chips. In 1970, this technique could carve shapes in silicon with a scale of around 10 micrometers. By 1985, feature size had dropped to one micrometer, and by 1998, to 250 nanometers. Today, the chip industry makes features around 10 nanometers in size.

    Visible light has a wavelength of 400 to 700 nanometers, so the chip industry has been able to make features of this size for some time. But only recently have researchers begun to investigate how these features can be arranged on flat sheets of silicon dioxide to create metalenses that bend light.

    The process begins with a silicon dioxide wafer onto which a thin layer of silicon is deposited and covered with photoresist. Ultraviolet light exposes the photoresist in the desired pattern, and the unprotected silicon is then etched away. Washing off the remaining photoresist leaves the silicon in the desired shape.

    She and co use this process to create a periodic array of silicon pillars on glass that scatter visible light as it passes through. And by carefully controlling the spacing between the pillars, the team can bring the light to a focus.

    Specific pillar spacings determine the precise optical properties of this lens. For example, the researchers can control chromatic aberration to determine where light of different colors comes to a focus.
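
    The textbook phase profile for such a flat lens is hyperbolic: light passing at radius r must be delayed so that all paths arrive at the focus in phase, and each pillar is sized and spaced to impose that local phase. The sketch below uses the 20 mm diameter and 50 mm focal length of the lenses described in this article; the 532 nm wavelength and the profile itself are standard textbook assumptions, not necessarily the paper's exact design recipe:

```python
import math

def metalens_phase(r_mm, focal_mm=50.0, wavelength_nm=532.0):
    """Target phase (radians, modulo 2*pi) at radius r for a flat lens:
    phi(r) = (2*pi / lambda) * (f - sqrt(r^2 + f^2)).
    A nanopillar at radius r is chosen so its scattered light picks up
    this phase, bringing all rays to a common focus."""
    lam = wavelength_nm * 1e-6   # nm -> mm
    phi = (2 * math.pi / lam) * (focal_mm - math.hypot(r_mm, focal_mm))
    return phi % (2 * math.pi)

# Phase demanded across a 20 mm diameter lens with f = 50 mm:
for r in (0.0, 2.5, 5.0, 10.0):
    print(f"r = {r:5.1f} mm  ->  phase = {metalens_phase(r):.3f} rad")
```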

    In imaging lenses, chromatic aberration must be minimized—it otherwise produces the colored fringes around objects viewed through cheap toy telescopes. But in spectrographs, different colors must be brought to focus in different places. She and co can do either.

    Neither do these lenses suffer from spherical aberration, a common problem with ordinary lenses caused by their three-dimensional spherical shape. Metalenses do not have this problem because they are flat. Indeed, they are similar to the theoretical “ideal lenses” that undergraduate physicists study in optics courses.

    Of course, physicists have been able to make flat lenses, such as Fresnel lenses, for decades. But they have always been hard to make.

    The key advance here is that metalenses, because they can be fabricated in the same way as microchips, can be mass-produced with subwavelength surface features. She and co make dozens of them on a single silica wafer. Each of these lenses is less than a micrometer thick, with a diameter of 20 millimeters and a focal length of 50 millimeters.

    “We envision a manufacturing transition from using machined or moulded optics to lithographically patterned optics, where they can be mass produced with the similar scale and precision as IC chips,” say She and co.

    And they can do this with chip fabrication technology that is more than a decade old. That will give old fab plants a new lease on life. “State-of-the-art equipment is useful, but not necessarily required,” say She and co.

    Metalenses have a wide range of applications. The most obvious is imaging. Flat lenses will make imaging systems thinner and simpler. But crucially, since metalenses can be fabricated in the same process as the electronic components for sensing light, they will be cheaper.

    So cameras for smartphones, laptops, and augmented-reality imaging systems will suddenly become smaller and less expensive to make. They could even be printed onto the ends of optical fibers to act as endoscopes.

    Astronomers could have some fun too. These lenses are significantly lighter and thinner than the behemoths launched into orbit in observatories such as the Hubble Space Telescope. A new generation of space-based astronomy and Earth observation beckons.

    But it is within chips themselves that this technology could have the biggest impact. The technique makes it possible to build complex optical bench-type systems into chips for optical processing.

    And there are further advances in the pipeline. One possibility is to change the properties of metalenses in real time using electric fields. That raises the prospect of lenses that change focal length with voltage—or, more significant, that switch light.

    Science paper:
    Alan She, Shuyan Zhang, Samuel Shian, David R. Clarke, Federico Capasso
    Large Area Metalenses: Design, Characterization, and Mass Manufacturing. No Journal reference.

    See the full article here.


    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

  • richardmitnick 12:42 pm on September 6, 2017 Permalink | Reply
    Tags: , Medical camera sees through the body, , Optics,   

    From U Edinburgh: “Medical camera sees through the body” 


    University of Edinburgh

    Sep 4, 2017
    No writer credit

    Scientists have developed a camera that can see through the human body.

    The camera is designed to help doctors track medical tools known as endoscopes that are used to investigate a range of internal conditions. The new device is able to detect sources of light inside the body, such as the illuminated tip of the endoscope’s long flexible tube.

    Light detection

    Until now, it has not been possible to track where an endoscope is located in the body in order to guide it to the right place without using X-rays or other expensive methods. Light from the endoscope can pass through the body, but it usually scatters or bounces off tissues and organs rather than travelling straight through. This makes it nearly impossible to get a clear picture of where the endoscope is.

    Images from a new camera that can detect tiny traces of light through the body’s tissues. Here, the camera is detecting light emitted from a medical device known as an optical endomicroscope whilst in use in sheep lungs. Image on left shows light emitted from the tip of the endomicroscope, revealing its precise location in the lungs. Right image shows the picture that would be obtained using a conventional camera, with light scattered through the structures of the lung.

    Advanced technology

    The new camera takes advantage of advanced technology that can detect individual particles of light, called photons. Experts have integrated thousands of single-photon detectors onto a silicon chip, similar to that found in a digital camera.


    The technology is so sensitive that it can detect the tiny traces of light that pass through the body’s tissue from the light of the endoscope. It can also record the time taken for light to pass through the body, allowing the device to also detect the scattered light.

    Bedside tool

    By taking into account both the scattered light and the light that travels straight to the camera, the device is able to work out exactly where the endoscope is located in the body. Researchers have developed the new camera so that it can be used at the patient’s bedside.


    Early tests have demonstrated that the prototype device can track the location of a point light source through 20 centimetres of tissue under normal light conditions.
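
    The value of time-resolved detection is easy to see from the numbers: a ballistic photon crossing 20 centimetres of tissue (refractive index roughly 1.4, a typical soft-tissue value) arrives in about a nanosecond, while scattered photons travel longer, folded paths and arrive later. This is a rough illustration, not the Proteus team's calibration:

```python
C_VACUUM = 2.998e8   # speed of light in vacuum, m/s

def transit_time_ns(depth_m, refractive_index=1.4, path_stretch=1.0):
    """Time (ns) for light to cross `depth_m` of tissue. `path_stretch` > 1
    models the longer, folded path of scattered photons."""
    return depth_m * refractive_index * path_stretch / C_VACUUM * 1e9

direct = transit_time_ns(0.20)                      # ballistic photons
scattered = transit_time_ns(0.20, path_stretch=5)   # heavily scattered light
print(direct, scattered)   # scattered photons arrive several ns later
```

    Separating arrivals on this sub-nanosecond scale is what lets the camera distinguish light that travelled straight through from light that bounced around, and hence pinpoint the endoscope tip.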

    Proteus project

    The project – led by the University of Edinburgh and Heriot-Watt University – is part of the Proteus Interdisciplinary Research Collaboration, which is developing a range of revolutionary new technologies for diagnosing and treating lung diseases. Proteus is a collaboration between the Universities of Edinburgh and Bath and Heriot-Watt University. It is funded by the Engineering and Physical Sciences Research Council. The research is published in the journal Biomedical Optics Express.

    See the full article here.


    The University’s mission is the creation, dissemination and curation of knowledge.

    As a world-leading centre of academic excellence we aim to:

    Enhance our position as one of the world’s leading research and teaching universities and to measure our performance against the highest international standards
    Provide the highest quality learning and teaching environment for the greater wellbeing of our students
    Produce graduates fully equipped to achieve the highest personal and professional standards
    Make a significant, sustainable and socially responsible contribution to Scotland, the UK and the world, promoting health and economic and cultural wellbeing.

    As a great civic university, Edinburgh especially values its intellectual and economic relationship with the Scottish community that forms its base and provides the foundation from which it will continue to look to the widest international horizons, enriching both itself and Scotland.
