Tagged: Optics

  • richardmitnick 9:39 am on August 8, 2019
    Tags: "Stanford researchers design a light-trapping, color-converting crystal", Optics, Photonic crystal cavities

    From Stanford University: “Stanford researchers design a light-trapping, color-converting crystal” 

    From Stanford University

    August 7, 2019

    Taylor Kubota
    Stanford News Service
    (650) 724-7707
    tkubota@stanford.edu

    Researchers propose a microscopic structure that changes laser light from infrared to green and traps both wavelengths of light to improve efficiency of that transformation. This type of structure could help advance telecommunication and computing technologies. (Image credit: Getty Images)

    Five years ago, Stanford postdoctoral scholar Momchil Minkov encountered a puzzle that he was impatient to solve. At the heart of his field of nonlinear optics are devices that change light from one color to another – a process important for many technologies within telecommunications, computing and laser-based equipment and science. But Minkov wanted a device that also traps both colors of light, a complex feat that could vastly improve the efficiency of this light-changing process – and he wanted it to be microscopic.

    “I was first exposed to this problem by Dario Gerace from the University of Pavia in Italy, while I was doing my PhD in Switzerland. I tried to work on it then but it’s very hard,” Minkov said. “It has been in the back of my mind ever since. Occasionally, I would mention it to someone in my field and they would say it was near-impossible.”

    In order to prove the near-impossible was still possible, Minkov and Shanhui Fan, professor of electrical engineering at Stanford, developed guidelines for creating a crystal structure with an unconventional two-part form. The details of their solution were published Aug. 6 in Optica, with Gerace as co-author. Now, the team is beginning to build its theorized structure for experimental testing.

    An illustration of the researchers’ design. The holes in this microscopic slab structure are arranged and resized in order to control and hold two wavelengths of light. The scale bar on this image is 2 micrometers, or two millionths of a meter. (Image credit: Momchil Minkov)

    A recipe for confining light

    Anyone who’s encountered a green laser pointer has seen nonlinear optics in action. Inside that laser pointer, a crystal structure converts laser light from infrared to green. (Green laser light is easier for people to see but components to make green-only lasers are less common.) This research aims to enact a similar wavelength-halving conversion but in a much smaller space, which could lead to a large improvement in energy efficiency due to complex interactions between the light beams.
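The wavelength arithmetic behind that conversion is simple enough to sketch in Python. This is a toy calculation of the energy bookkeeping in frequency doubling (two pump photons merge into one photon at twice the frequency), using the standard 1064 nm infrared pump of green laser pointers; the function name is ours, and no crystal physics or conversion efficiency is modeled.

```python
# Second-harmonic generation bookkeeping: two pump photons combine into
# one photon at twice the frequency, i.e. half the wavelength.
# The 1064 nm -> 532 nm pair is the standard green-laser-pointer case.

C = 299_792_458  # speed of light, m/s

def second_harmonic(wavelength_nm: float) -> float:
    """Return the wavelength of the frequency-doubled output, in nm."""
    pump_freq = C / (wavelength_nm * 1e-9)   # pump frequency, Hz
    return C / (2 * pump_freq) * 1e9         # doubled frequency -> halved wavelength

print(second_harmonic(1064.0))  # -> 532.0 (infrared in, green out)
```

Halving the wavelength doubles the photon energy, which is why two infrared photons are consumed for every green photon produced.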

    The team’s goal was to force the coexistence of the two laser beams using a photonic crystal cavity, which can focus light in a microscopic volume. However, existing photonic crystal cavities usually only confine one wavelength of light and their structures are highly customized to accommodate that one wavelength.

    So instead of making one uniform structure to do it all, these researchers devised a structure that combines two different ways to confine light, one to hold onto the infrared light and another to hold the green, all still contained within one tiny crystal.

    “Having different methods for containing each light turned out to be easier than using one mechanism for both frequencies and, in some sense, it’s completely different from what people thought they needed to do in order to accomplish this feat,” Fan said.

    After ironing out the details of their two-part structure, the researchers produced a list of four conditions, which should guide colleagues in building a photonic crystal cavity capable of holding two very different wavelengths of light. Their result reads more like a recipe than a schematic because light-manipulating structures are useful for so many tasks and technologies that designs for them have to be flexible.

    “We have a general recipe that says, ‘Tell me what your material is and I’ll tell you the rules you need to follow to get a photonic crystal cavity that’s pretty small and confines light at both frequencies,’” Minkov said.

    Computers and curiosity

    If telecommunications channels were a highway, flipping between different wavelengths of light would equal a quick lane change to avoid a slowdown – and one structure that holds multiple channels means a faster flip. Nonlinear optics is also important for quantum computers because calculations in these computers rely on the creation of entangled particles, which can be formed through the reverse of the process that occurs in the Fan lab crystal – creating twinned red particles of light from one green particle of light.

    Envisioning possible applications of their work helps these researchers choose what they’ll study. But they are also motivated by their desire for a good challenge and the intricate strangeness of their science.

    “Basically, we work with a slab structure with holes and by arranging these holes, we can control and hold light,” Fan said. “We move and resize these little holes by billionths of a meter and that marks the difference between success and failure. It’s very strange and endlessly fascinating.”

    These researchers will soon be facing off with these intricacies in the lab, as they are beginning to build their photonic crystal cavity for experimental testing.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Stanford University campus. No image credit

    Stanford University

    Leland and Jane Stanford founded the University to “promote the public welfare by exercising an influence on behalf of humanity and civilization.” Stanford opened its doors in 1891, and more than a century later, it remains dedicated to finding solutions to the great challenges of the day and to preparing our students for leadership in today’s complex world. Stanford is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto. Since 1952, more than 54 Stanford faculty, staff, and alumni have won the Nobel Prize, including 19 current faculty members.

    Stanford University Seal

     
  • richardmitnick 11:49 am on July 3, 2019
    Tags: GANpaint system developed at MIT can easily add features to an existing image, Optics

    From MIT News: “Teaching artificial intelligence to create visuals with more common sense” 


    From MIT News

    July 1, 2019
    Adam Conner-Simons | MIT CSAIL

    The GANpaint system developed at MIT can easily add features to an existing image. At left, the original photo of a kitchen; at right, the same kitchen with the addition of a window. Co-author Jun-Yan Zhu believes better understanding of GANs will help researchers be able to better stamp out fakery: “This understanding may potentially help us detect fake images more easily.”

    GANpaint Studio general interface

    An MIT/IBM system could help artists and designers make quick tweaks to visuals while also helping researchers identify “fake” images.

    David Bau, a PhD student at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL), describes the project as one of the first times computer scientists have been able to actually “paint with the neurons” of a neural network — specifically, a popular type of network called a generative adversarial network (GAN).

    Available online as an interactive demo, GANpaint Studio allows a user to upload an image of their choosing and modify multiple aspects of its appearance, from changing the size of objects to adding completely new items like trees and buildings.

    Boon for designers

    Spearheaded by MIT professor Antonio Torralba as part of the MIT-IBM Watson AI Lab he directs, the project has vast potential applications. Designers and artists could use it to make quicker tweaks to their visuals. Adapting the system to video clips would enable computer-graphics editors to quickly compose specific arrangements of objects needed for a particular shot. (Imagine, for example, if a director filmed a full scene with actors but forgot to include an object in the background that’s important to the plot.)

    GANpaint Studio could also be used to improve and debug other GANs that are being developed, by analyzing them for “artifact” units that need to be removed. In a world where opaque AI tools have made image manipulation easier than ever, it could help researchers better understand neural networks and their underlying structures.

    “Right now, machine learning systems are these black boxes that we don’t always know how to improve, kind of like those old TV sets that you have to fix by hitting them on the side,” says Bau, lead author on a related paper about the system with a team overseen by Torralba. “This research suggests that, while it might be scary to open up the TV and take a look at all the wires, there’s going to be a lot of meaningful information in there.”

    One unexpected discovery is that the system actually seems to have learned some simple rules about the relationships between objects. It somehow knows not to put something somewhere it doesn’t belong, like a window in the sky, and it also creates different visuals in different contexts. For example, if there are two different buildings in an image and the system is asked to add doors to both, it doesn’t simply add identical doors — they may ultimately look quite different from each other.

    “All drawing apps will follow user instructions, but ours might decide not to draw anything if the user commands to put an object in an impossible location,” says Torralba. “It’s a drawing tool with a strong personality, and it opens a window that allows us to understand how GANs learn to represent the visual world.”

    GANs are sets of neural networks developed to compete against each other. In this case, one network is a generator focused on creating realistic images, and the second is a discriminator whose goal is to avoid being fooled by the generator. Every time the discriminator ‘catches’ the generator, the feedback from that decision allows the generator to continuously get better.

    “It’s truly mind-blowing to see how this work enables us to directly see that GANs actually learn something that’s beginning to look a bit like common sense,” says Jaakko Lehtinen, an associate professor at Finland’s Aalto University who was not involved in the project. “I see this ability as a crucial steppingstone to having autonomous systems that can actually function in the human world, which is infinite, complex and ever-changing.”

    Stamping out unwanted “fake” images

    The team’s goal has been to give people more control over GAN networks. But they recognize that with increased power comes the potential for abuse, like using such technologies to doctor photos. Co-author Jun-Yan Zhu says that he believes that better understanding GANs — and the kinds of mistakes they make — will help researchers be able to better stamp out fakery.

    “You need to know your opponent before you can defend against it,” says Zhu, a postdoc at CSAIL. “This understanding may potentially help us detect fake images more easily.”

    To develop the system, the team first identified units inside the GAN that correlate with particular types of objects, like trees. It then tested these units individually to see if getting rid of them would cause certain objects to disappear or appear. Importantly, they also identified the units that cause visual errors (artifacts) and worked to remove them to increase the overall quality of the image.
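The ablation test described above can be mimicked on a toy feature map. This is a hedged numpy sketch, with made-up unit indices standing in for the units the GANpaint team found by correlating activations with objects; it shows the mechanics of silencing units, not the real network.

```python
import numpy as np

# Given an internal feature map of shape (units, height, width),
# silence a chosen set of units and leave the rest untouched.

def ablate(features: np.ndarray, units: list) -> np.ndarray:
    """Return a copy of the feature map with the given units zeroed out."""
    out = features.copy()
    out[units, :, :] = 0.0
    return out

rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 4, 4))   # toy feature map: 8 units on a 4x4 grid
ablated = ablate(fmap, [2, 5])          # pretend units 2 and 5 draw "trees"

print(np.abs(ablated[[2, 5]]).sum())    # -> 0.0: silenced units contribute nothing
```

Rendering the generator's output before and after such an ablation is what reveals whether a unit was responsible for an object, or for an artifact worth removing.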

    “Whenever GANs generate terribly unrealistic images, the cause of these mistakes has previously been a mystery,” says co-author Hendrik Strobelt, a research scientist at IBM. “We found that these mistakes are triggered by specific sets of neurons that we can silence to improve the quality of the image.”

    Bau, Strobelt, Torralba and Zhu co-wrote the paper with former CSAIL PhD student Bolei Zhou, postdoctoral associate Jonas Wulff, and undergraduate student William Peebles. They will present it next month at the SIGGRAPH conference in Los Angeles. “This system opens a door into a better understanding of GAN models, and that’s going to help us do whatever kind of research we need to do with GANs,” says Lehtinen.

    See the full article here.

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     
  • richardmitnick 11:10 am on May 13, 2019
    Tags: "Better Microring Sensors for Optical Applications", Microring sensors, Optics

    From Michigan Technological University: “Better Microring Sensors for Optical Applications” 


    From Michigan Technological University

    May 10, 2019
    Kelley Christensen

    An exceptional surface-based sensor. The microring resonator is coupled to a waveguide with an end mirror that partially reflects light, which in turn enhances the sensitivity. Image Credit: Ramy El-Ganainy and Qi Zhong

    Tweaking the design of microring sensors enhances their sensitivity without adding more implementation complexity.

    Optical sensing is one of the most important applications of light science. It plays crucial roles in astronomy, environmental science, industry and medical diagnoses.

    Despite the variety of schemes used for optical sensing, they all share the same principle: The quantity to be measured must leave a “fingerprint” on the optical response of the system. The fingerprint can be its transmission, reflection or absorption. The stronger these effects are, the stronger the response of the system.
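The fingerprint principle can be illustrated with a toy transmission spectrum: a resonator imprints a Lorentzian dip on the transmission, and a perturbation that shifts the resonance moves that dip. All numbers below are illustrative and not taken from any real device.

```python
import numpy as np

def transmission(omega, omega0, gamma=1.0):
    """Lorentzian transmission dip centered at omega0 with linewidth gamma."""
    return 1.0 - gamma**2 / ((omega - omega0) ** 2 + gamma**2)

omega = np.linspace(-10, 10, 20001)          # frequency grid (arbitrary units)
baseline = transmission(omega, omega0=0.0)
perturbed = transmission(omega, omega0=0.5)  # the quantity to be measured shifts the resonance

# The measured "fingerprint": the dip minimum moves by the resonance shift.
shift = omega[np.argmin(perturbed)] - omega[np.argmin(baseline)]
print(round(shift, 3))  # -> 0.5
```

The narrower the dip (smaller gamma), the easier a given shift is to resolve, which is why high-quality resonators make good sensors.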

    While this works well at the macroscopic level, measuring tiny, microscopic quantities that induce weak response is a challenging task. Researchers have developed techniques to overcome this difficulty and improve the sensitivity of their devices. Some of these techniques, which rely on complex quantum optics concepts and implementations, have indeed proved useful, such as in sensing gravitational waves in the LIGO project.


    Others, which are based on trapping light in tiny boxes called optical resonators, have succeeded in detecting micro-particles and relatively large biological components.

    Nonetheless, the ability to detect small nano-particles and eventually single molecules remains a challenge. Current attempts focus on a special type of light trapping devices called microring or microtoroid resonators — these enhance the interaction between light and the molecule to be detected. The sensitivity of these devices, however, is limited by their fundamental physics.

    In their article “Sensing with Exceptional Surfaces in Order to Combine Sensitivity with Robustness,” published in Physical Review Letters, physicists and engineers from Michigan Technological University, Pennsylvania State University and the University of Central Florida propose a new type of sensor. It is based on the new notion of exceptional surfaces: surfaces that consist of exceptional points.

    Exceptional Points for Exceptionally Sensitive Detection

    In order to understand the meaning of exceptional points, consider an imaginary violin with only two strings. In general, such a violin can produce just two different tones — a situation that corresponds to a conventional optical resonator. If the vibration of one string can alter the vibration of the other in such a way that the strings produce only one tone and one collective motion, the system has an exceptional point.

    A physical system that exhibits an exceptional point is very fragile: any small perturbation will dramatically alter its behavior. This feature makes the system highly sensitive to tiny signals.
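The square-root response that makes exceptional points attractive can be seen in a minimal two-mode model. The 2x2 non-Hermitian matrix below is our own illustrative construction (not the paper's model): two coupled modes with balanced gain and loss whose eigenvalues coalesce at the exceptional point, where a perturbation eps splits them like sqrt(eps) rather than eps.

```python
import numpy as np

def splitting(eps: float, gamma: float = 1.0) -> float:
    """Eigenvalue splitting of a two-mode system perturbed at its exceptional point.

    Modes carry gain/loss +/- i*gamma; coupling equals gamma, which places
    the unperturbed system exactly at the exceptional point.
    """
    H = np.array([[1j * gamma + eps, gamma],
                  [gamma, -1j * gamma]])
    ev = np.linalg.eigvals(H)
    return abs(ev[0] - ev[1])

# Shrinking the perturbation by 100x shrinks the response by only ~10x:
ratio = splitting(1e-4) / splitting(1e-6)
print(round(ratio, 1))  # -> 10.0, the square-root signature
```

It is exactly this sqrt-enhancement that also amplifies fabrication errors and environmental noise, which is the Achilles heel the exceptional-surface design is meant to cure.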

    “Despite this promise, the same enhanced sensitivity of exceptional point-based sensors is also their Achilles heel: These devices are very sensitive to unavoidable fabrication errors and undesired environmental variations,” said Ramy El-Ganainy, associate professor of physics, adding that the sensitivity necessitated clever tuning tricks in previous experimental demonstrations.

    “Our current proposal alleviates most of these problems by introducing a new system that has the same enhanced sensitivity reported in previous work while remaining robust against the majority of uncontrollable experimental uncertainties,” said Qi Zhong, lead author on the paper and a graduate student currently working toward his doctorate at Michigan Tech.

    Though the design of microring sensors continues to be refined, researchers are hopeful that by improving the devices, seemingly tiny optical observations will have large effects.

    See the full article here.

    Michigan Tech Campus
    Michigan Technological University (http://www.mtu.edu) is a leading public research university developing new technologies and preparing students to create the future for a prosperous and sustainable world. Michigan Tech offers more than 130 undergraduate and graduate degree programs in engineering; forest resources; computing; technology; business; economics; natural, physical and environmental sciences; arts; humanities; and social sciences.
    The College of Sciences and Arts (CSA) fills one of the most important roles on the Michigan Tech campus. We play a part in the education of every student who comes through our doors. We take pride in offering essential foundational courses in the natural sciences and mathematics, as well as the social sciences and humanities—courses that underpin every major on campus. With twelve departments, 28 majors, 30-or-so specializations, and more than 50 minors, CSA has carefully developed programs to suit many interests and skill sets. From sound design and audio technology to actuarial science, applied cognitive science and human factors to rhetoric and technical communication, the college offers many unique programs.

     
  • richardmitnick 10:22 am on February 23, 2019
    Tags: Optics, Semiconductor quantum dots

    From University of Cambridge: “Physicists get thousands of semiconductor nuclei to do ‘quantum dances’ in unison” 


    From University of Cambridge

    22 Feb 2019
    Communications office

    Theoretical ESR spectrum buildup as a function of two-photon detuning δ and drive time τ, for a Rabi frequency of Ω = 3.3 MHz on the central transition. Credit: University of Cambridge.

    A team of Cambridge researchers have found a way to control the sea of nuclei in semiconductor quantum dots so they can operate as a quantum memory device.

    Quantum dots are crystals made up of thousands of atoms, and each of these atoms interacts magnetically with the trapped electron. Left to its own devices, this interaction of the electron with the nuclear spins limits the usefulness of the electron as a quantum bit, or qubit.

    Led by Professor Mete Atatüre from Cambridge’s Cavendish Laboratory, the researchers are exploiting the laws of quantum physics and optics to investigate computing, sensing or communication applications.

    “Quantum dots offer an ideal interface, as mediated by light, to a system where the dynamics of individual interacting spins could be controlled and exploited,” said Atatüre, who is a Fellow of St John’s College. “Because the nuclei randomly ‘steal’ information from the electron they have traditionally been an annoyance, but we have shown we can harness them as a resource.”

    The Cambridge team found a way to exploit the interaction between the electron and the thousands of nuclei, using lasers to ‘cool’ the nuclei to less than 1 millikelvin, a thousandth of a degree above absolute zero. They then showed they can control and manipulate the thousands of nuclei as if they form a single body in unison, like a second qubit. This proves the nuclei in the quantum dot can exchange information with the electron qubit and can be used to store quantum information as a memory device. The results are reported in the journal Science.

    Quantum computing aims to harness fundamental concepts of quantum physics, such as entanglement and the superposition principle, to outperform current approaches to computing, and it could revolutionise technology, business and research. Just like classical computers, quantum computers need a processor, memory, and a bus to transport information back and forth. The processor is a qubit, which can be an electron trapped in a quantum dot; the bus is a single photon, which these quantum dots generate and which is ideal for exchanging information. But the missing link for quantum dots is quantum memory.

    Atatüre said: “Instead of talking to individual nuclear spins, we worked on accessing collective spin waves by lasers. This is like a stadium where you don’t need to worry about who raises their hands in the Mexican wave going round, as long as there is one collective wave because they all dance in unison.

    “We then went on to show that these spin waves have quantum coherence. This was the missing piece of the jigsaw and we now have everything needed to build a dedicated quantum memory for every qubit.”

    In quantum technologies, the photon, the qubit and the memory need to interact with each other in a controlled way. This is mostly realised by interfacing different physical systems to form a single hybrid unit which can be inefficient. The researchers have been able to show that in quantum dots, the memory element is automatically there with every single qubit.

    Dr Dorian Gangloff, one of the first authors of the paper [Science] and a Fellow at St John’s, said the discovery will renew interest in these types of semiconductor quantum dots. Dr Gangloff explained: “This is a Holy Grail breakthrough for quantum dot research – both for quantum memory and fundamental research; we now have the tools to study dynamics of complex systems in the spirit of quantum simulation.”

    The long term opportunities of this work could be seen in the field of quantum computing. Last month, IBM launched the world’s first commercial quantum computer, and the Chief Executive of Microsoft has said quantum computing has the potential to ‘radically reshape the world’.

    Gangloff said: “The impact of the qubit could be half a century away but the power of disruptive technology is that it is hard to conceive of the problems we might open up – you can try to think of it as known unknowns but at some point you get into new territory. We don’t yet know the kind of problems it will help to solve which is very exciting.”

    See the full article here.

    U Cambridge Campus

    The University of Cambridge (abbreviated as Cantab in post-nominal letters) is a collegiate public research university in Cambridge, England. Founded in 1209, Cambridge is the second-oldest university in the English-speaking world and the world’s fourth-oldest surviving university. It grew out of an association of scholars who left the University of Oxford after a dispute with townsfolk. The two ancient universities share many common features and are often jointly referred to as “Oxbridge”.

    Cambridge is formed from a variety of institutions which include 31 constituent colleges and over 100 academic departments organised into six schools. The university occupies buildings throughout the town, many of which are of historical importance. The colleges are self-governing institutions founded as integral parts of the university. In the year ended 31 July 2014, the university had a total income of £1.51 billion, of which £371 million was from research grants and contracts. The central university and colleges have a combined endowment of around £4.9 billion, the largest of any university outside the United States. Cambridge is a member of many associations and forms part of the “golden triangle” of leading English universities and Cambridge University Health Partners, an academic health science centre. The university is closely linked with the development of the high-tech business cluster known as “Silicon Fen”.

     
  • richardmitnick 4:03 pm on November 5, 2018
    Tags: 'Folded' Optical Devices Manipulate Light in a New Way, Compact spectrometer, Metasurface optics, Optics

    From Caltech: “‘Folded’ Optical Devices Manipulate Light in a New Way” 


    From Caltech

    10/30/2018

    Robert Perkins
    (626) 395-1862
    rperkins@caltech.edu

    An array of 11 metasurface-based optical spectrometers, pictured here before the final fabrication step. Each spectrometer is composed of three metasurfaces that disperse and focus light with different wavelengths to different points. Credit: Faraon Lab/Caltech

    The future of optics

    The next generation of electronic devices, ranging from personal health monitors and augmented reality headsets to sensitive scientific instruments that would only be found in a laboratory, will likely incorporate components that use metasurface optics, according to Andrei Faraon, professor of applied physics in Caltech’s Division of Engineering and Applied Science. Metasurface optics manipulate light similarly to how a lens might—bending, focusing, or reflecting it—but do so in a finely controllable way using carefully designed microscopic structures on an otherwise flat surface. That makes them both compact and finely tunable, attractive qualities for electronic devices. However, engineers will need to overcome several challenges to make them widespread.

    The problem

    Most optical systems require more than a single metasurface to function properly. In metasurface-based optical systems, most of the total volume inside the device is just free space through which light propagates between different elements. The need for this free space makes the overall device difficult to scale down, while integrating and aligning multiple metasurfaces into a single device can be complicated and expensive.

    The invention

    To overcome this limitation, the Faraon group has introduced a technology called “folded metasurface optics,” which is a way of printing multiple types of metasurfaces onto either side of a substrate, like glass. In this way, the substrate itself becomes the propagation space for the light. As a proof of concept, the team used the technique to build a spectrometer, which is a scientific instrument for splitting light into different colors, or wavelengths, and measuring their corresponding intensities. (Spectrometers are used in a variety of fields; for example, in astronomy they are used to determine the chemical makeup of stars based on the light they emit.) The spectrometer built by Faraon’s team is 1 millimeter thick and is composed of three reflective metasurfaces placed next to each other that split and reflect light, and ultimately focus it onto a detector array. It was fabricated at the Kavli Nanoscience Institute, and its design is described in a paper published by Nature Communications on October 10.

    What it could be used for

    A compact spectrometer like the one developed by Faraon’s group has a variety of uses, including as a noninvasive blood-glucose measuring system that could be invaluable for diabetes patients. The platform uses multiple metasurface elements that are fabricated in a single step, so, in general, it provides a potential path toward complex but inexpensive optical systems.

    The details

    The paper is titled “Compact folded metasurface spectrometer.” Co-authors include Caltech graduate students MohammadSadegh Faraji-Dana (MS ’18), Ehsan Arbabi (MS ’17), Seyedeh Mahsa Kamali (MS ’17), and Hyounghan Kwon (MS ’18), and Amir Arbabi of the University of Massachusetts Amherst. This research was supported by Samsung Electronics, the Natural Sciences and Engineering Research Council of Canada, and the U.S. Department of Energy.

    See the full article here.

    The California Institute of Technology (commonly referred to as Caltech) is a private research university located in Pasadena, California, United States. Caltech has six academic divisions with strong emphases on science and engineering. Its 124-acre (50 ha) primary campus is located approximately 11 mi (18 km) northeast of downtown Los Angeles. “The mission of the California Institute of Technology is to expand human knowledge and benefit society through research integrated with education. We investigate the most challenging, fundamental problems in science and technology in a singularly collegial, interdisciplinary atmosphere, while educating outstanding students to become creative members of society.”

    Caltech campus

     
  • richardmitnick 1:02 pm on October 14, 2018
    Tags: Optics, The World's Fastest Camera Can 'Freeze Time' Show Beams of Light in Slow Motion, University of Quebec

    From Science Alert: “The World’s Fastest Camera Can ‘Freeze Time’, Show Beams of Light in Slow Motion” 


    From Science Alert

    14 OCT 2018
    JON CHRISTIAN

    (Adobe Stock)

    When you push the button on a laser pointer, its entire beam seems to appear instantaneously. In reality, though, the photons shoot out like water from a hose, just at a speed too fast to see.

    Too fast for the human eye to see, anyway.

    Researchers at Caltech and the University of Quebec have invented what is now the world’s fastest camera, and it takes a mind-boggling 10 trillion shots per second — enough to record footage of a pulse of light as it travels through space.

    The extraordinary camera, which the researchers describe in a paper published Monday in the journal Light: Science & Applications, builds on a technology called compressed ultrafast photography (CUP).

    Figure 1. The trillion-frame-per-second compressed ultrafast photography system. INRS

    CUP can lock down an impressive 100 billion frames per second, but by simultaneously recording a static image and performing some tricky math, the researchers were able to reconstruct footage at 10 trillion frames per second.

    They call the new technique T-CUP, and while they don’t say what the “T” stands for, our money is on “trillion.”
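A back-of-the-envelope calculation shows what 10 trillion frames per second buys: the interval between frames, and how far light travels in that interval. The figures below follow directly from the speed of light; nothing else about the camera is modeled.

```python
# What 10 trillion frames per second means for imaging light in flight.

C = 299_792_458          # speed of light, m/s
FPS = 10e12              # 10 trillion frames per second

frame_interval_s = 1.0 / FPS          # time between frames: 100 femtoseconds
light_per_frame_um = C / FPS * 1e6    # distance light covers per frame, micrometers

print(frame_interval_s)               # -> 1e-13 (seconds, i.e. 100 fs)
print(round(light_per_frame_um, 1))   # -> 30.0 (micrometers per frame)
```

A light pulse therefore advances only about 30 micrometers between consecutive frames, which is what makes slow-motion footage of a propagating beam possible.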

    Ludicrous Speed

    The camera more than doubles the speed record set in 2015 by a camera that took 4.4 trillion shots per second. Its inventors hope it’ll be useful in biomedical and materials research.

    But they’ve already turned their attention to smashing their newly set record.

    “It’s an achievement in itself,” said lead author Jinyang Liang in a press release, “but we already see possibilities for increasing the speed to up to one quadrillion frames per second!”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 7:35 am on September 7, 2018 Permalink | Reply
    Tags: Fish-eye lens may entangle pairs of atoms, Optics

    From MIT News: “Fish-eye lens may entangle pairs of atoms” 


    From MIT News

    September 5, 2018
    Jennifer Chu

    James Maxwell was the first to realize that light is able to travel in perfect circles within the fish-eye lens because the density of the lens changes, with material being thickest at the middle and gradually thinning out toward the edges. No image credit.

    Scientists find that a theoretical optical device may have uses in quantum computing.

    Nearly 150 years ago, the physicist James Maxwell proposed that a circular lens that is thickest at its center, and that gradually thins out at its edges, should exhibit some fascinating optical behavior. Namely, when light is shone through such a lens, it should travel around in perfect circles, creating highly unusual, curved paths of light.

    He also noted that such a lens, at least broadly speaking, resembles the eye of a fish. The lens configuration he devised has since been known in physics as Maxwell’s fish-eye lens — a theoretical construct that is only slightly similar to commercially available fish-eye lenses for cameras and telescopes.

    Now scientists at MIT and Harvard University have for the first time studied this unique, theoretical lens from a quantum mechanical perspective, to see how individual atoms and photons may behave within the lens. In a study published Wednesday in Physical Review A, they report that the unique configuration of the fish-eye lens enables it to guide single photons through the lens, in such a way as to entangle pairs of atoms, even over relatively long distances.

    Entanglement is a quantum phenomenon in which the properties of one particle are linked, or correlated, with those of another particle, even over vast distances. The team’s findings suggest that fish-eye lenses may be a promising vehicle for entangling atoms and other quantum bits, which are the necessary building blocks for designing quantum computers.

    “We found that the fish-eye lens has something that no other two-dimensional device has, which is maintaining this entangling ability over large distances, not just for two atoms, but for multiple pairs of distant atoms,” says first author Janos Perczel, a graduate student in MIT’s Department of Physics. “Entanglement and connecting these various quantum bits can be really the name of the game in making a push forward and trying to find applications of quantum mechanics.”

    The team also found that the fish-eye lens, contrary to recent claims, does not produce a perfect image. Scientists have thought that Maxwell’s fish-eye may be a candidate for a “perfect lens” — a lens that can go beyond the diffraction limit, meaning that it can focus light to a point that is smaller than the light’s own wavelength. This perfect imaging, scientists predict, should produce an image with essentially unlimited resolution and extreme clarity.

    However, by modeling the behavior of photons through a simulated fish-eye lens, at the quantum level, Perczel and his colleagues concluded that it cannot produce a perfect image, as originally predicted.

    “This tells you that there are these limits in physics that are really difficult to break,” Perczel says. “Even in this system, which seemed to be a perfect candidate, this limit seems to be obeyed. Perhaps perfect imaging may still be possible with the fish eye in some other, more complicated way, but not as originally proposed.”

    Perczel’s co-authors on the paper are Peter Komar and Mikhail Lukin from Harvard University.

    A circular path

    Maxwell was the first to realize that light is able to travel in perfect circles within the fish-eye lens because the density of the lens changes, with material being thickest at the middle and gradually thinning out toward the edges. The denser a material, the slower light moves through it. This explains the optical effect when a straw is placed in a glass half full of water. Because the water is so much denser than the air above it, light suddenly moves more slowly, bending as it travels through water and creating an image that looks as if the straw is disjointed.

    In the theoretical fish-eye lens, the differences in density are much more gradual and are distributed in a circular pattern, in such a way that the lens curves rather than bends light, guiding it in perfect circles within the lens.
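The circular ray paths can be reproduced numerically. The sketch below traces one ray through the standard Maxwell fish-eye index profile n(r) = n0/(1 + (r/R)²) using the Hamiltonian form of the ray equations; the profile is textbook, but the parameter values and launch conditions are arbitrary choices for illustration:

```python
import math

# Maxwell fish-eye profile n(r) = n0 / (1 + (r/R)^2) bends every ray into a
# circle. We integrate dr/dtau = p, dp/dtau = grad(n^2 / 2), with |p| = n on
# a true ray. N0, R, and the launch point are illustrative choices.

N0, R = 2.0, 1.0

def n(x, y):
    return N0 / (1.0 + (x * x + y * y) / (R * R))

def deriv(state):
    x, y, px, py = state
    u = (x * x + y * y) / (R * R)
    c = -2.0 * N0 * N0 / (R * R * (1.0 + u) ** 3)  # radial factor of grad(n^2/2)
    return (px, py, c * x, c * y)

def trace(x, y, angle, steps=25000, dt=1e-3):
    speed = n(x, y)
    state = (x, y, speed * math.cos(angle), speed * math.sin(angle))
    path = [(state[0], state[1])]
    for _ in range(steps):                          # classic RK4 integrator
        k1 = deriv(state)
        k2 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
        k3 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
        k4 = deriv(tuple(s + dt * k for s, k in zip(state, k3)))
        state = tuple(s + dt * (a + 2 * b + 2 * c + d) / 6
                      for s, a, b, c, d in zip(state, k1, k2, k3, k4))
        path.append((state[0], state[1]))
    return path

path = trace(0.5, 0.0, math.pi / 2)                 # launch a ray straight "up"
radii = [math.hypot(x, y) for x, y in path]
# The ray sweeps out a closed circle through the launch point (r = 0.5) and
# the conjugate point at r = 2.0, never wandering off.
print(f"min radius {min(radii):.2f}, max radius {max(radii):.2f}")
```

For this launch point the traced path is a circle of radius 1.25 centered off-origin, which is exactly the closed, re-focusing behavior Maxwell predicted.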

    In 2009, Ulf Leonhardt, a physicist at the Weizmann Institute of Science in Israel, was studying the optical properties of Maxwell’s fish-eye lens and observed that, when photons are released through the lens from a single point source, the light travels in perfect circles through the lens and collects at a single point at the opposite end, with very little loss of light.

    “None of the light rays wander off in unwanted directions,” Perczel says. “Everything follows a perfect trajectory, and all the light will meet at the same time at the same spot.”

    Leonhardt, in reporting his results, made a brief mention as to whether the fish-eye lens’ single-point focus might be useful in precisely entangling pairs of atoms at opposite ends of the lens.

    “Mikhail [Lukin] asked him whether he had worked out the answer, and he said he hadn’t,” Perczel says. “That’s how we started this project and started digging deeper into how well this entangling operation works within the fish-eye lens.”

    Playing photon ping-pong

    To investigate the quantum potential of the fish-eye lens, the researchers modeled the lens as the simplest possible system, consisting of two atoms, one at either end of a two-dimensional fish-eye lens, and a single photon, aimed at the first atom. Using established equations of quantum mechanics, the team tracked the photon at any given point in time as it traveled through the lens, and calculated the state of both atoms and their energy levels through time.

    They found that when a single photon is shone through the lens, it is temporarily absorbed by an atom at one end of the lens. It then circles through the lens, to the second atom at the precise opposite end of the lens. This second atom momentarily absorbs the photon before sending it back through the lens, where the light collects precisely back on the first atom.

    “The photon is bounced back and forth, and the atoms are basically playing ping pong,” Perczel says. “Initially only one of the atoms has the photon, and then the other one. But between these two extremes, there’s a point where both of them kind of have it. It’s this mind-blowing quantum mechanics idea of entanglement, where the photon is completely shared equally between the two atoms.”
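The "ping-pong" picture maps onto a simple two-state oscillation. The toy model below tracks the probability that each atom holds the single excitation; this is an illustrative two-level sketch with an assumed effective coupling rate g, not the paper's full quantum-optical calculation (which tracks the photon field through the lens explicitly):

```python
import math

# Single-excitation toy model of the photon ping-pong: the excitation
# amplitude oscillates between atom 1 and atom 2 at an effective coupling
# rate g (arbitrary units, an assumption of this sketch).

g = 1.0

def probabilities(t):
    """Probability that atom 1 (resp. atom 2) holds the excitation at time t."""
    c1 = math.cos(g * t)           # amplitude on atom 1
    c2 = -1j * math.sin(g * t)     # amplitude on atom 2 (photon-mediated phase)
    return abs(c1) ** 2, abs(c2) ** 2

for t in (0.0, math.pi / 4, math.pi / 2):
    p1, p2 = probabilities(t)
    print(f"t = {t:.3f}: P(atom1) = {p1:.2f}, P(atom2) = {p2:.2f}")
```

At t = π/4g both probabilities are 0.5: the point Perczel describes where "both of them kind of have it," i.e. the excitation is shared equally and the atoms are maximally entangled.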

    Perczel says that the photon is able to entangle the atoms because of the unique geometry of the fish-eye lens. The lens’ density is distributed in such a way that it guides light in a perfectly circular pattern and can cause even a single photon to bounce back and forth between two precise points along a circular path.

    “If the photon just flew away in all directions, there wouldn’t be any entanglement,” Perczel says. “But the fish-eye gives this total control over the light rays, so you have an entangled system over long distances, which is a precious quantum system that you can use.”

    As they increased the size of the fish-eye lens in their model, the atoms remained entangled, even over relatively large distances of tens of microns. They also observed that, even if some light escaped the lens, the atoms were able to share enough of a photon’s energy to remain entangled. Finally, as they placed more pairs of atoms in the lens, opposite to one another, along with corresponding photons, these atoms also became simultaneously entangled.

    “You can use the fish eye to entangle multiple pairs of atoms at a time, which is what makes it useful and promising,” Perczel says.

    Fishy secrets

    In modeling the behavior of photons and atoms in the fish-eye lens, the researchers also found that, as light collected on the opposite end of the lens, it did so within an area that was larger than the wavelength of the photon’s light, meaning that the lens likely cannot produce a perfect image.

    “We can precisely ask the question during this photon exchange, what’s the size of the spot to which the photon gets recollected? And we found that it’s comparable to the wavelength of the photon, and not smaller,” Perczel says. “Perfect imaging would imply it would focus on an infinitely sharp spot. However, that is not what our quantum mechanical calculations showed us.”

    Going forward, the team hopes to work with experimentalists to test the quantum behaviors they observed in their modeling. In fact, in their paper, the team also briefly proposes a way to design a fish-eye lens for quantum entanglement experiments.

    “The fish-eye lens still has its secrets, and remarkable physics buried in it,” Perczel says. “But now it’s making an appearance in quantum technologies where it turns out this lens could be really useful for entangling distant quantum bits, which is the basic building block for building any useful quantum computer or quantum information processing device.”

    See the full article here.



    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.


     
  • richardmitnick 8:09 pm on August 23, 2018 Permalink | Reply
    Tags: 3-D x-ray imaging that can visualize bulky materials in great detail, HXN’s special optics called Multilayer Laue lenses (MLLs), Novel X-Ray Optics Boost Imaging Capabilities at NSLS-II, Optics

    From Brookhaven National Lab: “Novel X-Ray Optics Boost Imaging Capabilities at NSLS-II” 

    From Brookhaven National Lab

    August 23, 2018
    Rebecca Wilkin
    rebeccalwilkin@gmail.com

    Brookhaven Lab scientists capture high-resolution, 3-D images of thick materials more efficiently than ever before.

    NSLS-II scientist Hande Öztürk stands next to the Hard X-ray Nanoprobe (HXN) beamline, where her research team developed the new x-ray imaging technique. No photo credit.

    Scientists at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory have developed a new approach to 3-D x-ray imaging that can visualize bulky materials in great detail—an impossible task with conventional imaging methods. The novel technique could help scientists unlock clues about the structural information of countless materials, from batteries to biological systems.

    The scientists developed their approach at Brookhaven’s National Synchrotron Light Source II (NSLS-II)—a DOE Office of Science User Facility where scientists use ultra-bright x-rays to reveal details at the nanoscale. The team is located at NSLS-II’s Hard X-ray Nanoprobe (HXN) beamline, an experimental station that uses advanced lenses to offer world-leading resolution, all the way down to 10 nanometers—about one ten-thousandth the diameter of a human hair.

    HXN produces remarkably high-resolution images that can provide scientists with a comprehensive view of different material properties in 2-D and 3-D. The beamline also has a unique combination of in situ and operando capabilities—methods of studying materials in real-life operating conditions. However, scientists who use x-ray microscopes have been restricted by the size and thickness of the materials they can study.

    “The x-ray imaging community is still facing major challenges in fully exploiting the potential of beamlines like HXN, especially for obtaining high-resolution details from thick samples,” said Yong Chu, lead beamline scientist at HXN. “Obtaining quality, high-resolution images can become challenging when a material is thick—that is, thicker than the x-ray optics’ depth of focus.”

    Now, scientists at HXN have developed an efficient approach to studying thick samples without sacrificing the excellent resolution that HXN provides. They describe their approach in a paper published in the journal Optica.

    “The ultimate goal of our research is to break the technical barrier imposed on sample thickness and develop a new way of performing 3-D imaging—one that involves mathematically slicing through the sample,” said Xiaojing Huang, a scientist at HXN and a co-author of the paper.

    The research team is pictured at the HXN workstation. Standing, from left to right, are Xiaojing Huang, Hanfei Yan, Evgeny Nazaretski, Yong Chu, Mingyuan Ge, and Zhihua Dong. Sitting, from left to right, are Hande Öztürk and Meifeng Lin. Not pictured: Ian Robinson.

    The conventional method of obtaining a 3-D image involves collecting and combining a series of 2-D images. To obtain these 2-D images, the scientists typically rotate the sample 180 degrees; however, large samples cannot easily rotate within the limited space of typical x-ray microscopes. This limitation, in addition to the challenge of imaging thick samples, makes it nearly impossible to reconstruct a 3-D image with high resolution.

    “Instead of collecting a series of 2-D projections by rotating the sample, we simply ‘slice’ the thick material into a series of thin layers,” said lead author Hande Öztürk. “This slicing process is carried out mathematically without physically modifying the sample.”
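The "mathematical slicing" idea can be sketched with the standard multislice forward model used across x-ray and electron microscopy: the field is multiplied by each thin slice's transmission function, then propagated through free space to the next slice. The 1-D toy below uses angular-spectrum propagation; the grid size, wavelength, slice spacing, and phase objects are assumptions for illustration (the actual technique inverts a model of this kind to recover the slices):

```python
import cmath, math

# 1-D multislice sketch: alternate (multiply by a slice's transmission)
# and (angular-spectrum free-space propagation to the next slice).
N = 64                 # samples across the field
WAVELENGTH = 1.0e-10   # ~hard-x-ray wavelength, metres (assumed)
PIXEL = 1.0e-8         # sample spacing, metres (assumed)
DZ = 1.0e-5            # distance between slices, metres (assumed)

def dft(f, sign):
    """Unnormalised discrete Fourier transform (sign=-1 forward, +1 inverse)."""
    return [sum(f[m] * cmath.exp(sign * 2j * math.pi * k * m / N)
                for m in range(N)) for k in range(N)]

def propagate(field, dz):
    """Angular-spectrum free-space propagation over distance dz."""
    F = dft(field, -1)
    out = []
    for k in range(N):
        fx = (k if k <= N // 2 else k - N) / (N * PIXEL)  # spatial frequency
        kz = 2 * math.pi / WAVELENGTH * math.sqrt(1.0 - (WAVELENGTH * fx) ** 2)
        out.append(F[k] * cmath.exp(1j * kz * dz))        # phase per frequency
    return [v / N for v in dft(out, +1)]                  # inverse transform

# Two phase-only slices standing in for a thick sample.
slice1 = [cmath.exp(0.3j * math.sin(2 * math.pi * m / N)) for m in range(N)]
slice2 = [cmath.exp(0.2j * math.cos(2 * math.pi * m / N)) for m in range(N)]

field = [1.0 + 0j] * N                                    # incident plane wave
for t in (slice1, slice2):
    field = [f * tn for f, tn in zip(field, t)]           # pass through slice
    field = propagate(field, DZ)                          # free space onward

energy = sum(abs(f) ** 2 for f in field)
print(f"total intensity after two slices: {energy:.3f}")  # ≈ 64 (lossless)
```

Because both slices here are phase-only and propagation is lossless, the total intensity is conserved, which is a handy sanity check on the forward model.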

    Their technique benefits from HXN’s special optics, called Multilayer Laue lenses (MLLs), which are engineered to focus x-rays into a tiny point. These lenses create favorable conditions for studying thinner slices of thick materials, while also reducing the measurement time.

    “HXN’s unique MLLs have a high focusing efficiency, so we can spend much less time collecting the signal we need,” said Hanfei Yan, a scientist at HXN and a co-author of the paper.

    By combining the MLL optics and the multi-slice approach, the HXN scientists were able to visualize two layers of nanoparticles separated by only 10 microns—about one tenth the diameter of a human hair—and with a resolution 100 times smaller. Additionally, the method significantly cut down the time needed to obtain a single image.

    “This development provides an exciting opportunity to perform 3-D imaging on samples that are very difficult to image with conventional methods—for example, a battery with a complicated electrochemical cell,” said Chu. He added that this approach could be very useful for a wide variety of future research applications.

    This study was supported by Brookhaven Lab’s Laboratory Directed Research and Development program. Operations at NSLS-II are supported by DOE’s Office of Science.

    See the full article here.




    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

     
  • richardmitnick 8:14 am on August 20, 2018 Permalink | Reply
    Tags: Lincoln Laboratory undersea optical communications, Optics

    From MIT News: “Advancing undersea optical communications” 


    From MIT News

    A remotely operated vehicle and undersea terminal emits a coarse acquisition stabilized beam after locking onto another lasercom terminal. Photo: Nicole Fandel

    Staff performed tests with the undersea optical communications system at the Boston Sports Club pool in Lexington, proving that two underwater vehicles could efficiently search and locate each other. After detecting the remote terminal’s beacon, the local terminal is able to lock on and pull into coarse track in less than one second. Photo courtesy of the research team.

    Lincoln Laboratory researchers are applying narrow-beam laser technology to enable communications between underwater vehicles.

    Nearly five years ago, NASA and Lincoln Laboratory made history when the Lunar Laser Communication Demonstration (LLCD) used a pulsed laser beam to transmit data from a satellite orbiting the moon to Earth — more than 239,000 miles — at a record-breaking download speed of 622 megabits per second.



    Now, researchers at Lincoln Laboratory are aiming to once again break new ground by applying the laser beam technology used in LLCD to underwater communications.

    “Both our undersea effort and LLCD take advantage of very narrow laser beams to deliver the necessary energy to the partner terminal for high-rate communication,” says Stephen Conrad, a staff member in the Control and Autonomous Systems Engineering Group, who developed the pointing, acquisition, and tracking (PAT) algorithm for LLCD. “In regard to using narrow-beam technology, there is a great deal of similarity between the undersea effort and LLCD.”

    However, undersea laser communication (lasercom) presents its own set of challenges. In the ocean, laser beams are hampered by significant absorption and scattering, which restrict both the distance the beam can travel and the data signaling rate. To address these problems, the Laboratory is developing narrow-beam optical communications that use a beam from one underwater vehicle pointed precisely at the receive terminal of a second underwater vehicle.

    This technique contrasts with the more common undersea communication approach that sends the transmit beam over a wide angle but reduces the achievable range and data rate. “By demonstrating that we can successfully acquire and track narrow optical beams between two mobile vehicles, we have taken an important step toward proving the feasibility of the laboratory’s approach to achieving undersea communication that is 10,000 times more efficient than other modern approaches,” says Scott Hamilton, leader of the Optical Communications Technology Group, which is directing this R&D into undersea communication.

    Most above-ground autonomous systems rely on the use of GPS for positioning and timing data; however, because GPS signals do not penetrate the surface of water, submerged vehicles must find other ways to obtain these important data. “Underwater vehicles rely on large, costly inertial navigation systems, which combine accelerometer, gyroscope, and compass data, as well as other data streams when available, to calculate position,” says Thomas Howe of the research team. “The position calculation is noise sensitive and can quickly accumulate errors of hundreds of meters when a vehicle is submerged for significant periods of time.”

    This positional uncertainty can make it difficult for an undersea terminal to locate and establish a link with incoming narrow optical beams. For this reason, “We implemented an acquisition scanning function that is used to quickly translate the beam over the uncertain region so that the companion terminal is able to detect the beam and actively lock on to keep it centered on the lasercom terminal’s acquisition and communications detector,” researcher Nicolas Hardy explains. Using this methodology, two vehicles can locate, track, and effectively establish a link, despite the independent movement of each vehicle underwater.

    Once the two lasercom terminals have locked onto each other and are communicating, the relative position between the two vehicles can be determined very precisely by using wide bandwidth signaling features in the communications waveform. With this method, the relative bearing and range between vehicles can be known precisely, to within a few centimeters, explains Howe, who worked on the undersea vehicles’ controls.
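The ranging principle is plain time-of-flight: measure the round-trip delay of a timing feature in the waveform and convert it to distance using the speed of light in seawater. A minimal sketch, with illustrative numbers (the refractive index of seawater and the example delay are assumptions, not values from the article):

```python
# Time-of-flight ranging between two lasercom terminals underwater.
C_VACUUM = 299_792_458.0       # speed of light in vacuum, m/s
N_WATER = 1.34                 # approximate refractive index of seawater
v = C_VACUUM / N_WATER         # light speed in water, ~2.24e8 m/s

round_trip_delay = 1.0e-6      # measured round-trip delay, seconds (assumed)
distance = v * round_trip_delay / 2.0
print(f"range ≈ {distance:.1f} m")

# Centimetre-level precision requires resolving the delay to well under a
# nanosecond, which is why a wide-bandwidth waveform is needed.
timing_resolution = 2 * 0.01 / v   # delay change corresponding to 1 cm
print(f"needed timing resolution ≈ {timing_resolution * 1e12:.0f} ps")
```

The second figure shows why the communications waveform itself is the ideal ranging signal: its sub-nanosecond features provide exactly the timing resolution a dedicated ranging system would otherwise need.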

    To test their underwater optical communications capability, six members of the team recently completed a demonstration of precision beam pointing and fast acquisition between two moving vehicles in the Boston Sports Club pool in Lexington, Massachusetts. Their tests proved that two underwater vehicles could search for and locate each other in the pool within one second. Once linked, the vehicles could potentially use their established link to transmit hundreds of gigabytes of data in one session.

    This summer, the team is traveling to regional field sites to demonstrate this new optical communications capability to U.S. Navy stakeholders. One demonstration will involve underwater communications between two vehicles in an ocean environment — similar to prior testing that the Laboratory undertook at the Naval Undersea Warfare Center in Newport, Rhode Island, in 2016. The team is planning a second exercise to demonstrate communications from above the surface of the water to an underwater vehicle — a proposition that has previously proven to be nearly impossible.

    The undersea communication effort could tap into innovative work conducted by other groups at the laboratory. For example, integrated blue-green optoelectronic technologies, including gallium nitride laser arrays and silicon Geiger-mode avalanche photodiode array technologies, could lead to lower size, weight, and power terminal implementation and enhanced communication functionality.

    In addition, the ability to move data at megabit- to gigabit-per-second transfer rates over distances that vary from tens of meters in turbid waters to hundreds of meters in clear ocean waters will enable undersea system applications that the laboratory is exploring.

    Howe, who has done a significant amount of work with underwater vehicles, both before and after coming to the laboratory, says the team’s work could transform undersea communications and operations. “High-rate, reliable communications could completely change underwater vehicle operations and take a lot of the uncertainty and stress out of the current operation methods.”

    See the full article here.



     
  • richardmitnick 2:30 pm on December 21, 2017 Permalink | Reply
    Tags: An easy-to-build camera that produces 3D images from a single 2D image without any lenses, DiffuserCam, New Lensless Camera Creates Detailed 3D Images Without Scanning, Optics, OSA – The Optical Society, The project is funded by DARPA’s Neural Engineering System Design program

    From OSA: “New Lensless Camera Creates Detailed 3D Images Without Scanning” 

    The Optical Society

    21 December 2017

    Innovative computational imaging approach could advance applications from brain research to self-driving cars.

    Researchers have developed an easy-to-build camera that produces 3D images from a single 2D image without any lenses. In an initial application of the technology, the researchers plan to use the new camera, which they call DiffuserCam, to watch microscopic neuron activity in living mice without a microscope. Ultimately, it could prove useful for a wide range of applications involving 3D capture.

    The camera is compact and inexpensive to construct because it consists of only a diffuser – essentially a bumpy piece of plastic – placed on top of an image sensor. Although the hardware is simple, the software it uses to reconstruct high resolution 3D images is very complex.

    “The DiffuserCam can, in a single shot, capture 3D information in a large volume with high resolution,” said research team leader Laura Waller of the University of California, Berkeley. “We think the camera could be useful for self-driving cars, where the 3D information can offer a sense of scale, or it could be used with machine learning algorithms to perform face detection, track people or automatically classify objects.”

    In Optica, The Optical Society’s journal for high impact research, the researchers show that the DiffuserCam can be used to reconstruct 100 million voxels, or 3D pixels, from a 1.3-megapixel (1.3 million pixels) image without any scanning. For comparison, the iPhone X camera takes 12-megapixel photos. The researchers used the camera to capture the 3D structure of leaves from a small plant.
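Recovering far more voxels than measured pixels is possible because the scene is sparse and the diffuser mixes it in a known way, so reconstruction becomes a sparse inverse problem. The toy below recovers a sparse signal from fewer measurements than unknowns with ISTA (iterative soft-thresholding), one standard solver for such problems; the dense random matrix, sizes, and seed are illustrative stand-ins, since a real DiffuserCam reconstruction uses a calibrated convolution model at far larger scale:

```python
import math, random

# Sparse recovery toy: measurement y = A x with sparse x, solved by ISTA.
random.seed(0)
m, n, sparsity = 25, 40, 3

A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
x_true = [0.0] * n
for idx, val in zip(random.sample(range(n), sparsity), (1.5, -2.0, 1.0)):
    x_true[idx] = val
y = [sum(A[i][j] * x_true[j] for j in range(n)) for i in range(m)]

def matvec(M, v):      # M @ v
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def rmatvec(M, v):     # M.T @ v
    return [sum(M[i][j] * v[i] for i in range(len(v))) for j in range(len(M[0]))]

# Estimate the largest eigenvalue of A.T A (gradient Lipschitz constant)
# with power iterations, to pick a safe step size.
b = [1.0] * n
for _ in range(50):
    b = rmatvec(A, matvec(A, b))
    lam_max = math.sqrt(sum(v * v for v in b))
    b = [v / lam_max for v in b]
step, reg = 1.0 / lam_max, 0.5    # reg is the l1 penalty weight (assumed)

x = [0.0] * n
for _ in range(2000):
    resid = [yi - ai for yi, ai in zip(y, matvec(A, x))]
    grad = rmatvec(A, resid)
    x = [xi + step * gi for xi, gi in zip(x, grad)]               # gradient step
    x = [math.copysign(max(abs(v) - step * reg, 0.0), v) for v in x]  # shrink

err = max(abs(a - t) for a, t in zip(x, x_true))
print(f"max reconstruction error: {err:.3f}")
```

The recovery succeeds even though the system is underdetermined (25 equations, 40 unknowns), which is the same leverage the DiffuserCam uses to turn 1.3 million pixels into 100 million voxels.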

    The lensless DiffuserCam consists of a diffuser placed in front of a sensor (bumps on the diffuser are exaggerated for illustration). The system turns a 3D scene into a 2D image on the sensor. After a one-time calibration, an algorithm is used to reconstruct 3D images computationally. The result is a 3D image reconstructed from a single 2D measurement. Image Credit: Laura Waller, University of California, Berkeley.

    “Our new camera is a great example of what can be accomplished with computational imaging — an approach that examines how hardware and software can be used together to design imaging systems,” said Waller. “We made a concerted effort to keep the hardware extremely simple and inexpensive. Although the software is very complicated, it can also be easily replicated or distributed, allowing others to create this type of camera at home.”

    A DiffuserCam can be created using any type of image sensor and can image objects that range from microscopic in scale all the way up to the size of a person. It offers a resolution in the tens of microns range when imaging objects close to the sensor. Although the resolution decreases when imaging a scene farther away from the sensor, it is still high enough to distinguish that one person is standing several feet closer to the camera than another person, for example.

    A simple approach to complex imaging

    The DiffuserCam is a relative of the light field camera, which captures how much light is striking a pixel on the image sensor as well as the angle from which the light hits that pixel. In a typical light field camera, an array of tiny lenses placed in front of the sensor is used to capture the direction of the incoming light, allowing computational approaches to refocus the image and create 3D images without the scanning steps typically required to obtain 3D information.

    Until now, light field cameras have been limited in spatial resolution because some spatial information is lost while collecting the directional information. Another drawback of these cameras is that the microlens arrays are expensive and must be customized for a particular camera or optical components used for imaging.

    “I wanted to see if we could achieve the same imaging capabilities using simple and cheap hardware,” said Waller. “If we have better algorithms, could the carefully designed, expensive microlens arrays be replaced with a plastic surface with a random pattern such as a bumpy piece of plastic?”

    After experimenting with various types of diffusers and developing the complex algorithms, Nick Antipa and Grace Kuo, students in Waller’s lab, discovered that Waller’s idea for a simple light field camera was possible. In fact, using the random bumps in privacy glass stickers, Scotch tape or plastic conference badge holders allowed the researchers to improve on traditional light field camera capabilities by using compressed sensing to avoid the typical loss of resolution that comes with microlens arrays.

    Although other light field cameras use lens arrays that are precisely designed and aligned, the exact size and shape of the bumps in the new camera’s diffuser are unknown. This means that a few images of a moving point of light must be acquired to calibrate the software prior to imaging. The researchers are working on a way to eliminate this calibration step by using the raw data for calibration. They also want to improve the accuracy of the software and make the 3D reconstruction faster.

    No microscope required

    The new camera will be used in a project at University of California Berkeley that aims to watch a million individual neurons while stimulating 1,000 of them with single-cell accuracy. The project is funded by DARPA’s Neural Engineering System Design program – part of the federal government’s BRAIN Initiative – to develop implantable, biocompatible neural interfaces that could eventually compensate for visual or hearing deficits.

    As a first step, the researchers want to create what they call a cortical modem that will “read” and “write” to the brains of animal models, much like the input-output activity of internet modems. The DiffuserCam will be the heart of the reading device for this project, which will also use special proteins that allow scientists to control neuronal activity with light.

    “Using this to watch neurons fire in a mouse brain could in the future help us understand more about sensory perception and provide knowledge that could be used to cure diseases like Alzheimer’s or mental disorders,” said Waller.

    Although newly developed imaging techniques can capture hundreds of neurons firing, how the brain works on larger scales is not fully understood. The DiffuserCam has the potential to provide that insight by imaging millions of neurons in one shot. Because the camera is lightweight and requires no microscope or objective lens, it can be attached to a transparent window in a mouse’s skull, allowing neuronal activity to be linked with behavior. Several arrays with overlying diffusers could be tiled to image large areas.

    A need for interdisciplinary designers

    “Our work shows that computational imaging can be a creative process that examines all parts of the optical design and algorithm design to create optical systems that accomplish things that couldn’t be done before or to use a simpler approach to something that could be done before,” Waller said. “This is a very powerful direction for imaging, but requires designers with optical and physics expertise as well as computational knowledge.”

    The new Berkeley Center for Computational Imaging, headed by Waller, is working to train more scientists in this interdisciplinary field. Scientists from the center also meet weekly with bioengineers, physicists and electrical engineers as well as experts in signal processing and machine learning to exchange ideas and to better understand the imaging needs of other fields.

    The open source software for the DiffuserCam is available on the project page: DiffuserCam: Lensless Single-exposure 3D Imaging.

    See the full article here.


     