Tagged: Photonics

  • richardmitnick 11:29 am on October 2, 2019 Permalink | Reply
    Tags: "Penn Engineers’ New Topological Insulator Can Reroute Photonic ‘Traffic’ On the Fly Making for Faster Chips", , Photonics, , , Using photons instead of electrons   

    From University of Pennsylvania Engineering: “Penn Engineers’ New Topological Insulator Can Reroute Photonic ‘Traffic’ On the Fly, Making for Faster Chips” 


    Topological insulators are a game-changing class of materials; charged particles can flow freely on their edges and route themselves around defects, but can’t pass through their interiors. This perfect surface conduction holds promise for fast and efficient electronic circuits, though engineers must contend with the fact that the interiors of such materials are effectively wasted space.

    The researchers’ chip features a tessellated grid of oval rings. By “pumping” individual rings with an external laser, they are able to dynamically redefine the path photons take. (Image: Penn Engineering)

    Now, researchers from the University of Pennsylvania, where topological insulators were first discovered in 2005, have shown a way to fulfill that promise in a field where physical space is at an even bigger premium: photonics. They have shown, for the first time, a way for a topological insulator to make use of its entire footprint.

    By using photons instead of electrons, photonic chips promise even faster data transfer speeds and information-dense applications, but the components necessary for building them remain considerably larger than their electronic counterparts, due to the lack of efficient data-routing architecture.

    A photonic topological insulator with edges that can be redefined on the fly, however, would help solve the footprint problem. Being able to route these “roads” around one another as needed means the entire interior bulk could be used to efficiently build data links.

    Researchers at Penn’s School of Engineering and Applied Science have built and tested such a device for the first time, publishing their findings in the journal Science.

    “This could have a big impact on large-information capacity applications, like 5G, or even 6G, cellphone networks,” says Liang Feng, assistant professor in Penn Engineering’s Departments of Materials Science and Engineering and Electrical and Systems Engineering.

    “We think this may be the first practical application of topological insulators,” he says.

    Liang Feng and Han Zhao

    Feng led the study along with graduate student Han Zhao, a member of his lab. Fellow lab members Xingdu Qiao, Tianwei Wu and Bikashkali Midya, along with Stefano Longhi, professor at the Polytechnic University of Milan in Italy, also contributed to the research.

    The data centers that form the backbone of communication networks route calls, texts, email attachments and streaming movies to and between millions of cellular devices. But as the amount of data flowing through these data centers increases, so does the need for high-capacity data routing that can keep up with the demand.

    Switching from electrons to photons would speed up this process for the upcoming information explosion, but engineers must first design a whole new library of devices for getting those photons from input to output without mixing them up and losing them in the process.

    Advances in data-processing speed in electronics have relied on making their core components smaller and smaller, but photonics researchers have needed to take a different approach.

    Feng, Zhao and their colleagues set out to maximize the complexity of photonic waveguides — the prescribed paths individual photons take on their way from input to output — on a given chip.

    Microscope details of the researchers’ photonic chip.

    The researchers’ prototype photonic chip is roughly 250 microns squared, and features a tessellated grid of oval rings. By “pumping” the chip with an external laser, targeted to alter the photonic properties of individual rings, they are able to alter which of those rings constitute the boundaries of a waveguide.

    The result is a reconfigurable topological insulator. By changing the pumping patterns, photons headed in different directions can be routed around each other, allowing photons from multiple data packets to travel through the chip simultaneously, like a complicated highway interchange.

    “We can define the edges such that photons can go from any input port to any output port, or even to multiple outputs at once,” Feng says. “That means the ports-to-footprint ratio is at least two orders of magnitude greater than current state-of-the-art photonic routers and switches.”
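The on-the-fly rerouting described above can be pictured as pathfinding on a grid whose walls are redrawn by the pump laser. The sketch below is a conceptual illustration only; the grid layout, the `route` function, and the pumping convention are invented for this example and are not the researchers' actual control scheme. Pumped rings act as waveguide boundaries, and a breadth-first search finds a photon path between ports.

```python
from collections import deque

def route(grid, start, goal):
    """Breadth-first search for a path through unpumped rings.
    grid[r][c] == 1 marks a pumped ring acting as a waveguide boundary."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and nxt not in seen and grid[nxt[0]][nxt[1]] == 0):
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Pumping the middle column forces the photon path around that "wall";
# repumping a different set of rings would define a different path.
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 1, 0]]
path = route(grid, (2, 0), (2, 2))
```

Redefining the edges is then just rewriting `grid`, which is the spirit of the reconfigurability the article describes.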

Increased efficiency and speed are not the only advantages of the researchers’ approach.

    “Our system is also robust against unexpected defects,” Zhao says. “If one of the rings is damaged by a grain of dust, for example, that damage is just making a new set of edges that we can send photons along.”

Since the system requires an off-chip laser source to redefine the shape of the waveguides, the researchers’ system is not yet small enough to be useful for data centers or other commercial applications. Next steps for the team will be to establish a fast reconfiguration scheme in an integrated fashion.

Support for this research comes from the U.S. Army Research Office through grant W911NF-19-1-0249, the National Science Foundation through grants ECCS-1846766 and CMMI-1635026, and the University of Pennsylvania Materials Research Science and Engineering Center (NSF MRSEC grant DMR-1720530). The work was carried out in part at the Singh Center for Nanotechnology, which is supported by the NSF National Nanotechnology Coordinated Infrastructure Program under grant NNCI-1542153.

See the full article here.


    Please help promote STEM in your local schools.

STEM Education Coalition


    Academic life at Penn is unparalleled, with 100 countries and every U.S. state represented in one of the Ivy League’s most diverse student bodies. Consistently ranked among the top 10 universities in the country, Penn enrolls 10,000 undergraduate students and welcomes an additional 10,000 students to our world-renowned graduate and professional schools.

    Penn’s award-winning educators and scholars encourage students to pursue inquiry and discovery, follow their passions, and address the world’s most challenging problems through an interdisciplinary approach.

  • richardmitnick 8:36 am on June 6, 2019 Permalink | Reply
    Tags: "Chip design drastically reduces energy needed to compute with light", AI accelerators, , , Mach-Zehnder interferometers, Machinel Learning, , Photonics,   

    From MIT News: “Chip design drastically reduces energy needed to compute with light” 


    June 5, 2019
    Rob Matheson

    A new photonic chip design drastically reduces energy needed to compute with light, with simulations suggesting it could run optical neural networks 10 million times more efficiently than its electrical counterparts. Image: courtesy of the researchers, edited by MIT News

    Simulations suggest photonic chip could run optical neural networks 10 million times more efficiently than its electrical counterparts.

    MIT researchers have developed a novel “photonic” chip that uses light instead of electricity — and consumes relatively little power in the process. The chip could be used to process massive neural networks millions of times more efficiently than today’s classical computers do.

    Neural networks are machine-learning models that are widely used for such tasks as robotic object identification, natural language processing, drug development, medical imaging, and powering driverless cars. Novel optical neural networks, which use optical phenomena to accelerate computation, can run much faster and more efficiently than their electrical counterparts.

    But as traditional and optical neural networks grow more complex, they eat up tons of power. To tackle that issue, researchers and major tech companies — including Google, IBM, and Tesla — have developed “AI accelerators,” specialized chips that improve the speed and efficiency of training and testing neural networks.

    For electrical chips, including most AI accelerators, there is a theoretical minimum limit for energy consumption. Recently, MIT researchers have started developing photonic accelerators for optical neural networks. These chips perform orders of magnitude more efficiently, but they rely on some bulky optical components that limit their use to relatively small neural networks.

    In a paper published in Physical Review X, MIT researchers describe a new photonic accelerator that uses more compact optical components and optical signal-processing techniques, to drastically reduce both power consumption and chip area. That allows the chip to scale to neural networks several orders of magnitude larger than its counterparts.

Simulated training of neural networks on the MNIST image-classification dataset suggests the accelerator can theoretically process neural networks with an energy consumption more than 10 million times below the limit of traditional electrical accelerators and about 1,000 times below the limit of earlier photonic accelerators. The researchers are now working on a prototype chip to experimentally prove the results.

    “People are looking for technology that can compute beyond the fundamental limits of energy consumption,” says Ryan Hamerly, a postdoc in the Research Laboratory of Electronics. “Photonic accelerators are promising … but our motivation is to build a [photonic accelerator] that can scale up to large neural networks.”

    Practical applications for such technologies include reducing energy consumption in data centers. “There’s a growing demand for data centers for running large neural networks, and it’s becoming increasingly computationally intractable as the demand grows,” says co-author Alexander Sludds, a graduate student in the Research Laboratory of Electronics. The aim is “to meet computational demand with neural network hardware … to address the bottleneck of energy consumption and latency.”

    Joining Sludds and Hamerly on the paper are: co-author Liane Bernstein, an RLE graduate student; Marin Soljacic, an MIT professor of physics; and Dirk Englund, an MIT associate professor of electrical engineering and computer science, a researcher in RLE, and head of the Quantum Photonics Laboratory.

    Compact design

    Neural networks process data through many computational layers containing interconnected nodes, called “neurons,” to find patterns in the data. Neurons receive input from their upstream neighbors and compute an output signal that is sent to neurons further downstream. Each input is also assigned a “weight,” a value based on its relative importance to all other inputs. As the data propagate “deeper” through layers, the network learns progressively more complex information. In the end, an output layer generates a prediction based on the calculations throughout the layers.

    All AI accelerators aim to reduce the energy needed to process and move around data during a specific linear algebra step in neural networks, called “matrix multiplication.” There, neurons and weights are encoded into separate tables of rows and columns and then combined to calculate the outputs.
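For one layer, the matrix-multiplication step described above reduces to a matrix-vector product. A minimal NumPy sketch, with sizes and values chosen arbitrarily for illustration:

```python
import numpy as np

# One layer: 3 input neurons, 2 output neurons.
# Each row of W holds the weights feeding one output neuron.
x = np.array([0.5, -1.0, 2.0])           # input neuron activations
W = np.array([[0.1, 0.2, 0.3],
              [0.4, 0.5, 0.6]])           # weight matrix (outputs x inputs)

y = W @ x                                 # the "matrix multiplication" step
```

Every AI accelerator, electronic or photonic, is ultimately trying to make this `W @ x` step cheaper to compute and to move data through.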

In traditional photonic accelerators, pulsed lasers encoded with information about each neuron in a layer flow into waveguides and through beam splitters. The resulting optical signals are fed into a grid of square optical components, called “Mach-Zehnder interferometers,” which are programmed to perform matrix multiplication. The interferometers, which are encoded with information about each weight, use signal-interference techniques that process the optical signals and weight values to compute an output for each neuron. But there’s a scaling issue: For each neuron there must be one waveguide and, for each weight, there must be one interferometer. Because the number of weights grows as the square of the number of neurons, those interferometers take up a lot of real estate.

    “You quickly realize the number of input neurons can never be larger than 100 or so, because you can’t fit that many components on the chip,” Hamerly says. “If your photonic accelerator can’t process more than 100 neurons per layer, then it makes it difficult to implement large neural networks into that architecture.”

    The researchers’ chip relies on a more compact, energy efficient “optoelectronic” scheme that encodes data with optical signals, but uses “balanced homodyne detection” for matrix multiplication. That’s a technique that produces a measurable electrical signal after calculating the product of the amplitudes (wave heights) of two optical signals.

Pulses of light encoded with information about the input and output neurons for each neural network layer — which are needed to train the network — flow through a single channel. Separate pulses encoded with information of entire rows of weights in the matrix multiplication table flow through separate channels. Optical signals carrying the neuron and weight data fan out to a grid of homodyne photodetectors. The photodetectors use the amplitude of the signals to compute an output value for each neuron. Each detector feeds an electrical output signal for each neuron into a modulator, which converts the signal back into a light pulse. That optical signal becomes the input for the next layer, and so on.

    The design requires only one channel per input and output neuron, and only as many homodyne photodetectors as there are neurons, not weights. Because there are always far fewer neurons than weights, this saves significant space, so the chip is able to scale to neural networks with more than a million neurons per layer.
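A back-of-envelope comparison of component counts under the scaling just described: one interferometer per weight for the traditional mesh, versus one channel plus one homodyne detector per neuron for the optoelectronic scheme. The helper functions are ours, for illustration only:

```python
def mzi_mesh_components(n_neurons):
    # Traditional mesh: roughly one Mach-Zehnder interferometer per weight,
    # and the number of weights grows as the square of the neuron count.
    return n_neurons ** 2

def homodyne_components(n_neurons):
    # Optoelectronic scheme: one channel plus one homodyne detector per neuron.
    return 2 * n_neurons

# At the ~100-neuron ceiling mentioned above, the mesh already needs
# 10,000 interferometers; the homodyne scheme needs about 200 components.
mesh_100 = mzi_mesh_components(100)
homodyne_100 = homodyne_components(100)
```

At a million neurons per layer the gap is a trillion interferometers versus two million detectors and channels, which is why the new design scales where the mesh cannot.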

    Finding the sweet spot

    With photonic accelerators, there’s an unavoidable noise in the signal. The more light that’s fed into the chip, the less noise and greater the accuracy — but that gets to be pretty inefficient. Less input light increases efficiency but negatively impacts the neural network’s performance. But there’s a “sweet spot,” Bernstein says, that uses minimum optical power while maintaining accuracy.

That sweet spot for AI accelerators is measured in how many joules it takes to perform a single operation of multiplying two numbers, such as during matrix multiplication. Right now, traditional accelerators operate at the scale of picojoules, or trillionths of a joule. Photonic accelerators operate at the scale of attojoules, a millionth of a picojoule, making them about a million times more efficient.
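The unit arithmetic behind that comparison, spelled out. The constants are just SI prefixes, not measured values from the paper:

```python
PICOJOULE = 1e-12  # joules; per-multiply energy scale of electronic accelerators
ATTOJOULE = 1e-18  # joules; per-multiply energy scale of photonic accelerators

# Ratio of the two scales: a picojoule is a million attojoules.
improvement = PICOJOULE / ATTOJOULE
```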

    In their simulations, the researchers found their photonic accelerator could operate with sub-attojoule efficiency. “There’s some minimum optical power you can send in, before losing accuracy. The fundamental limit of our chip is a lot lower than traditional accelerators … and lower than other photonic accelerators,” Bernstein says.

See the full article here.


    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.


  • richardmitnick 4:15 pm on September 12, 2018 Permalink | Reply
Tags: Bowtie-funnel combo best for conducting light; team found answer in undergrad physics equation, Photonics

    From Vanderbilt University: “Bowtie-funnel combo best for conducting light; team found answer in undergrad physics equation” 


    Aug. 24, 2018
    Heidi Hall

    Running computers on virtually invisible beams of light rather than microelectronics would make them faster, lighter and more energy efficient. A version of that technology already exists in fiber optic cables, but they’re much too large to be practical inside a computer.

A Vanderbilt team found the answer in a formula familiar to college physics students – a solution so simple and elegant, it was tough for reviewers to believe. Professor Sharon Weiss, her doctoral student Shuren Hu, and collaborators at the IBM T. J. Watson Research Center and the University of Technology in Troyes, France, published the proof in today’s Science Advances, a peer-reviewed, open-access journal from AAAS.

They developed a structure, part bowtie and part funnel, that concentrates light powerfully and nearly indefinitely, as measured by a scanning near-field optical microscope. The points of the bowtie are separated by just 12 nanometers; the diameter of a human hair is 100,000 nanometers.

    The team combined a nanoscale air slot surrounded by silicon with a nanoscale silicon bar surrounded by air. (Vanderbilt University)

    “Light travels faster than electricity and doesn’t have the same heating issues as the copper wires currently carrying the information in computers,” said Weiss, Cornelius Vanderbilt Endowed Chair and Professor of Electrical Engineering, Physics and Materials Science and Engineering. “What is really special about our new research is that the use of the bowtie shape concentrates the light so that a small amount of input light becomes highly amplified in a small region. We can potentially use that for low-power manipulation of information on computer chips.”

    The team published its work as a theory two years ago in ACS Photonics, then partnered with Will Green’s silicon photonics team at IBM to fabricate a device that could prove it.

    The research began with Maxwell’s equations, which describe how light propagates in space and time. Using two principles from these equations and applying boundary conditions that account for materials used, Weiss and Hu combined a nanoscale air slot surrounded by silicon with a nanoscale silicon bar surrounded by air to make the bowtie shape.
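One of those boundary conditions can be sketched in textbook form: the normal component of the displacement field D is continuous across a silicon-air interface, so the electric field jumps up inside the air slot by the permittivity ratio. This is the standard slot-waveguide argument, with the silicon permittivity value (about 12 at near-infrared wavelengths) assumed here for illustration; it is not the paper's full derivation.

```latex
% Continuity of the normal displacement field at the Si/air interface:
\varepsilon_{\mathrm{Si}} E_{\mathrm{Si}}^{\perp}
  = \varepsilon_{\mathrm{air}} E_{\mathrm{air}}^{\perp}
\quad\Longrightarrow\quad
E_{\mathrm{air}}^{\perp}
  = \frac{\varepsilon_{\mathrm{Si}}}{\varepsilon_{\mathrm{air}}}\,
    E_{\mathrm{Si}}^{\perp}
  \approx 12\, E_{\mathrm{Si}}^{\perp}
```

This is why carving a nanoscale air slot into silicon concentrates the field in the slot rather than in the surrounding material.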

    “To increase optical energy density, there are generally two ways: focus light down to a small tiny space and trap light in that space as long as possible,” Hu said. “The challenge is not only to squeeze a comparatively elephant-size photon into refrigerator-size space, but also to keep the elephant voluntarily in the refrigerator for a long time. It has been a prevailing belief in photonics that you have to compromise between trapping time and trapping space: the harder you squeeze photons, the more eager they are to escape.”

The team developed a structure, part bowtie and part funnel, that conducts light powerfully and indefinitely, as measured by a scanning near-field optical microscope. (Ella Maru Studio)

    Weiss said she and Hu will continue working to improve their device and explore its possible application in future computer platforms.

This work was funded by National Science Foundation GOALI grant ECCS-1407777.

See the full article here.



    Commodore Cornelius Vanderbilt was in his 79th year when he decided to make the gift that founded Vanderbilt University in the spring of 1873.

    The $1 million that he gave to endow and build the university was the commodore’s only major philanthropy. Methodist Bishop Holland N. McTyeire of Nashville, husband of Amelia Townsend who was a cousin of the commodore’s young second wife Frank Crawford, went to New York for medical treatment early in 1873 and spent time recovering in the Vanderbilt mansion. He won the commodore’s admiration and support for the project of building a university in the South that would “contribute to strengthening the ties which should exist between all sections of our common country.”

    McTyeire chose the site for the campus, supervised the construction of buildings and personally planted many of the trees that today make Vanderbilt a national arboretum. At the outset, the university consisted of one Main Building (now Kirkland Hall), an astronomical observatory and houses for professors. Landon C. Garland was Vanderbilt’s first chancellor, serving from 1875 to 1893. He advised McTyeire in selecting the faculty, arranged the curriculum and set the policies of the university.

    For the first 40 years of its existence, Vanderbilt was under the auspices of the Methodist Episcopal Church, South. The Vanderbilt Board of Trust severed its ties with the church in June 1914 as a result of a dispute with the bishops over who would appoint university trustees.

From the outset, Vanderbilt met two definitions of a university: It offered work in the liberal arts and sciences beyond the baccalaureate degree and it embraced several professional schools in addition to its college. James H. Kirkland, the longest serving chancellor in university history (1893-1937), followed Chancellor Garland. He guided Vanderbilt to rebuild after a fire in 1905 that consumed the main building, which was renamed in Kirkland’s honor, and all its contents. He also navigated the university through the separation from the Methodist Church. Notable advances in graduate studies were made under the third chancellor, Oliver Cromwell Carmichael (1937-46). He also created the Joint University Library, brought about by a coalition of Vanderbilt, Peabody College and Scarritt College.

    Remarkable continuity has characterized the government of Vanderbilt. The original charter, issued in 1872, was amended in 1873 to make the legal name of the corporation “The Vanderbilt University.” The charter has not been altered since.

    The university is self-governing under a Board of Trust that, since the beginning, has elected its own members and officers. The university’s general government is vested in the Board of Trust. The immediate government of the university is committed to the chancellor, who is elected by the Board of Trust.

    The original Vanderbilt campus consisted of 75 acres. By 1960, the campus had spread to about 260 acres of land. When George Peabody College for Teachers merged with Vanderbilt in 1979, about 53 acres were added.

Vanderbilt’s student enrollment tended to double itself each 25 years during the first century of the university’s history: 307 in the fall of 1875; 754 in 1900; 1,377 in 1925; 3,529 in 1950; 7,034 in 1975. In the fall of 1999 the enrollment was 10,127.

    In the planning of Vanderbilt, the assumption seemed to be that it would be an all-male institution. Yet the board never enacted rules prohibiting women. At least one woman attended Vanderbilt classes every year from 1875 on. Most came to classes by courtesy of professors or as special or irregular (non-degree) students. From 1892 to 1901 women at Vanderbilt gained full legal equality except in one respect — access to dorms. In 1894 the faculty and board allowed women to compete for academic prizes. By 1897, four or five women entered with each freshman class. By 1913 the student body contained 78 women, or just more than 20 percent of the academic enrollment.

    National recognition of the university’s status came in 1949 with election of Vanderbilt to membership in the select Association of American Universities. In the 1950s Vanderbilt began to outgrow its provincial roots and to measure its achievements by national standards under the leadership of Chancellor Harvie Branscomb. By its 90th anniversary in 1963, Vanderbilt for the first time ranked in the top 20 private universities in the United States.

    Vanderbilt continued to excel in research, and the number of university buildings more than doubled under the leadership of Chancellors Alexander Heard (1963-1982) and Joe B. Wyatt (1982-2000), only the fifth and sixth chancellors in Vanderbilt’s long and distinguished history. Heard added three schools (Blair, the Owen Graduate School of Management and Peabody College) to the seven already existing and constructed three dozen buildings. During Wyatt’s tenure, Vanderbilt acquired or built one-third of the campus buildings and made great strides in diversity, volunteerism and technology.

The university grew and changed significantly under its seventh chancellor, Gordon Gee, who served from 2000 to 2007. Vanderbilt led the country in the rate of growth for academic research funding, which increased to more than $450 million, and the university became one of the most selective undergraduate institutions in the country.

    On March 1, 2008, Nicholas S. Zeppos was named Vanderbilt’s eighth chancellor after serving as interim chancellor beginning Aug. 1, 2007. Prior to that, he spent 2002-2008 as Vanderbilt’s provost, overseeing undergraduate, graduate and professional education programs as well as development, alumni relations and research efforts in liberal arts and sciences, engineering, music, education, business, law and divinity. He first came to Vanderbilt in 1987 as an assistant professor in the law school. In his first five years, Zeppos led the university through the most challenging economic times since the Great Depression, while continuing to attract the best students and faculty from across the country and around the world. Vanderbilt got through the economic crisis notably less scathed than many of its peers and began and remained committed to its much-praised enhanced financial aid policy for all undergraduates during the same timespan. The Martha Rivers Ingram Commons for first-year students opened in 2008 and College Halls, the next phase in the residential education system at Vanderbilt, is on track to open in the fall of 2014. During Zeppos’ first five years, Vanderbilt has drawn robust support from federal funding agencies, and the Medical Center entered into agreements with regional hospitals and health care systems in middle and east Tennessee that will bring Vanderbilt care to patients across the state.

Today, Vanderbilt University is a private research university of about 6,500 undergraduates and 5,300 graduate and professional students. The university comprises 10 schools, a public policy center and The Freedom Forum First Amendment Center. Vanderbilt offers undergraduate programs in the liberal arts and sciences, engineering, music, education and human development as well as a full range of graduate and professional degrees. The university is consistently ranked as one of the nation’s top 20 universities by publications such as U.S. News & World Report, with several programs and disciplines ranking in the top 10.

Cutting-edge research and liberal arts, combined with strong ties to a distinguished medical center, create an invigorating atmosphere where students tailor their education to meet their goals and researchers collaborate to solve complex questions affecting our health, culture and society.

    Vanderbilt, an independent, privately supported university, and the separate, non-profit Vanderbilt University Medical Center share a respected name and enjoy close collaboration through education and research. Together, the number of people employed by these two organizations exceeds that of the largest private employer in the Middle Tennessee region.

  • richardmitnick 12:02 pm on August 14, 2018 Permalink | Reply
Tags: Optics in cameras, Photonics

    From MIT News: “Novel optics for ultrafast cameras create new possibilities for imaging” 


    August 13, 2018
    Rob Matheson

    MIT researchers have developed novel photography optics, dubbed “time-folded optics,” that captures images based on the timing of reflecting light inside the lens, instead of the traditional approach that relies on the arrangement of optical components. The invention opens doors for new capabilities for ultrafast time- or depth-sensitive cameras. Courtesy of the researchers.

    Technique can capture a scene at multiple depths with one shutter click — no zoom lens needed.

    The new optics architecture includes a set of semireflective parallel mirrors that reduce, or “fold,” the focal length every time the light reflects between the mirrors. By placing the set of mirrors between the lens and sensor, the researchers condensed the distance of optics arrangement by an order of magnitude while still capturing an image of the scene.

In their study [Nature Photonics], the researchers demonstrate three uses for time-folded optics for ultrafast cameras and other depth-sensitive imaging devices. These cameras, also called “time-of-flight” cameras, measure the time that it takes for a pulse of light to reflect off a scene and return to a sensor, to estimate the depth of the 3-D scene.

    Co-authors on the paper are Matthew Tancik, a graduate student in the MIT Computer Science and Artificial Intelligence Laboratory; Guy Satat, a PhD student in the Camera Culture Group at the Media Lab; and Ramesh Raskar, an associate professor of media arts and sciences and director of the Camera Culture Group.

    Folding the optical path into time

The researchers’ system consists of a component that projects a femtosecond (quadrillionth of a second) laser pulse into a scene to illuminate target objects. Traditional photography optics change the shape of the light signal as it travels through the curved glasses. This shape change creates an image on the sensor. But, with the researchers’ optics, instead of heading right to the sensor, the signal first bounces back and forth between mirrors precisely arranged to trap and reflect light. Each one of these reflections is called a “round trip.” At each round trip, some light is captured by the sensor programmed to image at a specific time interval — for example, a 1-nanosecond snapshot every 30 nanoseconds.

    A key innovation is that each round trip of light moves the focal point — where a sensor is positioned to capture an image — closer to the lens. This allows the lens to be drastically condensed. Say a streak camera wants to capture an image with the long focal length of a traditional lens. With time-folded optics, the first round-trip pulls the focal point about double the length of the set of mirrors closer to the lens, and each subsequent round trip brings the focal point closer and closer still. Depending on the number of round trips, a sensor can then be placed very near the lens.

    By placing the sensor at a precise focal point, determined by total round trips, the camera can capture a sharp final image, as well as different stages of the light signal, each coded at a different time, as the signal changes shape to produce the image. (The first few shots will be blurry, but after several round trips the target object will come into focus.)

    In their paper, the researchers demonstrate this by imaging a femtosecond light pulse through a mask engraved with “MIT,” set 53 centimeters away from the lens aperture. To capture the image, the traditional 20-centimeter focal length lens would have to sit around 32 centimeters away from the sensor. The time-folded optics, however, pulled the image into focus after five round trips, with only a 3.1-centimeter lens-sensor distance.
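Those numbers are consistent with the ideal thin-lens equation, which we adopt here as an illustrative approximation; the 3.1-centimeter folded distance is taken from the article, not computed:

```python
# Thin-lens equation: 1/f = 1/d_object + 1/d_image
f = 20.0         # focal length of the traditional lens, in cm
d_object = 53.0  # distance from the "MIT" mask to the lens aperture, in cm

# Lens-to-sensor distance an unfolded system would need.
d_image = 1.0 / (1.0 / f - 1.0 / d_object)  # roughly 32 cm, as the article says
```

Five round trips between the semireflective mirrors fold most of that 32 centimeters away, leaving only the 3.1-centimeter lens-sensor spacing reported above.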

    This could be useful, Heshmat says, in designing more compact telescope lenses that capture, say, ultrafast signals from space, or for designing smaller and lighter lenses for satellites to image the surface of the ground.

    Multizoom and multicolor

    The researchers next imaged two patterns spaced about 50 centimeters apart from each other, but each within line of sight of the camera. An “X” pattern was 55 centimeters from the lens, and a “II” pattern was 4 centimeters from the lens. By precisely rearranging the optics — in part, by placing the lens in between the two mirrors — they shaped the light in a way that each round trip created a new magnification in a single image acquisition. In that way, it’s as if the camera zooms in with each round trip. When they shot the laser into the scene, the result was two separate, focused images, created in one shot — the X pattern captured on the first round trip, and the II pattern captured on the second round trip.

    The researchers then demonstrated an ultrafast multispectral (or multicolor) camera. They designed two color-reflecting mirrors and a broadband mirror: one tuned to reflect one color, set closer to the lens, and one tuned to reflect a second color, set farther back from the lens. They imaged a mask bearing an “A” and a “B,” with the A illuminated in the second color and the B in the first, each for a few tenths of a picosecond.

    When the light traveled into the camera, wavelengths of the first color immediately reflected back and forth in the first cavity, and their timing was clocked by the sensor. Wavelengths of the second color passed through the first cavity into the second, slightly delaying their arrival at the sensor. Because the researchers knew which wavelength would hit the sensor at which time, they could overlay the corresponding color onto each part of the image. This approach could be used in depth-sensing cameras, which currently record only infrared, Heshmat says.
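    To see why the two colors are cleanly separable in time: light travels roughly 0.3 mm per picosecond, so a second cavity sitting even a few millimeters behind the first delays its color by tens of picoseconds, well within a streak sensor's resolution. A toy sketch (the 5 mm mirror gap is an assumed, illustrative value, not from the paper):

```python
C_MM_PER_PS = 0.2998  # speed of light, in mm per picosecond

def extra_delay_ps(cavity_gap_mm, round_trips=1):
    """Extra arrival delay for wavelengths that also traverse the
    second, farther cavity (one extra round trip per pass)."""
    return 2.0 * cavity_gap_mm * round_trips / C_MM_PER_PS

def label_wavelength(arrival_ps, cutoff_ps):
    """Bin a detection event by arrival time: early events belong to
    the color reflected by the near mirror, late ones to the far mirror."""
    return "first color" if arrival_ps < cutoff_ps else "second color"

# With an assumed 5 mm gap between the two color-selective mirrors,
# the second color lags the first by ~33 ps.
print(f"{extra_delay_ps(5.0):.1f} ps")
```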

    One key feature of the paper, Heshmat says, is that it opens doors for many different optics designs by tweaking the cavity spacing, or by using different types of cavities, sensors, and lenses. “The core message is that when you have a camera that is fast, or has a depth sensor, you don’t need to design optics the way you did for old cameras. You can do much more with the optics by looking at them at the right time,” Heshmat says.

    This work “exploits the time dimension to achieve new functionalities in ultrafast cameras that utilize pulsed laser illumination. This opens up a new way to design imaging systems,” says Bahram Jalali, director of the Photonics Laboratory and a professor of electrical and computer engineering at the University of California, Los Angeles. “Ultrafast imaging makes it possible to see through diffusive media, such as tissue, and this work holds promise for improving medical imaging, in particular for intraoperative microscopes.”

    See the full article here .

    Please help promote STEM in your local schools.

    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

  • richardmitnick 7:31 am on April 30, 2018 Permalink | Reply
    Tags: Breaking bottlenecks to the electronic-photonic information technology revolution, Electro-optic modulator, New electro-optic devices, Photonics, Plasmonic modulator,   

    From University of Washington: “Breaking bottlenecks to the electronic-photonic information technology revolution” 


    University of Washington

    April 25, 2018
    Jackson Holtz

    This artistic rendering magnifies an electro-optic modulator. Virginia Commonwealth University image/Nathaniel Kinsey.

    Researchers at the University of Washington, working with researchers from ETH Zurich, Purdue University and Virginia Commonwealth University, have achieved an optical communications breakthrough that could revolutionize information technology.

    They created a tiny device, smaller than the width of a human hair, that translates electrical bits (the 0s and 1s of digital language) into light, or photonic bits, at speeds tens of times faster than current technologies.

    “As with earlier advances in information technology, this can dramatically impact the way we live,” said Larry Dalton, a UW chemistry professor emeritus and leader in photonics research.

    These new electro-optic devices approach the size of current electronic circuit elements and are important for integrating photonics and electronics on a single chip. The new technology also involves utilization of a particle, a plasmon polariton, that has properties intermediate between electrons and photons. This hybrid particle technology is referred to as plasmonics.

    The findings were published today in the journal Nature.

    “The device has been built as a plasmonic modulator,” said Christian Haffner, a graduate student at ETH-Zurich and lead author of the paper. “This is unusual as the traditional implementation relies on photonics rather than plasmonics. As a matter of fact, researchers avoid plasmonics, as plasmonics is known in all industry as a technology that comes at the price of highest optical losses. Yet – and this is by far the most spectacular finding – a trick has been found to use plasmonics without suffering from such high losses.”

    To increase the information-handling capacity of computing, telecommunications, sensing and control technologies, data needs to be communicated with high bandwidth over vast distances without signals (information) degrading, consuming too much energy or generating too much heat. That’s where the new technology described in the Nature article fits in. Called an electro-optic modulator, the device converts electrical signals into optical ones capable of traveling either over fiber-optic cabling or wirelessly through space via satellites and cell towers. This must be accomplished with excellent energy efficiency, using extremely small devices capable of processing massive amounts of data.
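    The article does not spell out the device physics, but modulators of this general class are Mach-Zehnder interferometers, which map drive voltage to optical intensity through a simple transfer function. A generic, idealized sketch (a textbook model, not the authors' specific plasmonic design; the sub-1-V half-wave voltage is an assumed illustrative value):

```python
import math

def mzm_transmission(v, v_pi):
    """Ideal Mach-Zehnder modulator: normalized output intensity for
    drive voltage v, given the half-wave voltage v_pi."""
    return math.cos(math.pi * v / (2.0 * v_pi)) ** 2

# On-off keying of an optical carrier: 0 V passes the light, v_pi
# extinguishes it.  A small half-wave voltage is what makes a
# modulator energy-efficient.
v_pi = 1.0
print(mzm_transmission(0.0, v_pi))   # full transmission
print(mzm_transmission(v_pi, v_pi))  # ~0, the "off" bit
```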

    “The device must be very sensitive, capable of responding to very small electrical fields. If the fields needed to control the device are small, then the power consumption is low as well. This is important as energy efficiency is critical to all applications,” co-author Dalton said, adding, “You want to avoid generating heat and information degradation in computing or telecommunication applications.”

    This latest advance follows a breakthrough in 2000, when Dalton and a team of UW and University of Southern California researchers first introduced newly designed electro-optic polymers, or plastics, which were integrated into centimeter-long devices that could be operated with less than a volt and with bandwidths exceeding 100 gigahertz. Unfortunately, these devices were much larger than electronic data-generating elements and were not suited for integrating electronic and photonic elements on a single chip.

    Transitioning to plasmonics has now solved this footprint issue. It all started when an international team of scientists and engineers set out to improve the device by integrating better organic electro-optic materials with plasmonics. Plasmons are created when light impinges on a metallic surface, such as gold: photons pass part of their energy to electrons on the metallic surface, setting them oscillating. These coupled photon-electron oscillations are called plasmon polaritons. Working with plasmon polaritons permits a dramatic reduction in the size of optical circuitry, with bandwidths many times those of photonics. Compared to the 2000 devices, bandwidth increased by almost a factor of 10 while energy requirements dropped by a factor of almost 1,000, which translates into far less heating.

    The Achilles’ heel of plasmonics, however, is optical loss. While signal degradation with transmission distance is not as bad as with electronics, it is much worse than with photonics.

    “The ETH and Purdue researchers conceived of an elegant device architecture that addresses the problem of plasmonic loss and achieves loss comparable to that of all-photonic modulators by using a combination of plasmonics and photonics,” Dalton said.

    He called the device an elegant integration of electronics, photonics and plasmonics, using an organic electro-optic material that permits integration of all of the signal processing options.

    “This is a doubly significant advance in plasmonics and organic electroactive materials, made possible through creative iteration between materials prediction, design, synthesis, and property optimization,” said Linda S. Sapochak, division director for materials research at the National Science Foundation, which helped fund the research.

    The integration of electronics and photonics on chips has been recognized for more than a decade as a critical next step in the evolution of information technology. Information technology is the science of how we sense our world and both process and communicate that information.

    The applications of the new device can be divided into two categories based on the wavelength of light utilized: Fiber optics telecommunications and optical interconnects in computing utilize light (photons) at optical frequencies (infrared light), while applications such as radar and wireless telecommunications use electromagnetic radiation in the radiofrequency and microwave (long wavelength light) regions.

    In the telecommunications and computing space, electro-optics takes information generated in an electronic device (for example, a computer processor) and transforms it into light signals that travel over a fiber optic cable or via a wireless transmission to another electronic device.

    “In that sense, you might think of electro-optics as the ‘on-ramps of the information superhighway,’” said Dalton.

    Electro-optics also is critical to many other applications such as radar and GPS. It represents critical sensor technology, including applications such as embedded network sensing. For example, electro-optics is critical to many components of an autonomous vehicle and for monitoring infrastructure elements such as buildings and bridges. The device is relevant to both digital and analog information processing.

    Co-authors include Daniel Chelladurai, Yuriy Fedoryshyn, Arne Josten, Benedikt Baeuerle, Wolfgang Heni, Tatsuhiko Watanabe, Tong Cui, Bojun Cheng and Juerg Leuthold of ETH Zurich Institute of Electromagnetic Fields; Delwin L. Elder of the UW Department of Chemistry; Soham Saha, Alexandra Boltasseva and Vladimir Shalaev, Purdue University and Brick Nanotechnology Center; and Nathaniel Kinsey, Virginia Commonwealth University.

    Funding for this project is from EU Project PLASMOFAB (688166), the ERC grant PLASILOR (640478), the National Science Foundation (DMR-1303080) and the Air Force Office of Scientific Research grants (FA9550-15-1-0319 and FA9550-14-1-0138). Co-author Kinsey acknowledges support from the Virginia Microelectronics Consortium and the Virginia Commonwealth University Presidential Research Quest Fund.


    The University of Washington is one of the world’s preeminent public universities. Our impact on individuals, on our region, and on the world is profound, whether we are launching young people into a boundless future or confronting the grand challenges of our time through undaunted research and scholarship. Ranked number 10 in the world in the Shanghai Jiao Tong University rankings and educating more than 54,000 students annually, our students and faculty work together to turn ideas into impact and, in the process, transform lives and our world.
    So what defines us, the students, faculty and community members at the University of Washington? Above all, it’s our belief in possibility and our unshakable optimism. It’s a connection to others, both near and far. It’s a hunger that pushes us to tackle challenges and pursue progress. It’s the conviction that together we can create a world of good. Join us on the journey.

  • richardmitnick 8:05 pm on October 15, 2017 Permalink | Reply
    Tags: , , , , , , Nanotechnology is a multidisciplinary field where chemistry medicine and engineering all intersect, , Photonics, Spherical nucleic acid (SNA) technology, Studying and manipulating molecules and materials with dimensions on the 1 to 100 nanometer length scale (1 nm = one billionth of a meter)   

    From Northwestern University: “Titans of nanotechnology: The next big thing is very small” 

    Northwestern University

    October 09, 2017

    Teri Odom and Chad Mirkin of the International Institute for Nanotechnology.

    World-renowned nanoscientists and chemists Chad Mirkin, the Director of the International Institute for Nanotechnology (IIN) at Northwestern University, and Teri Odom, the IIN’s Associate Director, sit down to discuss the golden age of miniaturization and how the “science of small things” is fostering major advances.

    The IIN, founded in 2000, is making major strides in nanotechnology and thriving in a big way. Nanoscience and technology — a field focused on studying and manipulating molecules and materials with dimensions on the 1 to 100 nanometer length scale (1 nm = one billionth of a meter) — was anticipated in 1959 by physicist Richard Feynman and made possible with the advent of the electron and scanning tunneling microscopes in the 1980s. It is engaging scientists from all over the world across many disciplines. They are using such tools to explore, and ultimately solve, some of the world’s most pressing issues in medicine, engineering, energy, and defense.

    We [interviewer is not named] sit in on a conversation between Mirkin and Odom to see where this exciting field is headed.

    Q: Your team discovered spherical nucleic acid (SNA) technology, where tiny particles can be decorated with short snippets of DNA or RNA. With the creation of SNAs, you’ve basically taken known molecules, reorganized them at the nanoscale into ball-like forms, and changed their properties. What is the potential of such a discovery, and what exciting breakthroughs are on the near horizon?

    Mirkin: Two really promising areas in which we are applying SNA technology are biomedicine and gene regulation — the idea that one can create ways of using DNA- and RNA-based SNAs as potent new drugs. For example, we can put SNAs into commercially available creams, like Aquaphor®, and apply them topically to treat diseases of the skin. There are more than 200 skin diseases with a known genetic basis, making the DNA- and RNA-based SNAs a general strategy for treating skin diseases. Conventional DNA and RNA constructs based on linear nucleic acids cannot be delivered in this way – they do not penetrate the skin. But, SNAs can because of their unique architecture that changes the way they interact with biological structures and in particular, receptors on skin cells that recognize them, but not linear DNA or RNA. SNAs can also be used to treat diseases of the bladder, colon, lung, and eye — organs and tissues that also are hard to treat using traditional means.

    Q: Nanotechnology is a multidisciplinary field where chemistry, medicine and engineering all intersect to create innovative solutions for a whole range of issues. One area is photonics, where advances at the nanoscale are changing how we communicate. How?

    Odom: We’re trying to reduce the size of lasers, which are typically macroscopic devices, down to the nanometer scale. The ability to design nanomaterials that can control the production and guiding of light — which is composed of individual particles called photons — can transform a range of different technologies. For example, communication based on photons (like in optical fibers) vs. electrons (like in copper wires) is faster and much more efficient. Applications that exploit light can readily be transformed by nanotechnology.

    Q: Nanotechnology has revolutionized the basic sciences, fast-tracking their translational impact. For example, your colleague Samuel Stupp, director of the Simpson Querrey Institute for BioNanotechnology at Northwestern, is on the verge of conducting clinical trials in spinal regeneration through “soft” nanotechnology breakthroughs. Has nanotechnology also revolutionized the traditional scientific method, too?

    Mirkin: The desire to come up with a solution to a given problem often leads scientists to develop new capabilities. That’s the thrilling thing about science in general, but about nanotechnology in particular: we often have goals, which are driven by engineering needs, but along the way we discover fundamentally interesting principles that we didn’t anticipate and that inform our view of the world around us. These discoveries take us down new paths — ones that might be even more interesting than the original ones we were on. This is the nature and importance of basic science research.

    Odom: Nano provides the fundamentals. But then, we adapt, based on these unanticipated properties, while still keeping our long-range goals in mind. That’s pretty neat. You can adjust in ways that keep discovery and creativity at the forefront. Without that, we all would be bored.

    Q: Nobel Prize winner Sir Fraser Stoddart, John Rogers, William Dichtel, Milan Mrksich and the aforementioned Stupp are just a few of the many big names in the Northwestern nanotechnology community. What is Northwestern doing right and what’s the global impact?

    Mirkin: These are heavy hitters, people who can go anywhere in the world, but they chose to come to Northwestern because they recognized that this is a very special time in our history. We are on an incredible trajectory here, and they want to be a part of it.

    Odom: We have a holistic way of training new faculty and graduate students because we want them to have a complete picture of everything that’s going on here. This is how we do science at Northwestern, and we really apply it to nanotechnology. Part of our success as a chemistry department has come from our ability to make things, to measure them, and to model them — I like to think of this integration as the “3Ms” principle. Our achievements in nanotechnology have been built on these three synergistic areas of expertise.

    Mirkin: It really starts with world-class talent, and then collaboration. You can collaborate all you want, but if you don’t have world-class talent, it doesn’t matter. Since we’re going all-in on the medical side, in 15 years I went from having zero collaborations with the medical school to now having 17. There is a natural interaction here between clinicians, scientists, and engineers that makes everyone’s work so much stronger. Within the next five years, I anticipate that there will be cancer treatments based upon nanotechnology that greatly improve outcomes and, in some subsets of diseases, actually lead to cures.



    On May 31, 1850, nine men gathered to begin planning a university that would serve the Northwest Territory.

    Given that they had little money, no land and limited higher education experience, their vision was ambitious. But through a combination of creative financing, shrewd politicking, religious inspiration and an abundance of hard work, the founders of Northwestern University were able to make that dream a reality.

    In 1853, the founders purchased a 379-acre tract of land on the shore of Lake Michigan 12 miles north of Chicago. They established a campus and developed the land near it, naming the surrounding town Evanston in honor of one of the University’s founders, John Evans. After completing its first building in 1855, Northwestern began classes that fall with two faculty members and 10 students.
    Twenty-one presidents have presided over Northwestern in the years since. The University has grown to include 12 schools and colleges, with additional campuses in Chicago and Doha, Qatar.

    Northwestern is recognized nationally and internationally for its educational programs.

  • richardmitnick 5:12 pm on May 4, 2017 Permalink | Reply
    Tags: , , , Photonics, ,   

    From physicsworld.com: “Optical chip gives microscopes nanoscale resolution” 


    May 3, 2017
    Michael Allen

    Super resolution: image taken using the new chip. No image credit.

    A photonic chip that allows a conventional microscope to work at nanoscale resolution has been developed by a team of physicists in Germany and Norway. The researchers claim that as well as opening up nanoscopy to many more people, the mass-producible optical chip also offers a much larger field of view than current nanoscopy techniques, which rely on complex microscopes.

    Nanoscopy, which is also known as super-resolution microscopy, allows scientists to see features smaller than the diffraction limit – about half the wavelength of visible light. It can be used to produce images with resolutions as high as 20–30 nm – approximately 10 times better than a normal microscope. Such techniques have important implications for biological and medical research, with the potential to provide new insights into disease and improve medical diagnostics.

    “The resolution of the standard optical microscope is basically limited by the diffraction barrier of light, which restricts the resolution to 200–300 nm for visible light,” explains Mark Schüttpelz, a physicist at Bielefeld University in Germany. “But many structures, especially biological structures like compartments of cells, are well below the diffraction limit. Here, super-resolution will open up new insights into cells, visualizing proteins ‘at work’ in the cell in order to understand structures and dynamics of cells.”
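    The 200–300 nm figure in the quote follows from the Abbe diffraction limit, d = λ/(2·NA), where NA is the objective's numerical aperture. A quick check with typical (assumed) objective values:

```python
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Abbe diffraction limit: the smallest resolvable feature size."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Green light (550 nm) with typical objectives (NA values assumed):
print(abbe_limit_nm(550, 1.4))  # high-NA oil immersion: ~196 nm
print(abbe_limit_nm(550, 1.0))  # water immersion: 275 nm
```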

    Expensive and complex

    There are a number of different nanoscopy techniques that rely on fluorescent dyes to label molecules within the specimen being imaged. A special microscope illuminates and determines the position of individual fluorescent molecules with nanometre precision to build up an image. The problem with these techniques, however, is that they use expensive and complex equipment. “It is not very straightforward to acquire super-resolved images,” says Schüttpelz. “Although there are some rather expensive nanoscopes on the market, trained and experienced operators are required to obtain high-quality images with nanometer resolution.”
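    The localization step common to these techniques can be sketched in a few lines: a single fluorophore's diffraction-blurred spot is fitted to find its center far more precisely than the spot's width. The toy example below uses a simple intensity-weighted centroid on made-up, noise-free data; real pipelines fit Gaussians to shot-noise-limited images:

```python
import math

def centroid_nm(positions_nm, intensities):
    """Intensity-weighted centroid of a single molecule's blurred spot."""
    total = sum(intensities)
    return sum(p * i for p, i in zip(positions_nm, intensities)) / total

# A deterministic stand-in for one fluorophore's image: a Gaussian
# point-spread function (~300 nm across) centered on a molecule at
# x = 103.7 nm, sampled every 10 nm.  All numbers are illustrative.
true_pos, sigma = 103.7, 130.0
xs = [float(x) for x in range(-400, 601, 10)]
psf = [math.exp(-((x - true_pos) ** 2) / (2.0 * sigma ** 2)) for x in xs]

# The spot is far wider than the diffraction limit, yet its center is
# recovered almost exactly; with photon shot noise, real localizations
# reach the 20-30 nm precision quoted above.
print(f"{centroid_nm(xs, psf):.1f} nm")
```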

    To tackle this, Schüttpelz and his colleagues turned current techniques on their head. Instead of using a complex microscope with a simple glass slide to hold the sample, their method uses a simple microscope for imaging combined with a complex, but mass-producible, optical chip to hold and illuminate the sample.

    “Our photonic chip technology can be retrofitted to any standard microscope to convert it into an optical nanoscope,” explains Balpreet Ahluwalia, a physicist at The Arctic University of Norway, who was also involved in the research.

    Etched channels

    The chip is essentially a waveguide that completely removes the need for the microscope to contain a light source that excites the fluorescent molecules. It consists of five 25–500 μm-wide channels etched into a combination of materials that causes total internal reflection of light.

    The chip is illuminated by two solid-state lasers that are coupled to the chip by a lens or lensed fibres. Light with two different wavelengths is tightly confined within the channels and illuminates the sample, which sits on top of the chip. A lens and camera on the microscope record the resulting fluorescent signal, and the data obtained are used to construct a high-resolution image of the sample.

    To test the effectiveness of the chip, the researchers imaged liver cells. They demonstrated that a field of view of 0.5 × 0.5 mm² can be achieved at a resolution of around 340 nm in less than half a minute. In principle, this is fast enough to capture live events in cells. For imaging times of up to 30 min, a similar field of view at a resolution better than 140 nm is possible. Resolutions below 50 nm are also achievable with the chip, but require higher-magnification lenses, which limit the field of view to around 150 μm.

    Many cells

    Ahluwalia told Physics World that the advantage of using the photonic chip for nanoscopy is that it “decouples illumination and detection light paths” and the “waveguide generates illumination over large fields of view”. He adds that this has enabled the team to acquire super-resolved images over an area 100 times larger than with other techniques. This makes single images of as many as 50 living cells possible.

    According to Schüttpelz, the technique represents “a paradigm shift in optical nanoscopy”. “Not only highly specialized laboratories will have access to super-resolution imaging, but many scientists all over the world can convert their standard microscope into a super-resolution microscope just by retrofitting the microscope in order to use waveguide chips,” he says. “Nanoscopy will then be available to everyone at low costs in the near future.”

    The chip is described in Nature Photonics.


    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

  • richardmitnick 12:29 pm on October 21, 2016 Permalink | Reply
    Tags: , , , Photonics   

    From Goddard: “Photonics Dawning as the Communications Light For Evolving NASA Missions” 


    NASA Goddard Space Flight Center

    Oct. 21, 2016
    Ashley Hume
    NASA’s Goddard Space Flight Center, Greenbelt, Md.

    A largely unrecognized field called photonics may provide solutions to some of NASA’s most pressing challenges in future spaceflight.

    Photonics explores the many applications of generating, detecting and manipulating photons, the particles of light that, among other things, make up laser beams. On this day in 1983, the General Conference on Weights and Measures adopted the accepted value for the speed of light, an important photonics milestone. Oct. 21, 2016, is Day of Photonics, a biennial event to raise awareness of photonics among the general public. The field has multiple applications across NASA missions, from space communications to reducing the size of mission payloads to performing altitude measurements from orbit.

    NASA is using photonics to solve some of the most pressing upcoming challenges in spaceflight, such as better data communications from space to Earth.
    Credits: NASA’s Goddard Space Flight Center/Amber Jacobson, producer.

    One major NASA priority is to use lasers to make space communications for both near-Earth and deep-space missions more efficient. NASA’s communications systems have matured over the decades, but they still use the same radio-frequency (RF) system developed in the earliest days of the agency. After more than 50 years of using solely RF, NASA is investing in new ways to increase data rates while also finding more efficient communications systems.

    Photonics may provide the solution. Several centers across NASA are experimenting with laser communications, which has the potential to provide data rates at least 10 to 100 times better than RF. These higher speeds would support increasingly sophisticated instruments and the transmission of live video from anywhere in the solar system. They would also increase the bandwidth for communications from human exploration missions in deep space, such as those associated with Journey to Mars.

    Conceptual animation depicting a satellite using lasers to relay data from Mars to Earth.
    Credits: NASA’s Goddard Space Flight Center

    NASA’s Goddard Space Flight Center in Greenbelt, Maryland, launched the first laser communications pathfinder mission in 2013. The Lunar Laser Communications Demonstration (LLCD) proved that a space-based laser communications system was viable and that the system could survive both launch and the space environment. But the mission was short-lived by design, as the host payload crashed into the lunar surface in a planned maneuver a few months after launch.

    The Goddard team is now planning a follow-on mission called the Laser Communications Relay Demonstration (LCRD) to prove the proposed system’s longevity. It will also provide engineers more opportunity to learn the best way to operate it for near-Earth missions.

    “We have been using RF since the beginning, 50 to 60 years, so we’ve learned a lot about how it works in different weather conditions and all the little things to allow us to make the most out of the technology, but we don’t have that experience with laser comm,” said Dave Israel, Exploration and Space Communications architect at Goddard and principal investigator on LCRD. “LCRD will allow us to test the performance over all different weather conditions and times of day and learn how to make the most of laser comm.”

    Scheduled to launch in 2019, LCRD will simulate real communications support, practicing for two years with a test payload on the International Space Station and two dedicated ground stations in California and Hawaii. The mission could be the last hurdle to implementing a constellation of laser communications relay satellites similar to the Space Network’s Tracking and Data Relay Satellites.

    NASA’s Jet Propulsion Laboratory in Pasadena, California, and Glenn Research Center in Cleveland are also following up on LLCD’s success. But both will focus on how laser communications could be implemented in deep-space missions.

    Missions to deep space impose special communication challenges because of their distance from Earth. Data from these missions slowly trickles back to the ground a little at a time using radio frequency. Laser communications could significantly improve data rates in all space regions, from low-Earth orbit to interplanetary space.

    JPL’s concept, called Deep Space Optical Communications (DSOC), focuses on laser communications’ benefits to data rates and to space and power constraints on missions. The data-rate benefits of laser communications for deep-space missions are clear, but less recognized is that laser communications can also save mass, space and/or power requirements on missions. That could be monumental on missions like the James Webb Space Telescope, which is so large that, even folded, it will barely fit in the largest rocket currently available. Although Webb is an extreme example, many missions today face size constraints as they become more complex. The Lunar Reconnaissance Orbiter mission carried both types of communications systems, and the laser system was half the mass, required 25 percent less power and transferred data at six times the rate of the RF system. Laser communications could also benefit a class of missions called CubeSats, which are about the size of a shoebox. These missions are becoming more popular and require miniaturized parts, including communications and power systems.

    Power requirements can become a major challenge on missions to the outer solar system. As spacecraft move away from the sun, solar power becomes less viable, so the less power a payload requires, the smaller the spacecraft battery can be, saving space, and the more easily spacecraft components can be recharged.

    Laser communications could help to solve all of these challenges.

    The team at Glenn is developing an idea called Integrated Radio and Optical Communications (iROC) to put a laser communications relay satellite in orbit around Mars that could receive data from distant spacecraft and relay their signal back to Earth. The system would use both RF and laser communications, promoting interoperability amongst all of NASA’s assets in space. By integrating both communications systems, iROC could provide services both for new spacecraft using laser communications systems and older spacecraft like Voyager 1 that use RF.

    But laser communications is not NASA’s only foray into photonics, nor is it the first. In fact, NASA began using lasers shortly after they were invented. Goddard successfully demonstrated satellite laser ranging, a technique to measure distances, in 1964.

    Satellite Laser Ranging is still managed at Goddard. The system uses laser stations worldwide to bounce short pulses of light off of special reflectors installed on satellites; reflectors also sit on the moon, placed there by Apollo astronauts and Soviet rovers. By timing the pulses’ round trips, engineers can compute distances and orbits, with measurements accurate to within a few millimeters. This application is used on numerous NASA missions, such as ICESat-2, which will measure the altitude of the ice surface in the Antarctic and Greenland regions, providing important information about climate and the health of Earth’s polar regions.
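
    The ranging arithmetic itself is simple: distance is the speed of light times half the measured round-trip time. A minimal sketch, using an illustrative pulse timing rather than real station data:

```python
# Laser-ranging arithmetic: distance = c * t / 2.
# The round-trip time below is illustrative, not a real measurement.

C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_round_trip(t_seconds: float) -> float:
    """Distance to the reflector, given a pulse's round-trip time."""
    return C * t_seconds / 2.0

# A pulse bounced off a lunar retroreflector returns after ~2.57 s:
print(f"{range_from_round_trip(2.568) / 1000:.0f} km")  # -> 384934 km
```

    Millimeter-level accuracy implies picosecond-level timing: light covers an extra 2 mm of round-trip path in under 7 picoseconds.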

    NASA’s Satellite Laser Ranging system consists of eight stations covering North America, the west coast of South America, the Pacific, South Africa and western Australia. NASA and its partners and associated universities operate the stations. SLR is part of the larger International Laser Ranging Service, and NASA’s contribution comprises more than a third of the organization’s total data volume.

    From communications to altimetry and navigation, photonics’ importance to NASA missions cannot be overstated. As technology continues to evolve, many photonics applications may come to fruition over the next several decades. Others may yet be discovered, especially as humanity pushes farther out into the universe than ever before.

    To find out more, visit http://day-of-photonics.org/.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NASA’s Goddard Space Flight Center is home to the nation’s largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.

    Named for American rocketry pioneer Dr. Robert H. Goddard, the center was established in 1959 as NASA’s first space flight complex. Goddard and its several facilities are critical in carrying out NASA’s missions of space exploration and scientific discovery.

  • richardmitnick 7:58 am on June 30, 2016
    Tags: Photonics

    From UCSB: “We’ll Leave the Lights On For You” 

    May 17, 2016
    Shelly Leachman

    Photonics advances allow us to be seen across the universe, with major implications for the search for extraterrestrial intelligence, says UC Santa Barbara physicist Philip Lubin.

    Photo Credit: iStock Photo

    Looking up at the night sky — expansive and seemingly endless, stars and constellations blinking and glimmering like jewels just out of reach — it’s impossible not to wonder: Are we alone?

    For many of us, the notion of intelligent life on other planets is as captivating as ideas come. Maybe in some other star system, maybe a billion light years away, there’s a civilization like ours asking the exact same question.

    Imagine if we sent up a visible signal that could eventually be seen across the entire universe. Imagine if another civilization did the same.

    The technology now exists to enable exactly that scenario, according to UC Santa Barbara physics professor Philip Lubin, whose new work applies his research and advances in directed-energy systems to the search for extraterrestrial intelligence (SETI). His recent paper “The Search for Directed Intelligence” appears in the journal REACH – Reviews in Human Space Exploration.

    “If even one other civilization existed in our galaxy and had a similar or more advanced level of directed-energy technology, we could detect ‘them’ anywhere in our galaxy with a very modest detection approach,” said Lubin, who leads the UCSB Experimental Cosmology Group. “If we scale it up as we’re doing with directed-energy systems, how far could we detect a civilization equivalent to ours? The answer becomes that the entire universe is now open to us.

    “Similar to the use of directed energy for relativistic interstellar probes and planetary defense that we have been developing, take that same technology and ask yourself, ‘What are consequences of that technology in terms of us being detectable by another ‘us’ in some other part of the universe?’” Lubin added. “Could we see each other? Can we behave as a lighthouse, or a beacon, and project our presence to some other civilization somewhere else in the universe? The profound consequences are, of course, ‘Where are they?’ Perhaps they are shy like us and do not want to be seen, or they don’t transmit in a way we can detect, or perhaps ‘they’ do not exist.”

    The same directed energy technology is at the core of Lubin’s recent efforts to develop minuscule, laser-powered interstellar spacecraft. That work, funded since 2015 by NASA (and just selected by the space agency for “Phase II” support), is the technology behind billionaire Yuri Milner’s newsmaking, $100-million Breakthrough Starshot initiative announced April 12.

    Lubin is a scientific advisor on Starshot, which is using his NASA research as a roadmap as it seeks to send tiny spacecraft to nearby star systems.

    In describing directed energy, Lubin likened the process to using the force of water from a garden hose to push a ball forward. Using a laser light, spacecraft can be pushed and steered in much the same way. Applied to SETI, he said, the directed energy system could be deployed to send a targeted signal to other planetary systems.
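
    The analogy has a simple quantitative core: light carries momentum, so a beam of power P pushes a perfectly reflecting sail with force 2P/c. A back-of-the-envelope sketch with hypothetical numbers (not actual Starshot design parameters):

```python
# Radiation-pressure force on a sail: a perfect mirror feels 2P/c.
# All numbers below are illustrative, not mission parameters.

C = 299_792_458.0  # speed of light, m/s

def radiation_force(power_watts: float, reflectivity: float = 1.0) -> float:
    """Force from an incident beam; fully reflective -> 2P/c, absorbing -> P/c."""
    return (1.0 + reflectivity) * power_watts / C

force = radiation_force(100e9)   # a hypothetical 100 GW beam -> ~667 N
accel = force / 0.001            # on a 1-gram sail-plus-payload, m/s^2
print(f"force = {force:.0f} N, acceleration = {accel / 9.81:.0f} g")
```

    A modest force applied to a gram-scale craft yields enormous acceleration, which is why directed energy favors tiny spacecraft.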

    “In our paper, we propose a search strategy that will observe nearly 100 billion planets, allowing us to test our hypothesis that other similarly or more advanced civilizations with this same broadcast capability exist,” Lubin said.

    “As a species we are evolving rapidly in photonics, the production and manipulation of light,” he explained. “Our recent paper explores the hypothesis: We now have the ability to produce light extremely efficiently, and perhaps other species might also have that ability. And if so, then what would be the implications of that? This paper explores the ‘if so, then what?’”

    Radio searches have traditionally been, and remain, the “mainstay of the SETI community,” Lubin said. Think Jodie Foster in “Contact,” receiving an extraterrestrial signal by way of a massive and powerful radio telescope. With Lubin’s UCSB-developed photonics approach, however, making “contact” could be much simpler: Take the right pictures and see if any distant systems are beaconing us.

    “All discussions of SETI have to have a significant level of, maybe not humor, but at least hubris as to what makes reason and what doesn’t,” Lubin said. “Maybe we are alone in terms of our technological capability. Maybe all that’s out there is bacteria or viruses. We have no idea because we’ve never found life outside of our Earth.

    “But suppose there is a civilization like ours and suppose — unlike us, who are skittish about broadcasting our presence — they think it’s important to be a beacon, an interstellar or extragalactic lighthouse of sorts,” he added. “There is a photonics revolution going on on Earth that enables this specific kind of transmission of information via visible or near-infrared light of high intensity. And you don’t need a large telescope to begin these searches. You could detect a presence like our current civilization anywhere in our galaxy, where there are 100 billion possible planets, with something in your backyard. Put in context, and we would love to have people really think about this: You can literally go out with your camera from Costco, take pictures of the sky, and if you knew what you were doing you could mount a SETI search in your backyard. The lighthouse is that bright.”
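
    As an order-of-magnitude check on the “lighthouse” claim, one can estimate the photon rate a small collector would see from a distant directed beam, assuming a diffraction-limited divergence of roughly wavelength over transmitter aperture. Every number below is hypothetical, chosen only for illustration; none are figures from Lubin’s paper:

```python
import math

H = 6.626e-34        # Planck constant, J*s
C = 299_792_458.0    # speed of light, m/s

def photon_rate(p_tx, wavelength, tx_aperture, distance, rx_area):
    """Photons/s collected from a diffraction-limited beacon.

    Assumes beam divergence ~ wavelength / tx_aperture and uniform
    intensity over the resulting spot (a rough approximation)."""
    spot_radius = (wavelength / tx_aperture) * distance
    flux = p_tx / (math.pi * spot_radius ** 2)     # W/m^2 at the receiver
    return flux * rx_area / (H * C / wavelength)   # photons per second

# Hypothetical: a 100 GW, 1-micron beacon sent through a 10 m aperture,
# seen by a 1 m^2 collector 30,000 light-years away.
d = 30_000 * 9.46e15  # distance in meters
print(f"{photon_rate(100e9, 1e-6, 10.0, d, 1.0):.0f} photons/s")  # -> ~200
```

    Under these toy assumptions a gigawatt-class beacon still delivers a steadily countable photon rate across the galaxy, which is the sense in which “the lighthouse is that bright.”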

    See the full article here.

    The University of California, Santa Barbara (commonly referred to as UC Santa Barbara or UCSB) is a public research university and one of the 10 general campuses of the University of California system. Founded in 1891 as an independent teachers’ college, UCSB joined the University of California system in 1944 and is the third-oldest general-education campus in the system. The university is a comprehensive doctoral university and is organized into five colleges offering 87 undergraduate degrees and 55 graduate degrees. In 2012, UCSB was ranked 41st among “National Universities” and 10th among public universities by U.S. News & World Report. UCSB houses twelve national research centers, including the renowned Kavli Institute for Theoretical Physics.

  • richardmitnick 5:08 pm on July 31, 2015
    Tags: Photonics

    From NSF: “Innovations from the wild world of optics and photonics” 

    National Science Foundation

    July 31, 2015
    Aaron Dubrow, NSF (703) 292-4489 adubrow@nsf.gov

    Princeton research team explores ways of communicating and processing signals with light waves

    A silicon photonics platform connecting excitable lasers to form a photonic neural network.

    Traditional computers manipulate electrons to turn our keystrokes and Google searches into meaningful actions. But as components of the computer processor shrink to only a few atoms across, those same electrons become unpredictable and our ability to shuttle them across long and short distances diminishes.

    With support from the National Science Foundation (NSF), the Lightwave Communications Laboratory at Princeton University, led by Paul Prucnal, seeks to understand, design and build the next generation of communication systems, which will process information far faster than today’s devices using photonics, or the manipulation of light.

    The field of photonics began, roughly, with the invention of the laser in the late 1950s and found widespread applications in the 1990s with the explosive growth of the Internet.

    Photonics not only made high-speed long-distance data transmission via fiber optic cables feasible and affordable, it has also enabled advances in laser manufacturing, chemical sensing, medical diagnostics, display technologies and many other fields.

    But scientists are betting that not all of light’s secret abilities have been discovered yet.

    In his lab at Princeton, Prucnal and his team have been experimenting with a variety of optics and photonics-based applications, creating systems to carry hidden messages, detect malicious cyber-attacks and improve the quality and capacity of wireless communications using light.

    They are even exploring whether it may be possible to create a network of photonic “neurons” to perform functions our brain does well–like pattern recognition–but significantly faster. Through partnership with industry, their innovations are moving quickly from the lab to the factory.

    Coping with billions of devices

    Currently, there are more mobile communication devices than humans on the planet, and this proliferation is expected to continue. However, the radio bandwidth on which wireless communications depend is a limited resource.

    As more and more devices compete for bandwidth, we can expect more bottlenecks and more interference from nearby competing antennas, said Matthew Chang, a Ph.D. student in Prucnal’s lab. To handle the constant growth in demand for capacity and bandwidth, optical solutions are needed.

    “With a frequency 1 million times bigger than radio waves, optics sees the entire current radio-wave spectrum as practically a single frequency,” Chang said. “In terms of its ability to provide the bandwidth for a growing army of mobile phones, we say it’s future proof.”

    In the near term, the lab is working on technologies that harness optical signal processing to improve the efficiency of the cell towers and mobile antennas already in place.

    One such technology the lab developed is called photonic beamforming. It involves encoding wireless signals on light waves to allow antennas to selectively detect signals from a desired spatial direction, operating with precision and over bandwidths that exceed what is possible with electronics.

    The phenomenon is akin to “the cocktail party effect,” where one is able to tune in to the frequency and direction of a friend’s voice in a crowded room.

    The human brain is adept at de-noising signals, Chang explained. “We know what direction the voice is coming from and can train our ears to sense in that direction.”

    We also know what a friend’s voice sounds like and can even read lips if we need to, he said. Radio antennas can’t do that.

    “We want to design processors that give radio antennas the ability to sense a signal, lock on its spatial direction, and follow it to the source,” Chang said.

    The technology that the team developed utilizes an array of antennas coupled to an adaptive processor to filter signals in both space and frequency. The technology allows the processor to steer the beam of radio waves while rejecting interfering directional noise sources.

    Their techniques can even apply algorithms that let the antenna system adapt to rapidly changing environments, tracking fast-moving targets or switching quickly between a wide-angle search and a detailed inspection.
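
    The core of such spatial filtering can be sketched with a generic narrowband delay-and-sum (phase-shift) beamformer: weight each antenna element so that a plane wave from the look direction adds coherently while other directions partially cancel. This toy model uses made-up signals and is not the lab’s photonic implementation:

```python
import numpy as np

def steering_vector(n_elements, spacing_wl, theta):
    """Phases of a plane wave from angle theta (radians off broadside)
    across a uniform linear array with element spacing in wavelengths."""
    n = np.arange(n_elements)
    return np.exp(-2j * np.pi * spacing_wl * n * np.sin(theta))

def beamform(snapshots, theta, spacing_wl=0.5):
    """Delay-and-sum: conjugate-weight each element and average,
    so the look direction adds in phase."""
    w = steering_vector(snapshots.shape[0], spacing_wl, theta)
    return np.conj(w) @ snapshots / snapshots.shape[0]

# Toy scene: 8 elements, desired signal at +20 deg, stronger jammer at -40 deg.
rng = np.random.default_rng(0)
t = np.arange(200)
sig = np.exp(1j * 0.3 * t)              # desired narrowband signal
jam = 5.0 * np.exp(1j * 0.7 * t)        # interferer at 5x the amplitude
x = (steering_vector(8, 0.5, np.deg2rad(20))[:, None] * sig
     + steering_vector(8, 0.5, np.deg2rad(-40))[:, None] * jam
     + 0.1 * rng.standard_normal((8, 200)))
y = beamform(x, np.deg2rad(20))         # steer toward the signal
# y now tracks `sig`; the off-axis interferer is strongly attenuated.
```

    An adaptive version updates the weights continuously, which is how a system like this can lock onto a source’s direction and follow it as it moves.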

    Not surprisingly, many of the group’s technologies are currently of interest to the military. However, the researchers imagine that one day every cell phone will contain a beamforming chip to better manage wireless inputs and outputs, much as many of today’s phones contain a small component to switch between different wireless channels.

    Working with partners L3 Telemetry East and Bascom Hunter Technologies, which Prucnal helped found, the researchers are transferring the beamforming technology from the laboratory to the marketplace.

    Bascom Hunter received Phase 1 Small Business Innovation Research (SBIR) support from NSF in 2013 and 2015 to adapt their technologies for public safety radio networks and to improve the intermediate links between the core networks. They hope to see the technology improving the performance of cellphone towers and military applications within one or two years.

    More importantly, as data rates climb exponentially, the group sees optics and photonics as a way to provide gigabit or faster Internet to everyone, without the use of optical fiber.

    “The current processing of radio signals is akin to trying to sense and map the changing surface of the ocean by slowly sucking water through a single straw,” said Prucnal. “Photonic processing makes it possible to process radio signals with greater precision and parallelism, and with greater speed, than electronics.”

    Building a laser-fast brain

    If light-encoded Wi-Fi signals sound far-out, another of the projects in Prucnal’s lab is truly at the distant frontiers of research: the photonic neuron.

    The project came out of conversations between Prucnal and David Rosenbluth, a neuroscientist at Lockheed Martin, with which the lab collaborates. Prucnal noticed that the differential equations that describe the behavior of neurons have the same forms as the equations for lasers.

    Furthermore, each biological neuron has an internal voltage; if that voltage reaches a certain threshold, the neuron emits a spike, signaling the neurons to which it is connected.

    Likewise, a laser gets pumped with current, exciting more electrons from one state to another; at a certain point, the laser reaches its threshold and outputs an optical spike.

    The dynamics, they noted, were astonishingly similar, but lasers have the potential to perform the same action a billion times faster than the chemical signaling in the brain.
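
    The shared dynamics can be caricatured with a leaky integrate-and-fire model, the standard abstraction for threshold-and-spike behavior. The state variable below stands in equally for a neuron’s membrane voltage or a laser’s carrier population; the parameters are illustrative:

```python
def integrate_and_fire(inputs, leak=0.9, threshold=1.0):
    """Return spike times of a leaky integrate-and-fire unit:
    the state is pumped by the inputs, decays by `leak` each step,
    and fires (then resets) on crossing `threshold`."""
    v, spikes = 0.0, []
    for t, drive in enumerate(inputs):
        v = leak * v + drive        # pump + leak
        if v >= threshold:          # threshold crossing
            spikes.append(t)        # emit a spike...
            v = 0.0                 # ...and reset
    return spikes

# Steady sub-threshold pumping produces periodic firing:
print(integrate_and_fire([0.3] * 20))  # -> [3, 7, 11, 15, 19]
```

    The point of the analogy is that the same equations play out inside a laser cavity, only about a billion times faster than in the brain’s chemical signaling.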

    This led the researchers to wonder if it was possible to design a synthetic system made of optical and photonic materials that could perform some of the functions of a physiological neuron.

    Since developing a single photonic neuron in 2009, the research team has been working to build sophisticated, ultrafast signal processing circuits that mimic the visual, auditory, and motor functions found in biological organisms.

    Their initial implementation was inspired by the crayfish tailflip response–a natural wonder of high-speed sensing.

    Crayfish have a neuronal circuit in their brain that is networked in such a way that, if it senses danger, a specific set of neurons fires simultaneously, causing the creature to flip its tail and swim away with amazing speed.

    With this biological model in mind, Prucnal and his team designed a system that uses several excitable lasers, pre-loaded in such a way that if a particular signal comes in, the lasers recognize the specific pattern and fire a spike together.
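
    The pattern-recognition step described here is essentially coincidence detection: a downstream unit whose threshold is set so that it fires only when several upstream spikes arrive in the same instant. A toy sketch (not the lab’s actual laser network):

```python
def coincidence_detector(channels, threshold=3):
    """Fire at each time step where at least `threshold` input
    channels spike simultaneously (1 = spike, 0 = quiet)."""
    return [t for t, spikes in enumerate(zip(*channels))
            if sum(spikes) >= threshold]

# Three spike trains; only at t=4 do all three fire together.
a = [1, 0, 0, 1, 1, 0]
b = [0, 1, 0, 0, 1, 0]
c = [0, 0, 1, 0, 1, 1]
print(coincidence_detector([a, b, c]))  # -> [4]
```

    Pre-loading which inputs must coincide is what encodes the pattern, just as the crayfish circuit is wired to respond only to the signature of danger.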

    In the future, the researchers say such a device could be capable of making nearly instantaneous calculations in life-or-death situations, such as deciding whether to eject a fighter pilot from a jet.

    “When thousands of photonic neurons are networked together and working in unison, we believe we can build a processor that can sense patterns and cues with an almost human-like quality, but a billion times faster,” said Bhavin Shastri, a post-doctoral fellow at Princeton working on the project.

    For example, the team imagines being able to peer into all the wireless signals around you, lock onto one of interest, and take action on it–all in an instant as the signal zooms across your antenna.

    The lab’s research was featured on the cover of the IEEE Photonics Society Newsletter in June 2014, and in November 2014 the group published an article in IEEE’s Journal of Lightwave Technology describing how to scale the signal-processing platform to large numbers of neurons.

    By finding new ways to encode and process light and by applying these methods to existing and brand new applications, Prucnal’s group is helping to solve the looming bandwidth shortage while imagining entirely new capabilities for photonic systems.

    “With all the information that can be obtained using photonic processing of the radio image, we could track signals coming from all directions and all frequencies, separating multiple signals of interest from multiple interferers, quickly finding holes in the radio spectrum available for transmission, and mapping out spatial features of the radio environment in real-time,” Prucnal said. “This vision is not only exciting, but will be necessary as the use of wireless communications proliferates in the future.”

    See the full article here.

    The National Science Foundation (NSF) is an independent federal agency created by Congress in 1950 “to promote the progress of science; to advance the national health, prosperity, and welfare; to secure the national defense.” NSF is the funding source for approximately 24 percent of all federally supported basic research conducted by America’s colleges and universities. In many fields such as mathematics, computer science and the social sciences, NSF is the major source of federal backing.

