Tagged: M.I.T. Physics

  • richardmitnick 3:22 pm on August 22, 2014 Permalink | Reply
    Tags: M.I.T. Physics

    From M.I.T.: “Teaching light new tricks” 

    MIT News

    August 22, 2014
    Denis Paiste | Materials Processing Center

    Light is a slippery fellow. Stand in a darkened hallway and close a door to a lighted room: Light will sneak through any cracks — it doesn’t want to be confined. “Typically, in free space, light will go everywhere,” graduate student Chia Wei (Wade) Hsu says. “If you want to confine light, you usually need some special mechanism.”

    Chia Wei (Wade) Hsu

    Last summer, Hsu demonstrated a new way to confine light on the surface of a photonic crystal slab. “We were the first ones to experimentally demonstrate this new way to confine light,” says Hsu, a graduate student in physics who is conducting research under Marin Soljacic, a professor of physics at MIT.

    The photonic crystal is a thin slab whose structure has a periodicity, or repeating pattern, that is comparable in size to the wavelength of light — extremely short distances measured in nanometers (billionths of a meter). “Light can interact with the structure in a non-trivial way. Typically one observes modes called ‘guided resonance,’ where light is semi-confined in the slab but it can radiate outside. It’s not perfectly confined; it still leaks out,” Hsu explains.

    However, at a certain angle (35 degrees in the study), light stays bound to the surface, oscillating indefinitely. Hsu, Soljacic, co-author and MIT graduate student Bo Zhen, and others reported these findings recently in Nature. This phenomenon is called an embedded eigenstate, also known as a “Bound State in the Continuum.” The bound state affects just one wavelength of light that reaches the slab. The particular wavelength that is bound is related to the structure of the photonic crystal slab. So for a different structure, the bound state will appear at a different wavelength and wavevector, or angle of propagation. By manipulating the structure, researchers can manipulate the wavelength and the angle of this special state. Separately, Hsu and colleagues detailed their physical and mathematical analysis of the bound state in a theoretical paper in Light: Science & Applications.

    One way to visualize the bound state effect, Hsu says, is to think of the difference between dropping a stone into a lake — where the waves ripple out without being confined — and using a drum stick to hit a drum membrane — which vibrates back and forth, but does not spread because it’s blocked by the boundary of the drum. “Eigenstate or eigenvalue refers to a sustained oscillation,” Hsu explains.

    At a particular angle, or wavevector, as light tries to escape, outgoing waves of the same amplitude, but opposite phase, cancel each other — which is known as destructive interference. “All of the outgoing waves are cancelled, so light becomes confined,” Hsu says. “There are no outgoing waves anymore and then it becomes perfectly confined in the slab.”
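    The cancellation Hsu describes can be illustrated numerically: two outgoing waves with equal amplitude and opposite phase sum to zero at every instant. This is a minimal sketch of destructive interference, not the actual photonic-crystal mode calculation.

```python
import numpy as np

# Two outgoing waves with equal amplitude but opposite phase (a pi shift).
t = np.linspace(0.0, 2.0 * np.pi, 1000)
wave_a = np.sin(t)            # one outgoing wave
wave_b = np.sin(t + np.pi)    # same amplitude, opposite phase

# Destructive interference: the waves cancel everywhere, so no energy
# radiates away -- the analogue of light staying confined in the slab.
total = wave_a + wave_b

print(np.max(np.abs(total)))  # essentially zero (round-off only)
```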

    Unexpected finding

    In 1929, scientists John von Neumann and Eugene Wigner theoretically predicted such a state, known as an embedded eigenvalue. The trapped state is in contrast to what typically happens when light resonates on the surface for a time, but then escapes or decays.

    “This bound state was certainly an unexpected discovery. We happened upon it when we were looking for something else,” Soljacic explains.

    The researchers are looking for a practical use of this finding. “The same mechanism we described about this interference cancellation mechanism can also be applied to a structure that’s similar to a fiber, so it may have potential use in optical communication too,” Hsu says. Although light does not escape the typical optical fiber because of its total internal reflection, the fiber confines all angles of light above a critical angle. “All the light above some cut off will be confined. In our mechanism, cancellation only happens at one particular angle. Only light at that particular angle is confined, so it has some more selectivity,” Hsu explains.
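    The contrast with fiber optics can be made concrete with Snell’s law: total internal reflection traps every ray steeper than the critical angle, sin(θc) = n2/n1, a whole range of angles rather than a single one. A quick check with illustrative core and cladding indices for a silica fiber (assumed values, not from the study):

```python
import math

# Critical angle for total internal reflection: sin(theta_c) = n2 / n1.
# Core/cladding indices are typical illustrative values for silica fiber.
n_core = 1.475
n_cladding = 1.460

theta_c = math.degrees(math.asin(n_cladding / n_core))
print(f"critical angle: {theta_c:.1f} degrees")

# Every ray striking the core wall at a grazing angle beyond theta_c is
# confined -- unlike the bound state, which selects only one angle.
```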

    Breakthrough from simplicity

    Prior examples of theoretically predicted embedded eigenstates were too complicated to realize. “Here we found a structure that is very simple to realize,” Hsu says. Jeongwon Lee, a fellow graduate student in Soljacic’s group, fabricated the photonic crystal using a design the group had already studied.

    Lee fabricated the photonic crystal on a silicon nitride slab, using interference photolithography to etch the periodic structure, or repeating pattern. Hsu and Zhen measured the sample in the lab and analyzed the data to confirm the phenomenon. “In this simple structure, we found this phenomenon of this new type of light confinement. Since the structure is simple, we were able to demonstrate it, which other people were not able to because their systems are more complicated,” Hsu explains.

    Hsu is working toward a deeper understanding of why this light-confinement phenomenon occurs, as well as exploring potential applications in photonic crystal lasers. “We are investigating where this new type of light confinement can give rise to different behaviors of lasers,” he adds.

    Watch how MIT and Harvard University researchers confine light to a crystal slab surface and design a transparent display.

    Video by the Materials Processing Center

    Creating transparent displays

    Besides his light confinement work, Hsu led the demonstration of a blue transparent display composed of a clear polymer coating with embedded resonant nanoparticles made of silver.

    Such displays work because the wavelength of blue light is strongly scattered by interaction with silver. “In this case, we only want to scatter the particular wavelengths of our projector light. We don’t want to scatter other wavelengths because we will need it to be transparent,” he says.

    “We can take a piece of glass which is originally transparent and put in nanoparticles that only scatter a particular, narrow bandwidth of light. Light in the visible spectrum is made of many different wavelengths from 300 nanometers to 750 nanometers. If we have such a structure, then most of the light can pass through, so it is still transparent, but if we project light of that particular narrow bandwidth, light can be scattered strongly as if it were hitting a regular screen,” Hsu explains. The results were published in a Nature Communications article, “Transparent displays enabled by resonant nanoparticle scattering,” in January.

    Hsu’s theoretical design consisted of a nanoparticle with a silica (silicon dioxide) core and a silver shell, but the experiment was done using purely silver particles. “Silver-only is good enough if we want to scatter only blue light,” he says. A very tiny amount of silver, just six-thousandths of a milligram, produced the effect in the demonstration, making it a potentially economical approach.

    Silver has conducting electrons, and when the particular blue wavelength interacts with them, those conducting electrons will oscillate back and forth strongly. “It’s a resonance phenomenon. At that point, you’ll get very strong light scattering,” Hsu explains. The phenomenon is called a localized surface plasmon resonance.
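    The wavelength selectivity of such a resonance can be sketched with a simple Lorentzian line shape: scattering is strong near the resonance and weak everywhere else, which is why the glass stays transparent. The peak wavelength and linewidth below are illustrative assumptions, not the measured values from the paper.

```python
def lorentzian(wavelength_nm, peak_nm=458.0, width_nm=30.0):
    """Illustrative Lorentzian scattering strength vs. wavelength.

    The 458 nm peak and 30 nm full width are assumed values chosen to
    represent a blue plasmon resonance, not the paper's measured data.
    """
    return 1.0 / (1.0 + ((wavelength_nm - peak_nm) / (width_nm / 2.0)) ** 2)

# Strong scattering only near the resonance; red light mostly passes
# through, so the display remains transparent to ambient light.
print(f"at 458 nm: {lorentzian(458.0):.2f}")
print(f"at 630 nm: {lorentzian(630.0):.3f}")
```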

    One advantage of this approach is that the projected image has a broad viewing angle. “Nanoparticle scattering will send light in all different directions, so you will be able to see the image no matter which angle you look at. So it will be useful for applications where you would want people to see it from all different directions,” Hsu says.

    Hsu received his bachelor’s in physics and mathematics at Wesleyan University in 2010. His doctoral thesis will be split between the nanoparticle display work and the confinement of light work.

    See the full article here.

    ScienceSprings relies on technology from

    MAINGEAR computers



  • richardmitnick 3:47 pm on April 16, 2014 Permalink | Reply
    Tags: M.I.T. Physics

    From M.I.T.: “Inflatable antennae could give CubeSats greater reach” 

    September 6, 2013
    Jennifer Chu, MIT News Office

    The future of satellite technology is getting small — about the size of a shoebox, to be exact. These so-called “CubeSats,” and other small satellites, are making space exploration cheaper and more accessible: The minuscule probes can be launched into orbit at a fraction of the weight and cost of traditional satellites.

    But with such small packages come big limitations — namely, a satellite’s communication range. Large, far-ranging radio dishes are impossible to store in a CubeSat’s tight quarters. Instead, the satellites are equipped with smaller, less powerful antennae, restricting them to orbits below those of most geosynchronous satellites.

    Now researchers at MIT have come up with a design that may significantly increase the communication range of small satellites, enabling them to travel much farther in the solar system: The team has built and tested an inflatable antenna that can fold into a compact space and inflate when in orbit.

    View of a CubeSat equipped with an inflated antenna, in a NASA radiation chamber. Photo: Alessandra Babuscia

    Ncube-2, a Norwegian Cubesat

    The antenna significantly amplifies a radio signal, allowing a CubeSat to transmit data back to Earth at a higher rate. A satellite outfitted with an inflatable antenna can communicate from seven times farther away than is possible with existing CubeSat communications.
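    The range claim can be framed with the inverse-square law: for a fixed receiver threshold, received power falls as 1/d², so extending range seven-fold requires roughly a 49-fold (about 17 dB) improvement in the link. This is a back-of-the-envelope sketch, not the team’s actual link budget.

```python
import math

# Inverse-square link scaling: received power ~ gain / distance**2,
# so range scales as the square root of the link gain improvement.
range_factor = 7.0
gain_factor = range_factor ** 2           # ~49x more link gain needed
gain_db = 10.0 * math.log10(gain_factor)  # ~16.9 dB

print(f"gain factor: {gain_factor:.0f}x ({gain_db:.1f} dB)")
```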

    “With this antenna you could transmit from the moon, and even farther than that,” says Alessandra Babuscia, who led the research as a postdoc at MIT. “This antenna is one of the cheapest and most economical solutions to the problem of communications.”

    The team, led by Babuscia, is part of Professor Sara Seager’s research group and also includes graduate students Benjamin Corbin, Mary Knapp, and Mark Van de Loo from MIT, and Rebecca Jensen-Clem from the California Institute of Technology. The researchers, from MIT’s departments of Aeronautics and Astronautics and of Earth, Atmospheric and Planetary Sciences, have detailed their results in the journal Acta Astronautica.

    ‘Magic’ powder

    An inflatable antenna is not a new idea. In fact, previous experiments in space have successfully tested such designs, though mostly for large satellites: To inflate these bulkier antennae, engineers install a system of pressure valves to fill them with air once in space — heavy, cumbersome equipment that would not fit within a CubeSat’s limited real estate.

    Babuscia raises another concern: As small satellites are often launched as secondary payloads aboard rockets containing other scientific missions, a satellite loaded with pressure valves may backfire, with explosive consequences, jeopardizing everything on board. This is all the more reason, she says, to find a new inflation mechanism.

    The team landed on a lighter, safer solution, based on sublimating powder, a chemical compound that transforms from a solid powder to a gas when exposed to low pressure.

    “It’s almost like magic,” Babuscia explains. “Once you are in space, the difference in pressure triggers a chemical reaction that makes the powder sublimate from the solid state to the gas state, and that inflates the antenna.”

    Testing an inflating idea

    Babuscia and her colleagues built two prototype antennae, each a meter wide, out of Mylar; one resembled a cone and the other a cylinder when inflated. They determined an optimal folding configuration for each design, and packed each antenna into a 10-cubic-centimeter space within a CubeSat, along with a few grams of benzoic acid, a type of sublimating powder. The team tested each antenna’s inflation in a vacuum chamber at MIT, lowering the pressure to just above that experienced in space. In response, the powder converted to a gas, inflating both antennae to the desired shape.
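    The small quantity of benzoic acid can be sanity-checked with the ideal gas law, n = PV/RT: only a fraction of a gram needs to sublimate to fill a meter-scale antenna at a modest internal pressure. All of the numbers below are illustrative assumptions, not the team’s figures.

```python
# Ideal-gas estimate of sublimated benzoic acid needed to inflate a
# roughly one-meter antenna. Every parameter here is an assumption.
R = 8.314            # J/(mol*K), universal gas constant
T = 250.0            # K, assumed in-orbit temperature
P = 10.0             # Pa, assumed modest inflation pressure
V = 0.3              # m^3, rough volume of a ~1 m conical antenna
MOLAR_MASS = 122.12  # g/mol, benzoic acid (C6H5COOH)

moles = P * V / (R * T)      # n = PV / RT
grams = moles * MOLAR_MASS
print(f"{grams:.2f} g of benzoic acid")  # a fraction of a gram
```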

    The group also tested each antenna’s electromagnetic properties — an indication of how well an antenna can transmit data. In radiation simulations of both the conical and cylindrical designs, the researchers observed that the cylindrical antenna performed slightly better, transmitting data 10 times faster, and seven times farther, than existing CubeSat antennae.

    An antenna made of thin Mylar, while potentially powerful, can be vulnerable to passing detritus in space. Micrometeoroids, for example, can puncture a balloon, causing leaks and affecting an antenna’s performance. But Babuscia says the use of sublimating powder can circumvent the problems caused by micrometeoroid impacts. She explains that a sublimating powder will only create as much gas as needed to fully inflate an antenna, leaving residual powder to sublimate later, to compensate for any later leaks or punctures.

    The group tested this theory in a coarse simulation, modeling the inflatable antenna’s behavior with different frequencies of impacts to assess how much of an antenna’s surface may be punctured and how much air may leak out without compromising its performance. The researchers found that with the right sublimating powder, the lifetime of a CubeSat’s inflatable antenna may be a few years, even if it is riddled with small holes.

    Kar-Ming Cheung, an engineer specializing in space communications operations at NASA’s Jet Propulsion Laboratory (JPL), says the group’s design addresses today’s main limitations in CubeSat communications: size, weight and power.

    “A directional antenna has been out of the question for CubeSats,” says Cheung, who was not involved in the research. “An inflatable antenna would enable orders of magnitude improvement in data return. This idea is very promising.”

    Babuscia says future tests may involve creating tiny holes in a prototype and inflating it in a vacuum chamber to see how much powder would be required to keep the antenna inflated. She is now continuing to refine the antenna design at JPL.

    “In the end, what’s going to make the success of CubeSat communications will be a lot of different ideas, and the ability of engineers to find the right solution for each mission,” Babuscia says. “So inflatable antennae could be for a spacecraft going by itself to an asteroid. For another problem, you’d need another solution. But all this research builds a set of options to allow these spacecraft, made directly by universities, to fly in deep space.”

    See the full article here.


  • richardmitnick 2:42 pm on April 16, 2014 Permalink | Reply
    Tags: M.I.T. Physics

    From M.I.T.: “A river of plasma, guarding against the sun” 

    March 6, 2014
    Jennifer Chu, MIT News Office

    MIT scientists identify a plasma plume that naturally protects the Earth against solar storms.

    The Earth’s magnetic field, or magnetosphere, stretches from the planet’s core out into space, where it meets the solar wind, a stream of charged particles emitted by the sun. For the most part, the magnetosphere acts as a shield to protect the Earth from this high-energy solar activity.

    But when this field comes into contact with the sun’s magnetic field — a process called “magnetic reconnection” — powerful electrical currents from the sun can stream into Earth’s atmosphere, whipping up geomagnetic storms and space weather phenomena that can affect high-altitude aircraft, as well as astronauts on the International Space Station.

    Magnetic Reconnection: This view is a cross-section through four magnetic domains undergoing separator reconnection. Two separatrices divide space into four magnetic domains with a separator at the center of the figure. Field lines (and associated plasma) flow inward from above and below the separator, reconnect, and spring outward horizontally. A current sheet (as shown) may be present but is not required for reconnection to occur. This process is not well understood: once started, it proceeds many orders of magnitude faster than predicted by standard models.

    Now scientists at MIT and NASA have identified a process in the Earth’s magnetosphere that reinforces its shielding effect, keeping incoming solar energy at bay.


    By combining observations from the ground and in space, the team observed a plume of low-energy plasma particles that essentially hitches a ride along magnetic field lines — streaming from Earth’s lower atmosphere up to the point, tens of thousands of kilometers above the surface, where the planet’s magnetic field connects with that of the sun. In this region, which the scientists call the merging point, the presence of cold, dense plasma slows magnetic reconnection, blunting the sun’s effects on Earth.

    “The Earth’s magnetic field protects life on the surface from the full impact of these solar outbursts,” says John Foster, associate director of MIT’s Haystack Observatory. “Reconnection strips away some of our magnetic shield and lets energy leak in, giving us large, violent storms. These plasmas get pulled into space and slow down the reconnection process, so the impact of the sun on the Earth is less violent.”

    Foster and his colleagues publish their results in this week’s issue of Science. The team includes Philip Erickson, principal research scientist at Haystack Observatory, as well as Brian Walsh and David Sibeck at NASA’s Goddard Space Flight Center.

    Mapping Earth’s magnetic shield

    For more than a decade, scientists at Haystack Observatory have studied plasma plume phenomena using a ground-based technique called GPS-TEC, in which scientists analyze radio signals transmitted from GPS satellites to more than 1,000 receivers on the ground. Large space-weather events, such as geomagnetic storms, can alter the incoming radio waves — a distortion that scientists can use to determine the concentration of plasma particles in the upper atmosphere. Using this data, they can produce two-dimensional global maps of atmospheric phenomena, such as plasma plumes.
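    The GPS-TEC idea rests on the ionosphere being dispersive: plasma delays the lower GPS carrier more than the higher one, and the differential delay is proportional to the total electron content (TEC) along the ray path. The sketch below uses the standard dual-frequency textbook relation and the public L1/L2 carrier frequencies; the 5-meter delay is an assumed example value, not data from the study.

```python
# Dual-frequency TEC estimate. The constant 40.3 (m^3/s^2 per electron)
# comes from the plasma dispersion relation; the delay below is an
# illustrative assumption.
F_L1 = 1575.42e6  # Hz, GPS L1 carrier frequency
F_L2 = 1227.60e6  # Hz, GPS L2 carrier frequency

def tec_from_delay(delta_range_m):
    """TEC (electrons/m^2) from the differential range delay in meters."""
    k = 40.3
    return (delta_range_m / k) * (F_L1**2 * F_L2**2) / (F_L1**2 - F_L2**2)

tec = tec_from_delay(5.0)             # assume 5 m of extra L2 path delay
print(f"{tec / 1e16:.1f} TEC units")  # 1 TECU = 1e16 electrons/m^2
```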

    These ground-based observations have helped shed light on key characteristics of these plumes, such as how often they occur, and what makes some plumes stronger than others. But as Foster notes, this two-dimensional mapping technique gives an estimate only of what space weather might look like in the low-altitude regions of the magnetosphere. To get a more precise, three-dimensional picture of the entire magnetosphere would require observations directly from space.

    Toward this end, Foster approached Walsh with data showing a plasma plume emanating from the Earth’s surface, and extending up into the lower layers of the magnetosphere, during a moderate solar storm in January 2013. Walsh checked the date against the orbital trajectories of three spacecraft that have been circling the Earth to study auroras in the atmosphere.

    As it turns out, all three spacecraft crossed the point in the magnetosphere at which Foster had detected a plasma plume from the ground. The team analyzed data from each spacecraft, and found that the same cold, dense plasma plume stretched all the way up to where the solar storm made contact with Earth’s magnetic field.

    A river of plasma

    Foster says the observations from space validate measurements from the ground. What’s more, the combination of space- and ground-based data give a highly detailed picture of a natural defensive mechanism in the Earth’s magnetosphere.

    “This higher-density, cold plasma changes about every plasma physics process it comes in contact with,” Foster says. “It slows down reconnection, and it can contribute to the generation of waves that, in turn, accelerate particles in other parts of the magnetosphere. So it’s a recirculation process, and really fascinating.”

    Foster likens this plume phenomenon to a “river of particles,” and says it is not unlike the Gulf Stream, a powerful ocean current that influences the temperature and other properties of surrounding waters. On an atmospheric scale, he says, plasma particles can behave in a similar way, redistributing throughout the atmosphere to form plumes that “flow through a huge circulation system, with a lot of different consequences.”

    “What these types of studies are showing is just how dynamic this entire system is,” Foster adds.

    Tony Mannucci, supervisor of the Ionospheric and Atmospheric Remote Sensing Group at NASA’s Jet Propulsion Laboratory, says that although others have observed magnetic reconnection, they have not looked at data closer to Earth to understand this connection.

    “I believe this group was very creative and ingenious to use these methods to infer how plasma plumes affect magnetic reconnection,” says Mannucci, who was not involved in the research. “This discovery of the direct connection between a plasma plume and the magnetic shield surrounding Earth means that a new set of ground-based observations can be used to infer what is occurring deep in space, allowing us to understand and possibly forecast the implications of solar storms.”

    See the full article here.


  • richardmitnick 9:24 am on March 20, 2014 Permalink | Reply
    Tags: M.I.T. Physics

    From M.I.T.: “3 Questions: Alan Guth on new insights into the ‘Big Bang’” 

    March 19, 2014
    Steve Bradt, MIT News Office

    Earlier this week, scientists announced that a telescope observing faint echoes of the so-called “Big Bang” had found evidence of the universe’s nearly instantaneous expansion from a mere dot into a dense ball containing more than 10⁹⁰ particles. This discovery, using the BICEP2 telescope at the South Pole, provides the first strong evidence of “cosmic inflation” at the birth of our universe, when it expanded billions of times over.

    BICEP2 Telescope at the South Pole

    The theory of cosmic inflation was first proposed in 1980 by Alan Guth, now the Victor F. Weisskopf Professor of Physics at MIT. Inflation has become a cornerstone of Big Bang cosmology, but until now it had remained a theory without experimental support. Guth discussed the significance of the new BICEP2 results with MIT News.

    Dr. Alan Guth

    Q: Can you explain the theory of cosmic inflation that you first put forth in 1980?

    A: I usually describe inflation as a theory of the “bang” of the Big Bang: It describes the propulsion mechanism that drove the universe into the period of tremendous expansion that we call the Big Bang. In its original form, the Big Bang theory never was a theory of the bang. It said nothing about what banged, why it banged, or what happened before it banged.

    The original Big Bang theory was really a theory of the aftermath of the bang. The universe was already hot and dense, and already expanding at a fantastic rate. The theory described how the universe was cooled by the expansion, and how the expansion was slowed by the attractive force of gravity.

    Inflation proposes that the expansion of the universe was driven by a repulsive form of gravity. According to [Isaac] Newton, gravity is a purely attractive force, but this changed with [Albert] Einstein and the discovery of general relativity. General relativity describes gravity as a distortion of spacetime, and allows for the possibility of repulsive gravity.

    Modern particle theories strongly suggest that at very high energies, there should exist forms of matter that create repulsive gravity. Inflation, in turn, proposes that at least a very small patch of the early universe was filled with this repulsive-gravity material. The initial patch could have been incredibly small, perhaps as small as 10⁻²⁴ centimeter, about 100 billion times smaller than a single proton. The small patch would then start to exponentially expand under the influence of the repulsive gravity, doubling in size approximately every 10⁻³⁷ second. To successfully describe our visible universe, the region would need to undergo at least 80 doublings, increasing its size to about 1 centimeter. It could have undergone significantly more doublings, but at least this number is needed.
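    The doubling arithmetic is easy to verify: 80 doublings multiply the initial size by 2^80, about 1.2 x 10^24, which takes a patch of 10^-24 centimeter to roughly one centimeter.

```python
# Checking the inflation arithmetic: 80 doublings of a 1e-24 cm patch.
initial_size_cm = 1e-24
doublings = 80

final_size_cm = initial_size_cm * 2 ** doublings

print(f"2**80 = {2 ** 80:.3e}")               # about 1.209e24
print(f"final size: {final_size_cm:.2f} cm")  # about 1.21 cm
```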

    During the period of exponential expansion, any ordinary material would thin out, with the density diminishing to almost nothing. The behavior in this case, however, is very different: The repulsive-gravity material actually maintains a constant density as it expands, no matter how much it expands! While this appears to be a blatant violation of the principle of the conservation of energy, it is actually perfectly consistent.

    This loophole hinges on a peculiar feature of gravity: The energy of a gravitational field is negative. As the patch expands at constant density, more and more energy, in the form of matter, is created. But at the same time, more and more negative energy appears in the form of the gravitational field that is filling the region. The total energy remains constant, as it must, and therefore remains very small.

    It is possible that the total energy of the entire universe is exactly zero, with the positive energy of matter completely canceled by the negative energy of gravity. I often say that the universe is the ultimate free lunch, since it actually requires no energy to produce a universe.

    At some point the inflation ends because the repulsive-gravity material becomes metastable. The repulsive-gravity material decays into ordinary particles, producing a very hot soup of particles that form the starting point of the conventional Big Bang. At this point the repulsive gravity turns off, but the region continues to expand in a coasting pattern for billions of years to come. Thus, inflation is a prequel to the era that cosmologists call the Big Bang, although it of course occurred after the origin of the universe, which is often also called the Big Bang.

    Q: What is the new result announced this week, and how does it provide critical support for your theory?

    A: The stretching effect caused by the fantastic expansion of inflation tends to smooth things out — which is great for cosmology, because an ordinary explosion would presumably have left the universe very splotchy and irregular. The early universe, as we can see from the afterglow of the cosmic microwave background (CMB) radiation, was incredibly uniform, with a mass density that was constant to about one part in 100,000.

    Cosmic microwave background, as mapped by ESA’s Planck satellite

    The tiny nonuniformities that did exist were then amplified by gravity: In places where the mass density was slightly higher than average, a stronger-than-average gravitational field was created, which pulled in still more matter, creating a yet stronger gravitational field. But to have structure form at all, there needed to be small nonuniformities at the end of inflation.

    In inflationary models, these nonuniformities — which later produce stars, galaxies, and all the structure of the universe — are attributed to quantum theory. Quantum field theory implies that, on very short distance scales, everything is in a state of constant agitation. If we observed empty space with a hypothetical, and powerful, magnifying glass, we would see the electric and magnetic fields undergoing wild oscillations, with even electrons and positrons popping out of the vacuum and then rapidly disappearing. The effect of inflation, with its fantastic expansion, is to stretch these quantum fluctuations to macroscopic proportions.

    The temperature nonuniformities in the cosmic microwave background were first measured in 1992 by the COBE satellite, and have since been measured with greater and greater precision by a long and spectacular series of ground-based, balloon-based, and satellite experiments. They have agreed very well with the predictions of inflation. These results, however, have not generally been seen as proof of inflation, in part because it is not clear that inflation is the only possible way that these fluctuations could have been produced.

    NASA COBE satellite

    The stretching effect of inflation, however, also acts on the geometry of space itself, which according to general relativity is flexible. Space can be compressed, stretched, or even twisted. The geometry of space also fluctuates on small scales, due to the physics of quantum theory, and inflation also stretches these fluctuations, producing gravity waves in the early universe.

    The new result, by John Kovac and the BICEP2 collaboration, is a measurement of these gravity waves, at a very high level of confidence. They do not see the gravity waves directly, but instead they have constructed a very detailed map of the polarization of the CMB in a patch of the sky. They have observed a swirling pattern in the polarization (called “B modes”) that can be created only by gravity waves in the early universe, or by the gravitational lensing effect of matter in the late universe.

    But the primordial gravity waves can be separated, because they tend to be on larger angular scales, so the BICEP2 team has decisively isolated their contribution. This is the first time that even a hint of these primordial gravity waves has been detected, and it is also the first time that any quantum properties of gravity have been directly observed.

    Q: How would you describe the significance of these new findings, and your reaction to them?

    A: The significance of these new findings is enormous. First of all, they help tremendously in confirming the picture of inflation. As far as we know, there is nothing other than inflation that can produce these gravity waves. Second, it tells us a lot about the details of inflation that we did not already know. In particular, it determines the energy density of the universe at the time of inflation, which is something that previously had a wide range of possibilities.

    By determining the energy density of the universe at the time of inflation, the new result also tells us a lot about which detailed versions of inflation are still viable, and which are no longer viable. The current result is not by itself conclusive, but it points in the direction of the very simplest inflationary models that can be constructed.

    Finally, and perhaps most importantly, the new result is not the final story, but is more like the opening of a new window. Now that these B modes have been found, the BICEP2 collaboration and many other groups will continue to study them. They provide a new tool to study the behavior of the early universe, including the process of inflation.

    When I (and others) started working on the effect of quantum fluctuations in the early 1980s, I never thought that anybody would ever be able to measure these effects. To me it was really just a game, to see if my colleagues and I could agree on what the fluctuations would theoretically look like. So I am just astounded by the progress that astronomers have made in measuring these minute effects, and particularly by the new result of the BICEP2 team. Like all experimental results, we should wait for it to be confirmed by other groups before taking it as truth, but the group seems to have been very careful, and the result is very clean, so I think it is very likely that it will hold up.

    See the full article here.


  • richardmitnick 6:39 pm on March 10, 2014 Permalink | Reply
    Tags: M.I.T. Physics

    From M.I.T.: “Two-dimensional material shows promise for optoelectronics” 

    March 10, 2014
    David L. Chandler, MIT News Office

    Team creates LEDs, photovoltaic cells, and light detectors using novel one-molecule-thick material.

    A team of MIT researchers has used a novel material that’s just a few atoms thick to create devices that can harness or emit light. This proof-of-concept could lead to ultrathin, lightweight, and flexible photovoltaic cells, light-emitting diodes (LEDs), and other optoelectronic devices, they say.

    In the team’s experimental setup, electricity was supplied to a tiny piece of tungsten diselenide (small rectangle at center) through two gold wires (from top left and right), causing it to emit light (bright area at center), demonstrating its potential as an LED material.
    Image courtesy of Britt Baugher and Hugh Churchill

    Microscope image shows the team’s experimental setup.
    Image courtesy of Hugh Churchill and Felice Frankel

    Their report is one of three papers by different groups describing similar results with this material, published in the March 9 issue of Nature Nanotechnology. The MIT research was carried out by Pablo Jarillo-Herrero, the Mitsui Career Development Associate Professor of Physics, graduate students Britton Baugher and Yafang Yang, and postdoc Hugh Churchill.

    The material they used, called tungsten diselenide (WSe2), is part of a class of single-molecule-thick materials under investigation for possible use in new optoelectronic devices — ones that can manipulate the interactions of light and electricity. In these experiments, the MIT researchers were able to use the material to produce diodes, the basic building block of modern electronics.

    Typically, diodes (which allow electrons to flow in only one direction) are made by “doping,” which is a process of injecting other atoms into the crystal structure of a host material. By using different materials for this irreversible process, it is possible to make either of the two basic kinds of semiconducting materials, p-type or n-type.

    But with the new material, either p-type or n-type functions can be obtained just by bringing the vanishingly thin film into very close proximity with an adjacent metal electrode, and tuning the voltage in this electrode from positive to negative. That means the material can easily and instantly be switched from one type to the other, which is rarely the case with conventional semiconductors.

    In their experiments, the MIT team produced a device with a sheet of WSe2 material that was electrically doped half n-type and half p-type, creating a working diode that has properties “very close to the ideal,” Jarillo-Herrero says.
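
    Jarillo-Herrero’s “very close to the ideal” refers to the textbook Shockley diode law, in which an ideal diode has ideality factor n ≈ 1. A minimal numerical sketch (the saturation current and temperature below are illustrative round numbers, not the WSe2 device’s measured parameters):

    ```python
    import math

    def diode_current(v, i_s=1e-12, n=1.0, t=300.0):
        """Shockley diode equation: I = I_s * (exp(qV / (n k T)) - 1).

        i_s : saturation current (A); n : ideality factor (n -> 1 for a
        near-ideal diode); t : temperature (K).
        """
        k_over_q = 8.617e-5  # Boltzmann constant / electron charge, in V/K
        v_t = k_over_q * t   # thermal voltage, ~0.026 V at room temperature
        return i_s * (math.exp(v / (n * v_t)) - 1.0)

    # Forward bias conducts strongly; reverse bias only leaks about -I_s,
    # which is the one-way behavior that makes a diode a diode.
    forward = diode_current(0.6)
    reverse = diode_current(-0.6)
    ```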

    By making diodes, it is possible to produce all three basic optoelectronic devices — photodetectors, photovoltaic cells, and LEDs; the MIT team has demonstrated all three, Jarillo-Herrero says. While these are proof-of-concept devices, and not designed for scaling up, the successful demonstration could point the way toward a wide range of potential uses, he says.

    “It’s known how to make very large-area materials” of this type, Churchill says. While further work will be required, he says, “there’s no reason you wouldn’t be able to do it on an industrial scale.”

    In principle, Jarillo-Herrero says, because this material can be engineered to produce different values of a key property called bandgap, it should be possible to make LEDs that produce any color — something that is difficult to do with conventional materials. And because the material is so thin, transparent, and lightweight, devices such as solar cells or displays could potentially be built into building or vehicle windows, or even incorporated into clothing, he says.
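
    The bandgap-to-color link is just the photon relation λ = hc/E_g. A quick sketch (the bandgap values below are illustrative, not engineered WSe2 values):

    ```python
    def emission_wavelength_nm(bandgap_ev):
        """Photon wavelength for band-to-band emission: lambda = h*c / E_g.

        h*c ~= 1239.84 eV*nm, so the rule of thumb is
        lambda(nm) ~ 1240 / E_g(eV)."""
        return 1239.84 / bandgap_ev

    # Illustrative bandgaps (eV) -> emission wavelengths. The 1.65 eV value
    # is roughly the direct gap reported for monolayer WSe2 (deep red /
    # near-infrared); the others are round numbers for comparison.
    red = emission_wavelength_nm(1.65)
    green = emission_wavelength_nm(2.3)
    blue = emission_wavelength_nm(2.7)
    ```

    Engineering a larger bandgap pushes the emission toward shorter (bluer) wavelengths, which is the sense in which a tunable-gap material could cover any LED color.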

    While selenium is not as abundant as silicon or other promising materials for electronics, the thinness of these sheets is a big advantage, Churchill points out: “It’s thousands or tens of thousands of times thinner” than conventional diode materials, “so you’d use thousands of times less material” to make devices of a given size.

    In addition to the diodes, the team has used the same methods to make p-type and n-type transistors and other electronic components, Jarillo-Herrero says. Such transistors could have a significant advantage in speed and power consumption because they are so thin, he says.

    Kirill Bolotin, an assistant professor of physics and electrical engineering at Vanderbilt University, says, “The field of two-dimensional materials is still in its infancy, and because of this, any potential devices with well-defined applications are highly desired. Perhaps the most surprising aspect of this study is that all of these devices are efficient. … It is possible that devices of this kind can transform the way we think about applications where small optoelectronic elements are needed.”

    The research was supported by the U.S. Office of Naval Research, by a Packard fellowship, and by a Pappalardo fellowship, and made use of National Science Foundation-supported facilities.

    See the full article here.


  • richardmitnick 6:04 am on February 20, 2014 Permalink | Reply
    Tags: M.I.T. Physics

    From M.I.T.: “Closing the ‘free will’ loophole” 

    February 20, 2014
    Jennifer Chu, MIT News Office

    In a paper published this week in the journal Physical Review Letters, MIT researchers propose an experiment that may close the last major loophole of Bell’s inequality — a 50-year-old theorem that, if violated by experiments, would mean that our universe is based not on the textbook laws of classical physics, but on the less-tangible probabilities of quantum mechanics.

    Artist’s interpretation of ULAS J1120+0641, a very distant quasar.
    Image: ESO/M. Kornmesser

    Such a quantum view would allow for seemingly counterintuitive phenomena such as entanglement, in which the measurement of one particle instantly affects another, even if those entangled particles are at opposite ends of the universe. Among other things, entanglement — a quantum feature Albert Einstein skeptically referred to as “spooky action at a distance”— seems to suggest that entangled particles can affect each other instantly, faster than the speed of light.

    In 1964, physicist John Bell took on this seeming disparity between classical physics and quantum mechanics, stating that if the universe is based on classical physics, the measurement of one entangled particle should not affect the measurement of the other — a principle, known as locality, that places a limit on how correlated two particles can be. Bell devised a mathematical formula expressing locality, and presented scenarios that violated this formula, instead following the predictions of quantum mechanics.

    Since then, physicists have tested Bell’s theorem by measuring the properties of entangled quantum particles in the laboratory. Essentially all of these experiments have shown that such particles are correlated more strongly than would be expected under the laws of classical physics — findings that support quantum mechanics.
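
    Those laboratory tests are usually phrased in the CHSH form of Bell’s theorem: any local hidden-variable model obeys |S| ≤ 2, while quantum mechanics on an entangled pair can reach 2√2. A sketch of the quantum prediction (the standard textbook singlet correlation, not code from any of the experiments):

    ```python
    import math

    def qm_correlation(a, b):
        """Quantum-mechanical correlation for spin measurements along
        angles a and b (radians) on a singlet pair: E(a, b) = -cos(a - b)."""
        return -math.cos(a - b)

    def chsh(a1, a2, b1, b2, corr):
        """CHSH combination S = E(a1,b1) - E(a1,b2) + E(a2,b1) + E(a2,b2).
        Any local hidden-variable ("classical") model satisfies |S| <= 2."""
        return corr(a1, b1) - corr(a1, b2) + corr(a2, b1) + corr(a2, b2)

    # Setting angles that maximize the quantum violation:
    s = chsh(0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4, qm_correlation)
    # |s| comes out at 2*sqrt(2) ~ 2.83, beyond the classical bound of 2 --
    # the "correlated more strongly than classical physics allows" result.
    ```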

    However, scientists have also identified several major loopholes in Bell’s theorem. These suggest that while the outcomes of such experiments may appear to support the predictions of quantum mechanics, they may actually reflect unknown “hidden variables” that give the illusion of a quantum outcome, but can still be explained in classical terms.

    Though two major loopholes have since been closed, a third remains; physicists refer to it as setting independence, or more provocatively, “free will.” This loophole proposes that a particle detector’s settings may “conspire” with events in the shared causal past of the detectors themselves to determine which properties of the particle to measure — a scenario that, however far-fetched, implies that a physicist running the experiment does not have complete free will in choosing each detector’s setting. Such a scenario would result in biased measurements, suggesting that two particles are correlated more than they actually are, and giving more weight to quantum mechanics than classical physics.

    “It sounds creepy, but people realized that’s a logical possibility that hasn’t been closed yet,” says MIT’s David Kaiser, the Germeshausen Professor of the History of Science and senior lecturer in the Department of Physics. “Before we make the leap to say the equations of quantum theory tell us the world is inescapably crazy and bizarre, have we closed every conceivable logical loophole, even if they may not seem plausible in the world we know today?”

    Now Kaiser, along with MIT postdoc Andrew Friedman and Jason Gallicchio of the University of Chicago, has proposed an experiment to close this third loophole by determining a particle detector’s settings using some of the oldest light in the universe: distant quasars, or galactic nuclei, which formed billions of years ago.

    The idea, essentially, is that if two quasars on opposite sides of the sky are sufficiently distant from each other, they would have been out of causal contact since the Big Bang some 14 billion years ago, with no possible means of any third party communicating with both of them since the beginning of the universe — an ideal scenario for determining each particle detector’s settings.

    As Kaiser explains it, an experiment would go something like this: A laboratory setup would consist of a particle generator, such as a radioactive atom that spits out pairs of entangled particles. One detector measures a property of particle A, while another detector does the same for particle B. A split second after the particles are generated, but just before the detectors are set, scientists would use telescopic observations of distant quasars to determine which properties each detector will measure of a respective particle. In other words, quasar A determines the settings to detect particle A, and quasar B sets the detector for particle B.

    The researchers reason that since each detector’s setting is determined by sources that have had no communication or shared history since the beginning of the universe, it would be virtually impossible for these detectors to “conspire” with anything in their shared past to give a biased measurement; the experimental setup could therefore close the “free will” loophole. If, after multiple measurements with this experimental setup, scientists found that the measurements of the particles were correlated more than predicted by the laws of classical physics, Kaiser says, then the universe as we see it must be based instead on quantum mechanics.
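
    The setting-choice step of the proposed protocol can be sketched schematically. Here the “quasar photons” are simulated random wavelengths standing in for real telescope observations, and the threshold rule is a hypothetical choice made purely for illustration:

    ```python
    # Hypothetical sketch: each detector's binary setting is derived from a
    # property of the latest photon received from "its" quasar, so the two
    # choices share no causal history. Simulated wavelengths stand in for
    # real astronomical data.
    import random

    def setting_from_quasar_photon(wavelength_nm, threshold_nm=550.0):
        """Map an observed photon to one of two detector settings (0 or 1)
        by comparing its wavelength to a fixed threshold."""
        return 0 if wavelength_nm < threshold_nm else 1

    random.seed(42)
    # Simulated photon streams from two causally disconnected quasars:
    quasar_a = [random.uniform(400, 700) for _ in range(1000)]
    quasar_b = [random.uniform(400, 700) for _ in range(1000)]

    settings_a = [setting_from_quasar_photon(w) for w in quasar_a]
    settings_b = [setting_from_quasar_photon(w) for w in quasar_b]
    ```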

    “I think it’s fair to say this [loophole] is the final frontier, logically speaking, that stands between this enormously impressive accumulated experimental evidence and the interpretation of that evidence saying the world is governed by quantum mechanics,” Kaiser says.

    Now that the researchers have put forth an experimental approach, they hope that others will perform actual experiments, using observations of distant quasars.

    Physicist Michael Hall says that while the idea of using light from distant sources like quasars is not a new one, the group’s paper illustrates the first detailed analysis of how such an experiment could be carried out in practice, using current technology.

    “It is therefore a big step to closing the loophole once and for all,” says Hall, a research fellow in the Centre for Quantum Dynamics at Griffith University in Australia. “I am sure there will be strong interest in conducting such an experiment, which combines cosmic distances with microscopic quantum effects — and most likely involving an unusual collaboration between quantum physicists and astronomers.”

    “At first, we didn’t know if our setup would require constellations of futuristic space satellites, or 1,000-meter telescopes on the dark side of the moon,” Friedman says. “So we were naturally delighted when we discovered, much to our surprise, that our experiment was both feasible in the real world with present technology, and interesting enough to our experimentalist collaborators who actually want to make it happen in the next few years.”

    Adds Kaiser, “We’ve said, ‘Let’s go for broke — let’s use the history of the cosmos since the Big Bang, darn it.’ And it is very exciting that it’s actually feasible.”

    This research was funded by the National Science Foundation.

    See the full article here.


  • richardmitnick 6:32 pm on February 6, 2014 Permalink | Reply
    Tags: M.I.T. Physics

    From M.I.T.: “Theorists predict new forms of exotic insulating materials” 

    Topological insulators could exist in six new types not seen before.

    February 6, 2014
    David L. Chandler, MIT News Office

    Topological insulators — materials whose surfaces can freely conduct electrons even though their interiors are electrical insulators — have been of great interest to physicists in recent years because of unusual properties that may provide insights into quantum physics. But most analysis of such materials has had to rely on highly simplified models.

    An idealized band structure for a topological insulator. The Fermi level falls within the bulk band gap, which is traversed by topologically protected surface states.

    Now, a team of researchers at MIT has performed a more detailed analysis that hints at the existence of six new kinds of topological insulators. The work also predicts the materials’ physical properties in sufficient detail that it should be possible to identify them unambiguously if they are produced in the lab, the scientists say.

    The new findings are reported this week in the journal Science by MIT professor of physics Senthil Todadri, graduate student Chong Wang, and Andrew Potter, a former MIT graduate student who is now a postdoc at the University of California at Berkeley.

    “In contrast to conventional insulators, the surface of the topological insulators harbors exotic physics that are interesting both for fundamental physics, and possibly for applications,” Senthil says. But attempts to study the properties of these materials have “relied on a highly simplified model in which the electrons inside the solid are treated as though they did not interact with each other.” New analytical tools applied by the MIT team now reveal “that there are six, and only six, new kinds of topological insulators that require strong electron-electron interactions.”

    “The surface of a three-dimensional material is two-dimensional,” Senthil says — which explains why the electrical behavior of the surface of a topological insulator is so different from that of the interior. But, he adds, “The kind of two-dimensional physics that emerges [on these surfaces] can never be in a two-dimensional material. There has to be something inside, otherwise this physics will never occur. That’s what’s exciting about these materials,” which reveal processes that don’t show up in other ways.
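
    For the simplest, non-interacting topological insulators — the “highly simplified model” the new work goes beyond — the exotic surface physics is a single Dirac cone. A textbook sketch (standard form from the literature, not taken from the Science paper, which concerns the strongly interacting case):

    ```latex
    % Effective surface Hamiltonian of a non-interacting 3D topological
    % insulator: a single Dirac cone with spin locked to momentum.
    H_{\mathrm{surf}} = \hbar v_F \,(\sigma_x k_y - \sigma_y k_x),
    \qquad
    E_{\pm}(\mathbf{k}) = \pm \hbar v_F |\mathbf{k}|
    ```

    A single Dirac cone like this cannot arise in any purely two-dimensional lattice model, which is the sense in which the surface physics requires a bulk behind it.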

    In fact, Senthil says, this new work based on analysis of such surface phenomena shows that some previous predictions of phenomena in two-dimensional materials “cannot be right.”

    Since this is a new finding, he says, it is too soon to say what applications these new topological insulators might have. But the analysis provides details on predicted properties that should allow experimentalists to begin to understand the behavior of these exotic states of matter.

    “If they exist, we know how to detect them,” Senthil says of these new phases. “And we know that they can exist.” What this research doesn’t yet show, however, is what these new topological insulators’ composition might be, or how to go about creating them.

    The next step, he says, is to try to predict “what compositions might lead to” these newly predicted phases of topological insulators. “It’s an open question now that we need to attack.”

    Joel Moore, a professor of physics at the University of California at Berkeley, says, “I think it is a very insightful piece of work. It is less about a very complicated calculation than about thinking deeply and abstractly.” While much work remains to be done to find or create such materials, he says, “this work provides some clear guidance,” revealing that the number of possible states “is remarkably small” and that understanding their properties should not be as complicated as might have been expected.

    See the full article here.


  • richardmitnick 6:06 pm on November 18, 2013 Permalink | Reply
    Tags: M.I.T. Physics

    From Symmetry: “Connecting the visible universe with dark matter” 

    November 18, 2013

    Does the visible photon have a counterpart, a dark photon, that interacts with the components of dark matter?

    Kandice Carter

    For thousands of years, humanity has relied on light to reveal the mysteries of our universe, whether it’s by observing the light given off by brightly burning stars or by shining light on the very small with microscopes.

    Yet, according to recent evidence, scientists think that only about 5 percent of our universe is made of visible matter—ordinary atoms that make up nearly everything we can see, touch and feel. The other 95 percent is composed of the so-called dark sector, which includes dark matter and dark energy. These are described as “dark” because we observe their effects on other objects rather than by seeing them directly. Now, to study the dark, scientists are turning to what they know about light, and they are pointing to a recently successful test of experimental equipment that suggests an exploration of the dark sector may be possible at Jefferson Lab.

    Dark light

    Illustration by Sandbox Studio, Chicago

    We know that the particles of light, photons, interact with visible matter and its building blocks—protons, neutrons and electrons. Perhaps the same is true for dark matter. In other words, does the visible photon have a counterpart, a dark photon, that interacts with the components of dark matter?

    The DarkLight collaboration is hoping to answer that question. Peter Fisher and Richard Milner, professors at the Massachusetts Institute of Technology, serve as spokespersons for the DarkLight collaboration. Fisher was recently appointed head of the MIT physics department, and Milner is director for the institute’s Laboratory of Nuclear Science.

    In a recent interview, Milner said that the dark photon may bridge the dark and light sectors of our universe.

    “Such particles are motivated by the assumption that dark matter exists and that it must somehow couple to the standard matter in the universe. And these dark photons kind of straightforwardly could do that,” he explains.

    According to theory, the dark photon is very similar to the light photon, except that it has mass and interacts with dark matter. The dark photon is sometimes referred to as a heavy photon or as a particle dubbed the A’ (pronounced “A prime”). If the dark photon also interacts with ordinary matter, it may be coaxed out of hiding under just the right conditions. In fact, Milner says that scientists may have already caught glimpses of the effects of dark photons in data from particle physics and astrophysics experiments.
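
    In the standard theoretical parametrization (not spelled out in the article), the dark photon couples to ordinary matter through a small “kinetic mixing” with the visible photon:

    ```latex
    % Kinetic-mixing portal between the visible photon (field strength F)
    % and a massive dark photon A' (field strength F'):
    \mathcal{L} \;\supset\;
      -\frac{\epsilon}{2}\, F_{\mu\nu} F'^{\mu\nu}
      \;+\; \frac{1}{2}\, m_{A'}^2\, A'_\mu A'^\mu
    ```

    The mixing parameter ε is expected to be tiny, which is why dark-photon interactions with ordinary matter would be so rare and so hard to coax out of hiding.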

    Hints of dark photons in data past

    For instance, dark photons may play a role in explaining the data in the Muon g-2 experiment (pronounced “Moo-on g minus two experiment”) that was conducted at Brookhaven National Laboratory in 2001. Muons are particles that can be thought of as heavier cousins of electrons.

    The Muon g-2 experiment sought to measure a characteristic of the muon related to its magnetic field. In simple terms, an object’s magnetic moment quantifies the strength of its reaction to a magnetic field. The muon has a magnetic moment, but, unlike your typical chunk of steel, the muon’s magnetic moment is altered slightly by quantum fluctuations — this alteration is captured in the muon’s so-called “anomalous magnetic moment.” When the Muon g-2 collaboration measured the muon’s anomalous magnetic moment, its collaborators were surprised to find that the number they measured didn’t match the number they expected.
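
    The quantity at stake can be written compactly. The figures below are the approximate ones commonly quoted at the time, added here for context rather than taken from the article:

    ```latex
    % Definition of the muon anomalous magnetic moment:
    a_\mu \;\equiv\; \frac{g_\mu - 2}{2}
    % Brookhaven's E821 measurement exceeded the Standard Model prediction:
    \Delta a_\mu \;=\; a_\mu^{\mathrm{exp}} - a_\mu^{\mathrm{SM}}
               \;\approx\; 2.9 \times 10^{-9}
    % a discrepancy of roughly 3 standard deviations.
    ```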

    “If this is real, such a discrepancy could be explained by a dark photon of the type and mass that DarkLight is searching for,” Milner says.

    Other evidence of dark photons may be found in astrophysics.

    When a measurement was made of high-energy electron–positron pairs in outer space, there were more than could be explained by production from cosmic rays, suggesting that something else, such as dark photons, produces extra pairs.

    “Also, there are indications from the center of our galaxy that there is radiation which might be consistent with the dark photon,” Milner adds.

    A challenging experiment

    If dark photons are giving rise to these observed phenomena, it means that they do interact with visible matter, if ever so rarely. It also means that the effect should be reproducible and measurable by experimenters.

    “This dark photon that we expect could be seen by emission from a charged particle beam, like an electron beam. So an electron beam can radiate such a dark photon,” Milner explains. “So, we looked around, and the world’s most powerful electron beam is at the Jefferson Lab Free-Electron Laser. It has about 1 megawatt of power in the beam. And so that’s how we arrived at Jefferson Lab; it’s absolutely unique in the world.”

    The scientists drafted a proposal that calls for aiming the beam at the protons in a target of hydrogen gas. MIT theorist Jesse Thaler, whose group has carried out important calculations for DarkLight, proposed the name for the experiment, based on the method that will be used to carry it out (DarkLight: Detecting a Resonance Kinematically with Electrons Incident on a Gaseous Hydrogen Target).

    The experimenters chose hydrogen, because its atoms consist of just one proton with an orbiting electron. When the electrons from the accelerator strike the protons in the hydrogen, they’ll knock the protons out of the target.

    “So if we do it at sufficiently low energies, we know the final state is simple—it’s just the scattered electron, the proton and the electron–positron pair, which could come from this decay of the dark photon,” Milner explains.
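
    The dark-photon signature in that simple final state would be a narrow peak in the invariant mass of the electron–positron pair. A sketch of the reconstruction (illustrative four-momenta in natural units; not DarkLight’s analysis code):

    ```python
    import math

    def invariant_mass(p1, p2):
        """Invariant mass of a two-particle system from four-momenta
        (E, px, py, pz) in natural units: m^2 = (E1+E2)^2 - |p1+p2|^2.
        A dark photon decaying to e+ e- would appear as a narrow peak in
        the e+ e- invariant-mass spectrum at the dark photon's mass."""
        e = p1[0] + p2[0]
        px = p1[1] + p2[1]
        py = p1[2] + p2[2]
        pz = p1[3] + p2[3]
        return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

    # Illustrative case (MeV): a 100 MeV parent at rest decaying to two
    # back-to-back daughters of 50 MeV each (electron mass neglected).
    m = invariant_mass((50.0, 0.0, 0.0, 50.0), (50.0, 0.0, 0.0, -50.0))
    # m reconstructs the parent mass, 100 MeV.
    ```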

    The experiment was approved on the condition that the collaboration could show that they were up to the technical challenges of conducting it. Milner says the main challenge was to prove that the accelerator operators could get an electron beam through the narrow hydrogen target. Even though the electrons in the beam would have low energies, the beam would have a lot of them, amounting to 1 megawatt of power. That much power would destroy any container used to hold the hydrogen gas.

    The experimenters decided that the gas would be pumped into a narrow pipe. The electrons would then be threaded into that same narrow pipe. At its narrowest, the pipe would need to be about 2 millimeters wide and 5 centimeters long, which is roughly the size of a round coffee stirrer.

    “We decided that we really needed to do a test with a beam. So, we basically built a system, a test target system that had basically a mock-up of apertures, 2-millimeter-, 4-millimeter- and 6-millimeter-diameter apertures, in an aluminum block. And we brought it to Jefferson Lab about a year ago. And in late July, we had a test,” he says.

    Jefferson Lab laser accelerator operators threaded an electron beam through a small tube the size of a coffee stirrer inside this apparatus to show that the DarkLight experiment was possible. DarkLight will search for dark photons, which are particles that interact with both dark matter and visible matter. Image courtesy of Jefferson Lab

    Threading the coffee stirrer

    The staff at MIT-Bates Research and Engineering Center designed, constructed and delivered the test target assembly. The Jefferson Lab accelerator operators and a team from the DarkLight collaboration attempted to thread the electron beam through the narrow pipes in the aluminum block, successfully threading the beam through the 6-millimeter, then the 4-millimeter, and finally the 2-millimeter mock targets. What’s more, the electrons in the beam passed through the pipes cleanly. In the case of the smallest aperture, 2 millimeters, the operators threaded the electrons through the pipe continuously over a period of seven hours; in that time, only three electrons were lost as they struck the walls of the pipe for every million that passed cleanly through.
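
    Those quoted numbers imply remarkably little beam power intercepted by the aperture walls. A back-of-envelope check (it treats the electron loss fraction as a power fraction, which is only approximate):

    ```python
    # Back-of-envelope using the figures quoted above: a ~1 MW electron beam
    # with 3 electrons lost on the walls per million transmitted.
    beam_power_w = 1.0e6          # total beam power (watts)
    loss_fraction = 3.0 / 1.0e6   # fraction of electrons striking the walls

    # Average power deposited on the aperture walls (assumes lost electrons
    # carry the same energy as transmitted ones):
    power_on_walls_w = beam_power_w * loss_fraction  # -> 3 W
    ```

    A few watts on an aluminum block is easily managed, which is why the test demonstrated the beam is “very powerful … but also very clean.”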

    “So, it’s a very powerful beam, it’s a very bright beam, but it’s also a very clean beam,” Milner says.

    The DarkLight collaboration recently published the results of the successful tests in Physical Review Letters.

    With this successful test, the DarkLight experiment has been approved for running. Milner says the collaboration has a lot of work ahead of it before it can run the experiment, including building the detectors that will be used to capture the protons, electrons and electron–positron pairs, and finalizing the target.

    In the meantime, there are also other hunts for dark photons that are preparing to run at Jefferson Lab. Two of these experiments will be powered by the same accelerator. The Heavy Photon Search is preparing to run in Jefferson Lab’s Experimental Hall B, and the APEX experiment will be carried out in Experimental Hall A.

    See the full article here.

    Symmetry is a joint Fermilab/SLAC publication.

