Tagged: MIT

  • richardmitnick 7:30 am on September 18, 2018
    Tags: 3Q: Sheila Widnall on sexual harassment in STEM, MIT

    From MIT News: “3Q: Sheila Widnall on sexual harassment in STEM” 

    September 17, 2018
    David L. Chandler

    Sheila Widnall, MIT Institute Professor and former secretary of the U.S. Air Force. Image: Len Rubenstein

    National Academies report cites need for strong leadership and cultural change; will be focus of upcoming MIT panel discussion.

    Sheila Widnall, MIT Institute Professor and former secretary of the U.S. Air Force, was co-chair of a report commissioned by the National Academies of Sciences, Engineering, and Medicine to explore the impact of sexual harassment of women in those fields. Along with co-chair Paula Johnson, president of Wellesley College, Widnall and dozens of panel members and researchers spent two years collecting and analyzing data for the report, which was released over the summer. On Sept. 18, Widnall, Johnson, and Brandeis University Professor Anita Hill will offer their thoughts on the report’s findings and recommendations, in a discussion at MIT’s Huntington Hall, Room 10-250. Widnall spoke with MIT News about some of the report’s key takeaways.

    Q: As a woman who has been working in academia for many years, did you find anything in the results of this report that surprised you, anything that was unexpected?

    A: Well, not unexpected, but the National Academy reports have to be based on data, and so our committee was composed of scientists, engineers, and social scientists, who have somewhat different ways of looking at problems. One of the challenges was to bring the committee together to agree on a common result. We couldn’t just make up things; we had to get data. So, we had some fundamental data from various universities that were taken by a recognized survey platform, and that was the foundation of our data.

    We had data for thousands and thousands of faculty and students. We did not look at student-on-student behavior, which we felt was not really part of our charge. We were looking at the structure of academic institutions and the environment that’s created in the university. We also looked at the relationship between faculty, who hold considerable authority over the climate, and the futures of students, which can be influenced by faculty through activities such as thesis advising, letter writing, and helping people find the next rung in their career.

    At the end of the report, after we’d accumulated all this data and our conclusions about it, we said, “OK, what’s the solution?” And the solution is leadership. There is no other way to get started in some of these very difficult climate issues than leadership. Presidents, provosts, deans, department heads, faculty — these are the leaders at a university, and they are essential for dealing with these issues. We can’t make little recommendations to do this or do that. It really boils down to leadership.

    Q: What are some of the specific recommendations or programs that the report committee would like to see adopted?

    A: We found many productive actions taken by universities, including climate surveys, and our committee was particularly pleased with ombudsman programs — having a way that individuals can go to people and discuss issues and get help. I think MIT has been a leader in that; I’m not sure all universities have those. And another recommendation — I hate to use the word training, because faculty hate the word training — but MIT has put in place some things that faculty have to work through in terms of training, mainly to understand the definitions of what these various terms mean, in terms of the legal structure, the climate structure. The bottom line is you want to create a civil and welcoming climate where people feel free to express any concerns that they have.

    One of the things we did, since we were data-driven, was that we tried to collect examples of processes and programs that have been put in place by other societies, and put them forward as examples.

    We found various professional societies that are very aware of things that can happen offsite, so they have instituted special policies or even procedures for making sure that a meeting is a safe and welcoming environment for people who come across the country to go to a professional meeting. There are several examples of that in the report, of societies that have really stepped forward and put in place procedures and principles about “this is how you should behave at a meeting.” So I think that’s very welcome.

    Q: One of the interesting findings of the report was that gender harassment — stereotyping what people can or can’t do based on their gender — was especially pervasive. What are some of the impacts of that kind of behavior?

    A: A hostile work environment is caused by the incivility of the climate: all the little microinsults, things like telling women they can’t solder or that women don’t belong in science or engineering. I think that’s really an important point in our report. Gender discrimination is most pervasive, and many people don’t think it’s wrong; they just don’t give it a second thought.

    If you have a climate where people feel that they can get away with that kind of behavior, then it’s more likely to happen. If you have an environment where people are expected to be polite — is that an old-fashioned word? — or civil, people act respectfully.

    It’s pretty clear that physical assault is unacceptable. So we didn’t deal a lot with that issue. It’s certainly a very serious kind of harassment. But we did try to focus on this less obvious form and the responsibilities of universities to create a safe and welcoming climate. I think MIT does a really good job of that.

    I think the numbers have helped to improve the climate. You know, when I came to MIT, women were 1 percent of the undergraduate student body. Now it’s 46 percent, so clearly, times have changed.

    When I came here as a freshman, my freshman advisor said, “What are you doing here?” That wasn’t exactly welcoming. He looked at me as if I didn’t belong here. And I don’t think that’s the case anymore, not with such a high percentage of undergraduates being women. I think increasingly, people do feel that women are an inherent part of the field of engineering, in the field of science, in medicine.

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition


    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

     
  • richardmitnick 10:29 am on September 7, 2018
    Tags: AIM-Adaptable Interpretable Machine Learning, Black-box models, MIT

    From MIT News: “Taking machine thinking out of the black box” 

    September 5, 2018
    Anne McGovern | Lincoln Laboratory

    Members of a team developing Adaptable Interpretable Machine Learning at Lincoln Laboratory are: (l-r) Melva James, Stephanie Carnell, Jonathan Su, and Neela Kaushik. Photo: Glen Cooper.

    The Adaptable Interpretable Machine Learning project is redesigning machine learning models so humans can understand what computers are thinking.

    Software applications provide people with many kinds of automated decisions, such as identifying what an individual’s credit risk is, informing a recruiter of which job candidate to hire, or determining whether someone is a threat to the public. In recent years, news headlines have warned of a future in which machines operate in the background of society, deciding the course of human lives while using untrustworthy logic.

    Part of this fear is derived from the obscure way in which many machine learning models operate. Known as black-box models, they are defined as systems in which the journey from input to output is next to impossible for even their developers to comprehend.

    “As machine learning becomes ubiquitous and is used for applications with more serious consequences, there’s a need for people to understand how it’s making predictions so they’ll trust it when it’s doing more than serving up an advertisement,” says Jonathan Su, a member of the technical staff in MIT Lincoln Laboratory’s Informatics and Decision Support Group.

    Currently, researchers either use post hoc techniques or an interpretable model such as a decision tree to explain how a black-box model reaches its conclusion. With post hoc techniques, researchers observe an algorithm’s inputs and outputs and then try to construct an approximate explanation for what happened inside the black box. The issue with this method is that researchers can only guess at the inner workings, and the explanations can often be wrong. Decision trees, which map choices and their potential consequences in a tree-like construction, work nicely for categorical data whose features are meaningful, but these trees are not interpretable in important domains, such as computer vision and other complex data problems.
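
    To make the post hoc idea concrete, here is a minimal sketch of a local-surrogate explanation in the spirit of methods like LIME (illustrative only, not the AIM team’s code): perturb the input of interest, query the black box, and fit a proximity-weighted linear model whose coefficients serve as the approximate explanation. The black_box function here is a hypothetical stand-in for any opaque model.

    import numpy as np

    def black_box(X):
        # Hypothetical opaque model standing in for any black-box classifier.
        return (np.sin(X[:, 0]) + X[:, 1] ** 2 > 1.0).astype(float)

    def local_surrogate(x0, n_samples=500, scale=0.5, seed=0):
        # Sample around the point we want explained, query the black box,
        # and fit a proximity-weighted linear model that mimics it locally.
        rng = np.random.default_rng(seed)
        X = x0 + rng.normal(0.0, scale, size=(n_samples, x0.size))
        y = black_box(X)
        w = np.exp(-((X - x0) ** 2).sum(axis=1) / (2 * scale ** 2))
        A = np.hstack([X, np.ones((n_samples, 1))])  # features + intercept
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
        return coef[:-1]  # per-feature weights: the local "explanation"

    print(local_surrogate(np.array([0.5, 0.8])))

    As the article notes, such surrogate explanations are only guesses at the inner workings and can be wrong, which is precisely the motivation for the interpretable-by-design models described next.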

    Su leads a team at the laboratory that is collaborating with Professor Cynthia Rudin at Duke University, along with Duke students Chaofan Chen, Oscar Li, and Alina Barnett, to research methods for replacing black-box models with prediction methods that are more transparent. Their project, called Adaptable Interpretable Machine Learning (AIM), focuses on two approaches: interpretable neural networks as well as adaptable and interpretable Bayesian rule lists (BRLs).

    A neural network is a computing system composed of many interconnected processing elements. These networks are typically used for image analysis and object recognition. For instance, an algorithm can be taught to recognize whether a photograph includes a dog by first being shown photos of dogs. Researchers say the problem with these neural networks is that their functions are nonlinear and recursive, as well as complicated and confusing to humans, and the end result is that it is difficult to pinpoint what exactly the network has defined as “dogness” within the photos and what led it to that conclusion.

    To address this problem, the team is developing what it calls “prototype neural networks.” These are different from traditional neural networks in that they naturally encode explanations for each of their predictions by creating prototypes, which are particularly representative parts of an input image. These networks make their predictions based on the similarity of parts of the input image to each prototype.

    As an example, if a network is tasked with identifying whether an image is a dog, cat, or horse, it would compare parts of the image to prototypes of important parts of each animal and use this information to make a prediction. A paper on this work, “This looks like that: deep learning for interpretable image recognition,” was recently featured in an episode of the “Data Science at Home” podcast. A previous paper, “Deep Learning for Case-Based Reasoning through Prototypes: A Neural Network that Explains Its Predictions,” used entire images as prototypes, rather than parts.
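
    A schematic of the scoring idea (an illustration of the concept, not the published architecture): each class is represented by a few prototype feature vectors, each image patch is compared to each prototype, and the class whose prototypes best match some part of the image wins.

    import numpy as np

    def classify_by_prototypes(patch_feats, prototypes):
        # patch_feats: (n_patches, d) features for parts of one input image.
        # prototypes: class name -> (n_protos, d) learned prototype features.
        scores = {}
        for cls, protos in prototypes.items():
            # Squared distance from every patch to every prototype of this class.
            d2 = ((patch_feats[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
            # "This looks like that": each prototype contributes the similarity
            # of its best-matching patch; the class score sums those matches.
            scores[cls] = (-d2).max(axis=0).sum()
        return max(scores, key=scores.get), scores

    rng = np.random.default_rng(1)
    patches = rng.normal(size=(9, 4))  # e.g., 9 patches with 4-d features
    protos = {c: rng.normal(size=(3, 4)) for c in ("dog", "cat", "horse")}
    print(classify_by_prototypes(patches, protos)[0])

    Because every class score decomposes into named prototype matches, the prediction carries its own explanation.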

    The other area the research team is investigating is BRLs, which are less-complicated, one-sided decision trees that are suitable for tabular data and often as accurate as other models. BRLs are made of a sequence of conditional statements that naturally form an interpretable model. For example, if blood pressure is high, then risk of heart disease is high. Su and colleagues are using properties of BRLs to enable users to indicate which features are important for a prediction. They are also developing interactive BRLs, which can be adapted immediately when new data arrive rather than recalibrated from scratch on an ever-growing dataset.
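
    The readability of a rule list comes from its evaluation order: the first condition that matches fires, and that rule is the explanation. A minimal sketch with assumed example rules (not the laboratory’s BRL implementation):

    rules = [
        # (condition over a patient record, predicted risk) -- assumed examples
        (lambda p: p["systolic_bp"] >= 140, "high risk"),
        (lambda p: p["age"] >= 65 and p["smoker"], "high risk"),
        (lambda p: p["cholesterol"] < 200, "low risk"),
    ]
    DEFAULT = "medium risk"

    def predict(patient):
        # The first matching rule fires, which is what makes the model
        # readable: the explanation *is* the rule that fired.
        for condition, outcome in rules:
            if condition(patient):
                return outcome
        return DEFAULT

    print(predict({"systolic_bp": 150, "age": 58, "smoker": False, "cholesterol": 210}))
    # -> "high risk", because the blood-pressure rule fired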

    Stephanie Carnell, a graduate student from the University of Florida and a summer intern in the Informatics and Decision Support Group, is applying the interactive BRLs from the AIM program to a project to help medical students become better at interviewing and diagnosing patients. Currently, medical students practice these skills by interviewing virtual patients and receiving a score on how much important diagnostic information they were able to uncover. But the score does not include an explanation of what, precisely, in the interview the students did to achieve their score. The AIM project hopes to change this.

    “I can imagine that most medical students are pretty frustrated to receive a prediction regarding success without some concrete reason why,” Carnell says. “The rule lists generated by AIM should be an ideal method for giving the students data-driven, understandable feedback.”

    The AIM program is part of ongoing research at the laboratory in human-systems engineering — or the practice of designing systems that are more compatible with how people think and function, such as understandable, rather than obscure, algorithms.

    “The laboratory has the opportunity to be a global leader in bringing humans and technology together,” says Hayley Reynolds, assistant leader of the Informatics and Decision Support Group. “We’re on the cusp of huge advancements.”

    Melva James is another technical staff member in the Informatics and Decision Support Group involved in the AIM project. “We at the laboratory have developed Python implementations of both BRL and interactive BRLs,” she says. “[We] are concurrently testing the output of the BRL and interactive BRL implementations on different operating systems and hardware platforms to establish portability and reproducibility. We are also identifying additional practical applications of these algorithms.”

    Su explains: “We’re hoping to build a new strategic capability for the laboratory — machine learning algorithms that people trust because they understand them.”

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition


     
  • richardmitnick 7:35 am on September 7, 2018
    Tags: Fish-eye lens may entangle pairs of atoms, MIT

    From MIT News: “Fish-eye lens may entangle pairs of atoms” 

    September 5, 2018
    Jennifer Chu

    James Maxwell was the first to realize that light is able to travel in perfect circles within the fish-eye lens because the density of the lens changes, with material being thickest at the middle and gradually thinning out toward the edges. No image credit.

    Scientists find a theoretical optical device may have uses in quantum computing.

    Nearly 150 years ago, the physicist James Maxwell proposed that a circular lens that is thickest at its center, and that gradually thins out at its edges, should exhibit some fascinating optical behavior. Namely, when light is shone through such a lens, it should travel around in perfect circles, creating highly unusual, curved paths of light.

    He also noted that such a lens, at least broadly speaking, resembles the eye of a fish. The lens configuration he devised has since been known in physics as Maxwell’s fish-eye lens — a theoretical construct that is only slightly similar to commercially available fish-eye lenses for cameras and telescopes.

    Now scientists at MIT and Harvard University have for the first time studied this unique, theoretical lens from a quantum mechanical perspective, to see how individual atoms and photons may behave within the lens. In a study published Wednesday in Physical Review A, they report that the unique configuration of the fish-eye lens enables it to guide single photons through the lens, in such a way as to entangle pairs of atoms, even over relatively long distances.

    Entanglement is a quantum phenomenon in which the properties of one particle are linked, or correlated, with those of another particle, even over vast distances. The team’s findings suggest that fish-eye lenses may be a promising vehicle for entangling atoms and other quantum bits, which are the necessary building blocks for designing quantum computers.

    “We found that the fish-eye lens has something that no other two-dimensional device has, which is maintaining this entangling ability over large distances, not just for two atoms, but for multiple pairs of distant atoms,” says first author Janos Perczel, a graduate student in MIT’s Department of Physics. “Entanglement and connecting these various quantum bits can be really the name of the game in making a push forward and trying to find applications of quantum mechanics.”

    The team also found that the fish-eye lens, contrary to recent claims, does not produce a perfect image. Scientists have thought that Maxwell’s fish-eye may be a candidate for a “perfect lens” — a lens that can go beyond the diffraction limit, meaning that it can focus light to a point that is smaller than the light’s own wavelength. This perfect imaging, scientists predict, should produce an image with essentially unlimited resolution and extreme clarity.

    However, by modeling the behavior of photons through a simulated fish-eye lens, at the quantum level, Perczel and his colleagues concluded that it cannot produce a perfect image, as originally predicted.

    “This tells you that there are these limits in physics that are really difficult to break,” Perczel says. “Even in this system, which seemed to be a perfect candidate, this limit seems to be obeyed. Perhaps perfect imaging may still be possible with the fish eye in some other, more complicated way, but not as originally proposed.”

    Perczel’s co-authors on the paper are Peter Komar and Mikhail Lukin from Harvard University.

    A circular path

    Maxwell was the first to realize that light is able to travel in perfect circles within the fish-eye lens because the density of the lens changes, with material being thickest at the middle and gradually thinning out toward the edges. The denser a material, the slower light moves through it. This explains the optical effect when a straw is placed in a glass half full of water. Because the water is so much denser than the air above it, light suddenly moves more slowly, bending as it travels through water and creating an image that looks as if the straw is disjointed.

    In the theoretical fish-eye lens, the differences in density are much more gradual and are distributed in a circular pattern, in such a way that it curves rather than bends light, guiding light in perfect circles within the lens.
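
    For reference, the density profile Maxwell described corresponds to a refractive index that is highest at the center and falls off smoothly toward the rim; in its standard textbook form,

    n(r) = n0 / (1 + (r/R)^2),

    where n0 is the index at the center and R is the radius of the lens. A ray launched anywhere in a medium with this profile bends continuously and closes into a circle, which is the behavior the MIT and Harvard team exploits.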

    In 2009, Ulf Leonhardt, a physicist at the Weizmann Institute of Science in Israel, was studying the optical properties of Maxwell’s fish-eye lens and observed that, when photons are released through the lens from a single point source, the light travels in perfect circles through the lens and collects at a single point at the opposite end, with very little loss of light.

    “None of the light rays wander off in unwanted directions,” Perczel says. “Everything follows a perfect trajectory, and all the light will meet at the same time at the same spot.”

    Leonhardt, in reporting his results, made brief mention of whether the fish-eye lens’ single-point focus might be useful in precisely entangling pairs of atoms at opposite ends of the lens.

    “Mikhail [Lukin] asked him whether he had worked out the answer, and he said he hadn’t,” Perczel says. “That’s how we started this project and started digging deeper into how well this entangling operation works within the fish-eye lens.”

    Playing photon ping-pong

    To investigate the quantum potential of the fish-eye lens, the researchers modeled the lens as the simplest possible system, consisting of two atoms, one at either end of a two-dimensional fish-eye lens, and a single photon, aimed at the first atom. Using established equations of quantum mechanics, the team tracked the photon at any given point in time as it traveled through the lens, and calculated the state of both atoms and their energy levels through time.

    They found that when a single photon is shone through the lens, it is temporarily absorbed by an atom at one end of the lens. It then circles through the lens, to the second atom at the precise opposite end of the lens. This second atom momentarily absorbs the photon before sending it back through the lens, where the light collects precisely back on the first atom.

    “The photon is bounced back and forth, and the atoms are basically playing ping pong,” Perczel says. “Initially only one of the atoms has the photon, and then the other one. But between these two extremes, there’s a point where both of them kind of have it. It’s this mind-blowing quantum mechanics idea of entanglement, where the photon is completely shared equally between the two atoms.”
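
    A toy model captures the essence of that statement (illustrative only; the paper’s calculation tracks the full photon field in the lens). Treat the two atoms as sharing a single excitation that swaps back and forth at an effective rate g; a quarter of the way through each swap the excitation is split equally, and the pair is maximally entangled:

    import numpy as np

    g = 1.0                          # effective exchange rate (arbitrary units)
    for t in np.linspace(0.0, np.pi / g, 5):
        p = np.cos(g * t) ** 2       # probability the excitation is on atom 1
        q = 1.0 - p
        # Entanglement entropy of atom 1: 0 for a product state, 1 bit at maximum.
        S = 0.0 if p in (0.0, 1.0) else -(p * np.log2(p) + q * np.log2(q))
        print(f"t = {t:.2f}   P(atom 1) = {p:.2f}   entanglement = {S:.2f} bits")

    The printout shows the excitation ping-ponging between the atoms, with the entanglement peaking each time the photon is shared equally between them.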

    Perczel says that the photon is able to entangle the atoms because of the unique geometry of the fish-eye lens. The lens’ density is distributed in such a way that it guides light in a perfectly circular pattern and can cause even a single photon to bounce back and forth between two precise points along a circular path.

    “If the photon just flew away in all directions, there wouldn’t be any entanglement,” Perczel says. “But the fish-eye gives this total control over the light rays, so you have an entangled system over long distances, which is a precious quantum system that you can use.”

    As they increased the size of the fish-eye lens in their model, the atoms remained entangled, even over relatively large distances of tens of microns. They also observed that, even if some light escaped the lens, the atoms were able to share enough of a photon’s energy to remain entangled. Finally, as they placed more pairs of atoms in the lens, opposite to one another, along with corresponding photons, these atoms also became simultaneously entangled.

    “You can use the fish eye to entangle multiple pairs of atoms at a time, which is what makes it useful and promising,” Perczel says.

    Fishy secrets

    In modeling the behavior of photons and atoms in the fish-eye lens, the researchers also found that, as light collected on the opposite end of the lens, it did so within an area that was larger than the wavelength of the photon’s light, meaning that the lens likely cannot produce a perfect image.

    “We can precisely ask the question during this photon exchange, what’s the size of the spot to which the photon gets recollected? And we found that it’s comparable to the wavelength of the photon, and not smaller,” Perczel says. “Perfect imaging would imply it would focus on an infinitely sharp spot. However, that is not what our quantum mechanical calculations showed us.”

    Going forward, the team hopes to work with experimentalists to test the quantum behaviors they observed in their modeling. In fact, in their paper, the team also briefly proposes a way to design a fish-eye lens for quantum entanglement experiments.

    “The fish-eye lens still has its secrets, and remarkable physics buried in it,” Perczel says. “But now it’s making an appearance in quantum technologies where it turns out this lens could be really useful for entangling distant quantum bits, which is the basic building block for building any useful quantum computer or quantum information processing device.”

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition


     
  • richardmitnick 7:10 am on September 4, 2018
    Tags: "The Future of Nuclear Energy in a Carbon-Constrained World", A deeply decarbonized energy future, MIT, Nuclear energy

    From MIT News: “MIT Energy Initiative study reports on the future of nuclear energy” 

    September 4, 2018
    Francesca McCaffrey | MIT Energy Initiative

    Image: Christine Daniloff, MIT

    Findings suggest new policy models and cost-cutting technologies could help nuclear play vital role in climate solutions.

    How can the world achieve the deep carbon emissions reductions that are necessary to slow or reverse the impacts of climate change? The authors of a new MIT study say that unless nuclear energy is meaningfully incorporated into the global mix of low-carbon energy technologies, the challenge of climate change will be much more difficult and costly to solve. For nuclear energy to take its place as a major low-carbon energy source, however, issues of cost and policy need to be addressed.

    In “The Future of Nuclear Energy in a Carbon-Constrained World,” released by the MIT Energy Initiative (MITEI) on Sept. 3, the authors analyze the reasons for the current global stall of nuclear energy capacity — which accounts for only 5 percent of global primary energy production — and discuss measures that could be taken to arrest and reverse that trend.

    The study group, led by MIT researchers in collaboration with colleagues from Idaho National Laboratory and the University of Wisconsin at Madison, is presenting its findings and recommendations at events in London, Paris, and Brussels this week, followed by events on Sept. 25 in Washington, and on Oct. 9 in Tokyo. MIT graduate and undergraduate students and postdocs, as well as faculty from Harvard University and members of various think tanks, also contributed to the study as members of the research team.

    “Our analysis demonstrates that realizing nuclear energy’s potential is essential to achieving a deeply decarbonized energy future in many regions of the world,” says study co-chair Jacopo Buongiorno, the TEPCO Professor and associate department head of the Department of Nuclear Science and Engineering at MIT. He adds, “Incorporating new policy and business models, as well as innovations in construction that may make deployment of cost-effective nuclear power plants more affordable, could enable nuclear energy to help meet the growing global demand for energy generation while decreasing emissions to address climate change.”

    The study team notes that the electricity sector in particular is a prime candidate for deep decarbonization. Global electricity consumption is on track to grow 45 percent by 2040, and the team’s analysis shows that the exclusion of nuclear from low-carbon scenarios could cause the average cost of electricity to escalate dramatically.

    “Understanding the opportunities and challenges facing the nuclear energy industry requires a comprehensive analysis of technical, commercial, and policy dimensions,” says Robert Armstrong, director of MITEI and the Chevron Professor of Chemical Engineering. “Over the past two years, this team has examined each issue, and the resulting report contains guidance policymakers and industry leaders may find valuable as they evaluate options for the future.”

    The report discusses recommendations for nuclear plant construction, current and future reactor technologies, business models and policies, and reactor safety regulation and licensing. The researchers find that changes in reactor construction are needed to usher in an era of safer, more cost-effective reactors, including proven construction management practices that can keep nuclear projects on time and on budget.

    “A shift towards serial manufacturing of standardized plants, including more aggressive use of fabrication in factories and shipyards, can be a viable cost-reduction strategy in countries where the productivity of the traditional construction sector is low,” says MIT visiting research scientist David Petti, study executive director and Laboratory Fellow at the Idaho National Laboratory. “Future projects should also incorporate reactor designs with inherent and passive safety features.”

    These safety features could include core materials with high chemical and physical stability and engineered safety systems that require limited or no emergency AC power and minimal external intervention. Features like these can reduce the probability of severe accidents occurring and mitigate offsite consequences in the event of an incident. Such designs can also ease the licensing of new plants and accelerate their global deployment.

    “The role of government will be critical if we are to take advantage of the economic opportunity and low-carbon potential that nuclear has to offer,” says John Parsons, study co-chair and senior lecturer at MIT’s Sloan School of Management. “If this future is to be realized, government officials must create new decarbonization policies that put all low-carbon energy technologies (i.e. renewables, nuclear, fossil fuels with carbon capture) on an equal footing, while also exploring options that spur private investment in nuclear advancement.”

    The study lays out detailed options for government support of nuclear. For example, the authors recommend that policymakers should avoid premature closures of existing plants, which undermine efforts to reduce emissions and increase the cost of achieving emission reduction targets. One way to avoid these closures is the implementation of zero-emissions credits — payments made to electricity producers where electricity is generated without greenhouse gas emissions — which the researchers note are currently in place in New York, Illinois, and New Jersey.

    Another suggestion from the study is that the government support development and demonstration of new nuclear technologies through the use of four “levers”: funding to share regulatory licensing costs; funding to share research and development costs; funding for the achievement of specific technical milestones; and funding for production credits to reward successful demonstration of new designs.

    The study includes an examination of the current nuclear regulatory climate, both in the United States and internationally. While the authors note that significant social, political, and cultural differences may exist among many of the countries in the nuclear energy community, they say that the fundamental basis for assessing the safety of nuclear reactor programs is fairly uniform, and should be reflected in a series of basic aligned regulatory principles. They recommend regulatory requirements for advanced reactors be coordinated and aligned internationally to enable international deployment of commercial reactor designs, and to standardize and ensure a high level of safety worldwide.

    The study concludes with an emphasis on the urgent need for both cost-cutting advancements and forward-thinking policymaking to make the future of nuclear energy a reality.

    “The Future of Nuclear Energy in a Carbon-Constrained World” is the eighth in the “Future of…” series of studies that are intended to serve as guides to researchers, policymakers, and industry. Each report explores the role of technologies that might contribute at scale in meeting rapidly growing global energy demand in a carbon-constrained world. Nuclear power was the subject of the first of these interdisciplinary studies, with the 2003 “Future of Nuclear Power” report (an update was published in 2009). The series has also included a study on the future of the nuclear fuel cycle. Other reports in the series have focused on carbon dioxide sequestration, natural gas, the electric grid, and solar power. These comprehensive reports are written by multidisciplinary teams of researchers. The research is informed by a distinguished external advisory committee.

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition


     
  • richardmitnick 4:40 pm on August 24, 2018
    Tags: MIT

    From MIT: “Pushing the plasma density limit” 

    Seung Gyou Baek and his colleagues performed experiments on the Alcator C-Mod tokamak to demonstrate how microwaves can be used to overcome barriers to steady-state fusion reactor operation. Photo: Paul Rivenberg/PSFC

    August 23, 2018
    Paul Rivenberg | Plasma Science and Fusion Center

    For decades, researchers have been exploring ways to replicate on Earth the physical process of fusion that occurs naturally in the sun and other stars. Confined by its own strong gravitational field, the sun’s burning plasma is a sphere of fusing particles, producing the heat and light that make life possible on Earth. But the path to creating a commercially viable fusion reactor, which would provide the world with a virtually endless source of clean energy, is filled with challenges.

    Researchers have focused on the tokamak, a device that heats and confines turbulent plasma fuel in a donut-shaped chamber long enough to create fusion. Because plasma responds to magnetic fields, the torus is wrapped in magnets, which guide the fusing plasma particles around the toroidal chamber and away from the walls. Tokamaks have been able to sustain these reactions only in short pulses. To be a practical source of energy, they will need to operate in a steady state, around the clock.

    Researchers at MIT’s Plasma Science and Fusion Center (PSFC) have now demonstrated how microwaves can be used to overcome barriers to steady-state tokamak operation. In experiments performed on MIT’s Alcator C-Mod tokamak before it ended operation in September 2016, research scientist Seung Gyou Baek and his colleagues studied a method of driving current to heat the plasma called Lower Hybrid Current Drive (LHCD).

    Alcator C-Mod tokamak at MIT, no longer in operation

    The technique generates plasma current by launching microwaves into the tokamak, pushing the electrons in one direction — a prerequisite for steady-state operation.

    Furthermore, the strength of the Alcator magnets has allowed researchers to investigate LHCD at a plasma density high enough to be relevant for a fusion reactor. The encouraging results of their experiments have been published in Physical Review Letters.

    Pioneering LHCD

    “The conventional way of running a tokamak uses a central solenoid to drive the current inductively,” Baek says, referring to the magnetic coil that fills the center of the torus. “But that inherently restricts the duration of the tokamak pulse, which in turn limits the ability to scale the tokamak into a steady-state power reactor.”

    Baek and his colleagues believe LHCD is the solution to this problem.

    MIT scientists have pioneered LHCD since the 1970s, using a series of “Alcator” tokamaks known for their compact size and high magnetic fields. On Alcator C-Mod, LHCD was found to be efficient for driving currents at low density, demonstrating plasma current could be sustained non-inductively. However, researchers discovered that as they raised the density in these experiments to the higher levels necessary for steady-state operation, the effectiveness of LHCD to generate plasma current disappeared.

    This fall-off in effectiveness as density increased was first studied on Alcator C-Mod by research scientist Gregory Wallace.

    “He measured the fall-off to be much faster than expected, which was not predicted by theory,” Baek explains. “The last decade people have been trying to understand this, because unless this problem is solved you can’t really use this in a reactor.”

    Researchers needed to find a way to boost effectiveness and overcome the LHCD density limit. Finding the answer would require a close examination of how lower hybrid (LH) waves respond to the tokamak environment.

    Driving the current

    Lower hybrid waves drive plasma current by transferring their momentum and energy to electrons in the plasma.

    Head of the PSFC’s Physics Theory and Computation Division, senior research scientist Paul Bonoli compares the process to surfing.

    “You are on a surf board and you have a wave come by. If you just sit there the wave will kind of go by you,” Bonoli says. “But if you start paddling, and you get near the same speed as the wave, the wave picks you up and starts transferring energy to the surf board. Well, if you inject radio waves, like LH waves, that are moving at velocities near the speed of the particles in the plasma, the waves start to give up their energy to these particles.”
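
    In plasma physics terms, the surfing picture is the standard wave-particle resonance condition: a lower hybrid wave gives up momentum to electrons whose velocity along the magnetic field roughly matches the wave’s parallel phase velocity,

    v_parallel ≈ ω / k_parallel,

    where ω is the wave frequency and k_parallel its wavenumber along the field. Electrons moving slightly slower than the wave are accelerated by it, which is how LHCD drives a net current.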

    Temperatures in today’s tokamaks — including C-Mod — are not high enough to provide good matching conditions for the wave to transfer all its momentum to the plasma particles on the first pass from the antenna, which launches the waves to the core plasma. Consequently, researchers noticed, the injected microwave travels through the core of the plasma and beyond, eventually interacting multiple times with the edge, where its power dissipates, particularly when the density is high.

    Exploring the scrape-off layer

    Baek describes this edge as a boundary area outside the main core of the plasma where, in order to control the plasma, researchers can drain — or “scrape-off” — heat, particles, and impurities through a divertor. This edge has turbulence, which, at higher densities, interacts with the injected microwaves, scattering them, and dissipating their energy.

    “The scrape-off layer is a very thin region. In the past RF scientists didn’t really pay attention to it,” Baek says. “Our experiments have shown in the last several years that interaction there can be really important in understanding the problem, and by controlling it properly you can overcome the density limit problem.”

    Baek credits extensive simulations by Wallace and PSFC research scientist Syun’ichi Shiraiwa for indicating that the scrape-off layer was most likely the location where LH wave power was being lost.

    Detailed research on the edge and scrape-off-layer conducted on Alcator C-Mod in the last two decades has documented that raising the total electrical current in the plasma narrows the width of the scrape-off-layer and reduces the level of turbulence there, suggesting that it may reduce or eliminate its deleterious effects on the microwaves.

    Motivated by this, PSFC researchers devised an LHCD experiment to push the total current from 500,000 amps to 1,400,000 amps, enabled by C-Mod’s high-field tokamak operation. They found that the effectiveness of LHCD to generate plasma current, which had been lost at high density, reappeared. Making the width of the turbulent scrape-off layer very narrow prevents it from dissipating the microwaves, allowing higher densities to be reached beyond the LHCD density limit.
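
    The article does not quote a scaling law, but the size of the effect can be illustrated with a deliberately simple assumption: take the scrape-off-layer width to shrink inversely with plasma current, normalized to a nominal 1 mm at 1 MA (both numbers are placeholders, not measurements from the paper).

    def sol_width_mm(i_p_megaamps, width_at_1_ma_mm=1.0):
        # Assumed ~1/I_p scaling for the scrape-off-layer width; the
        # normalization is a placeholder, not a C-Mod measurement.
        return width_at_1_ma_mm / i_p_megaamps

    for i_p in (0.5, 1.4):  # the C-Mod current scan endpoints, in megaamps
        print(f"I_p = {i_p} MA -> assumed SOL width ~ {sol_width_mm(i_p):.2f} mm")

    Under that assumption, nearly tripling the current thins the layer by almost a factor of three, consistent with the qualitative picture of a narrower, less turbulent layer that no longer scatters the waves.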

    The results from these experiments suggest a path to a steady-state fusion reactor. Baek believes they also provide additional experimental support to proposals by the PSFC to place the LHCD antenna at the high-field (inboard) side of a tokamak, near the central solenoid. Research suggests that placing it in this quiet area, as opposed to the turbulent outer midplane, would minimize destructive wave interactions in the plasma edge, while protecting the antenna and increasing its effectiveness. Principal research scientist Steven Wukitch is currently pursuing new LHCD research in this area through the PSFC’s collaboration with the DIII-D tokamak in San Diego.

    Although existing tokamaks with LHCD are not operating at the high densities of C-Mod, Baek feels that the relationship between the current drive and the scrape-off layer could be investigated on any tokamak.

    “I hope our recipe for improving LHCD performance will be explored on other machines, and that these results invigorate further research toward steady-state tokamak operation,” he says.

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition


     
  • richardmitnick 9:16 am on August 21, 2018
    Tags: Building phylogenetic trees, Chronograms, Geobiology, Investigating Earth’s earliest life, Kelsey Moore, MIT

    From MIT News: Women in STEM- “Investigating Earth’s earliest life” Kelsey Moore 

    August 18, 2018
    Fatima Husain

    Kelsey Moore. Image: Ian MacLellan

    Graduate student Kelsey Moore uses genetic and fossil evidence to study the first stages of evolution on our planet.

    In the second grade, Kelsey Moore became acquainted with geologic time. Her teachers instructed the class to unroll a giant strip of felt down a long hallway in the school. Most of the felt was solid black, but at the very end, the students caught a glimpse of red.

    That tiny red strip represented the time on Earth in which humans have lived, the teachers said. The lesson sparked Moore’s curiosity. What happened on Earth before there were humans? How could she find out?

    A little over a decade later, Moore enrolled in her first geoscience class at Smith College and discovered she now had the tools to begin to answer those very questions.

    Moore zeroed in on geobiology, the study of how the physical Earth and biosphere interact. During the first semester of her sophomore year of college, she took a class that she says “totally blew my mind.”

    “I knew I wanted to learn about Earth history. But then I took this invertebrate paleontology class and realized how much we can learn about life and how life has evolved,” Moore says. A few lectures into the semester, she mustered the courage to ask her professor, Sara Pruss in Smith’s Department of Geosciences, for a research position in the lab.

    Now a fourth-year graduate student at MIT, Moore works in the geobiology lab of Associate Professor Tanja Bosak in MIT’s Department of Earth, Atmospheric, and Planetary Sciences. In addition to carrying out her own research, Moore, who is also a Graduate Resident Tutor in the Simmons Hall undergraduate dorm, makes it a priority to help guide the lab’s undergraduate researchers and teach them the techniques they need to know.

    Time travel

    “We have a natural curiosity about how we got here, and how the Earth became what it is. There’s so much unknown about the early biosphere on Earth when you go back 2 billion, 3 billion, 4 billion years,” Moore says.

    Moore studies early life on Earth by focusing on ancient microbes from the Proterozoic, the period of Earth’s history that spans 2.5 billion to 542 million years ago — between the time when oxygen began to appear in the atmosphere up until the advent and proliferation of complex life. Early in her graduate studies, Moore and Bosak collaborated with Greg Fournier, the Cecil and Ida Green Assistant Professor of Geobiology, on research tracking cyanobacterial evolution. Their research is supported by the Simons Collaboration on the Origins of Life.

    An image of Cyanobacteria, Tolypothrix

    The question of when cyanobacteria gained the ability to perform oxygenic photosynthesis, which produces oxygen and is how many plants on Earth today get their energy, is still under debate. To track cyanobacterial evolution, MIT researchers draw from genetics and micropaleontology. Moore works on molecular clock models, which track genetic mutations over time to measure evolutionary divergence in organisms.

    Clad with a white lab coat, lab glasses, and bright purple gloves, Moore sifts through multiple cyanobacteria under a microscope to find modern analogs to ancient cyanobacterial fossils. The process can be time-consuming.

    “I do a lot of microscopy,” Moore says with a laugh. Once she’s identified an analog, Moore cultures that particular type of cyanobacteria, a process which can sometimes take months. After the strain is enriched and cultured, Moore extracts DNA from the cyanobacteria. “We sequence modern organisms to get their genomes, reconstruct them, and build phylogenetic trees,” Moore says.

    By tying information together from ancient fossils and modern analogs using molecular clocks, Moore hopes to build a chronogram — a type of phylogenetic tree with a time component that eventually traces back to when cyanobacteria evolved the ability to split water and produce oxygen.
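
    The arithmetic at the heart of a molecular clock is simple, even though calibrating it is not. Under a strict clock, two lineages that diverged a time t ago accumulate a genetic distance d ≈ 2μt, since each lineage mutates independently at rate μ, so t = d / (2μ). A minimal sketch with placeholder numbers, not the lab’s pipeline:

    def divergence_time_years(d, mu):
        # d: substitutions per site between two taxa.
        # mu: assumed substitution rate, in substitutions per site per year.
        return d / (2.0 * mu)

    d = 0.8      # placeholder pairwise genetic distance
    mu = 2e-10   # placeholder rate; fossil calibrations pin this down in practice
    print(f"~{divergence_time_years(d, mu) / 1e9:.1f} billion years since divergence")

    Fossil-calibrated chronograms refine this by fitting rates across a whole tree rather than a single pair, which is why matching modern analogs to ancient fossils matters so much.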

    Moore also studies the process of fossilization, on Earth and potentially other planets. She is collaborating with researchers at NASA’s Jet Propulsion Laboratory to help them prepare for the upcoming Mars 2020 rover mission.

    “We’re trying to analyze fossils on Earth to get an idea for how we’re going to look at whatever samples get brought back from Mars, and then to also understand how we can learn from other planets and potentially other life,” Moore says.

    After MIT, Moore hopes to continue research, pursue postdoctoral fellowships, and eventually teach.

    “I really love research. So why stop? I’m going to keep going,” Moore says. She says she wants to teach in an institution that emphasizes giving research opportunities to undergraduate students.

    “Undergrads can be overlooked, but they’re really intelligent people and they’re budding scientists,” Moore says. “So being able to foster that and to see them grow and trust that they are capable in doing research, I think, is my calling.”

    Geology up close

    To study ancient organisms and find fossils, Moore has traveled across the world, to Shark Bay in Australia, Death Valley in the United States, and Bermuda.

    “In order to understand the rocks, you really have to get your nose on the rocks. Go and look at them, and be there. You have to go and stand in the tidal pools and see what’s happening — watch the air bubbles from the cyanobacteria and see them make oxygen,” Moore says. “Those kinds of things are really important in order to understand and fully wrap your brain around how important those interactions are.”

    And in the field, Moore says, researchers have to “roll with the punches.”

    “You don’t have a nice, beautiful, pristine lab set up with all the tools and equipment that you need. You just can’t account for everything,” Moore says. “You have to do what you can with the tools that you have.”

    Mentorship

    As a Graduate Resident Tutor, Moore helps to create supportive living environments for the undergraduate residents of Simmons Hall.

    Each week, she hosts a study break in her apartment in Simmons for her cohort of students — complete with freshly baked treats. “[Baking] is really relaxing for me,” Moore says. “It’s therapeutic.”

    “I think part of the reason I love baking so much is that it’s my creative outlet,” she says. “I know that a lot of people describe baking as like chemistry. But I think you have the opportunity to be more creative and have more fun with it. The creative side of it is something that I love, that I crave outside of research.”

    Part of Moore’s determination to research, trek out in the field, and mentor undergraduates draws from her “biggest science inspiration” — her mother, Michele Moore, a physics professor at Spokane Falls Community College in Spokane, Washington.

    “She was a stay-at-home mom my entire childhood. And then when I was in middle school, she decided to go and get a college degree,” Moore says. When Moore started high school, her mother earned her bachelor’s degree in physics. Then, when Moore started college, her mother earned her PhD. “She was sort of one step ahead of me all the time, and she was a big inspiration for me and gave me the confidence to be a woman in science.”

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition


     
  • richardmitnick 9:49 am on August 20, 2018
    Tags: MIT

    From MIT News: “Light from ancient quasars helps confirm quantum entanglement” 

    August 19, 2018
    Jennifer Chu

    The quasar dates back to less than one billion years after the big bang. Image: NASA/ESA/G.Bacon, STScI


    Results are among the strongest evidence yet for “spooky action at a distance.”

    One of the two mobile receiving stations for entangled photons, operated by quantum physicists of the Austrian Academy of Sciences (OeAW) on La Palma to measure quantum entanglement. Copyright: Dominick Rauch / OeAW

    Last year, physicists at MIT, the University of Vienna, and elsewhere provided strong support for quantum entanglement, the seemingly far-out idea that two particles, no matter how distant from each other in space and time, can be inextricably linked, in a way that defies the rules of classical physics.

    Take, for instance, two particles sitting on opposite edges of the universe. If they are truly entangled, then according to the theory of quantum mechanics their physical properties should be related in such a way that any measurement made on one particle should instantly convey information about any future measurement outcome of the other particle — correlations that Einstein skeptically saw as “spooky action at a distance.”

    In the 1960s, the physicist John Bell calculated a theoretical limit beyond which such correlations must have a quantum, rather than a classical, explanation.

    But what if such correlations were the result not of quantum entanglement, but of some other hidden, classical explanation? Such “what-ifs” are known to physicists as loopholes to tests of Bell’s inequality, the most stubborn of which is the “freedom-of-choice” loophole: the possibility that some hidden, classical variable may influence the measurement that an experimenter chooses to perform on an entangled particle, making the outcome look quantumly correlated when in fact it isn’t.
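
    Bell’s limit is usually tested in its CHSH form: each side measures with one of two settings, and any classical, locally causal model must satisfy |S| ≤ 2, while quantum mechanics predicts up to 2√2 ≈ 2.83. A textbook sketch for polarization-entangled photons (the standard result, not the paper’s analysis code):

    import numpy as np

    def E(a, b):
        # Quantum prediction for the polarization correlation between
        # entangled photons measured at polarizer angles a and b (radians).
        return np.cos(2 * (a - b))

    a, a2 = 0.0, np.pi / 4             # Alice's two polarizer settings
    b, b2 = np.pi / 8, 3 * np.pi / 8   # Bob's two settings
    S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
    print(f"S = {S:.3f}  (classical bound 2, quantum max {2 * np.sqrt(2):.3f})")

    The experiments described here estimate S from photon counts; values significantly above 2, as the team found, rule out classical explanations up to the stated loopholes.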

    Last February, the MIT team and their colleagues significantly constrained the freedom-of-choice loophole [Physical Review Letters] by using 600-year-old starlight to decide what properties of two entangled photons to measure. Their experiment proved that, if a classical mechanism caused the correlations they observed, it would have to have been set in motion more than 600 years ago, before the stars’ light was first emitted and long before the actual experiment was even conceived.

    Now, in a paper published today in Physical Review Letters, the same team has vastly extended the case for quantum entanglement and further restricted the options for the freedom-of-choice loophole. The researchers used distant quasars, one of which emitted its light 7.8 billion years ago and the other 12.2 billion years ago, to determine the measurements to be made on pairs of entangled photons. They found correlations among more than 30,000 pairs of photons, to a degree that far exceeded the limit that Bell originally calculated for a classically based mechanism.

    “If some conspiracy is happening to simulate quantum mechanics by a mechanism that is actually classical, that mechanism would have had to begin its operations — somehow knowing exactly when, where, and how this experiment was going to be done — at least 7.8 billion years ago. That seems incredibly implausible, so we have very strong evidence that quantum mechanics is the right explanation,” says co-author Alan Guth, the Victor F. Weisskopf Professor of Physics at MIT.

    “The Earth is about 4.5 billion years old, so any alternative mechanism — different from quantum mechanics — that might have produced our results by exploiting this loophole would’ve had to be in place long before even there was a planet Earth, let alone an MIT,” adds David Kaiser, the Germeshausen Professor of the History of Science and professor of physics at MIT. “So we’ve pushed any alternative explanations back to very early in cosmic history.”

    Guth and Kaiser’s co-authors include Anton Zeilinger and members of his group at the Austrian Academy of Sciences and the University of Vienna, as well as physicists at Harvey Mudd College and the University of California at San Diego.

    A decision, made billions of years ago

    In 2014, Kaiser and two members of the current team, Jason Gallicchio and Andrew Friedman, proposed an experiment to produce entangled photons on Earth — a process that is fairly standard in studies of quantum mechanics. They planned to shoot each member of the entangled pair in opposite directions, toward light detectors that would also make a measurement of each photon using a polarizer. Researchers would measure the polarization, or orientation, of each incoming photon’s electric field, by setting the polarizer at various angles and observing whether the photons passed through — an outcome for each photon that researchers could compare to determine whether the particles showed the hallmark correlations predicted by quantum mechanics.

    The team added a unique step to the proposed experiment, which was to use light from ancient, distant astronomical sources, such as stars and quasars, to determine the angle at which to set each respective polarizer. As each entangled photon was in flight, heading toward its detector at the speed of light, researchers would use a telescope located at each detector site to measure the wavelength of a quasar’s incoming light. If that light was redder than some reference wavelength, the polarizer would tilt at a certain angle to make a specific measurement of the incoming entangled photon — a measurement choice that was determined by the quasar. If the quasar’s light was bluer than the reference wavelength, the polarizer would tilt at a different angle, performing a different measurement of the entangled photon.
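
    The logic of the cosmic setting choice is a simple threshold. In outline (the reference wavelength and angles below are placeholders, not the experiment’s values):

    REFERENCE_NM = 700.0  # assumed dividing line between "red" and "blue"

    def polarizer_angle_deg(quasar_photon_nm, red_setting=0.0, blue_setting=22.5):
        # Redder than the reference picks one measurement setting, bluer
        # picks the other -- decided by light emitted billions of years
        # ago rather than by anything in the lab.
        return red_setting if quasar_photon_nm > REFERENCE_NM else blue_setting

    print(polarizer_angle_deg(812.0))  # a red quasar photon -> 0.0
    print(polarizer_angle_deg(540.0))  # a blue quasar photon -> 22.5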

    In their previous experiment, the team used small backyard telescopes to measure the light from stars as close as 600 light years away. In their new study, the researchers used much larger, more powerful telescopes to catch the incoming light from even more ancient, distant astrophysical sources: quasars whose light has been traveling toward the Earth for at least 7.8 billion years — objects that are incredibly far away and yet are so luminous that their light can be observed from Earth.

    Tricky timing

    On Jan. 11, 2018, “the clock had just ticked past midnight local time,” as Kaiser recalls, when about a dozen members of the team gathered on a mountaintop in the Canary Islands and began collecting data from two large, 4-meter-wide telescopes: the William Herschel Telescope and the Telescopio Nazionale Galileo, both situated on the same mountain and separated by about a kilometer.

    One telescope focused on a particular quasar, while the other telescope looked at another quasar in a different patch of the night sky. Meanwhile, researchers at a station located between the two telescopes created pairs of entangled photons and beamed particles from each pair in opposite directions toward each telescope.

    In the fraction of a second before each entangled photon reached its detector, the instrumentation determined whether a single photon arriving from the quasar was more red or blue, a measurement that then automatically adjusted the angle of a polarizer that ultimately received and detected the incoming entangled photon.

    “The timing is very tricky,” Kaiser says. “Everything has to happen within very tight windows, updating every microsecond or so.”

    Demystifying a mirage

    The researchers ran their experiment twice, each run lasting around 15 minutes and using a different pair of quasars. The two runs yielded 17,663 and 12,420 measured pairs of entangled photons, respectively. Within hours of closing the telescope domes and looking through preliminary data, the team could tell there were strong correlations among the photon pairs, beyond the limit that Bell calculated, indicating that the photons were correlated in a quantum-mechanical manner.

    Guth led a more detailed analysis to calculate the chance, however slight, that a classical mechanism might have produced the correlations the team observed.

    He calculated that, for the best of the two runs, the probability that a mechanism based on classical physics could have achieved the observed correlation was about 10 to the minus 20 — that is, about one part in one hundred billion billion, “outrageously small,” Guth says. For comparison, researchers have estimated the probability that the discovery of the Higgs boson was just a chance fluke to be about one in a billion.

    “We certainly made it unbelievably implausible that a local realistic theory could be underlying the physics of the universe,” Guth says.

    And yet, there is still a small opening for the freedom-of-choice loophole. To limit it even further, the team is entertaining ideas of looking even further back in time, to use sources such as cosmic microwave background photons that were emitted as leftover radiation immediately following the Big Bang, though such experiments would present a host of new technical challenges.

    “It is fun to think about new types of experiments we can design in the future, but for now, we are very pleased that we were able to address this particular loophole so dramatically. Our experiment with quasars puts extremely tight constraints on various alternatives to quantum mechanics. As strange as quantum mechanics may seem, it continues to match every experimental test we can devise,” Kaiser says.

    This research was supported in part by the Austrian Academy of Sciences, the Austrian Science Fund, the U.S. National Science Foundation, and the U.S. Department of Energy.

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition


    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.


     
  • richardmitnick 11:00 am on August 16, 2018 Permalink | Reply
    Tags: MIT, Sprawling galaxy cluster found hiding in plain sight

    From MIT News: “Sprawling galaxy cluster found hiding in plain sight” 

    MIT News

    MIT Widget

    From MIT News

    August 16, 2018
    Jennifer Chu

    An X-ray image (in blue) with a zoomed-in optical image (gold and brown) showing the central galaxy of a hidden cluster, which harbors a supermassive black hole.
    Image: Taweewat Somboonpanyakul

    Bright light from black hole in a feeding frenzy had been obscuring surrounding galaxies.

    MIT scientists have uncovered a sprawling new galaxy cluster hiding in plain sight. The cluster, which sits a mere 2.4 billion light years from Earth, is made up of hundreds of individual galaxies and surrounds an extremely active supermassive black hole, or quasar.

    The central quasar goes by the name PKS1353-341 and is intensely bright — so bright that for decades astronomers observing it in the night sky have assumed that the quasar was quite alone in its corner of the universe, shining out as a solitary light source from the center of a single galaxy.

    But as the MIT team reports today in The Astrophysical Journal, the quasar’s light is so bright that it has obscured hundreds of galaxies clustered around it.

    In their new analysis, the researchers estimate that there are hundreds of individual galaxies in the cluster, which, all told, is about as massive as 690 trillion suns. Our Milky Way galaxy, for comparison, weighs in at around 400 billion solar masses, making the newfound cluster roughly 1,700 times as massive as our own galaxy.

    The team also calculates that the quasar at the center of the cluster is 46 billion times brighter than the sun. Its extreme luminosity is likely the result of a temporary feeding frenzy: As an immense disk of material swirls around the quasar, big chunks of matter from the disk are falling in and feeding it, causing the black hole to radiate huge amounts of energy out as light.

    “This might be a short-lived phase that clusters go through, where the central black hole has a quick meal, gets bright, and then fades away again,” says study author Michael McDonald, assistant professor of physics in MIT’s Kavli Institute for Astrophysics and Space Research. “This could be a blip that we just happened to see. In a million years, this might look like a diffuse fuzzball.”

    McDonald and his colleagues believe the discovery of this hidden cluster shows there may be other similar galaxy clusters hiding behind extremely bright objects that astronomers have miscatalogued as single light sources. The researchers are now looking for more hidden galaxy clusters, which could be important clues to estimating how much matter there is in the universe and how fast the universe is expanding.

    The paper’s co-authors include lead author and MIT graduate student Taweewat Somboonpanyakul, Henry Lin of Princeton University, Brian Stalder of the Large Synoptic Survey Telescope, and Antony Stark of the Harvard-Smithsonian Center for Astrophysics.

    Fluffs or points

    In 2012, McDonald and others discovered the Phoenix cluster, one of the most massive and luminous galaxy clusters in the universe. The mystery to McDonald was why this cluster, which was so intensely bright and in a region of the sky that is easily observable, hadn’t been found before.

    “We started asking ourselves why we had not found it earlier, because it’s very extreme in its properties and very bright,” McDonald says. “It’s because we had preconceived notions of what a cluster should look like. And this didn’t conform to that, so we missed it.”

    For the most part, he says, astronomers have assumed that galaxy clusters look “fluffy,” giving off a very diffuse signal in the X-ray band, unlike brighter, point-like sources, which have been interpreted as extremely active quasars or black holes.

    “The images are either all points, or fluffs, and the fluffs are these giant million-light-year balls of hot gas that we call clusters, and the points are black holes that are accreting gas and glowing as this gas spirals in,” McDonald says. “This idea that you could have a rapidly accreting black hole at the center of a cluster — we didn’t think that was something that happened in nature.”

    But the Phoenix discovery proved that galaxy clusters could indeed host immensely active black holes, prompting McDonald to wonder: Could there be other nearby galaxy clusters that were simply misidentified?

    An extreme eater

    To answer that question, the researchers set up a survey named CHiPS, for Clusters Hiding in Plain Sight, which is designed to reevaluate X-ray images taken in the past.

    “We start from archival data of point sources, or objects that were super bright in the sky,” Somboonpanyakul explains. “We are looking for point sources inside fluffy things.”

    For every point source that had previously been identified, the researchers noted its coordinates and then studied it more directly using the Magellan Telescope, a powerful optical telescope that sits in the mountains of Chile.

    Carnegie 6.5-meter Magellan Baade and Clay Telescopes, located at Carnegie’s Las Campanas Observatory in Chile, over 2,500 m (8,200 ft) high.

    If they observed a higher-than-expected number of galaxies surrounding the point source (a sign that the X-ray emission may stem from a cluster of galaxies), the researchers looked at the source again, using NASA’s space-based Chandra X-ray Observatory, to identify an extended, diffuse source around the main point source.

    NASA/Chandra X-ray Telescope
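    Taken together, the CHiPS selection logic amounts to a two-stage filter over archival point sources: an optical overdensity check with Magellan, then an extended-emission check with Chandra. The sketch below is a hypothetical outline of that flow; the function names, thresholds, and example values are assumptions for illustration, not the survey’s actual software.

```python
# Hypothetical outline of the CHiPS two-stage follow-up logic.
# All names, thresholds, and values here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PointSource:
    name: str
    ra_deg: float            # right ascension of the archival point source
    dec_deg: float           # declination
    n_nearby_galaxies: int   # counted in Magellan optical follow-up
    has_extended_xray: bool  # diffuse emission seen in Chandra follow-up

EXPECTED_FIELD_GALAXIES = 10  # assumed background count for a blank field

def is_candidate_cluster(src: PointSource) -> bool:
    """Stage 1: optical overdensity. Stage 2: extended X-ray emission."""
    overdense = src.n_nearby_galaxies > EXPECTED_FIELD_GALAXIES
    return overdense and src.has_extended_xray

sources = [
    PointSource("PKS1353-341", 209.0, -34.3, 101, True),  # values illustrative
    PointSource("lone-quasar", 150.0, 2.2, 7, False),
]
clusters = [s.name for s in sources if is_candidate_cluster(s)]
print(clusters)  # -> ['PKS1353-341']
```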

    “Some 90 percent of these sources turned out to not be clusters,” McDonald says. “But the fun thing is, the small number of things we are finding are sort of rule-breakers.”

    The new paper reports the first results of the CHiPS survey, which has so far confirmed one new galaxy cluster hosting an extremely active central black hole.

    “The brightness of the black hole might be related to how much it’s eating,” McDonald says. “This is thousands of times brighter than a typical black hole at the center of a cluster, so it’s very extreme in its feeding. We have no idea how long this has been going on or will continue to go on. Finding more of these things will help us understand, is this an important process, or just a weird thing that there’s only one of in the universe.”

    The team plans to comb through more X-ray data in search of galaxy clusters that might have been missed the first time around.

    “If the CHiPS survey can find enough of these, we will be able to pinpoint the specific rate of accretion onto the black hole where it switches from generating primarily radiation to generating mechanical energy, the two primary forms of energy output from black holes,” says Brian McNamara, professor of physics and astronomy at the University of Waterloo, who was not involved in the research. “This particular object is interesting because it bucks the trend. Either the central supermassive black hole’s mass is much lower than expected, or the structure of the accretion flow is abnormal. The oddballs are the ones that teach us the most.”

    In addition to shedding light on a black hole’s feeding, or accretion behavior, the detection of more galaxy clusters may help to estimate how fast the universe is expanding.

    “Take for instance, the Titanic,” McDonald says. “If you know where the two biggest pieces landed, you could map them backward to see where the ship hit the iceberg. In the same way, if you know where all the galaxy clusters are in the universe, which are the biggest pieces in the universe, and how big they are, and you have some information about what the universe looked like in the beginning, which we know from the Big Bang, then you could map out how the universe expanded.”

    This research was supported, in part, by the Kavli Research Investment Fund at MIT, and by NASA.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition


    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.


     
  • richardmitnick 7:46 am on August 16, 2018 Permalink | Reply
    Tags: ASTERIA cubesat, MIT

    From MIT News and NASA/JPL-Caltech: “Tiny ASTERIA satellite achieves a first for CubeSats”

    MIT News
    MIT Widget


    From MIT News and NASA/JPL-Caltech

    August 15, 2018
    Lauren Hinkel
    Mary Knapp

    Members of the ASTERIA team prepare the petite satellite for its journey to space. Photo courtesy of NASA/JPL-Caltech

    MIT/NASA JPL/Caltech ASTERIA cubesat

    This plot shows the transit light curve of 55 Cancri e observed by ASTERIA. Image courtesy of NASA/JPL-Caltech

    Measurement of an exoplanet transit demonstrates proof of concept that small spacecraft can perform high-precision photometry.

    Planet transit. NASA/Ames

    A miniature satellite called ASTERIA (Arcsecond Space Telescope Enabling Research in Astrophysics) has measured the transit of a previously discovered super-Earth exoplanet, 55 Cancri e. This finding shows that miniature satellites like ASTERIA are capable of making sensitive detections of exoplanets via the transit method.

    While observing 55 Cancri e, which is known to transit, ASTERIA measured a minuscule change in brightness, about 0.04 percent, when the super-Earth crossed in front of its star. This transit measurement is the first of its kind for CubeSats (the class of satellites to which ASTERIA belongs), which are about the size of a briefcase and hitch a ride to space as secondary payloads on rockets used for larger spacecraft.
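    The 0.04 percent dip pins down the planet’s size directly: for a central transit, the fractional drop in brightness is approximately (Rp/Rs)², the ratio of the planet’s disk area to the star’s. A short Python check, taking the stellar radius of 55 Cancri as roughly 0.94 solar radii (an assumed literature value, not a figure from this article), recovers the familiar super-Earth scale of 55 Cancri e.

```python
import math

# Transit depth relates planet radius Rp to stellar radius Rs:
#   depth ~ (Rp / Rs)**2   (central transit, ignoring limb darkening)
depth = 0.0004                  # 0.04 percent, as measured by ASTERIA

R_SUN_IN_EARTH_RADII = 109.2    # solar radius expressed in Earth radii
r_star_solar = 0.94             # assumed radius of 55 Cancri, in solar radii

radius_ratio = math.sqrt(depth)                      # Rp / Rs ~ 0.02
r_planet_earth = radius_ratio * r_star_solar * R_SUN_IN_EARTH_RADII

print(f"Rp/Rs ~ {radius_ratio:.3f}")
print(f"Planet radius ~ {r_planet_earth:.1f} Earth radii")  # ~2, a super-Earth
```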

    The ASTERIA team presented updates and lessons learned about the mission at the Small Satellite Conference in Logan, Utah, last week.

    The ASTERIA project is a collaboration between MIT and NASA’s Jet Propulsion Laboratory (JPL) in Pasadena, California, funded through JPL’s Phaeton Program. The project started in 2010 as an undergraduate class project in 16.83/12.43 (Space Systems Engineering), involving a technology demonstration of astrophysical measurements using a CubeSat, with a primary goal of training early-career engineers.

    The ASTERIA mission — of which Sara Seager, the Class of 1941 Professor of Planetary Sciences in MIT’s Department of Earth, Atmospheric and Planetary Sciences, is the principal investigator — was designed to demonstrate key technologies, including very stable pointing and thermal control for making extremely precise measurements of stellar brightness in a tiny satellite. Earlier this year, ASTERIA achieved pointing stability of 0.5 arcseconds and thermal stability of 0.01 degrees Celsius. These technologies are important for precision photometry, i.e., the measurement of stellar brightness over time.

    Precision photometry, in turn, provides a way to study stellar activity, transiting exoplanets, and other astrophysical phenomena.

    Several MIT alumni have been involved in ASTERIA’s development from the beginning, including Matthew W. Smith PhD ’14, Christopher Pong ScD ’14, Alessandra Babuscia PhD ’12, and Mary Knapp PhD ’18. Brice-Olivier Demory, a professor at the University of Bern and a former EAPS postdoc who is also a member of the ASTERIA science team, performed the data reduction that revealed the transit.

    ASTERIA’s success demonstrates that CubeSats can perform big science in a small package. This finding has earned ASTERIA the honor of “Mission of the Year,” awarded at the SmallSat conference. The honor is presented annually to a mission that has demonstrated a significant improvement in the capability of small satellites, which weigh less than 150 kilograms. Eligible missions must have launched, established communication, and acquired results on orbit after Jan. 1, 2017.

    Now that ASTERIA has proven that it can measure exoplanet transits, it will continue observing two bright, nearby stars to search for previously unknown transiting exoplanets. Additional funding for ASTERIA operations was provided by the Heising-Simons Foundation.

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition


    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.


     
  • richardmitnick 12:02 pm on August 14, 2018 Permalink | Reply
    Tags: MIT, Optics in cameras

    From MIT News: “Novel optics for ultrafast cameras create new possibilities for imaging” 

    MIT News
    MIT Widget

    From MIT News

    August 13, 2018
    Rob Matheson

    MIT researchers have developed novel photography optics, dubbed “time-folded optics,” that capture images based on the timing of light reflecting inside the lens, instead of the traditional approach that relies on the arrangement of optical components. The invention opens doors to new capabilities for ultrafast time- or depth-sensitive cameras. Courtesy of the researchers.

    Technique can capture a scene at multiple depths with one shutter click — no zoom lens needed.

    The new optics architecture includes a set of semireflective parallel mirrors that reduce, or “fold,” the focal length every time the light reflects between the mirrors. By placing this set of mirrors between the lens and sensor, the researchers condensed the optical path by an order of magnitude while still capturing an image of the scene.

    In their study [Nature Photonics], the researchers demonstrate three uses for time-folded optics for ultrafast cameras and other depth-sensitive imaging devices. These cameras, also called “time-of-flight” cameras, measure the time it takes for a pulse of light to reflect off a scene and return to a sensor, in order to estimate the depth of the 3-D scene.

    The paper’s first author is Barmak Heshmat, a research scientist at the MIT Media Lab. His co-authors are Matthew Tancik, a graduate student in the MIT Computer Science and Artificial Intelligence Laboratory; Guy Satat, a PhD student in the Camera Culture Group at the Media Lab; and Ramesh Raskar, an associate professor of media arts and sciences and director of the Camera Culture Group.

    Folding the optical path into time

    The researchers’ system consists of a component that projects a femtosecond (quadrillionth of a second) laser pulse into a scene to illuminate target objects. Traditional photography optics change the shape of the light signal as it travels through the curved glass, and this shape change creates an image on the sensor. But with the researchers’ optics, instead of heading right to the sensor, the signal first bounces back and forth between mirrors precisely arranged to trap and reflect light. Each of these reflections is called a “round trip.” At each round trip, some light is captured by the sensor, which is programmed to image at a specific time interval — for example, a 1-nanosecond snapshot every 30 nanoseconds.

    A key innovation is that each round trip of light moves the focal point — where the sensor needs to be positioned to capture a sharp image — closer to the lens. This allows the lens assembly to be drastically condensed. Suppose a streak camera needs to capture an image that would ordinarily require the long focal length of a traditional lens. With time-folded optics, the first round trip pulls the focal point closer to the lens by about twice the length of the mirror set, and each subsequent round trip brings the focal point closer and closer still. Depending on the number of round trips, a sensor can then be placed very near the lens.

    By placing the sensor at a precise focal point, determined by total round trips, the camera can capture a sharp final image, as well as different stages of the light signal, each coded at a different time, as the signal changes shape to produce the image. (The first few shots will be blurry, but after several round trips the target object will come into focus.)

    In their paper, the researchers demonstrate this by imaging a femtosecond light pulse through a mask engraved with “MIT,” set 53 centimeters away from the lens aperture. To capture the image, the traditional 20-centimeter focal length lens would have to sit around 32 centimeters away from the sensor. The time-folded optics, however, pulled the image into focus after five round trips, with only a 3.1-centimeter lens-sensor distance.
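    The numbers in that demonstration follow from the thin-lens equation, 1/f = 1/d_o + 1/d_i: a 20-centimeter lens imaging an object 53 centimeters away needs its sensor about 32 centimeters back, which is the distance the folding eliminates. A quick check with the standard formula (a sketch, not the authors’ optical model of the folded cavity):

```python
# Thin-lens check of the demonstration's numbers:
#   1/f = 1/d_o + 1/d_i  ->  d_i = 1 / (1/f - 1/d_o)
f = 20.0    # focal length, cm
d_o = 53.0  # object (mask) distance from the lens aperture, cm

d_i = 1.0 / (1.0 / f - 1.0 / d_o)
print(f"Unfolded lens-to-sensor distance: {d_i:.1f} cm")  # ~32.1 cm

# The time-folded design reached focus after five round trips with the
# sensor only 3.1 cm from the lens, roughly a tenfold reduction.
print(f"Reduction factor: {d_i / 3.1:.1f}x")
```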

    This could be useful, Heshmat says, in designing more compact telescope lenses that capture, say, ultrafast signals from space, or in designing smaller and lighter lenses for satellites that image the Earth’s surface.

    Multizoom and multicolor

    The researchers next imaged two patterns spaced about 50 centimeters apart from each other, but each within line of sight of the camera. An “X” pattern was 55 centimeters from the lens, and a “II” pattern was 4 centimeters from the lens. By precisely rearranging the optics — in part, by placing the lens in between the two mirrors — they shaped the light in a way that each round trip created a new magnification in a single image acquisition. In that way, it’s as if the camera zooms in with each round trip. When they shot the laser into the scene, the result was two separate, focused images, created in one shot — the X pattern captured on the first round trip, and the II pattern captured on the second round trip.

    The researchers then demonstrated an ultrafast multispectral (or multicolor) camera. They designed two color-reflecting mirrors and a broadband mirror — one tuned to reflect one color, set closer to the lens, and one tuned to reflect a second color, set farther back from the lens. They imaged a mask with an “A” and a “B,” with the A illuminated by the second color and the B illuminated by the first color, both for a few tenths of a picosecond.

    When the light traveled into the camera, wavelengths of the first color immediately reflected back and forth in the first cavity, and the time was clocked by the sensor. Wavelengths of the second color, however, passed through the first cavity, into the second, slightly delaying their time to the sensor. Because the researchers knew which wavelength would hit the sensor at which time, they then overlaid the respective colors onto the image — the first wavelength was the first color, and the second was the second color. This could be used in depth-sensing cameras, which currently only record infrared, Heshmat says.
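    Because each color is delayed by a known, fixed amount in its cavity, recovering the spectrum reduces to binning photon arrival times and mapping each time window back to the wavelength that produced it. A minimal sketch, with the window boundaries and wavelength labels assumed purely for illustration:

```python
# Map sensor arrival-time windows back to colors (values are illustrative).
# Each extra cavity adds a known, fixed delay before light reaches the sensor.
TIME_WINDOWS_NS = [
    (0.0, 30.0, "color 1"),   # reflected in the first cavity: arrives first
    (30.0, 60.0, "color 2"),  # passed into the second cavity: arrives later
]

def color_for_arrival(t_ns: float) -> str:
    """Assign a detected photon to a color channel by its arrival time."""
    for start, end, color in TIME_WINDOWS_NS:
        if start <= t_ns < end:
            return color
    return "unassigned"

arrivals_ns = [3.2, 41.7, 12.9]
print([color_for_arrival(t) for t in arrivals_ns])
# -> ['color 1', 'color 2', 'color 1']
```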

    One key feature of the paper, Heshmat says, is it opens doors for many different optics designs by tweaking the cavity spacing, or by using different types of cavities, sensors, and lenses. “The core message is that when you have a camera that is fast, or has a depth sensor, you don’t need to design optics the way you did for old cameras. You can do much more with the optics by looking at them at the right time,” Heshmat says.

    This work “exploits the time dimension to achieve new functionalities in ultrafast cameras that utilize pulsed laser illumination. This opens up a new way to design imaging systems,” says Bahram Jalali, director of the Photonics Laboratory and a professor of electrical and computer engineering at the University of California at Los Angeles. “Ultrafast imaging makes it possible to see through diffusive media, such as tissue, and this work holds promise for improving medical imaging, in particular for intraoperative microscopes.”

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition


    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.


     