Tagged: MIT

  • richardmitnick 12:47 pm on January 24, 2020 Permalink | Reply
    Tags: "Using artificial intelligence to enrich digital maps", GPS, MIT, RoadTagger   

    From MIT News: “Using artificial intelligence to enrich digital maps” 

    From MIT News

    January 23, 2020
    Rob Matheson

    An AI model developed at MIT and Qatar Computing Research Institute that uses only satellite imagery to automatically tag road features in digital maps could improve GPS navigation, especially in countries with limited map data. Image: Google Maps/MIT News

    Model tags road features based on satellite images, to improve GPS navigation in places with limited map data.

    A model invented by researchers at MIT and Qatar Computing Research Institute (QCRI) that uses satellite imagery to tag road features in digital maps could help improve GPS navigation.

    Showing drivers more details about their routes can often help them navigate in unfamiliar locations. Lane counts, for instance, can enable a GPS system to warn drivers of diverging or merging lanes. Incorporating information about parking spots can help drivers plan ahead, while mapping bicycle lanes can help cyclists negotiate busy city streets. Providing updated information on road conditions can also improve planning for disaster relief.

    But creating detailed maps is an expensive, time-consuming process done mostly by big companies, such as Google, which sends vehicles around with cameras strapped to their hoods to capture video and images of an area’s roads. Combining that with other data can create accurate, up-to-date maps. Because this process is expensive, however, some parts of the world are ignored.

    A solution is to unleash machine-learning models on satellite images — which are easier to obtain and updated fairly regularly — to automatically tag road features. But roads can be occluded by, say, trees and buildings, making it a challenging task. In a paper being presented at the Association for the Advancement of Artificial Intelligence conference, the MIT and QCRI researchers describe “RoadTagger,” which uses a combination of neural network architectures to automatically predict the number of lanes and road types (residential or highway) behind obstructions.

    In testing RoadTagger on occluded roads from digital maps of 20 U.S. cities, the model counted lane numbers with 77 percent accuracy and inferred road types with 93 percent accuracy. The researchers are also planning to enable RoadTagger to predict other features, such as parking spots and bike lanes.

    “Most updated digital maps are from places that big companies care the most about. If you’re in places they don’t care about much, you’re at a disadvantage with respect to the quality of map,” says co-author Sam Madden, a professor in the Department of Electrical Engineering and Computer Science (EECS) and a researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “Our goal is to automate the process of generating high-quality digital maps, so they can be available in any country.”

    The paper’s co-authors are CSAIL graduate students Songtao He, Favyen Bastani, and Edward Park; EECS undergraduate student Satvat Jagwani; CSAIL professors Mohammad Alizadeh and Hari Balakrishnan; and QCRI researchers Sanjay Chawla, Sofiane Abbar, and Mohammad Amin Sadeghi.

    Combining CNN and GNN

    Qatar, where QCRI is based, is “not a priority for the large companies building digital maps,” Madden says. Yet, it’s constantly building new roads and improving old ones, especially in preparation for hosting the 2022 FIFA World Cup.

    “While visiting Qatar, we’ve had experiences where our Uber driver can’t figure out how to get where he’s going, because the map is so off,” Madden says. “If navigation apps don’t have the right information, for things such as lane merging, this could be frustrating or worse.”

    RoadTagger relies on a novel combination of a convolutional neural network (CNN) — commonly used for image-processing tasks — and a graph neural network (GNN). GNNs model relationships between connected nodes in a graph and have become popular for analyzing things like social networks and molecular dynamics. The model is “end-to-end,” meaning it’s fed only raw data and automatically produces output, without human intervention.

    The CNN takes as input raw satellite images of target roads. The GNN breaks the road into roughly 20-meter segments, or “tiles.” Each tile is a separate graph node, connected by lines along the road. For each node, the CNN extracts road features and shares that information with its immediate neighbors. Road information propagates along the whole graph, with each node receiving some information about road attributes in every other node. If a certain tile is occluded in an image, RoadTagger uses information from all tiles along the road to predict what’s behind the occlusion.
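
    That tile-by-tile propagation is the technical heart of the system. As a rough illustration, the hypothetical PyTorch-style sketch below encodes each tile image with a small CNN and then averages features across neighboring tiles along the road graph before making per-tile predictions; the tile image size, feature dimension, number of message-passing rounds, and GRU-style node update are assumptions made for this example, not details taken from the paper.

    import torch
    import torch.nn as nn

    class TileEncoder(nn.Module):
        """Small CNN mapping one tile image (assumed 3 x 64 x 64) to a feature vector."""
        def __init__(self, dim=128):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(64, dim)

        def forward(self, tiles):                      # tiles: (num_tiles, 3, 64, 64)
            return self.fc(self.conv(tiles).flatten(1))

    class RoadTaggerSketch(nn.Module):
        """CNN features per tile plus a few rounds of neighbor averaging along the road graph."""
        def __init__(self, dim=128, max_lanes=6, n_steps=4):
            super().__init__()
            self.encoder = TileEncoder(dim)
            self.update = nn.GRUCell(dim, dim)          # node update after each message pass
            self.lane_head = nn.Linear(dim, max_lanes)  # lane-count classifier
            self.type_head = nn.Linear(dim, 2)          # residential vs. highway
            self.n_steps = n_steps

        def forward(self, tiles, adjacency):            # adjacency: (num_tiles, num_tiles), 1 = connected
            h = self.encoder(tiles)
            degree = adjacency.sum(dim=1, keepdim=True).clamp(min=1)
            for _ in range(self.n_steps):               # propagate road information along the graph
                messages = (adjacency @ h) / degree     # mean of each tile's neighbors' features
                h = self.update(messages, h)
            return self.lane_head(h), self.type_head(h)

    In this form, a tile whose own image is occluded can still inherit lane-count evidence that has propagated from unobstructed tiles farther down the road.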

    This combined architecture represents a more human-like intuition, the researchers say. Say part of a four-lane road is occluded by trees, so certain tiles show only two lanes. Humans can easily surmise that a couple lanes are hidden behind the trees. Traditional machine-learning models — say, just a CNN — extract features only of individual tiles and most likely predict the occluded tile is a two-lane road.

    “Humans can use information from adjacent tiles to guess the number of lanes in the occluded tiles, but networks can’t do that,” He says. “Our approach tries to mimic the natural behavior of humans, where we capture local information from the CNN and global information from the GNN to make better predictions.”

    Learning weights

    To train and test RoadTagger, the researchers used a real-world map dataset, called OpenStreetMap, which lets users edit and curate digital maps around the globe. From that dataset, they collected confirmed road attributes from 688 square kilometers of maps of 20 U.S. cities — including Boston, Chicago, Washington, and Seattle. Then, they gathered the corresponding satellite images from a Google Maps dataset.

    In training, RoadTagger learns weights — which assign varying degrees of importance to features and node connections — of the CNN and GNN. The CNN extracts features from pixel patterns of tiles and the GNN propagates the learned features along the graph. From randomly selected subgraphs of the road, the system learns to predict the road features at each tile. In doing so, it automatically learns which image features are useful and how to propagate those features along the graph. For instance, if a target tile has unclear lane markings, but its neighbor tile has four lanes with clear lane markings and shares the same road width, then the target tile is likely to also have four lanes. In this case, the model automatically learns that the road width is a useful image feature, so if two adjacent tiles share the same road width, they’re likely to have the same lane count.
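
    A single training step consistent with that description might look like the following sketch; the subgraph sampler, the label encoding, and the equal weighting of the two losses are assumptions made for illustration rather than details from the paper.

    import torch.nn.functional as F

    def train_step(model, optimizer, sample_subgraph):
        """One optimization step on a randomly selected road subgraph."""
        # sample_subgraph() is assumed to return tile images, the tile adjacency matrix,
        # and per-tile labels derived from confirmed OpenStreetMap attributes.
        tiles, adjacency, lane_labels, type_labels = sample_subgraph()
        lane_logits, type_logits = model(tiles, adjacency)
        loss = (F.cross_entropy(lane_logits, lane_labels)
                + F.cross_entropy(type_logits, type_labels))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()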

    Given a road not seen in training from OpenStreetMap, the model breaks the road into tiles and uses its learned weights to make predictions. Tasked with predicting a number of lanes in an occluded tile, the model notes that neighboring tiles have matching pixel patterns and, therefore, a high likelihood to share information. So, if those tiles have four lanes, the occluded tile must also have four.

    In another result, RoadTagger accurately predicted lane numbers in a dataset of synthesized, highly challenging road disruptions. As one example, an overpass with two lanes covered a few tiles of a target road with four lanes. The model detected mismatched pixel patterns of the overpass, so it ignored the two lanes over the covered tiles, accurately predicting four lanes were underneath.

    The researchers hope to use RoadTagger to help humans rapidly validate and approve continuous modifications to infrastructure in datasets such as OpenStreetMap, where many maps don’t contain lane counts or other details. A specific area of interest is Thailand, Bastani says, where roads are constantly changing, but there are few if any updates in the dataset.

    “Roads that were once labeled as dirt roads have been paved over so are better to drive on, and some intersections have been completely built over. There are changes every year, but digital maps are out of date,” he says. “We want to constantly update such road attributes based on the most recent imagery.”

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     
  • richardmitnick 11:42 am on January 22, 2020 Permalink | Reply
    Tags: “I like the biology-as-computer analogy so much.”, Create artificial systems from parts already found in nature, Jesse Tordoff, MIT, Organoids- artificially grown organs, President of Yale University’s Women in Computer Science Club, She has relished her new role as a mentor in her program and lab., , Tordoff had trouble adjusting to grad school and she was plagued with imposter syndrome [defined in post] in her early years.,   

    From MIT News: Women in STEM “Hacking life inside and outside the laboratory” Jesse Tordoff 

    From MIT News

    January 21, 2020
    Bridget E. Begg | Office of Graduate Education

    Jesse Tordoff. Image: Gretchen Ertl

    Managing her own synthetic biology project helped graduate student Jesse Tordoff overcome imposter syndrome and hit her stride.

    Jesse Tordoff makes cells form unusual patterns. “I have the coolest research project ever, which has the big, broad goal of controlling the shapes that cells grow into.” Her signature shape? Polka dots.

    “The idea is that [the process is] synthetic, outside of the natural developmental pathways,” she explains. “My project mostly involves giving the cells genetic circuits to express cell-to-cell adhesion molecules differently.”

    A fifth-year graduate student in the Computational and Systems Biology program, Tordoff is passionate about synthetic biology, which aims to create artificial systems from parts already found in nature — in her case, harnessing nature’s ability to form shapes as complex and intricate as the human body.

    The field has implications for developing organoids, artificially grown organs, and even things as fantastic as living materials, where engineered structures may one day be able to grow and heal themselves.

    Cells as computers

    Tordoff’s interest in science was fostered at an early age by her parents, who are both scientists at Monell Chemical Senses Center in Philadelphia. She recalls her father teaching her QBasic, a programming language, and her mother buying her a used light microscope that Tordoff used to observe microorganisms in pond water in her free time. She also grew to love entomology. “It’s official, I’m a nerd,” she laughs.

    In college, Tordoff turned to computer science, where she became enamored with the creative process of coding and solving problems. She was also president of Yale University’s Women in Computer Science Club, an experience that encouraged her to reflect on the gender disparities in technical fields and to appreciate her parents’ support in cultivating her early interests in math and science.

    She assumed she would seek a career in programming, but eventually Tordoff returned to bugs — this time cataloguing species in a part-time data entry job in college. Around the same time, she was introduced to the field of synthetic biology, and she realized that it perfectly merged her interests in computer science and the natural world.

    “I like the biology-as-computer analogy so much,” she says. “A computer runs on binary code, and you can control pretty much every part of it. You can make programs that are human-readable and human-interpretable. Cells are obviously way more complicated; they’re not built from the ground up the way computers are built from the ground up — not yet! But they do work on logic the same way computers do, just with much more complexity and very different mechanisms underneath.”

    Becoming the expert

    The wealth of synthetic biology labs attracted Tordoff to MIT for graduate school, and she is thrilled to be here. “People get jaded about it, but we’re at the best research institute in the entire world! It sounds pretentious when you say it like that, but then somehow it’s more pretentious to say it’s not a big deal. It’s a huge deal!” she says.

    Despite an unwavering enthusiasm for research, Tordoff had trouble adjusting to grad school, and she was plagued with imposter syndrome [One doubts one’s accomplishments and has a persistent internalized fear of being exposed as a “fraud”.] in her early years. Over her graduate career, these anxieties have subsided, but she often reflects on how she overcame them.

    “A big part of getting over my imposter syndrome was having my own research project, which I think is the best thing about grad school,” she says. “I remember in my first year, all of my cohort cared so much about machine learning, and I did not feel called to the machine learning path. At the time, I thought ‘I’m so dumb, I can’t understand that it’s interesting.’ And now I realize that it’s actually just not my scene! It’s not as cool to me.”

    The turning point came when she began working in the lab of Ron Weiss, a professor of biological engineering and of electrical engineering and computer science. After six months she got her own project, and she alone was responsible for designing and executing her experiments. “That made me feel like I was the expert — and it was true. And it made me realize that there is something that I’m good at. Realistically, there are a million ways to be good at something, and being honest about not understanding something is way more important than being the smartest person in the room,” Tordoff says.

    It’s a lesson that she tries to pass on to first-year students, technicians, and laboratory rotation students, and she has relished her new role as a mentor in her program and lab. “Partially, I see in their eyes that … they may be dealing with some of the anxiety issues that I was, too. I survived it, and I survived it because everyone was nice to me and supported me, so I feel like it’s sort of a pay-it-forward thing,” she says.

    A life outside the lab

    These days, Tordoff has hit her stride. Living in Inman Square, she enjoys walking or biking to lab, getting takeout from Punjabi Dhaba, and watching Netflix with her boyfriend, Sam. In fact, she finds time for many activities outside of lab and is surprised at the work-life balance she’s managed to achieve. “I thought that you didn’t have any free time in grad school. But I have so much free time to do stuff that I like,” she says. “This weekend, I chilled and watched ‘Great British Bakeoff’ for hours. That was the biggest surprise for me in grad school. When I work late it’s because I want to, not because I have to.”

    Tordoff is also a passionate crafter. Making resin jewelry is one of her favorite pastimes — a hobby that reflects her lifelong love of nature. She sometimes wears her creations, which can contain pressed flowers and leaves and sometimes acorns covered in glitter.

    Tordoff is grateful for her supportive family, friends, and labmates for helping her to find her niche in graduate school as well as always reminding her that she is more than her work. Adopting this mindset has allowed her to thrive both inside and outside of the laboratory. Their support has also given her a passion for mentorship; she encourages other young, struggling graduate students to be patient, realize that they are smart, and most importantly, learn to fail.

    “You just have to keep doing it! That’s the hardest lesson, for sure.”

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     
  • richardmitnick 8:24 am on January 15, 2020 Permalink | Reply
    Tags: "How to verify that quantum chips are computing correctly", , MIT,   

    From MIT News: “How to verify that quantum chips are computing correctly” 

    From MIT News

    January 13, 2020
    Rob Matheson

    Researchers from MIT, Google, and elsewhere have designed a novel method for verifying when quantum processors have accurately performed complex computations that classical computers can’t. They validate their method on a custom system (pictured) that’s able to capture how accurately a photonic chip (“PNP”) computed a notoriously difficult quantum problem. Image: Mihika Prabhu

    A new method determines whether circuits are accurately executing complex operations that classical computers can’t tackle.

    In a step toward practical quantum computing, researchers from MIT, Google, and elsewhere have designed a system that can verify when quantum chips have accurately performed complex computations that classical computers can’t.

    Quantum chips perform computations using quantum bits, called “qubits,” that can represent the two states corresponding to classic binary bits — a 0 or 1 — or a “quantum superposition” of both states simultaneously. The unique superposition state can enable quantum computers to solve problems that are practically impossible for classical computers, potentially spurring breakthroughs in material design, drug discovery, and machine learning, among other applications.

    Full-scale quantum computers will require millions of qubits, which isn’t yet feasible. In the past few years, researchers have started developing “Noisy Intermediate Scale Quantum” (NISQ) chips, which contain around 50 to 100 qubits. That’s just enough to demonstrate “quantum advantage,” meaning the NISQ chip can run certain algorithms that are intractable for classical computers. Verifying that the chips performed operations as expected, however, can be very inefficient. The chip’s outputs can look entirely random, so it takes a long time to simulate steps to determine if everything went according to plan.

    In a paper published today in Nature Physics, the researchers describe a novel protocol to efficiently verify that an NISQ chip has performed all the right quantum operations. They validated their protocol on a notoriously difficult quantum problem running on a custom quantum photonic chip.

    “As rapid advances in industry and academia bring us to the cusp of quantum machines that can outperform classical machines, the task of quantum verification becomes time critical,” says first author Jacques Carolan, a postdoc in the Department of Electrical Engineering and Computer Science (EECS) and the Research Laboratory of Electronics (RLE). “Our technique provides an important tool for verifying a broad class of quantum systems. Because if I invest billions of dollars to build a quantum chip, it sure better do something interesting.”

    Joining Carolan on the paper are researchers from EECS and RLE at MIT, as well as from the Google Quantum AI Laboratory, Elenion Technologies, Lightmatter, and Zapata Computing.

    Divide and conquer

    The researchers’ work essentially traces an output quantum state generated by the quantum circuit back to a known input state. Doing so reveals which circuit operations were performed on the input to produce the output. Those operations should always match what researchers programmed. If not, the researchers can use the information to pinpoint where things went wrong on the chip.

    At the core of the new protocol, called “Variational Quantum Unsampling,” lies a “divide and conquer” approach, Carolan says, that breaks the output quantum state into chunks. “Instead of doing the whole thing in one shot, which takes a very long time, we do this unscrambling layer by layer. This allows us to break the problem up to tackle it in a more efficient way,” Carolan says.

    For this, the researchers took inspiration from neural networks — which solve problems through many layers of computation — to build a novel “quantum neural network” (QNN), where each layer represents a set of quantum operations.

    To run the QNN, they used traditional silicon fabrication techniques to build a 2-by-5-millimeter NISQ chip with more than 170 control parameters — tunable circuit components that make manipulating the photon path easier. Pairs of photons are generated at specific wavelengths from an external component and injected into the chip. The photons travel through the chip’s phase shifters — which change the path of the photons — interfering with each other. This produces a random quantum output state — which represents what would happen during computation. The output is measured by an array of external photodetector sensors.

    That output is sent to the QNN. The first layer uses complex optimization techniques to dig through the noisy output to pinpoint the signature of a single photon among all those scrambled together. Then, it “unscrambles” that single photon from the group to identify what circuit operations return it to its known input state. Those operations should match exactly the circuit’s specific design for the task. All subsequent layers do the same computation — removing from the equation any previously unscrambled photons — until all photons are unscrambled.

    As an example, say the input state of qubits fed into the processor was all zeroes. The NISQ chip executes a bunch of operations on the qubits to generate a massive, seemingly randomly changing number as output. (An output number will constantly be changing as it’s in a quantum superposition.) The QNN selects chunks of that massive number. Then, layer by layer, it determines which operations revert each qubit back down to its input state of zero. If any operations are different from the original planned operations, then something has gone awry. Researchers can inspect any mismatches between the recovered and expected operations, and use that information to tweak the circuit design.
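
    To make the layer-by-layer “unscrambling” concrete, the toy numerical sketch below learns a single parameterized unitary that maps a scrambled one-qubit output state back toward the known input state, which is the objective at the heart of the protocol. It is not the published Variational Quantum Unsampling implementation; the one-qubit size, the Hermitian-generator parameterization, and the Nelder-Mead optimizer are all assumptions made for this example.

    import numpy as np
    from scipy.linalg import expm
    from scipy.optimize import minimize

    dim = 2                                    # one qubit, to keep the example minimal
    rng = np.random.default_rng(0)

    # "Unknown" circuit: a random unitary applied to |0>; we only observe its output state.
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    hidden_h = (g + g.conj().T) / 2            # Hermitian generator of the hidden unitary
    psi_out = expm(1j * hidden_h) @ np.eye(dim)[:, 0]

    def trial_inverse(params):
        """Parameterized unitary built from a Hermitian generator (2 * dim**2 real parameters)."""
        a = params[: dim * dim].reshape(dim, dim)
        b = params[dim * dim:].reshape(dim, dim)
        h = (a + a.T) / 2 + 1j * (b - b.T) / 2
        return expm(-1j * h)

    def infidelity(params):
        """1 minus the probability of recovering the known input state |0> after the trial inverse."""
        psi = trial_inverse(params) @ psi_out
        return 1.0 - np.abs(psi[0]) ** 2

    result = minimize(infidelity, rng.normal(size=2 * dim * dim), method="Nelder-Mead",
                      options={"maxiter": 20000, "fatol": 1e-12, "xatol": 1e-10})
    print("residual infidelity:", result.fun)  # approaches zero as the layer learns to unscramble the state

    Recovering the parameters of each such layer is what identifies the operations the circuit actually performed; any mismatch with the programmed operations flags a problem on the chip.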

    Boson “unsampling”

    In experiments, the team successfully ran a popular computational task used to demonstrate quantum advantage, called “boson sampling,” which is usually performed on photonic chips. In this exercise, phase shifters and other optical components will manipulate and convert a set of input photons into a different quantum superposition of output photons. Ultimately, the task is to calculate the probability that a certain input state will match a certain output state. That will essentially be a sample from some probability distribution.

    But it’s nearly impossible for classical computers to compute those samples, due to the unpredictable behavior of photons. It’s been theorized that NISQ chips can compute them fairly quickly. Until now, however, there’s been no way to verify that quickly and easily, because of the complexity involved with the NISQ operations and the task itself.

    “The very same properties which give these chips quantum computational power make them nearly impossible to verify,” Carolan says.

    In experiments, the researchers were able to “unsample” two photons that had run through the boson sampling problem on their custom NISQ chip — and in a fraction of the time it would take traditional verification approaches.

    “This is an excellent paper that employs a nonlinear quantum neural network to learn the unknown unitary operation performed by a black box,” says Stefano Pirandola, a professor of computer science who specializes in quantum technologies at the University of York. “It is clear that this scheme could be very useful to verify the actual gates that are performed by a quantum circuit — [for example] by a NISQ processor. From this point of view, the scheme serves as an important benchmarking tool for future quantum engineers. The idea was remarkably implemented on a photonic quantum chip.”

    While the method was designed for quantum verification purposes, it could also help capture useful physical properties, Carolan says. For instance, certain molecules when excited will vibrate, then emit photons based on these vibrations. By injecting these photons into a photonic chip, Carolan says, the unscrambling technique could be used to discover information about the quantum dynamics of those molecules to aid in bioengineering molecular design. It could also be used to unscramble photons carrying quantum information that have accumulated noise by passing through turbulent spaces or materials.

    “The dream is to apply this to interesting problems in the physical world,” Carolan says.

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     
  • richardmitnick 1:11 pm on January 13, 2020 Permalink | Reply
    Tags: "Influential electrons? Physicists uncover a quantum relationship", How electron energies vary from region to region in a particular quantum state, , MIT, , , Quantum hybridization in the relationships between moving electrons, , Spectromicroscopy   

    From New York University, the Lawrence Berkeley National Laboratory, Rutgers University, and MIT via phys.org: “Influential electrons? Physicists uncover a quantum relationship” 

    A team of physicists has mapped how electron energies vary from region to region in a particular quantum state with unprecedented clarity. This understanding reveals an underlying mechanism by which electrons influence one another, termed quantum “hybridization,” that had been invisible in previous experiments.

    Credit: CC0 Public Domain

    The findings, the work of scientists at New York University, the Lawrence Berkeley National Laboratory, Rutgers University, and MIT, are reported in the journal Nature Physics.

    “This sort of relationship is essential to understanding a quantum electron system—and the foundation of all movement—but had often been studied from a theoretical standpoint and not thought of as observable through experiments,” explains Andrew Wray, an assistant professor in NYU’s Department of Physics and one of the paper’s co-authors. “Remarkably, this work reveals a diversity of energetic environments inside the same material, allowing for comparisons that let us spot how electrons shift between states.”

    The scientists focused their work on bismuth selenide, or Bi2Se3, a material that has been under intense investigation for the last decade as the basis of advanced information and quantum computing technologies. Research in 2008 and 2009 identified bismuth selenide as host to a rare “topological insulator” quantum state that changes the way electrons at its surface interact with and store information.

    Studies since then have confirmed a number of theoretically inspired ideas about topological insulator surface electrons. However, because these particles are on a material’s surface, they are exposed to environmental factors not present in the bulk of the material, causing them to manifest and move in different ways from region to region.

    The resulting knowledge gap, together with similar challenges for other material classes, has motivated scientists to develop techniques for measuring electrons with micron- or nanometer-scale spatial resolution, allowing researchers to examine electron interaction without external interference.

    The Nature Physics research is one of the first studies to use this new generation of experimental tools, termed “spectromicroscopy” — and the first spectromicroscopy investigation of Bi2Se3. This procedure can track how the motion of surface electrons differs from region to region within a material. Rather than focusing on average electron activity over a single large region on a sample surface, the scientists collected data from nearly 1,000 smaller regions.

    By broadening the terrain through this approach, they could observe signatures of quantum hybridization in the relationships between moving electrons, such as a repulsion between electronic states that come close to one another in energy. Measurements from this method illuminated the variation of electronic quasiparticles across the material surface.

    “Looking at how the electronic states vary in tandem with one another across the sample surface reveals conditional relationships between different kinds of electrons, and it’s really a new way of studying a material,” explains Erica Kotta, an NYU graduate student and first author on the paper. “The results provide new insight into the physics of topological insulators by providing the first direct measurement of quantum hybridization between electrons near the surface.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    About Science X in 100 words

    Science X™ is a leading web-based science, research and technology news service which covers a full range of topics. These include physics, earth science, medicine, nanotechnology, electronics, space, biology, chemistry, computer sciences, engineering, mathematics and other sciences and technologies. Launched in 2004 (Physorg.com), Science X’s readership has grown steadily to include 5 million scientists, researchers, and engineers every month. Science X publishes approximately 200 quality articles every day, offering some of the most comprehensive coverage of sci-tech developments world-wide. Science X community members enjoy access to many personalized features such as social networking, a personal home page set-up, article comments and ranking, the ability to save favorite articles, a daily newsletter, and other options.
    Phys.org reaches about 1.75 million scientists, researchers, and engineers every month and publishes approximately 100 quality articles every day, offering some of the most comprehensive coverage of sci-tech developments world-wide. Quantcast 2009 included Phys.org in its list of the Global Top 2,000 Websites. Phys.org community members enjoy access to many personalized features such as social networking, a personal home page set-up, RSS/XML feeds, article comments and ranking, the ability to save favorite articles, a daily newsletter, and other options.

     
  • richardmitnick 9:53 am on January 12, 2020 Permalink | Reply
    Tags: Electrical, MIT, Yury Polyanskiy

    From MIT News: “Sending clearer signals” 

    From MIT News

    January 11, 2020
    Rob Matheson

    Yury Polyanskiy. Image: M. Scott Brauer

    Associate Professor Yury Polyanskiy is working to keep data flowing as the “internet of things” becomes a reality.

    In the secluded Russian city where Yury Polyanskiy grew up, all information about computer science came from the outside world. Visitors from distant Moscow would occasionally bring back the latest computer science magazines and software CDs to Polyanskiy’s high school for everyone to share.

    One day while reading a borrowed PC World magazine in the mid-1990s, Polyanskiy learned about a futuristic concept: the World Wide Web.

    Believing his city would never see such wonders of the internet, he and his friends built their own. Connecting an ethernet cable between two computers in separate high-rises, they could communicate back and forth. Soon, a handful of other kids asked to be connected to the makeshift network.

    “It was a pretty challenging engineering problem,” recalls Polyanskiy, an associate professor of electrical engineering and computer science at MIT, who recently earned tenure. “I don’t remember exactly how we did it, but it took us a whole day. You got a sense of just how contagious the internet could be.”

    Thanks to the then-recent fall of the Iron Curtain, Polyanskiy’s family did eventually connect to the internet. Soon after, he became interested in computer science and then information theory, the mathematical study of storing and transmitting data. Now at MIT, his most exciting work centers on preventing major data-transmission issues with the rise of the “internet of things” (IoT). Polyanskiy is a member of the Laboratory for Information and Decision Systems, the Institute for Data, Systems, and Society, and the Statistics and Data Science Center.

    Today, people carry around a smartphone and maybe a couple smart devices. Whenever you watch a video on your smartphone, for example, a nearby cell tower assigns you an exclusive chunk of the wireless spectrum for a certain time. It does so for everyone, making sure the data never collide.

    The number of IoT devices is expected to explode, however. People may carry dozens of smart devices; all delivered packages may have tracking sensors; and smart cities may implement thousands of connected sensors in their infrastructure. Current systems can’t divvy up the spectrum effectively to stop data from colliding. That will slow down transmission speeds and make our devices consume much more energy in sending and resending data.

    “There may soon be a hundredfold explosion of devices connected to the internet, which is going to clog the spectrum, and there will be no way to ensure interference-free transmission. Entirely new access approaches will be needed,” Polyanskiy says. “It’s the most exciting thing I’m working on, and it’s surprising that no one is talking much about it.”

    From Russia, with love of computer science

    Polyanskiy grew up in a place that translates in English to “Rainbow City,” so named because it was founded as a site to develop military lasers. Surrounded by woods, the city had a population of about 15,000 people, many of them engineers.

    In part, that environment got Polyanskiy into computer science. At the age of 12, he started coding — “and for profit,” he says. His father was working for an engineering firm, on a team that was programming controllers for oil pumps. When the lead programmer took another position, they were left understaffed. “My father was discussing who can help. I was sitting next to him, and I said, ‘I can help,’” Polyanskiy says. “He first said no, but I tried it and it worked out.”

    Soon after, his father opened his own company for designing oil pump controllers and brought Polyanskiy on board while he was still in high school. The business gained customers worldwide. He says some of the controllers he helped program are still being used today.

    Polyanskiy earned his bachelor’s in physics from the Moscow Institute of Physics and Technology, a top university worldwide for physics research. But then, interested in pursuing electrical engineering for graduate school, he applied to programs in the U.S. and was accepted to Princeton University.

    In 2005, he moved to the U.S. to attend Princeton, which came with cultural shocks “that I still haven’t recovered from,” Polyanskiy jokes. For starters, he says, the U.S. education system encourages interaction with professors. Also, the televisions, gaming consoles, and furniture in residential buildings and around campus were not placed under lock and key.

    “In Russia, everything is chained down,” Polyanskiy says. “I still can’t believe U.S. universities just keep those things out in the open.”

    At Princeton, Polyanskiy wasn’t sure which field to enter. But when it came time to select, he asked one rather discourteous student about studying under a giant in information theory, Sergio Verdú. The student told Polyanskiy he wasn’t smart enough for Verdú — so Polyanskiy got defiant. “At that moment, I knew for certain that Sergio would be my number one pick,” Polyanskiy says, laughing. “When people say I can’t do something, that’s usually the best way to motivate me.”

    At Princeton, working under Verdú, Polyanskiy focused on a component of information theory that deals with how much redundancy to send with data. Each time data are transmitted, they are perturbed by some noise. Adding duplicate data means less data get lost in that noise. Researchers thus study the optimal amounts of redundancy to reduce signal loss but keep transmissions fast.

    In his graduate work, Polyanskiy pinpointed sweet spots for redundancy when transmitting hundreds or thousands of data bits in packets, which is mostly how data are transmitted online today.
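
    The trade-off between redundancy and noise can be made concrete with a toy example. The sketch below is illustrative only and is not Polyanskiy’s analysis: it sends random bits over a simulated binary symmetric channel with and without a threefold repetition code, showing that redundancy cuts the error rate roughly from p to 3p^2 at the cost of tripling the amount of data sent.

    import numpy as np

    rng = np.random.default_rng(1)
    n_bits, flip_prob = 100_000, 0.1           # binary symmetric channel: each bit flips with probability 0.1

    bits = rng.integers(0, 2, n_bits)

    # No redundancy: each bit is sent once.
    received = bits ^ (rng.random(n_bits) < flip_prob)
    raw_error_rate = np.mean(received != bits)

    # Threefold repetition: send each bit three times, decode by majority vote.
    repeated = np.repeat(bits, 3)
    noisy = repeated ^ (rng.random(3 * n_bits) < flip_prob)
    decoded = noisy.reshape(n_bits, 3).sum(axis=1) >= 2
    rep_error_rate = np.mean(decoded != bits)

    print(f"error rate without redundancy: {raw_error_rate:.4f}")   # about 0.10
    print(f"error rate with 3x repetition: {rep_error_rate:.4f}")   # about 0.028 = 3*p**2 - 2*p**3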

    Getting hooked

    After earning his PhD in electrical engineering from Princeton, Polyanskiy finally did come to MIT, his “dream school,” in 2011, but as a professor. MIT had helped pioneer some information theory research and introduced the first college courses in the field.

    Some call information theory “a green island,” he says, “because it’s hard to get into but once you’re there, you’re very happy. And information theorists can be seen as snobby.” When he came to MIT, Polyanskiy says, he was narrowly focused on his work. But he experienced yet another cultural shock — this time in a collaborative and bountiful research culture.

    MIT researchers are constantly presenting at conferences, holding seminars, collaborating, and “working on about 20 projects in parallel,” Polyanskiy says. “I was hesitant that I could do quality research like that, but then I got hooked. I became more broad-minded, thanks to MIT’s culture of drinking from a fire hose. There’s so much going on that eventually you get addicted to learning fields that are far away from your own interests.”

    In collaboration with other MIT researchers, Polyanskiy’s group now focuses on finding ways to split up the spectrum in the coming IoT age. So far, his group has mathematically proven that the systems in use today do not have the capabilities and energy to do so. They’ve also shown what types of alternative transmission systems will and won’t work.

    Inspired by his own experiences, Polyanskiy likes to give his students “little hooks,” tidbits of information about the history of scientific thought surrounding their work and about possible future applications. One example is explaining philosophies behind randomness to mathematics students who may be strictly deterministic thinkers. “I want to give them a little taste of something more advanced and outside the scope of what they’re studying,” he says.

    After spending 14 years in the U.S., the culture has shaped the Russian native in certain ways. For instance, he’s accepted a more relaxed and interactive Western teaching style, he says. But it extends beyond the classroom, as well. Just last year, while visiting Moscow, Polyanskiy found himself holding a subway rail with both hands. Why is this strange? Because he was raised to keep one hand on the subway rail, and one hand over his wallet to prevent thievery. “With horror, I realized what I was doing,” Polyanskiy says, laughing. “I said, ‘Yury, you’re becoming a real Westerner.’”

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     
  • richardmitnick 10:06 am on January 10, 2020 Permalink | Reply
    Tags: Julia Ortony, MIT, Self-assembling nanostructures

    From MIT News: Women in STEM- “Julia Ortony: Concocting nanomaterials for energy and environmental applications” 

    From MIT News

    January 9, 2020
    Leda Zimmerman | MIT Energy Initiative

    Julia Ortony is the Finmeccanica Career Development Assistant Professor of Engineering in the Department of Materials Science and Engineering. Photo: Lillie Paquette/School of Engineering

    Assistant Professor Julia Ortony (right) and graduate student William Lindemann discuss his experiments on self-assembling nanofibers. Work at the Ortony lab focuses on molecular design and synthesis to create new soft nanomaterials for tackling problems related to energy and the environment. Photo: Lillie Paquette/School of Engineering

    The MIT assistant professor is entranced by the beauty she finds pursuing chemistry.

    A molecular engineer, Julia Ortony performs a contemporary version of alchemy.

    “I take powder made up of disorganized, tiny molecules, and after mixing it up with water, the material in the solution zips itself up into threads 5 nanometers thick — about 100 times smaller than the wavelength of visible light,” says Ortony, the Finmeccanica Career Development Assistant Professor of Engineering in the Department of Materials Science and Engineering (DMSE). “Every time we make one of these nanofibers, I am amazed to see it.”

    But for Ortony, the fascination doesn’t simply concern the way these novel structures self-assemble, a product of the interaction between a powder’s molecular geometry and water. She is plumbing the potential of these nanomaterials for use in renewable energy and environmental remediation technologies, including promising new approaches to water purification and the photocatalytic production of fuel.

    Tuning molecular properties

    Ortony’s current research agenda emerged from a decade of work into the behavior of a class of carbon-based molecular materials that can range from liquid to solid.

    During doctoral work at the University of California at Santa Barbara, she used magnetic resonance (MR) spectroscopy to make spatially precise measurements of atomic movement within molecules, and of the interactions between molecules. At Northwestern University, where she was a postdoc, Ortony focused this tool on self-assembling nanomaterials that were biologically based, in research aimed at potential biomedical applications such as cell scaffolding and regenerative medicine.

    “With MR spectroscopy, I investigated how atoms move and jiggle within an assembled nanostructure,” she says. Her research revealed that the surface of the nanofiber acted like a viscous liquid, but as one probed further inward, it behaved like a solid. Through molecular design, it became possible to tune the speed at which molecules that make up a nanofiber move.

    A door had opened for Ortony. “We can now use state-of-matter as a knob to tune nanofiber properties,” she says. “For the first time, we can design self-assembling nanostructures, using slow or fast internal molecular dynamics to determine their key behaviors.”

    Slowing down the dance

    When she arrived at MIT in 2015, Ortony was determined to tame and train molecules for nonbiological applications of self-assembling “soft” materials.

    “Self-assembling molecules tend to be very dynamic, where they dance around each other, jiggling all the time and coming and going from their assembly,” she explains. “But we noticed that when molecules stick strongly to each other, their dynamics get slow, and their behavior is quite tunable.” The challenge, though, was to synthesize nanostructures in nonbiological molecules that could achieve these strong interactions.

    “My hypothesis coming to MIT was that if we could tune the dynamics of small molecules in water and really slow them down, we should be able to make self-assembled nanofibers that behave like a solid and are viable outside of water,” says Ortony.

    Her efforts to understand and control such materials are now starting to pay off.

    “We’ve developed unique, molecular nanostructures that self-assemble, are stable in both water and air, and — since they’re so tiny — have extremely high surface areas,” she says. Since the nanostructure surface is where chemical interactions with other substances take place, Ortony has leapt to exploit this feature of her creations — focusing in particular on their potential in environmental and energy applications.

    Clean water and fuel from sunlight

    One key venture, supported by Ortony’s Professor Amar G. Bose Fellowship, involves water purification. The problem of toxin-laden drinking water affects tens of millions of people in underdeveloped nations. Ortony’s research group is developing nanofibers that can grab deadly metals such as arsenic out of such water. The chemical groups she attaches to nanofibers are strong, stable in air, and in recent tests “remove all arsenic down to low, nearly undetectable levels,” says Ortony.

    She believes an inexpensive textile made from nanofibers would be a welcome alternative to the large, expensive filtration systems currently deployed in places like Bangladesh, where arsenic-tainted water poses dire threats to large populations.

    “Moving forward, we would like to chelate arsenic, lead, or any environmental contaminant from water using a solid textile fabric made from these fibers,” she says.

    In another research thrust, Ortony says, “My dream is to make chemical fuels from solar energy.” Her lab is designing nanostructures with molecules that act as antennas for sunlight. These structures, exposed to and energized by light, interact with a catalyst in water to reduce carbon dioxide to different gases that could be captured for use as fuel.

    In recent studies, the Ortony lab found that it is possible to design these catalytic nanostructure systems to be stable in water under ultraviolet irradiation for long periods of time. “We tuned our nanomaterial so that it did not break down, which is essential for a photocatalytic system,” says Ortony.

    Students dive in

    While Ortony’s technologies are still in the earliest stages, her approach to problems of energy and the environment is already drawing student enthusiasts.

    Dae-Yoon Kim, a postdoc in the Ortony lab, won the 2018 Glenn H. Brown Prize from the International Liquid Crystal Society for his work on synthesized photo-responsive materials and started a tenure track position at the Korea Institute of Science and Technology this fall. Ortony also mentors Ty Christoff-Tempesta, a DMSE doctoral candidate, who was recently awarded a Martin Fellowship for Sustainability. Christoff-Tempesta hopes to design nanoscale fibers that assemble and disassemble in water to create environmentally sustainable materials. And Cynthia Lo ’18 won a best-senior-thesis award for work with Ortony on nanostructures that interact with light and self-assemble in water, work that will soon be published. She is “my superstar MIT Energy Initiative UROP [undergraduate researcher],” says Ortony.

    Ortony hopes to share her sense of wonder about materials science not just with students in her group, but also with those in her classes. “When I was an undergraduate, I was blown away at the sheer ability to make a molecule and confirm its structure,” she says. With her new lab-based course for grad students — 3.65 (Soft Matter Characterization) — Ortony says she can teach about “all the interests that drive my research.”

    While she is passionate about using her discoveries to solve critical problems, she remains entranced by the beauty she finds pursuing chemistry. Fascinated by science starting in childhood, Ortony says she sought out every available class in chemistry, “learning everything from beginning to end, and discovering that I loved organic and physical chemistry, and molecules in general.”

    Today, she says, she finds joy working with her “creative, resourceful, and motivated” students. She celebrates with them “when experiments confirm hypotheses, and it’s a breakthrough and it’s thrilling,” and reassures them “when they come with a problem, and I can let them know it will be thrilling soon.”

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     
  • richardmitnick 1:45 pm on January 2, 2020 Permalink | Reply
    Tags: "Scientists pin down timing of lunar dynamo’s demise", A mechanism known as precession may have powered a much stronger though shorter-lived dynamo., Around 4 billion years ago the infant moon was likely much closer to the Earth than it is today and much more susceptible to the planet’s gravitational effects., As the moon moved slowly away from the Earth the effect of precession decreased weakening the dynamo and the magnetic field in turn., , , , Core crystallization ruled out., , MIT, Reliable magnetic recorders   

    From MIT News: “Scientists pin down timing of lunar dynamo’s demise” 

    From MIT News

    January 1, 2020
    Jennifer Chu

    A new analysis of moon rocks pins down the end of the lunar dynamo, the process by which the moon once generated a magnetic field. Image: Hernán Cañellas and Benjamin Weiss

    A conventional compass would be of little use on the moon, which today lacks a global magnetic field.

    But the moon did produce a magnetic field billions of years ago, and it was likely even stronger than the Earth’s field today. Scientists believe that this lunar field, like Earth’s, was generated by a powerful dynamo — the churning of the moon’s core. At some point, this dynamo, and the magnetic field it generated, petered out.

    Now scientists from MIT and elsewhere have pinned down the timing of the lunar dynamo’s end, to around 1 billion years ago. The findings appear today in the journal Science Advances.

    The new timing rules out some theories for what drove the lunar dynamo in its later stages and favors one particular mechanism: core crystallization. As the moon’s inner iron core crystallized, the liquid core’s electrically charged fluid was buoyantly stirred, producing the dynamo.

    “The magnetic field is this nebulous thing that pervades space, like an invisible force field,” says Benjamin Weiss, professor of earth, atmospheric, and planetary sciences at MIT. “We’ve shown that the dynamo that produced the moon’s magnetic field died somewhere between 1.5 and 1 billion years ago, and seems to have been powered in an Earth-like way.”

    Weiss’ co-authors on the paper are co-lead authors Saied Mighani and Huapei Wang, as well as Caue Borlina and Claire Nichols of MIT, along with David Shuster of the University of California at Berkeley.

    Dueling dynamo theories

    Over the past few years, Weiss’ group and others have discovered signs of a strong magnetic field, of around 100 microteslas, in lunar rocks as old as 4 billion years. For comparison, Earth’s magnetic field today is around 50 microteslas.

    In 2017, Weiss’s group studied a sample collected from NASA’s Apollo project, and found traces of a much weaker magnetic field, below 10 microteslas, in a moon rock they determined to be about 2.5 billion years old. Their thinking at the time was that perhaps two mechanisms for the lunar dynamo were at play: The first could have generated a much stronger, earlier magnetic field around 4 billion years ago, before being replaced by a second, more long-lived mechanism that sustained a much weaker field, through to at least 2.5 billion years ago.

    “There are several ideas for what mechanisms powered the lunar dynamo, and the question is, how do you figure out which one did it?” Weiss says. “It turns out all these power sources have different lifetimes. So if you could figure out when the dynamo turned off, then you could distinguish between the mechanisms that have been proposed for the lunar dynamo. That was the purpose of this new paper.”

    Most of the magnetic studies of lunar samples from the Apollo missions have been of ancient rocks, dating to about 3 billion to 4 billion years old. These are rocks that originally spewed out as lava onto a very young lunar surface, and as they cooled, their microscopic grains aligned in the direction of the moon’s magnetic field. Much of the moon’s surface is covered in such rocks, which have remained unchanged since, preserving a record of the ancient magnetic field.

    However, lunar rocks whose magnetic histories began less than 3 billion years ago have been much harder to find because most lunar volcanism had ceased by this time.

    “The past 3 billion years of lunar history has been a mystery because there’s almost no rock record of it,” Weiss says.

    “Little compasses”

    Nevertheless, he and his colleagues identified two samples of lunar rock, collected by astronauts during the Apollo missions, that appear to have suffered a massive impact about 1 billion years ago and as a result were melted and welded back together in such a way that their ancient magnetic record was all but erased.

    The team took the samples back to the lab and first analyzed the orientation of each rock’s electrons, which Weiss describes as “little compasses” that either align in the direction of an existing magnetic field or appear in random orientations in the absence of one. For both samples, the team observed the latter: random configurations of electrons, suggesting that the rocks formed in an extremely weak to essentially zero magnetic field, of no more than 0.1 microteslas.

    The team then determined the age of both samples using a radiometric dating technique that Weiss and Shuster were able to adapt for this study.

    The team put the samples through a battery of tests to see whether they were indeed good magnetic recorders. In other words, once they were reheated by some massive impact, could they have still been sensitive enough to record even a weak magnetic field on the moon, if it existed?

    To answer this, the researchers placed both samples in an oven and blasted them with high temperatures to effectively erase their magnetic record, then exposed the rocks to an artificially generated magnetic field in the laboratory as they cooled.

    The results confirmed that the two samples were indeed reliable magnetic recorders and that the field strength they initially measured, of 0.1 microteslas, accurately represented the maximum possible value of the moon’s extremely weak magnetic field 1 billion years ago. Weiss says a field of 0.1 microteslas is so low that it’s likely the lunar dynamo ended by this time.

    The new findings line up with the predicted lifetime of core crystallization, a proposed mechanism for the lunar dynamo that could have generated a weak and long-lived magnetic field in the later part of the moon’s history. Weiss says that prior to core crystallization, a mechanism known as precession may have powered a much stronger though shorter-lived dynamo. Precession is a phenomenon by which the solid outer shell of a body such as the moon, in close proximity to a much larger body such as the Earth, wobbles in response to the Earth’s gravity. This wobbling stirs up the fluid in the core, the way swishing a cup of coffee stirs up the liquid inside.

    Around 4 billion years ago, the infant moon was likely much closer to the Earth than it is today, and much more susceptible to the planet’s gravitational effects. As the moon moved slowly away from the Earth, the effect of precession decreased, weakening the dynamo and the magnetic field in turn. Weiss says it’s likely that around 2.5 billion years ago, core crystallization became the dominant mechanism by which the lunar dynamo continued, producing a weaker magnetic field that continued to dissipate as the moon’s core eventually fully crystallized.

    The group is looking next to measure the direction of the moon’s ancient magnetic field in hopes of gleaning more information about the moon’s evolution.

    This research was supported, in part, by NASA.

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     
  • richardmitnick 12:51 pm on January 2, 2020 Permalink | Reply
    Tags: "How long will a volcanic island live?", Galapagos Islands, Hawaiin Islands, MIT,   

    From MIT News: “How long will a volcanic island live?” 

    MIT News

    From MIT News

    January 1, 2020
    Jennifer Chu

    An aerial view of Las Tintoreras, Isla Isabela in the Galapagos Islands, Ecuador.

    Plate tectonics and mantle plumes set the lifespan of volcanic islands like Hawaii and the Galapagos.

    The tectonic plates of the world were mapped in 1996, USGS.

    Hawaiian Islands map. https://www.lonelyplanet.com/maps/north-america/usa/hawaii/

    Topographic and bathymetric map of the Galápagos Islands, Ecuador.
    Eric Gaba (Sting – fr:Sting)

    When a hot plume of rock rises through the Earth’s mantle to puncture the overlying crust, it can create not only a volcanic ocean island, but also a swell in the ocean floor hundreds to thousands of kilometers long. Over time the island is carried away by the underlying tectonic plate, and the plume pops out another island in its place. Over millions of years, this geological hotspot can produce a chain of trailing islands, on which life may flourish temporarily before the islands sink, one by one, back into the sea.

    The Earth is pocked with dozens of hotspots, including those that produced the island chains of Hawaii and the Galapagos. While the process by which volcanic islands form is similar from chain to chain, the time that any island spends above sea level can vary widely, from a few million years in the case of the Galapagos to over 20 million for the Canary Islands. An island’s age can determine the life and landscapes that evolve there. And yet the mechanisms that set an island’s lifespan are largely unknown.

    Now scientists at MIT have an idea about the processes that determine a volcanic island’s age. In a paper published today in Science Advances, they report an analysis of 14 major volcanic island chains around the world. They found that an island’s age is related to two main geological factors: the speed of the underlying plate and the size of the swell generated by the hotspot plume.

    For instance, if an island lies on a fast-moving plate, it is likely to have a short lifespan, unless, as is the case with Hawaii, it was also created by a very large plume. The plume that gave rise to the Hawaiian islands is among the largest on Earth, and while the Pacific plate on which Hawaii sits is relatively speedy compared with other oceanic plates, it takes considerable time for the plate to slide over the plume’s expansive swell.

    The researchers found that this interplay between tectonic speed and plume size explains why the Hawaiian islands persist above sea level for millions of years longer than the oldest Galapagos Islands, which also sit on a plate that travels at a similar speed but over a much smaller plume. By comparison, the Canary Islands, among the oldest island chains in the world, sit on the slow-moving Atlantic plate and over a relatively large plume.

    “These island chains are dynamic, insular laboratories that biologists have long focused on,” says former MIT graduate student Kimberly Huppert, the study’s lead author. “But besides studies on individual chains, there’s not a lot of work that relates them to processes of the solid Earth, kilometers below the surface.”

    “You can imagine all these organisms living on a sort of treadmill made of islands, like stepping stones, and they’re evolving, diverging, migrating to new islands, and the old islands are drowning,” adds Taylor Perron, associate head of MIT’s Department of Earth, Atmospheric and Planetary Sciences. “What Kim has shown is, there’s a geophysical mechanism that controls how fast this treadmill is moving and how long the island chains go before they drop off the end.”

    Huppert and Perron co-authored the study with Leigh Royden, professor of earth, atmospheric and planetary sciences at MIT.

    Sinking a blowtorch

    The new study is a part of Huppert’s MIT thesis work, in which she looked mainly at the evolution of landscapes on volcanic island chains, the Hawaiian islands in particular. In studying the processes that contribute to island erosion, she dug up a controversy in the literature regarding the processes that cause the seafloor to swell around hotspot islands.

    “The idea was, if you heat some of the bottom of the plate, you can make it go up really fast by just thermal uplift, basically like a blowtorch under the plate,” Royden says.

    If this idea is correct, then by the same token, cooling of the heated plate should cause the seafloor to subside and islands to eventually sink back into the ocean. But in studying the ages of drowned islands in hotspot chains around the world, Huppert found that islands drown at a faster rate than any natural cooling mechanism could explain.

    “So most of this uplift and sinking couldn’t have been from heating and cooling,” Royden says. “It had to be something else.”

    Huppert’s observation inspired the group to compare major volcanic island chains in hopes of identifying the mechanisms of island uplift and sinking — which are likely the same processes that set an island’s lifespan, or time above sea level.

    Evolution, on a treadmill

    In their analysis, the researchers looked at 14 volcanic island chains around the world, including the Hawaiian, Galapagos, and Canary islands. For each island chain, they noted the direction in which the underlying tectonic plate was moving and measured the plate’s average speed relative to the hotspot. They then measured, in the direction of each island chain, the distance between the beginning and the end of the swell, or uplift in the crust, created by the underlying plume. For every island chain, they divided the swell distance by plate velocity to arrive at a number representing the average time a volcanic island should spend atop the plume’s swell — which should determine how long an island remains above sea level before sinking into the ocean.
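
    The division itself is simple arithmetic; the sketch below shows it with placeholder numbers (the swell length and plate speed are illustrative, not values measured in the study).

```python
# Rough residence-time estimate for a hotspot island: swell length divided by plate speed.
# The inputs below are illustrative placeholders, not measurements from the paper.

def residence_time_myr(swell_length_km, plate_speed_cm_per_yr):
    """Time, in millions of years, for a point on the plate to cross the plume's swell."""
    plate_speed_km_per_myr = plate_speed_cm_per_yr * 10.0  # 1 cm/yr = 10 km/Myr
    return swell_length_km / plate_speed_km_per_myr

# Example: a 1,000 km swell on a plate moving 10 cm/yr is crossed in about 10 million years;
# halving the plate speed or doubling the swell length doubles the residence time.
print(residence_time_myr(swell_length_km=1000, plate_speed_cm_per_yr=10))
```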

    When the researchers compared their calculations with the actual ages of each island in each of the 14 chains, including islands that had long since sunk below sea level, they found a strong correlation between the time spent atop the swell and the typical amount of time that islands remain above sea level. A volcanic island’s lifespan, they concluded, depends on a combination of the underlying plate’s speed and the size of the plume, or swell that it creates.

    Huppert says that the processes that set an island’s age can help scientists better understand biodiversity and how life looks different from one island chain to another.

    “If an island spends a long time above sea level, that provides a long time for speciation to play out,” Huppert says. “But if you have an island chain where you have islands that drown at a faster rate, then it will affect the ability of fauna to radiate to neighboring islands, and how these islands are populated.”

    The researchers posit that, in some sense, we have the interplay of tectonic speed and plume size to thank for our modern understanding of evolution.

    “You’re looking at a process in the solid Earth which is contributing to the fact that the Galapagos is a very fast moving treadmill, with islands moving off very quickly, with not a long time to erode, and this was the system that led to people discovering evolution,” Royden notes. “So in a sense this process really set the stage for humans to figure out what evolution was about, by doing it in this microcosm. If there hadn’t been this process, and the Galapagos hadn’t been on that short residence time, who knows how long it would have taken for people to figure it out.”

    This research was supported, in part, by NASA.

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     
  • richardmitnick 4:36 pm on December 23, 2019 Permalink | Reply
    Tags: , "The billion-year belch", A black hole spewed out jets of plasma., , , , , MIT, The outburst in galaxy cluster SPT-CLJ0528-5300 or SPT-0528 for short.   

    From MIT News: “The billion-year belch” 

    MIT News

    From MIT News

    December 23, 2019
    Fernanda Ferreira | School of Science

    Giant cavities in the X-ray emitting intracluster medium (shown in blue, as observed by NASA’s Chandra X-ray Observatory) have been carved out by a black hole outburst.

    NASA/Chandra X-ray Telescope

    X-ray data are overlaid on top of optical data from the Hubble Space Telescope (in red/orange), where the central galaxy that is likely hosting the culprit supermassive black hole is also visible.

    NASA/ESA Hubble Telescope

    Image courtesy of the researchers.

    Michael Calzadilla and colleagues describe a violent black hole outburst that provides new insight into galaxy cluster evolution.

    Billions of years ago, in the center of a galaxy cluster far, far away (15 billion light-years, to be exact), a black hole spewed out jets of plasma. As the plasma rushed out of the black hole, it pushed away material, creating two large cavities 180 degrees from each other. In the same way you can calculate the energy of an asteroid impact by the size of its crater, Michael Calzadilla, a graduate student at the MIT Kavli Institute for Astrophysics and Space Research (MKI), used the size of these cavities to figure out the power of the black hole’s outburst.

    In a recent paper in The Astrophysical Journal Letters, Calzadilla and his coauthors describe the outburst in galaxy cluster SPT-CLJ0528-5300, or SPT-0528 for short. Combining the volume and pressure of the displaced gas with the age of the two cavities, they were able to calculate the total energy of the outburst. At greater than 10^54 joules, equivalent to the energy of about 10^38 nuclear bombs, this is the most powerful outburst reported in a distant galaxy cluster. Coauthors of the paper include MKI research scientist Matthew Bayliss and assistant professor of physics Michael McDonald.
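
    As context for how cavity measurements become an energy estimate, a common back-of-the-envelope approach (assumed here for illustration; the paper’s detailed treatment may differ) takes each cavity’s enthalpy as roughly 4pV for a cavity inflated with relativistic plasma, sums the two cavities, and divides by their age to get an average power. The inputs below are placeholders chosen only to land in a plausible ballpark.

```python
# Back-of-the-envelope outburst energy from X-ray cavities. A standard estimate for a
# cavity inflated with relativistic plasma takes its enthalpy as ~4 * p * V. The spherical
# geometry and the numbers below are illustrative placeholders, not the paper's values.

import math

KPC_IN_METERS = 3.086e19
YEAR_IN_SECONDS = 3.15e7

def cavity_energy_joules(pressure_pa, radius_m):
    """Enthalpy of one spherical cavity: 4 * pressure * volume."""
    volume = (4.0 / 3.0) * math.pi * radius_m ** 3
    return 4.0 * pressure_pa * volume

# Hypothetical inputs: two ~50 kpc cavities in gas with ~1e-11 Pa ambient pressure,
# inflated over ~100 million years.
energy_total = 2 * cavity_energy_joules(pressure_pa=1e-11, radius_m=50 * KPC_IN_METERS)
age_seconds = 100e6 * YEAR_IN_SECONDS
print(f"total energy ~ {energy_total:.1e} J, mean power ~ {energy_total / age_seconds:.1e} W")
```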

    The universe is dotted with galaxy clusters, collections of hundreds and even thousands of galaxies that are permeated with hot gas and dark matter. At the center of each cluster is a black hole, which goes through periods of feeding, where it gobbles up plasma from the cluster, followed by periods of explosive outburst, where it shoots out jets of plasma once it has reached its fill. “This is an extreme case of the outburst phase,” says Calzadilla of their observation of SPT-0528. Even though the outburst happened billions of years ago, before our solar system had even formed, it took around 6.7 billion years for light from the galaxy cluster to travel all the way to Chandra, NASA’s X-ray emissions observatory that orbits Earth.

    Because galaxy clusters are full of gas, early theories about them predicted that as the gas cooled, the clusters would see high rates of star formation, which need cool gas to form. However, these clusters are not as cool as predicted and, as such, weren’t producing new stars at the expected rate. Something was preventing the gas from fully cooling. The culprits were supermassive black holes, whose outbursts of plasma keep the gas in galaxy clusters too warm for rapid star formation.

    The recorded outburst in SPT-0528 has another peculiarity that sets it apart from other black hole outbursts. It’s unnecessarily large. Astronomers think of the process of gas cooling and hot gas release from black holes as an equilibrium that keeps the temperature in the galaxy cluster — which hovers around 18 million degrees Fahrenheit — stable. “It’s like a thermostat,” says McDonald. The outburst in SPT-0528, however, is not at equilibrium.

    According to Calzadilla, if you look at how much power is released as gas cools onto the black hole versus how much power is contained in the outburst, the outburst is vastly overdoing it. In McDonald’s analogy, the outburst in SPT-0528 is a faulty thermostat. “It’s as if you cooled the air by 2 degrees, and the thermostat’s response was to heat the room by 100 degrees,” McDonald explains.

    Earlier in 2019, McDonald and colleagues released a paper [The Astrophysical Journal] looking at a different galaxy cluster, one that displays a completely opposite behavior to that of SPT-0528.

    With increasing data quality (shown progressively, left to right, in images from 2012, 2015, and 2019) Assistant Professor Michael McDonald and colleagues can conclusively show that the black hole in the Phoenix galaxy cluster is not preventing star formation.
    Photos (left to right): Magellan/IMACS/M.McDonald; Magellan/Megacam/M.McDonald; ESA and NASA/Hubble/M.McDonald

    https://sciencesprings.wordpress.com/2019/11/21/from-mit-news-phoenix-cluster-is-cooling-faster-than-expected/

    Instead of an unnecessarily violent outburst, the black hole in this cluster, dubbed Phoenix, isn’t able to keep the gas from cooling. Phoenix is full of young star nurseries, which sets it apart from the majority of known galaxy clusters.

    “With these two galaxy clusters, we’re really looking at the boundaries of what is possible at the two extremes,” McDonald says of SPT-0528 and Phoenix. He and Calzadilla will also characterize the more normal galaxy clusters, in order to understand the evolution of galaxy clusters over cosmic time. To explore this, Calzadilla is characterizing 100 galaxy clusters.

    The reason for characterizing such a large collection of galaxy clusters is that each telescope image captures a cluster at a single moment in time, whereas the behaviors of interest unfold over cosmic time. The clusters in the sample cover a range of distances and ages, allowing Calzadilla to investigate how their properties change over cosmic time. “These are timescales that are much bigger than a human timescale or what we can observe,” explains Calzadilla.

    The research is similar to that of a paleontologist trying to reconstruct the evolution of an animal from a sparse fossil record. But, instead of bones, Calzadilla is studying galaxy clusters, ranging from SPT-0528 with its violent plasma outburst on one end to Phoenix with its rapid cooling on the other. “You’re looking at different snapshots in time,” says Calzadilla. “If you build big enough samples of each of those snapshots, you can get a sense of how a galaxy cluster evolves.”

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     
  • richardmitnick 8:51 pm on December 11, 2019 Permalink | Reply
    Tags: "Is there dark matter at the center of the Milky Way", , , , , MIT   

    From MIT News: “Is there dark matter at the center of the Milky Way?”

    MIT News

    From MIT News

    December 10, 2019
    Jennifer Chu

    A map of gamma ray emissions throughout the Milky Way galaxy, based on observations from the Fermi Gamma-ray Space Telescope.

    NASA/Fermi LAT

    NASA/Fermi Gamma Ray Space Telescope

    The inset depicts the Galactic Center Excess – an unexpected, spherical region of gamma ray emissions at the center of our galaxy, of unknown origin. Credit: NASA/T. Linden, U.Chicago

    A new analysis puts dark matter back in the game as a possible source of energy excess at the galactic center.

    MIT physicists are reigniting the possibility, which they previously had snuffed out, that a bright burst of gamma rays at the center of our galaxy may be the result of dark matter after all.

    For years, physicists have known of a mysterious surplus of energy at the Milky Way’s center, in the form of gamma rays — the most energetic waves in the electromagnetic spectrum. These rays are typically produced by the hottest, most extreme objects in the universe, such as supernovae and pulsars.

    Gamma rays are found across the disk of the Milky Way, and for the most part physicists understand their sources. But there is a glow of gamma rays at the Milky Way’s center, known as the galactic center excess, or GCE, with properties that are difficult for physicists to explain given what they know about the distribution of stars and gas in the galaxy.

    There are two leading possibilities for what may be producing this excess: a population of high-energy, rapidly rotating neutron stars known as pulsars, or, more enticingly, a concentrated cloud of dark matter, colliding with itself to produce a glut of gamma rays.

    In 2015, an MIT-Princeton University team, including associate professor of physics Tracy Slatyer and postdocs Benjamin Safdi and Wei Xue, came down in favor of pulsars. The researchers had analyzed observations of the galactic center taken by the Fermi Gamma-ray Space Telescope, using a “background model” that they developed to describe all the particle interactions in the galaxy that could produce gamma rays. They concluded, rather definitively, that the GCE was most likely a result of pulsars, and not dark matter.

    However, in new work, led by MIT postdoc Rebecca Leane, Slatyer has since reassessed this claim. In trying to better understand the 2015 analytical method, Slatyer and Leane found that the model they used could in fact be “tricked” to produce the wrong result. Specifically, the researchers ran the model on actual Fermi observations, as the MIT-Princeton team did in 2015, but this time they added a fake extra signal of dark matter. They found that the model failed to pick up this fake signal, and even as they turned the signal up, the model continued to assume pulsars were at the heart of the excess.

    The results, published today in the journal Physical Review Letters, highlight a “mismodeling effect” in the 2015 analysis and reopen what many had thought was a closed case.

    “It’s exciting in that we thought we had eliminated the possibility that this is dark matter,” Slatyer says. “But now there’s a loophole, a systematic error in the claim we made. It reopens the door for the signal to be coming from dark matter.”

    Milky Way’s center: grainy or smooth?

    While the Milky Way galaxy more or less resembles a flat disk in space, the excess of gamma rays at its center occupies a more spherical region, extending about 5,000 light years in every direction from the galactic center.

    In their 2015 study, Slatyer and her colleagues developed a method to determine whether the profile of this spherical region is smooth or “grainy.” They reasoned that, if pulsars are the source of the gamma ray excess, and these pulsars are relatively bright, the gamma rays they emit should inhabit a spherical region that, when imaged, looks grainy, with dark gaps between the bright spots where the pulsars sit.

    If, however, dark matter is the source of the gamma ray excess, the spherical region should look smooth: “Every line of sight toward the galactic center probably has dark matter particles, so I shouldn’t see any gaps or cold spots in the signal,” Slatyer explains.

    She and her team used a background model of all the matter and gas in the galaxy, and all the particle interactions that could occur to produce gamma rays. They considered models for the GCE’s spherical region that were grainy on one hand or smooth on the other, and devised a statistical method to tell the difference between them. They then fed into the model actual observations of the spherical region, taken by the Fermi telescope, and looked to see if these observations fit more with a smooth or grainy profile.
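
    To build intuition for why a grainy map can be told apart from a smooth one, the toy simulation below (an illustration only, not the team’s actual statistical method) compares two mock sky maps with the same total flux: one spread evenly across pixels, one concentrated in a few bright point sources. The grainy map’s pixel counts are far more variable than the smooth map’s near-Poisson counts.

```python
# Toy illustration of the "grainy versus smooth" distinction (not the team's method):
# two mock maps with the same total expected counts, one diffuse and one made of a few
# bright point sources, have very different pixel-count statistics.

import numpy as np

rng = np.random.default_rng(0)
n_pix = 10_000
total_expected_counts = 50_000

# Smooth case: expected counts spread evenly, so every pixel is Poisson around one mean.
smooth_map = rng.poisson(total_expected_counts / n_pix, size=n_pix)

# Grainy case: the same expected counts come from 50 bright sources in random pixels.
grainy_mean = np.zeros(n_pix)
source_pixels = rng.choice(n_pix, size=50, replace=False)
grainy_mean[source_pixels] = total_expected_counts / 50
grainy_map = rng.poisson(grainy_mean)

# The variance-to-mean ratio is ~1 for the smooth (Poisson) map and far above 1 for the
# grainy map, even though both maps have the same average number of counts per pixel.
for name, counts in [("smooth", smooth_map), ("grainy", grainy_map)]:
    print(name, "variance/mean =", round(counts.var() / counts.mean(), 1))
```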

    “We saw it was 100 percent grainy, and so we said, ‘oh, dark matter can’t do that, so it must be something else,’” Slatyer recalls. “My hope was that this would be just the first of many studies of the galactic center region using similar techniques. But by 2018, the main cross-checks of the method were still the ones we’d done in 2015, which made me pretty nervous that we might have missed something.”

    Planting a fake

    After arriving at MIT in 2017, Leane became interested in analyzing gamma-ray data. Slatyer suggested they try to test the robustness of the statistical method used in 2015, to develop a deeper understanding of the result. The two researchers asked the difficult question: Under what circumstances would their method break down? If the method withstood interrogation, they could be confident in the original 2015 result. If, however, they discovered scenarios in which the method collapsed, it would suggest something was amiss with their approach, and perhaps dark matter could still be at the center of the gamma ray excess.

    Leane and Slatyer repeated the approach of the MIT-Princeton team from 2015, but instead of feeding into the model Fermi data, the researchers essentially drew up a fake map of the sky, including a signal of dark matter, and pulsars that were not associated with the gamma ray excess. They fed this map into the model and found that, despite there being a dark matter signal within the spherical region, the model concluded this region was most likely grainy and therefore dominated by pulsars. This was the first clue, Slatyer says, that their method “wasn’t foolproof.”

    At a conference to present their results thus far, Leane entertained a question from a colleague: What if she added a fake signal of dark matter that was combined with real observations, rather than with a fake background map?

    The team took up the challenge, feeding the model with data from the Fermi telescope, along with a fake signal of dark matter. Despite the deliberate plant, their statistical analysis again missed the dark matter signal and returned a grainy, pulsar-like picture. Even when they turned up the dark matter signal to four times the size of the actual gamma ray excess, their method failed to see it.
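
    The logic of that injection test can be summarized in a few lines. The schematic below is a sketch, not the published analysis: it assumes a user-supplied fitting routine (a placeholder standing in for the real pipeline) and simply asks whether the fitted smooth component grows as a synthetic smooth signal of increasing size is added to the observed counts. A fit that keeps returning a grainy answer as the injection is scaled up is biased.

```python
# Schematic signal-injection check (a sketch; the fitting routine is a placeholder, not
# the published analysis). Inject a synthetic smooth signal of known size into the real
# counts map, refit, and see whether the recovered smooth component grows with it.

import numpy as np

def injection_test(observed_counts, smooth_template, fit_smooth_fraction, scales):
    """Return (injected scale, fitted smooth fraction) pairs.

    observed_counts -- real photon counts per pixel
    smooth_template -- expected counts per pixel for a unit-strength smooth signal
    fit_smooth_fraction -- analysis routine returning the fraction of the excess
        attributed to the smooth component (assumed to exist; supplied by the user)
    scales -- multiples of the template to inject
    """
    rng = np.random.default_rng(1)
    results = []
    for scale in scales:
        injected = observed_counts + rng.poisson(scale * smooth_template)
        results.append((scale, fit_smooth_fraction(injected)))
    return results  # an unbiased fit should show the smooth fraction rising with scale
```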

    “By that stage, I was pretty excited, because I knew the implications were very big — it meant that the dark matter explanation was back on the table,” Leane says.

    She and Slatyer are working to better understand the bias in their approach, and hope to tune out this bias in the future.

    “If it’s really dark matter, this would be the first evidence of dark matter interacting with visible matter through forces other than gravity,” Leane says. “The nature of dark matter is one of the biggest open questions in physics at the moment. Identifying this signal as dark matter may allow us to finally expose the fundamental identity of dark matter. No matter what the excess turns out to be, we will learn something new about the universe.”

    This research was funded in part by the Office of High Energy Physics of the U.S. Department of Energy. This research was conducted in part while Slatyer was a visiting junior professor at the Institute for Advanced Study’s School of Natural Sciences, during which she was supported by the Institute for Advanced Study’s John N. Bahcall Fellowship.


    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     