Tagged: M.I.T. News

  • richardmitnick 3:05 pm on September 2, 2014 Permalink | Reply
    Tags: M.I.T. News

    From M.I.T.: “Nature’s tiny engineers” 

    MIT News

    September 1, 2014
    David L. Chandler | MIT News Office

    Conventional wisdom has long held that corals — whose calcium-carbonate skeletons form the foundation of coral reefs — are passive organisms that rely entirely on ocean currents to deliver dissolved substances, such as nutrients and oxygen. But now scientists at MIT and the Weizmann Institute of Science (WIS) in Israel have found that they are far from passive, engineering their environment to sweep water into turbulent patterns that greatly enhance their ability to exchange nutrients and dissolved gases with their environment.

    “These microenvironmental processes are not only important, but also unexpected,” says Roman Stocker, an associate professor of civil and environmental engineering at MIT and senior author of a paper describing the results in the Proceedings of the National Academy of Sciences.

    When the team set up their experiment with living coral in tanks in the lab, “I was expecting that this would be a smooth microworld, there would be not much action except the external flow,” Stocker says. Instead, what the researchers found, by zooming in on the coral surface with powerful microscopes and high-speed video cameras, was the opposite: Within the millimeter closest to the coral surface, “it’s very violent,” he says.

    Vortical ciliary flows enhance the exchange of oxygen and nutrients between corals and their environment. The paths of tracer particles are color-coded by fluid velocity, demonstrating that the coral surface is driving the flow. Courtesy of the researchers

    It’s long been known that corals have cilia, small threadlike appendages that can push water along the coral surface. However, these currents were previously assumed to move parallel to the coral surface, in a conveyor-belt fashion. Such smooth motion may help corals remove sediments, but would have little effect on the exchange of dissolved nutrients. Now Stocker and his colleagues show that the cilia on the coral’s surface are arranged in such a way as to produce strong swirls of water that draw nutrients toward the coral, while driving away potentially toxic waste products, such as excess oxygen.

    Not just passive

    “The general thinking has been that corals are completely dependent upon ambient flow, from tides and turbulence, to enable them to overcome diffusion limitation and facilitate the efficient supply of nutrients and the disposal of dissolved waste products,” says Orr Shapiro, a postdoc from WIS and co-first author on the paper, who spent a year in Stocker’s lab making these observations.

    Under such a scenario, colonies in sheltered parts of a reef or at slack tide would see little water movement and might experience severe nutrient limitation or a buildup of toxic waste, to the point of jeopardizing their survival. “Even the shape of the coral can be problematic” under that passive scenario, says Vicente Fernandez, an MIT postdoc and co-first author of the paper. Coral structures are often “treelike, with a deeply branched structure that blocks a lot of the external flow, so the amount of new water going through to the center is very low.”

    The team’s approach of looking at corals with video microscopy and advanced image analysis changed this paradigm. They showed that corals use their cilia to actively enhance the exchange of dissolved molecules, which allows them to maintain increased rates of photosynthesis and respiration even under near-zero ambient flow.
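As a back-of-the-envelope check on why active stirring matters, one can compare advective and diffusive transport across the roughly one-millimeter stirred layer described above. The diffusivity and flow speed in this sketch are illustrative assumptions, not values from the study:

```python
# Advection vs. diffusion across the stirred layer near a coral surface.
# All numbers are illustrative assumptions, not measurements from the study.
D = 2.0e-9   # m^2/s, molecular diffusivity of oxygen in seawater (assumed)
L = 1.0e-3   # m, thickness of the actively stirred layer (from the article)
U = 1.0e-3   # m/s, characteristic ciliary flow speed (assumed)

peclet = U * L / D        # ratio of advective to diffusive transport
t_diffusion = L**2 / D    # time to cross the layer by diffusion alone
t_advection = L / U       # time to cross it when actively stirred

print(f"Peclet number: {peclet:.0f}")            # stirring dominates when >> 1
print(f"diffusion-only crossing time: {t_diffusion:.0f} s")
print(f"stirred crossing time: {t_advection:.0f} s")
```

With these assumed values the Péclet number comes out in the hundreds: ciliary stirring moves dissolved gases across the boundary layer orders of magnitude faster than diffusion alone would.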

    The researchers tested six different species of reef corals, demonstrating that all share the ability to induce complex turbulent flows around them. “While that doesn’t yet prove that all reef corals do the same,” Shapiro says, “it appears that most if not all have the cilia that create these flows. The retention of cilia through 400 million years of evolution suggests that reef corals derive a substantial evolutionary advantage” from these flows.

    Corals need to stir it up

    The reported findings transform the way we perceive the surface of reef corals; the existing view of a stagnant boundary layer has been replaced by one of a dynamic, actively stirred environment. This will be important not only to questions of mass transport, but also to the interactions of marine microorganisms with coral colonies, a subject that attracts much attention due to a global increase in coral disease and reef degradation over the past decades.

    Besides illuminating how coral reefs function, which could help better predict their health in the face of climate change, this research could have implications in other fields, Stocker suggests: Cilia are ubiquitous in more complex organisms — such as inside human airways, where they help to sweep away contaminants.

    But such processes are difficult to study because cilia are internal. “It’s rare that you have a situation in which you see cilia on the outside of an animal,” Stocker says — so corals could provide a general model for understanding ciliary processes related to mass transport and disease.

    David Bourne, a researcher at the Australian Institute of Marine Science who was not connected with this research, says the work has “provided a major leap forward in understanding why corals are so efficient and thrive. … We finally have a greater understanding of why corals have been successful in establishing and providing the structural framework of coral reef ecosystems.”

    Bourne adds that Stocker has made great strides by “applying his engineering background to biological questions. This cross-disciplinary approach allows his group to approach fundamental questions from a new angle and provide novel answers.”

    In addition to Stocker, Shapiro, and Fernandez, the research team included Assaf Vardi, faculty at WIS; postdoc Melissa Garren; former MIT postdoc Jeffrey Guasto, now an assistant professor at Tufts University; undergraduate François Debaillon-Vesque from MIT and the École Polytechnique in Paris; and Esti Kramarski-Winter from WIS. The work was supported by the Human Frontiers in Science Program, the National Science Foundation, the National Institutes of Health, and the Gordon and Betty Moore Foundation.

    See the full article here.

    ScienceSprings relies on technology from

    MAINGEAR computers



  • richardmitnick 9:11 am on August 29, 2014 Permalink | Reply
    Tags: M.I.T. News

    From M.I.T.: “The power of hidden patterns” 

    MIT News

    August 29, 2014
    David L. Chandler | MIT News Office

    Interfaces within materials can be patterned as a means of controlling the properties of composites.

    Patterned surfaces are all the rage among researchers seeking to induce surfaces to repel water or adhere to other things, or to modify materials’ electrical properties.

    Interfaces between solid materials are surfaces with intricate, internal structure (shown on the left). To control that structure, and to use it for specific applications, researchers model it in a simplified way (shown on the right).

    Image: Niaz Abdolrahim and Jose-Luis Olivares/MIT

    Now materials scientists at MIT have added a new wrinkle to research on the patterning of surfaces: While most research has focused on patterns on the outer surfaces of materials, Michael Demkowicz and his team in MIT’s Department of Materials Science and Engineering (DMSE) have begun to explore the effects of patterned surfaces deep within materials — specifically, at the interfaces between layers of crystalline materials.

    Their results, published in the journal Scientific Reports, show that such control of internal patterns can lead to significant improvements in the performance of the resulting materials.

    Demkowicz explains that much research has aimed to create layered composites with desired strength, flexibility, or resistance to vibrations, temperature changes, or radiation. But actually controlling the surfaces where two materials meet within a composite is a tricky process.

    “People don’t think of them as surfaces,” says Demkowicz, an associate professor in DMSE. “If they do, they think of it as a uniform surface, but as it turns out, most interfaces are not uniform.”

    To control the properties of these materials, it is essential to understand and direct these nonuniform interfaces, Demkowicz says. He and his team have taken classical equations used to describe average properties of surfaces and adapted them to instead describe variations in these surfaces “location by location. That’s not easy to do experimentally, but we can do that directly in our computer simulations.”

    The ability to simulate, and then control, how defects or variations are distributed at these interfaces could be useful for a range of applications, he says. For example, in materials used on the interior walls of fusion power reactors, such patterning could make a big difference in durability under extreme conditions.

    As metal walls in these reactors are bombarded by alpha particles — the nuclei of helium atoms — from the fusion reaction, these particles embed themselves and form tiny helium bubbles, which over time can weaken the material and cause it to fail.

    “It’s the most extreme of extreme environments,” Demkowicz says. But by controlling the patterning within the material so that the bubbles line up and form a channel, the helium could simply diffuse out of the materials instead of accumulating, he says. “If we’re successful in doing that, to produce a pathway for the helium to escape, it could be huge,” he says.

    “By exploiting the internal structure as a template, exactly analogous to what people do with surfaces, we can make the bubbles form channels,” Demkowicz adds. The same principle can apply to engineering the properties of materials for other applications, he says, such as controlling how phonons — vibrations of heat or sound — move through a crystalline structure, which could be important in the production of thermoelectric devices. Similarly, the creation of pathways for diffusion within a material could help improve the efficiency of devices such as lithium-ion batteries and fuel cells, he says.

    “The mechanical properties of materials also depend on the internal structure, so you can make them strong or weak,” Demkowicz says, by controlling these interfaces. While materials are ordinarily engineered for strength, there are applications where “you want something that comes apart easily at the seams,” he says.

    David L. McDowell, executive director of the Institute for Materials at the Georgia Institute of Technology, who was not involved in this work, says it “is exciting in that it offers a practical reduced-order strategy to exploit extended defects, to influence and tailor properties and responses of interfaces in materials. These kinds of high-throughput advances in design of interfaces are a key component of realizing the vision of the U.S. Materials Genome Initiative, developing new and improved materials at half the time and half the cost.”

    The research team also included postdocs Aurelien Vattre (now at the French Atomic Energy and Alternatives Commission), Niaz Abdolrahim, and Kedarnath Kolluri. The work was supported by the U.S. Department of Energy and the National Science Foundation.

    See the full article here.




  • richardmitnick 7:31 am on August 26, 2014 Permalink | Reply
    Tags: M.I.T. News

    From M.I.T.- “Study: Cutting emissions pays for itself” 

    MIT News

    August 24, 2014
    Audrey Resutek | Joint Program on the Science and Policy of Global Change

    Lower rates of asthma and other health problems are frequently cited as benefits of policies aimed at cutting carbon emissions from sources like power plants and vehicles, because these policies also lead to reductions in other harmful types of air pollution.

    But just how large are the health benefits of cleaner air in comparison to the costs of reducing carbon emissions? MIT researchers looked at three policies achieving the same reductions in the United States, and found that the savings on health care spending and other costs related to illness can be big — in some cases, more than 10 times the cost of policy implementation.

    Illustration: Christine Daniloff/MIT

    “Carbon-reduction policies significantly improve air quality,” says Noelle Selin, an assistant professor of engineering systems and atmospheric chemistry at MIT, and co-author of a study published today in Nature Climate Change. “In fact, policies aimed at cutting carbon emissions improve air quality by a similar amount as policies specifically targeting air pollution.”

    Selin and colleagues compared the health benefits to the economic costs of three climate policies: a clean-energy standard, a transportation policy, and a cap-and-trade program. The three were designed to resemble proposed U.S. climate policies, with the clean-energy standard requiring emissions reductions from power plants similar to those proposed in the Environmental Protection Agency’s Clean Power Plan.

    Health savings constant across policies

    The researchers found that savings from avoided health problems could recoup 26 percent of the cost to implement a transportation policy, but up to 10.5 times the cost of implementing a cap-and-trade program. The difference depended largely on the costs of the policies, as the savings — in the form of avoided medical care and saved sick days — remained roughly constant: Policies aimed at specific sources of air pollution, such as power plants and vehicles, did not lead to substantially larger benefits than cheaper policies, such as a cap-and-trade approach.

    Savings from health benefits dwarf the estimated $14 billion cost of a cap-and-trade program. At the other end of the spectrum, a transportation policy with rigid fuel-economy requirements is the most expensive policy, costing more than $1 trillion in 2006 dollars, with health benefits recouping only a quarter of those costs. The price tag of a clean-energy standard fell between the costs of the other two policies, with associated health benefits just edging out costs, at $247 billion versus $208 billion.
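For a rough consistency check, the figures quoted above can be combined into benefit-cost ratios. This sketch uses only the article's numbers; since the transportation-policy cost is given only as "more than $1 trillion," $1 trillion serves as a lower-bound placeholder:

```python
# Benefit-cost ratios implied by the figures in the article (2006 dollars,
# $ billions). The transportation cost is quoted only as "more than
# $1 trillion," so 1000 here is a lower-bound placeholder.
policies = {
    #                   (implementation cost, health-related savings)
    "cap-and-trade":    (14.0, 14.0 * 10.5),      # savings 10.5x the cost
    "clean-energy std": (208.0, 247.0),
    "transportation":   (1000.0, 1000.0 * 0.26),  # savings recoup 26%
}

for name, (cost, benefit) in policies.items():
    print(f"{name:>16}: cost ${cost:,.0f}B, savings ${benefit:,.0f}B, "
          f"ratio {benefit / cost:.2f}")
```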

    “If cost-benefit analyses of climate policies don’t include the significant health benefits from healthier air, they dramatically underestimate the benefits of these policies,” says lead author Tammy Thompson, now at Colorado State University, who conducted the research as a postdoc in Selin’s group.

    Most detailed assessment to date

    The study is the most detailed assessment to date of the interwoven effects of climate policy on the economy, air pollution, and the cost of health problems related to air pollution. The MIT group paid especially close attention to how changes in emissions caused by policy translate into improvements in local and regional air quality, using comprehensive models of both the economy and the atmosphere.

    In addition to carbon dioxide, burning fossil fuels releases a host of other chemicals into the atmosphere. Some of these substances interact to form ground-level ozone, as well as fine particulate matter. The researchers modeled where and when these chemical reactions occurred, and where the resulting pollutants ended up — in cities where many people would come into contact with them, or in less populated areas.

    The researchers projected the health effects of ground-level ozone and fine particulate matter, two of the biggest health offenders related to fossil-fuel emissions. Both pollutants can cause asthma attacks and heart and lung disease, and can lead to premature death.

    In 2011, 231 counties in the U.S. exceeded the EPA’s regulatory standards for ozone, the main component of smog. Standards for fine particulate matter — airborne particles small enough to be inhaled deep into the lungs and even absorbed into the bloodstream — were exceeded in 118 counties.

    While cutting carbon dioxide from current levels in the U.S. will result in savings from better air quality, pollution-related benefits decline as carbon policies become more stringent. Selin cautions that after a certain point, most of the health benefits have already been reaped, and additional emissions reductions won’t translate into greater improvements.

    “While air-pollution benefits can help motivate carbon policies today, these carbon policies are just the first step,” Selin says. “To manage climate change, we’ll have to make carbon cuts that go beyond the initial reductions that lead to the largest air-pollution benefits.”

    The study shows that climate policies can also have significant local benefits not related to their impact on climate, says Gregory Nemet, a professor of public affairs and environmental studies at the University of Wisconsin at Madison who was not involved in the study.

    “A particularly notable aspect of this study is that even though several recent studies have shown large co-benefits, this study finds large co-benefits in the U.S., where air quality is assumed to be high relative to other countries,” Nemet says. “Now that states are on the hook to come up with plans to meet federal emissions targets by 2016, you can bet they will take a close look at these results.”

    This research was supported by funding from the EPA’s Science to Achieve Results program.

    See the full article here.




  • richardmitnick 7:16 am on August 26, 2014 Permalink | Reply
    Tags: M.I.T. News, Microfluidics

    From M.I.T.: “Sorting cells with sound waves” 

    MIT News

    August 25, 2014
    Anne Trafton | MIT News Office

    Acoustic device that separates tumor cells from blood cells could help assess cancer’s spread.

    Illustration: Christine Daniloff/MIT

    Researchers from MIT, Pennsylvania State University, and Carnegie Mellon University have devised a new way to separate cells by exposing them to sound waves as they flow through a tiny channel. Their device, about the size of a dime, could be used to detect the extremely rare tumor cells that circulate in cancer patients’ blood, helping doctors predict whether a tumor is going to spread.

    Separating cells with sound offers a gentler alternative to existing cell-sorting technologies, which require tagging the cells with chemicals or exposing them to stronger mechanical forces that may damage them.

    “Acoustic pressure is very mild and much smaller in terms of forces and disturbance to the cell. This is a most gentle way to separate cells, and there’s no artificial labeling necessary,” says Ming Dao, a principal research scientist in MIT’s Department of Materials Science and Engineering and one of the senior authors of the paper, which appears this week in the Proceedings of the National Academy of Sciences.

    Subra Suresh, president of Carnegie Mellon, the Vannevar Bush Professor of Engineering Emeritus, and a former dean of engineering at MIT, and Tony Jun Huang, a professor of engineering science and mechanics at Penn State, are also senior authors of the paper. Lead authors are MIT postdoc Xiaoyun Ding and Zhangli Peng, a former MIT postdoc who is now an assistant professor at the University of Notre Dame.

    The researchers have filed for a patent on the device, which they have demonstrated can be used to separate rare circulating cancer cells from white blood cells.

    To sort cells using sound waves, scientists have previously built microfluidic devices with two acoustic transducers, which produce sound waves on either side of a microchannel. When the two waves meet, they combine to form a standing wave (a wave that remains in constant position). This wave produces a pressure node, or line of low pressure, running parallel to the direction of cell flow. Cells that encounter this node are pushed to the side of the channel; the distance of cell movement depends on their size and other properties such as compressibility.

    However, these existing devices are inefficient: Because there is only one pressure node, cells can be pushed aside only short distances.

    The new device overcomes that obstacle by tilting the sound waves so they run across the microchannel at an angle — meaning that each cell encounters several pressure nodes as it flows through the channel. Each time it encounters a node, the pressure guides the cell a little further off center, making it easier to capture cells of different sizes by the time they reach the end of the channel.
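A rough sketch of the underlying physics: in a standing wave, the primary acoustic radiation force on a small particle scales with the particle's volume times an "acoustic contrast factor" set by its density and compressibility relative to the fluid. The convention used below (Yosioka-Kawasima) and the cell properties are generic textbook-style assumptions, not values from the paper:

```python
# Size scaling behind acoustic sorting in a tilted standing wave.
# The radiation force on a small particle scales with its volume (a^3)
# times an acoustic contrast factor; one common form of that factor
# (Yosioka-Kawasima convention) is used below. Property values are
# rough assumptions, not from the paper.

def contrast_factor(rho_p, rho_f, kappa_p, kappa_f):
    """Acoustic contrast factor for a small compressible sphere in a fluid."""
    return (5 * rho_p - 2 * rho_f) / (2 * rho_p + rho_f) - kappa_p / kappa_f

# Assumed properties: a generic cell in water
phi = contrast_factor(rho_p=1050.0, rho_f=1000.0,        # densities, kg/m^3
                      kappa_p=4.0e-10, kappa_f=4.5e-10)  # compressibilities, 1/Pa
print(f"contrast factor: {phi:.2f}")   # positive: cell is pushed toward the node

# Force ~ a^3, so a 20-micron tumor cell is pushed much harder than a
# 12-micron white blood cell and drifts farther at each pressure node.
force_ratio = (20.0 / 12.0) ** 3
print(f"force ratio, 20 um vs 12 um cell: {force_ratio:.1f}")
```

The cubic size dependence is why the roughly 20-micron MCF-7 cells discussed below migrate so much farther than 12-micron white blood cells by the end of the channel.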

    This simple modification dramatically boosts the efficiency of such devices, says Taher Saif, a professor of mechanical science and engineering at the University of Illinois at Urbana-Champaign. “That is just enough to make cells of different sizes and properties separate from each other without causing any damage or harm to them,” says Saif, who was not involved in this work.

    In this study, the researchers first tested the system with plastic beads, finding that it could separate beads with diameters of 9.9 and 7.3 microns (thousandths of a millimeter) with about 97 percent accuracy. They also devised a computer simulation that can predict a cell’s trajectory through the channel based on its size, density, and compressibility, as well as the angle of the sound waves, allowing them to customize the device to separate different types of cells.

    To test whether the device could be useful for detecting circulating tumor cells, the researchers tried to separate breast cancer cells known as MCF-7 cells from white blood cells. These two cell types differ in size (20 microns in diameter for MCF-7 and 12 microns for white blood cells), as well as density and compressibility. The device successfully recovered about 71 percent of the cancer cells; the researchers plan to test it with blood samples from cancer patients to see how well it can detect circulating tumor cells in clinical settings. Such cells are very rare: A 1-milliliter sample of blood may contain only a few tumor cells.

    “If you can detect these rare circulating tumor cells, it’s a good way to study cancer biology and diagnose whether the primary cancer has moved to a new site to generate metastatic tumors,” Dao says.

    “This method is a step forward for detection of circulating tumor cells in the body. It has the potential to offer a safe and effective new tool for cancer researchers, clinicians and patients,” Suresh says.

    The research was funded by the National Institutes of Health and the National Science Foundation.

    See the full article, with video, here.




  • richardmitnick 11:06 am on August 21, 2014 Permalink | Reply
    Tags: M.I.T. News, Software Engineering

    From M.I.T.: “Unlocking the potential of simulation software” 

    MIT News

    August 21, 2014
    Rob Matheson | MIT News Office

    Novel software by Akselos drastically increases speed, ease of 3-D engineering simulations.

    With a method known as finite element analysis (FEA), engineers can generate 3-D digital models of large structures to simulate how they’ll fare under stress, vibrations, heat, and other real-world conditions.

    A screenshot of Akselos’ software running in a Web browser for 2.01x. This app shows the stresses in the landing gear for a solar-powered airplane.
    Courtesy of Akselos

    Used for mapping out large-scale structures — such as mining equipment, buildings, and oil rigs — these simulations require intensive computation done by powerful computers over many hours, costing engineering firms much time and money.

    Now MIT spinout Akselos has developed novel software, based on years of research at the Institute, that uses precalculated supercomputer data for structural components — like simulated “Legos” — to solve FEA models in seconds.

    A simulation that could take hours with conventional FEA software, for instance, could be done in seconds with Akselos’ platform.

    Hundreds of engineers in the mining, power-generation, and oil and gas industries are now using the Akselos software. The startup is also providing software for an MITx course on structural engineering.

    With its technology, Akselos aims to make 3-D simulations more accessible worldwide to promote efficient engineering design, says David Knezevic, Akselos’ chief technology officer, who co-founded the startup with former MIT postdoc Phuong Huynh and alumnus Thomas Leurent SM ’01.

    “We’re trying to unlock the value of simulation software, since for many engineers current simulation software is far too slow and labor-intensive, especially for large models,” Knezevic says. “High-fidelity simulation enables more cost-effective designs, better use of energy and materials, and generally an increase in overall efficiency.”

    “Simulation components”

    Akselos’ software runs on a novel technique called the “reduced basis (RB) component method,” co-invented by Anthony Patera, the Ford Professor of Engineering at MIT, and Knezevic and Huynh. (The technique builds on a decade of research by Patera’s group.)

    This technique merges the concept of the RB method — which reproduces expensive FEA results by solving related calculations that are much faster — with the idea of decomposing larger simulations into an assembly of components.

    “We developed a component-based version of the reduced basis method, which enables users to build large and complex 3-D models out of a set of parameterized components,” Knezevic says.

    In 2010, the firm’s founders were part of a team, led by Patera, that used that technique to create a mobile app that displayed supercomputer simulations, in seconds, on a smartphone.

    A supercomputer first presolved problems — such as fluid flow around a spherical obstacle in a pipe — that had a known form, but for dozens of different parameters. (These parameters were automatically chosen to cover a range of possible solutions.) When app users plugged in custom parameters for problems — such as the diameter of that spherical obstacle — the app would compute a solution for the new parameters by referencing the precomputed data.
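A minimal sketch of this offline/online split, using a toy one-dimensional diffusion problem (not Akselos' actual formulation): the system matrix depends affinely on two material parameters, so the expensive linear algebra can be projected onto a small snapshot basis once, and every later parameter query reduces to a tiny dense solve:

```python
# Toy reduced-basis workflow (not Akselos' code): 1-D diffusion with
# conductivity mu1 on the left half of the domain and mu2 on the right.
# The system matrix is affine in the parameters, A(mu) = mu1*A1 + mu2*A2,
# so the heavy work is done once offline; each online query is cheap.
import numpy as np

n = 200                                    # fine-grid unknowns (the "FEA" size)
h = 1.0 / (n + 1)
D = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2          # second-difference matrix
left = np.zeros(n)
left[: n // 2] = 1.0                       # indicator of the left half
A1 = D * left[:, None]                     # rows where conductivity is mu1
A2 = D * (1.0 - left)[:, None]             # rows where conductivity is mu2
f = np.ones(n)                             # unit source term

def solve_full(mu1, mu2):
    """Expensive reference solve on the fine grid."""
    return np.linalg.solve(mu1 * A1 + mu2 * A2, f)

# Offline: snapshot solutions at sampled parameters, compress to a basis,
# and pre-project the operators onto it.
snapshots = np.column_stack([solve_full(m1, m2)
                             for m1, m2 in [(1, 1), (1, 5), (5, 1)]])
V = np.linalg.svd(snapshots, full_matrices=False)[0][:, :2]  # 2-mode basis
A1r, A2r, fr = V.T @ A1 @ V, V.T @ A2 @ V, V.T @ f           # stored once

def solve_reduced(mu1, mu2):
    """Cheap online solve: a 2x2 system instead of 200x200."""
    return V @ np.linalg.solve(mu1 * A1r + mu2 * A2r, fr)

u_full = solve_full(3.0, 1.5)              # parameters not in the samples
u_rb = solve_reduced(3.0, 1.5)
err = np.linalg.norm(u_rb - u_full) / np.linalg.norm(u_full)
print(f"relative error of 2-mode reduced solution: {err:.1e}")
```

This toy problem's solution manifold happens to be exactly two-dimensional, so two modes reproduce the fine-grid solve essentially to machine precision; real component libraries store many more modes per component and many parameters per mode.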

    Today’s Akselos software runs on a similar principle, but with new software and a cloud-based service. A supercomputer precalculates individual components, such as a simple tube or a complex mechanical part. “And this creates a big data footprint for each one of these components, which we push to the cloud,” Knezevic says.

    These components contain adjustable parameters, which enable users to vary properties, such as geometry, density, and stiffness. Engineers can then access and customize a library of precalculated components, drag and drop them into an “assembler” platform, and connect them to build a full simulation. After that, the software will reference the precomputed data to create a highly detailed 3-D simulation in seconds.

    In one demonstration, for instance, a mining company used components available in the Akselos library to rapidly create a simulation of shiploader infrastructure — complete with high-stress “hot spots” — that needed inspection. When on-site inspectors then found cracks, they relayed that information to the engineer, who added the damage to the simulation, and created modified simulations within a few minutes.

    “The software also allows people to model the machinery in its true state,” Knezevic says. “Often infrastructure has been in use for decades and is far from pristine — with damage, or holes, or corrosion — and you want to represent those defects,” he says. “That’s not simple for engineers today, since with other software it’s not feasible to simulate large structures in full 3-D detail.”

    Ultimately, pushing the data to the cloud has helped Akselos, by leveraging the age-old tradeoff between speed and storage: By storing and reusing more data, algorithms can do less work and hence finish more quickly.

    “These days, with cloud technology, storing lots of data is no big deal. We store a lot more data than other methods, but that data, in turn, allows us to go faster, because we’re able to reuse as much precomputed data as possible,” he says.

    Bringing technology to the world

    Akselos was founded in 2012, after Knezevic and Huynh, along with Leurent — who actually started FEA work with Patera’s group back in 2000 — earned a Deshpande innovation grant for their “supercomputing-on-a-smartphone” innovation.

    “That was a trigger,” Knezevic says. “Our passion and goal has always been to bring new technology to the world. That’s where the Deshpande Center and the MIT innovation ecosystem are great.”

    From there, Akselos grew with additional help from MIT’s Venture Mentoring Service (VMS), whose mentors guided the team in fundraising, sales, opening a Web platform to users, and hiring.

    “We needed a sounding board,” Knezevic says. “We’d go into meetings and bounce ideas around to help us make good decisions. I think all our decisions were influenced by that type of discussion. It’s a real luxury that you don’t have in other places.”

    To expand its visibility, and to reconnect with the academic sphere, Akselos has teamed with Simona Socrate, a principal research scientist in mechanical engineering at MIT, who is using a limited version of the startup’s software in her MITx class, 2.01x (Elements of Structures).

    Feedback from students has been positive, Knezevic says. Primarily, he hears that the software is allowing students to “build intuition for the physics of structures beyond what they could see by simply solving math problems.”

    “In 2.01x the students learn about axial loading, bending, and torsion — we have apps for each case so they can visualize the stress, strain, and displacement in 3-D in their browser,” he says. “We think it’s a great way to show students the value of fast, 3-D simulations.”

    Commercially, Akselos is expanding, hiring more employees in its three branches — in Boston, Vietnam, and Switzerland — building a community of users, and planning to continue its involvement with edX classes.

    On Knezevic’s end, at the Boston office, it’s all about software development, tailoring features to customer needs — a welcome challenge for the longtime researcher.

    “In academia, typically only you and a few colleagues use the software,” he says. “But in a company you have people all over the world playing with it and testing it, saying, ‘This button needs to be there’ or ‘We need this new type of analysis.’ Everything revolves around the customer. But it was good to have that solid footing in academic work that we could build on.”

    See the full article here.




  • richardmitnick 9:06 am on August 21, 2014 Permalink | Reply
    Tags: Drones, M.I.T. News

    From M.I.T.: “Delivery by drone” 

    MIT News

    August 21, 2014
    Jennifer Chu | MIT News Office

    In the near future, the package that you ordered online may be deposited at your doorstep by a drone: Last December, online retailer Amazon announced plans to explore drone-based delivery, suggesting that fleets of flying robots might serve as autonomous messengers that shuttle packages to customers within 30 minutes of an order.


    To ensure safe, timely, and accurate delivery, drones would need to deal with a degree of uncertainty in responding to factors such as high winds, sensor measurement errors, or drops in fuel. But such “what-if” planning typically requires massive computation, which can be difficult to perform on the fly.

    Now MIT researchers have come up with a two-pronged approach that significantly reduces the computation associated with lengthy delivery missions. The team first developed an algorithm that enables a drone to monitor aspects of its “health” in real time. With the algorithm, a drone can predict its fuel level and the condition of its propellers, cameras, and other sensors throughout a mission, and take proactive measures — for example, rerouting to a charging station — if needed.
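The proactive check described above can be illustrated with a toy fuel model: before committing to a delivery leg, predict the fuel remaining at the destination and reroute to a charging station if it would fall below a reserve. The function names, the linear burn model, and all numbers are illustrative assumptions, not the MIT team's actual algorithm:

```python
# Toy health-aware routing check. All names and numbers are illustrative.

def predict_fuel(fuel, distance, burn_rate, headwind_factor=1.0):
    """Fuel left after flying `distance`, with wind inflating consumption."""
    return fuel - distance * burn_rate * headwind_factor

def next_waypoint(fuel, dist_to_goal, burn_rate,
                  headwind_factor=1.0, reserve=0.1):
    """Return 'goal' if the leg leaves at least `reserve` fuel, else 'charger'."""
    if predict_fuel(fuel, dist_to_goal, burn_rate, headwind_factor) >= reserve:
        return "goal"
    return "charger"

# Calm air: 1.0 - 8 * 0.1 = 0.2 fuel remains, above the reserve.
print(next_waypoint(1.0, 8.0, 0.1))        # enough fuel -> "goal"
# Strong headwind inflates burn by 1.5x, so the drone reroutes to recharge.
print(next_waypoint(1.0, 8.0, 0.1, 1.5))   # -> "charger"
```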

    The researchers also devised a method for a drone to efficiently compute its possible future locations offline, before it takes off. The method simplifies all potential routes a drone may take to reach a destination without colliding with obstacles.

    In simulations involving multiple deliveries under various environmental conditions, the researchers found that their drones delivered as many packages as those that lacked health-monitoring algorithms — but with far fewer failures or breakdowns.

    “With something like package delivery, which needs to be done persistently over hours, you need to take into account the health of the system,” says Ali-akbar Agha-mohammadi, a postdoc in MIT’s Department of Aeronautics and Astronautics. “Interestingly, in our simulations, we found that, even in harsh environments, out of 100 drones, we only had a few failures.”

    Agha-mohammadi will present details of the group’s approach in September at the IEEE/RSJ International Conference on Intelligent Robots and Systems, in Chicago. His co-authors are MIT graduate student Kemal Ure; Jonathan How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics; and John Vian of Boeing.

    Tree of possibilities

    Planning an autonomous vehicle’s course often involves an approach called Markov Decision Process (MDP), a sequential decision-making framework that resembles a “tree” of possible actions. Each node along a tree can branch into several potential actions — each of which, if taken, may result in even more possibilities. As Agha-mohammadi explains it, MDP is “the process of reasoning about the future” to determine the best sequence of policies to minimize risk.

    MDP, he says, works reasonably well in environments with perfect measurements, where the result of one action will be observed perfectly. But in real-life scenarios, where there is uncertainty in measurements, such sequential reasoning is less reliable. For example, even if a command is given to turn 90 degrees, a strong wind may prevent that command from being carried out.
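The "tree of possibilities" can be made concrete with a tiny finite-horizon example: each action from a state has a set of probabilistic outcomes, and the planner picks the action with the best expected total reward. The states, actions, probabilities, and rewards below are made up for illustration, not drawn from the paper:

```python
# Tiny finite-horizon MDP: pick the action whose expected value is best.
# transitions[state][action] = list of (probability, next_state, reward);
# all entries here are illustrative.
transitions = {
    "start": {
        "short_route": [(0.8, "goal", 10.0), (0.2, "crash", -100.0)],
        "long_route":  [(1.0, "mid", -1.0)],
    },
    "mid": {
        "continue": [(0.95, "goal", 10.0), (0.05, "crash", -100.0)],
    },
}

def value(state, depth):
    """Best expected total reward from `state`, looking `depth` steps ahead."""
    if depth == 0 or state not in transitions:
        return 0.0
    return max(
        sum(p * (r + value(s2, depth - 1)) for p, s2, r in outcomes)
        for outcomes in transitions[state].values()
    )

def best_action(state, depth):
    return max(
        transitions[state],
        key=lambda a: sum(p * (r + value(s2, depth - 1))
                          for p, s2, r in transitions[state][a]),
    )

# The risky shortcut has expected value -12; the detour nets +3.5.
print(best_action("start", 2))
```

Note how even this two-step tree already requires reasoning over every branch; real missions with many steps are what make the computation explode.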

    Instead, the researchers chose to work with a more general framework of Partially Observable Markov Decision Processes (POMDP). This approach generates a similar tree of possibilities, although each node represents a probability distribution, or the likelihood of a given outcome. Planning a vehicle’s route over any length of time, therefore, can result in an exponential growth of probable outcomes, which can be a monumental task in computing.

    Agha-mohammadi chose to simplify the problem by splitting the computation into two parts: vehicle-level planning, such as a vehicle’s location at any given time; and mission-level, or health planning, such as the condition of a vehicle’s propellers, cameras, and fuel levels.

    For vehicle-level planning, he developed a computational approach to POMDP that essentially funnels multiple possible outcomes into a few most-likely outcomes.

    “Imagine a huge tree of possibilities, and a large chunk of leaves collapses to one leaf, and you end up with maybe 10 leaves instead of a million leaves,” Agha-mohammadi says. “Then you can … let this run offline for say, half an hour, and map a large environment, and accurately predict the collision and failure probabilities on different routes.”
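A crude version of that leaf-collapsing intuition is to keep only the k most probable outcomes of a belief distribution and renormalize, so the planning tree stays small. This is a sketch of the pruning idea only, not the actual POMDP machinery in the paper:

```python
# Collapse a belief distribution to its k most likely outcomes.
# The belief states and probabilities below are illustrative.

def collapse(belief, k):
    """belief: dict mapping outcome -> probability. Keep top-k, renormalize."""
    top = sorted(belief.items(), key=lambda kv: kv[1], reverse=True)[:k]
    total = sum(p for _, p in top)
    return {outcome: p / total for outcome, p in top}

belief = {"on_route": 0.55, "drifted_left": 0.25, "drifted_right": 0.15,
          "near_obstacle": 0.04, "lost": 0.01}
# Keeps the two most likely outcomes, renormalized to sum to 1.
print(collapse(belief, 2))
```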

    He says that planning out a vehicle’s possible positions ahead of time frees up a significant amount of computational energy, which can then be spent on mission-level planning in real time. In this regard, he and his colleagues used POMDP to generate a tree of possible health outcomes, including fuel levels and the status of sensors and propellers.

    Proactive delivery

    The researchers combined the two computational approaches, and ran simulations in which drones were tasked with delivering multiple packages to different addresses under various wind conditions and with limited fuel. They found that drones operating under the two-pronged approach were more proactive in preserving their health, rerouting to a recharge station midmission to keep from running out of fuel. Even with these interruptions, the team found that these drones were able to deliver just as many packages as those that were programmed to simply make deliveries without considering health.

    Going forward, the team plans to test the route-planning approach in actual experiments. The researchers have attached electromagnets to small drones, or quadrotors, enabling them to pick up and drop off small parcels. The team has also programmed the drones to land on custom-engineered recharge stations.

    “We believe in the near future, in a lab setting, we can show what we’re gaining with this framework by delivering as many packages as we can while preserving health,” Agha-mohammadi says. “Not only the drone, but the package might be important, and if you fail, it could be a big loss.”

    This work was supported by Boeing.

    See the full article here.




  • richardmitnick 8:01 am on August 19, 2014 Permalink | Reply
    Tags: M.I.T. News

    From M.I.T.: “The History Inside Us” 

    MIT News

    August 19, 2014
    Christine Kenneally

    Improvements in DNA analysis are helping us rewrite the past and better grasp what it means to be human.


    Every day our DNA breaks a little. Special enzymes keep our genome intact while we’re alive, but after death, once the oxygen runs out, there is no more repair. Chemical damage accumulates, and decomposition brings its own kind of collapse: membranes dissolve, enzymes leak, and bacteria multiply. How long until DNA disappears altogether? Since the delicate molecule was discovered, most scientists had assumed that the DNA of the dead was rapidly and irretrievably lost. When Svante Pääbo, now the director of the Max Planck Institute for Evolutionary Anthropology in Germany, first considered the question more than three decades ago, he dared to wonder if it might last beyond a few days or weeks. But Pääbo and other scientists have now shown that if only a few of the trillions of cells in a body escape destruction, a genome may survive for tens of thousands of years.

    An example of the results of automated chain-termination DNA sequencing.

    In his first book, Neanderthal Man: In Search of Lost Genomes, Pääbo logs the genesis of one of the most groundbreaking scientific projects in the history of the human race: sequencing the genome of a Neanderthal, a human-like creature who lived until about 40,000 years ago. Pääbo’s tale is part hero’s journey and part guidebook to shattering scientific paradigms. He began dreaming about the ancients on a childhood trip to Egypt from his native Sweden. When he grew up, he attended medical school and studied molecular biology, but the romance of the past never faded. As a young researcher, he tried to mummify a calf liver in a lab oven and then extract DNA from it. Most of Pääbo’s advisors saw ancient DNA as a “quaint hobby,” but he persisted through years of disappointing results, patiently awaiting technological innovation that would make the work fruitful. All the while, Pääbo became adept at recruiting researchers, luring funding, generating publicity, and finding ancient bones.

    Eventually, his determination paid off: in 1996, he led the effort to sequence part of the Neanderthal mitochondrial genome. (Mitochondria, which serve as cells’ energy packs, appear to be remnants of an ancient single-celled organism, and they have their own DNA, which children inherit from their mothers. This DNA is simpler to read than the full human genome.) Finally, in 2010, Pääbo and his colleagues published the full Neanderthal genome.

    That may have been one of the greatest feats of modern biology, yet it is also part of a much bigger story about the extraordinary utility of DNA. For a long time, we have seen the genome as a tool for predicting the future. Do we have the mutation for Huntington’s? Are we predisposed to diabetes? But it may have even more to tell us about the past: about distant events and about the network of lives, loves, and decisions that connects them.


    Long before research on ancient DNA took off, Luigi Cavalli-Sforza made the first attempt to rebuild the history of the world by comparing the distribution of traits in different living populations. He started with blood types; much later, his popular 2001 book Genes, Peoples, and Languages explored demographic history via languages and genes. Big historical arcs can also be inferred from the DNA of living people, such as the fact that all non-Africans descend from a small band of humans that left Africa 60,000 years ago. The current distribution across Eurasia of a certain Y chromosome—which fathers pass to their sons—rather neatly traces the outline of the Mongol Empire, leading researchers to propose that it comes from Genghis Khan, who pillaged and raped his way across the continent in the 13th century.

    But in the last few years, geneticists have found ways to explore not just big events but also the dynamics of populations through time. A 2014 study used the DNA of ancient farmers and hunter-gatherers from Europe to investigate an old question: Did farming sweep across Europe and become adopted by the resident hunter-gatherers, or did farmers sweep across the continent and replace the hunter-gatherers? The researchers sampled ancient individuals who were identified as either farmers or hunters, depending on how they were buried and what goods were buried with them. A significant difference between the DNA of the two groups was found, suggesting that even though there may have been some flow of hunter-gatherer DNA into the farmers’ gene pool, for the most part the farmers replaced the hunter-gatherers.

    Looking at more recent history, Peter Ralph and Graham Coop compared small segments of the genome across Europe and found that any two modern Europeans who lived in neighboring populations, such as Belgium and Germany, shared between two and 12 ancestors over the previous 1,500 years. They identified tantalizing variations as well. Most of the common ancestors of Italians seem to have lived around 2,500 years ago, dating to the time of the Roman Republic, which preceded the Roman Empire. Though modern Italians share ancestors within the last 2,500 years, they share far fewer of them than other Europeans share with their own countrymen. In fact, Italians from different regions of Italy today have about the same number of ancestors in common with one another as they have with people from other countries. The genome reflects the fact that until the 19th century Italy was a group of small states, not the larger country we know today.


    Significant events in British history suggest that the genetics of Wales and some remote parts of Scotland should be different from genetics in the rest of Britain, and indeed, a standard population analysis on British people separates these groups out. But this year scientists led by Peter Donnelly at Oxford uncovered a more fine-grained relationship between genetics and history. By tracking subtle patterns across the genomes of modern Britons whose ancestors lived in particular rural areas, they found at least 17 distinct clusters that probably reflect different groups in the historic population of Britain. This work could help explain what happened during the Dark Ages, when no written records were made—for example, how much ancient British DNA was swamped by the invading Saxons of the fifth century.

    The distribution of certain genes in modern populations tells us about cultural events and choices, too: after some groups decided to drink the milk of other mammals, they evolved the ability to tolerate lactose. The descendants of groups that didn’t make this choice don’t tolerate lactose well even today.


    Analyzing the DNA of the living is much easier than analyzing ancient DNA, which is always vulnerable to contamination. The first analyses of Neanderthal mitochondrial DNA were performed in an isolated lab that was irradiated with UV light each night to destroy DNA carried in on dust. Researchers wore face shields, sterile gloves, and other gear, and if they entered another lab, Pääbo would not allow them back that day. Still, controlling contamination only took Pääbo’s team to the starting line. The real revolution in analysis of ancient DNA came in the late 1990s, with second-generation DNA sequencing techniques. Pääbo replaced Sanger sequencing, invented in the 1970s, with a technique called pyrosequencing, which meant that instead of sequencing 96 fragments of ancient DNA at a time, he could sequence hundreds of thousands.

    Such breakthroughs made it possible to answer one of the longest-running questions about Neanderthals: did they mate with humans? There was scant evidence that they had, and Pääbo himself believed such a union was unlikely because he had found no trace of Neanderthal genetics in human mitochondrial DNA. He suspected that humans and Neanderthals were biologically incompatible. But now that the full Neanderthal genome has been sequenced, we can see that 1 to 3 percent of the genome of non-Africans living today contains variations, known as alleles, that apparently originated with Neanderthals. That indicates that humans and Neanderthals mated and had children, and that those children’s children eventually led to many of us. The fact that sub-Saharan Africans do not carry the same Neanderthal DNA suggests that Neanderthal-human hybrids were born just as humans were expanding out of Africa 60,000 years ago and before they colonized the rest of the world. In addition, the way Neanderthal alleles are distributed in the human genome tells us about the forces that shaped lives long ago, perhaps helping the earliest non-Africans adapt to colder, darker regions. Some parts of the genome with a high frequency of Neanderthal variants affect hair and skin color, and the variants probably made the first Eurasians lighter-skinned than their African ancestors.

    Ancient DNA will almost certainly complicate other hypotheses, like the ­African-origin story, with its single migratory human band. Ancient DNA also reveals phenomena that we have no other way of knowing about. When Pääbo and colleagues extracted DNA from a few tiny bones and a couple of teeth found in a cave in the Altai Mountains in Siberia, they discovered an entirely new sister group, the Denisovans. Indigenous Australians, Melanesians, and some groups in Asia may have up to 5 percent Denisovan DNA, in addition to their Neanderthal DNA.

    In a very short amount of time, a number of ancients have been sequenced by teams all over the world, and the growing library of their genomes has facilitated a new kind of population genetics. What is it that DNA won’t be able to tell us about the past? It may all come down to what happened in the first moments or days after someone’s death. If, for some reason, cells dry out quickly—if you die in a desert or a dry cave, if you are frozen or mummified—post-mortem damage to DNA can be halted, but it may never be possible to sequence DNA from remains found in wet, tropical climates. Still, even working with only the scattered remains that we have found so far, we keep gaining insights into ancient history. One of the remaining mysteries, Pääbo observes, is why modern humans, unlike their archaic cousins, spread all over the globe and dramatically reshaped the environment. What made us different? The answer, he believes, lies waiting in the ancient genomes we have already sequenced.

    There is some irony in the fact that Pääbo’s answer will have to wait until we get more skillful at reading our own genome. We are at the very beginning stages of understanding how the human genome works, and it is only once we know ourselves better that we will be able to see what we had in common with Neanderthals and what is truly different.

    See the full article here.




  • richardmitnick 7:45 am on August 19, 2014 Permalink | Reply
    Tags: Automation, M.I.T. News   

    From M.I.T.: “Love of Labor” 

    MIT News

    August 19, 2014
    Mattathias Schwartz

    Automation makes things easier, whether it’s on the factory floor or online. Is it also eroding too many of the valuable skills that define us as people?


    Messages move at light speed. Maps speak directions. Groceries arrive at the door. Floors mop themselves. Automation provides irresistible conveniences.

    And yet automation can also be cast as a villain. When machines take over work that once required sweat and skill, humans atrophy into mere button-pushing operators. Laments about automation are as familiar as John Henry, the railroad steel-driver of lore who could not outlast a steam-powered version of himself. The latest is The Glass Cage by Nicholas Carr, who worries about the implications as machines and software advance far past the railroad and the assembly line to the cockpit, the courtroom, and even the battle­field. Machines and computers now do much more than rote mechanical work. They monitor complex systems, synthesize data, learn from experience, and make fine-grained, split-second judgments.

    What will be left for us to do? While economists and policy makers are debating what automation will mean for employment and inequality (see “How Technology Is Destroying Jobs,” July/August 2013), Carr’s book does not sort out those implications. It is about what he fears will be diminished—our autonomy, our feelings of accomplishment, our engagement with the world—if we no longer have to carry out as many difficult tasks, whether at home or at work.


    The centerpiece of his argument is the Yerkes-Dodson curve, which plots the relationship between human performance and the stimulation our tasks provide. Too much stimulation makes us feel panicked and overloaded, but when we have too little stimulation—when our work is too easy—we become lethargic and withdrawn. Activities that provide moderate stimulation yield the highest level of performance and, as Carr argues, turn us into better people in the process.
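The Yerkes-Dodson relationship is typically drawn as an inverted U: performance peaks at moderate stimulation and falls off at both extremes. A toy quadratic makes the shape concrete; the functional form and scale here are illustrative, not Yerkes and Dodson's empirical model:

```python
# Toy inverted-U performance curve: peak at moderate stimulation,
# falling off toward boredom (0) and overload (1). Illustrative only.

def performance(stimulation):
    """stimulation in [0, 1]; performance peaks at 0.5 and falls off symmetrically."""
    return 1.0 - 4.0 * (stimulation - 0.5) ** 2

for s in (0.1, 0.5, 0.9):
    print(s, round(performance(s), 2))
```

In Carr's terms, under-automated work sits on the overload side of the curve and fully automated work on the lethargic side; he wants tools designed to hold us near the peak.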

    Carr, a former executive editor of Harvard Business Review and an occasional contributor to this magazine, has written several books that have challenged common beliefs about technology, like the added value of IT for businesses and the cognitive benefits of Google. In The Glass Cage he is channeling the anxieties of the contemporary workplace. Even talented white-collar workers feel as though they are half a generation from being rendered obsolete by an algorithm. But Carr is not analyzing the economic consequences of automation for the workforce at large. The book begins with a warning to airline pilots from the U.S. Federal Aviation Administration not to rely too much on autopilot. He narrates two crashes, tracing their cause to pilot inattention caused by the autopilot’s lulling effects. This reads like the opening of a utilitarian argument against automation: we ought to let pilots do their jobs because computers lack the judgment necessary to preserve human life during moments of crisis. Later, we learn that the safety records of Airbus planes and the more pilot-oriented planes built by Boeing are more or less identical. Carr’s core complaint is mainly about the texture of living in an automated world—how it affects us at a personal level.

    At times, this seems to be coming from a position of nostalgia, a longing for a past that is perhaps more desirable in retrospect. Take GPS. To Carr, GPS systems are inferior to paper maps because they make navigation too easy—they weaken our own navigational skills. GPS is “not designed to deepen our involvement with our surroundings,” he writes. The problem is, neither are maps. Like GPS, they are tools intended to deliver their user to a desired destination with the least possible hassle. It is true that paper maps require a different set of skills, and anyone who finds this experience of stopping and unfolding and getting lost more enlivening or less emasculating than the new incarnation of way-finding can choose to turn GPS off, or use the two technologies in tandem.

    In the zone

    The classic account of life at the top of the Yerkes-Dodson curve is Mihaly Csikszentmihalyi’s Flow: The Psychology of Optimal Experience, published in 1990. Flow is a concept of almost poetic vagueness, hard to measure and even harder to define. Csikszentmihalyi found it in all kinds of people: athletes, artists, musicians, and craftsmen. What makes “flow” more than a flight of fancy is that almost anyone will recognize the feeling of “losing oneself” in a challenging task or being “in the zone.” As a concept, flow erases the boundary that economists draw between “work” and leisure or recreation, and Carr wants automation to be designed to produce it. Ideally it would have a Goldilocks just-right quality, relieving drudgery but stopping short of doing everything.

    Carr spends most of The Glass Cage treating automation as though it were a problem of unenlightened personal choices—suggesting that we should often opt out of technologies like GPS in favor of manual alternatives. Yet the decision to adopt many other innovations is not always so voluntary. There is often something seductive and even coercive about them. Consider a technology that Carr himself discusses: Facebook, which seeks to automate the management of human relationships. Once the majority has accepted the site’s addictive design and slight utility, it gets harder for any one individual to opt out. (Though Facebook may not look like an example of automation, it is indeed work in disguise. The workers—or “users”—are not paid a wage and the product, personal data, is not sold in a visible or public market, but it does have a residual echo of the machine room. Personal expression and relationships constitute the raw material; the continuously updated feed is the production line.)

    Carr flirts with real anger in The Glass Cage, but he doesn’t go far enough in exploring more constructive pushback to automation. The resistance he endorses is the docile, individualized resistance of the consumer—a photographer who shoots on film, an architect who brainstorms on paper. These are small, personal choices with few broader consequences. The frustrations that Carr diagnoses—the longing for an older world, or a different world, or technologies that embody more humanistic and less exploitative intentions—are widespread. For these alternatives to appear feasible, someone must do the hard work of imagining what they would look like.

    A human operator controls a robot in a British facility that makes metal pedestrian barriers.

    See the full article here.




  • richardmitnick 7:24 am on August 14, 2014 Permalink | Reply
    Tags: M.I.T. News, Suicide

    From M.I.T.: “Could a Genetic Test Predict the Risk for Suicide?” 

    MIT News

    August 13, 2014
    Antonio Regalado

    Scientists are hunting for the genetic basis of suicide and developing suicide DNA tests.

    No one could have predicted that Oscar-winning comedian Robin Williams would kill himself.

    Or could they?

    When someone commits suicide, the reaction is often the same. It’s disbelief, mixed with a recognition that the signs were all there. Depression. Maybe talk of ending one’s life.

    Now, by studying people who think about committing suicide, as well as brains of people who actually did, two groups of genome researchers in the U.S. and Europe are claiming they can use DNA tests to actually predict who will attempt suicide.

    While claims for a suicide test remain preliminary, and controversial, a “suicide gene” is not as fanciful as it sounds. The chance that a person takes his or her own life is in fact heritable, and many scientific teams are now involved in broad expeditions across the human genome to locate suicide’s biological causes.

    Based on such gene research, one startup company, Sundance Diagnostics, based in Boulder, Colorado, says it will begin offering a suicide risk test to doctors next month, but only in connection with patients taking antidepressant drugs like Prozac and Zoloft.

    The Sundance test rests on research findings reported by the Max Planck Institute of Psychiatry in 2012. The German researchers, based in Munich, scanned the genes of 898 people taking antidepressants and identified 79 genetic markers they claimed together had a 91 percent probability of correctly predicting “suicidal ideation,” or imagining the act of suicide.

    It’s well known that after going on antidepressants, some people do begin thinking about killing themselves. The risk is large enough that a decade ago the U.S. Food and Drug Administration slapped a warning on antidepressant pills, saying they “increased the risk … of suicidal thinking and behavior” in children and young adults.

    “The number of completed suicides is not large, but none of us want our loved one to be at risk. You wouldn’t play roulette if it was your child,” says Sundance CEO Kim Bechthold, who licensed the test idea from Max Planck. She says the DNA tests will be carried out on a saliva sample.

    Given how many people take antidepressants, the market for a suicide test could be big. In the U.S., about 11 percent of Americans 12 years and older take antidepressants, according to a 2011 estimate by the U.S. Centers for Disease Control and Prevention.

    For now, however, experts say there are good reasons to view any suicide test with skepticism. Genome studies often turn up apparent connections that later are found not to mean much. Dozens of genes have been linked to suicide, but none in a truly definitive fashion.

    “I don’t think there are any credible genomic tests for suicide risk or prevention,” says Muin J. Khoury, head of the Office of Public Health Genomics at the U.S. Centers for Disease Control and Prevention. According to the CDC, suicide is the 10th most common cause of death in the U.S., accounting for 39,518 deaths in 2011.

    What is certain, says the CDC’s Khoury, is that suicide runs in families. On its list of suicide risk factors, the CDC lists family history as the most important, followed by mistreatment of children, prior suicide attempts, and depression.

    That family connection is what makes scientists certain that genes are involved. In 2013, for instance, Danish researchers looked at 221 adopted children who later in life committed suicide. They found that their biological siblings, raised in different households, were five times as likely to also commit suicide as other people. Identical twins are also more likely to both kill themselves than are two non-identical twins.

    Altogether, epidemiologists believe that 30 percent to 55 percent of the risk that someone takes their own life is inherited, and the risk isn’t linked to any specific mental illness, like depression or schizophrenia.

    That means suicide probably has its own unique genetic causes, says Stella Dracheva, a pathologist who studies the brains of suicide victims at Memorial Sloan Kettering in New York. “Suicide is a very complex condition, but there is a lot of evidence that it has a biological base,” she says. “There is something different in people who commit suicide.”

    In her view, that means it’s worth searching for suicide genes and that a DNA test is also theoretically plausible. She says a test would be particularly useful among veterans or other groups at unusually high risk of harming themselves.

    A person’s life history still has more to do with whether it ends in suicide than genes do. Virginia Willour, a geneticist at the University of Iowa who studies suicidal thinking among bipolar patients, says environmental factors are especially important in preventing suicide. Getting medical treatment, an involved family, and religious beliefs all cut the chance of suicide dramatically.

    Willour’s grandfather was bipolar and killed himself. “I chose to research suicidal behavior because I knew the impact. His suicide was a constant reminder and presence in my childhood,” she says.

    The pain and disbelief surrounding suicide only raises the stakes for scientists claiming they can predict it. The latest report of a possible suicide test came in July from Johns Hopkins University, in Baltimore, where geneticists published a report saying that the presence of alterations to a single gene could predict who will attempt suicide with 80 percent accuracy.

    Johns Hopkins has filed a patent on a suicide test, and the university is attempting to license it.

    That research, carried out by Zachary Kaminsky, an assistant professor of psychiatry at Johns Hopkins, began with a small collection of brains of suicide victims held by the National Institutes of Health. Instead of looking just at DNA, his team studied patterns of methylation, a type of chemical block on genes that can lower their activity. They found that one gene, SKA2, seemed to be blocked often in the suicide brains. They later found the same gene block was common when they tested the blood of a larger number of people having suicidal thoughts.

    “We seem to be able to predict suicidal behavior and attempts, based on seeing these epigenetic changes in the blood,” says Kaminsky. “The caveat is that we have small sample sizes.”

    Kaminsky says that following the report, his e-mail inbox was immediately flooded by people wanting the test. “They wanted to know, if my dad died from suicide, is my son at risk?” he says. They didn’t understand that the type of DNA change he identified probably isn’t the inherited kind, but instead may be the result of stress or some other environmental factor.

    Kaminsky’s publication has drawn some criticism from scientists who say his conclusions were based on thin evidence. They say more data is needed. “It’s a striking finding, but as always, when you look at complex genetics, you need replication. Time will tell if it [stands up],” says Willour.

    The bigger problem, says Dracheva, is that there are simply not enough brains of suicide victims to study. Unlike studies of diabetes or schizophrenia, where scientists can call on thousands or tens of thousands of patients, suicide studies remain small, and their findings much more tentative.

    It’s because they don’t have DNA from enough people who committed suicide that researchers, including those at Hopkins and Max Planck, have had to try connecting the dots between DNA and whether or not people have suicidal thoughts. Yet there’s no straight line between the contemplation of suicide and actually doing it.

    “Who doesn’t think about killing themselves?” says Dracheva.

    See the full article here.

    ScienceSprings relies on technology from

    MAINGEAR computers



  • richardmitnick 8:49 am on August 12, 2014 Permalink | Reply
    Tags: Dinosaurs, M.I.T. News

    From M.I.T.: “Rise of the dinosaurs” 

    MIT News

    August 12, 2014
    Jennifer Chu | MIT News Office

    The Jurassic and Cretaceous periods were the golden age of dinosaurs, during which the prehistoric giants roamed the Earth for nearly 135 million years. Paleontologists have unearthed numerous fossils from these periods, suggesting that dinosaurs were abundant throughout the world. But where and when dinosaurs first came into existence has been difficult to ascertain.

    Collage: Jose-Luis Olivares/MIT (original background photograph courtesy of Malka Machlus from Lamont-Doherty Earth Observatory of Columbia University)

    Fossils discovered in Argentina suggest that the first dinosaurs may have appeared in South America during the Late Triassic, about 230 million years ago — a period when today’s continents were fused in a single landmass called Pangaea. Previously discovered fossils in North America have prompted speculation that dinosaurs didn’t appear there until about 212 million years ago — significantly later than in South America. Scientists have devised multiple theories to explain dinosaurs’ delayed appearance in North America, citing environmental factors or a vast desert barrier.

    depiction of Pangaea

    But scientists at MIT now have a bone to pick with such theories: They precisely dated the rocks in which the earliest dinosaur fossils were discovered in the southwestern United States, and found that dinosaurs appeared there as early as 223 million years ago. What’s more, they demonstrated that these earliest dinosaurs coexisted with close nondinosaur relatives, as well as significantly more evolved dinosaurs, for more than 12 million years. To add to the mystery, they identified a 16-million-year gap, older than the dinosaur-bearing rocks, where there is either no trace of any vertebrates, including dinosaurs, in the rock record, or the corresponding rocks have eroded.

    “Right below that horizon where we find the earliest dinosaurs, there is a long gap in the fossil and rock records across the sedimentary basin,” says Jahan Ramezani, a research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “If the record is not there, it doesn’t mean the dinosaurs didn’t exist. It means that either no fossils were preserved, or we haven’t found them. That tells us the theory that dinosaurs simply started in South America and spread all over the world has no firm basis.”

    Ramezani details the results of his geochronological analysis in the American Journal of Science. The study’s co-authors are Sam Bowring, the Robert R. Shrock Professor of Geology at MIT, and David Fastovsky, professor of geosciences at the University of Rhode Island.

    The isotope chronometer

    The most complete record of early dinosaur evolution can be found in Argentina, where layers of sedimentary rock preserve a distinct evolutionary progression: During the Late Triassic period, preceding the Jurassic, dinosaur “precursors” first appeared, followed by animals that began to exhibit dinosaur-like characteristics, and then advanced, or fully evolved, dinosaurs. Each animal group is found in a distinct rock formation, with very little overlap, revealing a general evolutionary history.

    In comparison, the dinosaur record in North America is a bit muddier. The most abundant fossils from the Late Triassic period have been discovered in layers of rock called the Chinle Formation, which occupies portions of Arizona, New Mexico, Utah, and Colorado, and is best exposed in Petrified Forest National Park. Scientists had previously dated isolated beds of this formation, and determined the earliest dinosaur-like animals, discovered in New Mexico, appeared by 212 million years ago.

    Chinle Badlands, Grand Staircase-Escalante National Monument, Utah, US.

    The Tepees in Petrified Forest National Park in northeastern Arizona, United States. View is toward the northwest from the main park road. According to a National Park Service (NPS) document, rock strata exposed in the Tepees area of the park belong to the Blue Mesa Member of the Chinle Formation and are about 220 to 225 million years old. The colorful bands of mudstone and sandstone were laid down during the Triassic, when the area was part of a huge tropical floodplain.

    Ramezani and Bowring sought to more precisely date the entire formation, including levels in which the earliest dinosaur fossils have been found. The team took samples from exposed layers of sedimentary rock that were derived, in large part, from volcanic debris in various sections of the Chinle Formation. In the lab, the researchers pulverized the rocks and isolated individual microscopic grains of zircon — a uranium-bearing mineral that forms in magma shortly prior to volcanic eruptions. From the moment zircon crystallizes, the decay of uranium to lead begins in the mineral and, as Ramezani explains it, “the chronometer starts.” Researchers can measure the ratio of uranium to lead isotopes to determine the age of the zircon, and, inferentially, the rock in which it was found.
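    The uranium-lead "chronometer" Ramezani describes follows from simple first-order radioactive decay: the daughter-to-parent ratio in a zircon grows as exp(λt) − 1, so a measured lead/uranium ratio can be inverted to give an age. The sketch below illustrates the arithmetic only; the function name, the example ratio, and the use of the ²³⁸U decay system alone are my illustrative assumptions, not details from the study (real U-Pb geochronology cross-checks multiple decay chains and corrects for common lead).

    ```python
    import math

    # Decay constant of uranium-238, per year (standard literature value).
    LAMBDA_U238 = 1.55125e-10

    def u_pb_age(pb206_u238_ratio):
        """Age in years implied by a measured 206Pb/238U ratio.

        From the decay law, the daughter/parent ratio grows as
        Pb/U = exp(lambda * t) - 1, so t = ln(1 + Pb/U) / lambda.
        """
        return math.log(1.0 + pb206_u238_ratio) / LAMBDA_U238

    # A zircon with a 206Pb/238U ratio of about 0.0352 dates to roughly
    # 223 million years, the age of the oldest Chinle dinosaur fossils.
    age = u_pb_age(0.0352)
    print(f"{age / 1e6:.1f} million years")
    ```

    Because the decay constant is known to high precision, the accuracy of such a date rests almost entirely on how precisely the isotope ratio can be measured, which is why individual zircon grains are isolated and analyzed one at a time.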

    The Blue Mesa locality of the Petrified Forest National Park in Arizona contains the Late Triassic continental sedimentary rocks of the Chinle Formation. Near Blue Mesa, the oldest documented dinosaur remains in the Chinle Formation have been found. Courtesy of Malka Machlus from Lamont-Doherty Earth Observatory of Columbia University

    A unique but incomplete record

    The team analyzed individual grains of zircon, and created a precise map of ages for each sedimentary interval of the Chinle Formation. Ramezani found, based on rock ages, that the fossils found in New Mexico are, in fact, not the earliest dinosaurs in North America. Instead, it appears that fossils found in Arizona are older, discovered in rocks as old as 223 million years.

    In this North American mix, the early relatives of dinosaurs apparently coexisted with more evolved dinosaurs for more than 12 million years, according to Ramezani’s analysis.

    “In South America, there is very little overlap,” Ramezani says. “But in North America, we see this unique interval when these groups were coexisting. You could think of it as Neanderthals coexisting with modern humans.”

    While fascinating to think about, Ramezani says this period does not shed much light on when the very first dinosaurs appeared in North America.

    “The fact that our record starts with advanced forms tells us there was a prior history,” Ramezani says. “It’s not just that advanced dinosaurs suddenly appeared 223 million years ago. There must have been prior evolution in North America — we just haven’t identified any earlier dinosaurs yet.”

    He says the answer to when dinosaurs first appeared in North America may lie in a 16-million-year gap, in the lower Chinle Formation and beneath it, which bears no fossils, dinosaurian or otherwise. The absence of any fossils is unremarkable; Ramezani notes that fossil preservation is “an exceptional process, requiring exceptional circumstances.” Dinosaurs may well have first appeared during this period; if they left any fossil evidence, it may have since been erased.

    “Every study like this is a step forward, to try to reconstruct the past,” Ramezani says. “Dinosaurs really rose to the top of the pyramid. What made them so successful, and what were the evolutionary advantages they developed so as to dominate terrestrial ecosystems? It all goes back to their beginning, to the Late Triassic when they just started to appear.”

    The new dates provide a framework against which other theories of dinosaur evolution may be tested, says Raymond Rogers, a professor of geology at Macalester College in Saint Paul, Minn., who was not involved in this work.

    “This is the kind of careful work that needs to be done before evolutionary hypotheses that relate to the origination and diversification of the dinosaurs can be addressed,” Rogers says. “This gap in the Chinle fossil record makes comparing the North American and South American dinosaur records problematic. Existing hypotheses that relate to the timing of dinosaur evolution in North and South America arguably need to be reconsidered in light of this new study.”

    This research was supported by funding from the National Science Foundation.



