Tagged: Caltech

  • richardmitnick 11:37 am on March 22, 2016
    Tags: Caltech

    From Caltech: “Nanoparticle-Based Cancer Therapies Shown to Work in Humans” 


    03/21/2016
    Ker Than

    Andrew Clark, an MD/PhD student in Mark Davis’s lab and the study’s first author, investigates a specimen from the study. Evidence of the nanoparticles in tumor tissue was found using fluorescence microscopy, a technique capable of detecting the chemotherapeutic drug (camptothecin) attached to the nanoparticle. In the nine patients evaluated, the nanoparticles were found only in tumor tissue and not nearby, healthy tissue. Credit: Lance Hayashida/Caltech

    A team of researchers led by Caltech scientists has shown that nanoparticles can function to target tumors while avoiding adjacent healthy tissue in human cancer patients.

    “Our work shows that this specificity, as previously demonstrated in preclinical animal studies, can in fact occur in humans,” says study leader Mark E. Davis, the Warren and Katharine Schlinger Professor of Chemical Engineering at Caltech. “The ability to target tumors is one of the primary reasons for using nanoparticles as therapeutics to treat solid tumors.”

    The findings, published online the week of March 21 in the journal Proceedings of the National Academy of Sciences, demonstrate that nanoparticle-based therapies can act as a “precision medicine” for targeting tumors while leaving healthy tissue intact.

    In the study, Davis and his colleagues examined gastric tumors from nine human patients both before and after infusion with a drug—camptothecin—that was chemically bound to nanoparticles about 30 nanometers in size.

    “Our nanoparticles are so small that if one were to increase the size to that of a soccer ball, the increase in size would be on the same order as going from a soccer ball to the planet Earth,” says Davis, who is also a member of the City of Hope Comprehensive Cancer Center in Duarte, California, where the clinical trial was conducted.
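    A quick arithmetic check of that analogy (back-of-the-envelope numbers of my own, not from the study): comparing a roughly 30-nanometer particle, a standard soccer ball of about 22 centimeters, and Earth's mean diameter, the two size ratios land within a factor of ten of each other.

```python
# Rough scale check: is (soccer ball / nanoparticle) comparable to
# (Earth / soccer ball)? All sizes are approximate.
nanoparticle_m = 30e-9     # ~30 nm nanoparticle
soccer_ball_m  = 0.22      # standard ball diameter
earth_m        = 1.274e7   # mean Earth diameter

print(f"ball / nanoparticle: {soccer_ball_m / nanoparticle_m:.1e}")   # ~7e6
print(f"Earth / ball:        {earth_m / soccer_ball_m:.1e}")          # ~6e7
```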

    The team found that 24 to 48 hours after the nanoparticles were administered, they had localized in the tumor tissues and released their drug cargo, and the drug had had the intended biological effects of inhibiting two proteins that are involved in the progression of the cancer. Equally important, both the nanoparticles and the drug were absent from healthy tissue adjacent to the tumors.

    The nanoparticles are designed to be flexible delivery vehicles. “We can attach different drugs to the nanoparticles, and by changing the chemistry of the bond linking the drug to the nanoparticle, we can alter the release rate of the drug to be faster or slower,” says Andrew Clark, a graduate student in Davis’s lab and the study’s first author.

    Davis says his team’s findings suggest that a phenomenon known as the enhanced permeability and retention (EPR) effect is at work in humans. In the EPR effect, abnormal blood vessels that are “leakier” than normal blood vessels in healthy tissue allow nanoparticles to preferentially concentrate in tumors. Until now, the existence of the EPR effect has been conclusively proven only in animal models of human cancers.

    “Our results don’t prove the EPR effect in humans, but they are completely consistent with it,” Davis says.

    The findings could also help pave the way toward more effective cancer drug cocktails that can be tailored to fight specific cancers and that leave patients with fewer side effects.

    “Right now, if a doctor wants to use multiple drugs to treat a cancer, they often can’t do it because the cumulative toxic effects of the drugs would not be tolerated by the patient,” Davis says. “With targeted nanoparticles, you have far fewer side effects, so it is anticipated that a drug combination can be selected based on the biology and medicine rather than the limitations of the drugs.”

    These nanoparticles are currently being tested in a number of phase-II clinical trials. (Information about trials of the nanoparticles, denoted CRLX101, is available at http://www.clinicaltrials.gov).

    In addition to Davis and Clark, other coauthors on the study, entitled “CRLX101 nanoparticles localize in human tumors and not in adjacent, nonneoplastic tissue after intravenous dosing,” include Devin Wiley (MS ’11, PhD ’13) and Jonathan Zuckerman (PhD ’12); Paul Webster of the Oak Crest Institute of Science; Joseph Chao and James Lin at City of Hope; and Yun Yen of Taipei Medical University, who was at City of Hope and a visitor in the Davis lab at the initiation of the clinical trial. The research was supported by grants from the National Cancer Institute and the National Institutes of Health and by Cerulean Pharma Inc. Davis is a consultant to and holds stock in Cerulean Pharma Inc.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    The California Institute of Technology (commonly referred to as Caltech) is a private research university located in Pasadena, California, United States. Caltech has six academic divisions with strong emphases on science and engineering. Its 124-acre (50 ha) primary campus is located approximately 11 mi (18 km) northeast of downtown Los Angeles. “The mission of the California Institute of Technology is to expand human knowledge and benefit society through research integrated with education. We investigate the most challenging, fundamental problems in science and technology in a singularly collegial, interdisciplinary atmosphere, while educating outstanding students to become creative members of society.”

     
  • richardmitnick 4:06 pm on February 10, 2016
    Tags: Caltech, Owens Valley Long Wavelength Array

    From Caltech: “Chasing Extrasolar Space Weather” 


    02/10/2016
    Lori Dajose

    Earth’s magnetic field acts like a giant shield, protecting the planet from bursts of harmful charged solar particles that could strip away the atmosphere.

    Earth’s magnetosphere

    Gregg Hallinan, an assistant professor of astronomy, aims to detect this kind of space weather on other stars to determine whether planets around these stars are also protected by their own magnetic fields and how that impacts planetary habitability.

    On Wednesday, February 10, at 8 p.m. in Beckman Auditorium, Hallinan will discuss his group’s efforts to detect intense radio emissions from stars and their effects on any nearby planets. Admission is free.

    What do you do?
    I am an astronomer. My primary focus is the study of the magnetic fields of stars, planets, and brown dwarfs—which are kind of an intermediate object between a planet and a star.

    Brown dwarf

    Stars and their planets have intertwined relationships. Our sun, for example, produces coronal mass ejections, or CMEs, which are bubbles of hot plasma explosively ejected from the sun out into the solar system.

    A 2012 solar eruption (coronal mass ejection) captured by NASA’s Solar Dynamics Observatory (SDO)

    Radiation and particles from these solar events bombard the earth and interact with the atmosphere, dominating the local “space weather” in the environment of Earth. Happily, our planet’s magnetic field shields and redirects CMEs toward the polar regions. This causes auroras—the colorful light in the sky commonly known as the Northern or Southern Lights.

    Auroras from around the world

    Our new telescope, the Owens Valley Long Wavelength Array, images the entire sky instantaneously and allows us to monitor extrasolar space weather on thousands of nearby stellar systems.

    Caltech Owens Valley Long Wavelength Array

    When a star produces a CME, it also emits a bright burst of radio waves with a specific signature. If a planet has a magnetic field and it is hit by one of these CMEs, it will also become brighter in radio waves. Those radio signatures are very specific and allow you to measure very precisely the strength of the planet’s magnetic field. I am interested in detecting radio waves from exoplanets—planets outside of our solar system—in order to learn more about what governs whether or not a planet has a magnetic field.
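    The article does not give the underlying relation, but bursts of this kind are commonly interpreted through the electron cyclotron maser mechanism, in which the emission sits near the local electron cyclotron frequency, roughly 2.8 MHz per gauss of magnetic field. The sketch below illustrates that standard relation, not the group's actual analysis; the Jupiter benchmark (decametric bursts cutting off near 40 MHz, implying a polar field of about 14 gauss) is a familiar sanity check.

```python
# Illustrative only: infer a polar magnetic field strength from the
# high-frequency cutoff of auroral radio bursts, assuming emission at the
# electron cyclotron frequency f_ce = e*B / (2*pi*m_e).
import math

E_CHARGE = 1.602176634e-19   # C
M_E      = 9.1093837015e-31  # kg

def field_from_cutoff_gauss(f_cutoff_hz):
    """Magnetic field (gauss) implied by a cyclotron cutoff frequency (Hz)."""
    b_tesla = 2 * math.pi * M_E * f_cutoff_hz / E_CHARGE
    return b_tesla * 1e4     # 1 tesla = 10^4 gauss

# Jupiter's decametric emission cuts off near ~40 MHz -> ~14 G polar field.
print(round(field_from_cutoff_gauss(40e6), 1))
```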

    Why is this important?

    The presence of a magnetic field on a planet can tell us a lot. Like on our own planet, magnetic fields are an important line of defense against the solar wind, particularly explosive CMEs, which can strip a planet of its atmosphere. Mars is a good example of this. Because it didn’t have a magnetic field shielding it from the sun’s solar wind, it was stripped of its atmosphere long ago. So, determining whether a planet has a magnetic field is important in order to determine which planets could possibly have atmospheres and thus could possibly host life.

    How did you get into this line of work?

    From a young age, I was obsessed with astronomy—it’s all I cared for. My parents got me a telescope when I was 7 or 8, and from then on, that was it.

    As a grad student, I was looking at magnetic fields of cool—meaning low-temperature—objects. When I was looking at brown dwarfs, I found that they behave like planets in that they also have auroras. I had the idea that auroras could be the avenue to examine the magnetic fields of other planets. So brown dwarfs were my gateway into exoplanets.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition


     
  • richardmitnick 8:22 pm on February 3, 2016
    Tags: Caltech, EEW earthquake early-warning

    From Caltech: “White House Puts Spotlight on Earthquake Early-Warning System” 


    02/03/2016
    Katie Neith
    Contact:
    Tom Waldman
    (626) 395-5832
    twaldman@caltech.edu

    Since the late 1970s, Caltech seismologist Tom Heaton, professor of engineering seismology, has been working to develop earthquake early-warning (EEW) systems—networks of ground-based sensors that can send data to users when the earth begins to tremble nearby, giving them seconds to potentially minutes to prepare before the shaking reaches them. In fact, Heaton wrote the first paper published on the concept in 1985. EEW systems have been implemented in countries like Japan, Mexico, and Turkey. However, the United States has been slow to regard EEW systems as a priority for the West Coast.

    The earthquake early warning (EEW) UserDisplay in action for a scenario M7.8 earthquake. The most intense colors correspond to very strong ground shaking. The banner on top shows expected shaking at the user site. The number “14” on the left indicates the warning time in seconds, and the expected intensity at the user site is shown in Roman numerals (VII). Other information indicates the epicenter and date/time of the earthquake.

    But on February 2, 2016, the White House held the Earthquake Resilience Summit, signaling a new focus on earthquake safety and EEW systems. There, stakeholders—including Caltech’s Heaton and Egill Hauksson, research professor in geophysics; and U.S. Geological Survey (USGS) seismologist Lucy Jones, a visiting associate in geophysics at Caltech and seismic risk advisor to the mayor of Los Angeles—discussed the need for earthquake early warning and explored steps that can be taken to make such systems a reality.

    At the summit, the Gordon and Betty Moore Foundation announced $3.6 million in grants to advance a West Coast EEW system called ShakeAlert, which received an initial $6 million in funding from the foundation in 2011. The new grants will go to researchers working on the system at Caltech, the USGS, UC Berkeley, and the University of Washington.

    “We have been successfully operating a demonstration system for several years, and we know that it works for the events that have happened in the test period,” says Heaton. “However, there is still significant development that is required to ensure that the system will work reliably in very large earthquakes similar to the great 1906 San Francisco earthquake. This new funding allows us to accelerate the rate at which we develop this critical system.”

    In addition, the Obama Administration outlined new federal commitments to support greater earthquake safety, including an executive order to ensure that new construction of federal buildings is up to code and that federal assets are available for recovery efforts after a large earthquake.

    The commitments follow a December announcement from Congressman Adam Schiff (D-Burbank) that Congress has included $8.2 million in the fiscal year 2016 funding bill specifically designated for a West Coast earthquake early warning system.

    “By increasing the funding for the West Coast earthquake early-warning system, Congress is sending a message to the Western states that it supports this life-saving system. But the federal government cannot do it alone and will need local stakeholders, both public and private, to get behind the effort with their own resources,” said Schiff, in a press release. “The early warning system will give us critical time for trains to be slowed and surgeries to be stopped before shaking hits—saving lives and protecting infrastructure. This early warning system is an investment we need to make now, not after the ‘big one’ hits.”

    ShakeAlert utilizes a network of seismometers—instruments that measure ground motion—widely scattered across the Western states. In California, that network of sensors is called the California Integrated Seismic Network (CISN) and is made up of computerized seismometers that send ground-motion data back to research centers like the Seismological Laboratory at Caltech.

    Here’s how the current ShakeAlert works: a user display opens in a pop-up window on a recipient’s computer as soon as a significant earthquake occurs in California. The screen lists the quake’s estimated location and magnitude based on the sensor data received to that point, along with an estimate of how much time will pass before the shaking reaches the user’s location. The program also gives an approximation of how intense that shaking will be. Since ShakeAlert uses information from a seismic event in progress, people living near the epicenter do not get much—if any—warning, but those farther away could have seconds or even tens of seconds’ notice.
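    As a rough illustration of why warning time grows with distance (a simplified model with assumed numbers, not ShakeAlert's actual algorithm), the sketch below takes damaging S waves to travel at about 3.5 km/s and lumps P-wave detection and alert delivery into a single fixed latency. Close to the epicenter the latency swallows the whole S-wave travel time, which is why nearby users get little or no warning.

```python
# Simplified warning-time estimate: S-wave travel time to the user minus a
# fixed detection-and-alert latency. Speeds and latency are typical values
# chosen for illustration, not ShakeAlert parameters.
V_S = 3.5        # km/s, approximate crustal S-wave speed
LATENCY = 5.0    # s, assumed time to detect the quake and push the alert

def warning_time_s(epicentral_distance_km):
    return max(0.0, epicentral_distance_km / V_S - LATENCY)

for d_km in (10, 40, 100, 200):
    print(f"{d_km:4d} km from epicenter: ~{warning_time_s(d_km):4.0f} s of warning")
```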

    The goal is an improved version of ShakeAlert that will eventually give schools, utilities, industries, and the general public a heads-up in the event of a major temblor.

    Read more about how ShakeAlert works and about Caltech’s development of EEW systems in a feature that ran in the Summer 2013 issue of E&S magazine called Can We Predict Earthquakes?

    See the full article here.

    [If you live in an earthquake-prone area, you can help with identification and notification by joining the Quake-Catcher Network, a project based at Caltech and running on software from BOINC, the Berkeley Open Infrastructure for Network Computing. Please visit Quake-Catcher Network and see what it is all about.]

    Quake-Catcher Network map

    Please help promote STEM in your local schools.


    Stem Education Coalition


     
  • richardmitnick 8:11 pm on January 26, 2016
    Tags: A New Power Source for Earth's Dynamo, Caltech

    From Caltech: “A New Power Source for Earth’s Dynamo” 


    01/26/2016
    Lori Dajose

    Magnesium-bearing minerals transported through the core up to the mantle can cause the convection that powers the geodynamo. Credit: Joseph O’Rourke/Caltech

    The earth’s global magnetic field plays a vital role in our everyday lives, shielding us from harmful solar radiation. The magnetic field, which has existed for billions of years, is caused by a dynamo—or generator—within the mostly molten iron in the earth’s interior; this liquid iron churns in a process called convection. But convection does not happen on its own. It needs a driving force—a power source. Now, graduate student Joseph O’Rourke and David Stevenson, Caltech’s Marvin L. Goldberger Professor of Planetary Science, have proposed a new mechanism that can power this convection in the earth’s interior for all of the earth’s history.

    A paper detailing the findings appears in the January 21 issue of Nature.

    Convection can be seen in such everyday phenomena as a pot of boiling water. Heat at the bottom of the pot causes pockets of fluid to become less dense than the surrounding fluid, and thus to rise. When they reach the surface, the pockets of fluid cool and sink again. This same process occurs in the 1,400-mile-thick layer of molten metal that makes up the outer core.

    The earth consists mostly of the mantle (solid material made of oxides and silicate in which magnesium is prominent) and the core (mainly iron). These two regions are usually thought of as completely separated; that is, the mantle materials do not dissolve in the core materials. They do not mix at the atomic level, much as water does not usually mix with oil. The core has a solid inner part that has been slowly growing throughout the earth’s history, as liquid iron in the planet’s interior solidifies. The outer, liquid part of the core is a layer of molten iron mixed with other elements, including silicon, oxygen, nickel, and a small amount of magnesium. Stevenson and O’Rourke propose that the transfer of the element magnesium in the form of mantle minerals from the outer core to the base of the mantle is the mechanism that powers convection.

    Magnesium is a major element in the mantle, but it has low solubility in the iron core except at very high temperatures—above 7,200 degrees Fahrenheit. As the earth’s core cools, magnesium oxides and magnesium silicates crystallize from the metallic, liquid outer core, much as sugar that has been dissolved in hot water will precipitate as sugar crystals when the water cools. Because these crystals are less dense than iron, they rise to the base of the mantle. The heavier liquid metal left behind then sinks, and this motion, Stevenson argues, may be the mechanism that has sustained convection for over three billion years—the mechanism that in turn powers the global magnetic field.

    “Precipitation of magnesium-bearing minerals from the outer core is 10 times more effective at driving convection than growth of the inner core,” O’Rourke says. “Such minerals are very buoyant and the resulting fluid motions can transport heat effectively. The core only needs to precipitate upwards a layer of magnesium minerals 10 kilometers thick—which seems like a lot, but it’s not much on the scale of the inner and outer cores—in order to drive the outer core’s convection.”
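    For a sense of the energetics (an order-of-magnitude estimate with assumed round numbers, not a calculation from the paper), one can ask how much gravitational power is released if a 10-kilometer-thick layer's worth of buoyant magnesium minerals precipitates out of the outer core over roughly four billion years. The answer lands near half a terawatt, in the general range usually quoted for sustaining the geodynamo.

```python
# Order-of-magnitude estimate only. Density deficit, gravity, and average
# rise distance are assumed round numbers, not values from the Nature paper.
import math

R_CMB    = 3.48e6          # m, core-mantle boundary radius
LAYER    = 1.0e4           # m, the 10-km layer cited in the article
D_RHO    = 5.0e3           # kg/m^3, assumed buoyancy (Mg minerals vs. liquid Fe)
G_CORE   = 8.0             # m/s^2, rough mean gravity in the outer core
RISE     = 1.0e6           # m, assumed average distance the minerals rise
DURATION = 4e9 * 3.156e7   # s, about four billion years

volume = 4 * math.pi * R_CMB**2 * LAYER
power_tw = D_RHO * G_CORE * volume * RISE / DURATION / 1e12
print(f"~{power_tw:.1f} TW of gravitational power, on average")
```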

    Previous models assumed that the steady cooling of iron in the inner core would release heat that could power convection. But most measurements and theoretical work in the past few years on the thermal conductivity of iron—the property that determines how efficiently heat can flow through a metal—indicate that the metal can easily transfer heat without undergoing motion. “Heating up iron at the bottom of the outer core will not cause it to rise up buoyantly—it’s just going to dissipate the heat to its surroundings,” O’Rourke says.

    “Dave had the idea of a magnesium-powered dynamo for a while, but there was supposed to be no magnesium in Earth’s core,” O’Rourke says. “Now, models of planetary formation in the early solar system are showing that Earth underwent frequent impacts with giant planetary bodies. If these violent, energetic events occurred, Earth would have been experiencing much higher temperatures during its formation than previously thought—temperatures that would have been high enough to allow some magnesium to mix into liquid metallic iron.”

    These models made it possible to pursue the idea that the dynamo may be powered by the precipitation of magnesium-bearing minerals. O’Rourke calculated that the amounts of magnesium that would have dissolved in the core during Earth’s hot early stages would have caused other changes in the composition of the mantle that are consistent with other models and measurements. He also calculated that the precipitation of these magnesium minerals would have enough energy to power the dynamo for four billion years.

    Experimental verification of the amount of magnesium that can go into the core is still sparse, O’Rourke and Stevenson say. “Further applications of our proposed mechanism include Venus—where there is no magnetic field—and the abundant exoplanets that are more massive than the Earth but may have similar chemical compositions,” Stevenson says.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition


     
  • richardmitnick 7:00 pm on December 23, 2015
    Tags: Caltech

    From AAAS: “Physicists figure out how to retrieve information from a black hole” 


    23 December 2015
    Adrian Cho

    It would take technologies beyond our wildest dreams to extract the tiniest amount of quantum information from a black hole like this one. NASA; M. Weiss/Chandra X-Ray Center

    Black holes earn their name because their gravity is so strong not even light can escape from them. Oddly, though, physicists have come up with a bit of theoretical sleight of hand to retrieve a speck of information that’s been dropped into a black hole. The calculation touches on one of the biggest mysteries in physics: how all of the information trapped in a black hole leaks out as the black hole “evaporates.” Many theorists think that must happen, but they don’t know how.

    Unfortunately for them, the new scheme may do more to underscore the difficulty of the larger “black hole information problem” than to solve it. “Maybe others will be able to go further with this, but it’s not obvious to me that it will help,” says Don Page, a theorist at the University of Alberta in Edmonton, Canada, who was not involved in the work.

    You can shred your tax returns, but you shouldn’t be able to destroy information by tossing it into a black hole. That’s because, even though quantum mechanics deals in probabilities—such as the likelihood of an electron being in one location or another—the quantum waves that give those probabilities must still evolve predictably, so that if you know a wave’s shape at one moment you can predict it exactly at any future time. Without such “unitarity,” quantum theory would produce nonsensical results such as probabilities that don’t add up to 100%.
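    A compact way to see what unitarity guarantees (a generic quantum-mechanics illustration, nothing specific to black holes): evolving a state with a unitary matrix never changes the total probability, so knowing the state now fixes it at any later time.

```python
# Unitary evolution preserves total probability: U = exp(-iH) for Hermitian H
# satisfies U^dagger U = I, so the norm of any state vector is unchanged.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2          # a random Hermitian "Hamiltonian"
U = expm(-1j * H)                 # the corresponding unitary evolution

psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)        # a normalized state: probabilities sum to 1

print(np.linalg.norm(U @ psi))                  # still 1.0 (up to rounding)
print(np.allclose(U.conj().T @ U, np.eye(4)))   # True
```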

    But suppose you toss some quantum particles into a black hole. At first blush, the particles and the information they encode are lost. That’s a problem, as now part of the quantum state describing the combined black hole-particles system has been obliterated, making it impossible to predict its exact evolution and violating unitarity.

    Physicists think they have a way out. In 1974, British theorist Stephen Hawking argued that black holes can radiate particles and energy. Thanks to quantum uncertainty, empty space roils with pairs of particles flitting in and out of existence. Hawking realized that if a pair of particles from the vacuum popped into existence straddling the black hole’s boundary then one particle could fly into space, while the other would fall into the black hole. Carrying away energy from the black hole, the exiting Hawking radiation should cause a black hole to slowly evaporate. Some theorists suspect information reemerges from the black hole encoded in the radiation—although how remains unclear as the radiation is supposedly random.
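    For a sense of scale (standard textbook formulas, not part of the new result), the Hawking temperature and the corresponding evaporation-time estimate can be evaluated for a solar-mass black hole; the numbers show why evaporation is completely negligible on astrophysical timescales.

```python
# Hawking temperature T = hbar*c^3 / (8*pi*G*M*k_B) and the standard
# evaporation-time estimate t ~ 5120*pi*G^2*M^3 / (hbar*c^4).
import math

HBAR, C, G, K_B = 1.054571817e-34, 2.99792458e8, 6.67430e-11, 1.380649e-23
M_SUN = 1.989e30   # kg

def hawking_temperature_K(mass_kg):
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

def evaporation_time_yr(mass_kg):
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4) / 3.156e7

print(f"{hawking_temperature_K(M_SUN):.1e} K")    # ~6e-8 K
print(f"{evaporation_time_yr(M_SUN):.1e} years")  # ~2e67 years
```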

    Now, Aidan Chatwin-Davies, Adam Jermyn, and Sean Carroll of the California Institute of Technology in Pasadena have found an explicit way to retrieve information from one quantum particle lost in a black hole, using Hawking radiation and the weird concept of quantum teleportation.

    Quantum teleportation enables two partners, Alice and Bob, to transfer the delicate quantum state of one particle such as an electron to another. In quantum theory, an electron can spin one way (up), the other way (down), or literally both ways at once. In fact, its state can be described by a point on a globe in which the north pole signifies up and the south pole signifies down. Lines of latitude denote different mixtures of up and down, and lines of longitude denote the “phase,” or how the up and down parts mesh. However, if Alice tries to measure that state, it will “collapse” one way or the other, up or down, squashing information such as the phase. So she can’t measure the state and send the information to Bob, but must transfer it intact.

    To do that Alice and Bob can share an additional pair of electrons connected by a special quantum link called entanglement. The state of either particle in the entangled pair is uncertain—it simultaneously points everywhere on the globe—but the states are correlated so that if Alice measures her particle from the pair and finds it spinning, say, up, she’ll know instantly that Bob’s electron is spinning down. So Alice has two electrons—the one whose state she wants to teleport and her half of the entangled pair. Bob has just the one from the entangled pair.

    To perform the teleportation, Alice takes advantage of one more strange property of quantum mechanics: that measurement not only reveals something about a system, it also changes its state. So Alice takes her two unentangled electrons and performs a measurement that “projects” them into an entangled state. That measurement breaks the entanglement between the pair of electrons that she and Bob share. But at the same time, it forces Bob’s electron into the state that her to-be-teleported electron was in. It’s as if, with the right measurement, Alice squeezes the quantum information from one side of the system to the other.
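    The bookkeeping is easier to see in code. The sketch below simulates textbook one-qubit teleportation with plain state vectors (no black holes involved): for each of Alice's four possible Bell-measurement outcomes, Bob's qubit ends up in the original state once he applies the matching correction.

```python
import numpy as np

# Pauli operators used for Bob's corrections.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# The (arbitrary, normalized) state Alice wants to teleport.
alpha, beta = 0.6, 0.8j
psi = np.array([alpha, beta], dtype=complex)

# Entangled pair shared between Alice's second qubit and Bob's qubit.
pair = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Full three-qubit state, indexed (Alice data, Alice half of pair, Bob).
state = np.kron(psi, pair).reshape(2, 2, 2)

# The four Bell states Alice can obtain, with Bob's matching correction.
outcomes = {
    "Phi+": (np.array([[1, 0], [0, 1]], dtype=complex) / np.sqrt(2), I2),
    "Phi-": (np.array([[1, 0], [0, -1]], dtype=complex) / np.sqrt(2), Z),
    "Psi+": (np.array([[0, 1], [1, 0]], dtype=complex) / np.sqrt(2), X),
    "Psi-": (np.array([[0, 1], [-1, 0]], dtype=complex) / np.sqrt(2), Z @ X),
}

for name, (bell, correction) in outcomes.items():
    # Project Alice's two qubits onto this Bell state; what remains is
    # Bob's (unnormalized) conditional state.
    bob = np.einsum("ab,abk->k", bell.conj(), state)
    prob = np.vdot(bob, bob).real
    fixed = correction @ (bob / np.sqrt(prob))
    fidelity = abs(np.vdot(psi, fixed))
    print(f"{name}: probability {prob:.2f}, fidelity with original {fidelity:.3f}")
```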

    Chatwin-Davies and colleagues realized that they could teleport the information about the state of an electron out of a black hole, too. Suppose that Alice is floating outside the black hole with her electron. She captures one photon from a pair born from Hawking radiation. Much like an electron, the photon can spin in either of two directions, and it will be entangled with its partner photon that has fallen into the black hole. Next, Alice measures the total angular momentum, or spin, of the black hole—both its magnitude and, roughly speaking, how much it lines up with a particular axis. With those two bits of information in hand, she then tosses in her electron, losing it forever.

    But Alice can still recover the information about the state of that electron, the team reports in a paper in press at Physical Review Letters. All she has to do is once again measure the spin and orientation of the black hole. Those measurements then entangle the black hole and the in-falling photon. They also teleport the state of the electron to the photon that Alice captured. Thus, the information from the lost electron is dragged back into the observable universe.

    Chatwin-Davies stresses that the scheme is not a plan for a practical experiment. After all, it would require Alice to almost instantly measure the spin of a black hole as massive as the sun to within a single atom’s spin. “We like to joke around that Alice is the most advanced scientist in the universe,” he says.

    The scheme also has major limitations. In particular, as the authors note, it works for one quantum particle, but not for two or more. That’s because the recipe exploits the fact that the black hole conserves angular momentum, so that its final spin is equal to its initial spin plus that of the electron. That trick enables Alice to get out exactly two bits of information—the total spin and its projection along one axis—and that’s just enough information to specify the latitude and longitude of the quantum state of one particle. But it’s not nearly enough to recapture all the information trapped in a black hole, which typically forms when a star collapses upon itself.

    To really tackle the black hole information problem, theorists would also have to account for the complex states of the black hole’s interior, says Stefan Leichenauer, a theorist at the University of California, Berkeley. “Unfortunately, all of the big questions we have about black holes are precisely about these internal workings,” he says. “So, this protocol, though interesting in its own right, will probably not teach us much about the black hole information problem in general.”

    However, delving into the interior of black holes would require a quantum mechanical theory of gravity. Of course, developing such a theory is perhaps the grandest goal in all of theoretical physics, one that has eluded physicists for decades.

    See the full article here.

    The American Association for the Advancement of Science is an international non-profit organization dedicated to advancing science for the benefit of all people.

    Please help promote STEM in your local schools.
    Stem Education Coalition

     
  • richardmitnick 1:18 pm on December 22, 2015
    Tags: Caltech

    From Caltech: “Toward Liquid Fuels from Carbon Dioxide” 


    12/22/2015
    Ker Than

    C1 to C2: Connecting carbons by reductive deoxygenation and coupling of CO. Credit: Kyle Horak and Joshua Buss/Caltech

    In the quest for sustainable alternative energy and fuel sources, one viable solution may be the conversion of the greenhouse gas carbon dioxide (CO2) into liquid fuels.

    Through photosynthesis, plants convert sunlight, water, and CO2 into sugars, multicarbon molecules that fuel cellular processes. CO2 is thus both the precursor to the fossil fuels that are central to modern life and the by-product of burning those fuels. The ability to generate synthetic liquid fuels from stable, oxygenated carbon precursors such as CO2 and carbon monoxide (CO) is reminiscent of photosynthesis in nature and is a transformation that is desirable in artificial systems. For about a century, a chemical method known as the Fischer-Tropsch process has been utilized to convert hydrogen gas (H2) and CO to liquid fuels. However, its mechanism is not well understood and, in contrast to photosynthesis, the process requires high pressures (from 1 to 100 times atmospheric pressure) and temperatures (100–300 degrees Celsius).

    More recently, alternative conversion chemistries for the generation of liquid fuels from oxygenated carbon precursors have been reported. Using copper electrocatalysts, CO and CO2 can be converted to multicarbon products. The process proceeds under mild conditions, but how it takes place remains a mystery.

    Now, Caltech chemistry professor Theo Agapie and his graduate student Joshua Buss have developed a model system to demonstrate what the initial steps of a process for the conversion of CO to hydrocarbons might look like.

    The findings, published as an advanced online publication for the journal Nature on December 21, 2015 (and appearing in print on January 7, 2016), provide a foundation for the development of technologies that may one day help neutralize the negative effects of atmospheric accumulation of the greenhouse gas CO2 by converting it back into fuel. Although methods exist to transform CO2 into CO, a crucial next step, the deoxygenation of CO molecules and their coupling to form C–C bonds, is more difficult.

    In their study, Agapie and Buss synthesized a new transition metal complex—a metal atom, in this case molybdenum, bound by one or more supporting molecules known as ligands—that can facilitate the activation and cleavage of a CO molecule. Incremental reduction of the molecule leads to substantial weakening of the C–O bonds of CO. Once weakened, the bond is broken entirely by introducing silyl electrophiles, a class of silicon-containing reagents that can be used as surrogates for protons.

    This cleavage results in the formation of a terminal carbide—a single carbon atom bound to a metal center—that subsequently makes a bond with the second CO molecule coordinated to the metal. Although a carbide is commonly proposed as an intermediate in CO reductive coupling, this is the first direct demonstration of its role in this type of chemistry, the researchers say. Upon C–C bond formation, the metal center releases the C2 product. Overall, this process converts the two CO units to an ethynol derivative and proceeds easily even at temperatures lower than room temperature.

    “To our knowledge, this is the first example of a well-defined reaction that can take two carbon monoxide molecules and convert them into a metal-free ethynol derivative, a molecule related to ethanol; the fact that we can release the C2 product from the metal is important,” Agapie says.

    While the generated ethynol derivative is not useful as a fuel, it represents a step toward being able to generate synthetic multicarbon fuels from carbon dioxide. The researchers are now applying the knowledge gained in this initial study to improve the process. “Ideally, our insight will facilitate the development of practical catalytic systems,” Buss says.

    The scientists are also working on a way to cleave the C–O bond using protons instead of silyl electrophiles. “Ultimately, we’d like to use protons from water and electron equivalents derived from sunlight,” Agapie says. “But protons are very reactive, and right now we can’t control that chemistry.”

    The research in the paper, “Four-electron deoxygenative reductive coupling of carbon monoxide at a single metal site,” was funded by Caltech and the National Science Foundation.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition


     
  • richardmitnick 5:24 pm on December 12, 2015
    Tags: Caltech, Earth's mantle

    From Caltech: “Developing a Picture of the Earth’s Mantle” 


    12/11/2015
    Katie Neith

    This illustration shows a bridgmanite sample that is being laser-heated between two diamond anvils. This set-up allows researchers to measure a sample at compressions over 1 million times the earth’s atmospheric pressure, while being heated to thousands of degrees Celsius.
    Credit: Aaron Wolf and Jennifer Jackson, Caltech

    This schematic shows different scenarios of bridgmanite provinces that Jennifer Jackson and colleagues explored in their research. Their results found that the scenario at right aligns most closely with geophysical constraints of the lower mantle. Credit: Aaron Wolf and Jennifer Jackson, Caltech

    Deep inside the earth, seismic observations reveal that three distinct structures make up the boundary between the earth’s metallic core and overlying silicate mantle at a depth of about 2,900 kilometers—an area whose composition is key to understanding the evolution and dynamics of our planet. These structures include remnants of subducted plates that originated near the earth’s surface, ultralow-velocity zones believed to be enriched in iron, and large dense provinces of unknown composition and mineralogy. A team led by Caltech’s Jennifer Jackson, professor of mineral physics, has new evidence for the origin of these features that occur at the core-mantle boundary.

    “We have discovered that bridgmanite, the most abundant mineral on our planet, is a reasonable candidate for the material that makes up these dense provinces that occupy about 20 percent of the core-mantle boundary surface, and rise up to a depth of about 1,500 kilometers. Integrated by volume that’s about the size of our moon!” says Jackson, coauthor of a study that outlines these findings and appears online in the Journal of Geophysical Research: Solid Earth. “This finding represents a breakthrough because although bridgmanite is the earth’s most abundant mineral, we only recently have had the ability to precisely measure samples of it in an environment similar to what we think the materials are experiencing inside the earth.”
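    The moon comparison holds up as an order-of-magnitude statement. The quick estimate below is my own arithmetic, treating the provinces as covering 20 percent of the core-mantle boundary and rising up to roughly 1,400 kilometers above it (from about 2,900 kilometers depth up to about 1,500 kilometers depth), with an assumed mean height of half the maximum.

```python
# Back-of-the-envelope volume check for the bridgmanite provinces.
import math

R_CMB_KM    = 3480.0   # core-mantle boundary radius
COVERAGE    = 0.20     # fraction of the CMB surface covered
MEAN_HEIGHT = 700.0    # km, assumed mean thickness (maximum ~1,400 km)

province_volume = COVERAGE * 4 * math.pi * R_CMB_KM**2 * MEAN_HEIGHT
moon_volume = 4.0 / 3.0 * math.pi * 1737.4**3   # km^3, lunar radius 1,737.4 km

print(f"provinces: {province_volume:.2e} km^3")
print(f"moon:      {moon_volume:.2e} km^3")
```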

    Previously, says Jackson, it was not clear whether bridgmanite, a perovskite-structured form of (Mg,Fe)SiO3, could explain seismic observations and geodynamic modeling efforts of these large dense provinces. She and her team show that indeed it can, but these structures need to be propped up by external forces, such as the pinching action provided by cold and dense subducted slabs at the base of the mantle.

    Jackson, along with then Caltech graduate student Aaron Wolf (PhD ’13), now a research scientist at the University of Michigan at Ann Arbor, and researchers from Argonne National Laboratory, came to these conclusions by taking precise X-ray measurements of synthetic bridgmanite samples compressed by diamond anvil cells to over 1 million times the earth’s atmospheric pressure and heated to thousands of degrees Celsius.

    The measurements were done utilizing two different beamlines at the Advanced Photon Source [APS] of Argonne National Laboratory in Illinois, where the team used powerful X-rays to measure the state of bridgmanite under the physical conditions of the earth’s lower mantle to learn more about its stiffness and density under such conditions.


    The density controls the buoyancy—whether or not these bridgmanite provinces will lie flat on the core-mantle boundary or rise up. This information allowed the researchers to compare the results to seismic observations of the core-mantle boundary region.

    “With these new measurements of bridgmanite at deep-mantle conditions, we show that these provinces are very likely to be dense and iron-rich, helping them to remain stable over geologic time,” says Wolf.

    Using a technique known as synchrotron Mössbauer spectroscopy, the team also measured the behavior of iron in the crystal structure of bridgmanite, and found that iron-bearing bridgmanite remained stable at extreme temperatures (more than 2,000 degrees Celsius) and pressure (up to 130 gigapascals). There had been some reports that iron-bearing bridgmanite breaks down under extreme conditions, but the team found no evidence for any breakdown or reactions.

    “This is the first study to combine high-accuracy density and stiffness measurements with Mössbauer spectroscopy, allowing us to pinpoint iron’s behavior within bridgmanite,” says Wolf. “Our results also show that these provinces cannot possibly contain a large complement of radiogenic elements, placing strong constraints on their origin. If present, these radiogenic elements would have rapidly heated and destabilized the piles, contradicting many previous simulations that indicate that they are likely hundreds of millions of years old.”

    In addition, the experiments suggest that the rest of the lower mantle is not 100 percent bridgmanite as had been previously suggested. “We’ve shown that other phases, or minerals, must be present in the mantle to satisfy average geophysical observations,” says Jackson. “Until we made these measurements, the thermal properties were not known with enough precision and accuracy to uniquely constrain the mineralogy.”

    “There is still a lot of work to be done, such as identifying the dynamics of subducting slabs, which we believe plays a role in providing an external force to shape these large bridgmanite provinces,” she says. “We know that the earth did not start out this way. The provinces had to evolve within the global system, and we think these findings may help large-scale geodynamic modeling that involves tectonic plate reconstructions.”

    The results of the study were published in a paper titled “The thermal equation of state of (Mg,Fe)SiO3 bridgmanite (perovskite) and implications for lower mantle structures.” In addition to Jackson and Wolf, other authors on the study are Przemeslaw Dera and Vitali B. Prakapenka from the Center for Advanced Radiation Sources at Argonne National Laboratory. Support for this research was provided by the National Science Foundation, the Turner Postdoctoral Fellowship at the University of Michigan, and the California Institute of Technology.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition


     
  • richardmitnick 5:49 pm on December 7, 2015
    Tags: Caltech

    From Caltech: “Unlocking the Chemistry of Life” 


    12/07/2015
    Jessica Stoller-Conrad


    In just the span of an average lifetime, science has made leaps and bounds in our understanding of the human genome and its role in heredity and health—from the first insights about DNA structure in the 1950s to the rapid, inexpensive sequencing technologies of today. However, the 20,000 genes of the human genome are more than DNA; they also encode proteins to carry out the countless functions that are key to our existence. And we know much less about how this collection of proteins supports the essential functions of life.

    In order to understand the role each of these proteins plays in human health—and what goes wrong when disease occurs—biologists need to figure out what these proteins are and how they function. Several decades ago, biologists realized that to answer these questions on the scale of the thousands of proteins in the human body, they would have to leave the comfort of their own discipline to get some help from a standard analytical-chemistry technique: mass spectrometry. Since 2006, Caltech’s Proteome Exploration Laboratory (PEL) has been building on this approach to bridge the gap between biology and chemistry, in the process unlocking important insights about how the human body works.

    Scientists can easily sequence an entire genome in just a day or two, but sequencing a proteome—all of the proteins encoded by a genome—is a much greater challenge, says Ray Deshaies, a protein biologist and founder of the PEL. “One challenge is the amount of protein. If you want to sequence a person’s DNA from a few of their cheek cells, you first amplify—or make copies of—the DNA so that you’ll have a lot of it to analyze. However, there is no such thing as protein amplification,” Deshaies says. “The number of protein molecules in the cells that you have is the number that you have, so you must use a very sensitive technique to identify those very few molecules.” The best means available for doing this today is called shotgun mass spectrometry, Deshaies says. In general, mass spectrometry allows researchers to identify the amount and types of molecules that are present in a biological sample by separating and analyzing the molecules as gas ions, based on mass and charge; shotgun mass spectrometry—a combination of several techniques—applies this separation process specifically to digested, broken-down proteins, allowing researchers to identify the types and amounts of proteins that are present in a heterogeneous mixture.

    The first step of shotgun mass spectrometry entails digesting a mixture of proteins into smaller fragments called peptides. The peptides are then separated based on their physical properties and sprayed into a mass spectrometer, where they are blasted apart via collisions with gas molecules such as helium or nitrogen—a process that creates a unique fragmentation pattern for each peptide. This pattern, or “fingerprint,” of each peptide’s fragmentation can then be searched against a database and used to identify the protein the peptide came from.
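    To make the "fingerprint" idea concrete, the sketch below computes the theoretical b- and y-ion fragment masses for one peptide; search software compares ladders like this against the measured spectrum to identify the parent protein. The short residue table, the peptide, and the exact mass conventions are illustrative choices of mine, not details from the article.

```python
# Illustrative peptide "fingerprint": singly charged b- and y-ion masses.
# Residue masses are approximate monoisotopic values in daltons.
RESIDUE_MASS = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "L": 113.08406, "K": 128.09496, "E": 129.04259,
    "F": 147.06841, "R": 156.10111,
}
WATER, PROTON = 18.01056, 1.00728

def fragment_ladder(peptide):
    """Return (b_ions, y_ions) m/z values for every backbone cleavage."""
    b, y = [], []
    for i in range(1, len(peptide)):
        prefix, suffix = peptide[:i], peptide[i:]
        b.append(sum(RESIDUE_MASS[r] for r in prefix) + PROTON)
        y.append(sum(RESIDUE_MASS[r] for r in suffix) + WATER + PROTON)
    return b, y

b_ions, y_ions = fragment_ladder("SAGEK")   # a made-up tryptic peptide
print([round(m, 3) for m in b_ions])
print([round(m, 3) for m in y_ions])
```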



    “Up until this technique was invented, people had to take a mixture of proteins, run a current through a polyacrylamide gel to separate the proteins by size, stain the proteins, and then physically cut the stained bands out of the gel to have each individual protein species sequenced,” says Deshaies. “But mass spectrometry technology has gotten so good that we can now cast a broader net by sequencing everything, then use data analysis to figure out what specific information is of interest after the dust settles down.”

    Deshaies began using shotgun mass spectrometry in the late 1990s, but because the technology was still very new, all of the protein analysis had to be done at the outside laboratories that were inventing the methodology.

    In 2001, after realizing the potential of this field-changing technology, he and colleague Barbara Wold, the Bren Professor of Molecular Biology, applied for and received a Department of Energy grant for their very own mass spectrometer. When the instrument arrived on campus, demand began to surge. “Barbara and I were first just doing experiments for our own labs, but then other people on campus wanted us to help them apply this technology to their research problems,” Deshaies says.

    So he and Wold began campaigning for a larger, ongoing center where anyone could begin using mass spectrometry resources for protein research. In 2006, Deshaies and then chair of the Division of Biology (now the Division of Biology and Biological Engineering) Elliot Meyerowitz petitioned the Gordon and Betty Moore Foundation to secure funding for a formal Proteome Exploration Laboratory, as part of the foundation’s commitment to Caltech.

    The influx of cash dramatically expanded the capabilities and resources that were available to the PEL, allowing it to purchase the best and fastest mass spectrometry instruments available. But just as importantly, it also meant that the PEL could expand its human resources, Deshaies adds. Mostly students were running the instruments in the Deshaies lab, he says, so when they graduated or moved on, gaps were left in expertise. Sonja Hess came to Caltech in 2007 to fill that gap as director of the PEL.

    Hess, who came from a proteomics lab at the National Institutes of Health, knew the challenges of running an interdisciplinary center such as the PEL. Although the field of proteomics holds great promise for understanding big questions in many fields, including biology and medicine, mass spectrometry is still a highly technical method involving analytical chemistry and data science—and it’s a technique that many biologists were never trained in. Conversely, many chemists and mass spectrometry technicians don’t necessarily understand how to apply the technique to biological processes.

    By encouraging dialogue between these two sides, Hess says that the PEL crosses that barrier, helping apply mass spectrometry techniques to diverse research questions from more than 20 laboratories on campus. Creating this interdisciplinary and resource-rich environment has enabled a wide breadth of discoveries, says Hess. One major user of the PEL, chemist David Tirrell, has used the center for many collaborations involving a technique he developed with former colleagues Erin Schuman and Daniela Dieterich called BONCAT (for “bioorthogonal noncanonical amino-acid tagging”). BONCAT uses synthetic molecules that are not normally found in proteins in nature and that carry particular chemical tags. When these artificial amino acids are incubated with certain cells, they are taken up by the cells and incorporated into all newly formed proteins in those cells.

    The tags then allow researchers to identify and pull out proteins from the cells, thus enabling them to wash away all of the other untagged proteins from other cells that aren’t of interest. When this method is combined with mass spectrometry techniques, it enables researchers to achieve specificity in their results and determine which proteins are produced in a particular subset of cells during a particular time. “In my own laboratory, we work at making sure the method is adapted appropriately to the specifics of a biological problem. But we rely on collaborations with other laboratories to help us understand what the demands on the method are and what kinds of questions would be interesting to people in those fields,” Tirrell says.

    For example, Tirrell collaborated with biologist Paul Sternberg and the PEL, using BONCAT and mass spectrometry to analyze specific proteins from a few cells within a whole organism, a feat that had never been accomplished before. Using the nematode C. elegans, Sternberg and his team applied the BONCAT technique to tag proteins in the 20 cells of the worm’s pharynx, and then used the PEL resources to analyze proteome-wide information from just those 20 cells. The results, including identification of proteins that were not previously associated with the pharynx, were published in PNAS in 2014.

    The team is now trying to target the experiment to a single pair of neurons that help the worm to sense and avoid harmful chemicals—a first step in learning which proteins are essential to producing this responsive behavior. But analyzing protein information from just two cells is a difficult experiment, says Tirrell. “The challenge comes in separating out the proteins that are made in those two cells from the proteins in the rest of the hundreds of cells in the worm’s body. You’re only interested in two cells, but to get the proteins from those two cells, you’re essentially trying to wash away everything else— about 500 times as much ‘junk’ protein as the protein that you’re really interested in,” he says. “We’re working on these separation methods now because the ultimate experiment would be to find a way to use BONCAT and mass spec to pull out proteomic information from a single cell in an animal.”

    This next step is a big one, but Tirrell says that an advantage of the PEL is that the laboratory’s staff can focus on optimizing the very technical mass spectrometry aspects of an experiment, while researchers using the PEL can focus more holistically on the question they’re trying to answer. This was also true for biologist Mitch Guttman, who asked the laboratory to help him develop a mass spectrometry–based technique for identifying the proteins that hitchhike on a class of RNA genes called lncRNAs. Long noncoding RNAs—or lncRNAs (pronounced “link RNAs”) for short—are abundant in the human genome, but scientists know very little about how they work or what they do.

    Although it’s known that protein-coding genes start out as DNA, which is transcribed into RNA, which is then translated into the gene product, a protein, lncRNAs are never translated into proteins. Instead, they’re thought to act as scaffolds, corralling important proteins and bringing them to where they’re needed in the cell. In a study published in April 2015 in Nature, Guttman used a specific example of a lncRNA, a gene called Xist, to learn more about these hitchhiking proteins.

    “The big challenge to doing this was technical; we’ve never had a way to identify what proteins are actually interacting with a lncRNA molecule. By working with the PEL, we were able to develop a method based on mass spectrometry to actually purify and identify this complex of proteins interacting with a lncRNA in living cells,” Guttman says. “Once we had that information, we could really start to ask ourselves questions about these proteins and how they are working.”

    Using this new method, called RNA antisense purification with mass spectrometry (RAP-MS), Guttman’s lab determined that 10 proteins associate with the lncRNA Xist, and that three of those 10 are essential to the gene’s function—inactivating the second X chromosome in women, a necessary process that, if interrupted, results in the death of female embryos early in development. Guttman’s findings marked the first time that anyone had uncovered the detailed mechanism of action for an lncRNA gene. For decades, other research groups had been trying to solve this problem; however, the collaborative development of RAP-MS in the PEL provided the missing piece.

    Even Deshaies, who began doing shotgun mass spectrometry experiments in his own laboratory, now exclusively uses the PEL’s resources and says that the laboratory has played an essential support role in his work. He studies the normal balance of proteins in a cell and how this balance changes during disease. In a 2013 study published in Cell, his laboratory focused on a dynamic network of protein complexes called SCF complexes, which go through cycles of assembly and dissociation in a cell, depending on when they are needed.

    Because there was no insight into how these complexes form and disassemble, Deshaies and his colleagues used the PEL to quantitatively monitor how this protein network’s dynamics were changing within cells. They determined that SCF complexes are normally very stable, but in the presence of a protein called Cand1 they become very dynamic and rapidly exchange subunits. Because some components of the SCF complex have been implicated in the development of human diseases such as cancers, work is now being done to see if Cand1 holds promise as a target for a cancer therapeutic.

    Although Deshaies says that the PEL resources have become invaluable to his work, he adds that what makes the laboratory unique is how it benefits the entire institute—a factor that he hopes will encourage further support for its mission. “The value of the PEL is not just about what it contributes to my lab or to Dave Tirrell’s lab or to anyone else’s,” he says. “It’s about the breadth of PEL’s impact—the 20 or so labs that are bringing in samples and using this operation every year to do important work, like solving the mechanism of X-chromosome inactivation in females.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The California Institute of Technology (commonly referred to as Caltech) is a private research university located in Pasadena, California, United States. Caltech has six academic divisions with strong emphases on science and engineering. Its 124-acre (50 ha) primary campus is located approximately 11 mi (18 km) northeast of downtown Los Angeles. “The mission of the California Institute of Technology is to expand human knowledge and benefit society through research integrated with education. We investigate the most challenging, fundamental problems in science and technology in a singularly collegial, interdisciplinary atmosphere, while educating outstanding students to become creative members of society.”
    Caltech buildings

     
  • richardmitnick 10:17 pm on November 24, 2015 Permalink | Reply
    Tags: , , Caltech,   

    From Caltech: “Tracking Down the “Missing” Carbon From the Martian Atmosphere” 

    Caltech Logo
    Caltech

    11/24/2015
    Kimm Fesenmaier

    1
    Carbon exchange and loss processes on Mars. Once in the atmosphere, CO2 can exchange with the polar caps or dissolve into waters, which can precipitate out solid carbonates. CO2 in the atmosphere is lost to space at a rate controlled in part by the sun’s activity. Through the mechanism described in the study, UV radiation encounters a CO2 molecule and produces CO and then C atoms. Since C-12 is more easily removed than the heavier C-13, the current martian atmosphere is enriched in C-13.
    Credit: Lance Hayashida/Caltech Office of Strategic Communications

    Caltech and JPL scientists suggest the fingerprints of early photochemistry provide a solution to the long-standing mystery

    Mars is blanketed by a thin, mostly carbon dioxide atmosphere—one that is far too thin to prevent large amounts of water on the surface of the planet from subliming or evaporating. But many researchers have suggested that the planet was once shrouded in an atmosphere many times thicker than Earth’s. For decades that left the question, “Where did all the carbon go?”

    Now a team of scientists from Caltech and JPL thinks they have a possible answer. The researchers suggest that 3.8 billion years ago, Mars might have had only a moderately dense atmosphere. They have identified a photochemical process that could have helped such an early atmosphere evolve into the current thin one without creating the problem of “missing” carbon and in a way that is consistent with existing carbon isotopic measurements.

    The scientists describe their findings in a paper that appears in the November 24 issue of the journal Nature Communications.

    “With this new mechanism, everything that we know about the martian atmosphere can now be pieced together into a consistent picture of its evolution,” says Renyu Hu, a postdoctoral scholar at JPL, a visitor in planetary science at Caltech, and lead author on the paper.

    When considering how the early martian atmosphere might have transitioned to its current state, there are two possible mechanisms for the removal of excess carbon dioxide (CO2). Either the CO2 was incorporated into minerals in rocks called carbonates or it was lost to space.

    A separate recent study coauthored by Bethany Ehlmann, assistant professor of planetary science and a research scientist at JPL, used data from several Mars-orbiting satellites to inventory carbonate rocks, showing that there are not enough carbonates in the upper kilometer of crust to contain the missing carbon from a very thick early atmosphere that might have existed about 3.8 billion years ago.

    To study the escape-to-space scenario, scientists examine the ratio of carbon-12 and carbon-13, two stable isotopes of the element carbon that have the same number of protons in their nuclei but different numbers of neutrons, and thus different masses. Because various processes can change the relative amounts of those two isotopes in the atmosphere, “we can use these measurements of the ratio at different points in time as a fingerprint to infer exactly what happened to the martian atmosphere in the past,” says Hu.
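
    The article does not spell out the bookkeeping convention, but isotope ratios like this are conventionally reported in “delta” notation relative to a reference standard, so that enrichment in the heavier carbon-13 shows up as a more positive value:

    \[
      \delta^{13}\mathrm{C} \;=\; \left(\frac{({}^{13}\mathrm{C}/{}^{12}\mathrm{C})_{\mathrm{sample}}}{({}^{13}\mathrm{C}/{}^{12}\mathrm{C})_{\mathrm{standard}}} - 1\right)\times 1000,
    \]

    expressed in parts per thousand (per mil). A δ13C value that rises over time records preferential loss of the lighter carbon-12.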

    To establish a starting point, the researchers used measurements of the carbon isotope ratio in martian meteorites that contain gases that originated deep in the planet’s mantle. Because atmospheres are produced by outgassing of the mantle through volcanic activity, these measurements provide insight into the isotopic ratio of the original martian atmosphere.

    The scientists then compared those values to isotopic measurements of the current martian atmosphere recently collected by NASA’s Curiosity rover.

    NASA Mars Curiosity Rover
    Curiosity

    Those measurements show the atmosphere to be unusually enriched in carbon-13.

    Previously, researchers thought the main way that martian carbon would be ejected into space was through a process called sputtering, which involves interactions between the solar wind and the upper atmosphere. Sputtering causes some particles—slightly more of the lighter carbon-12 than the heavier carbon-13—to escape entirely from Mars, but this effect is small. So there had to be some other process at work.

    That is where the new mechanism comes in. In the study, the researchers describe a process that begins with a particle of ultraviolet light from the sun striking a molecule of CO2 in the upper atmosphere. That molecule absorbs the photon’s energy and divides into carbon monoxide (CO) and oxygen. Then another ultraviolet particle hits the CO, causing it to dissociate into atomic carbon (C) and oxygen. Some carbon atoms produced in this way have enough energy to escape the atmosphere, and the new study shows that carbon-12 is far more likely to escape than carbon-13.
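
    Written out as a reaction chain, the two-step sequence described in that paragraph is simply (a schematic restatement of the text, not a formula quoted from the paper):

    \[
      \mathrm{CO_2} + h\nu \;\rightarrow\; \mathrm{CO} + \mathrm{O}, \qquad \mathrm{CO} + h\nu \;\rightarrow\; \mathrm{C} + \mathrm{O},
    \]

    where hν denotes an ultraviolet photon. Some of the carbon atoms produced in the second step carry enough kinetic energy to escape Mars’s gravity, and they are preferentially the lighter carbon-12.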

    Modeling the long-term effects of this ultraviolet photodissociation mechanism coupled with volcanic gas release, loss via sputtering, and loss to carbonate rock formation, the researchers found that it was very efficient at enriching the atmosphere in carbon-13. Using the isotopic constraints, they were then able to calculate that, under most scenarios, the atmosphere 3.8 billion years ago would have had a surface pressure comparable to Earth’s or lower.
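
    The isotopic bookkeeping behind a calculation like that is usually modeled as Rayleigh fractionation. The article does not quote the formula, so the expression below is a standard textbook form rather than the one used in the paper:

    \[
      \frac{R}{R_0} \;=\; f^{\,\alpha - 1},
    \]

    where R is the atmospheric 13C/12C ratio, R_0 its initial value, f the fraction of the original carbon reservoir still remaining in the atmosphere, and α < 1 the effective fractionation factor of the escape process. Because α is less than one, R grows as f shrinks, which is why a heavily escaped atmosphere ends up enriched in carbon-13 and why the measured enrichment constrains how much atmosphere was lost.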

    “The efficiency of this new mechanism shows that there is in fact no discrepancy between Curiosity’s measurements of the modern enriched value for carbon in the atmosphere and the amount of carbonate rock found on the surface of Mars,” says Ehlmann, also a coauthor on the new study. “With this mechanism, we can describe an evolutionary scenario for Mars that makes sense of the apparent carbon budget, with no missing processes or reservoirs.”

    The authors conclude their work by pointing out several tests and refinements for the model. For example, future data from the ongoing Mars Atmosphere and Volatile EvolutioN (MAVEN) mission could pin down the isotopic fractionation of the atmospheric loss to space happening today and thus improve the extrapolation to early Mars.

    NASA Mars MAVEN
    MAVEN

    Hu emphasizes that the work is an excellent example of multidisciplinary effort. On the one hand, he says, the team looked at the atmospheric chemistry—the isotopic signature, the escape processes, and the enrichment mechanism. On the other, they used geological evidence and remote sensing of the martian surface. “By putting these together, we were able to come up with a summary of evolutionary scenarios,” says Hu. “I feel that Caltech/JPL is a unique place where we have the multidisciplinary capability and experience to make this happen.”

    Additional authors on the paper, “Tracing the Fate of Carbon and the Atmospheric Evolution of Mars,” are Yuk Yung, the Smits Family Professor of Planetary Science at Caltech and a senior research scientist at JPL, and David Kass, a research scientist at JPL. The work was supported by funding from NASA.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The California Institute of Technology (commonly referred to as Caltech) is a private research university located in Pasadena, California, United States. Caltech has six academic divisions with strong emphases on science and engineering. Its 124-acre (50 ha) primary campus is located approximately 11 mi (18 km) northeast of downtown Los Angeles. “The mission of the California Institute of Technology is to expand human knowledge and benefit society through research integrated with education. We investigate the most challenging, fundamental problems in science and technology in a singularly collegial, interdisciplinary atmosphere, while educating outstanding students to become creative members of society.”
    Caltech buildings

     
  • richardmitnick 2:54 pm on November 18, 2015 Permalink | Reply
    Tags: , , Caltech,   

    From Caltech: “Dark Matter Dominates in Nearby Dwarf Galaxy” 

    Caltech Logo
    Caltech

    11/18/2015
    Lori Dajose

    1
    Dwarf galaxies have few stars but lots of dark matter. This Caltech FIRE (Feedback in Realistic Environments) simulation shows the predicted distribution of stars (left) and dark matter (right) around a galaxy like the Milky Way. The red circle shows a dwarf galaxy like Triangulum II. Although it has a lot of dark matter, it has very few stars. Dark matter-dominated galaxies like Triangulum II are excellent prospects for detecting the gamma-ray signal from dark matter self-annihilation. Credit: A. Wetzel and P. Hopkins, Caltech

    Dark matter is called “dark” for a good reason. Although they outnumber particles of regular matter by more than a factor of 10, particles of dark matter are elusive. Their existence is inferred from their gravitational influence in galaxies, but no one has ever directly observed signals from dark matter. Now, by measuring the mass of a nearby dwarf galaxy called Triangulum II, Assistant Professor of Astronomy Evan Kirby may have found the highest concentration of dark matter in any known galaxy.

    Triangulum II is a small, faint galaxy at the edge of the Milky Way, made up of only about 1,000 stars. Kirby measured the mass of Triangulum II by examining the velocity of six stars whipping around the galaxy’s center. “The galaxy is challenging to look at,” he says. “Only six of its stars were luminous enough to see with the Keck telescope.”

    Keck Observatory
    Keck Observatory Interior
    Keck

    By measuring these stars’ velocity, Kirby could infer the gravitational force exerted on the stars and thereby determine the mass of the galaxy.
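
    To make that logic concrete, here is a minimal numerical sketch of how a handful of line-of-sight velocities can be turned into a dynamical mass, using a generic dispersion-based estimator of the form M ≈ 4σ²R_half/G (similar in spirit to the half-light-radius estimators commonly applied to dispersion-supported dwarf galaxies). The velocities, half-light radius, and stellar mass below are illustrative placeholders, not the values measured for Triangulum II:

    import numpy as np

    # Line-of-sight velocities (km/s) of a handful of member stars -- placeholder
    # values for illustration only, NOT the six Triangulum II stars from the study.
    v_los = np.array([-381.2, -384.5, -379.8, -386.1, -382.9, -380.4])

    # Projected half-light radius of the galaxy in parsecs -- also a placeholder.
    r_half_pc = 30.0

    # Velocity dispersion about the mean (systemic) velocity.
    sigma = np.std(v_los, ddof=1)            # km/s

    # Generic dispersion-based estimator, M ~ 4 * sigma^2 * R_half / G,
    # for the mass enclosed within the half-light radius.
    G = 4.30091e-3                           # pc * (km/s)^2 / Msun
    m_dyn = 4.0 * sigma**2 * r_half_pc / G   # solar masses

    # Compare with the luminous mass to gauge how dark-matter dominated the
    # galaxy is (stellar mass here is again just an illustrative placeholder:
    # ~1,000 stars at roughly half a solar mass each).
    m_stars = 1000 * 0.5
    print(f"velocity dispersion ~ {sigma:.1f} km/s")
    print(f"dynamical mass within R_half ~ {m_dyn:.1e} Msun")
    print(f"dynamical-to-stellar mass ratio ~ {m_dyn / m_stars:.0f}")

    Run as written, this toy example yields a dispersion of a few kilometers per second and a dynamical mass hundreds of times larger than the mass in stars, which is the qualitative signature Kirby describes in the next paragraph.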

    “The total mass I measured was much, much greater than the mass of the total number of stars—implying that there’s a ton of densely packed dark matter contributing to the total mass,” Kirby says. “The ratio of dark matter to luminous matter is the highest of any galaxy we know. After I had made my measurements, I was just thinking—wow.”

    Triangulum II could thus become a leading candidate for efforts to directly detect the signatures of dark matter. Certain particles of dark matter, called supersymmetric WIMPs (weakly interacting massive particles), will annihilate one another upon colliding and produce gamma rays that can then be detected from Earth.

    While current theories predict that dark matter is producing gamma rays almost everywhere in the universe, detecting these particular signals among other sources of galactic gamma rays, such as those emitted by pulsars, is a challenge. Triangulum II, on the other hand, is a very quiet galaxy. It lacks the gas and other material necessary to form stars, so it isn’t forming new stars—astronomers call it “dead.” Any gamma-ray signals coming from colliding dark matter particles would theoretically be clearly visible.

    It hasn’t been definitively confirmed, though, that what Kirby measured is actually the total mass of the galaxy. Another group, led by researchers from the University of Strasbourg in France, measured the velocities of stars just outside Triangulum II and found that they are actually moving faster than the stars closer to the galaxy’s center—the opposite of what’s expected. This could suggest that the little galaxy is being pulled apart, or “tidally disrupted,” by the Milky Way’s gravity.

    “My next steps are to make measurements to confirm that other group’s findings,” Kirby says. “If it turns out that those outer stars aren’t actually moving faster than the inner ones, then the galaxy could be in what’s called dynamic equilibrium. That would make it the most excellent candidate for detecting dark matter with gamma rays.”

    A paper describing this research appears in the November 17 issue of the Astrophysical Journal Letters. Judith Cohen (PhD ’71), the Kate Van Nuys Page Professor of Astronomy, is a Caltech coauthor.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The California Institute of Technology (commonly referred to as Caltech) is a private research university located in Pasadena, California, United States. Caltech has six academic divisions with strong emphases on science and engineering. Its 124-acre (50 ha) primary campus is located approximately 11 mi (18 km) northeast of downtown Los Angeles. “The mission of the California Institute of Technology is to expand human knowledge and benefit society through research integrated with education. We investigate the most challenging, fundamental problems in science and technology in a singularly collegial, interdisciplinary atmosphere, while educating outstanding students to become creative members of society.”
    Caltech buildings

     