Tagged: Applied Research & Technology

  • richardmitnick 10:48 am on May 29, 2016
    Tags: Applied Research & Technology, Scientists just made big progress in fighting this incurable form of brain cancer

    From Science Alert: “Scientists just made big progress in fighting this incurable form of brain cancer” 

    Science Alert

    This post is dedicated to E.B.M., cancer researcher. I hope that he or his parents see it.

    27 MAY 2016
    DAVID NIELD


    Because chemo is just not good enough.

    Researchers working on a more effective treatment for brain tumours have achieved such amazing results that they thought there was an error in their calculations. But the results are real, and the implications could be huge.

    By using an organic ‘nanocarrier’ to deliver chemotherapy drugs directly to tumours in the brain, the scientists have been able to achieve significant improvements in the number of cancer cells being killed off.

    The technique has so far only been tested in mice, but if replicated in humans – which, to be clear, is no easy feat – it could eventually lead to new treatments for people with specific types of brain cancer.

    Lead researcher and radiologist Ann-Marie Broome at the Medical University of South Carolina has been targeting glioblastoma multiforme (GBM) – a particularly stubborn form of cancer that’s currently incurable.

    Its position in the brain makes it difficult to operate on, and the blood-brain barrier (designed to protect the brain from harm) means that getting an effective dose of drugs to the tumour isn’t easy.

    A schematic sketch of blood vessels in the brain. Image credit: Armin Kübelbeck, March 2009.

    That’s where this new nanotechnology approach comes in. Broome and her colleagues used what they already knew about GBM and platelet-derived growth factor (PDGF) – which regulates cell growth and division – to create their new nanocarrier, built from an aggregate of molecules.

    The carrier, technically known as a micelle, is small enough to cross the blood-brain barrier and deliver the treatment directly. The researchers describe it as using a postal code to get the drugs to the right place – the micelle gets the dose to the right street, and then the PDGF is used to find the right house.

    “I was very surprised by how efficiently and well it worked once we got the nanocarrier to those cells,” says Broome. “When we perfect this strategy, we will be able to deliver potent chemotherapies only to the area that needs them.”

    “This will dramatically improve our cure rates while cutting out a huge portion of our side effects from chemotherapy,” she adds. “Imagine a world where a cancer diagnosis not only was not life-threatening, but also did not mean that you would be tired, nauseated, or lose your hair.”

    The brain tumour’s own natural chemistry actually gives the micelle nanocarriers their potency. As the tumour grows, it creates waste by-products that cause acidity in the blood, which triggers the release of the micelle’s payload.

    “It’s very important that the public recognise that nanotechnology is the future,” said Broome. “It impacts so many different fields. It has a clear impact on cancer biology and potentially has an impact on cancers that are inaccessible, untreatable, undruggable – that in normal circumstances are ultimately a death knell.”

    Now that the researchers have shown that nanocarrier delivery is possible – at least in mice – they need to test a wider range of drugs against a wider range of cancers. If all goes well, hopefully we’ll hear about clinical trials involving humans later on down the track.

    There’s obviously a ways to go before this technique will become available to treat cancer in people, but nanotechnology has been showing promising results in previous studies, so this might just be how we end up fighting the disease in the future.

    The findings have been published in Nanomedicine.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 1:27 pm on May 28, 2016
    Tags: Applied Research & Technology, Hot new solar cell

    From MIT: “Hot new solar cell” 

    MIT News

    May 23, 2016
    David L. Chandler

    While all research in traditional photovoltaics faces the same underlying theoretical limitations, MIT PhD student David Bierman says, “with solar thermal photovoltaics you have the possibility to exceed that.” In fact, theory predicts that in principle this method could more than double the theoretical limit of efficiency, potentially making it possible to deliver twice as much power from a given area of panels. Photo courtesy of the researchers.

    System converts solar heat into usable light, increasing device’s overall efficiency.

    A team of MIT researchers has for the first time demonstrated a device based on a method that enables solar cells to break through a theoretically predicted ceiling on how much sunlight they can convert into electricity.

    Ever since 1961 it has been known that there is an absolute theoretical limit, called the Shockley-Queisser Limit, to how efficient traditional solar cells can be in their energy conversion. For a single-layer cell made of silicon — the type used for the vast majority of today’s solar panels — that upper limit is about 32 percent. But it has also been known that there are some possible avenues to increase that overall efficiency, such as by using multiple layers of cells, a method that is being widely studied, or by converting the sunlight first to heat before generating electrical power. It is the latter method, using devices known as solar thermophotovoltaics, or STPVs, that the team has now demonstrated.
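
    For readers curious where a figure like 32 percent comes from, the rough shape of the argument can be written down compactly. Treating the sun as a blackbody at temperature T_s and assuming each absorbed photon contributes at most the bandgap energy E_g (sub-gap photons are lost entirely, and photon energy above the gap is lost as heat), Shockley and Queisser's "ultimate efficiency" for a single-junction cell is

    u(x_g) = x_g \frac{\int_{x_g}^{\infty} \frac{x^2}{e^x - 1}\,dx}{\int_{0}^{\infty} \frac{x^3}{e^x - 1}\,dx}, \qquad x_g = \frac{E_g}{k_B T_s}.

    This is the standard textbook form of their 1961 analysis, not a formula from the new MIT paper; folding in the unavoidable radiative-recombination losses for unconcentrated sunlight brings the bound down to roughly the 32 percent quoted above for silicon's ~1.1 eV bandgap.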

    The findings are reported this week in the journal Nature Energy, in a paper* by MIT doctoral student David Bierman, professors Evelyn Wang and Marin Soljačić, and four others.

    While all research in traditional photovoltaics faces the same underlying theoretical limitations, Bierman says, “with solar thermophotovoltaics you have the possibility to exceed that.” In fact, theory predicts that in principle this method, which involves pairing conventional solar cells with added layers of high-tech materials, could more than double the theoretical limit of efficiency, potentially making it possible to deliver twice as much power from a given area of panels.

    “We believe that this new work is an exciting advancement in the field,” Wang says, “as we have demonstrated, for the first time, an STPV device that has a higher solar-to-electrical conversion efficiency compared to that of the underlying PV cell.” In the demonstration, the team used a relatively low-efficiency PV cell, so the overall efficiency of the system was only 6.8 percent, but it clearly showed, in direct comparisons, the improvement enabled by the STPV system.

    The basic principle is simple: Instead of letting unusable solar energy dissipate as heat in the solar cell, all of the incoming energy is first absorbed by an intermediate component, heating it to temperatures at which it emits thermal radiation. By tuning the materials and configuration of these added layers, it’s possible to emit that radiation in the form of just the right wavelengths of light for the solar cell to capture. This improves the efficiency and reduces the heat generated in the solar cell.

    The key is using high-tech materials called nanophotonic crystals, which can be made to emit precisely determined wavelengths of light when heated. In this test, the nanophotonic crystals are integrated into a system with vertically aligned carbon nanotubes, and operate at a high temperature of 1,000 degrees Celsius. Once heated, the nanophotonic crystals continue to emit a narrow band of wavelengths of light that precisely matches the band that an adjacent photovoltaic cell can capture and convert to an electric current. “The carbon nanotubes are virtually a perfect absorber over the entire color spectrum,” Bierman says, allowing the absorber to capture the full solar spectrum. “All of the energy of the photons gets converted to heat.” Then, that heat gets re-emitted as light but, thanks to the nanophotonic structure, is converted to just the colors that match the PV cell’s peak efficiency.
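
    As a rough, back-of-envelope illustration of why the emitter temperature and the PV cell have to be matched, the short Python sketch below compares where a 1,000-degree-Celsius blackbody radiates with the cutoff wavelength of a low-bandgap cell. It is not code from the research team, and the 0.55 eV bandgap is an assumption chosen purely for illustration.

    # Where does a ~1,000 degree C emitter radiate, and what can a
    # low-bandgap PV cell convert? (back-of-envelope only)
    WIEN_B_UM_K = 2898.0          # Wien's displacement constant, um*K
    T_EMITTER_K = 1000 + 273.15   # 1,000 degrees Celsius in kelvin

    peak_um = WIEN_B_UM_K / T_EMITTER_K
    print(f"Blackbody emission peaks near {peak_um:.2f} um at {T_EMITTER_K:.0f} K")

    # A hypothetical ~0.55 eV cell (an assumed value, not a figure from the
    # paper) converts photons out to roughly hc/E_g:
    E_G_EV = 0.55
    cutoff_um = 1.240 / E_G_EV    # hc ~= 1.240 um*eV
    print(f"Such a cell converts wavelengths shorter than ~{cutoff_um:.2f} um")

    Both numbers land near 2.3 micrometres, which is the point of the nanophotonic emitter: instead of spreading radiation across the whole thermal spectrum, it concentrates it in the narrow band the cell converts best.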

    In operation, this approach would use a conventional solar-concentrating system, with lenses or mirrors that focus the sunlight, to maintain the high temperature. An additional component, an advanced optical filter, lets through all the desired wavelengths of light to the PV cell, while reflecting back any unwanted wavelengths, since even this advanced material is not perfect in limiting its emissions. The reflected wavelengths then get re-absorbed, helping to maintain the heat of the photonic crystal.

    Bierman says that such a system could offer a number of advantages over conventional photovoltaics, whether based on silicon or other materials. For one thing, the fact that the photonic device is producing emissions based on heat rather than light means it would be unaffected by brief changes in the environment, such as clouds passing in front of the sun. In fact, if coupled with a thermal storage system, it could in principle provide a way to make use of solar power on an around-the-clock basis. “For me, the biggest advantage is the promise of continuous on-demand power,” he says.

    In addition, because of the way the system harnesses energy that would otherwise be wasted as heat, it can reduce excessive heat generation that can damage some solar-concentrating systems.

    To prove the method worked, the team ran tests using a photovoltaic cell with the STPV components, first under direct sunlight and then with the sun completely blocked so that only the secondary light emissions from the photonic crystal were illuminating the cell. The results showed that the actual performance matched the predicted improvements.

    “A lot of the work thus far in this field has been proof-of-concept demonstrations,” Bierman says. “This is the first time we’ve actually put something between the sun and the PV cell to prove the efficiency” of the thermal system. Even with this relatively simple early-stage demonstration, Bierman says, “we showed that just with our own unoptimized geometry, we in fact could break the Shockley-Queisser limit.” In principle, such a system could reach efficiencies greater than that of an ideal solar cell.

    The next steps include finding ways to make larger versions of the small, laboratory-scale experimental unit, and developing ways of manufacturing such systems economically.

    This represents a “significant experimental advance,” says Peter Bermel, an assistant professor of electrical and computer engineering at Purdue University, who was not associated with this work. “To the best of my knowledge, this is a new record for solar TPV, using a solar simulator, selective absorber, selective filter, and photovoltaic receiver, that reasonably represents actual performance that might be achievable outdoors.” He adds, “It also shows that solar TPV can exceed PV output with a direct comparison of the same cells, for a sufficiently high input power density, lending this approach to applications using concentrated sunlight.”

    The research team also included MIT alumnus Andrej Lenert PhD ’14, now a research fellow at the University of Michigan, MIT postdocs Walker Chan and Bikram Bhatia, and research scientist Ivan Celanovic. The work was supported by the Solid-State Solar Thermal Energy Conversion (S3TEC) Center, funded by the U.S. Department of Energy.

    *Science paper:
    Enhanced photovoltaic energy conversion using thermally based spectral shaping

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     
  • richardmitnick 12:57 pm on May 28, 2016
    Tags: Applied Research & Technology

    From CUMC NY Presbyterian Hospital: “Unmasking a killer” 


    Columbia University Medical Center

    Unmasking a killer: how immunotherapy helps your body find cancer and destroy it.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 7:14 am on May 28, 2016
    Tags: Applied Research & Technology, What Went Wrong on Everest This Season?

    From NatGeo: “What Went Wrong on Everest This Season” 

    National Geographic

    May 24, 2016
    Kat Long

    A shot from Garrett Madison’s successful 2016 Everest expedition; Photograph courtesy Garrett Madison from Instagram

    Garrett Madison had high hopes for the 2016 climbing season on Mount Everest.

    The veteran mountaineering guide, who is president and founder of Madison Mountaineering in Seattle, had survived the devastating avalanche at Everest Base Camp, triggered by the magnitude 7.8 earthquake that rocked the Himalayas on April 25, 2015. At least 20 other climbers and Sherpas had perished. The previous year, Madison had watched an avalanche bury 16 Nepali mountain workers in the Khumbu Icefall, and he had helped dig out the victims.

    Madison returned this year “to remember what a beautiful climb it can be” and to lead others toward Everest’s peak. With an international team of 15 Sherpas, seven clients, and four other guides, Madison summited the world’s tallest mountain—for his seventh time—on May 18.

    But they changed their plans for summiting neighboring Lhotse the following day after a Sherpa who was fixing ropes died in a fall. Three more climbers died from altitude-related conditions on Everest, while two climbers went missing and are presumed to have died on the mountain. Many more climbers have been evacuated by helicopter for pulmonary edema, frostbite, and other illnesses.

    Madison spoke with National Geographic Adventure about what’s gone wrong this year, how climbers can ensure their own health and safety, and why choosing the right expedition company for an Everest ascent can be a matter of life and death.

    Why do you think so many climbers have lost their lives this season?

    I don’t know what happened in each case; I’ve only heard rumors. On the day we summited Everest, while we were on our way down, we looked over to Lhotse, which is the nearest peak to Everest. We could see a line of climbers going up the Lhotse face, but as they were going up, they turned around. I couldn’t figure out why they turned around—they were only a few hundred meters from the top.

    [Later we heard] that the Sherpa who died on Lhotse fell at that moment. They’re supposed to use modern safety equipment. No one knows why he fell, but the climbers turned around, most likely because no one continued fixing the ropes.

    That night we had a team talk, and we discussed the fact that there had been a sad accident. We didn’t feel like it was ethical to continue our plan to climb Lhotse the following day. It just didn’t seem right.

    This is speculation, but it’s possible that the Sherpa was piecing together ropes from the last time Lhotse was climbed, in 2013, with new ropes from this season, and perhaps the old rope broke. Nobody knows for sure. But all teams have backed off from climbing Lhotse this year because of the death as well as the uncertain safety of the ropes.

    In past seasons, climbers have died because of sudden storms and unpredictable weather. But weather has been good lately, right?

    The weather has been very good this season. But when I think about mountain conditions, I always have to anticipate bad conditions and be prepared for them. There are some things you can’t anticipate—with an icefall or an avalanche, you’re in the wrong place at the wrong time, and it can take you out.

    In general, mountaineering is dangerous because the weather can change suddenly. In 2012, on May 20, which is the day we summited Everest and came down OK, four other climbers behind us on other teams didn’t make it.

    Has the growing number of climbers and different outfitters made Everest less safe?

    What we’re seeing now with recent Western climbers is people getting in over their heads. A lot of climbers are buying into logistics support—that includes permits, maybe a camping setup, some oxygen, some Sherpa support—but they don’t climb together as a true team. They’re individuals going up and down the mountain who are sharing logistics and services. When they get into trouble, they’re on their own. They don’t have [a] support network in place to get them down the mountain to save them.

    That’s a lot of what I’ve seen on Everest, these ragtag groups of amateur climbers who get in over their heads and don’t have a support network, i.e., a professional mountain guide, to make decisions and intervene and try to save them. If I have a client who’s struggling, we try not to get to the point where they are incapacitated and helpless. I try to address problems lower on the mountain and head off issues before they become life threatening. That’s why three of our folks went home [before ascending the summit].

    I feel like a lot of people don’t know when to stop pushing themselves, and they don’t have a guide who can tell them when it’s far enough and what’s too much. It’s like swimming out into the ocean—you don’t want to get so far out that you can’t swim back. I see a lot of amateur climbers without the knowledge and experience pushing themselves so far that they can’t get back down.

    That’s what we saw in 2012: Four climbers couldn’t get back down from summit day to high camp. And that’s what we’re seeing now: Climbers can get down to high camp, but they’re so wiped out that they’re dying that night of cerebral edema or other causes—no one’s really sure what at this point.

    Is that partly because some guides on Everest lack the experience to be able to help their clients if they’re in danger?

    Well, I wouldn’t refer to those individuals as guides. I’d say some group leaders, or business owners who offer services on Everest—they’re not in the business of guiding. They’re providing services and logistics for climbers who want to come up and make their own attempt. So, there really isn’t any guidance there. It’s just, “For x amount of dollars”—which is a lot less than I charge—”we’ll give you the permit, the oxygen, some Sherpa support, some food, everything you need to make an attempt on Everest.” But that’s it.

    For many people, the discount in money is a big deal, and maybe they think it’s the right thing for them. But I think a lot of people get in over their heads, and unfortunately they pay the price. That’s what we’ve seen happen this year.

    As a guide, what is your role in helping your clients when they’re ailing?

    We had a member with pulmonary edema at Camp 2, and we immediately addressed that with medications and supplemental oxygen. We helped him get to Base Camp. Helicopter rescues are available; because of advances in technology we can evacuate people from Camp 2. But we’d rather have them walk down under their own power. That was the end of the expedition for him, though, because it takes a while to recover from pulmonary edema.

    As a guide, my role is dealing with these issues as they come up and helping them get down safely. I’m responsible for my clients’ lives. I feel compelled to ensure that they return to their families and loved ones. That’s a special service that’s part of our expedition. If you sign up with us, that is part of the deal, you’re a part of our team. But on the other end of the spectrum, with these logistics support companies, no one’s looking out for you.

    For some climbers who do have a lot of experience, logistics support is all they need and want. They don’t need a guide. I think that’s fine for some people, but for others, they should have a lot more supervision and guidance so they can get down alive.

    Because there’s no regulation of Everest in terms of guide services, anybody can offer an Everest climb, so there’s a whole spectrum of services and packages available, from the ultra bare-bones, low-end basic program to the ultra high-end program, which is kind of where we are. Climbers have to make educated decisions about who they decide to go with, based on their ability and skill set.

    Do you think in the future there will be more regulation of these companies on Everest?

    Perhaps eventually, but I think it will be a long time coming. The Nepalese government makes money off the number of climbers who decide to try Everest, at $11,000 per foreign climber. If they start to regulate how things are done, I think that will diminish the number of climbers, which diminishes the royalty fee the government gets. Right now, I think they want as many people as possible to climb, and they accommodate every type of service those climbers want. That’s what they’re focused on.

    What has the mood been like between the Western climbers and the Nepali workers there? Some have said they’re dissatisfied with the working conditions on Everest. Are the relationships positive?

    Oh yeah, very good. Relations are great—in fact, the media always blows [any conflict] way out of proportion. It’s not representative at all of the cohesive relationship between foreign climbers and Nepali high-altitude workers.

    I think the economic part is certainly a big reason why Sherpas and other castes in Nepal work in the mountains. They’ve got to feed their families somehow, and it certainly can be a way to do that. But I feel like all the Sherpas that we climb with really enjoy climbing in the mountains, and they love climbing on Everest.

    The reason I started working as a guide in 1999 on Mount Rainier is because I love being in the mountains and sharing that with other climbers. I found a way to do that and actually make a living at it. I remember thinking, Wow, this is as good as it gets! I’m living my dreams, my passion, and I’m getting paid for it.

    I think a lot of the Sherpas, if not all of them, feel the same way. It’s their identity; it’s in their blood. All the guys I climb with on our team are very happy to be climbing, and the camaraderie that we share is really powerful.

    There have been reports of some climbers taking oxygen tanks and tents from other climbers. Have you heard anything about that?

    I heard some rumors. I think what happens is that people get into trouble. People go up high, and they become incapacitated and desperate. They just want to survive. They do whatever they have to, meaning they’ll take somebody’s oxygen, which is cached up high, or they’ll get into someone’s tent, or they’ll take someone’s food. It happens on every mountain all over the world when people get desperate for their lives.

    I think we see it on Everest because there are a lot of people up there for a very short period of time, a very small window. There are a lot of amateur climbers who do get into trouble, and they’re just trying to survive. I mean, if I was in that situation, I’d probably do everything that I could to try to help myself survive. I’ve been fortunate not to have been in that situation.

    If I did need to take someone else’s oxygen or use their tent for shelter, I hope that I would at least be able to let them know. Maybe I could talk to them by radio so they didn’t get there and find their oxygen gone or their tent occupied. I think it’s just a condition of human nature that when people get into trouble, they do whatever they can to survive.

    What could climbers do to increase their own safety and avoid that kind of survival situation?

    I just hope climbers do a lot of research before going to Everest, and make very informed decisions about the selection of the companies that they go with. I wish [climbers] had some coaching, or some screening process to help them not go so far, or be with somebody who could make decisions for them. It’s just really sad to see people push themselves beyond their ability, become incapacitated, and then not have the support to help them get down. That’s my main thought and takeaway from this season.

    Now that your whole team has summited together, are you planning on retiring from Everest?

    No, I think I’ll come back one more time. Our next big project is K2, and I have to be in Islamabad by June 12 to start that expedition, so for me, it’s one mountain at a time. I’ve got a few climbers who really want to go to Everest next year, so I’m planning on another trip in 2017. I’d like to climb it a few more times.

    But this season, it was great to have a safe, successful, drama-free expedition, especially after the last two years. Sometimes you get lucky.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The National Geographic Society has been inspiring people to care about the planet since 1888. It is one of the largest nonprofit scientific and educational institutions in the world. Its interests include geography, archaeology and natural science, and the promotion of environmental and historical conservation.

     
  • richardmitnick 4:24 pm on May 27, 2016
    Tags: Applied Research & Technology

    From SLAC: “SLAC’s New Computer Science Division Teams with Stanford to Tackle Data Onslaught” 


    SLAC Lab

    Alex Aiken, director of the new SLAC Computer Science Division, in the Stanford Research Computing Facility. Built by Stanford on the SLAC campus, this high-performance computing data center opened in 2013; it is used by more than 230 principal investigators and 1,100 students. (SLAC National Accelerator Laboratory)

    Alex Aiken, director of the new Computer Science Division at the Department of Energy’s SLAC National Accelerator Laboratory, has been thinking a great deal about the coming challenges of exascale computing, defined as a billion billion calculations per second. That’s a thousand times faster than today’s petascale supercomputers. Reaching this milestone is such a big challenge that it’s expected to take until the mid-2020s and require entirely new approaches to programming, data management and analysis, and numerous other aspects of computing.

    SLAC and Stanford, Aiken believes, are in a great position to join forces and work toward these goals while advancing SLAC science.

    “The kinds of problems SLAC scientists have are at such an extreme scale that they really push the limits of all those systems,” he says. “We believe there is an opportunity here to build a world-class Department of Energy computer science group at SLAC, with an emphasis on large-scale data analysis.”

    Even before taking charge of the division on April 1, Aiken had his feet in both worlds, working on DOE-funded projects at Stanford. He’ll continue in his roles as professor and chair of the Stanford Computer Science Department while building the new SLAC division.

    Solving Problems at the Exascale

    SLAC has a lot of tough computational problems to solve, from simulating the behavior of complex materials, chemical reactions and the cosmos to analyzing vast torrents of data from the upcoming LCLS-II and LSST projects. SLAC’s Linac Coherent Light Source (LCLS) is a DOE Office of Science User Facility.

    SLAC LCLS-II line

    LSST Camera, built at SLAC; LSST telescope, currently under construction at Cerro Pachón, Chile

    LSST, the Large Synoptic Survey Telescope, will survey the entire Southern Hemisphere sky every few days from a mountaintop in Chile starting in 2022. It will produce 6 million gigabytes of data per year – the equivalent of shooting roughly 800,000 images with an 8-megapixel digital camera every night. And the LCLS-II X-ray laser, which begins operations in 2020, will produce a thousand times more data than today’s LCLS.
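
    Those two figures can be sanity-checked with a few lines of arithmetic. The sketch below is only a back-of-envelope estimate; the assumption of uncompressed 16-bit pixels is mine, not an LSST specification.

    # Rough consistency check of "6 million gigabytes per year" versus
    # "~800,000 8-megapixel images every night".
    DATA_PER_YEAR_GB = 6e6        # 6 million gigabytes per year
    NIGHTS_PER_YEAR = 365
    MEGAPIXELS = 8
    BYTES_PER_PIXEL = 2           # assumed uncompressed 16-bit pixels

    gb_per_night = DATA_PER_YEAR_GB / NIGHTS_PER_YEAR
    gb_per_image = MEGAPIXELS * 1e6 * BYTES_PER_PIXEL / 1e9
    images_per_night = gb_per_night / gb_per_image
    # ~16,000 GB per night works out to roughly a million raw 8 MP frames,
    # the same order of magnitude as the comparison in the text.
    print(f"{gb_per_night:,.0f} GB/night ~ {images_per_night:,.0f} raw 8 MP images")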

    The DOE has led U.S. efforts to develop high-performance computing for decades, and computer science is increasingly central to the DOE mission, Aiken says. One of the big challenges across a number of fields is to find ways to process data on the fly, so researchers can obtain rapid feedback to make the best use of limited experimental time and determine which data are interesting enough to analyze in depth.

    The DOE recently launched the Exascale Computing Initiative (ECI), led by the Office of Science and National Nuclear Security Administration, as part of a broader National Strategic Computing Initiative. It aims to develop capable exascale computing systems for science, national security and energy technology development by the mid-2020s.

    Staffing up and Enhancing Collaborations

    On the Stanford side, the university has been performing world-class computer science – a field Aiken loosely describes as, “How do you make computers useful for a variety of things that people want to do with them?” – for more than half a century. But since faculty members mainly work through graduate students and postdoctoral researchers, projects tend to be limited to the 3- to 5-year lifespan of those positions.

    The new SLAC division will provide a more stable basis for the type of long-term collaboration needed to solve the most challenging scientific problems. Stanford computer scientists have already been involved with the LSST project, and Aiken himself is working on new exascale computing initiatives at SLAC: “That’s where I’m spending my own research time.”

    He is in the process of hiring four SLAC staff scientists, with plans to eventually expand to a group of 10 to 15 researchers and two initial joint faculty positions. The division will eventually be housed in the Photon Science Laboratory Building that’s now under construction, maximizing their interaction with researchers who use intensive computing for ultrafast science and biology. Stanford graduate students and postdocs will also be an important part of the mix.

    While initial funding is coming from SLAC and Stanford, Aiken says he will be applying for funding from the DOE’s Advanced Scientific Computing Research program, the Exascale Computing Initiative and other sources to make the division self-sustaining.

    Two Sets of Roots

    Aiken came to Stanford in 2003 from the University of California, Berkeley, where he was a professor of engineering and computer science. Before that he spent five years at IBM Almaden Research Center.

    He received a bachelor’s degree in computer science and music from Bowling Green State University in 1983 and a PhD from Cornell in 1988. Aiken met his wife, Jennifer Widom, in a music practice room when they were graduate students (he played trombone, she played trumpet). Widom is now a professor of computer science and electrical engineering at Stanford and senior associate dean for faculty and academic affairs for the School of Engineering. Avid and adventurous travelers, they have taken their son and daughter, both now grown, on trekking, backpacking, scuba diving and sailing trips all over the world.

    The roots of the new SLAC Computer Science Division go back to fall 2014, when Aiken began meeting with two key faculty members – Stanford Professor Pat Hanrahan, a computer graphics researcher who was a founding member of Pixar Animation Studios and has received three Academy Awards for rendering and computer graphics, and SLAC/Stanford Professor Tom Abel, director of the Kavli Institute for Particle Astrophysics and Cosmology, who specializes in computer simulations and visualizations of cosmic phenomena. The talks quickly drew in other faculty and staff, and led to a formal proposal late last year that outlined potential synergies between SLAC, Stanford and Silicon Valley firms that develop computer hardware and software.

    “Modern algorithms that exploit new computer architectures, combined with our unique data sets at SLAC, will allow us to do science that is greater than the sum of its parts,” Abel said. “I am so looking forward to having more colleagues at SLAC to discuss things like extreme data analytics and how to program exascale computers.”

    Aiken says he has identified eight Stanford computer science faculty members and a number of SLAC researchers with LCLS, LSST, the Particle Astrophysics and Cosmology Division, the Elementary Particle Physics Division and the Accelerator Directorate who want to get involved. “We keep hearing from more SLAC people who are interested,” he says. “We’re looking forward to working with everyone!”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    SLAC Campus
    SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford University for the DOE’s Office of Science.

     
  • richardmitnick 9:22 am on May 27, 2016
    Tags: Applied Research & Technology, Dense Oceanfloor Network System for Earthquakes and Tsunamis (DONET), Ocean Floor Networks Capture Low-Frequency Earthquake Event

    From Eos: “Ocean Floor Networks Capture Low-Frequency Earthquake Event” 

    Eos

    25 May 2016
    By Masaru Nakano, Takane Hori, Eiichiro Araki, Narumi Takahashi, and Shuichi Kodaira

    Last August, stations on a newly deployed permanent ocean floor observation network recorded rarely seen, very low frequency signals from shallow earthquakes.

    Sensors, installed at a Dense Oceanfloor Network System for Earthquakes and Tsunamis (DONET) station using a remotely operated vehicle, buried in the seafloor. The cylindrical vessel is pressure resistant and houses seismometers. Credit: JAMSTEC

    For the past thousand years, devastating megathrust earthquakes, sometimes accompanied by huge tsunamis, have occurred repeatedly along the Nankai trough off southwestern Japan [e.g., Ando, 1975]. To help us prepare for such earthquakes in the future, we have developed oceanic observation networks to monitor seismotectonic activities in this region.

    In 2011, a permanent ocean floor observation network called the Dense Oceanfloor Network System for Earthquakes and Tsunamis (DONET1) was deployed above the Kumano fore-arc basin to monitor crustal activities in the Nankai trough (Figure 1) [Kaneda et al., 2015; Kawaguchi et al., 2015]. Construction of a new array—DONET2—started in 2013, and it now covers the ocean floor between the Nankai trough and the cities of Shiono-misaki and Muroto.

    Fig. 1. Map showing the DONET stations (green squares, DONET1; blue squares, DONET2) and the borehole station (purple square in the DONET1 network area) along the Nankai trough and the fiber-optic cables (black lines) connecting them. Gray contours show the seafloor topography in meters below sea level. Red and yellow circles show the epicenter locations of very low frequency (VLF) and regular earthquakes, respectively. Clusters of VLF earthquakes during 2003–2004 are enclosed by dashed ellipses (data from Obara and Ito [2005]). The inset shows the location of the plotted area in Japan.

    In 2015, the DONET2 stations deployed close to the trough axis observed something rarely seen: tiny bursts of shallow very low frequency (VLF) earthquakes. These earthquakes, which shook directly below the stations, occurred intermittently. Their hypocenters—the point where an earthquake rupture starts—are in the accretionary wedge, a jumble of oceanic sediments scraped from the Philippine Sea Plate as it dives below the Eurasian continent.

    Shallow VLF earthquakes indicate crustal deformation in the subduction zone, but the generation mechanism of these peculiar earthquakes has not been clarified yet. Because no regular earthquakes occur in the sedimentary wedge, VLF earthquakes are the only useful indicators of the deformation. Revealing the source process of these earthquakes as well as other earthquake activity will improve our understanding of plate motion at subduction zones where megathrust earthquakes occur.

    DONET: A Permanent Ocean Floor Observation Network

    The DONET arrays are different from ordinary in-line cable observation networks because clusters of DONET stations, like bunches of grapes, are deployed around nodes, to which they are linked with optical fiber cables. Each node is similarly linked to a backbone cable [Kaneda et al., 2015; Kawaguchi et al., 2015]. With this geometry, the arrays are able to densely cover the seafloor.

    DONET stations are deployed around nodes, to which they are linked with optical fiber cables. Each node is similarly linked to a backbone cable. Credit: JAMSTEC

    Each DONET station is equipped with a strong-motion seismometer, which can measure ground accelerations and is important to understanding how earthquakes affect man-made structures onshore. The stations also have a high-sensitivity broadband seismometer that can record signals over a broad range of frequencies, including those with periods as long as 360 seconds. This capability is useful for detecting small and large earthquakes. Stations are also equipped with pressure gauges, which can detect tsunamis as well as crustal deformation—uplift or subsidence of the seafloor.

    With these instruments, we can detect and issue earlier warnings for large earthquakes and tsunamis than we could by relying on land observations alone. We can also monitor other seismotectonic activities such as slow earthquakes and slow-slip crustal deformation. Intensive study of these crustal activities may help us make better preparations for the next large earthquakes.

    In addition, we have installed a vertical array of seismometers, strain meters, pore-pressure gauges, and tiltmeters in boreholes that we drilled into the ocean floor within the DONET1 network area. These instruments monitor crustal deformation and movement of pore fluid in the accretionary prism. This sensor array, which is linked by fiber-optic cable to the DONET backbone, transmits data in real time. The DONET arrays and these borehole stations together form a three-dimensional observation network along the Nankai trough.

    Characteristics of VLF Earthquakes

    Fig. 2. Waveforms showing the vertical component of velocity seismograms during VLF activity observed on 4 September 2015 by DONET2 stations (location shown in Figure 1). (a) Higher-frequency VLF (pink) and regular earthquake (yellow) signals with periods between 0.2 and 1 second. (b) Low-frequency signals with periods between 10 and 50 seconds. (c) Running spectrum of the vertical velocity seismogram at station MRF25. These plots, in the same time window, show a swarm activity of VLF events. BPF stands for band-pass filter; UTC is coordinated universal time.

    Recently, we have observed slow earthquakes and crustal deformation with durations lasting from 1 second to as long as 1 year (reviewed by Beroza and Ide [2011]). Like regular earthquakes, slow earthquakes are caused by shear slip on a fault plane, but the slip velocity is slow and lasts for a longer duration. Rupture of regular earthquakes continues at most for several minutes.

    Shallow VLF earthquakes are characterized by the dominance of seismic signals longer than 10 seconds, unlike regular earthquakes of similar ground velocity (Figure 2). Shallow VLF earthquakes are the least studied among slow earthquakes because their activity affects only a limited area. Also, because they originate below the ocean floor, observations close to the source are difficult to obtain.
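
    To make the band separation in Figure 2 concrete, here is a minimal filtering sketch. It is a generic illustration rather than DONET's actual processing chain, and the 100 Hz sampling rate and the random test trace are assumptions.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    FS = 100.0                         # assumed sampling rate, Hz
    t = np.arange(0, 600, 1 / FS)      # ten minutes of samples
    trace = np.random.randn(t.size)    # stand-in for a vertical velocity seismogram

    def bandpass(data, fmin, fmax, fs=FS, order=4):
        """Zero-phase Butterworth band-pass between fmin and fmax (in Hz)."""
        sos = butter(order, [fmin, fmax], btype="bandpass", fs=fs, output="sos")
        return sosfiltfilt(sos, data)

    regular_band = bandpass(trace, 1.0, 5.0)   # 0.2-1 s periods: regular earthquakes
    vlf_band = bandpass(trace, 0.02, 0.1)      # 10-50 s periods: VLF events

    In a trace containing a VLF event, most of the energy survives the 10-50 second filter but little shows up in the 0.2-1 second band, the reverse of what a regular earthquake of similar ground velocity would produce.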

    Shallow VLF earthquake activity along the Nankai trough was first observed in 2002 [Ishihara, 2003], and activity was observed again in 2004, 2009, and 2011. This activity along the Nankai trough occurred in six spatial clusters—studies showed that the shallow VLF earthquakes were caused by shear failure in the accretionary prism or plate boundary [Obara and Ito, 2005; Sugioka et al., 2012; To et al., 2015].

    Burst of VLF Earthquakes in 2015

    From August to November 2015, the DONET1 and DONET2 networks both detected swarms of VLF earthquakes (Figure 1). On 10 August 2015, just after DONET2 stations were deployed along the trough axis, VLF earthquake activity started south of Kii channel (Figure 3). One week later, a burst of activity was observed off Shiono-misaki, and a swarm started south of Kii channel that continued for about 1 month. Three weeks later, the DONET1 stations observed VLF activity in the Kumano fore-arc basin.

    Fig. 3. VLF earthquake activity in 2015. Note how VLF earthquakes seem to sequentially shift from cluster to cluster.

    It is not clear whether activity continued south of Kii channel and off Shiono-misaki in October, when DONET2 observations were interrupted for system maintenance, but by November 2015 all activity along the Nankai trough had ceased. This sequence of activity, in which bursts off Shiono-misaki were followed by bursts south of Kii channel and in the Kumano fore-arc basin, is very similar to that observed in 2009 [Obara and Ito, 2005].

    The densely distributed DONET stations directly above the source enabled us to determine precise source distribution of VLF local clusters even for small events [Sugioka et al., 2012]. The sources were in the accretionary prism or the oceanic crust at depths shallower than 10 kilometers below sea level, and the magnitudes of the VLF earthquakes were less than 4. These magnitudes are rather small compared with those during previous periods of VLF earthquake activity, when magnitudes as high as about 5 were recorded.

    Further Studies

    Together, DONET1, DONET2, and the borehole stations constitute an observation network that extends over 300 kilometers along the Nankai trough and covers more than half of the region in which clusters of shallow VLF earthquakes have been observed. Continuous monitoring of VLF activity immediately above the source clusters will reveal the spatiotemporal variations of the source characteristics, including stress levels. By comparing differences and similarities among the clusters and periods of VLF activity, we will gain further understanding of VLF earthquakes.

    Shallow VLF earthquakes in a cluster at Hyuga-nada, west of the DONET2 observation area, may have been synchronous with slow-slip events along the plate boundary [Asano et al., 2015]. Observations of crustal deformation and pore pressure changes made by our oceanic observation network will thus provide the data needed to understand how VLF earthquakes are related to other slow earthquakes and to plate motion. With this understanding, we will extend our knowledge of seismotectonic activity in the Nankai trough subduction zone.

    DONET1 and borehole observation data are publicly available, and DONET2 data will also be available once the network has been completed.

    Acknowledgments

    We used data obtained from the Hi-net and F-net operated by Japan’s National Research Institute for Earth Science and Disaster Resilience and the Japan Meteorological Agency for the hypocenter determinations of regular earthquakes.

    References

    Ando, M. (1975), Source mechanisms and tectonic significance of historical earthquakes along the Nankai trough, Japan, Tectonophysics, 27, 119–140.

    Asano, Y., K. Obara, T. Matsuzawa, H. Hirose, and Y. Ito (2015), Possible shallow slow slip events in Hyuga-nada, Nankai subduction zone, inferred from migration of very low frequency earthquakes, Geophys. Res. Lett., 42, 331–338, doi:10.1002/2014GL062165.

    Beroza, G. C., and S. Ide (2011), Slow earthquakes and nonvolcanic tremor, Annu. Rev. Earth Planet. Sci., 39, 271–296, doi:10.1146/annurev-earth-040809-152531.

    Ishihara, Y. (2003), Major existence of very low frequency earthquakes in background seismicity along subduction zone of southwestern Japan, Eos Trans. AGU, 84(46), Fall Meet. Suppl., Abstract S41C-0107.

    Kaneda, Y., K. Kawaguchi, E. Araki, H. Matsumoto, T. Nakamura, S. Kamiya, K. Ariyoshi, T. Hori, T. Baba, and N. Takahashi (2015), Development and application of an advanced ocean floor network system for megathrust earthquakes and tsunamis, in Seafloor Observatories, pp. 643–66, Springer, Heidelberg, Germany, doi:10.1007/978-3-642-11374-1_252.

    Kawaguchi, K., S. Kaneko, T. Nishida, and T. Komine (2015), Construction of the DONET real-time seafloor observatory for earthquakes and tsunami monitoring, in Seafloor Observatories, pp. 211–228, Springer, Heidelberg, Germany, doi:10.1007/978-3-642-11374-1_10.

    Obara, K., and Y. Ito (2005), Very low frequency earthquakes excited by the 2004 off the Kii peninsula earthquakes: A dynamic deformation process in the large accretionary prism, Earth Planets Space, 57, 321–326.

    Sugioka, H., T. Okamoto, T. Nakamura, Y. Ishihara, A. Ito, K. Obana, M. Kinoshita, K. Nakahigashi, M. Shinohara, and Y. Fukao (2012), Tsunamigenic potential of the shallow subduction plate boundary inferred from slow seismic slip, Nat. Geosci., 5, 414–418, doi:10.1038/NGEO1466.

    To, A., K. Obana, H. Sugioka, E. Araki, N. Takahashi, and Y. Fukao (2015), Small size very low frequency earthquakes in the Nankai accretionary prism, following the 2011 Tohoku-Oki earthquake, Phys. Earth Planet. Inter., 245, 40–51, doi:10.1016/j.pepi.2015.04.007.

    Citation: Nakano, M., T. Hori, E. Araki, N. Takahashi, and S. Kodaira (2016), Ocean floor networks capture low-frequency earthquake event, Eos, 97, doi:10.1029/2016EO052877. Published on 25 May 2016.
    © 2016. The authors. CC BY-NC-ND 3.0

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Eos is the leading source for trustworthy news and perspectives about the Earth and space sciences and their impact. Its namesake is Eos, the Greek goddess of the dawn, who represents the light shed on understanding our planet and its environment in space by the Earth and space sciences.

     
  • richardmitnick 8:29 am on May 27, 2016
    Tags: Applied Research & Technology, Superbug resistant to last resort antibiotic

    From COSMOS: “Superbug resistant to last resort antibiotic” 

    COSMOS

    27 May 2016
    Bill Condie

    The emergence of the first strain of bacteria resistant to colistin is causing alarm among health professionals. Bill Condie reports.

    A coloured scanning electron micrograph shows E. coli bacteria, a strain of which has now proved resistant to the last-resort antibiotic colistin. Credit: PASIEKA

    A patient has become the first in the US with an infection resistant to colistin – the antibiotic of last resort used to treat infections that have not responded to anything else.

    The so-called “superbug” is a strain of E. coli that could be the harbinger of pathogens resistant to all treatment.

    The case – a urinary tract infection of a 49-year-old Pennsylvania woman – was reported* in Antimicrobial Agents and Chemotherapy, a publication of the American Society for Microbiology.

    The study by the Walter Reed National Military Medical Center has alarmed health officials.

    “We risk being in a post-antibiotic world,” Thomas Frieden, director of the US Centers for Disease Control and Prevention, told Reuters.

    “The more we look at drug resistance, the more concerned we are. The medicine cabinet is empty for some patients. It is the end of the road for antibiotics unless we act urgently.”

    The study explains that the superbug itself had first been infected with a tiny piece of DNA called a plasmid, which passed along a gene called mcr-1 that confers resistance to colistin.

    “This heralds the emergence of truly pan-drug resistant bacteria,” said the study.

    “To the best of our knowledge, this is the first report of mcr-1 in the USA.”

    The mcr-1 gene was found last year in people and pigs in China and doctors are worried about the potential for the superbug to spread from animals to people.

    “It is dangerous and we would assume it can be spread quickly, even in a hospital environment if it is not well contained,” Gail Cassell, a microbiologist and senior lecturer at Harvard Medical School, told Reuters.

    Experts have warned of this potential problem since the 1990s; it has been driven, in part, by the overprescribing of antibiotics and their extensive use in food livestock.

    Extensive use of antibiotics in intensive farming is partly to blame for increased drug resistance in humans. Credit: Martin Harvey

    How did we come to this?

    In 1948, an American biochemist, Thomas H. Jukes, discovered that the addition of a cheap antibiotic to the diet of chickens made them gain weight much faster than normal.

    His company, Lederle, lost no time in rolling out the product as an agricultural supplement, effectively setting in train a vast uncontrolled experiment to transform the food chain.

    Now, nearly 70 years later, about 80% of antibiotic sales in the US go to livestock production rather than to human health care, despite the mounting evidence that drug resistance spills over from livestock to people.

    To read more about how this came about and the risks it poses, see How did antibiotics become part of the food chain?

    Some of the biggest effects of multi-drug resistance are being felt by patients with tuberculosis. The problem is particularly acute in countries such as South Africa, where patients carrying drug-resistant TB strains are routinely returned to their communities.

    TB is the leading cause of death in South Africa thanks to the large number of people co-infected with HIV.

    For extensively drug-resistant (XDR) TB, cure rates are now just 20% nationally.

    *Science paper:
    Escherichia coli Harboring mcr-1 and blaCTX-M on a Novel IncF Plasmid: First report of mcr-1 in the USA

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 4:09 pm on May 26, 2016
    Tags: Applied Research & Technology, Alzheimer’s Disease May Be Caused By Brain Infections

    From NOVA: “Alzheimer’s Disease May Be Caused By Brain Infections” 

    PBS NOVA

    26 May 2016
    Allison Eck

    Silent infections earlier in life could be at the root of Alzheimer’s disease.

    Alzheimer’s researchers have long presumed that amyloid beta proteins are the brain’s garbage, accumulating over time but serving no obvious purpose. These plaques trigger the formation of tau proteins (or “tangles”), which proceed to destroy nerve cells.

    Robert D. Moir of Harvard Medical School and Massachusetts General Hospital thought something was missing in this picture—and looked to the proteins of our innate immune system for answers. Moir and his colleague Rudolph E. Tanzi noticed that amyloid proteins look like these immune system proteins, which trap and then purge harmful viruses, yeast, fungi, and bacteria. The two scientists wanted to see if amyloid plaques serve a similar function in the brain.

    Salmonella bacteria, trapped in amyloid beta plaques.

    In one experiment, Moir and Tanzi subjected young mice’s brains to Salmonella bacteria. They noticed that plaques began to form around individual Salmonella bacteria and that in mice without amyloid beta, bacterial infections arose more quickly. The team’s work, published* Wednesday in the journal Science Translational Medicine, suggests that silent, often symptomless infections in the brain could be the precursor to the development of Alzheimer’s disease later in life.

    Here’s Gina Kolata, reporting for The New York Times:

    “The Harvard researchers report a scenario seemingly out of science fiction. A virus, fungus or bacterium gets into the brain, passing through a membrane—the blood-brain barrier—that becomes leaky as people age. The brain’s defense system rushes in to stop the invader by making a sticky cage out of proteins, called beta amyloid. The microbe, like a fly in a spider web, becomes trapped in the cage and dies. What is left behind is the cage—a plaque that is the hallmark of Alzheimer’s.

    So far, the group has confirmed this hypothesis in neurons growing in petri dishes as well as in yeast, roundworms, fruit flies and mice. There is much more work to be done to determine if a similar sequence happens in humans, but plans—and funding—are in place to start those studies, involving a multicenter project that will examine human brains.

    The finding may help explain why some people with Alzheimer’s have exhibited higher levels of herpes antibodies, a sign of previous infection, than others who didn’t have Alzheimer’s.”

    Of course, infection is likely not the only contributing factor. People with the ApoE4 gene aren’t as effective in breaking down beta amyloid, so any potential immune-like response by amyloid proteins could lead to an unhealthy buildup.

    Whatever the complex set of circumstances may be, this finding may fill in some of the missing links in Alzheimer’s research.

    *Science paper:
    Amyloid-β peptide protects against microbial infection in mouse and worm models of Alzheimer’s disease

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 3:02 pm on May 26, 2016
    Tags: Applied Research & Technology, TSRI Scientists Discover Mechanism that Turns Mutant Cells into Aggressive Cancers

    From Scripps: “TSRI Scientists Discover Mechanism that Turns Mutant Cells into Aggressive Cancers” 

    Scripps Research Institute

    Scientists at The Scripps Research Institute (TSRI) have caught a cancer-causing mutation in the act.

    A new study shows how a gene mutation found in several human cancers, including leukemia, gliomas and melanoma, promotes the growth of aggressive tumors.

    “We’ve found the mechanism through which this mutation leads to a scrambling of the genome,” said TSRI Associate Professor Eros Lazzerini Denchi, who co-led the study with Agnel Sfeir of New York University (NYU) School of Medicine. “That’s when you get really massive tumors.”

    The research, published* May 26, 2016 by the journal Cell Reports, also suggests a possible way to kill these kinds of tumors by targeting an important enzyme.

    A Puzzling Finding

    The researchers investigated mutations in a gene that codes for the protein POT1. This protein normally forms a protective cap around the ends of chromosomes (called telomeres), stopping cell machinery from mistakenly damaging the DNA there and causing harmful mutations.

    POT1 is so critical that cells without functional POT1 would rather die than pass on POT1 mutations. Stress in these cells leads to the activation of an enzyme, called ATR, that triggers programmed cell death.

    Knowing this, scientists in recent years were surprised to find recurrent mutations affecting POT1 in several human cancers, including leukemia and melanoma.

    “Somehow those cells found a way to survive—and thrive,” said Lazzerini Denchi. “We thought that if we could understand how that happens, maybe we could find a way to kill those cells.”

    It Takes Two to Tango

    Using a mouse model, the researchers found that mutations in POT1 lead to cancer when combined with a mutation in a gene called p53.

    “The cells no longer have the mechanism for dying, and mice develop really aggressive thymic lymphomas,” said Lazzerini Denchi.

    P53, a well-known tumor suppressor gene, is a cunning accomplice. When mutated, it overrides the protective cell death response initiated by ATR. Then, without POT1 creating a protective cap, the chromosomes are fused together and the DNA is rearranged, driving the accumulation of even more mutations. These mutant cells go on to proliferate and become aggressive tumors.

    The findings led the team to consider a new strategy for killing these tumors.

    Scientists know that all cells—even cancer cells—will die if they have no ATR. Since tumors with mutant POT1 already have low ATR levels, the researchers think a medicine that knocks out the remaining ATR could kill tumors without affecting healthy cells. “This study shows that by looking at basic biological questions, we can potentially find new ways to treat cancer,” said Lazzerini Denchi.
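
    To make the proposed strategy concrete, here is a toy numerical sketch, not taken from the study: it simply assumes that a cell dies once its ATR activity drops below some survival threshold, and that POT1-mutant tumor cells start with less ATR than normal cells. Every number and name in it is invented for illustration.

        # Toy illustration (not from the study): why an ATR inhibitor might kill
        # POT1-mutant tumor cells while sparing normal cells. All numbers are
        # invented for illustration only.

        SURVIVAL_THRESHOLD = 0.3  # hypothetical minimum ATR activity a cell needs

        baseline_atr = {
            "normal cell": 1.0,             # full ATR activity
            "POT1-mutant tumor cell": 0.4,  # already-reduced ATR activity
        }

        inhibitor_knockdown = 0.5  # hypothetical drug removes half of the remaining ATR

        for cell, atr in baseline_atr.items():
            remaining = atr * (1 - inhibitor_knockdown)
            fate = "dies" if remaining < SURVIVAL_THRESHOLD else "survives"
            print(f"{cell}: ATR {atr:.1f} -> {remaining:.2f} after inhibitor ({fate})")

        # With these made-up numbers the tumor cell falls below the threshold and the
        # normal cell does not, mirroring the therapeutic window described above.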

    The researchers plan to investigate this new therapeutic approach in future studies.

    In addition to Lazzerini Denchi and Sfeir, authors of the study, “Telomere replication stress induced by POT1 inactivation accelerates tumorigenesis,” were Angela Beal and Nidhi Nair of TSRI; Alexandra M. Pinzaru, Aaron F. Phillips, Eric Ni and Timothy Cardozo of the NYU School of Medicine; Robert A. Hom and Deborah S. Wuttke of the University of Colorado; and Jaehyuk Choi of Northwestern University.

    The study was supported by the National Institutes of Health (grants AG038677, CA195767 and GM059414), a NYSTEM institutional training grant (C026880), a scholarship from the California Institute for Regenerative Medicine, a Ruth L. Kirschstein National Research Service Award (GM100532), The V Foundation for Cancer Research, two Pew Stewart Scholars Awards and the Novartis Advanced Discovery Institute.

    *Science paper:
    Telomere Replication Stress Induced by POT1 Inactivation Accelerates Tumorigenesis

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The Scripps Research Institute (TSRI), one of the world’s largest private, non-profit research organizations, stands at the forefront of basic biomedical science, a vital segment of medical research that seeks to comprehend the most fundamental processes of life. Over the past several decades, the institute has established a lengthy track record of major contributions to the betterment of health and the human condition.

    The institute — which is located on campuses in La Jolla, California, and Jupiter, Florida — has become internationally recognized for its research into immunology, molecular and cellular biology, chemistry, neurosciences, autoimmune diseases, cardiovascular diseases, virology, and synthetic vaccine development. Particularly significant is the institute’s study of the basic structure and design of biological molecules; in this arena TSRI is among a handful of the world’s leading centers.

    The institute’s educational programs are also first rate. TSRI’s Graduate Program is consistently ranked among the best in the nation in its fields of biology and chemistry.

     
  • richardmitnick 12:41 pm on May 26, 2016 Permalink | Reply
    Tags: Applied Research & Technology, , Tiny Vampires, ,   

    From UCSB: “Tiny Vampires” Women in Science (No, the women are not the vampires in question) 

    UC Santa Barbara Name bloc

    May 25, 2016
    Julie Cohen

    Susannah Porter. Photo Credit: Sonia Fernandez

    Paleobiologist Susannah Porter finds evidence of predation in ancient microbial ecosystems dating back more than 740 million years.

    The Chuar Group in the Grand Canyon was once an ancient seabed. Photo Credit: Carol Dehler

    Vampires are real, and they’ve been around for millions of years. At least, the amoebae variety has. So suggests new research from UC Santa Barbara paleobiologist Susannah Porter.

    Using a scanning electron microscope to examine minute fossils, Porter found perfectly circular drill holes that may have been formed by an ancient relation of Vampyrellidae amoebae. These single-celled creatures perforate the walls of their prey and reach inside to consume its cell contents. Porter’s findings* appear in the Proceedings of the Royal Society B.

    “To my knowledge these holes are the earliest direct evidence of predation on eukaryotes,” said Porter, an associate professor in UCSB’s Department of Earth Science. Eukaryotes are organisms whose cells contain a nucleus and other organelles such as mitochondria.

    “We have a great record of predation on animals going back 550 million years,” she continued, “starting with the very first mineralized shells, which show evidence of drillholes. We had nothing like that for early life — for the time before animals appear. These holes potentially provide a way of looking at predator-prey interactions in very deep time in ancient microbial ecosystems.”

    Porter examined fossils from the Chuar Group in the Grand Canyon — once an ancient seabed — that are between 782 and 742 million years old. The holes are about one micrometer (one thousandth of a millimeter) in diameter and occur in seven of the species she identified. The holes are not common in any single species; in fact, they appear in no more than 10 percent of the specimens.

    “I also found evidence of specificity in hole sizes, so different species show different characteristic hole sizes, which is consistent with what we know about modern vampire amoebae and their food preferences,” Porter said. “Different species of amoebae make differently sized holes. The Vampyrellid amoebae make a great modern analog, but because vampire-like feeding behavior is known in a number of unrelated amoebae, it is difficult to pin down exactly who the predator was.”
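
    As a rough illustration of what such specificity in hole sizes could look like in practice, here is a minimal sketch, not the paper’s actual analysis, of how drill-hole diameters might be grouped by host species and summarized. The species names and measurements are entirely hypothetical.

        # Hypothetical sketch (not the paper's analysis): group drill-hole diameters
        # by host species and summarize them, to show what species-specific hole
        # sizes would look like. Species names and measurements are invented.
        from statistics import mean, stdev

        holes_um = {  # diameters in micrometers
            "host species A": [0.80, 0.90, 1.00, 0.85],
            "host species B": [1.30, 1.40, 1.25, 1.35],
            "host species C": [0.50, 0.55, 0.60],
        }

        for species, diameters in holes_um.items():
            print(f"{species}: n={len(diameters)}, "
                  f"mean={mean(diameters):.2f} um, sd={stdev(diameters):.2f} um")

        # Distinct, non-overlapping mean diameters across host species would be
        # consistent with predators that drill characteristically sized holes in
        # their preferred prey.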

    According to Porter, this evidence may help to address the question of whether predation was one of the driving factors in the diversification of eukaryotes that took place about 800 million years ago.

    “If that is true, then if we look at older fossil assemblages — say 1 to 1.6 billion years old — the fossilized eukaryotes should show no evidence of predation,” Porter said. “I’m interested in finding out when drilling first appears in the fossil record and whether its intensity changes through time.”

    Porter is also interested in whether oxygen played a role in predation levels through time. She noted that the organisms the predators attacked, now preserved as microfossils, were probably phytoplankton living in oxygenated surface waters, whereas the predators, like vampyrellid amoebae today, may have lived in the sediments. She suggests that those phytoplankton made tough-walled cysts — resting structures now preserved as fossils — that sank to the bottom, where they were attacked by the amoebae.

    “We have evidence that the bottom waters in the Chuar Group in that Grand Canyon basin were relatively deep — 200 meters deep at most — and sometimes became anoxic, meaning they lacked oxygen,” Porter explained.

    “I’m interested to know whether the predators were present and making these drill holes only when the bottom waters contained oxygen,” Porter added. “That might tie the diversification of eukaryotes and the appearance of predators to evidence for increasing oxygen levels around 800 million years ago.”

    “We know from the modern vampire amoebae that at least some of them make resting cysts themselves,” Porter said. “A former student of mine joked we should call these coffins. So one of our motivations is to see if we can find these coffins in the fossil assemblage as well. That’s the next project.”

    *Science paper:
    Tiny vampires in ancient seas: evidence for predation via perforation in fossils from the 780–740 million-year-old Chuar Group, Grand Canyon, USA

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition
    UC Santa Barbara Seal
    The University of California, Santa Barbara (commonly referred to as UC Santa Barbara or UCSB) is a public research university and one of the 10 general campuses of the University of California system. Founded in 1891 as an independent teachers’ college, UCSB joined the University of California system in 1944 and is the third-oldest general-education campus in the system. The university is a comprehensive doctoral university and is organized into five colleges offering 87 undergraduate degrees and 55 graduate degrees. In 2012, UCSB was ranked 41st among “National Universities” and 10th among public universities by U.S. News & World Report. UCSB houses twelve national research centers, including the renowned Kavli Institute for Theoretical Physics.

     