
  • richardmitnick 1:00 pm on October 10, 2015

    From Ethan Siegel: “How Do Photons Experience Time?” 

    Starts With A Bang

    Ethan Siegel

    Image credit: NASA / International Space Station.

    “Everyone has his dream; I would like to live till dawn, but I know I have less than three hours left. It will be night, but no matter. Dying is simple. It does not take daylight. So be it: I will die by starlight.” -Victor Hugo

    Each week, you send in your questions and suggestions for our Ask Ethan column, and I go through and pick the one that I think will make the best story for you all. There were some great options this week, but since this is the 110th anniversary of special relativity and the 100th of general relativity, I thought I’d pick a question that requires a look to [Albert] Einstein for the answer. So let’s take a look at our submission from our reader Erwin, who asks:

    [L]ight takes about 8 minutes to travel from the sun to earth. Light travels at the speed of light. If you travel that fast, relativity kicks in. So my question is, how much time passes for the photons traveling? In other words, how much have the photons aged when they reach the earth? Thanks for considering this.

    If your intuition is to just say, “eight minutes,” I’d have a hard time arguing with you. After all, that’s how much the photon ages for us.

    Image credit: NASA / International Space Station.

    If a 0.5 mile (0.8 km) walk to the store takes eight minutes, and you walk to the store, you age eight minutes. And if the shopkeeper watched you walk to the store, she’d know you aged eight minutes, too. If all we did was adhere to the Newtonian definition of time — with the notion that time was an absolute quantity — this would be true for absolutely anything in the Universe: everyone, everywhere would experience time passing at the same rate in all circumstances.

    But if this were the case, the speed of light couldn’t be a constant.

    Image credit: Noreen of

    Imagine you stand still on the ground, shining a flashlight in one direction at an object one light-second away. Now imagine you’re running towards that same object, shining that same flashlight. The faster you run, the faster you’d expect that light to go: it ought to move at whatever speed light-at-rest moves at plus whatever speed you run at.

    Why isn’t that what actually happens?

    I want you to imagine that you’ve got a clock, only instead of a clock where a gear turns and the hands move, you have one where a single photon of light bounces up-and-down between two mirrors. If your clock is at rest, you see the photon bouncing up-and-down, and the seconds pass as normal. But if your clock is moving and you watch it go by, how will the seconds pass now?

    Image credit: John D. Norton, via

    Quite clearly, it takes longer for the bounces to occur if the speed of light is always a constant. If time ran at the same rate for everyone, everywhere and under all conditions, then we’d see the speed of light become arbitrarily fast the faster something moved. And what’s even worse: if something moved very quickly and then turned on a flashlight pointing in the opposite direction, we’d see that light barely move at all; it would be almost at rest.

    Since light doesn’t do this — or change its speed-in-a-vacuum under any circumstances — we know this naive picture is wrong.
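    The light-clock argument can be made quantitative. Here is a minimal sketch (the one-metre mirror gap and the 0.8c speed are illustrative numbers, not from the article) that solves the bounce geometry for a moving clock:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tick_time(mirror_gap_m, v_m_per_s):
    """Seconds per tick (one up-down bounce) of a light clock.

    For a clock moving at speed v, the photon travels a diagonal:
    c * t_leg = sqrt(gap**2 + (v * t_leg)**2). Solving for t_leg and
    doubling gives the rest-frame tick stretched by the Lorentz factor.
    """
    beta = v_m_per_s / C
    rest_tick = 2 * mirror_gap_m / C
    return rest_tick / math.sqrt(1 - beta**2)

# A clock moving at 80% of light speed ticks more slowly by the
# Lorentz factor 1/sqrt(1 - 0.8**2) = 1/0.6:
print(tick_time(1.0, 0.8 * C) / tick_time(1.0, 0.0))  # ≈ 1.6667
```

    The stretching factor that falls out of this simple geometry is exactly the Lorentz factor of special relativity.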

    Image credit: Shutterstock/Pixomar.

    In 1905, [Albert] Einstein put forth his theory of special relativity, noting that the null result of the Michelson-Morley experiment and the phenomena of length contraction and time dilation could all be explained if the speed of light in a vacuum were a universal constant, c. The consequence is that the closer to the speed of light something moves, the more relativity matters: an observer at rest sees her own times and distances as normal, while someone “riding” the fast-moving object finds that the journey covered a shorter distance, and took a shorter time, than the at-rest observer measured.

    Image credit: The Curious Astronomer, via

    In fact, when you make that eight minute walk to the store, thanks to Einstein’s relativity, the time on your watch (assuming it was super accurate and matched the shopkeeper’s watch exactly before you left) would now read just under two nanoseconds behind the shopkeeper’s watch! The effects of relativity, even though they’re small under most circumstances, are always at play.

    The reason is that things don’t just move through space, and they don’t just move forward in time: space and time are linked as parts of a unified fabric, spacetime.

    Image credit: Clear Science, via

    This was first realized in 1908 by one of Einstein’s former teachers, Hermann Minkowski, who said:

    The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.

    The way this works is that everyone and everything always moves through spacetime, and always with a very particular relationship: you move a fixed total amount through the combination of space and time, no matter how you move relative to anything else.

    Image credit: (C) Encyclopaedia Britannica, Inc.

    If you move through space quickly from a certain point of view, you move through less time: this is why, when you walked to the store, your journey through time was around 2 nanoseconds shorter than the shopkeeper’s. You moved through space more quickly than she did, and so you moved through time a little bit less than her. If you moved faster, your clock would fall even farther behind. In fact, if you moved very close to the speed of light — at 99.9999999% the speed of light on that journey to the store — then no matter how far away that store was, the shopkeeper would see that 22,000 times as much time passed for her as passed for you.
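    That 22,000× figure can be checked directly from the Lorentz factor, γ = 1/√(1 − v²/c²); a quick sketch:

```python
import math

beta = 0.999999999  # 99.9999999% of the speed of light
# Factored form (1 - beta)*(1 + beta) avoids rounding error in 1 - beta**2
gamma = 1 / math.sqrt((1 - beta) * (1 + beta))
print(round(gamma))  # 22361: about 22,000, as the article says
```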

    A relativistic journey toward the constellation of Orion. Image credit: Alexis Brandeker, via

    So now, with all of that in mind, let’s come to the photon itself. It’s not moving near the speed of light, but actually at the speed of light. All of our formulas for describing what an observer experiences give answers with infinities in them when we ask what happens at the speed of light. But infinities don’t always mean the physics is wrong; they often mean that the physics does something unintuitive. Moving at the speed of light means the following:

    You absolutely cannot have a mass; if you did, you’d carry an infinite amount of energy at the speed of light. You must be massless.
    You will not experience any of your travels through space. All the distances along your direction of motion will be contracted down to a single point.
    And you will not experience the passage of time; your entire journey will appear to you to be instantaneous.

    Image credit: Wikimedia Commons user LucasVB.

    For an observer here on Earth, the light will be emitted from the Sun some eight minutes (more like 8:20) before we receive it, and if we could “watch” the photon travel, it would appear to move at the speed of light throughout its entire journey. But if there were a “clock” on board this photon, it would appear to be entirely stopped to us. While those just-over-eight-minutes would pass as normal for us, the photon would experience absolutely no passage of time.

    This gets particularly disturbing when we look at distant galaxies in the Universe.

    Image credit: NASA, ESA, S. Beckwith (STScI) and the HUDF Team.

    NASA/ESA Hubble Space Telescope

    The light emitted from them takes billions of years to reach us from our point of view as observers in the Milky Way. During this time, the expansion of the Universe causes space to stretch and the energy of the emitted photons to drop tremendously: a cosmological redshift. Yet despite this incredible journey, the photon itself experiences none of what we know as time: it is simply emitted and then instantaneously absorbed, experiencing the entirety of its travels through space in literally no time. Given everything we know, a photon never ages in any way at all.
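    The energy loss follows a simple rule: expansion stretches the photon’s wavelength by a factor of (1 + z), so its energy drops by the same factor. A toy sketch (the 2 eV photon and z = 3 are illustrative numbers, not from the article):

```python
def redshifted(energy_emitted_ev, z):
    """Observed photon energy after cosmological redshift z: expansion
    stretches the wavelength by (1 + z), so E = h*c/wavelength drops
    by the same factor."""
    return energy_emitted_ev / (1 + z)

# A 2.0 eV visible photon emitted at z = 3 arrives as 0.5 eV infrared light.
print(redshifted(2.0, 3))  # 0.5
```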

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible.

  • richardmitnick 7:24 am on October 10, 2015

    From The Conversation: “Chemistry Nobel DNA research lays foundation for new ways to fight cancer” 

    The Conversation

    October 8, 2015
    Rachel Litman Flynn

    You’d be in bad shape if your cells couldn’t fix DNA issues that arise. redondoself, CC BY

    Our cells are up against a daily onslaught of damage to the DNA that encodes our genes. It takes constant effort to keep up with the DNA disrepair – and if our cells didn’t bother to try to fix it, we might not survive. The DNA damage repair pathways are an essential safeguard for the human genome.

    The 2015 Nobel Laureates in chemistry received the prize for their pioneering work figuring out the molecular machinery that cells use to repair that DNA damage. In their basic research, Tomas Lindahl, Paul Modrich and Aziz Sancar each homed in on one piece of the DNA repair puzzle.

    They’ve laid the framework for the research that many basic and translational scientists are expanding upon to try to crack cancer. Ironically, we’re finding ways to turn that DNA repair system against cancerous cells that have often arisen from DNA damage in the first place.

    UV light from the sun is one cause of DNA mutations. NASA/David Herring, CC BY

    DNA under siege

    DNA is composed of four simple letters, or nucleotides, A, T, C and G. When combined, these nucleotides form the genetic code. There are approximately 30,000 genes in the human genome.

    Each time a cell grows and divides, every single gene needs to be faithfully copied to the next generation of cells. This process of DNA replication is constantly threatened by both internal and external sources of DNA damage. There are environmental sources such as radon from the earth or UV light from the sun. Or it can be just a mistake, happening within the cell as a consequence of normal growth and division. Some studies have estimated that a single cell can experience several thousand DNA damage events in a single day.

    The question then becomes: how does the cell repair all of this damage? Or perhaps more worrisome, what happens if the cell doesn’t repair the damage?

    A full toolbox to deal with the damage

    To counter the daily onslaught of DNA damage, mammalian cells have evolved a number of intricate mechanisms to not only recognize DNA damage, but repair it and restore the original genetic sequence.

    Consider a typo that changes the letter N to the letter M, causing “grin” to become “grim.” That single typo has now changed the entire meaning of the word. It works just the same in the “words” of the genetic code when an incorrect nucleotide takes the place of the right one. The DNA damage repair enzymes function like an eraser reverting the mutant M back to the original N.
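    In spirit, mismatch repair is a compare-and-revert operation against the template strand. A toy sketch (the sequences are made up for illustration; real repair machinery excises and resynthesizes a patch around the error rather than swapping single letters):

```python
def find_mismatches(template, copy):
    """Positions where the copied strand disagrees with the template."""
    return [i for i, (t, c) in enumerate(zip(template, copy)) if t != c]

def repair(template, copy):
    """Revert every mismatched base to the template's letter,
    like the 'eraser' in the analogy above."""
    return "".join(t if t != c else c for t, c in zip(template, copy))

template = "GATTACA"   # made-up sequence for illustration
copy     = "GATCACA"   # one mis-incorporated base during replication
print(find_mismatches(template, copy))  # [3]
print(repair(template, copy))           # GATTACA
```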

    DNA excision repair gets rid of a mistaken nucleotide and fills in what should be there. LadyofHats

    Following DNA damage, the cell must first recognize the damage and then alert the system that there’s a problem. The recognition machinery then activates various factors to halt cell growth until the damage has been repaired. And if things are too far gone, additional factors are poised and ready to induce cell death.

    That’s the most basic way to think about the DNA damage response pathway: as a simple chain of events. Of course it’s a lot more complicated: a complex network of checks and balances ensures that the DNA damage is not only recognized but clearly identified, so that the correct factors are recruited to repair the lesion.

    Much like a homeowner wouldn’t want an electrician to fix the leaky roof, a DNA “typo” shouldn’t be fixed by a mechanism used to heal double-strand DNA breaks, for instance. Therefore, sensing which specific genetic lesion is the problem is one of the earliest and most critical steps in the DNA damage response pathway.

    It’s hard to say exactly how many DNA damage response “sensors” there are or exactly who they are, but that’s something the field is actively investigating. Likewise, while the number of DNA damage repair pathways we know about hasn’t necessarily increased since the groundbreaking Nobel work was done, the complexity of our understanding has.

    What if the repair process itself is broken?

    In a limited capacity, mutations actually aid evolution. There have to be changes for natural selection to act on, so these DNA mutations are a significant factor in Darwin’s theory of evolution. However, what is a blessing may also be a curse.

    Mutations in essential genes can lead to death even before we enter the world. However, mutations in nonessential genes may not be evident until later in life. When these mutations persist – or even worse, accumulate – it can lead to genomic instability. And that’s a hallmark of cancer cells.

    You can imagine, then, that a single mutation in a component of the DNA damage response pathway could lead to the accumulation of DNA damage, genomic instability and ultimately the progression toward cancer. And it’s true, we frequently find mutations in the DNA damage response pathway in cancer. Deciphering exactly how these pathways work is essential to our understanding not only of cancer, but also of how we might exploit these pathways to actually treat the disease.
    Harnessing the repair systems to our own ends

    These damage repair pathways are essential to prevent the accumulation of genetic lesions and ultimately inhibit the progression toward cancer. Is there a way we can exploit the system, push it over the edge and cause an unwanted cell not just to gain mutations but to die?

    To that end, researchers are hard at work trying to further define the nitty gritty details that regulate the DNA damage response. Others are trying to identify factors that we could target therapeutically.

    It may seem counterintuitive to target the DNA damage response pathway once it’s already been inactivated by a mutation. But the approach has its advantages.

    Generally, when a genetic mutation inactivates one branch of repair, the cell will try to compensate by using another type of repair just to keep the cell alive. Would you rather call the electrician and hope he can fix the leaky roof or risk having the entire roof collapse in on you?

    The cell opts for a back-up mechanism to try to resolve the damage. In general, this results in inadequate repair and the acquisition of additional mutations, fueling the genomic instability and cancer progression.

    We want to eliminate the back-up mechanism – send the electrician out of town. Research has demonstrated that when one type of repair mechanism is inactivated by a genetic mutation and you therapeutically inactivate the back-up mechanism, the cancerous cell dies. Likewise, if we combine drugs that induce a particular type of damage and then inactivate that specific repair pathway, cells die. Clinical scientists have demonstrated that this can lead to tumor regression in patients, sparking a surge of research in this area.

    In these dividing cells, DNA is colored white. They were treated with ATR-inhibitor molecules that interfere with DNA damage repair. Dr Neil J Ganem, Boston University, CC BY-ND

    Targeting telomeres

    My lab is interested in understanding how the DNA damage response is regulated specifically at telomeric DNA.

    The telomere is a repetitive DNA sequence that caps the ends of each human chromosome. Telomeres function as a barrier, protecting the human genome from degradation and/or the fusion of whole chromosomes.

    Each time a cell divides, a portion of this barrier is lost; over time the shortened telomere compromises the genome’s stability. To avoid damage to the genome, critically short telomeres send a signal to the cell to either stop growing or induce cell death.

    Cancer cells, however, have evolved mechanisms to overcome progressive telomere shortening and bypass this growth arrest. In other words, they outmaneuver the normal routine, dividing and growing while avoiding the usual step of telomere shortening that eventually leads to death for normal cells. One way they counter telomere shortening and promote telomere elongation is by activating the Alternative Lengthening of Telomeres pathway (ALT).

    The ALT mechanism is active in 10%-15% of all human cancers. This incidence skyrockets to approximately 60% in some of the most aggressive forms of human cancer, including osteosarcoma and glioblastoma. These cancers are often resistant to common therapeutic strategies and there are no therapies that specifically target the ALT pathway.

    Representative chromosome spread from ALT cells where telomeres are stained with either a red or green probe. A yellow signal indicates a ??? [not complete]

    In my lab, we’re focusing on one of the molecules that senses DNA damage in the first place, the ATR kinase. We’ve found that preventing it from doing its job leads to both a decrease in recombination at telomeres and an increase in telomere loss at the chromosome ends, suggesting a defect in ALT activity.

    Perhaps most significant is that ATR inhibition led to catastrophic cell division and robust cell death in ALT-positive cancer cells, yet had little effect on non-cancerous cell lines.

    These studies may allow us to drive ATR inhibitors into preclinical development with the ultimate goal of improving the therapeutic strategies in the treatment of some of the most aggressive forms of human cancer.

    Time-lapse live-cell imaging experiment from the author’s lab investigating how to disrupt cancer cells by disrupting telomere maintenance.
    Download the mp4 video here.

    It’s this kind of translational research that builds on the framework laid by the work of our newest Nobel laureates in chemistry. Their basic research is proving to be the foundation for new ways to target – and hopefully treat – cancer.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The Conversation US launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors works with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

  • richardmitnick 6:58 am on October 10, 2015

    From Nature: “Gene-editing record smashed in pigs” 

    Nature Mag

    06 October 2015
    Sara Reardon

    Researchers modify more than 60 genes in effort to enable organ transplants into humans.

    Geneticist George Church has co-founded a company that is developing genetically modified pigs to grow organs for human transplant. Jessica Rinaldi/Reuters

    For decades, scientists and doctors have dreamed of creating a steady supply of human organs for transplantation by growing them in pigs. But concerns about rejection by the human immune system and infection by viruses embedded in the pig genome have stymied research. Now, by modifying more than 60 genes in pig embryos — ten times more than have been edited in any other animal — researchers believe they may have produced a suitable non-human organ donor.

    The work was presented on 5 October at a meeting of the US National Academy of Sciences (NAS) in Washington DC on human gene editing. Geneticist George Church of Harvard Medical School in Boston, Massachusetts, announced that he and colleagues had used the CRISPR/Cas9 gene-editing technology to inactivate 62 porcine endogenous retroviruses (PERVs) in pig embryos. These viruses are embedded in all pigs’ genomes and cannot be treated or neutralized. It is feared that they could cause disease in human transplant recipients.

    The gene-edited pigs will be raised in isolation from pathogens. ableimages / Alamy Stock Photo

    Church’s group also modified more than 20 genes in a separate set of pig embryos, including genes that encode proteins that sit on the surface of pig cells and are known to trigger a human immune response or cause blood clotting. Church declined to reveal the exact genes, however, because the work is as yet unpublished. Eventually, pigs intended for organ transplants would need both these modifications and the PERV deletions.

    Preparing for implantation

    “This is something I’ve been wanting to do for almost a decade,” Church says. A biotech company that he co-founded to produce pigs for organ transplantation, eGenesis in Boston, is now trying to make the process as cheap as possible.

    Church released few details about how his team managed to remove so many pig genes. But he says that both sets of edited pig embryos are almost ready to implant into mother pigs. eGenesis has procured a facility at Harvard Medical School where the pigs will be implanted and raised in isolation from pathogens.

    Jennifer Doudna, a biochemist at University of California, Berkeley, who was one of the inventors of CRISPR/Cas9 technology, is impressed by the number of edited genes. If the work holds up, she says, it could be useful for synthetic-biology applications where genes can be switched on and off. In microorganisms, creating these circuits requires the insertion or modification of multiple genes that regulate one another.

    Cutting multiple genes will also be useful for human therapies, says George Daley, a stem-cell biologist at Harvard Medical School, because many diseases with a genetic component involve more than one gene.

    Nature doi:10.1038/nature.2015.18525

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Nature is a weekly international journal publishing the finest peer-reviewed research in all fields of science and technology on the basis of its originality, importance, interdisciplinary interest, timeliness, accessibility, elegance and surprising conclusions. Nature also provides rapid, authoritative, insightful and arresting news and interpretation of topical and coming trends affecting science, scientists and the wider public.

  • richardmitnick 3:50 pm on October 9, 2015
    Tags: Smoking kills

    From Rutgers: “Oscar Auerbach: A Professor at Rutgers New Jersey Medical School Who Proved the Case Against Tobacco Use” 

    Rutgers University

    October 6, 2015
    Rob Forman

    Courtesy: George F. Smith Library of the Health Sciences/Rutgers University
    Oscar Auerbach painstakingly examined microscopic changes in lung tissue to prove that smoking causes lung cancer.

    Smoking kills. Those two words are by now undeniable. But in the 1950s and ’60s, while there was mounting evidence that cigarettes directly cause lung cancer, there was just enough scientific doubt that the United States government felt unable to take action. Oscar Auerbach put that doubt to rest forever.

    Auerbach was a pathologist who would serve for more than 30 years on the faculty of what is now Rutgers New Jersey Medical School. When he began investigating the effects of smoking, the vast majority of data on the subject came from epidemiological studies. Epidemiology measures the incidence of disease in large populations, and then seeks to trace the cause.

    All signs from epidemiological research pointed toward a strong link between lung cancers and smoking. But powerful tobacco companies, as well as states whose economies relied on tobacco farming, questioned the validity of that link. They argued that while a statistical association did exist between rates of smoking and the incidence of lung cancer, nobody had established a cause-and-effect relationship.

    Kenneth M. Klein – now a Rutgers New Jersey Medical School professor – says when he was a young pathologist, the doubters went to great lengths to claim that tobacco and cancer were not linked. “Literally I heard people arguing, ‘well, maybe it’s refrigerators,’” says Klein, who served alongside Auerbach on the school’s faculty for 21 years. “They said if you follow the incidence of lung cancer in the 20th century, it not only parallels the consumption of tobacco, but it also parallels when refrigerators became available and people started to buy refrigerators. How do you argue with that?”

    Through intensive research, Oscar Auerbach found a way. The Veterans Administration cared for numerous vets who were dying from lung cancer – soldiers, sailors and airmen who had smoked prolifically for years. Auerbach led a team that performed those patients’ autopsies, examining as many as 10 times the number of microscopic slides that a standard post mortem required. It was painstaking, labor-intensive work.

    “He did very meticulous autopsies,” says Klein, “and was able to correlate the changes that he saw with the known tobacco consumption that these individuals had been exposed to. You see changes in bronchi, or you can see the tumor. And it’s very hard to refute when you see it directly.”

    But Auerbach still had to do more. Science isn’t truly convinced of cause and effect until results can be replicated. That meant seeing in real time whether cigarette smoke induced lung disease. For that, with funding from sources that included the American Cancer Society, Auerbach studied the effects of smoking on lab animals whose lung tissues strongly resembled those of humans. What he had found in the veterans’ lungs was happening in those animals’ lungs. It was now indisputable that smoking causes lung cancer.

    Millions of lives saved

    In 1964, United States Surgeon General Luther Terry released a report on the health effects of smoking that was so damning he waited until the stock exchanges had closed to issue his findings, so as not to cause major market disruptions. The report, which cited Oscar Auerbach by name seven times, directly linked smoking to lung cancer and concluded that “smoking is a health hazard of sufficient importance in the United States to warrant appropriate remedial action.”

    From that report grew the health warnings and other legal measures that are credited with reducing per capita consumption of cigarettes by nearly 75 percent since the report was issued, according to the Centers for Disease Control and Prevention. A 2014 study funded by the National Cancer Institute estimates that 8 million premature deaths were averted in the 50 years following the Terry report, with average improvements in lifespan of 19 to 20 years.

    Contributing as he did to this revolutionary improvement in public health was Oscar Auerbach’s crowning achievement, but he never slowed down. One week before his death in 1997, at age 92, he was still teaching New Jersey Medical School students.

    He also wore his achievement with the utmost humility.

    “He was a sweetheart of a person,” says Auerbach’s longtime colleague Klein. “You would never know just chatting with him in the hallway that this was an internationally renowned and acclaimed physician-scientist, who did so much.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Rutgers, The State University of New Jersey, is a leading national research university and the state’s preeminent, comprehensive public institution of higher education. Rutgers is dedicated to teaching that meets the highest standards of excellence; to conducting research that breaks new ground; and to providing services, solutions, and clinical care that help individuals and the local, national, and global communities where they live.

    Founded in 1766, Rutgers teaches across the full educational spectrum: preschool to precollege; undergraduate to graduate; postdoctoral fellowships to residencies; and continuing education for professional and personal advancement.

    Rutgers Seal

  • richardmitnick 3:37 pm on October 9, 2015
    Tags: Hydrogen storage

    From Sandia Lab: “Bay Area national labs team to tackle long-standing automotive hydrogen storage challenge” 

    Sandia Lab

    October 8, 2015
    Patti Koning, (925) 294-4911

    Sandia National Laboratories chemist Mark Allendorf, shown here at Berkeley Lab’s Advanced Light Source facility, is leading the Hydrogen Materials – Advanced Research Consortium (HyMARC) to advance solid-state materials for onboard hydrogen storage. (Photo by Dino Vournas)

    Sandia National Laboratories will lead a new tri-lab consortium to address unsolved scientific challenges in the development of viable solid-state materials for storage of hydrogen onboard vehicles. Better onboard hydrogen storage could lead to more reliable and economic hydrogen fuel cell vehicles.

    “Storing hydrogen on board vehicles is a critical enabling technology for creating hydrogen-fueled transportation systems that can reduce oil dependency and mitigate the long-term effects of burning fossil fuels on climate change,” said Sandia chemist Mark Allendorf, the consortium’s director.

    Called the Hydrogen Materials – Advanced Research Consortium (HyMARC), the program is funded by the U.S. Department of Energy’s (DOE) Fuel Cell Technologies Office within the Office of Energy Efficiency and Renewable Energy at $3 million per year for three years, with the possibility of renewal. In addition to Sandia, the core team includes Lawrence Livermore and Lawrence Berkeley national laboratories.

    The consortium will address the gaps in solid-state hydrogen storage by leveraging recent advances in predictive multiscale modeling, high-resolution in situ characterization and material synthesis. Past efforts, which synthesized and characterized hundreds of materials for solid-state hydrogen storage, laid a solid foundation for the current work, including an understanding of the kinetics and thermodynamics governing the physical properties of these types of storage methods.

    “By focusing on the underlying properties and phenomena that limit the performance of storage materials, we will generate much-needed understanding that will accelerate the development of all types of advanced storage materials, including sorbents, metal hydrides and liquid carriers,” said Brandon Wood, who is leading the Lawrence Livermore team.

    Sandia is an international leader in hydrogen materials science, exemplified by its role as the lead lab in DOE’s Metal Hydride Center of Excellence, which ran from 2005-2010. The consortium will leverage the core capabilities of the three partners, primarily synthetic chemistry at Sandia, theory and modeling at Lawrence Livermore and characterization at Berkeley Lab.

    The world-class supercomputing facilities at Lawrence Livermore and Sandia are key elements of the team’s strategy to develop the enabling science for hydrogen solid storage technologies, along with advanced experimental tools available at Berkeley Lab’s Advanced Light Source [ALS] and Molecular Foundry facilities.

    LBL Advanced Light Source

    Current hydrogen storage misses capacity, cost targets

    In the past five years, fuel cell electric vehicles (FCEVs) have gone from a concept to reality. Automakers are starting to roll out commercial FCEVs and investments are being made to deploy hydrogen refueling infrastructure, especially in early markets, such as California and the Northeast.

    However, commercial light-duty FCEVs are designed for 700-bar compressed hydrogen storage on board the vehicle, and the refueling infrastructure now being deployed dispenses compressed hydrogen. Although compressed hydrogen provides a near-term pathway to commercialization, this storage method falls short of DOE targets for onboard hydrogen storage, particularly for volumetric hydrogen energy density and cost.
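
    As a rough back-of-envelope check (standard physical estimates, not figures from the article), the ideal-gas law already caps how much hydrogen fits in a 700-bar tank, and real hydrogen stores less than the ideal-gas value because its compressibility factor Z exceeds 1 at such pressures:

```python
# Back-of-envelope estimate (textbook physics, not from the article):
# volumetric density of hydrogen gas at 700 bar and room temperature.
R = 8.314        # gas constant, J/(mol K)
M_H2 = 2.016e-3  # molar mass of H2, kg/mol
p = 700e5        # 700 bar in Pa
T = 298.0        # K

# Ideal-gas density; 1 kg/m^3 equals 1 g/L.
rho_ideal = p * M_H2 / (R * T)

# Real hydrogen is less dense: its compressibility factor at these
# conditions is roughly Z ~ 1.4-1.5 (an assumed literature value).
Z = 1.45
rho_real = rho_ideal / Z

print(f"ideal: {rho_ideal:.0f} g/L, real: ~{rho_real:.0f} g/L")
```

    Solid-state materials can in principle pack hydrogen more densely than the compressed gas, which is the motivation behind the consortium's materials focus.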

    “Hydrogen, as a transportation fuel, has great potential to provide highly efficient power with nearly zero emissions,” said Allendorf. “Storage materials are the limiting factor right now.”

    Thermodynamics, kinetics challenges

    Although HyMARC will consider all types of hydrogen storage materials, two categories of solid-state materials, novel sorbents and high-density metal hydrides, are of particular interest. These materials have the potential to meet DOE targets to deliver hydrogen at the right pressure and energy density to power a hydrogen fuel cell vehicle.

    A key challenge is the thermodynamics — the energy and conditions necessary to release hydrogen during vehicle operation. Sorbents, which soak up hydrogen in nanometer-scale pores, bind hydrogen too weakly. In contrast, metal hydrides, which store hydrogen in chemical bonds, have the opposite problem — they bind the hydrogen too strongly.
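
    To make that thermodynamic trade-off concrete, a textbook van 't Hoff estimate (illustrative values from the metal-hydride literature, not from the article) gives the temperature at which a hydride releases hydrogen at 1 bar as roughly T = ΔH/ΔS:

```python
# Illustrative van 't Hoff estimate for hydrogen release from a metal
# hydride. At the 1-bar equilibrium point, dG = dH - T*dS = 0, so
# T_release ~ dH / dS. The values below are typical literature numbers
# for magnesium hydride, chosen only to illustrate "binding too strongly".
dH = 75_000.0  # desorption enthalpy, J per mol H2 (approximate, MgH2)
dS = 135.0     # desorption entropy, J/(mol K) (typical for hydrides)

T_release = dH / dS
print(f"~{T_release:.0f} K")  # roughly 556 K, far above fuel-cell temperatures
```

    A hydride meeting onboard targets would need a smaller release enthalpy, so hydrogen comes off near fuel-cell operating temperature; sorbents sit at the opposite extreme, with binding energies too small to hold hydrogen under ambient conditions.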

    The kinetics, the rate at which a chemical process occurs, is also an issue for high-density metal hydrides. These materials undergo complicated reactions during hydrogen release and uptake that can involve transitions between liquid, solid and gaseous phases. In some cases, the chemical reactions can form intermediates that trap hydrogen.

    The consortium will explore several innovative ideas for solving these problems. The overall concept is to synthesize well-controlled materials to serve as model systems and develop experimental platforms for systematically probing key processes that limit performance.

    “Using these tools, we can study the hydrogen reactions with these materials using state-of-the-art techniques, such as those at Berkeley Lab’s Advanced Light Source and Molecular Foundry, which can provide unprecedented spatial resolution of material composition and character in real time,” said Jeff Urban, Berkeley Lab team lead.

    The HyMARC strategy embodies the approach highlighted within the recent Materials Genome Initiative (MGI) Strategic Plan for accelerated materials development. The focus is on developing a set of ready-to-use resources accessible to the entire hydrogen storage community.

    “With our extensive knowledge base of hydrogen storage materials and new tools for characterization, modeling and synthesizing materials, many of which were not available even five years ago, our goal is to develop codes, databases, synthetic protocols and characterization tools,” said Allendorf. “These resources will create an entirely new capability that will enable accelerated materials development to achieve the thermodynamics and kinetics required to meet DOE targets.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Sandia Campus
    Sandia National Laboratory

    Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy’s National Nuclear Security Administration. With main facilities in Albuquerque, N.M., and Livermore, Calif., Sandia has major R&D responsibilities in national security, energy and environmental technologies, and economic competitiveness.

  • richardmitnick 2:53 pm on October 9, 2015 Permalink | Reply
    Tags: , ,   

    From EPFL: “Using optical fibre to generate a two-micron laser” 

    EPFL bloc

    Ecole Polytechnique Federale Lausanne

    Emmanuel Barraud

    Camille Brès and Svyatoslav Kharitonov of EPFL

    Lasers with a wavelength of two microns could move the boundaries of surgery and molecule detection. Researchers at EPFL have managed to generate such lasers using a simple and inexpensive method.

    In recent years, two-micron lasers (0.002 millimetre) have been of growing interest among researchers. In the areas of surgery and molecule detection, for example, they offer significant advantages compared to traditional, shorter-wavelength lasers.
    However, two-micron lasers are still in their infancy and not yet as mature as their 1.55-micron telecom counterparts. Moreover, the sources currently used in labs are typically bulky and expensive. Optical-fibre-based two-micron lasers are an elegant solution to these issues. This is where researchers at the Photonics Systems Laboratory (PHOSL) come in.

    In an article published in Light: Science & Applications, the team of Camille Brès at EPFL described a way to design these lasers at a lower cost, by changing the way optical fibres are connected to each other. Thanks to the new configuration, they were able not only to produce very good 2 micron lasers, but also to do without an expensive and complex component that is normally required.

    Bloodless surgery and long-range molecule detection

    The two-micron spectral domain has potential applications in medicine, environmental sciences and industry. At these wavelengths, laser light is easily absorbed by water molecules, the main constituent of human tissue. In high-precision surgery, such lasers can be used to target water molecules during an operation and make incisions in very small areas of tissue without penetrating deeply. What is more, the energy from the laser causes the blood to coagulate on the wound, which prevents bleeding.

    Two-micron lasers are also very useful for detecting key meteorological data over long distances through the air. Not to mention that they are highly effective in the processing of various industrial materials.

    Replacing a cop with a detour

    To create a two-micron fibre laser, light is usually injected into an optical-fibre ring containing a gain region that amplifies two-micron light. The light circulates in the ring, passing through the gain region many times and gaining more and more power, until it becomes a laser. For optimal operation, these systems include a costly component called an isolator, which forces the light to circulate in a single direction.

    At PHOSL, researchers built a thulium-doped fibre laser that works without an isolator. Their idea was to connect the fibres differently, to steer light instead of stopping it. “We plug a kind of deviation that redirects the light heading in the wrong direction, putting it back on track”, said Camille Brès. This means no more need for the isolator, whose job is to stop light moving in the wrong direction, sort of like a traffic cop. “We replaced the traffic cop with a detour,” said Svyatoslav Kharitonov, the article’s lead author.

    Higher quality laser

    The new system not only proved to be less expensive than more traditional ones, it also showed it could generate higher-quality laser light. The explanation is as follows: the laser output is purified because the light interacts with itself in a very particular way, thanks to the amplifying fibre’s composition and dimensions and the high power circulating in this atypical laser architecture.
    “While the association of amplifying fibres and high power usually weakens traditional lasers’ performance, it actually improves the quality of this laser, thanks to our specific architecture”, said Svyatoslav Kharitonov.

    See the full article here.


    EPFL is Europe’s most cosmopolitan technical university. It receives students, professors and staff from over 120 nationalities. With both a Swiss and international calling, it is therefore guided by a constant wish to open up; its missions of teaching, research and partnership impact various circles: universities and engineering schools, developing and emerging countries, secondary schools and gymnasiums, industry and economy, political circles and the general public.

  • richardmitnick 2:37 pm on October 9, 2015 Permalink | Reply
    Tags: , , ,   

    From AAS NOVA: “How to Blow a Bubble in a Galaxy” 


    American Astronomical Society

    9 October 2015
    Susanna Kohler

    Hubble Heritage image of Arp 220. A bubble has recently been discovered in the center of this galaxy. [NASA/ESA/Hubble Heritage Team]

    When two galaxies merge, the event often produces enormous galactic outflows. Though we’ve been able to study these on large scales, resolution limits in the past have prevented us from examining the launch sites, propagation, and escape of these outflows.

    But recent high-resolution observations of Arp 220, a galaxy merger located a mere 250 million light years away from us, have finally provided a closer look at what’s happening in the center of this merger — and spotted something interesting.

    Galaxy Arp 220 as imaged by the Wide Field Planetary Camera [WFPC] on the Hubble Space Telescope
    Date 13 June 2006
    Author NASA, ESA, and C. Wilson (McMaster University, Hamilton, Ontario, Canada)


    A Curious Find

    Arp 220 is an object clearly in the late stage of a galaxy merger: it has tidal tails, two distinct nuclei at its center (heavily obscured by dust), lots of star formation, and a large-scale outflow that extends far from the galaxy.

    While using Hubble observations to construct the first high-spatial-resolution optical emission line maps of Arp 220, a team led by Kelly Lockhart (Institute for Astronomy, Hawaii) discovered something unusual: evidence of a bubble-like structure, visible in the Hα+[N ii] emission. The bubble is slightly offset from the two nuclei at the galactic center, and measures ~600 pc across.
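
    A quick calculation (ours, not from the paper) shows why high spatial resolution matters here: at Arp 220's distance, the ~600 pc bubble subtends only a couple of arcseconds on the sky.

```python
# Angular size of a ~600 pc bubble at ~250 million light years,
# using the small-angle approximation theta = size / distance.
PC_PER_LY = 1.0 / 3.2616   # one light year is about 0.307 pc
RAD_TO_ARCSEC = 206265.0

distance_pc = 250e6 * PC_PER_LY   # ~76.6 Mpc
bubble_pc = 600.0

theta_arcsec = (bubble_pc / distance_pc) * RAD_TO_ARCSEC
print(f"~{theta_arcsec:.1f} arcsec")  # about 1.6 arcsec
```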

    Origin Explanations

    Large-scale (top) and zoomed-in (bottom) three-color Hubble observations of Arp 220: blue is optical, red is near-infrared, and green is Hα+[N ii] line emission. The bubble and the western nucleus (nuclei are marked by white circles) lie along the axis of the large-scale outflows (white vector). [Lockhart et al. 2015]

    The authors propose several explanations for how the bubble was created, and examine the implications of each to determine which is the most likely. The explanations fall into two categories:

    1. The bubble is centered on its source. It could be produced by an outflow from an accreting black hole or a massive star cluster located at the bubble center.
    2. The bubble’s source lies near the two nuclei in the galactic center, but outside the bubble. It could be produced by a jet originating from one of the two galactic nuclei, or by a collimated outflow from a starburst concentrated near the nuclei. Either of these outflows could blow a bubble as it first interacts with the interstellar medium.

    The authors show that the first category is disfavored based on observational and energetics arguments. In addition, the western-most nucleus and the bubble both align exactly with the axis of the large-scale outflows of the galaxy. Unlikely to be due to chance, this alignment is strong support in favor of the second category.

    Thus, it’s probable that the bubble is blown by an outflow that originates from the inner ~100 pc around one of the nuclei, either due to a jet or a starburst wind. Further observations should be able to differentiate between these two mechanisms.


    Kelly E. Lockhart et al 2015 ApJ 810 149. doi:10.1088/0004-637X/810/2/149

    See the full article here.


  • richardmitnick 11:36 am on October 9, 2015 Permalink | Reply
    Tags: , , Vitamin B6   

    From ETH Zürich: “A cure for vitamin B6 deficiency” 

    ETH Zurich bloc

    ETH Zürich

    Peter Rüegg

    Cassava has characteristic leaves that can be eaten as a vegetable. (All pictures: Hervé Vander Churen)

    Plant scientists engineered the cassava plant to produce higher levels of vitamin B6 in its storage roots and leaves. This could help to protect millions of people in Africa from serious deficiencies.

    In many tropical countries, particularly in sub-Saharan Africa, cassava is one of the most important staple foods. People eat the starchy storage roots but also the leaves as a vegetable. Both have to be cooked first to remove the toxic cyanide compounds that cassava produces.

    But the roots have a disadvantage: although rich in calories, they generally contain only a few vitamins. Vitamin B6 in particular is present in only small amounts; a person for whom cassava is a staple food would have to eat about 1.3 kg of it every day to obtain a sufficient amount of this vital vitamin.

    Serious deficiency in Africa

    Vitamin B6 deficiency is prevalent in several African regions where cassava is often the only staple food in people’s diet. Diseases of the cardiovascular and nervous systems are associated with vitamin B6 deficiency.

    Plant scientists at ETH Zürich and the University of Geneva have therefore set out to find a way to increase vitamin B6 production in the roots and leaves of the cassava plant. This could prevent vitamin B6 deficiency among people who consume mostly cassava.

    Genetically modified lines produce more B6

    Their project has succeeded: in the latest issue of Nature Biotechnology, the scientists present a new genetically modified cassava variety that produces several-fold higher levels of this important vitamin.

    Preparation of the tuber is time-consuming.

    “Using the improved variety, only 500 g of boiled roots or 50 g of leaves per day is sufficient to meet the daily vitamin B6 requirement,” says Wilhelm Gruissem, professor of plant biotechnology at ETH Zürich.

    The basis for the new genetically modified cassava variant was developed by Professor Teresa Fitzpatrick at the University of Geneva, who elucidated the biosynthesis of vitamin B6 in the model plant thale cress (Arabidopsis thaliana). Two enzymes, PDX1 and PDX2, are involved in the synthesis of the vitamin. By introducing the corresponding genes for these enzymes into the cassava genome, the researchers produced several new cassava lines with increased levels of vitamin B6.
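
    Putting the article's figures together (and assuming the 2 mg upper end of the adult daily requirement quoted later in the piece), the implied enrichment in the roots works out to roughly 2.6-fold:

```python
# Rough arithmetic using the figures quoted in the article, assuming an
# adult daily requirement of 2 mg of vitamin B6.
daily_need_mg = 2.0
old_root_g = 1300.0  # grams of ordinary cassava roots needed per day
new_root_g = 500.0   # grams of boiled roots of the improved variety

old_conc = daily_need_mg / old_root_g * 1000.0  # micrograms B6 per gram
new_conc = daily_need_mg / new_root_g * 1000.0

print(f"~{new_conc / old_conc:.1f}x more vitamin B6 in the roots")  # ~2.6x
```

    The 50 g leaf figure implies an even larger margin, consistent with the "several-fold higher levels" the researchers report.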

    Stable under field conditions

    To determine whether the increased production of the vitamin in the genetically modified cassava was stable without affecting the yield, the plant scientists conducted tests in the greenhouse and in field trials over the course of several years. “It was important to determine that the genetically modified cassava consistently produced high vitamin B6 levels under different conditions,” says Gruissem.

    Measurements of the metabolites confirmed that cassava lines produced several times more vitamin B6 in both roots and leaves than normal cassava. The researchers also attributed the increased production to the activity of the transferred genes, regardless of whether the plants were grown in a greenhouse or the field. The increased vitamin B6 trait remained stable even after the cassava was multiplied twice by vegetative propagation.

    Previously, the researchers had analysed several hundred different cassava varieties from Africa for their natural vitamin B6 content – none had a level as high as the genetically modified variety.

    Vitamin B6 from the genetically modified varieties is bioavailable, which means that humans can absorb it well and use it, as was confirmed by a research team at the University of Utrecht.

    Accessible technology

    A farmer is checking his cassava plants

    “Our strategy shows that increasing vitamin B6 levels in an important food crop using Arabidopsis genes is stable, even under field conditions. Making sure that the technology is readily available to laboratories in developing countries is equally important,” says Hervé Vanderschuren, who led the cassava research programme at ETH Zürich and recently became a professor of plant genetics at the University of Liège.

    It is still unclear when and how vitamin B6-enhanced cassava will find its way to farmers and consumers. The new trait could be crossed into varieties preferred by farmers using traditional plant breeding, or introduced into selected varieties using genetic engineering.

    Vanderschuren hopes this can be performed in African laboratories. He has previously trained scientists on site and organised workshops to build platforms for the genetic modification of crop plants in African laboratories. “We hope that these platforms can help spread the technology to farmers and consumers.”

    The method for increasing vitamin B6 has not been patented because the gene construct and technology should be available freely to all interested parties.

    Challenge of distribution and legislation

    One huge hurdle, however, is the distribution and use of the new variety: “There are at least two obstacles: legislation for transgenic crops in developing countries and implementation of a cassava seed system to give all farmers access to technologies,” says Vanderschuren.

    He is currently supervising a project in India in conjunction with the School of Agricultural, Forest and Food Sciences (HAFL) in Zollikofen, which he hopes will result in guidelines for the development of a sustainable seed system for cassava in India. “Our work in Africa will also benefit from this project,” he asserts.

    Individual national organisations as well as the FAO and other NGOs are currently organising the spread of cassava stem cuttings for cultivation in Africa. However, a better and more efficient organisation for the distribution of healthy plant material is urgently needed, says the researcher.

    On the legislative side, the cultivation of genetically modified cassava (and other crops) is not yet regulated everywhere. In numerous African countries, such as Uganda, Kenya and Nigeria, the governments have now enacted legislation for field trials of genetically modified plants. “This is an important step to ensure that improved varieties can be tested under field conditions,” says Vanderschuren. “In order to allow the cultivation of genetically modified plants, the respective parliaments will have to develop further legislation.”

    More than just a substance

    Vitamin B6 is a mixture of three similar molecules, namely pyridoxine (pyridoxol), pyridoxal and pyridoxamine. These are the precursors of pyridoxal phosphate, one of the most important co-enzymes in the body, involved in the assembly and modification of proteins. The human body cannot produce vitamin B6, which is why it must be supplied through food. A high vitamin B6 content is found in soya beans, oats, beef liver and brown rice, for example. Avocados, nuts and potatoes are also good sources. The daily requirement of an adult is approximately 1.5 mg to 2 mg.


    Kuan-Te Li et al. Increased bioavailable vitamin B6 in field-grown transgenic cassava for dietary sufficiency. Nature Biotechnology (2015), advance online publication, October 8, 2015. doi: 10.1038/nbt.3318

    Vanderschuren, H. Strengthening African R&D through effective transfer of tropical crop biotech to African institutions. Nature Biotechnology (2012), 30(12): 1170-1172.

    See the full article here.


    ETH Zurich campus
    ETH Zurich is one of the leading international universities for technology and the natural sciences. It is well known for its excellent education, ground-breaking fundamental research and for implementing its results directly into practice.

    Founded in 1855, ETH Zurich today has more than 18,500 students from over 110 countries, including 4,000 doctoral students. To researchers, it offers an inspiring working environment, to students, a comprehensive education.

    Twenty-one Nobel Laureates have studied, taught or conducted research at ETH Zurich, underlining the excellent reputation of the university.

  • richardmitnick 11:06 am on October 9, 2015 Permalink | Reply
    Tags: , ,   

    From TUM: “Faster design – better catalysts” 

    Technische Universität München

    Prof. Dr. Aliaksandr S. Bandarenka
    Technical University of Munich
    Physics of Energy Conversion and Storage
    James-Franck-Str. 1, 85748 Garching, Germany
    Tel.: +49 89 289 12531

    New method facilitates research on fuel cell catalysts

    The number of like neighbors strongly influences the catalytic activity of the surface atoms of a nanoparticle – Image: David Loffreda, CNRS, Lyon

    While the cleaning of car exhausts is among the best known applications of catalytic processes, it is only the tip of the iceberg. Practically the entire chemical industry relies on catalytic reactions. Catalyst design plays a key role in improving these processes. An international team of scientists has now developed a concept that elegantly correlates geometric and adsorption properties. They validated their approach by designing a new platinum-based catalyst for fuel cell applications.

    Hydrogen would be an ideal energy carrier: surplus wind power could split water into its elements, and the hydrogen could power fuel-cell electric cars with great efficiency. The only exhaust would be water, and the driving range would match that of today’s cars. Yet fuel cell vehicles are still a rare exception: the required platinum (Pt) is extremely expensive, and the world’s annual output would not suffice for all cars.

    A key component of the fuel cell is the platinum catalyst that is used to reduce oxygen. It is well known that not the entire surface but only a few particularly exposed areas of the platinum, the so-called active centers, are catalytically active.

    A team of scientists from Technical University of Munich and Ruhr University Bochum (Germany), the Ecole normale superieure (ENS) de Lyon, Centre national de la recherche scientifique (CNRS), Universite Claude Bernard Lyon 1 (France) and Leiden University (Netherlands) have set out to determine what constitutes an active center.

    Studying the model

    A common method used in developing catalysts and in modeling the processes that take place on their surfaces is computer simulation. But as the number of atoms increases, quantum chemical calculations quickly become extremely complex.

    With their new methodology, called “coordination-activity plots”, the research team presents an alternative solution that elegantly correlates geometric and adsorption properties. It is based on the generalized coordination number (GCN), which counts the immediate neighbors of an atom and the coordination numbers of those neighbors.

    Calculated with the new approach, a typical Pt(111) surface has a GCN value of 7.5. According to the coordination-activity plot, however, the optimal catalyst should achieve a value of 8.3. The required larger number of neighbors can be obtained, for example, by introducing atomic-sized cavities into the platinum surface.
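
    The GCN itself is simple to compute; the sketch below (our reconstruction of the counting rule as described, not code from the study) reproduces the quoted value of 7.5 for a Pt(111) surface atom:

```python
# Generalized coordination number: sum the conventional coordination
# numbers of an atom's nearest neighbours and divide by the bulk
# maximum (12 for an FCC metal such as platinum).
CN_MAX_FCC = 12

def generalized_coordination_number(neighbor_cns):
    """neighbor_cns: coordination numbers of the atom's nearest neighbours."""
    return sum(neighbor_cns) / CN_MAX_FCC

# A top-layer atom on Pt(111) has 9 nearest neighbours:
# 6 in-plane surface atoms (cn = 9 each) and 3 subsurface atoms (cn = 12).
gcn_pt111 = generalized_coordination_number([9] * 6 + [12] * 3)
print(gcn_pt111)  # 7.5
```

    Sites with GCN above the flat-surface value, such as concave cavities, move toward the optimum of 8.3, which is the geometric rationale behind the cavity-rich catalysts the team synthesized.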

    Successful practical test

    In order to validate the accuracy of their new methodology, the researchers computationally designed a new type of platinum catalyst for fuel cell applications. The model catalysts were prepared experimentally using three different synthesis methods. In all three cases, the catalysts showed up to three and a half times greater catalytic activity.

    “This work opens up an entirely new way for catalyst development: the design of materials based on geometric rationales which are more insightful than their energetic equivalents,” says Federico Calle-Vallejo. “Another advantage of the method is that it is based clearly on one of the basic principles of chemistry: coordination numbers. This significantly facilitates the experimental implementation of computational designs.”

    “With this knowledge, we might be able to develop nanoparticles that contain significantly less platinum or even include other catalytically active metals,” says Professor Aliaksandr S. Bandarenka, tenure track professor at Technical University of Munich. “And in future we might be able to extend our method to other catalysts and processes, as well.”


    Finding optimal surface sites on heterogeneous catalysts by counting nearest neighbors, Federico Calle-Vallejo, Jakub Tymoczko, Viktor Colic, Quang Huy Vu, Marcus D. Pohl, Karina Morgenstern, David Loffreda, Philippe Sautet, Wolfgang Schuhmann, Aliaksandr S. Bandarenka. Science, October 9, 2015; DOI: 10.1126/science.aab3501

    See the full article here.


    Technische Universität München campus

    Technische Universität München (TUM) is one of Europe’s top universities. It is committed to excellence in research and teaching, interdisciplinary education and the active promotion of promising young scientists. The university also forges strong links with companies and scientific institutions across the world. TUM was one of the first universities in Germany to be named a University of Excellence. Moreover, TUM regularly ranks among the best European universities in international rankings.

  • richardmitnick 10:50 am on October 9, 2015 Permalink | Reply
    Tags: , , ,   

    From “Scientists pave way for diamonds to trace early cancers” 


    October 9, 2015
    No Writer Credit

    Nano-diamonds using an optical microscope. Credit: Ewa Rej, the University of Sydney

    Physicists from the University of Sydney have devised a way to use diamonds to identify cancerous tumours before they become life threatening.

    Their findings, published today in Nature Communications, reveal how a nanoscale, synthetic version of the precious gem can light up early-stage cancers in non-toxic, non-invasive Magnetic Resonance Imaging (MRI) scans.

    Targeting cancers with tailored chemicals is not new, but scientists struggle to detect where these chemicals go: short of a biopsy, there are few ways to see whether a treatment has been taken up by a cancer.

    Led by Professor David Reilly from the School of Physics, researchers from the University investigated how nanoscale diamonds could help identify cancers in their earliest stages.

    “We knew nano diamonds were of interest for delivering drugs during chemotherapy because they are largely non-toxic and non-reactive,” says Professor Reilly.

    “We thought we could build on these non-toxic properties realising that diamonds have magnetic characteristics enabling them to act as beacons in MRIs. We effectively turned a pharmaceutical problem into a physics problem.”

    Professor Reilly’s team turned its attention to hyperpolarising nano-diamonds, a process of aligning atoms inside a diamond so they create a signal detectable by an MRI scanner.

    “By attaching hyperpolarised diamonds to molecules targeting cancers the technique can allow tracking of the molecules’ movement in the body,” says Ewa Rej, the paper’s lead author.

    “This is a great example of how quantum physics research tackles real-world problems, in this case opening the way for us to image and target cancers long before they become life-threatening,” says Professor Reilly.

    The next stage of the team’s work involves working with medical researchers to test the new technology on animals. Also on the horizon is research using scorpion venom to target brain tumours with MRI scanning.

    See the full article here.


    About in 100 Words™ (formerly is a leading web-based science, research and technology news service which covers a full range of topics. These include physics, earth science, medicine, nanotechnology, electronics, space, biology, chemistry, computer sciences, engineering, mathematics and other sciences and technologies. Launched in 2004,’s readership has grown steadily to include 1.75 million scientists, researchers, and engineers every month. publishes approximately 100 quality articles every day, offering some of the most comprehensive coverage of sci-tech developments world-wide. Quantcast 2009 includes in its list of the Global Top 2,000 Websites. community members enjoy access to many personalized features such as social networking, a personal home page set-up, RSS/XML feeds, article comments and ranking, the ability to save favorite articles, a daily newsletter, and other options.
