
  • richardmitnick 2:53 pm on July 28, 2017 Permalink | Reply
    Tags: How Dust Built the Universe, NOVA

    From NOVA: “How Dust Built the Universe” 



    28 Jul 2017
    Samia Bouzid

    If you’ve ever driven into the sunset with a dirty windshield or taken a drive after a snowstorm, your windshield caked with salt, you can probably relate to one of astronomers’ ongoing frustrations: seeing through dust.

    Cosmic dust, which collects in galaxies in loose fogs or thick clouds, has often plagued astronomers. The tiny grains, each 10,000 times smaller than the eye of a needle, absorb light, scatter it, or change its wavelength so it’s invisible to the eye. In doing so, dust steals some of the few clues we have to understand the nature of the universe.

    But astronomers are discovering that dust plays important roles in both creating our universe and helping us understand it. It plants the seeds for stars, planets, and life as we know it. In the past two decades, astronomers studying dust have pulled back the curtain on important pieces of the universe that were hiding in plain sight. The more we learn about dust, the more we realize that it is part of the puzzle—not the rascal hiding the puzzle pieces.

    Fertilizing the Universe

    In the clouds of swirling gas that produce stars and planets, dust serves as a wingman for hydrogen. As a cloud condenses under its own gravity, star formation begins when hydrogen atoms meet and form molecules. But the compressing gas raises temperatures to the point where hydrogen begins whizzing around too fast to form bonds. It’s easier for the atoms to latch onto a piece of relatively big, slow dust. There, on the dust’s surface, two atoms can form a bond, forming the first building blocks of a star. But dust is more than a matchmaker. As nearby stars blaze hot and bright in the ultraviolet, clouds of dust can act as a shield, sheltering stars-to-be from the barrage of radiation, which can break their chemical bonds and thwart their path to stardom.

    The stars and dust clouds of the Milky Way. No image credit.

    When the obstacles are finally overcome, a new star blossoms out of a cloud. Some of the remaining dust and gas begins to spin around the star and flatten into a disk. Specks of dust collide, and as their gravity increases, they pull more dust and gas onto their surface, accreting material. Over time, they become pebbles, then boulders and, sometimes, a few million years later, planets.

    Xuening Bai, a research associate at the Harvard-Smithsonian Center for Astrophysics, studies the processes that create planets and the stuff of life. Without dust, he says, the world would be a different place.

    Seeing the Universe in a New Light

    Indeed, most of what we see in space—not to mention all that we are, all that we eat, all that we breathe—owes its existence, in some way, to a grain of dust that formed the seed of a star or planet. But despite its fundamental importance, astronomers have only begun to understand what dust really is and how it affects the way we see the universe.

    Dust itself is a mishmash of mostly carbon-based ashes cast off from dying stars. “It’s a catch-all term for what we would refer to on Earth as soot,” says Caitlin Casey, an astronomer at the University of Texas at Austin. Until recently, this “soot” was poorly understood. For centuries, the practice of astronomy was limited to what people could observe at visible wavelengths—in other words, what people could actually see. Dust absorbs light that can be seen by the naked eye and re-emits it at longer, infrared wavelengths, which are invisible to us. As a result, for most of history, dust was seen only as dark blobs, riddling galaxies with holes.

    Then, in the 1960s, the first infrared telescopes pioneered the study of dust emissions. But these telescopes were not able to detect all radiation from dust. Very distant galaxies, such as the ones Casey studies some 10 billion light-years away, are receding so quickly that the light re-emitted by their dust gets stretched, shifting its wavelength into the submillimeter range and making the galaxies practically invisible, even in infrared telescopes.

    NASA Infrared Telescope facility Mauna Kea, Hawaii, USA

    It wasn’t until 1998 that a group of astronomers in Mauna Kea, Hawaii, pointed a submillimeter telescope at a blank field of sky and made a discovery that rocked the field. A few years earlier, the Hubble Space Telescope had revealed that this blank sky was swarming with distant galaxies, but now, an entirely new population of galaxies lit up in submillimeter wavelengths. It was like turning on a light in a room where astronomers had fumbled in the dark for centuries. Galaxies glowed with dust, and the earliest, most distant galaxies showed the most dust of all.

    East Asia Observatory James Clerk Maxwell telescope, Mauna Kea, Hawaii, USA

    NASA/ESA Hubble Telescope

    Dust Bunnies in the Edges of the Universe

    Submillimeter wavelengths were the last piece of the electromagnetic spectrum to be observed by astronomers, so in some ways, the 1998 discovery seemed to complete a picture of the universe. Large swaths of the sky were now imaged at every wavelength. Dust, the quiet catalyst behind star formation, had been unmasked.

    But in another way, astronomers had merely stumbled upon more pieces to a puzzle they thought they had completed. Because if dust comes from stars, the universe should get dustier the more stars have lived and died. What business did the earliest galaxies have being so dusty? The universe has been around for nearly 14 billion years, but most of these dusty galaxies formed when the universe was a tender 2 or 3 billion years old. By then, only a few generations of stars had ever existed. So where did all that dust come from?

    Desika Narayanan, an astronomer at the University of Florida, probes for answers by developing numerical simulations to model the early universe. He says that one clue lies in the earliest galaxies, which were probably ungainly, messy galaxies, a far cry from the elegant spiral that is our Milky Way. Galaxies like ours pop out a star about once or twice a year. But these early, dusty galaxies were firecrackers, bursting with up to 1,000 to 2,000 new stars a year. As the first stars died, dust billowed from them and filled the galaxy—perhaps enough to account for the levels of dust seen today.

    But telescope data can only confirm so much. In the short lifetime of submillimeter astronomy, Narayanan says, telescope sensitivity has improved drastically, outpacing even camera-phone technology, which raced from the blurry images of flip phones to the latest, sharpest shots on iPhones over roughly the same period.

    Still, even the greatest telescopes strain against the vastness of the universe. They have to be extremely large to detect and resolve light from the most distant galaxies. At 15 meters in diameter, the world’s largest submillimeter dish belongs to the James Clerk Maxwell Telescope at the summit of Mauna Kea, Hawaii. It was the first telescope to detect these galaxies in 1998. In Chile, the Atacama Large Millimeter/submillimeter Array, or ALMA, is made up of 66 dishes that can be arranged to span nearly 10 miles in an attempt to resolve the universe’s faintest, most distant galaxies.

    ESO/NRAO/NAOJ ALMA Array in Chile in the Atacama at Chajnantor plateau, at 5,000 metres

    It’s no coincidence that both of these observatories were built in extreme environments, both well above 10,000 feet and in dry places where the air is thin. Water vapor in the air soaks up most of the infrared radiation passing through it that’s so critical to observing dust. Meanwhile, the Earth itself radiates enthusiastically in the infrared, creating a noisy background for any signal that does get through. “Doing infrared astronomy from the ground is like trying to observe a star in the daylight out of a telescope made of light bulbs,” George Rieke, an infrared astronomer, once said.

    For now, this difficulty has left some mysteries intact. Although astronomers are better able to observe galaxies and create simulations, some galaxies remain too old and too dusty to fit into existing models. The size, peculiar structure, and dustiness of early galaxies are not fully explained.

    The next surge in science, expected to help explain some of the mysteries surrounding dusty, star-forming galaxies, will come from the James Webb Space Telescope, a massive instrument with a six-and-a-half-meter dish—a piece of highly polished metal as wide as a giraffe is tall—set for launch in 2018.

    NASA/ESA/CSA Webb Telescope annotated

    Free from the interfering atmosphere, this telescope will peer into the dusty edges of space in finer detail than any other telescope.

    Narayanan says that the astronomy community is excited for these new measurements and expects that they will reveal new avenues for exploration. “Immediately, you start to open up as many questions as you think you’re going to answer,” he says.

    Twenty Years of Dusty Galaxies

    On July 31, astronomers will meet in Durham, U.K., to celebrate the 20th anniversary of the discovery of dusty star-forming galaxies and share what they have learned over the last two decades. But the elusiveness of hard data has left many questions about ancient dusty galaxies still open for debate. “I suspect we’re still going to walk away from this meeting saying, ‘Theorists still haven’t figured out where they come from,’” Narayanan says.

    But the mystery is part of what fascinates him. Twenty years ago, “We had no idea these things existed,” he says. “Then they just lit up in the infrared and have posed a huge challenge ever since then.”

    Despite all the research on dust, it is only a small fraction of the universe. Even in moderately dusty galaxies like our own, dust accounts for less than 1% of the mass. Yet its ability to transform the light passing through it completely changes the way we see the universe.

    For astronomers like Casey and Narayanan, this leaves plenty of mysteries to probe. “It’s really cool to me that something that is so negligible in terms of the mass budget of the universe can have such a tremendous impact on how we perceive it,” Casey says. “There is so much to discover and rediscover.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 6:25 am on March 27, 2017 Permalink | Reply
    Tags: "Cancer Biology Reproducibility Project Sees Mixed Results" Read it and Weep, , , Cancer Biology Reproducibility Project Sees Mixed Results, , NOVA   

    From NOVA: “Cancer Biology Reproducibility Project Sees Mixed Results” Read it and Weep 



    18 Jan 2017 [Don’t know how I missed this, or maybe they never put it up in social media before?]
    Courtney Humphries

    How trustworthy are the findings from scientific studies?

    A growing chorus of researchers says there’s a “reproducibility crisis” in science, with too many discoveries published that may be flukes or exaggerations. Now, an ambitious project to test the reproducibility of top studies in cancer research by independent laboratories has published its first five studies in the open-access journal eLife.

    “These are the first public replication studies conducted in biomedical science, and that in itself is a huge achievement,” says Elizabeth Iorns, CEO of Science Exchange and one of the project’s leaders.

    Cancer biology is just one of many fields being scrutinized for the reproducibility of its studies.

    The Reproducibility Project: Cancer Biology is a collaboration between the non-profit Center for Open Science and the for-profit Science Exchange, which runs a network of laboratories for outsourcing biomedical research. It began in 2013 with the goal of repeating experiments from top-cited cancer papers; all of the work has been planned, executed, and published in the open, in consultation with the studies’ original authors. These papers are the first of many underway and slated to be published in the coming months.

    The outcome so far has been mixed, the project leaders say. While some results are similar, none of the studies looks exactly like the original, says Tim Errington, the project’s manager. “They’re all different in some way. They’re all different in different ways.” In some studies, the experimental system didn’t behave the same. In others, the result was slightly different, or it did not hold up under the statistical scrutiny project leaders used to analyze results. All in all, project leaders report, one study failed to reproduce the original finding, two supported key aspects of the original papers, and two were inconclusive because of technical issues.

    Errington says the goal is not to single out any individual study as replicable or not. “Our intent with this project is to perform these direct replications so that we can understand collectively how reproducible our research is,” he says.

    Indeed, there are no agreed-upon criteria for judging whether a replication is successful. At the project’s end, he says, the team will analyze the replication studies collectively by several different standards—including simply asking scientists what they think. “We’re not going to force an agreement—we’re trying to create a discussion,” he says.

    The project has been controversial; some cancer biologists say it’s designed to make them look bad at a time when federal research funding is under threat. Others have praised it for tackling a system that rewards shoddy research. If the first papers are any indication, those arguments won’t be easily settled. So far, the studies provide a window into the challenges of redoing complex laboratory studies. They also underscore that, if cancer biologists want to improve the reproducibility of their research, they first have to agree on a definition of success.

    An Epidemic?

    A recent survey in Nature of more than 1,500 researchers found that 70% have tried and failed to reproduce others’ experiments, and that half have failed to reproduce their own. But you wouldn’t know it by reading published studies. Academic scientists are under pressure to publish new findings, not replicate old research. There’s little funding earmarked toward repeating studies, and journals favor publishing novel discoveries. Science relies on a gradual accumulation of studies that test hypotheses in new ways. If one lab makes a discovery using cell lines, for instance, the same lab or another lab might investigate the phenomenon in mice. In this way, one study extends and builds on what came before.

    For many researchers, that approach—called conceptual replication, which gives supporting evidence for a previous study’s conclusion using another model—is enough. But a growing number of scientists have been advocating for repeating influential studies. Such direct replications, Errington says, “will allow us to understand how reliable each piece of evidence we have is.” Replications could improve the efficiency of future research by winnowing out false hypotheses early and help scientists recreate others’ work in order to build on it.

    In the field of cancer research, some of the pressure to improve reproducibility has come from the pharmaceutical industry, where investing in a spurious hypothesis or therapy can threaten profits. In a 2012 commentary in Nature, cancer scientists Glenn Begley and Lee Ellis wrote that they had tried to reproduce 53 high-profile cancer studies while working at the pharmaceutical company Amgen, and succeeded with just six. A year earlier, scientists at Bayer HealthCare announced that they could replicate only 20–25% of 47 cancer studies. But confidentiality rules prevented both teams from sharing data from those attempts, making it difficult for the larger scientific community to assess their results.

    ‘No Easy Task’

    Enter the Reproducibility Project: Cancer Biology. It was launched with a $1.3 million grant from the Laura and John Arnold Foundation to redo key experiments from 50 landmark cancer papers from 2010 to 2012. The work is carried out in the laboratory network of Science Exchange, a Palo Alto-based startup, and the results tracked and made available through a data-sharing platform developed by the Center for Open Science. Statisticians help design the experiments to yield rigorous results. The protocols of each experiment have been peer-reviewed and published separately as a registered report beforehand, which advocates say prevents scientists from manipulating the experiment or changing their hypothesis midstream.

    The group has made painstaking efforts to redo experiments with the same methods and materials, reaching out to original laboratories for advice, data, and resources. The labs that originally wrote the studies have had to assemble information from years-old research. Studies have been delayed because of legal agreements for transferring materials from one lab to another. Faced with financial and time constraints, the team has scaled back its project; so far 29 studies have been registered, and Errington says the plan is to do as much as they can over the next year and issue a final paper.

    “This is no easy task, and what they’ve done is just wonderful,” says Begley, who is now chief scientific officer at Akriveia Therapeutics and was originally on the advisory board for the project but resigned because of time constraints. His overall impression of the studies is that they largely flunked replication, even though some data from individual experiments matched. He says that for a study to be valuable, the major conclusion should be reproduced, not just one or two components of the study. This would demonstrate that the findings are a good foundation for future work. “It’s adding evidence that there’s a challenge in the scientific community we have to address,” he says.

    Begley has argued that early-stage cancer research in academic labs should follow methods that clinical trials use, like randomizing subjects and blinding investigators as to which ones are getting a treatment or not, using large numbers of test subjects, and testing positive and negative controls. He says that when he read the original papers under consideration for replication, he assumed they would fail because they didn’t follow these methods, even though they are top papers in the field. “This is a systemic problem; it’s not one or two labs that are behaving badly,” he says.

    Details Matter

    For the researchers whose work is being scrutinized, the details of each study matter. Although the project leaders insist they are not designing the project to judge individual findings—that would require devoting more resources to each study—cancer researchers have expressed concern that the project might unfairly cast doubt on their discoveries. The responses of some of those scientists so far raise issues about how replication studies should be carried out and analyzed.

    One study, for instance, replicated a 2010 paper led by Erkki Ruoslahti, a cancer researcher at Sanford Burnham Prebys Medical Discovery Institute in San Diego, which identified a peptide that could stick to and penetrate tumors. Ruoslahti points to a list of subsequent studies by his lab and others that support the finding and suggest that the peptide could help deliver cancer drugs to tumors. But the replication study found that the peptide did not make tumors more permeable to drugs in mice. Ruoslahti says there could be a technical reason for the problem, but the replication team didn’t try to troubleshoot it. He’s now working to finish preclinical studies and secure funding to move the treatment into human trials through a company called Drugcendr. He worries that replication studies that fail without fully exploring why could derail efforts to develop treatments. “This has real implications to what will happen to patients,” he says.

    Atul Butte, a computational biologist at the University of California San Francisco, who led one of the original studies that was reproduced, praises the diligence of the team. “I think what they did is unbelievably disciplined,” he says. But like some other scientists, he’s puzzled by the way the team analyzed results, which can make a finding that subjectively seems correct appear as if it failed. His original study used a data-crunching model to sort through open-access genetic information and identify potential new uses for existing drugs. The model predicted that the antiulcer medication cimetidine would have an effect against lung cancer, and his team validated the model by testing the drug against lung cancer tumors in mice. The replication found very similar effects. “It’s unbelievable how well it reproduces our study,” Butte says. But the replication team used a statistical technique to analyze the results that found them not statistically significant. Butte says it’s odd that the project went to such trouble to reproduce experiments exactly, only to alter the way the results are interpreted.

    Errington and Iorns acknowledge that such a statistical analysis is not common in biological research, but they say it’s part of the group’s effort to be rigorous. “The way we analyzed the result is correct statistically, and that may be different from what the standards are in the field, but they’re what people should aspire to,” Iorns says.
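
    The disagreement above is easy to reproduce in miniature. Here is a stdlib-only Python sketch (all numbers are invented for illustration; this is not the project’s actual analysis) of how a replication with a nearly identical mean effect can still fall short of statistical significance once scatter increases:

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Hypothetical original experiment: large effect, little scatter -> big |t|
original_treated = [4.1, 3.8, 4.3, 4.0]
original_control = [1.0, 1.2, 0.9, 1.1]

# Hypothetical replication: similar mean effect, more scatter -> smaller |t|
replication_treated = [4.0, 1.5, 6.0, 3.9]
replication_control = [1.1, 0.8, 1.6, 0.7]

t_orig = welch_t(original_treated, original_control)
t_rep = welch_t(replication_treated, replication_control)

# The mean effects are nearly the same (~3.0 vs ~2.8), yet the evidence is
# far weaker in the noisier replication.
print(abs(t_orig) > abs(t_rep))  # True
```

    The same direction and size of effect can therefore “look” reproduced to the eye while a strict significance threshold calls it a failure, which is exactly the kind of judgment call the project’s critics and defenders are debating.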

    In some cases, results were complicated by inconsistent experimental systems. One study tested a type of experimental drug called a BET inhibitor against multiple myeloma in mice. The replication found that the drug improved the survival of diseased mice compared to controls, consistent with the original study. But the disease developed differently in the replication study, and statistical analysis of the tumor growth did not yield a significant finding. Constantine Mitsiades, the study’s lead author and a cancer researcher at the Dana-Farber Cancer Institute, says that despite the statistical analysis, the replication study’s data “are highly supportive of and consistent with our original study and with subsequent studies that also confirmed it.”

    A Fundamental Debate

    These papers will undoubtedly provoke debate about what the standards of replication should be. Mitsiades and other scientists say that complex biological systems like tumors are inherently variable, so it’s not surprising if replication studies don’t exactly match their originals. Inflexible study protocols and rigid statistics may not be appropriate, or even necessary, for evaluating such systems.

    Some scientists doubt the need to perform copycat studies at all. “I think science is self-correcting,” Ruoslahti says. “Yes, there’s some loss of time and money, but that’s just part of the process.” He says that, on the positive side, this project might encourage scientists to be more careful, but he also worries that it might discourage them from publishing new discoveries.

    Though the researchers who led these studies are, not surprisingly, focused on the correctness of the findings, Errington says that the variability of experimental models and protocols is important to document. Advocates for replication say that current published research reflects an edited version of what happened in the lab. That’s why the Reproducibility Project has made a point to publish all of its raw data and include experiments that seemed to go awry, when most researchers would troubleshoot them and try again.

    “The reason to repeat experiments is to get a handle on the intrinsic variability that happens from experiment to experiment,” Begley says. With a better understanding of biology’s true messiness, replication advocates say, scientists might have a clearer sense of whether or not to put credence in a single study. And if more scientists published the full data from every experiment, those original results may look less flashy to begin with, leading fewer labs to chase over-hyped hypotheses and therapies that never pan out. An ultimate goal of the project is to identify factors that make it easier to produce replicable research, like publishing detailed protocols and validating that materials used in a study, such as antibodies, are working properly.


    Beyond this project, the scientific community is already taking steps to address reproducibility. Many scientific journals are making stricter requirements for studies and publishing registered reports of studies before they’re carried out. The National Institutes of Health has launched training and funding initiatives to promote robust and reproducible research. F1000Research, an open-access online publisher, launched a Preclinical Reproducibility and Robustness Channel in 2016 for researchers to publish results from replication studies. Last week several scientists published a reproducibility manifesto in the journal Nature Human Behaviour that lays out a broad series of steps to improve the reliability of research findings, from the way studies are planned to the way scientists are trained and promoted.


  • richardmitnick 11:44 am on March 22, 2017 Permalink | Reply
    Tags: NOVA, Remnants of Earth’s Original Crust Found in Canada

    From NOVA: “Remnants of Earth’s Original Crust Found in Canada” 



    16 Mar 2017
    Annette Choi

    Two geologists studying North America’s oldest rocks have uncovered ancient minerals that are remnants of the Earth’s original crust, which first formed more than 4.2 billion years ago.

    These rocks appear to preserve the signature of an early Earth that presumably took shape within the first few hundred million years of Earth’s history.

    Jonathan O’Neil and Richard Carlson uncovered the samples on a trek to the northeastern part of Canada to study the Canadian Shield, a large area of exposed continental crust centered on Hudson Bay, which was already known to contain some of the oldest parts of North America. O’Neil calls it the core or nucleus of the North American continent. “That spot on the shore of Hudson Bay has this older flavor to it, this older chemical signature.”

    A view of 2.7 billion-year-old continental crust produced by the recycling of more than 4.2 billion-year-old rocks. Image credit: Alexandre Jean

    To O’Neil, an assistant professor of geology at the University of Ottawa, rocks are like books that allow geologists to study their compositions and to learn about the conditions in which they form. But as far as rock records go, the first billion years of the Earth’s history is almost completely unrepresented.

    “We’re missing basically all the crust that was present about 4.4 billion years ago. The question we’re after with our study is: what happened to it?” said Carlson, director of the Carnegie Institution for Science. “Part of the goal of this was simply to see how much crust was present before and see what that material was.”

    While most of the samples are made up of a 2.7 billion-year-old granite, O’Neil said these rocks were likely formed by the recycling of a much older crust. “The Earth is very, very good at recycling itself. It constantly recycles and remelts and reworks its own crust,” O’Neil said. He and Carlson arrived at their conclusion by determining the age of the samples using isotopic dating and then adding on the estimate of how long it would have taken for the recycled bits to have originally formed.
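
    The logic of a decay clock can be sketched with a toy calculation. This is the generic radiometric-dating formula with illustrative numbers, not the specific samarium–neodymium systematics O’Neil and Carlson applied to their samples:

```python
import math

def decay_age(daughter_parent_ratio, half_life_gyr):
    """Age (Gyr) from the daughter/parent ratio D/P, assuming no initial
    daughter isotope: t = ln(1 + D/P) / lambda, lambda = ln(2) / half-life."""
    lam = math.log(2) / half_life_gyr
    return math.log(1 + daughter_parent_ratio) / lam

# Illustrative numbers only: the long-lived 147Sm -> 143Nd system has a
# half-life near 106 Gyr, so only a tiny fraction of the parent has decayed
# even in the oldest rocks. A D/P ratio of ~0.0285 dates a rock to ~4.3 Gyr.
age = decay_age(0.0285, 106.0)
print(round(age, 1))  # 4.3
```

    The key point mirrored in the article: an isotopic date gives the age of the last melting event, and estimates of how long the recycled ingredients needed to form beforehand must be added on top.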

    O’Neil and Carlson’s estimate relies on the theory that granite forms through the reprocessing of older rocks. “That is a possibility that they form that way, but that is not the only way you can form these rocks,” said Oliver Jagoutz, an associate professor of geology at the Massachusetts Institute of Technology. “Their interpretation really strongly depends on their assumption that that is the way these granites form.”

    The nature of Earth’s first crust has largely remained a mystery because there simply aren’t very many rocks that have survived the processes that can erase their signature from the geologic record. Crust is often forced back into the Earth’s interior, which then melts it down, the geologic equivalent of sending silver jewelry back into the forge. That makes it challenging for geologists to reconstruct how the original looked.

    These new findings give geologists an insight into the evolution of the oldest elements of Earth’s outer layer and how it has come to form North America. “We’re recycling extremely, extremely old crust to form our stable continent,” O’Neil said.


  • richardmitnick 1:50 pm on January 29, 2017 Permalink | Reply
    Tags: Dark Energy, Lawrence Krauss says “The longer you wait the less you will see and the more of the universe will disappear before your very eyes”, Milkomeda, NOVA, Physics in 1 Trillion Years

    From NOVA: “Physics in 1 Trillion Years” 



    17 Feb 2016
    Sarah Scoles

    When winter weather closed Harvard University one day in 2011, astronomer Avi Loeb used the snow day not to sled or start a new novel but to contemplate the future of the universe. In that future, cosmologists like him, who study the universe’s origins and evolution, might not be able to make a living.

    Nine years before, he had written a paper outlining the problem: Dark energy makes the universe expand faster and faster every femtosecond. As spacetime—the fabric of the cosmos—stretches, it carries galaxies along with it. The stretching sends each galaxy farther and farther from the others, eventually driving them so far apart that light will never be able to bridge the gap between them.

    Far-future cosmologists won’t have the same evidence we do to infer the Big Bang. No image credit.

    In that future, our own oasis, the Milky Way, will be completely alone. When future astronomers look up, they will see only our galaxy’s own stars. They won’t find any evidence—even with the powerful telescopes of a trillion years hence—that other galaxies even exist beyond the horizon of their visible universe. Without a view of those other galaxies, they won’t be able to tell that everything was born in a Big Bang, or that the black vacuum of space is expanding at all, let alone that that expansion is speeding up. Ironically, dark energy itself will destroy evidence of dark energy.
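
    The disappearing act can be put in rough numbers. In a dark-energy-dominated universe, distances grow exponentially, and light from beyond the event horizon at d = c/H can never reach us. A much-simplified sketch, with an assumed far-future Hubble constant (units chosen so c = 1 and distances are in billions of light-years):

```python
import math

# Exponential expansion: d(t) = d0 * exp(H * t). Light emitted from beyond
# the event horizon at d = c/H never reaches us. Numbers are illustrative.
H = 1 / 17.6            # assumed Hubble constant, per billion years
HORIZON = 1 / H         # event-horizon distance c/H, ~17.6 Gly

def gyr_until_unreachable(d0_gly):
    """Billions of years until a galaxy now at d0 crosses the horizon:
    solve d0 * exp(H * t) = c/H for t."""
    return math.log(HORIZON / d0_gly) / H

# A galaxy 3 billion light-years away today slips out of causal contact
# after roughly 31 billion more years of expansion.
print(round(gyr_until_unreachable(3.0)))  # 31
```

    On timescales of a trillion years, then, every galaxy outside our own gravitationally bound neighborhood has long since crossed that horizon, which is exactly the emptied sky Loeb worries about.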

    Thinking of this emptied universe, Loeb stared out the window at the snowfall, which covered the ground in a blank blanket. “I was pretty depressed that there would be nothing to look at, and that we won’t be able to tell how the universe started by observing it.”

    He set out to find a solution.

    A Galactic Merger

    Currently, cosmic expansion clues us in to the Big Bang. Press fast-forward on the growing universe we see today, and it continues growing, with objects flying ever-farther apart. It doesn’t take much creativity to then press rewind: The universe shrinks, and its ingredients squish together. If you rewind until the very beginning of the tape, everything piles into one infinitesimally small, infinitely dense spot. Press play and it bursts forth: a Big Bang.

    Astronomers only discovered that expansion because they could see other galaxies, which all seem to be running away from us. In the late 1990s, using ultra-distant supernova explosions, they figured out that faraway galaxies were retreating faster than they “should” be, and that even more distant galaxies were distancing themselves faster than that. Something—which they later termed dark energy—spurs expansion on, like a car that keeps accelerating even though the pedal never reaches the metal.
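The supernova measurements rest on standard-candle arithmetic: a Type Ia supernova peaks near a known absolute magnitude, so its apparent brightness gives its distance via the distance modulus. The numbers below are invented for illustration, not taken from the actual surveys.

```python
# Hedged sketch of the standard-candle logic (all numbers illustrative).
def luminosity_distance_pc(apparent_mag, absolute_mag):
    """Distance modulus relation: m - M = 5 * log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

M_TYPE_IA = -19.3                 # typical peak absolute magnitude
d_pc = luminosity_distance_pc(24.0, M_TYPE_IA)
print(f"{d_pc / 1e9:.2f} Gpc")    # roughly 4.57 Gpc
# A supernova that appears fainter than its redshift predicts sits farther
# away than a coasting universe allows -- the signature of acceleration.
```

The surprise of the 1990s surveys was exactly this mismatch: the most distant supernovae were systematically dimmer, hence farther, than a non-accelerating universe could explain.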

    The real problems won’t show up for a while, until about a trillion years after the Big Bang. By that time, the Milky Way will have long ago crashed into the Andromeda Galaxy. The stars will have spent 3 billion years swirling into stable orbits before settling into a seamless chimera: a single galaxy called “Milkomeda,” a term Loeb coined in 2008 when he simulated the collision and forecast its specifics.

    After their first close pass, the Andromeda Galaxy as well as the Milky Way would be tidally stretched out, as shown in this artist’s conception. NASA / ESA / STScI

    Even as that galactic collision takes place, dark energy will be dragging everything else away from us. Little by little over billions of years, everything will pop over the visible horizon, along with any physical evidence of its existence, until only our neighbor stars in Milkomeda remain. “The universe becomes lonely,” says Glenn Starkman, a physicist at Case Western Reserve University. He and astronomer Lawrence Krauss of Arizona State University in Tempe wrote an article titled Life, The Universe, and Nothing: Life and Death in an Ever-Expanding Universe, which also discusses this “lonely astronomer” problem. “The longer you wait, the less you will see and the more of the universe will disappear before your very eyes,” Krauss says.

    “Earth’s night sky will change,” Loeb says. The stars that humans (or whoever is around) will get to watch in a few billion years will shift radically. Today, the Milky Way appears as a diagonal swath of fuzzy light, the combined photons of billions of stars too small for our eyes to resolve. But when people in the distant future look up at Milkomeda, they will see those stars distributed evenly across the sky.

    If astronomers still live in Milkomeda at that point, they could be thrown into an astronomical dark age. To them, the universe will look like the one we thought we understood before telescopes. Back then, we thought we were the center of the cosmos, and we believed the Milky Way to be the entirety of the universe.

    That universe seemed static and without beginning. Alone in Milkomeda, future astronomers may—validly, based on actual evidence—see it that way, too. “Scientists who evolve on such a world will look out and find that the three main pillars of the Big Bang will all be gone,” Krauss says.

    Three Missing Pillars

    “It’s a gloomy forecast,” Loeb says. “We won’t be able to look at anything. It’s not just galaxies—it’s any relic left from the Big Bang.” Right now, telescopes can see a glow of light left over from the Big Bang. This relic radiation, called the cosmic microwave background [CMB], comes from every direction in the sky. The Planck Telescope recently made a high-definition map of it, which is essentially a blueprint of a baby universe. It shows us the seeds that grew into groups of galaxies, tells us what the universe is made of, and tips us off about the very beginning of everything.

    CMB per ESA/Planck


    But as time passes, the photons that make up the cosmic microwave background cool off and lose energy, and their wavelengths increase. Eventually, those waves—which today are on the order of millimeters—will be bigger than the visible universe. There’s no telescope, not even one a trillion-year-old society could build, that can detect that. “They will no longer be able to learn what we know about the early universe,” Starkman says.
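A back-of-envelope estimate shows why the trillion-year mark matters. With exponential expansion, a photon's wavelength grows as e^(Ht); solving for when a millimeter wave outgrows the visible universe gives a timescale of order a trillion years. All three input numbers below are rough assumptions for illustration.

```python
import math

# Rough, assumed inputs: Hubble rate ~70 km/s/Mpc expressed per year,
# a millimeter-scale CMB photon today, and today's visible universe.
H_PER_YEAR = 7.2e-11     # expansion rate, 1/years (assumption)
WAVELENGTH_M = 1e-3      # CMB peak wavelength today, ~1 mm
UNIVERSE_M = 8.8e26      # diameter of the visible universe, ~93 Gly

# With exponential expansion, wavelength(t) = wavelength_0 * e^(H*t).
# Solve wavelength_0 * e^(H*t) = UNIVERSE_M for t:
t_years = math.log(UNIVERSE_M / WAVELENGTH_M) / H_PER_YEAR
print(f"stretched past the visible universe in ~{t_years:.0e} years")
```

Despite the 29 orders of magnitude between a millimeter and the cosmic horizon, the logarithm makes the answer land right around 10^12 years, the same era the article describes.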

    The composition of the universe, which now tells scientists that the Big Bang occurred, won’t help in the far future, either. After the Big Bang, the universe began to cool off. Soon, free-range quarks settled down into protons and neutrons. Those particles then smacked into each other and stuck together, fusing into helium nuclei (the electrons that complete the atoms would only be captured much later). In just 30 minutes, most of the helium that exists today had formed. A comparatively small amount has been created inside stars in the billions of years since.

    “Right now, we know the Big Bang happened because 25% of the universe is helium,” Krauss says. “There’s no way stars could have made that.” But by the time the universe is 10 trillion years old, stars will have fused most of the hydrogen into helium. That is, in fact, their job. But in doing it so well, they will knock down the last solid evidence that the universe had a beginning at all. “All relics of the Big Bang will be gone from us,” Loeb says. “There will be really nothing.”
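The 25% figure follows from simple bookkeeping. By the time fusion began, roughly one neutron survived for every seven protons, and if essentially every neutron ends up locked inside helium-4 (two protons plus two neutrons), the helium mass fraction comes out to one quarter. A quick check, using that assumed 1-to-7 ratio:

```python
# Helium-4 holds 2 protons and 2 neutrons, so each surviving neutron drags
# one proton with it into helium. Mass fraction Y = 2(n/p) / (1 + n/p).
n_per_p = 1 / 7                        # assumed neutron-to-proton ratio
helium_mass_fraction = 2 * n_per_p / (1 + n_per_p)
print(f"{helium_mass_fraction:.0%}")   # 25%
```

No plausible history of stellar fusion produces that much helium that early, which is why the abundance counts as one of the pillars of the Big Bang.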

    It seems that we live at a somewhat strange time in the universe—one in which our sky is filled with evidence of the cosmic narrative. Does that make us lucky? And does it make future observers unlucky? Astronomers generally shy away from suggestions that we are anything other than dead-average. They call it the Mediocrity Principle.

    But maybe each eon is a special snowflake in its own way, meaning none of them is really special, just like soccer kids who all get trophies. The far-future folks may have easy access to knowledge we, in our dark-energy-dominated and bright-skied time, can’t grasp. “I suspect that each era is interesting for different reasons,” Krauss says. “There may be cosmological observables that we could see in the far future that we can’t see now.”

    We can’t know for sure, nor can we know for sure that this future forecast is correct. Like a perfect weather prediction, a perfect forecast would require knowing everything about every subatomic particle. The year 1 trillion CE may not look exactly as we envision it. “That broad picture is what will happen if what we know continues to be the whole truth and nothing but the truth,” Starkman says. “There’s a lot of chutzpah in thinking that’s really so, that we’ve captured everything there is to know about physics.”

    Possible Answers

    As the winter storm swirled outside, Loeb considered the dark, empty (potential) future he’d predicted. He hated that so much knowledge—the science he loved—would disappear, like all the galaxies. He had recently given a public talk on the topic, sharing his sadness, and an audience member’s question had sent him reeling: Would this future convert cosmology into a kind of religion? “You would have books talking about the story of how the universe started, but you wouldn’t be able to verify that,” he says. “I was worried that cosmology would be turned into folklore.”

    “There will really be nothing,” he thought again. But then a flash swept through his brain. Nothing—except for one thing. “I realized that not everything is lost,” says Loeb. The key is a type of object called a hypervelocity star.

    “The center of our galaxy keeps ejecting stars at high enough speeds that they can exit the galaxy,” Loeb says. The intense and dynamic gravity near the galaxy’s central supermassive black hole flings them into space, where they will glide away forever like radiating rocket ships. The same thing should happen a trillion years from now.

    “These stars that leave the galaxy will be carried away by the same cosmic acceleration,” Loeb says. Future astronomers can monitor them as they depart. They will see stars leave, become alone in extragalactic space, and begin rushing faster and faster toward nothingness. It would look like magic. But if those future people dig into that strangeness, they will catch a glimpse of the true nature of the universe. “Just like Edwin Hubble observed galaxies—historically trying to infer expansion—they could observe those stars outside the galaxy and figure out the universe is expanding,” Loeb says. Starkman says they could accomplish this synthetically, too. “They could send out probes far enough to notice that the probes accelerated away,” he says.

    And then, perhaps, they will imagine pressing fast-forward on this scenario. And, if their imaginations are like ours, they will then think about rewinding it—all the way back to the beginning.

    Krauss doesn’t necessarily buy this. Occam’s Razor states that the least complicated answer is usually the correct one, and that principle will lead these future beings astray. It sounds crazy that the very fabric of the universe is growing larger faster all the time, carrying some runaway star with it. It’s not the explanation that comes to the tip of the tongue. But perhaps more importantly, with just Milkomeda in the night sky, astronomers will have no reason to come up with a theory of anything beyond those stars. Just as pre-telescope scientists thought only of what they could see with their eyes, not of an invisible universe outside of that, so too could future astronomers’ imaginations be constrained.

    Loeb stands by his solution, although he admits it could remain in his 21st century paper and never occur to someone in the 2.1 trillionth century. “It’s difficult to speculate what will happen in a year or 10 years on Earth, let alone a trillion years,” he says. “We don’t even know if humans will still be around…I’m just talking about what one could learn.”

    Which is why Loeb is so intent on forecasting the future cosmos, even though he won’t be around to see it. “Most of my colleagues do not care about the future because they regard themselves as down-to-Earth,” he says. “They only think about things that can be tested or looked at right now. We can’t really observe the future, so they prefer not to think about the future. They often run computer simulations of the universe to the present time and then stop. All I’m saying is ‘Why stop?’ ”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 10:17 am on January 9, 2017 Permalink | Reply
    Tags: 16S rRNA sequencing, Archaea, , , NOVA, Polymerase chain reaction, Prokaryotes, The Never-Ending Quest to Rewrite the Tree of Life   

    From NOVA: “The Never-Ending Quest to Rewrite the Tree of Life” 



    04 Jan 2017
    Carrie Arnold

    The bottom of the ocean is one of the most mysterious places on the planet, but microbiologist Karen Lloyd of the University of Tennessee, Knoxville, wanted to go deeper than that. In 2010, as a postdoc at Aarhus University in Denmark, she wanted to see what microbes were living more than 400 feet beneath the sea floor.

    Like nearly all microbiologists doing this type of census, she relied on 16S rRNA sequencing to determine who was there. Developed by microbiologist Carl Woese in the late 1970s, the technique looks for variation in the 16S rRNA gene, one that’s common to all organisms (it’s key to turning DNA into protein, one of life’s most fundamental processes). When Lloyd compared what she had seen under the microscope to what her sequencing data said, however, she knew her DNA results were missing a huge portion of the life hidden underneath the ocean.

    “I had two problems with just 16S sequencing. One, I knew it would miss organisms, and two, it’s not good for understanding small differences between microbes,” Lloyd says.

    Scientists use heat maps like these to visualize the diversity of bacteria in various environments. Credits below.

    Technology had made gene sequencing much quicker and easier compared to when Woese first started his work back in the 1970s, but the principle remained the same. The 16S rRNA gene codes for a portion of the machinery used by prokaryotes to make protein, which is a central activity in the cell. All microbes have a copy of this gene, but different species have slightly different copies. If two species are closely related, their 16S rRNA sequences will be nearly identical; more distantly related organisms will have a greater number of differences. This not only gave researchers a way to quantify evolutionary relationships between species; Woese’s work also revealed an entirely new branch on the tree of life—the archaea, a group of microscopic organisms distinct from bacteria.
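The comparison itself is straightforward counting: for pre-aligned gene fragments, the fraction of matching positions serves as a proxy for evolutionary distance. A toy version, with invented sequences standing in for real 16S fragments:

```python
# Toy 16S-style comparison: percent identity between aligned fragments.
def percent_identity(seq_a, seq_b):
    """Percentage of matching positions in two pre-aligned sequences."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned"
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

species_1 = "ACGTTGCAACGGTGCATTAA"
species_2 = "ACGTTGCAACGGTGCATTAC"   # 1 difference: close relatives
species_3 = "ACGATGCTACGCTGAATTGC"   # 6 differences: distant relatives

print(percent_identity(species_1, species_2))  # 95.0
print(percent_identity(species_1, species_3))  # 70.0
```

Real pipelines first align the sequences and use statistical models of substitution, but the underlying signal is this one: more shared positions, closer kinship.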

    Woese’s success in using 16S rRNA to rewrite the tree of life no doubt encouraged its widespread use. But as Lloyd and other scientists began to realize, some microbes carry a version that is significantly different from that seen in other bacteria or archaea. Since biologists depended on this similarity to identify an organism, they began to realize that they were leaving out potentially significant portions of life from their investigations.

    These concerns culminated approximately ten years ago during a period when sequencing technologies were rapidly accelerating. During this time, researchers figured out how to prepare DNA for sequencing without needing to know anything about the organism you were studying. At the same time, scientists invented a strategy to isolate single cells. At her lab at the Joint Genome Institute outside San Francisco, microbiologist Tanja Woyke put these two strategies together to sequence the genomes of individual microbial cells. Meanwhile, Jill Banfield, across the bay at the University of California, Berkeley, used a different approach called metagenomics that sequenced genes from multiple species at once, and used computer algorithms to reconstruct each organism’s genome. Over the past several years, their work has helped illuminate the massive amount of microbial dark matter that comprises life on Earth.

    “These two strategies really complement each other. They have opened up our ability to see the true diversity of microbial life,” says Roger Lasken, a microbial geneticist at the J. Craig Venter Institute.

    Microbial Dark Matter

    When Woese sequenced the 16S genes of the microbes that would come to be known as archaea, they were completely different from most of the other bacterial sequences he had accumulated. Like bacteria, the organisms lacked a true nucleus, but their metabolisms were completely different. These microbes also tended to favor extreme environments, such as those with high temperatures (hot springs and hydrothermal vents), high salt concentrations, or high acidity. Sensing their ancient origins, Woese named these microbes the archaea and gave them their own branch on the tree of life.

    Woese did all of his original sequencing by hand, a laborious process that took years. Later, DNA sequencing machines greatly simplified the work, although it still required amplifying the small amount of DNA present using a technique known as polymerase chain reaction, or PCR, before sequencing. The utility of 16S sequencing soon made the technique one of the mainstays of the microbiology lab, along with the Petri dish and the microscope.

    The method uses a set of what’s known as universal primers—short strands of RNA or DNA that help jump-start the duplication of DNA—to make lots of copies of the 16S gene so it can be sequenced. The primers bind to a set of DNA sequences flanking the 16S gene that were thought to be common to all organisms. These act like a set of bookends to identify the region to be copied by PCR. As DNA sequencing technology improved, researchers began amplifying and sequencing 16S genes in environmental samples as a way of identifying the microbes present without the need to grow them in the lab. Since scientists have only been able to culture about one in 100 microbial species, this method opened broad swaths of biodiversity that would otherwise have remained invisible.
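The blind spot is easy to see in miniature: PCR only amplifies a 16S region whose flanking sequence matches the primer, so any organism whose primer site has diverged simply never shows up in the census. The primer and genomes below are invented for the example, not real sequences.

```python
# Toy illustration of the universal-primer blind spot (sequences invented).
UNIVERSAL_PRIMER = "AGAGTTTGATC"

genomes = {
    "known_bacterium":  "GGCCAGAGTTTGATCCTGGCTCAGATTGAA",
    "typical_archaeon": "TTAAAGAGTTTGATCATGGCTCAGGACGAA",
    "oddball_microbe":  "GGCCTGTGTTAGATCCTGGCTCAGATTGAA",  # primer site diverged
}

# Only genomes containing the primer's binding site get amplified.
detected = [name for name, g in genomes.items() if UNIVERSAL_PRIMER in g]
print(detected)   # the oddball never appears in the survey
```

The oddball carries a perfectly good 16S gene; it is invisible only because the assay's entry point assumes a conserved sequence it doesn't have.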

    “We didn’t know that these deep branches existed. Trying to study life from just 16S rRNA sequences is like trying to understand all animals by visiting a zoo,” says Lionel Guy, a microbiologist from Uppsala University in Sweden.

    Discover how to interpret and create evolutionary trees, then explore the tree of life in NOVA’s Evolution Lab.

    It didn’t take long, however, for scientists to realize the universal primers weren’t nearly as universal as researchers had hoped. The use of the primers rested on the assumption that all organisms, even unknown ones, would have similar DNA sequences surrounding the 16S rRNA gene. But that meant that any true oddballs probably wouldn’t have 16S rRNA sequences that matched the universal primers—they would remain invisible. These uncultured, unsequenced species were nicknamed “microbial dark matter” by Stanford University bioengineer and physicist Stephen Quake in a 2007 PNAS paper.

    The name, he says, is analogous to dark matter in physics, which is invisible but thought to make up the bulk of the universe. “It took DNA technology to realize the depth of the problem. I mean, holy crap, there’s a lot more out there than we can discover,” Quake says.

    Quake’s snappy coinage translated into the Microbial Dark Matter project—an ongoing quest in microbiology, led by Woyke, to understand the branches on the tree of life that remain shrouded in mystery by isolating DNA from single bacterial and archaeal cells. These microbial misfits intrigued Lloyd as well, and she believed the subsurface had many more of them than anyone thought. Her task was to find them.

    “We had no idea what was really there, but we knew it was something,” Lloyd says.

    To solve her Rumsfeldian dilemma of identifying both her known and unknown unknowns, Lloyd needed a DNA sequencing method that would allow her to sequence the genomes of the microbes in her sample without any preconceived notions of what they looked like. As it turns out, a scientist in New Haven, Connecticut, was doing just that.

    Search for Primers

    In the 1990s, Roger Lasken had recognized the problems with traditional 16S rRNA and other forms of sequencing. Not only did you need to know something about the DNA sequence ahead of time in order to make enough genetic material to be sequenced, you also needed a fairly large sample. The result was a significant limitation in the types of material that could be sequenced. Lasken wanted to be able to sequence the genome of a single cell without needing to know anything about it.

    Then employed at the biotech firm Molecular Staging, Lasken began work on what he called multiple displacement amplification (MDA). He built on a recently discovered DNA polymerase (the enzyme that adds nucleotides, one by one, to a growing piece of DNA) called φ29 DNA polymerase. Compared to the more commonly used Taq polymerase, the φ29 polymerase created much longer strands of DNA and could operate at much cooler temperatures. Scientists had also developed random primers, small pieces of randomly generated DNA. Unlike the universal primers, which were designed to match specific DNA sequences 20–30 nucleotides in length, random primers were only six nucleotides long. This meant they were small enough to match pieces of DNA on any genome. With enough random primers to act as starting points for the MDA process, scientists could confidently amplify and sequence all the genetic material in a sample. The bonus inherent in the random primers was that scientists didn’t need to know anything about the sample they were sequencing in order to begin work.

    “For the first time, you didn’t need to culture an organism or amplify its DNA to sequence it,” he says.

    The method had only been tested on relatively small pieces of DNA. Lasken’s major breakthrough was making the system work for larger chromosomes, including those in humans, which was published in 2002 in PNAS. Lasken was halfway to his goal—his next step was figuring out how to do this in a single bacterium, which would enable researchers to sequence any microbial cell they found. In 2005, Lasken and colleagues managed to isolate a single E. coli cell and sequence its 16S rRNA gene using MDA. It was a good proof of principle that the system worked, but to understand the range and depth of microbial biodiversity, researchers like Tanja Woyke, the microbiologist at the Joint Genome Institute, needed to look at the entire genome of a single cell. In theory, the system should work neatly: grab a single cell, amplify its DNA, and then sequence it. But putting all of the steps together and working out the kinks in the system would require years of work.

    Woyke had spent her postdoc at the Joint Genome Institute sequencing DNA from samples not grown in the lab, but drawn directly from the environment, like a scoop of soil. At the time, she was using metagenomics, which amplified and sequenced DNA directly from environmental samples, yielding millions of As, Ts, Gs, and Cs from even a thimble of dirt. Woyke’s problem was determining which genes belonged to which microbe, a key step in assembling a complete genome. Nor was she able to study different strains of the same microbe that were present in a sample because their genomes were just too similar to tell apart using the available sequencing technology. What’s more, the sequences from common species often completely drowned out the data from more rare ones.

    “I kept thinking to myself, wouldn’t it be nice to get the entire genome from just a single cell,” Woyke says. Single-cell genomics would enable her to match a genome and a microbe with near 100% certainty, and it would also allow her to identify species with only a few individuals in any sample. Woyke saw a chance to make her mark with these rare but environmentally important species.

    Soon after that, she read Lasken’s paper and decided to try his technique on microbes she had isolated from the grass sharpshooter Draeculacephala minerva, an important plant pest. One of her biggest challenges was contamination. Pieces of DNA are everywhere—on our hands, on tables and lab benches, and in the water. The short, random primers upon which single-cell sequencing was built could help amplify these fragments of DNA just as easily as they could the microbial genomes Woyke was studying. “If someone in the lab had a cat, it could pick up cat DNA,” Woyke says of the technique.

    In 2010, after more than a year of work, Woyke had her first genome, that of Sulcia bacteria, which had a small genome and could only live inside the grass sharpshooter. Each cell also carried two copies of the genome, which helped make Woyke’s work easier. It was a test case that proved the method, but to shine a spotlight on the world’s hidden microbial biodiversity, Woyke would need to figure out how to sequence the genomes from multiple individual microbes.

    Work with Jonathan Eisen, a microbiologist at UC Davis, on the Genomic Encyclopedia of Bacteria and Archaea Project, known as GEBA, enabled her lab to set up a pipeline to perform single cell sequencing on multiple organisms at once. GEBA, which seeks to sequence thousands of bacterial and archaeal genomes, provided a perfect entry to her Microbial Dark Matter sequencing project. More than half of all known bacterial phyla—the taxonomic rank just below kingdom—were only represented by a single 16S rRNA sequence.

    “We knew that there were far more microbes and a far greater diversity of life than just those organisms being studied in the lab,” says Matthew Kane, a program director at the National Science Foundation and a former microbiologist. Studying the select few organisms that scientists could grow in pure culture was “useful for picking apart how cells work, but not for understanding life on Earth.”

    GEBA was a start, but even the best encyclopedia is no match for even the smallest public library. Woyke’s Microbial Dark Matter project would lay the foundation for the first of those libraries. She didn’t want to fill it with just any sequences, however. Common bacteria like E. coli, Salmonella, and Clostridium were the Dr. Seuss books and Shakespeare plays of the microbial world—every library had copies, though they represented only a tiny slice of all published works. Woyke was after the bacterial and archaeal equivalents of rare, single-edition books. So she began searching in extreme environments including boiling hot springs of caustic acid, volcanic vents at the bottom of the ocean, and deep inside abandoned mines.

    Using the single-cell sequencing techniques that she had perfected at the Joint Genome Institute, Woyke and her colleagues ended up with exactly 201 genomes from these candidate phyla, representing 29 branches on the tree of life that scientists knew nothing about. “For many phyla, this was the first genomic data anyone had seen,” she says.

    The results, published in Nature in 2013, identified some unusual species for which even Woyke wasn’t prepared. Up until that study, all organisms used the same sequence of three DNA nucleotides to signal the stop of a protein, one of the most fundamental components of any organism’s genome. Several of the species of archaea identified by Woyke and her colleagues, however, used a completely different stop signal. The discovery was not unlike traveling to a different country and having the familiar red stop sign replaced by a purple square, she says. Their work also identified other rare and bizarre features of the organisms’ metabolisms that make them unique among Earth’s biodiversity. Other microbial dark matter sequencing projects, both under Woyke’s Microbial Dark Matter project umbrella and other independent ventures, identified microbes from unusual phyla living in our mouths.

    Some of the extremeophile archaea that Woyke and her colleagues identified were so unlike other forms of life that they grouped them into their own superset of phyla, known as DPANN (Diapherotrites, Parvarchaeota, Aenigmarchaeota, Nanohaloarchaeota, and Nanoarchaeota). The only thing that scientists knew about these organisms were the genomes that Woyke had sequenced, isolated from individual organisms. These single-cell sequencing projects are key not just for filling in the foliage on the tree of life, but also for demonstrating just how much remains unknown, and Woyke and her team have been at the forefront of these discoveries, Kane says.

    Sequencing microbes cell by cell, however, isn’t the only method for uncovering Earth’s hidden biodiversity. Just a few miles from Woyke’s lab, microbiologist Jill Banfield at UC Berkeley is taking a different approach that has also produced promising results.

    Studying the Uncultured

    Typically, to study microbes, scientists have grown them in a pure culture from a single individual. Though useful for studying these organisms in the laboratory, most microbes live in complex communities of many individuals from different species. Starting in the early 2000s, genetic sequencing technologies had advanced to the point where researchers could study the complex array of microbial genomes without necessarily needing to culture each individual organism. Known as metagenomics, the field began with scientists focused on which genes were found in the wild, which would hint at how each species or strain of microbe could survive in different environments.

    Just as Woyke was doubling down on single-cell sequencing, Banfield began using metagenomics to obtain a more nuanced and detailed picture of microbial ecology. The problems she faced, though very different from Woyke’s, were no less vexing. Like Woyke, Banfield focused on extreme environments: acrid hydrothermal vents at the bottom of the ocean that belched a vile mixture of sulfuric acid and smoke; an aquifer flowing through toxic mine tailings in Rifle, Colorado; a salt flat in Chile’s perpetually parched Atacama Desert; and water found in the Iron Mountain Mine in Northern California that is some of the most acidic found anywhere on Earth. Also like Woyke, Banfield knew that identifying the full range of microbes living in these hellish environments would mean moving away from using the standard set of 16S rRNA primers. The main issue Banfield and colleagues faced was figuring out how to assemble the mixture of genetic material they isolated from their samples into discrete genomes.

    A web of connectivity calculated by Banfield and her collaborators shows how shared proteins reveal relationships between different microbes.
    Credit below.

    The solution wasn’t a new laboratory technique, but a different way of processing the data. Researchers obtain their metagenomic information by drawing a sample from a particular environment, isolating the DNA, and sequencing it. The process of sequencing breaks each genome down into smaller chunks of DNA that computers then reassemble. Reassembling a single genome isn’t unlike assembling a jigsaw puzzle, says Laura Hug, a microbiologist at the University of Waterloo in Ontario, Canada, and a former postdoc in Banfield’s lab.

    When faced with just one puzzle, people generally work out a strategy, like assembling all the corners and edges, grouping the remaining pieces into different colors, and slowly putting it all together. It’s a challenging task with a single genome, but it’s even more difficult in metagenomics. “In metagenomics, you can have hundreds or even thousands of puzzles, many of them might be all blue, and you have no idea what the final picture looks like. The computers have to figure out which blue pieces go together and try to extract a full, accurate puzzle from this jumble,” Hug says. Not surprisingly, the early days of metagenomics were filled with incomplete and misassembled genomes.

    Banfield’s breakthrough helped tame the task. She and her team developed a better method for binning, the formal name for the computer process that sorts through the pile of DNA jigsaw pieces and arranges them into a final product. As her lab made improvements, they were able to survey an increasing range of environments looking for rare and bizarre microbes. Progress was rapid. In the 1980s, most of the bacteria and archaea that scientists knew about fit into 12 major phyla. By 2014, scientists had increased that number to more than 50. But in a single 2015 Nature paper, Banfield and her colleagues added an additional 35 phyla of bacteria to the tree of life.
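Banfield's actual binning algorithms are far more sophisticated, but one core signal they exploit can be sketched with invented data: fragments from the same genome share a compositional "signature," such as their tetranucleotide frequencies, so anonymous fragments can be assigned to whichever reference profile they most resemble.

```python
from collections import Counter

def tetra_freq(seq):
    """Normalized tetranucleotide (4-mer) frequencies of a sequence."""
    counts = Counter(seq[i:i + 4] for i in range(len(seq) - 3))
    total = sum(counts.values())
    return {kmer: n / total for kmer, n in counts.items()}

def distance(f1, f2):
    """L1 distance between two frequency profiles."""
    return sum(abs(f1.get(k, 0.0) - f2.get(k, 0.0)) for k in set(f1) | set(f2))

# Two synthetic "genomes" with very different composition, shredded into
# anonymous 60-base fragments the way a sequencer would see them.
genome_a = "ATATTATAATTA" * 50          # AT-rich organism
genome_b = "GCGGCCGCCGGC" * 50          # GC-rich organism
fragments = [genome_a[i:i + 60] for i in range(0, 600, 60)]
fragments += [genome_b[i:i + 60] for i in range(0, 600, 60)]

# Bin each fragment by its closest composition profile.
profiles = [tetra_freq(genome_a), tetra_freq(genome_b)]
bins = [min(range(2), key=lambda i: distance(tetra_freq(f), profiles[i]))
        for f in fragments]
print(bins)   # first ten fragments land in bin 0, last ten in bin 1
```

Real samples contain hundreds of genomes with overlapping signatures, which is exactly the "thousands of all-blue puzzles" problem; composition is combined with read coverage and other features to pull the bins apart.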

    The latest tree of life was produced when Banfield and her colleagues added another 35 major groups, known as phyla. Credit below.

    Because researchers knew essentially nothing about these bacteria, they dubbed them the “candidate phyla radiation”—or CPR—the bacterial equivalent of Woyke’s DPANN. Like the archaea, these bacteria were grouped together because of their similarities to each other and their stark differences to other bacteria. Banfield and colleagues estimated that the CPR organisms may encompass more than 15% of all bacterial species.

    “This wasn’t like discovering a new species of mammal,” Hug says. “It was like discovering that mammals existed at all, and that they’re all around us and we didn’t know it.”

    Nine months later, in April 2016, Hug, Banfield, and their colleagues used past studies to construct a new tree of life. Their result reaffirmed Woese’s original tree from the late 1970s, showing humans and, indeed, most plants and animals, as mere twigs. This new tree, however, was much fuller, with far more branches and twigs and a richer array of foliage. Thanks in no small part to the efforts of Banfield and Woyke, our understanding of life is, perhaps, no longer a newborn sapling, but a rapidly maturing young tree on its way to becoming a fully rooted adult.

    Photo credits: Miller et al. 2013/PLOS, Podell et al. 2013/PLOS, Hug et al. 2016/UC Berkeley

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 10:21 am on December 29, 2016 Permalink | Reply
    Tags: First CRISPR-Edited Cells Tested in Lung Cancer Patient, NOVA

    From NOVA: “First CRISPR-Edited Cells Tested in Lung Cancer Patient” 



    17 Nov 2016 [Where has this been hiding?]
    Tim De Chant

    Geneticists edited the patient’s T-cells to more vigorously attack cancer cells. No image credit.

    In a first, oncologists and geneticists have edited a patient’s own immune cells using CRISPR and injected them as a treatment for an aggressive form of lung cancer.

    The trial, conducted at West China Hospital in Chengdu, is the first of what is expected to be many that will test the safety of using the gene editing technique to alter a person’s cells. U.S. trials are expected to begin in early 2017.

    Both studies will employ what are essentially advanced forms of immunotherapy, where doctors modify cells from a patient’s immune system to attack cancer cells. Because the cells involved are not a part of the reproductive system, their edited genomes cannot be passed on to any children the patients may have after the treatment.

    The patient involved in the Chinese study has been unsuccessfully treated for metastatic non-small-cell lung cancer, an aggressive form of the disease that’s often quickly fatal. The person received the first injection of CRISPR-edited cells on October 28.

    David Cyranoski, reporting for Nature News, has more details on the procedure:

    “The researchers removed immune cells from the recipient’s blood and then disabled a gene in them using CRISPR–Cas9, which combines a DNA-cutting enzyme with a molecular guide that can be programmed to tell the enzyme precisely where to cut. The disabled gene codes for the protein PD-1, which normally puts the brakes on a cell’s immune response: cancers take advantage of that function to proliferate.”

    The edited cells were then injected into the patient. Doctors hope the new cells will be able to exploit their PD-1 mutation to seek out and kill the cancer cells. It’s still too early to tell if the effort was safe or successful.
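
    The targeting step the guide performs can be pictured as a string search: Cas9 binds where the 20-letter guide sequence matches the DNA and is immediately followed by an “NGG” motif (the PAM), then cuts about three bases upstream of that motif. The sketch below is a toy, forward-strand-only illustration with made-up sequences, not a real guide-design tool (real tools also scan the reverse complement and score off-target mismatches).

```python
import re

def find_cas9_sites(genome, guide):
    """Return 0-based cut positions where `guide` (20 nt) matches the
    genome and is immediately followed by an NGG PAM. Cas9 cuts ~3 bp
    upstream of the PAM; forward strand only for this toy example."""
    pattern = re.escape(guide) + r"(?=[ACGT]GG)"
    return [m.end() - 3 for m in re.finditer(pattern, genome)]

# Made-up sequences for illustration:
genome = "TTTT" + "GACGTACGTACGTACGTACG" + "TGG" + "AAAA"
guide = "GACGTACGTACGTACGTACG"
print(find_cas9_sites(genome, guide))  # → [21]
```

    Remove the "TGG" PAM from the toy genome and the same guide finds nothing, which is the point of the PAM requirement: matching the protospacer alone is not enough for Cas9 to cut.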

    If the patient shows no ill effects, the plan is to administer a second injection. Eventually, ten patients enrolled in the study will receive up to four injections.

    While scientists are optimistic about CRISPR’s broader potential in medicine, they’re less certain about whether this particular trial will be more effective than existing immunotherapies, which use modified proteins called antibodies that are easier to make in the lab than CRISPR-edited immune cells.

    See the full article here.

  • richardmitnick 2:44 pm on December 14, 2016 Permalink | Reply
    Tags: Big Bang or Big Bounce, NOVA

    From NOVA: “Did the Universe Start with a Bounce Instead of a Bang?” 



    14 Dec 2016
    Marcus Woo

    A Big Bounce could have happened, scientists say. Credit: iStock

    For a few physicists, the Big Bang wasn’t the beginning of the universe.

    Rather, they say, the universe existed before that point, stretching forever into the past as well as the future. While the universe is expanding today, it was contracting in the time before the Big Bang. In this picture, the Big Bang isn’t so much a bang but a bounce, a moment when a shrinking universe reversed course and began to grow.

    And according to their theory, the universe could bounce again. Today’s expansion could be followed by collapse in the far future, followed by another bounce. Some physicists have suggested this bouncing could be infinite, reviving a cyclic cosmology first proposed in the 1930s.

    But how the infinitesimally hot and dense point that kicked off our expansion came to be remains an unanswered question. Bounce theories promise to explain the origin of the cosmos. Whether they posit a single bounce or endless bounces, a handful of cosmologists have spent the last couple of decades tinkering with these ideas. But to others, bounce theories are speculative and controversial, and to some, they’re discredited and wrong.

    Much of the debate between Big Bang and Big Bounce proponents revolves around the viability of inflation, the mainstream view of how the universe has come to be the way it is today.

    Inflationary Universe. NASA/WMAP

    And although any cosmologist would agree that inflation is, at the very least, incomplete, the vast majority considers it the best model yet. Still, bounce proponents see fundamental flaws in this model.

    “Inflation’s not doing too well,” says Neil Turok, director of the Perimeter Institute for Theoretical Physics. “It’s had its day. It was useful when it was invented in the early 1980s.” But now, he says, we need a new theory, and that theory could be a bouncing universe.

    A Cosmic Growth Spurt

    The standard story of inflation goes like this: shortly after the Big Bang, the universe ballooned rapidly—much faster than its normal expansion. This sudden growth was necessary to create the smooth, flat, and uniform universe that scientists see today.

    Cosmologists first developed inflation in the early 1980s, before balloon-borne experiments and satellites returned increasingly precise data on the state of the early universe. These observations measured the leftover radiation from the Big Bang, a ubiquitous glow called the cosmic microwave background [CMB].

    CMB per ESA/Planck

    The radiation is patchily distributed, with some spots hotter and cooler than others, an auspicious result since the exact nature of this patchiness was precisely what inflation predicted.

    Inflation also predicted the mass density of the universe, likewise measured from the cosmic microwave background. “We’ve measured the mass density to better than a half percent accuracy, and it agrees perfectly with what inflation predicts—which is just gorgeous,” says Alan Guth, a physicist at MIT who first proposed inflation in 1980.

    “It’s really remarkable how much this simple idea of inflation has done,” says Robert Brandenberger, a physicist at McGill University. Although he’s exploring alternatives to inflation, the theory is the most self-consistent one out there, he says. “It’s successful because it predicted many things—and I emphasize predicted. Early in my career, we didn’t have the data. I saw inflation pass many more tests.”

    Still, while these successes have been more than encouraging for inflation, the evidence has yet to convince everyone. One prediction that might quell some dissent would be the detection of primordial gravitational waves, ripples in the fabric of space and time that originated from fluctuations of the gravitational field in the early universe. It almost happened: in March 2014, the BICEP2 experiment at the South Pole claimed to have seen these gravitational waves. But that heralded discovery vanished when astronomers realized the signal could have been entirely due to dust in the galaxy.

    Gravitational Wave Background from BICEP 2, quickly discredited.

    Inflation is not without its theoretical issues either. Some critics say that inflation requires initial conditions that are too specialized and contrived to be realistic. To get inflation started, the early universe had to be just right.

    Another point of contention is that inflation could imply the existence of an infinite number of universes. In the early 1980s, physicists discovered that inflation goes on forever, stopping only in some regions of space. But in between these pockets, inflation continues, expanding faster than the speed of light. These bubbles are thus closed off from each other, effectively becoming isolated universes with their own laws of physics. According to this theory, we live in one of these bubbles.

    While inflation proponents embrace this so-called multiverse, detractors say it’s absurd. If anything can happen in these bubble universes, then scientific predictions become meaningless. “If you have a theory that can’t be disproved, you should be dissatisfied with that,” Turok says. “That’s the state with inflation and the multiverse, so I would say this is not a scientific theory.”

    Even ardent supporters of inflation would agree the theory is incomplete. It doesn’t say anything about the moment of the Big Bang itself, for example, when the known laws of physics break down at what’s called a singularity.

    What inflation still lacks is a deeper foundation. Physicists have tried connecting inflation with string theory—the best candidate for a so-called theory of everything. But it’s still a work in progress. “With inflation, we basically add something by hand and we say it works, but we don’t have a more theoretical understanding of where it could come from,” says Steffen Gielen of the Perimeter Institute, who works with Turok on bouncing models.

    Bouncing Ideas

    The suggestion that the Big Bang wasn’t the absolute beginning originates from the first half of the 20th century, when physicists proposed a cyclic universe. But at the time, no one understood the details for how the universe could enter and emerge from each bounce.

    Today’s physicists still have their work cut out for them, but now they have all the tools of modern particle physics and string theory. In 1992, Maurizio Gasperini and Gabriele Veneziano first used these modern ideas to revisit a pre-Big-Bang universe. Ten years later, Turok and Paul Steinhardt, a physicist at Princeton University and one of inflation’s pioneers turned critic, expanded on that work. They have since become two of the most outspoken detractors of inflation and proponents of a bouncing universe.

    A bouncing universe, they argue, could produce the cosmos we see today—but without inflation. The universe doesn’t need a period of super-expansion to reach the smooth, flat state we see today; it can do so while contracting. And because every corner of a shrinking universe would have been in contact with one another, the whole cosmos could settle into a uniform temperature—again, just as we see it today.

    Because so much of the early universe is unknown, theories of cosmology can vary widely. Inflation, for instance, isn’t one particular theory but a class of models, each a bit different in detail. Likewise, physicists have theorized many ways for how a universe can bounce.

    In one case, dubbed a matter bounce, the universe only bounces once. The collapse into the bounce is like a reverse-order Big Bang. Another version, called an ekpyrotic model, can be cyclical, with contraction followed by expansion followed by contraction, and so on. A third scenario, the anamorphic universe, might be similarly cyclical.

    Pretty much all models require some sort of new physics. The differences between these models depend on the details, whether it’s new theories or exotic types of matter that halt the inertia of collapse and guide the universe through the bounce. Figuring out what happens at the bounce poses a big challenge, because that point is where the laws of physics fail, just as they do at the start of an inflationary universe.

    At the bounce, the universe collapses into a singularity, in which Einstein’s theory of gravity, general relativity, breaks down. Relativity isn’t currently compatible with quantum mechanics, which is needed at the small scales of the singularity. To unite the two, physicists have been searching for a theory of quantum gravity, which doesn’t yet exist.

    Over the past year, though, physicists have claimed modest progress on how to handle the singularity. Turok and Gielen have outlined how a simplified, toy model of a universe could undergo a quantum bounce. A bouncing universe containing only radiation—not unlike the radiation-dominated cosmos at the Big Bang—could cross the singularity via something like quantum tunneling: according to quantum mechanics, a particle can spontaneously appear on the other side of a barrier that would otherwise be impenetrable in non-quantum physics. A collapsing universe can act like a particle and tunnel through the barrier-like singularity, appearing on the other side as the expanding universe we know today—and evading the singularity’s problems.
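
    The tunneling analogy is easy to quantify in ordinary quantum mechanics. In the WKB approximation, the probability of slipping through a square barrier of height V and width L falls off exponentially with the barrier’s size, but it never reaches zero while the barrier is finite. The sketch below is a textbook estimate in natural units (ħ = m = 1), purely for illustration; it is not the cosmological calculation Turok and Gielen perform.

```python
import math

HBAR = 1.0  # natural units for illustration

def wkb_transmission(energy, barrier_height, width, mass=1.0):
    """WKB estimate of the tunneling probability through a square
    barrier of height V > E and thickness `width` (hbar = 1)."""
    if energy >= barrier_height:
        return 1.0  # classically allowed: no tunneling needed
    kappa = math.sqrt(2.0 * mass * (barrier_height - energy)) / HBAR
    return math.exp(-2.0 * kappa * width)

# A particle with half the barrier's energy still gets through sometimes:
print(wkb_transmission(energy=0.5, barrier_height=1.0, width=1.0))  # ≈ 0.135
```

    The key feature carried over to the cosmological argument is just this: a classically forbidden crossing can still have a nonzero quantum probability.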

    Meanwhile, Steinhardt and Anna Ijjas of Princeton University have proposed a way the universe could bounce without invoking quantum mechanics. They’ve shown that some exotic, negative energy could prevent a universe from collapsing into a singularity in the first place. By avoiding a singularity, the universe never gets small enough for quantum mechanics to come into play, so you don’t need quantum gravity. The universe then proceeds to expand.

    But while these two proposals might be a small advance, neither marks a radical leap from what’s been done before, Brandenberger says. We’re still far from solving the problem of the singularity. “If we solve the singularity problem by evoking exotic matter, the question is just twisted,” he says. In other words, instead of explaining the singularity, you now have to explain the exotic matter.

    Without new physics, a bounce doesn’t seem likely, according to Guth. “One has to adopt rather special features that one would have to assume in the underlying laws of physics to make the bounce possible,” he says. “To me, that doesn’t seem like a good bet.”

    But it’s still too early to judge, Turok says. The theories aren’t mature enough to be testable yet. Eventually, though, models could start making predictions. Future, more detailed measurements of the cosmic microwave background might support a particular model of inflation or a bouncing universe. Perhaps the most promising evidence would come in the form of primordial gravitational waves, which are about the best indicators of what happened in the moments after the Big Bang (or bounce).

    Depending on what these waves look like, researchers can start ruling out models of both bouncing universes and inflation. While the BICEP2 findings in 2014 were a false alarm, researchers hope other instruments will succeed, including its successor, BICEP3. The Atacama B-mode Search is now operating in the Atacama Desert in Chile, and researchers are planning future experiments with names such as the Primordial Inflation Polarization Explorer, QUBIC, and POLARBEAR.

    The Right Path

    In the end, however, it may not simply come down to an either-or choice between bouncing models or inflation, even though proponents of bouncing models sell their idea as an alternative. “What they’re doing is much more closely allied to inflation than they would have you think,” says Andrew Liddle, a cosmologist at the University of Edinburgh. “I don’t think it’s that radical of a departure.” Many of the mathematical tools used in bouncing models are similar to those used for inflation, he says. And when you apply observations like the cosmic microwave background, both bouncing models and inflation give similar results.

    You can even have both a bounce and inflation. “Now, sociologically, many people who study bounce cosmologies do so because they’re interested in finding an alternative to inflation,” says Sean Carroll, a physicist at the California Institute of Technology. “That’s fine, but if you just said, without any preexisting agendas, does the universe have a bounce, and if so, could it also involve inflation? I think you’d say sure.”

    Still, the debates between bounce proponents and the most outspoken inflation supporters can get contentious, each somewhat dismissive of the other side. The conflict is a reminder that science—and perhaps theoretical physics, in particular—is ultimately a human endeavor, filled with egos and subjectivity. Legacies and Nobel Prizes could be at stake.

    “In the absence of data, you’re welcome to your opinion—opinion is all you have,” Carroll says. “All of these ideas have significant challenges and question marks next to them.” While a problem may be a deal-breaker for one person, it’s only a minor stumbling block to another. When blazing a new trail, the right path is often subjective.

    See the full article here.

  • richardmitnick 9:04 am on October 6, 2016 Permalink | Reply
    Tags: Australia Is Moving Itself 1.8 Meters North on Maps, NOVA

    From NOVA: “Australia Is Moving Itself 1.8 Meters North on Maps” 



    01 Aug 2016 [Just appeared in social media.]
    Tim De Chant

    Australia is drifting north, necessitating a change in its coordinate system.

    Time to update your maps—Australia’s moving.

    Since 1994, when the country last updated its coordinates, Australia has drifted 1.5 meters north (about 5 feet). In an effort to stay ahead of the Earth’s tectonic plates, the country is moving itself 1.8 meters north (about 6 feet).

    The shift will future-proof the continent as it prepares for more autonomous vehicles, from farm tractors to cars.

    While a few feet here or there is within the limits of accuracy for many GPS systems, future systems will be accurate to within inches. Here’s Chris Foxx, reporting for BBC News:

    “If you want to start using driverless cars, accurate map information is fundamental,” said [project head Dan] Jaksa.

    “We have tractors in Australia starting to go around farms without a driver, and if the information about the farm doesn’t line up with the co-ordinates coming out of the navigation system there will be problems.”

    Australia, like most regions, has its own coordinate system, also known as a geodetic datum. There are several global coordinate systems that are perfectly serviceable, but local versions do a better job of minimizing the distortion that occurs when transferring the true shape of the Earth onto a flat coordinate plane.

    Because the Earth isn’t a perfect sphere, no datum is perfect. But by limiting the amount of the Earth’s surface that needs to be pulled and stretched when flattened, datums that cover smaller areas can more closely approximate the real thing.

    Australia’s new datum is expected to align with reality sometime in 2020.
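
    The figures in the article are consistent with simple plate-motion arithmetic. Assuming the drift rate has been roughly constant since 1994, the back-of-envelope sketch below (my estimate, not Geoscience Australia’s actual computation) recovers both the roughly 7-centimeter-per-year motion of the Australian plate and the 1.8-meter figure chosen so the datum matches reality around 2020.

```python
import math

EARTH_RADIUS_M = 6_371_000
METERS_PER_DEG_LAT = math.pi * EARTH_RADIUS_M / 180  # ~111 km per degree

drift_since_1994 = 1.5                     # meters of drift, 1994 to 2016, per the article
rate = drift_since_1994 / (2016 - 1994)    # ~0.068 m/yr, i.e. ~7 cm per year
projected_2020 = rate * (2020 - 1994)      # ~1.8 m: the size of the planned shift

print(round(rate * 100, 1))      # ≈ 6.8 (cm per year)
print(round(projected_2020, 2))  # ≈ 1.77 (meters, close to the 1.8 m adjustment)

# As an angle, 1.8 m of northward motion is a tiny latitude change:
print(1.8 / METERS_PER_DEG_LAT)  # ≈ 1.6e-5 degrees
```

    The angular change is minuscule, which is why a few feet of drift only matters once positioning systems approach inch-level accuracy.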

    See the full article here.

  • richardmitnick 12:22 pm on September 13, 2016 Permalink | Reply
    Tags: NOVA, Triple-Alpha Process Shows That Other Universes Might Be Better Suited to Life Than Ours

    From NOVA: “Triple-Alpha Process Shows That Other Universes Might Be Better Suited to Life Than Ours” 



    02 Sep 2016
    Allison Eck

    If life is rare in our universe, it might be more common in alternate ones.

    According to scientists, life is possible because trios of alpha particles (helium-4 nuclei) fuse into carbon. But there’s a problem with that theory: fusing two alpha particles produces a very unstable isotope, beryllium-8, which makes the abundance of carbon in the universe seem odd and improbable.

    In the 1950s, astronomer Fred Hoyle suggested that to resolve this problem, the fusion of three alpha particles must create carbon-12 in an excited state, with more energy than its ground state. This “resonance” between the collective alpha-particle energies and the excited state of carbon-12, which later decays to the ground state, is very sensitive—if you change it just slightly, the creation of carbon isn’t possible. Some experts insist that this fact is evidence of the multiverse’s existence: since the chances of this critical value arising are so low, other universes with other fundamental constants must exist, too. Only those universes that are appropriately fine-tuned would give birth to life.
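
    The sensitivity Hoyle identified can be checked with standard atomic masses: three free alpha particles sit about 7.27 MeV above the carbon-12 ground state, while the excited state he predicted (now called the Hoyle state) is measured at 7.654 MeV, only about 0.4 MeV above that threshold. Shift the relevant energies slightly and the resonance, and with it stellar carbon production, disappears. A quick back-of-envelope check, with mass values from standard tables:

```python
U_TO_MEV = 931.494   # energy equivalent of one atomic mass unit, in MeV

m_he4 = 4.002602     # atomic mass of helium-4, in u
m_c12 = 12.000000    # carbon-12 defines the atomic mass unit

# Energy of three free alpha particles relative to the carbon-12 ground state
three_alpha_threshold = (3 * m_he4 - m_c12) * U_TO_MEV
print(round(three_alpha_threshold, 2))  # ≈ 7.27 MeV

# The Hoyle state sits just above that threshold -- that narrow gap is the resonance
hoyle_state = 7.654  # measured excitation energy of the Hoyle state, in MeV
print(round(hoyle_state - three_alpha_threshold, 2))  # ≈ 0.38 MeV
```

    A gap of a few tenths of an MeV out of more than 7 MeV is what makes the triple-alpha process look finely tuned, and it is the quantity Adams and Grohs effectively vary when they imagine universes with a slightly different strong force.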

    Now, cosmologists are taking this idea to the next level.

    Here’s Jacob Aron, reporting for New Scientist:

    “But now [Fred] Adams and his colleague Evan Grohs have argued that if other universes have different fundamental constants anyway, it’s possible to create a universe in which beryllium-8 is stable, thus making it easy to form carbon and the heavier elements.

    For this to happen would require a change in the binding energy of beryllium-8 of less than 0.1 MeV – something that the pair’s calculations show should be possible by slightly altering the strength of the strong force, which is responsible for holding nuclei together.

    Simulating how stars might burn in such a universe, they found that the stable beryllium-8 would produce an abundance of carbon, meaning life as we know it could potentially arise. ‘There are many more working universes than most people realise,’ says Adams.”

    See the full article here.

  • richardmitnick 3:29 pm on August 25, 2016 Permalink | Reply
    Tags: NOVA

    From NOVA: “The ‘Quantum Theory’ of Cancer Treatment” 



    20 Jul 2016 [This just appeared in social media.]
    Amanda B. Keener

    In April 2011, Christopher Barker, a radiation oncologist at Memorial Sloan Kettering Cancer Center in New York, received some unusual news about a participant in a clinical trial. The patient was battling a second recurrence of melanoma that had spread to several areas of her body. After more than a year on the experimental drug, her tumors had only gotten bigger, and after one near her spine started causing back pain, her doctors arranged for local radiation therapy to shrink the tumor and give her some relief.

    But the tumor near her spine was not the only one that shrank. “From one set of images to another, the radiologist noticed that there was a dramatic change in the extent of the melanoma,” Barker says. Although only one tumor was exposed to radiation, two others had started shrinking, too.

    The striking regression was a very rare effect of radiation therapy, Barker and his colleagues concluded, called an abscopal response. “It’s not common,” says Barker. “But we see it, and it’s pretty remarkable when it happens.”

    A woman prepares to receive radiation treatment for cancer. Photo credit: Mark Kostich/iStockphoto.

    Although the abscopal response was first recognized back in 1953, and a smattering of case reports similar to Barker’s appeared in the literature throughout the 1960s, ’70s, and ’80s, the mystery behind the abscopal response largely went unsolved until a medical student named Silvia Formenti dusted it off.

    While studying radiation therapy in Milan during the 1980s, Formenti couldn’t shake the idea that local radiotherapy must have some effect on the rest of the body. “When you burn yourself, the burn is very localized, yet you can get really systemic effects,” says Formenti, now chair of the department of radiation oncology at Weill Cornell Medical College in New York. “It seemed that applying radiotherapy to one part of the body should be sensed by the rest of the body as well.”

    The primary goal of therapy with ionizing radiation—the type used to shrink tumors—is to damage the DNA of fast-growing cancer cells so they self-destruct. But like burns, radiation also causes inflammation, a sign of the immune system preparing for action. For a long time, it was unclear what effect inflammation might have on the success of radiation therapy, though there were some hints buried in the scientific literature. For example, a 1979 study showed that mice lacking immune cells called T cells had poorer responses to radiation therapy than normal mice. But exactly what those T cells had to do with radiation therapy was anyone’s guess.

    Better Together

    In 2001, shortly after arriving in New York, Formenti attended a talk by Sandra Demaria, a pathologist also at Weill Cornell. Demaria was studying slivers of breast tumors removed from patients who had received chemotherapy and had found that in some patients, chemotherapy caused immune cells to flood the tumors. This made Formenti wonder if the same thing could happen after radiation therapy.

    In addition to fighting off illness-causing pathogens, part of the immune system’s job is to keep tabs on cells that could become cancerous. For example, cytotoxic T cells kill off any cells that display signs of cancer-related mutations. Cancer cells become troublesome when they find ways to hide these signs or release proteins that dull T cells’ senses. “Cancer is really a failure of the immune system to reject [cancer-forming] cells,” Formenti says.

    Formenti and Demaria, a fellow Italian native, quickly joined forces to determine whether the immune system was driving the abscopal response. To test their idea, their team injected breast cancer cells into mice at two separate locations, causing individual tumors to grow on either side of the animals’ bodies. Then they irradiated just one of the tumors on each mouse. Radiation alone prevented the primary tumor from growing, but didn’t do much else. Yet when the researchers also injected a protein called GM-CSF into the mice, the size of the second tumor was also controlled.

    GM-CSF expands the numbers of dendritic cells, which act as T cells’ commanding officers, providing instructions about where to attack. But the attack couldn’t happen unless one of the tumors was irradiated. “Somehow radiation inflames the tumor and makes it interesting to the immune system,” Formenti says.

    Formenti and Demaria knew that if their findings held up in human studies, then it could be possible to harness the abscopal effect to treat cancer that has metastasized throughout the body.

    Although radiation therapy is great at shrinking primary tumors, once a cancer has spread, the treatment is typically reserved for tumors that are causing patients pain. “Radiation is considered local therapy,” says Michael Lim, a neurosurgeon at Johns Hopkins University in Baltimore who is studying ways to combine radiotherapy with immunotherapy to treat brain tumors. But, he adds, “if you could use radiation to kindle a systemic response, it becomes a whole different paradigm.”

    When Demaria and Formenti first published their results in 2004, the concept of using radiation to activate immunity was a hard sell. At the time, research into how radiation affected the immune system focused on using high doses of whole-body irradiation to knock out the immune systems of animal models. It was counterintuitive to think the same treatment used locally could activate immunity throughout the body.

    That perspective, however, would soon change. In 2003 and 2004, James Hodge, an immunologist at the National Cancer Institute, and his colleagues published two mouse studies showing that after radiation, tumor cells displayed higher levels of proteins that attract and activate cancer-killing T cells. It was clear that radiation doesn’t just kill cancer cells; it can also make those that don’t die more attractive to immune attack, Hodge says.

    This idea received another boost in 2007, when a research team from the Gustave Roussy Institute of Oncology near Paris reported that damage from radiation caused mouse and human cancer cells to release a protein called HMGB1 that activates dendritic cells. They additionally found that women with breast cancer who also carried a mutation preventing their dendritic cells from sensing HMGB1 were more likely to have metastases in the two years following radiotherapy. In addition to making tumors more attractive to the immune system, Hodge says, the damage caused by radiation also releases bits of cancer cells called antigens, which then prime immune cells against the cancer, much like a vaccine.

    In some ways, Barker says, oncologists have always sensed that radiation works hand-in-hand with the immune system. For example, when his patients ask him where their tumors go after they’ve been irradiated, he tells them that immune cells mop up the dead cell debris. “The immune system acts like the garbage man,” he says.

    Now, immunologists had evidence that the garbage men do more than clean up debris: they are also part of the demolition team, and if they could coordinate at different worksites, they could generate abscopal responses. With radiation alone, this only happened very rarely. “Radiation does some of this trick,” Formenti says. “But you really need to help radiation a bit.”

    Formenti and Demaria had already shown in mice that such assistance could come in the form of immunotherapy with GM-CSF, and in 2003 they set out to test their theory in patients. They treated 26 metastatic cancer patients who were undergoing radiation treatment with GM-CSF. The researchers then used CT scans to track the sizes of non-irradiated tumors over time. Last June, they reported that the treatment generated abscopal responses in 20% of the patients. Patients with abscopal responses tended to survive longer, though none of the patients were completely cured.

    As the Weill Cornell team was conducting their GM-CSF study, a new generation of immunotherapeutic drugs arrived on the scene. Some, like imiquimod, activate dendritic cells in a more targeted way than GM-CSF does. Another group, the checkpoint inhibitors, release the brakes on the immune system and T cells in particular, freeing the T cells to attack tumors.

    In 2005, Formenti and her team found that a particular checkpoint inhibitor worked better with radiotherapy than alone and later reported that the same combination produces abscopal responses in a mouse model of breast cancer.

    Off-target, Spot-on

    In 2012, Formenti had an unexpected chance to test this treatment in the clinic when one of her patients who had read about her research requested that she try the combination on him. The patient had run out of options, so Formenti’s team obtained an exception to use the immunotherapy ipilimumab, which she had used in her 2005 study and had only been approved for melanoma, and proceeded to irradiate tumors in the patient’s liver. After five months, all but one of his tumors had disappeared. “We were ecstatic,” Formenti says. “He’s still alive and well.”

    The availability of checkpoint inhibitors seems to have opened the floodgates. Since the US Food and Drug Administration approved ipilimumab in 2011, there have been at least seven reports of suspected or confirmed abscopal responses in patients on checkpoint inhibitors, including the one Barker witnessed. Contrast that with the previous three decades, when fewer than one per year was reported, according to one review. Almost all of the recent cases involving checkpoint inhibitors have been in patients with melanoma, since that’s where the drugs have mainly been tested. But abscopal responses with or without immunotherapy have been reported in patients with cancers of the liver, kidney, blood, and lung.

    There are now dozens of clinical trials combining radiation with a range of immunotherapies, including cancer vaccines and oncolytic viruses. “There’s quite a nice critical mass of people working on this,” Formenti says. She and Demaria are now finishing up a clinical trial in lung cancer patients using a protocol similar to the one that worked so well in their original patient.

    “I think we know that people who respond to checkpoint inhibitors already have more immune-activating tumors,” Demaria says. The question now, she says, is whether radiation can expand the 20% of people who respond to the combination therapy.

    One solution might be to match combinations to particular patients or tumor types. Demaria’s team is collecting blood and tissue samples from patients in a Weill Cornell lung cancer trial to look for differences in the immune responses of those who do and don’t generate abscopal responses. Such changes in the number or status of a cell type associated with particular outcomes are known as biomarkers.

    So far, there is little data about how the two types of responses differ. Barker and his team did publish measurements of a broad range of immune markers from their patient who experienced an abscopal response. “We didn’t really have a lot of clues in terms of what we should look at,” he says. After radiation, they observed a rise in activated T cells and in antibodies specific to tumor proteins, followed by steady declines of both as the tumors regressed. But, he says, there was no “smoking gun” that could explain why this particular patient responded the way she did.

    Understanding how the immune system responds to immunotherapy and radiation will be key to optimizing the combination of the two. “One needs to do these combinations to try and improve the outcome on both sides of the equation,” says William McBride, a radiation oncologist at the University of California, Los Angeles. There’s still controversy, for example, over whether the immune system responds better to high doses of radiation over short periods or low doses over longer periods. “We think we know the best sequence of therapy based on the pre-clinical studies, but that hasn’t been confirmed in clinical studies yet,” Barker says. “If we had a biomarker that would tell us in what way you should give the radiation, that would be enormously valuable.”

    Demaria says her research suggests that more tumor damage is not always better and that high radiation doses may be counterproductive, activating feedback responses that suppress immunity. She’s currently comparing immune signatures of different radiation regimens in mice. So far, she says, regimens that make the cancer look and act like virally infected cells tend to elicit the best immune responses, but there is a long way to go in translating that work to humans.

    “Things are moving faster than they have for a long time, but at this point there are still a lot of unanswered questions,” she says.

    Fortunately, she and Formenti have plenty of motivation to work on those questions. Demaria says she still remembers examining a bit of tumor that was left behind after that first lung cancer patient received treatment. It was full of T cells, which had presumably destroyed the cancer. “It’s the picture you never forget,” she says. “It is probably the biggest satisfaction to see somebody’s fate turned around by what you can do.”


    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.
