Tagged: NOVA

  • richardmitnick 2:01 pm on August 16, 2018 Permalink | Reply
    Tags: Hunt for the sterile neutrino, NOVA, Short-Baseline Neutrino experiments

    From Fermi National Accelerator Lab: “ICARUS neutrino detector installed in new Fermilab home” 

    FNAL II photo

    FNAL Art Image
    FNAL Art Image by Angela Gonzales

    From Fermi National Accelerator Lab, an enduring source of strength for the US contribution to scientific research worldwide.

    August 16, 2018
    Leah Hesla

    For four years, three laboratories on two continents have prepared the ICARUS particle detector to capture the interactions of mysterious particles called neutrinos at the U.S. Department of Energy’s Fermi National Accelerator Laboratory.

    On Tuesday, Aug. 14, ICARUS moved into its new Fermilab home, a recently completed building that houses the large, 20-meter-long neutrino hunter. Filled with 760 tons of liquid argon, it is one of the largest detectors of its kind in the world.

    With this move, ICARUS now sits in the path of Fermilab’s neutrino beam, a milestone that brings the detector one step closer to taking data.

    It’s also the final step in an international scientific handoff. From 2010 to 2014, ICARUS operated at the Italian Gran Sasso National Laboratory, run by the Italian National Institute for Nuclear Physics. Then the detector was sent to the European laboratory CERN, where it was refurbished for its future life at Fermilab, outside Chicago. In July 2017, ICARUS completed its trans-Atlantic trip to the American laboratory.

    The second of two ICARUS detector modules is lowered into its place in the detector hall. Photo: Reidar Hahn

    “In the first part of its life, ICARUS was an exquisite instrument for the Gran Sasso program, and now CERN has improved it, bringing it in line with the latest technology,” said CERN scientist and Nobel laureate Carlo Rubbia, who led the experiment when it was at Gran Sasso and currently leads the ICARUS collaboration. “I eagerly anticipate the results that come out of ICARUS in the Fermilab phase of its life.”

    Since 2017, Fermilab, working with its international partners, has been instrumenting the ICARUS building, getting it ready for the detector’s final, short move.

    “Having ICARUS settled in is incredibly gratifying. We’ve been anticipating this moment for four years,” said scientist Steve Brice, who heads the Fermilab Neutrino Division. “We’re grateful to all our colleagues in Italy and at CERN for building and preparing this sophisticated neutrino detector.”

    Neutrinos are famously fleeting. They rarely interact with matter: Trillions of the subatomic particles pass through us every second without a trace. To catch them in the act of interacting, scientists build detectors of considerable size. The more massive the detector, the greater the chance that a neutrino stops inside it, enabling scientists to study the elusive particles.

    ICARUS’s 760 tons of liquid argon give neutrinos plenty of opportunity to interact. The interaction of a neutrino with an argon atom produces fast-moving charged particles. The charged particles liberate atomic electrons from the argon atoms as they pass by, and these tracks of electrons are drawn to planes of charged wires inside the detector. Scientists study the tracks to learn about the neutrino that kicked everything off.
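    To make the readout idea concrete, here is a minimal, illustrative Python sketch of how a wire-plane detector turns ionization into a picture of a track. It is not ICARUS's actual reconstruction code, and the drift speed and wire spacing are assumed, typical liquid-argon values rather than ICARUS parameters.

    ```python
    # Toy reconstruction of a liquid-argon TPC track from wire hits.
    # Illustrative only: not the ICARUS reconstruction chain.

    DRIFT_SPEED_MM_PER_US = 1.6   # assumed electron drift speed (~500 V/cm field)
    WIRE_PITCH_MM = 3.0           # assumed spacing between readout wires

    def hit_to_xy(wire_number, drift_time_us, t0_us=0.0):
        """Convert one wire hit into a 2-D point on the track."""
        y = wire_number * WIRE_PITCH_MM                       # position along the wire plane
        x = (drift_time_us - t0_us) * DRIFT_SPEED_MM_PER_US   # distance the electrons drifted
        return x, y

    # A straight, muon-like track shows up as hits on successive wires
    # at steadily increasing drift times.
    hits = [(wire, 10.0 + 2.5 * wire) for wire in range(5)]
    print([hit_to_xy(wire, t) for wire, t in hits])
    ```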

    Rubbia himself spearheaded the effort to make use of liquid argon as a detection material more than 25 years ago, and that same technology is being developed for the future Fermilab neutrino physics program.

    “This is an exciting moment for ICARUS,” said scientist Claudio Montanari of INFN Pavia, who is the technical coordinator for ICARUS. “We’ve been working for months choreographing and carrying out all the steps involved in refurbishing and installing it. This move is like the curtain coming down after the entr’acte. Now we’ll get to see the next act.”

    ICARUS is one part of the Fermilab-hosted Short-Baseline Neutrino program, whose aim is to search for a hypothesized but never conclusively observed type of neutrino, known as a sterile neutrino. Scientists know of three neutrino types. The discovery of a fourth could reveal new physics about the evolution of the universe. It could also open an avenue for modeling dark matter, which constitutes about 23 percent of the universe’s total mass and energy.

    ICARUS is the second of three Short-Baseline Neutrino detectors to be installed. The first, called MicroBooNE, began operating in 2015 and is currently taking data. The third, called the Short-Baseline Near Detector, is under construction. All use liquid argon.

    FNAL/MicroBooNE

    FNAL Short-Baseline Near Detector

    Fermilab’s powerful particle accelerators provide a plentiful supply of neutrinos and will send an intense beam of the particle through the three detectors — first SBND, then MicroBooNE, then ICARUS. Scientists will study the differences in data collected by the trio to get a precise handle on the neutrino’s behavior.

    “So many mysteries are locked up inside neutrinos,” said Fermilab scientist Peter Wilson, Short-Baseline Neutrino coordinator. “It’s thrilling to think that we might solve even one of them, because it would help fill in our frustratingly incomplete picture of how the universe evolved into what we see today.”

    Members of the crew that moved ICARUS stand by the detector. Photo: Reidar Hahn

    The three Short-Baseline Neutrino experiments are just one part of Fermilab’s vibrant suite of experiments to study the subtle neutrino.

    NOvA, Fermilab’s largest operating neutrino experiment, studies a behavior called neutrino oscillation.


    FNAL/NOvA experiment map


    FNAL NOvA detector in northern Minnesota


    FNAL Near Detector

    The three neutrino types change character, morphing in and out of their types as they travel. NOvA researchers use two giant detectors spaced 500 miles apart — one at Fermilab and another in Ash River, Minnesota — to study this behavior.

    Another Fermilab experiment, called MINERvA, studies how neutrinos interact with nuclei of different elements, enabling other neutrino researchers to better interpret what they see in their detectors.

    Scientists at Fermilab use the MINERvA detector to make measurements of neutrino interactions that can support the work of other neutrino experiments. Photo: Reidar Hahn

    FNAL/MINERvA


    “Fermilab is the best place in the world to do neutrino research,” Wilson said. “The lab’s particle accelerators generate beams that are chock full of neutrinos, giving us that many more chances to study them in fine detail.”

    The construction and operation of the three Short-Baseline Neutrino experiments are valuable not just for fundamental research, but also for the development of the international Deep Underground Neutrino Experiment (DUNE) and the Long-Baseline Neutrino Facility (LBNF), both hosted by Fermilab.

    DUNE will be the largest neutrino oscillation experiment ever built, sending particles 800 miles from Fermilab to Sanford Underground Research Facility in South Dakota. The detector in South Dakota, known as the DUNE far detector, is mammoth: Made of four modules — each as tall and wide as a four-story building and almost as long as a football field — it will be filled with 70,000 tons of liquid argon, about 100 times more than ICARUS.
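    The "about 100 times" figure follows directly from the two masses quoted here; a quick back-of-the-envelope check in Python:

    ```python
    # Liquid-argon masses quoted in the article, in tons.
    dune_far_detector_tons = 70_000
    icarus_tons = 760
    print(dune_far_detector_tons / icarus_tons)   # ~92, i.e. roughly 100 times more argon
    ```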

    FNAL LBNF/DUNE from FNAL to SURF, Lead, South Dakota, USA


    FNAL DUNE Argon tank at SURF


    Surf-Dune/LBNF Caverns at Sanford



    SURF building in Lead SD USA

    The knowledge and expertise scientists and engineers gain from running the Short-Baseline Neutrino experiments, including ICARUS, will inform the installation and operation of LBNF/DUNE, which is expected to start up in the mid-2020s.

    “We’re developing some of the most advanced particle detection technology ever built for LBNF/DUNE,” Brice said. “In preparing for that effort, there’s no substitute for running an experiment that uses similar technology. ICARUS fills that need perfectly.”

    Eighty researchers from five countries collaborate on ICARUS. The collaboration will spend the next year instrumenting and commissioning the detector. They plan to begin taking data in 2019.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    FNAL Icon

    Fermi National Accelerator Laboratory (Fermilab), located just outside Batavia, Illinois, near Chicago, is a US Department of Energy national laboratory specializing in high-energy particle physics. Fermilab is America’s premier laboratory for particle physics and accelerator research, funded by the U.S. Department of Energy. Thousands of scientists from universities and laboratories around the world collaborate at Fermilab on experiments at the frontiers of discovery.


    FNAL/MINERvA

    FNAL DAMIC

    FNAL Muon g-2 studio

    FNAL Short-Baseline Near Detector under construction

    FNAL Mu2e solenoid

    Dark Energy Camera [DECam], built at FNAL

    FNAL DUNE Argon tank at SURF

    FNAL/MicroBooNE

    FNAL Don Lincoln

    FNAL/MINOS

    FNAL Cryomodule Testing Facility

    FNAL Minos Far Detector

    FNAL LBNF/DUNE from FNAL to SURF, Lead, South Dakota, USA

    FNAL/NOvA experiment map

    FNAL NOvA Near Detector

    FNAL ICARUS

    FNAL Holometer

     
  • richardmitnick 1:03 pm on August 5, 2018 Permalink | Reply
    Tags: NOVA

    From NOVA: “NASA’s TESS Spacecraft Will Scan the Sky For Exoplanets” 

    PBS NOVA

    From NOVA

    13 Apr 2018 [Just now in social media.]
    Allison Eck

    NASA/TESS will identify exoplanets orbiting the brightest stars just outside our solar system.

    The era of big data is here—not just for life on Earth, but in our quest to find Earth-like worlds, too.

    Next Monday, April 16, NASA’s $200-million Transiting Exoplanet Survey Satellite, or TESS, will surge skyward on a SpaceX Falcon 9 rocket. If all goes well, over the next two years, it will search space for signs of exoplanets, or planets beyond our own solar system. So far, scientists have found around 4,000 such celestial bodies freckled across the face of the universe, including seven Earth-sized planets orbiting the dwarf star Trappist-1 about 235 trillion miles away. NASA’s Kepler spacecraft, launched in 2009, has led this revolutionary effort—but now it’s running out of fuel.
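    The "235 trillion miles" figure is simply the star's distance of roughly 40 light-years expressed in miles; a quick check (the 40-light-year distance is an approximation, not a number from the article):

    ```python
    MILES_PER_LIGHT_YEAR = 5.88e12    # approximate
    distance_to_trappist1_ly = 40     # approximate
    print(distance_to_trappist1_ly * MILES_PER_LIGHT_YEAR)   # ~2.4e14, i.e. roughly 235 trillion miles
    ```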

    NASA/Kepler Telescope

    TESS, its replacement, will document close-by exoplanets circling bright stars (as opposed to the more distant ones Kepler surveyed). These data points will give scientists more information about the planets ripest for scientific exploration—and which may harbor life.

    “TESS’s job is to find an old-fashioned address book of all the planets spread out around all the stars in the sky,” said Sara Seager, astrophysicist and planetary scientist at MIT and deputy science director for the TESS mission.

    George Ricker, principal investigator for TESS, estimates that the spacecraft will be able to find some 500 super-Earths, or planets that are one-and-a-half to two times the size of Earth, and several dozen Earth-sized planets. Many of these likely orbit red dwarf stars, which are smaller and cooler than our Sun. TESS will watch for transits—the slight dimming of stars as planets pass in front of them from our vantage point on Earth.

    Planet transit. NASA/Ames

    Since red dwarfs are cooler than the Sun, habitable zone planets that revolve around them will orbit closer to their host star, making transits more frequent—and thus more scientifically useful.
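    Two textbook relations make this concrete: Kepler's third law sets how often transits repeat, and the depth of the dip scales as the square of the planet-to-star radius ratio. The sketch below uses assumed, TRAPPIST-1-like stellar values (not numbers from the article) to compare an Earth-sized planet in a red dwarf's habitable zone with Earth around the Sun.

    ```python
    import math

    def orbital_period_days(a_au, star_mass_suns):
        # Kepler's third law in solar units: P[yr]^2 = a[AU]^3 / M[Msun]
        return 365.25 * math.sqrt(a_au**3 / star_mass_suns)

    def transit_depth(planet_radius_km, star_radius_km):
        # Fraction of the star's light blocked during a transit.
        return (planet_radius_km / star_radius_km) ** 2

    R_EARTH_KM, R_SUN_KM = 6371.0, 696_000.0

    # Earth-sized planet in the habitable zone of a small red dwarf (assumed values).
    print(orbital_period_days(0.03, 0.12))              # ~5 days: transits repeat often
    print(transit_depth(R_EARTH_KM, 0.12 * R_SUN_KM))   # ~0.6% dip in brightness

    # Earth transiting the Sun, for comparison.
    print(orbital_period_days(1.0, 1.0))                # ~365 days: one transit a year
    print(transit_depth(R_EARTH_KM, R_SUN_KM))          # ~0.008% dip
    ```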

    “The transits are a repeating phenomenon. Once you’ve established that a given host star has planets, you can predict where they will be in the future,” Ricker said. “That’s really going to be one of the lasting legacies from TESS.”

    Stephen Rinehart, project scientist for TESS, says that with Kepler, the goal was to get a narrow, deep look at one slice of the cosmos. By contrast, TESS will take an expansive look at the most promising candidates for future research—and compare and contrast them.

    “It’s changing the nature of the dialogue,” Rinehart said. “So far, the nature of our conversations about exoplanets have really been statistical. With TESS, we’ll find planets around bright stars that are well-suited to follow-up observations, where we can talk not just about what the population is like, but we can start talking about what individual planets are like.”

    TESS will gaze upon 20 million stars in the solar neighborhood. Kepler was only able to look at about 200,000. “We’ve got a factor of a hundred more stars that we’re going to be able to look at,” Ricker said. “These are the objects that people are going to want to come back to centuries from now.”

    The spacecraft will act as a bridge to future projects, too, like the James Webb Space Telescope, which is set to launch in May of 2020. That telescope will study every phase in the history of our universe—and it’ll act as the “premier observatory of the next decade.”

    Our history with exoplanets is surprisingly brief. While we had dreamt of them for centuries, it was only 25 years ago that we confirmed their existence. Now, we know that nearly every red dwarf in the Milky Way has a family of planets, and that maybe 20% of those planets lie within the habitable zone. With so much variety and so many to choose from, scientists hope that by studying their atmospheres, they’ll be able to detect signs of life.

    “[Habitability] is one of the philosophical questions of our time,” Rinehart said. “Can we find evidence that there’s even a possibility of other life nearby us in the universe? TESS isn’t going to quite get us there. TESS is an important step forward.”

    Paul Hertz, director of astrophysics for NASA, echoes Rinehart’s optimism.

    “After TESS is done, you’ll be able to go outside at night, take your grandchild by the hand, and point to a star and say, ‘I know there’s a planet around that star. Let’s talk about what that planet might be like,’” Hertz said. “Nobody’s ever been able to do that in the history of mankind.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 11:00 am on July 29, 2018 Permalink | Reply
    Tags: Dynamical dark matter, NOVA

    From NOVA: “Does Dark Matter Ever Die?” 

    PBS NOVA

    From NOVA

    30 May 2018 [Just found in social media]
    Kate Becker

    Dark matter is the unseen hand that fashions the universe. It decides where galaxies will form and where they won’t. Its gravity binds stars into galaxies and galaxies into galaxy clusters.

    Dark matter cosmic web and the large-scale structure it forms. The Millennium Simulation, V. Springel et al.

    Caterpillar Project: a Milky-Way-size dark-matter halo and its subhalos (circled), from an enormous suite of simulations. Griffen et al. 2016.

    And when two galaxies merge, dark matter is there, sculpting the product of the merger. But as for what dark matter actually is? No one knows.

    Here’s the short list of what we do know about dark matter. Number one: There’s a lot of it, about five times more than “ordinary” matter. Two: It doesn’t give off, reflect, or absorb light, but it does exert gravity, which is what gives it a driver’s-seat role in the evolution of galaxies. Three: It’s stable, meaning that for almost 13.8 billion years—the current age of the universe—dark matter hasn’t decayed into anything else, at least not enough to matter much. In fact, the thinking goes, dark matter will still be around even when the universe is quintillions (that’s billions of billions) of years old—maybe even forever.

    Though invisible, dark matter exerts gravity just like other matter. No image credit.

    Theoretical physicists dreaming up new ideas about dark matter typically start with these three basic principles. But what if the third—the requirement that dark matter be stable over the cosmic long haul—is wrong? That’s the renegade idea behind a new dark matter proposal called “Dynamical Dark Matter.” Though it’s still on the fringe of dark matter physics (“It’s as far as you can get from the traditional approaches,” says physicist Keith Dienes of the University of Arizona, who first developed the idea with Lafayette College theorist Brooks Thomas), it’s been gaining traction and attracting collaborators from particle physics, astrophysics, and beyond.

    And dark matter is a field that could use some new ideas. While astronomers have been picking up dark matter’s fingerprints all over the universe for at least a century, physicists can’t seem to get a fix on a single dark matter particle. It’s not for lack of trying. Particle hunters have looked for signs of them in flurries of particles set loose by colliders like the Large Hadron Collider (LHC). They have buried germanium crystals and tanks of liquid xenon and argon deep underground—beneath mountains and in old gold mines—and looked for dark matter particles pinging off the atomic nuclei inside. The result: Nothing, at least not anything that physicists can agree on.

    DARWIN Dark Matter experiment. A design study for a next-generation, multi-ton dark matter detector in Europe at the University of Zurich

    Lux Dark Matter 2 at SURF, Lead, SD, USA

    DEAP Dark Matter detector, The DEAP-3600, suspended in the SNOLAB deep in Sudbury’s Creighton Mine


    Inside the ADMX experiment hall at the University of Washington. Credit: Mark Stone, U. of Washington

    Meanwhile, the astrophysical evidence for dark matter keeps building up. Take one universal mystery: Astronomers, after clocking how fast stars are circling around in galaxies, have found that stars skimming a galaxy’s perimeter are going just about as fast as closer-in stars. But based on everything we know about how gravity works, they should actually be going a lot slower—unless there is some invisible mass pulling on them. Then, there are galaxy clusters: Galaxies within them are jouncing around so quickly that they should fly apart, absent some invisible mass holding them all together. Noticing a theme here? Even the cosmic microwave background radiation, the closest thing we have to a baby picture of the newborn universe, has patterns in it that can only really be explained by dark matter. So, if dark matter is so ubiquitous, why can’t we find it?
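    The rotation-curve argument can be put in numbers with one line of Newtonian gravity: if the visible matter were all there is, orbital speeds far from a galaxy's center should fall off as the square root of the enclosed mass over radius. The toy comparison below uses made-up but representative values, not actual survey data.

    ```python
    import math

    G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
    M_VISIBLE_KG = 1.0e41    # assumed visible mass of a galaxy (~5e10 Suns)
    M_PER_KPC = 3.086e19     # meters in one kiloparsec

    def keplerian_speed_km_s(radius_kpc):
        # Expected orbital speed if only the visible (central) mass pulls on a star.
        r = radius_kpc * M_PER_KPC
        return math.sqrt(G * M_VISIBLE_KG / r) / 1000.0

    for r_kpc in (5, 10, 20, 40):
        print(r_kpc, round(keplerian_speed_km_s(r_kpc)))
    # Prediction falls as 1/sqrt(r): roughly 208, 147, 104, 74 km/s.
    # Measured rotation curves instead stay roughly flat far out,
    # which is the mismatch that unseen mass (dark matter) is invoked to explain.
    ```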

    Gravity from Huchra’s Lens causes light from the quasar Einstein Cross to bend around it. No image credit.

    Some researchers are beginning to wonder if they’ve been searching for the wrong thing all along. Most (though not all) dark matter detectors are designed to find hypothetical particles called WIMPs—short for “weakly interacting massive particles.” WIMPs are an appealing dark matter candidate because they emerge naturally from a beyond-the-standard-model theory called supersymmetry, which posits that all the fundamental subatomic particles have as-yet-undiscovered partners.

    As physicists worked out the properties of those still unseen particles, they noticed that one was a startlingly good match for dark matter. It would interact with other particles via gravity and something called the weak force, which only works when particles get within a proton’s width of each other. Plus, it would be stable, and there could be just enough of it to account for the missing mass without upsetting the evolution of the universe.

    The appeal of WIMPs is “almost aesthetic,” says Jason Kumar, a physicist at the University of Hawaii: it speaks to physicists’ love of all that is simple, symmetrical, and elegant. But, Kumar says, “It’s now becoming very hard to get these models to fit with the data we’re seeing.” That doesn’t mean that the WIMP model is wrong, but it does put researchers in the mood to consider ideas that, ten years ago, might have been brushed off as theoretical footnotes. Like, for instance, the idea that dark matter isn’t stable after all.

    A Destabilizing Influence

    Dienes and Thomas were newcomers to dark matter when they first hatched the idea of Dynamical Dark Matter. They were so new to the field that, at first, they didn’t even worry about stability. Together, they began sketching a new kind of dark matter. First, they thought, what if dark matter weren’t just one kind of particle, but a whole bunch of different kinds? Second, what if those particles could decay? Some might disappear within seconds, but others could stick around for trillions of years. The trick would be getting the balance right, so that the bulk of the dark matter would linger until at least the present day.

    Dienes and Thomas called their new framework “Dynamical Dark Matter,” and started sharing it at talks and academic conferences. The reaction, according to Dienes: “A boatload of skepticism.”

    “People kept asking about stability,” Dienes remembers. “But we were not thinking about stability in the traditional way.”

    Why are physicists so sure that dark matter is stable, anyway? Galaxies from long ago—the ones astronomers see when they look billions of light years out into the universe—aren’t more weighed-down by dark matter than our nearby, present-day specimens, at least not at the level of precision that astronomers can measure. Plus, if dark matter decayed into lighter, detectable particles, the little shards would fly out into space with a lot of energy, which we would be able to measure on Earth. And if the decay started in the universe’s baby days, it would disrupt the formation of the elements, shifting the chemistry of the cosmos.

    Galaxies far away from Earth aren’t any more massive than those nearby. No image credit.

    Dynamical Dark Matter resolves the stability problem through a balancing act. If most of dark matter is tied up in particles that live a long time—longer than the age of the universe—that leaves room for a small share of dark matter to be made up of particles that vanish quickly. “It’s a balancing between lifetimes and abundances,” Dienes says. “This balancing is the new underlying principle that replaces mere stability.”
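    A toy numerical sketch of that balancing act (an illustration of the general principle, with made-up numbers, not the actual Dynamical Dark Matter calculation): give each component an abundance that shrinks along with its lifetime, and the total is still nearly intact today even though the short-lived pieces decayed long ago.

    ```python
    import math

    AGE_OF_UNIVERSE_GYR = 13.8

    # Toy ensemble: lifetimes spanning many orders of magnitude (in billions of years),
    # with abundances assumed to grow with lifetime so that short-lived components
    # carry only a tiny share of the total.  Purely illustrative.
    lifetimes_gyr = [10.0**k for k in range(-3, 7)]      # 0.001 Gyr ... 1,000,000 Gyr
    raw_weights = [tau**0.5 for tau in lifetimes_gyr]     # assumed lifetime-abundance scaling
    abundances = [w / sum(raw_weights) for w in raw_weights]

    def surviving_fraction(t_gyr):
        # Fraction of the original dark-matter abundance still present at time t.
        return sum(omega * math.exp(-t_gyr / tau)
                   for omega, tau in zip(abundances, lifetimes_gyr))

    print(surviving_fraction(AGE_OF_UNIVERSE_GYR))   # ~0.99: almost all still here today
    print(surviving_fraction(1.0e4))                  # ~0.90: noticeably depleted
    print(surviving_fraction(1.0e7))                  # ~0: essentially gone
    ```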

    At first glance, this might sound contrived. Why should everything work out just so? But Dienes, Thomas, and their collaborators have discovered several scenarios that naturally produce just the right combination of particles. “It turns out there are a lot of interesting ways in which these things can come about,” Thomas says. Dynamical Dark Matter remains agnostic about what the dark matter particles are or how they came to be. “It’s not just a single model for dark matter, like a particle that’s a candidate,” he says. “It’s a whole new framework for thinking about what dark matter could be.”

    Dynamical Dark Matter is one of a growing number of “multi-component” dark matter models that welcome in multiple particles. “The key differentiator for Dynamical Dark Matter is that it’s not just a random collection of particles,” Kumar says. “There are just a couple of parameters that describe everything about it.”

    A Shrinking Slice of Pie

    Today, dark matter makes up about 85% of the “stuff” in the universe, out-massing regular matter by a factor of five to one. But if the Dynamical Dark Matter framework is right, one day, dark matter will fizzle out entirely. The process will start slowly. Then, as a larger share of dark matter hits its expiration date, the die-out will speed up until, ultimately, dark matter goes extinct.

    That won’t happen for a long, long time—long after dark energy, that other cosmic mystery force, stretches the universe to the brink of nothingness. (But that’s another story.) So one might ask: Who cares if a teeny weeny bit of dark matter goes “poof” if no one misses it?

    Scientists searching for dark matter particles do.

    That’s because, at dark matter detectors, Dynamical Dark Matter particles should leave a more complicated set of fingerprints than WIMPs. While WIMPs should make a relatively simple “clink” against the ordinary particles inside a detector, Dynamical Dark Matter (or any other brand of multiplex dark matter) would make a jumbled-up jangle. “If there is only one dark-matter particle, there is a well-known ‘shape’ for this recoil spectrum,” says Dienes, describing the detector read-out. “So seeing such a complex recoil spectrum would be a smoking gun of a multi-component dark-matter scenario such as Dynamical Dark Matter.”

    Particle collider experiments could also distinguish Dynamical Dark Matter from WIMPs. “Dynamical dark matter basically provides a very rich spectrum of very different types of collider signatures, some very different from conventional dark matter,” says Shufang Su, a physicist at the University of Arizona. With Dienes and Thomas, Su is trying to predict the traces Dynamical Dark Matter would leave in data from particle colliders like the LHC.

    Su was attracted to the dynamical dark matter model by the idea that dark matter could be a whole panoply of particles instead of just one, which would leave a distinctive signature on the visible particles produced in the LHC’s smash-ups. “These changes could be very dramatic and very different from what would occur if there is only a single dark matter species,” Su says. “If one dark matter particle leads to a single peak, Dynamical Dark Matter could lead to multiple peaks and perhaps even peculiar kinks.”

    Then there’s the decay factor. Depending on how long Dynamical Dark Matter particles live, some might fall apart almost as soon as they are created. Others might last long enough to travel some length of the detector, or escape entirely. “Even though it’s still dark matter, it could have a totally different signature,” Su says.

    While Su is thinking about how to detect Dynamical Dark Matter at colliders here on Earth, Kumar is thinking about whether it could explain something that has been puzzling astronomers: a mysterious excess of high-energy positrons in space. Dark matter researchers have suggested the positrons could be coming from WIMPs, which spit them out as they collide with and annihilate other WIMPs. The trouble, Kumar says, is that this process should only produce positrons up to a certain maximum energy before shutting down; so far, astronomers haven’t found such a cut-off. Dynamical dark matter just might be able to make positrons at the energy levels astronomers observe.

    Of course, Dynamical Dark Matter is just one of many alternatives to WIMPs. There are also SIMPS, RAMBOs, axions, sexaquarks—the list goes on. Until physicists make a clear-cut detection, theorists will have plenty of headroom to dream up new ideas.

    “The main message is that this is an interesting alternative. We are not claiming that it is necessarily better,” Dienes says. “The field is wide open, and data will eventually tell us.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 2:53 pm on July 28, 2017 Permalink | Reply
    Tags: How Dust Built the Universe, NOVA

    From NOVA: “How Dust Built the Universe” 

    PBS NOVA

    NOVA

    28 Jul 2017
    Samia Bouzid

    If you’ve ever driven into the sunset with a dirty windshield or taken a drive after a snowstorm, your windshield caked with salt, you can probably relate to one of astronomers’ ongoing frustrations: seeing through dust.

    Cosmic dust, which collects in galaxies in loose fogs or thick clouds, has often plagued astronomers. The tiny grains, each 10,000 times smaller than the eye of a needle, absorb light, scatter it, or change its wavelength so it’s invisible to the eye. In doing so, dust steals some of the few clues we have to understand the nature of the universe.

    But astronomers are discovering that dust plays important roles in both creating our universe and helping us understand it. It plants the seeds for stars, planets, and life as we know it. In the past two decades, astronomers studying dust have pulled back the curtain on important pieces of the universe that were hiding in plain sight. The more we learn about dust, the more we realize that it is part of the puzzle—not the rascal hiding the puzzle pieces.

    Fertilizing the Universe

    In the clouds of swirling gas that produce stars and planets, dust serves as a wingman for hydrogen. As a cloud condenses under its own gravity, star formation begins when hydrogen atoms meet and form molecules. But the compressing gas raises temperatures to the point where hydrogen begins whizzing around too fast to form bonds. It’s easier for the atoms to latch onto a piece of relatively big, slow dust. There, on the dust’s surface, two atoms can form a bond, forming the first building blocks of a star. But dust is more than a matchmaker. As nearby stars blaze hot and bright in the ultraviolet, clouds of dust can act as a shield, sheltering stars-to-be from the barrage of radiation, which can break their chemical bonds and thwart their path to stardom.

    The stars and dust clouds of the Milky Way. No image credit.

    When the obstacles are finally overcome, a new star blossoms out of a cloud. Some of the remaining dust and gas begins to spin around the star and flatten into a disk. Specks of dust collide, and as their gravity increases, they pull more dust and gas onto their surface, accreting material. Over time, they become pebbles, then boulders and, sometimes, a few million years later, planets.

    Xuening Bai, a research associate at the Harvard Center for Astrophysics, studies the processes that create planets and the stuff of life. Without dust, he says, the world would be a different place.

    Seeing the Universe in a New Light

    Indeed, most of what we see in space—not to mention all that we are, all that we eat, all that we breathe—owes its existence, in some way, to a grain of dust that formed the seed of a star or planet. But despite its fundamental importance, astronomers have only begun to understand what dust really is and how it affects the way we see the universe.

    Dust itself is a mishmash of mostly carbon-based ashes cast off from dying stars. “It’s a catch-all term for what we would refer to on Earth as soot,” says Caitlin Casey, an astronomer at the University of Texas at Austin. Until recently, this “soot” was poorly understood. For centuries, the practice of astronomy was limited to what people could observe at visible wavelengths—in other words, what people could actually see. Dust absorbs light that can be seen by the naked eye and re-emits it at longer, infrared wavelengths, which are invisible to us. As a result, for most of history, dust was seen only as dark blobs, riddling galaxies with holes.

    Then, in the 1960s, the first infrared telescopes pioneered the study of dust emissions. But these telescopes were not able to detect all radiation from dust. Very distant galaxies, such as the ones Casey studies some 10 billion light-years away, are receding so quickly that the light re-emitted by their dust gets stretched, shifting its wavelength into the submillimeter range and making the galaxies practically invisible, even in infrared telescopes.
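    The shift described here is just the cosmological redshift relation: the observed wavelength is the emitted wavelength stretched by a factor of one plus the redshift. A quick illustration with assumed, typical numbers:

    ```python
    # Dust re-emits absorbed starlight in the far infrared, peaking near
    # 100 micrometers (an assumed, typical rest-frame value).
    rest_wavelength_um = 100.0

    # For a very distant galaxy at redshift z ~ 4 (an assumed value),
    # the observed wavelength is stretched by a factor (1 + z).
    z = 4.0
    print((1 + z) * rest_wavelength_um)   # 500 micrometers = 0.5 mm: submillimeter light
    ```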

    NASA Infrared Telescope facility Mauna Kea, Hawaii, USA

    It wasn’t until 1998 that a group of astronomers in Mauna Kea, Hawaii, pointed a submillimeter telescope at a blank field of sky and made a discovery that rocked the field. A few years earlier, the Hubble Space Telescope had revealed that this blank sky was swarming with distant galaxies, but now, an entirely new population of galaxies lit up in submillimeter wavelengths. It was like turning on a light in a room where astronomers had fumbled in the dark for centuries. Galaxies glowed with dust, and the earliest, most distant galaxies showed the most dust of all.

    East Asia Observatory James Clerk Maxwell telescope, Mauna Kea, Hawaii, USA

    NASA/ESA Hubble Telescope

    Dust Bunnies in the Edges of the Universe

    Submillimeter wavelengths were the last piece of the electromagnetic spectrum to be observed by astronomers, so in some ways, the 1998 discovery seemed to complete a picture of the universe. Large swaths of the sky were now imaged at every wavelength. Dust, the quiet catalyst behind star formation, had been unmasked.

    But in another way, astronomers had merely stumbled upon more pieces to a puzzle they thought they had completed. Because if dust comes from stars, the universe should get dustier the more stars have lived and died. What business did the earliest galaxies have being so dusty? The universe has been around for nearly 14 billion years, but most of these dusty galaxies formed when the universe was a tender 2 or 3 billion years old. By then, only a few generations of stars had ever existed. So where did all that dust come from?

    Desika Narayanan, an astronomer at the University of Florida, probes for answers by developing numerical simulations to model the early universe. He says that one clue lies in the earliest galaxies, which were probably ungainly, messy galaxies a far cry from the elegant spiral that is our Milky Way. Galaxies like ours pop out a star about once or twice a year. But these old, dusty galaxies were firecrackers, bursting with up to 1,000 to 2,000 new stars a year. As the first stars died, dust billowed from them and filled the galaxy—perhaps enough to account for the levels of dust seen today.

    But telescope data can only confirm so much. In the short lifetime of submillimeter astronomy, Narayanan says, telescope sensitivity has drastically improved, outpacing even camera phone technology, which has raced from blurry images taken by flip-phones to the latest, sharpest shots on iPhones in roughly the same period.

    Still, even the greatest telescopes strain against the vastness of the universe. They have to be extremely large to detect and resolve light from the most distant galaxies. At 15 meters in diameter, the world’s largest submillimeter dish belongs to the James Clerk Maxwell Telescope at the summit of Mauna Kea, Hawaii. It was the first telescope to detect these galaxies in 1998. In Chile, the Atacama Large Millimeter/submillimeter Array, or ALMA, is made up of 66 dishes that can be arranged to span nearly 10 miles in an attempt to resolve the universe’s faintest, most distant galaxies.
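    The reason for such enormous dishes and baselines is diffraction: the finest angular detail a telescope can resolve scales as the observing wavelength divided by the aperture (or, for an array, by the longest dish-to-dish baseline). A rough sketch with assumed numbers:

    ```python
    import math

    ARCSEC_PER_RADIAN = 206_265

    def resolution_arcsec(wavelength_m, aperture_m):
        # Diffraction-limited angular resolution, theta ~ 1.22 * lambda / D.
        return 1.22 * wavelength_m / aperture_m * ARCSEC_PER_RADIAN

    wavelength_m = 0.87e-3   # 0.87 mm, a commonly used submillimeter band (assumed)

    print(resolution_arcsec(wavelength_m, 15.0))       # single 15 m dish: ~15 arcseconds
    print(resolution_arcsec(wavelength_m, 16_000.0))   # ~10-mile baseline: ~0.014 arcseconds
    ```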

    ESO/NRAO/NAOJ ALMA Array in Chile in the Atacama at Chajnantor plateau, at 5,000 metres

    It’s no coincidence that both of these observatories were built in extreme environments, both well above 10,000 feet and in dry places where the air is thin. Water vapor in the air soaks up most of the infrared radiation passing through it that’s so critical to observing dust. Meanwhile, the Earth itself radiates enthusiastically in the infrared, creating a noisy background for any signal that does get through. “Doing infrared astronomy from the ground is like trying to observe a star in the daylight out of a telescope made of light bulbs,” George Rieke, an infrared astronomer, once said.

    For now, this difficulty has left some mysteries intact. Although astronomers are better able to observe galaxies and create simulations, some galaxies remain too old and too dusty to fit into existing models. The size, peculiar structure, and dustiness of early galaxies are not fully explained.

    The next surge in science, expected to help explain some of the mysteries surrounding dusty, star-forming galaxies, will come from the James Webb Space Telescope, a massive instrument with a six-and-a-half-meter dish—a piece of highly polished metal as wide as a giraffe is tall—set for launch in 2018.

    NASA/ESA/CSA Webb Telescope annotated

    Free from the interfering atmosphere, this telescope will peer into the dusty edges of space in finer detail than any other telescope.

    Narayanan says that the astronomy community is excited about these new measurements and expects they will reveal new avenues for exploration. “Immediately, you start to open up as many questions as you think you’re going to answer,” he says.

    Twenty Years of Dusty Galaxies

    On July 31, astronomers will meet in Durham, U.K., to celebrate the 20th anniversary of the discovery of dusty star-forming galaxies and share what they have learned over the last two decades. But the elusiveness of hard data has left many questions about ancient dusty galaxies still open for debate. “I suspect we’re still going to walk away from this meeting saying, ‘Theorists still haven’t figured out where they come from,’” Narayanan says.

    But the mystery is part of what fascinates him. Twenty years ago, “We had no idea these things existed,” he says. “Then they just lit up in the infrared and have posed a huge challenge ever since then.”

    Despite all the research on dust, it is only a small fraction of the universe. Even in moderately dusty galaxies like our own, dust accounts for less than 1% of the mass. Yet its ability to transform the light passing through it completely changes the way we see the universe.

    For astronomers like Casey and Narayanan, this leaves plenty of mysteries to probe. “It’s really cool to me that something that is so negligible in terms of the mass budget of the universe can have such a tremendous impact on how we perceive it,” Casey says. “There is so much to discover and rediscover.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 6:25 am on March 27, 2017 Permalink | Reply
    Tags: "Cancer Biology Reproducibility Project Sees Mixed Results" Read it and Weep, Cancer Biology Reproducibility Project Sees Mixed Results, NOVA

    From NOVA: “Cancer Biology Reproducibility Project Sees Mixed Results” Read it and Weep 

    PBS NOVA

    NOVA

    18 Jan 2017 [Don’t know how I missed this, or maybe they never put it up in social media before?]
    Courtney Humphries

    How trustworthy are the findings from scientific studies?

    A growing chorus of researchers says there’s a “reproducibility crisis” in science, with too many discoveries published that may be flukes or exaggerations. Now, an ambitious project to test the reproducibility of top studies in cancer research by independent laboratories has published its first five studies in the open-access journal eLife.

    “These are the first public replication studies conducted in biomedical science, and that in itself is a huge achievement,” says Elizabeth Iorns, CEO of Science Exchange and one of the project’s leaders.

    Cancer biology is just one of many fields being scrutinized for the reproducibility of its studies.

    The Reproducibility Project: Cancer Biology is a collaboration between the non-profit Center for Open Science and the for-profit Science Exchange, which runs a network of laboratories for outsourcing biomedical research. It began in 2013 with the goal of repeating experiments from top-cited cancer papers; all of the work has been planned, executed, and published in the open, in consultation with the studies’ original authors. These papers are the first of many underway and slated to be published in the coming months.

    The outcome so far has been mixed, the project leaders say. While some results are similar, none of the studies looks exactly like the original, says Tim Errington, the project’s manager. “They’re all different in some way. They’re all different in different ways.” In some studies, the experimental system didn’t behave the same. In others, the result was slightly different, or it did not hold up under the statistical scrutiny project leaders used to analyze results. All in all, project leaders report, one study failed to reproduce the original finding, two supported key aspects of the original papers, and two were inconclusive because of technical issues.

    Errington says the goal is not to single out any individual study as replicable or not. “Our intent with this project is to perform these direct replications so that we can understand collectively how reproducible our research is,” he says.

    Indeed, there are no agreed-upon criteria for judging whether a replication is successful. At the project’s end, he says, the team will analyze the replication studies collectively by several different standards—including simply asking scientists what they think. “We’re not going to force an agreement—we’re trying to create a discussion,” he says.

    The project has been controversial; some cancer biologists say it’s designed to make them look bad at a time when federal research funding is under threat. Others have praised it for tackling a system that rewards shoddy research. If the first papers are any indication, those arguments won’t be easily settled. So far, the studies provide a window into the challenges of redoing complex laboratory studies. They also underscore that, if cancer biologists want to improve the reproducibility of their research, they will first have to agree on a definition of success.

    An Epidemic?

    A recent survey in Nature of more than 1,500 researchers found that 70% have tried and failed to reproduce others’ experiments, and that half have failed to reproduce their own. But you wouldn’t know it by reading published studies. Academic scientists are under pressure to publish new findings, not replicate old research. There’s little funding earmarked toward repeating studies, and journals favor publishing novel discoveries. Science relies on a gradual accumulation of studies that test hypotheses in new ways. If one lab makes a discovery using cell lines, for instance, the same lab or another lab might investigate the phenomenon in mice. In this way, one study extends and builds on what came before.

    For many researchers, that approach—called conceptual replication, which gives supporting evidence for a previous study’s conclusion using another model—is enough. But a growing number of scientists have been advocating for repeating influential studies. Such direct replications, Errington says, “will allow us to understand how reliable each piece of evidence we have is.” Replications could improve the efficiency of future research by winnowing out false hypotheses early and help scientists recreate others’ work in order to build on it.

    In the field of cancer research, some of the pressure to improve reproducibility has come from the pharmaceutical industry, where investing in a spurious hypothesis or therapy can threaten profits. In a 2012 commentary in Nature, cancer scientists Glenn Begley and Lee Ellis wrote that they had tried to reproduce 53 high-profile cancer studies while working at the pharmaceutical company Amgen, and succeeded with just six. A year earlier, scientists at Bayer HealthCare announced that they could replicate only 20–25% of 47 cancer studies. But confidentiality rules prevented both teams from sharing data from those attempts, making it difficult for the larger scientific community to assess their results.

    ‘No Easy Task’

    Enter the Reproducibility Project: Cancer Biology. It was launched with a $1.3 million grant from the Laura and John Arnold Foundation to redo key experiments from 50 landmark cancer papers from 2010 to 2012. The work is carried out in the laboratory network of Science Exchange, a Palo Alto-based startup, and the results tracked and made available through a data-sharing platform developed by the Center for Open Science. Statisticians help design the experiments to yield rigorous results. The protocols of each experiment have been peer-reviewed and published separately as a registered report beforehand, which advocates say prevents scientists from manipulating the experiment or changing their hypothesis midstream.

    The group has made painstaking efforts to redo experiments with the same methods and materials, reaching out to original laboratories for advice, data, and resources. The labs that originally wrote the studies have had to assemble information from years-old research. Studies have been delayed because of legal agreements for transferring materials from one lab to another. Faced with financial and time constraints, the team has scaled back its project; so far 29 studies have been registered, and Errington says the plan is to do as much as they can over the next year and issue a final paper.

    “This is no easy task, and what they’ve done is just wonderful,” says Begley, who is now chief scientific officer at Akriveia Therapeutics and was originally on the advisory board for the project but resigned because of time constraints. His overall impression of the studies is that they largely flunked replication, even though some data from individual experiments matched. He says that for a study to be valuable, the major conclusion should be reproduced, not just one or two components of the study. This would demonstrate that the findings are a good foundation for future work. “It’s adding evidence that there’s a challenge in the scientific community we have to address,” he says.

    Begley has argued that early-stage cancer research in academic labs should follow methods that clinical trials use, like randomizing subjects and blinding investigators as to which ones are getting a treatment or not, using large numbers of test subjects, and testing positive and negative controls. He says that when he read the original papers under consideration for replication, he assumed they would fail because they didn’t follow these methods, even though they are top papers in the field. “This is a systemic problem; it’s not one or two labs that are behaving badly,” he says.

    Details Matter

    For the researchers whose work is being scrutinized, the details of each study matter. Although the project leaders insist they are not designing the project to judge individual findings—that would require devoting more resources to each study—cancer researchers have expressed concern that the project might unfairly cast doubt on their discoveries. The responses of some of those scientists so far raise issues about how replication studies should be carried out and analyzed.

    One study, for instance, replicated a 2010 paper led by Erkki Ruoslahti, a cancer researcher at Sanford Burnham Prebys Medical Discovery Institute in San Diego, which identified a peptide that could stick to and penetrate tumors. Ruoslahti points to a list of subsequent studies by his lab and others that support the finding and suggest that the peptide could help deliver cancer drugs to tumors. But the replication study found that the peptide did not make tumors more permeable to drugs in mice. Ruoslahti says there could be a technical reason for the problem, but the replication team didn’t try to troubleshoot it. He’s now working to finish preclinical studies and secure funding to move the treatment into human trials through a company called Drugcendr. He worries that replication studies that fail without fully exploring why could derail efforts to develop treatments. “This has real implications to what will happen to patients,” he says.

    Atul Butte, a computational biologist at the University of California San Francisco, who led one of the original studies that was reproduced, praises the diligence of the team. “I think what they did is unbelievably disciplined,” he says. But like some other scientists, he’s puzzled by the way the team analyzed results, which can make a finding that subjectively seems correct appear as if it failed. His original study used a data-crunching model to sort through open-access genetic information and identify potential new uses for existing drugs. Their model predicted that the antiulcer medication cimetidine would have an effect against lung cancer, and his team validated the model by testing the drug against lung cancer tumors in mice. The replication found very similar effects. “It’s unbelievable how well it reproduces our study,” Butte says. But the replication team used a statistical technique to analyze the results that found them not statistically significant. Butte says it’s odd that the project went to such trouble to reproduce experiments exactly, only to alter the way the results are interpreted.

    Errington and Iorns acknowledge that such a statistical analysis is not common in biological research, but they say it’s part of the group’s effort to be rigorous. “The way we analyzed the result is correct statistically, and that may be different from what the standards are in the field, but they’re what people should aspire to,” Iorns says.
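    For readers wondering what such an analysis can look like, here is a hedged sketch of one common approach, a simple fixed-effect meta-analysis of the original and replication effect sizes. It is not necessarily the project's exact method, and the numbers are entirely made up.

    ```python
    import math

    def fixed_effect_meta(estimates, std_errors):
        """Combine effect estimates weighted by inverse variance."""
        weights = [1.0 / se**2 for se in std_errors]
        combined = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
        return combined, math.sqrt(1.0 / sum(weights))

    original = (0.80, 0.25)      # hypothetical effect size and standard error
    replication = (0.40, 0.30)   # same direction, smaller and noisier

    combined, se = fixed_effect_meta([original[0], replication[0]],
                                     [original[1], replication[1]])
    low, high = combined - 1.96 * se, combined + 1.96 * se
    print(round(combined, 2), (round(low, 2), round(high, 2)))
    # A combined 95% interval that excludes zero supports the original effect;
    # one that straddles zero is the kind of result reported as inconclusive.
    ```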

    In some cases, results were complicated by inconsistent experimental systems. One study tested a type of experimental drug called a BET inhibitor against multiple myeloma in mice. The replication found that the drug improved the survival of diseased mice compared to controls, consistent with the original study. But the disease developed differently in the replication study, and statistical analysis of the tumor growth did not yield a significant finding. Constantine Mitsiades, the study’s lead author and a cancer researcher at the Dana-Farber Cancer Institute, says that despite the statistical analysis, the replication study’s data “are highly supportive of and consistent with our original study and with subsequent studies that also confirmed it.”

    A Fundamental Debate

    These papers will undoubtedly provoke debate about what the standards of replication should be. Mitsiades and other scientists say that complex biological systems like tumors are inherently variable, so it’s not surprising if replication studies don’t exactly match their originals. Inflexible study protocols and rigid statistics may not be appropriate for evaluating such systems—or needed.

    Some scientists doubt the need to perform copycat studies at all. “I think science is self-correcting,” Ruoslahti says. “Yes, there’s some loss of time and money, but that’s just part of the process.” He says that, on the positive side, this project might encourage scientists to be more careful, but he also worries that it might discourage them from publishing new discoveries.

    Though the researchers who led these studies are, not surprisingly, focused on the correctness of the findings, Errington says that the variability of experimental models and protocols is important to document. Advocates for replication say that current published research reflects an edited version of what happened in the lab. That’s why the Reproducibility Project has made a point to publish all of its raw data and include experiments that seemed to go awry, when most researchers would troubleshoot them and try again.

    “The reason to repeat experiments is to get a handle on the intrinsic variability that happens from experiment to experiment,” Begley says. With a better understanding of biology’s true messiness, replication advocates say, scientists might have a clearer sense of whether or not to put credence in a single study. And if more scientists published the full data from every experiment, those original results may look less flashy to begin with, leading fewer labs to chase over-hyped hypotheses and therapies that never pan out. An ultimate goal of the project is to identify factors that make it easier to produce replicable research, like publishing detailed protocols and validating that materials used in a study, such as antibodies, are working properly.


    Access mp4 video here.

    Beyond this project, the scientific community is already taking steps to address reproducibility. Many scientific journals are making stricter requirements for studies and publishing registered reports of studies before they’re carried out. The National Institutes of Health has launched training and funding initiatives to promote robust and reproducible research. F1000Research, an open-access, online publisher, launched a Preclinical Reproducibility and Robustness Channel in 2016 for researchers to publish results from replication studies. Last week several scientists published a reproducibility manifesto in the journal Nature Human Behaviour that lays out a broad series of steps to improve the reliability of research findings, from the way studies are planned to the way scientists are trained and promoted.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 11:44 am on March 22, 2017 Permalink | Reply
    Tags: NOVA, Remnants of Earth’s Original Crust Found in Canada

    From NOVA: “Remnants of Earth’s Original Crust Found in Canada” 

    PBS NOVA

    NOVA

    16 Mar 2017
    Annette Choi

    Two geologists studying North America’s oldest rocks have uncovered ancient minerals that are remnants of the Earth’s original crust which first formed more than 4.2 billion years ago.

    These rocks appear to preserve the signature of an early Earth that presumably took shape within the first few hundred million years of Earth’s history.

    Jonathan O’Neil and Richard Carlson uncovered the samples on a trek to the northeastern part of Canada to study the Canadian Shield formation, a large area of exposed continental crust centered on Hudson Bay, which was already known to contain some of the oldest parts of North America. O’Neil calls it the core or nucleus of the North American continent. “That spot on the shore of Hudson Bay has this older flavor to it, this older chemical signature.”

    A view of 2.7 billion-year-old continental crust produced by the recycling of more than 4.2 billion-year-old rocks. Image credit: Alexandre Jean

    To O’Neil, an assistant professor of geology at the University of Ottawa, rocks are like books that allow geologists to study their compositions and to learn about the conditions in which they form. But as far as rock records go, the first billion years of the Earth’s history is almost completely unrepresented.

    “We’re missing basically all the crust that was present about 4.4 billion years ago. The question we’re after with our study is: what happened to it?” said Carlson, director of the Carnegie Institution for Science. “Part of the goal of this was simply to see how much crust was present before and see what that material was.”

    While most of the samples are made up of a 2.7 billion-year-old granite, O’Neil said these rocks were likely formed by the recycling of a much older crust. “The Earth is very, very good at recycling itself. It constantly recycles and remelts and reworks its own crust,” O’Neil said. He and Carlson arrived at their conclusion by determining the age of the samples using isotopic dating and then adding on the estimate of how long it would have taken for the recycled bits to have originally formed.
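    The logic of isotopic dating can be written down compactly: a parent isotope decays into a daughter at a known rate, so the measured daughter-to-parent ratio fixes the elapsed time. The sketch below is the generic age equation with illustrative numbers; the actual study relied on the short-lived samarium-146 to neodymium-142 system, whose analysis is considerably more involved.

    ```python
    import math

    def age_from_ratio_gyr(daughter_to_parent_ratio, half_life_gyr):
        # Generic radiometric age equation: t = (1 / lambda) * ln(1 + D/P),
        # where lambda = ln(2) / half-life.
        decay_constant = math.log(2) / half_life_gyr
        return math.log(1 + daughter_to_parent_ratio) / decay_constant

    # Illustrative only: a mineral whose daughter/parent ratio has grown to 0.7
    # under a 4.47-billion-year half-life (the uranium-238 value).
    print(age_from_ratio_gyr(0.7, 4.47))   # ~3.4 billion years
    ```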

    O’Neil and Carlson’s estimate relies on the theory that granite forms through the reprocessing of older rocks. “That is a possibility that they form that way, but that is not the only way you can form these rocks,” said Oliver Jagoutz, an associate professor of geology at the Massachusetts Institute of Technology. “Their interpretation really strongly depends on their assumption that that is the way these granites form.”

    The nature of Earth’s first crust has largely remained a mystery because there simply aren’t very many rocks that have survived the processes that can erase their signature from the geologic record. Crust is often forced back into the Earth’s interior, which then melts it down, the geologic equivalent of sending silver jewelry back into the forge. That makes it challenging for geologists to reconstruct what the original crust looked like.

    These new findings give geologists an insight into the evolution of the oldest elements of Earth’s outer layer and how it has come to form North America. “We’re recycling extremely, extremely old crust to form our stable continent,” O’Neil said.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 1:50 pm on January 29, 2017 Permalink | Reply
    Tags: , , , , , Dark Energy, Lawrence Krauss says “The longer you wait the less you will see and the more of the universe will disappear before your very eyes”, Milkomeda, NOVA, Physics in 1 Trillion Years   

    From NOVA: “Physics in 1 Trillion Years” from 17 Feb 2016 

    PBS NOVA

    NOVA

    17 Feb 2016
    Sarah Scoles

    When winter weather closed Harvard University one day in 2011, astronomer Avi Loeb used the snow day not to sled or start a new novel but to contemplate the future of the universe. In that future, cosmologists like him, who study the universe’s origins and evolution, might not be able to make a living.

    Nine years before, he had written a paper outlining the problem: Dark energy makes the universe expand faster and faster every femtosecond. As spacetime—the fabric of the cosmos—stretches, it carries galaxies along with it. The stretching sends each galaxy farther and farther from the others, eventually driving them so far apart that light will never be able to bridge the gap between them.

    1
    Far future cosmologists won’t have the same evidence as we do to infer the Big Bang. No image credit.

    In that future, our own oasis, the Milky Way, will be completely alone. When future astronomers look up, they will see only our galaxy’s own stars. They won’t find any evidence—even with the powerful telescopes of a trillion years hence—that other galaxies even exist beyond the horizon of their visible universe. Without a view of those other galaxies, they won’t be able to tell that everything was born in a Big Bang, or that the black vacuum of space is expanding at all, let alone that that expansion is speeding up. Ironically, dark energy itself will destroy evidence of dark energy.
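    To put rough numbers on that horizon: in a universe whose expansion is driven by a constant dark-energy density, the scale factor grows exponentially, and light can never bridge distances much beyond c/H. This is a back-of-the-envelope sketch assuming the present-day expansion rate, H₀ ≈ 68 km/s/Mpc:

    \[ a(t) \propto e^{Ht}, \qquad d_{\text{horizon}} \simeq \frac{c}{H_0} \approx \frac{3\times10^{5}\ \mathrm{km/s}}{68\ \mathrm{km\,s^{-1}\,Mpc^{-1}}} \approx 4.4\ \mathrm{Gpc} \approx 14\ \text{billion light-years} \]

    Any galaxy carried beyond roughly that distance by the stretching of spacetime is permanently out of reach of light signals.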

    Thinking of this emptied universe, Loeb stared out the window at the snowfall, which covered the ground in a blank blanket. “I was pretty depressed that there would be nothing to look at, and that we won’t be able to tell how the universe started by observing it.”

    He set out to find a solution.

    A Galactic Merger

    Currently, cosmic expansion clues us in to the Big Bang. Press fast-forward on the growing universe we see today, and it continues growing, with objects flying ever-farther apart. It doesn’t take much creativity to then press rewind: The universe shrinks, and its ingredients squish together. If you rewind until the very beginning of the tape, everything piles into one infinitesimally small, infinitely dense spot. Press play and it bursts forth: a Big Bang.

    Astronomers only discovered that expansion because they could see other galaxies, which all seem to be running away from us. In 1999, using ultra-distant supernova explosions, they figured out that faraway galaxies were retreating faster than they “should” be, and that even more distant galaxies were distancing themselves faster than that. Something—which they later termed dark energy—spurs expansion on, like a car whose pedal never reaches the metal no matter how hard you push.
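    The underlying relation is Hubble’s law: a galaxy at distance d recedes at a speed of roughly

    \[ v \simeq H_0\, d \]

    The supernova teams found that very distant explosions were fainter, and therefore farther away, than steady or decelerating expansion predicts, which is what pointed to an accelerating stretch and, ultimately, to dark energy.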

    The real problems won’t show up for a while, until about a trillion years after the Big Bang. By that time, the Milky Way will have long ago crashed into the Andromeda Galaxy. The stars will have spent 3 billion years swirling into stable orbits, before becoming a seamless chimera: a single galaxy called “Milkomeda,” a term Loeb coined in 2008 when he simulated and then forecasted the collision’s specifics.

    1
    After their first close pass, the Andromeda Galaxy as well as the Milky Way would be tidally stretched out, as shown in this artist’s conception. NASA / ESA / STScI

    Even as that galactic collision takes place, dark energy will be dragging everything else away from us. Little by little over billions of years, everything will pop over the visible horizon, along with any physical evidence of its existence, until only our neighbor stars in Milkomeda remain. “The universe becomes lonely,” says Glenn Starkman, a physicist at Case Western Reserve University. He and astronomer Lawrence Krauss of Arizona State University in Tempe wrote an article titled Life, The Universe, and Nothing: Life and Death in an Ever-Expanding Universe, which also discusses this “lonely astronomer” problem. “The longer you wait, the less you will see and the more of the universe will disappear before your very eyes,” Krauss says.

    “Earth’s night sky will change,” Loeb says. Stars that humans (or whoever is around) will get to watch in a few billion years will shift radically. Today, the Milky Way appears as a diagonal swash of fuzzy light, the combined photons of billions of stars too small for our eyes to resolve. But when people in the distant future look up at Milkomeda, they will see those stars distributed evenly across the sky.

    If astronomers still live in Milkomeda at that point, they could be thrown into an astronomical dark age. To them, the universe will look like the one we thought we understood before telescopes. Back then, we thought we were the center of the cosmos, and we believed the Milky Way to be the entirety of the universe.

    That universe seemed static and without beginning. Alone in Milkomeda, future astronomers may—validly, based on actual evidence—see it that way, too. “Scientists who evolve on such a world will look out and find that the three main pillars of the Big Bang will all be gone,” Krauss says.

    Three Missing Pillars

    “It’s a gloomy forecast,” Loeb says. “We won’t be able to look at anything. It’s not just galaxies—it’s any relic left from Big Bang.” Right now, telescopes can see a glow of light left over from the Big Bang. This relic radiation, called the cosmic microwave background [CMB], comes from every direction in the sky. The Planck Telescope recently made a high-definition map of it, which is essentially a blueprint of a baby universe. It shows us the seeds that grew into groups of galaxies, tells us what the universe is made of, and tips us off about the very beginning of everything.

    CMB per ESA/Planck

    ESA/Planck

    But as time passes, the photons that make up the cosmic microwave background cool off and lose energy, increasing their wavelengths. Eventually, those waves—which today are on the order of millimeters—will be bigger than the visible universe. There’s no telescope, not even one a trillion-year-old society could build, that can detect that. “They will no longer be able to learn what we know about the early universe,” Starkman says.
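    The numbers make the point stark. Photon wavelengths stretch in proportion to the scale factor, so, as a rough sketch assuming the expansion rate stays near its present value (1/H₀ ≈ 14 billion years):

    \[ \frac{\lambda_{\text{future}}}{\lambda_{\text{today}}} \sim e^{H_0 \Delta t} \approx e^{10^{12}\,\mathrm{yr}/1.4\times10^{10}\,\mathrm{yr}} \approx e^{70} \sim 10^{30} \]

    Today’s millimeter-wavelength CMB photons would be stretched to wavelengths of order 10^{27} meters, larger than the entire observable universe is now.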

    The composition of the universe, which now tells scientists that the Big Bang occurred, won’t help in the far future, either. After the Big Bang, the universe began to cool off. Soon, free-range quarks settled down into protons and neutrons, which could then pair with electrons to form hydrogen atoms. Those atoms then smacked into each other and stuck together, fusing into larger helium atoms. In just 30 minutes, most of the helium that exists today had formed. A comparatively small amount has been created inside stars in the billions of years since.

    “Right now, we know the Big Bang happened because 25% of universe is helium,” Krauss says. “There’s no way stars could have made that.” But by the time the universe is 10 trillion years old, stars will have fused most of the hydrogen into helium. That is, in fact, their job. But in doing it so well, they will knock down the last solid evidence that the universe had a beginning at all. “All relics of Big Bang will be gone from us,” Loeb says. “There will be really nothing.”
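    That 25% figure is itself a simple consequence of Big Bang nucleosynthesis: by the time fusion began, neutrons and protons were frozen in at a ratio of roughly 1 to 7, and essentially all the neutrons ended up inside helium-4, giving a primordial helium mass fraction of

    \[ Y_p \;\approx\; \frac{2\,(n/p)}{1 + n/p} \;\approx\; \frac{2/7}{8/7} \;=\; 0.25 \]

    No plausible amount of stellar burning in a 14-billion-year-old universe can reproduce that abundance, which is why it counts as a pillar of the Big Bang.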

    It seems that we live at a somewhat strange time in the universe—one in which our sky is filled with evidence of the cosmic narrative. Does that make us lucky? And does it make future observers unlucky? Astronomers generally shy away from suggestions that we are anything other than dead-average. They call it the Mediocrity Principle.

    But maybe each eon is a special snowflake in its own way, meaning none of them is really special, just like soccer kids who all get trophies. The far-future folks may have easy access to knowledge we, in our dark-energy-dominated and bright-skied time, can’t grasp. “I suspect that each era is interesting for different reasons,” Krauss says. “There may be cosmological observables that we could see in the far future that we can’t see now.”

    We can’t know for sure, nor can we know for sure that this future forecast is correct. Just like perfect weather prediction, it can only happen if we know everything about every subatomic particle. The year 1 trillion CE may not look exactly as we envision it. “That broad picture is what will happen if what we know continues to be the whole truth and nothing but the truth,” Starkman says. “There’s a lot of chutzpah in thinking that’s really so, that we’ve captured everything there is to know about physics.”

    Possible Answers

    As the winter storm swirled outside, Loeb considered the dark, empty (potential) future he’d predicted. He hated that so much knowledge—the science he loved—would disappear, like all the galaxies. He had recently given a public talk on the topic, sharing his sadness, and an audience member’s question had sent him reeling: Would this future convert cosmology into a kind of religion? “You would have books talking about the story of how the universe started, but you wouldn’t be able to verify that,” he says. “I was worried that cosmology would be turned into folklore.”

    “There will really be nothing,” he thought again. But then a flash swept through his brain. Nothing—except for one thing. “I realized that not everything is lost,” says Loeb. The key is a type of object called a hypervelocity star.

    “The center of our galaxy keeps ejecting stars at high enough speeds that they can exit the galaxy,” Loeb says. The intense and dynamic gravity near the supermassive black hole at the galaxy’s center ejects them into space, where they will glide away forever like radiating rocket ships. The same thing should happen a trillion years from now.

    “These stars that leave the galaxy will be carried away by the same cosmic acceleration,” Loeb says. Future astronomers can monitor them as they depart. They will see stars leave, become alone in extragalactic space, and begin rushing faster and faster toward nothingness. It would look like magic. But if those future people dig into that strangeness, they will catch a glimpse of the true nature of the universe. “Just like Edwin Hubble observed galaxies—historically trying to infer expansion—they could observe those stars outside the galaxy and figure out the universe is expanding,” Loeb says. Starkman says they could accomplish this synthetically, too. “They could send out probes far enough to notice that the probes accelerated away,” he says.

    And then, perhaps, they will imagine pressing fast-forward on this scenario. And, if their imaginations are like ours, they will then think about rewinding it—all the way back to the beginning.

    Krauss doesn’t necessarily buy this. Occam’s Razor states that the least complicated answer is usually the correct one, and that principle will lead these future beings astray. It sounds crazy that the very fabric of the universe is growing larger faster all the time, carrying some runaway star with it. It’s not the explanation that comes to the tip of the tongue. But perhaps more importantly, with just Milkomeda in the night sky, astronomers will have no reason to come up with a theory of anything beyond those stars. Just as pre-telescope scientists thought only of what they could see with their eyes, not of an invisible universe outside of that, so too could future astronomers’ imaginations be constrained.

    Loeb stands by his solution, although he admits it could remain in his 21st century paper and never occur to someone in the 2.1 trillionth century. “It’s difficult to speculate what will happen in a year or 10 years on Earth, let alone a trillion years,” he says. “We don’t even know if humans will still be around…I’m just talking about what one could learn.”

    Which is why Loeb is so intent on forecasting the future cosmos, even though he won’t be around to see it. “Most of my colleagues do not care about the future because they regard themselves as down-to-Earth,” he says. “They only think about things that can be tested or looked at right now. We can’t really observe the future, so they prefer not to think about the future. They often run computer simulations of the universe to the present time and then stop. All I’m saying is ‘Why stop?’ ”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 10:17 am on January 9, 2017 Permalink | Reply
    Tags: 16S rRNA sequencing, Archaea, , , NOVA, Polymerase chain reaction, Prokaryotes, The Never-Ending Quest to Rewrite the Tree of Life   

    From NOVA: “The Never-Ending Quest to Rewrite the Tree of Life” 

    PBS NOVA

    NOVA

    04 Jan 2017
    Carrie Arnold

    The bottom of the ocean is one of the most mysterious places on the planet, but microbiologist Karen Lloyd of the University of Tennessee, Knoxville, wanted to go deeper than that. In 2010, as a postdoc at Aarhus University in Denmark, she wanted to see what microbes were living more than 400 feet beneath the sea floor.

    Like nearly all microbiologists doing this type of census, she relied on 16S rRNA sequencing to determine who was there. Developed by microbiologist Carl Woese in the late 1970s, the technique looks for variation in the 16S rRNA gene, one that’s common to all organisms (it’s key to turning DNA into protein, one of life’s most fundamental processes). When Lloyd compared what she had seen under the microscope to what her sequencing data said, however, she knew her DNA results were missing a huge portion of the life hidden underneath the ocean.

    “I had two problems with just 16S sequencing. One, I knew it would miss organisms, and two, it’s not good for understanding small differences between microbes,” Lloyd says.

    1
    Scientists use heat maps like these to visualize the diversity of bacteria in various environments. Credits below.

    Technology had made gene sequencing much quicker and easier compared to when Woese first started his work back in the 1970s, but the principle remained the same. The 16S rRNA gene codes for a portion of the machinery that prokaryotes use to make protein, a central activity in the cell. All microbes have a copy of this gene, but different species have slightly different copies. If two species are closely related, their 16S rRNA sequences will be nearly identical; more distantly related organisms will have a greater number of differences. Not only did this give researchers a way to quantify evolutionary relationships between species, Woese’s work also revealed an entirely new branch on the tree of life—the archaea, a group of microscopic organisms distinct from bacteria.
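    The logic of measuring relatedness from 16S rRNA boils down to counting differences between aligned copies of the gene. Here is a minimal sketch in Python; the sequences are short, made-up fragments rather than real 16S genes, and real analyses use proper alignments and evolutionary distance models rather than raw percent identity:

        def percent_identity(a, b):
            """Share of matching positions between two aligned sequences
            (gap-free toy comparison; real pipelines align first)."""
            if len(a) != len(b):
                raise ValueError("sequences must be aligned to the same length")
            matches = sum(1 for x, y in zip(a, b) if x == y)
            return 100.0 * matches / len(a)

        # Hypothetical, pre-aligned 16S fragments from three organisms.
        seq_a = "AGAGTTTGATCCTGGCTCAGGACGAACGCTGGCGGCGTGCTTAACACATGCAAGTCGAACG"
        seq_b = "AGAGTTTGATCCTGGCTCAGGATGAACGCTGGCGGCGTGCCTAACACATGCAAGTCGAGCG"
        seq_c = "AGAGTTTGATCATGGCTCAGATTGAACGCTGGCGGCAGGCCTAACACATGCAAGTCGAGCG"

        print(percent_identity(seq_a, seq_b))  # close relatives: high identity
        print(percent_identity(seq_a, seq_c))  # more distant: lower identity

    The closer two organisms sit on the tree, the higher the score; the deeper the split, the lower it falls.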

    Woese’s success in using 16S rRNA to rewrite the tree of life no doubt encouraged its widespread use. But as Lloyd and other scientists began to realize, some microbes carry a version that is significantly different from that seen in other bacteria or archaea. Since biologists depended on this similarity to identify an organism, they began to realize that they were leaving out potentially significant portions of life from their investigations.

    These concerns culminated approximately ten years ago, during a period when sequencing technologies were rapidly accelerating. During this time, researchers figured out how to prepare DNA for sequencing without needing to know anything about the organism being studied. At the same time, scientists invented a strategy to isolate single cells. At her lab at the Joint Genome Institute outside San Francisco, microbiologist Tanja Woyke put these two strategies together to sequence the genomes of individual microbial cells. Meanwhile, Jill Banfield, across the bay at the University of California, Berkeley, took a different approach called metagenomics, which sequences genes from multiple species at once and uses computer algorithms to reconstruct each organism’s genome. Over the past several years, their work has helped illuminate the massive amount of microbial dark matter that comprises life on Earth.

    “These two strategies really complement each other. They have opened up our ability to see the true diversity of microbial life,” says Roger Lasken, a microbial geneticist at the J. Craig Venter Institute.

    Microbial Dark Matter

    When Woese sequenced the 16S genes of the microbes that would come to be known as archaea, they were completely different from most of the other bacterial sequences he had accumulated. Like bacteria, these organisms lacked a true nucleus, but their metabolisms were completely different. They also tended to favor extreme environments, such as those at high temperatures (hot springs and hydrothermal vents), high salt concentrations, or high acidity. Sensing their ancient origins, Woese named these microbes the archaea and gave them their own branch on the tree of life.

    Woese did all of his original sequencing by hand, a laborious process that took years. Later, DNA sequencing machines greatly simplified the work, although it still required amplifying the small amount of DNA present using a technique known as polymerase chain reaction, or PCR, before sequencing. The utility of 16S sequencing soon made the technique one of the mainstays of the microbiology lab, along with the Petri dish and the microscope.

    The method uses a set of what are known as universal primers—short strands of RNA or DNA that help jump-start the duplication of DNA—to make lots of copies of the 16S gene so it can be sequenced. The primers bind to a set of DNA sequences flanking the 16S gene that were thought to be common to all organisms, acting like a set of bookends to mark the region to be copied by PCR. As DNA sequencing technology improved, researchers began amplifying and sequencing 16S genes in environmental samples as a way of identifying the microbes present without the need to grow them in the lab. Since scientists have only been able to culture about one in 100 microbial species, this method opened up broad swaths of biodiversity that would otherwise have remained invisible.
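    In practice, “universal” primers are written with degenerate IUPAC bases (R, Y, M, and so on) so one primer can tolerate small variations in the conserved flanking sequence. A minimal sketch of matching such a primer against a genome follows; the primer resembles a commonly cited 16S forward primer, but treat both it and the genome fragment as illustrative placeholders:

        IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
                 "R": "AG", "Y": "CT", "M": "AC", "K": "GT",
                 "S": "CG", "W": "AT", "N": "ACGT"}

        def matches(primer, site):
            """True if a (possibly degenerate) primer can anneal to a genomic site."""
            return len(primer) == len(site) and all(s in IUPAC[p] for p, s in zip(primer, site))

        def find_site(primer, genome):
            """Return the first position where the primer matches, or None."""
            for i in range(len(genome) - len(primer) + 1):
                if matches(primer, genome[i:i + len(primer)]):
                    return i
            return None

        forward = "AGAGTTTGATCMTGGCTCAG"   # M = A or C (IUPAC degenerate base)
        genome = "TTGACCAGAGTTTGATCATGGCTCAGGTACCGGATCGGAAGCTT"
        print(find_site(forward, genome))   # prints the match position; a divergent flank returns None

    If a microbe’s flanking sequence has drifted away from what the primer expects, nothing amplifies and the organism never shows up in the census—exactly the blind spot described above.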

    “We didn’t know that these deep branches existed. Trying to study life from just 16S rRNA sequences is like trying to understand all animals by visiting a zoo,” says Lionel Guy, a microbiologist from Uppsala University in Sweden.


    Access mp4 video here .
    Discover how to interpret and create evolutionary trees, then explore the tree of life in NOVA’s Evolution Lab.

    It didn’t take long, however, for scientists to realize the universal primers weren’t nearly as universal as researchers had hoped. The use of the primers rested on the assumption that all organisms, even unknown ones, would have similar DNA sequences surrounding the 16S rRNA gene. But that meant that any true oddballs probably wouldn’t have 16S rRNA sequences that matched the universal primers—they would remain invisible. These uncultured, unsequenced species were nicknamed “microbial dark matter” by Stanford University bioengineer and physicist Stephen Quake in a 2007 PNAS paper.

    The name, he says, is analogous to dark matter in physics, which is invisible but thought to make up the bulk of the universe. “It took DNA technology to realize the depth of the problem. I mean, holy crap, there’s a lot more out there than we can discover,” Quake says.

    Quake’s snappy portmanteau translated into the Microbial Dark Matter project—an ongoing quest in microbiology, led by Woyke, to understand the branches on the tree of life that remain shrouded in mystery by isolating DNA from single bacterial and archaeal cells. These microbial misfits intrigued Lloyd as well, and she believed the subsurface had many more of them than anyone thought. Her task was to find them.

    “We had no idea what was really there, but we knew it was something,” Lloyd says.

    To solve her Rumsfeldian dilemma of identifying both her known and her unknown unknowns, Lloyd needed a DNA sequencing method that would let her sequence the genomes of the microbes in her sample without any preconceived notions of what they looked like. As it turns out, a scientist in New Haven, Connecticut, was doing just that.

    Search for Primers

    In the 1990s, Roger Lasken had recognized the problems with traditional 16S rRNA and other forms of sequencing. Not only did you need to know something about the DNA sequence ahead of time in order to make enough genetic material to be sequenced, you also needed a fairly large sample. The result was a significant limitation in the types of material that could be sequenced. Lasken wanted to be able to sequence the genome of a single cell without needing to know anything about it.

    Then employed at the biotech firm Molecular Staging, Lasken began work on what he called multiple displacement amplification (MDA). He built on a recently discovered DNA polymerase (the enzyme that adds nucleotides, one by one, to a growing piece of DNA) called φ29 DNA polymerase. Compared to the more commonly used Taq polymerase, the φ29 polymerase created much longer strands of DNA and could operate at much cooler temperatures. Scientists had also developed random primers, small pieces of randomly generated DNA. Unlike the universal primers, which were designed to match specific DNA sequences 20–30 nucleotides in length, random primers were only six nucleotides long. This meant they were small enough to match pieces of DNA on any genome. With enough random primers to act as starting points for the MDA process, scientists could confidently amplify and sequence all the genetic material in a sample. The bonus inherent in the random primers was that scientists didn’t need to know anything about the sample they were sequencing in order to begin work.
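    The power of six-base random primers is easy to quantify: a given hexamer is expected to occur by chance about once every 4⁶ = 4,096 base pairs, so a pool of random hexamers lands priming sites densely across any genome with no prior sequence knowledge. A small illustrative sketch (the genome size is just a stand-in for a typical bacterium):

        import random

        random.seed(0)
        BASES = "ACGT"

        def random_hexamer():
            """MDA's random primers are simply short random oligonucleotides."""
            return "".join(random.choice(BASES) for _ in range(6))

        genome_length = 4_600_000                       # illustrative, roughly E. coli-sized
        expected_sites = genome_length / 4 ** 6         # ~one site per 4,096 bp per hexamer
        print(random_hexamer(), round(expected_sites))  # a hexamer and its expected number of sites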

    “For the first time, you didn’t need to culture an organism or amplify its DNA to sequence it,” he says.

    The method had only been tested on relatively small pieces of DNA. Lasken’s major breakthrough was making the system work for larger chromosomes, including those in humans, a result published in 2002 in PNAS. Lasken was halfway to his goal—his next step was figuring out how to do this in a single bacterium, which would enable researchers to sequence any microbial cell they found. In 2005, Lasken and colleagues managed to isolate a single E. coli cell and sequence its 16S rRNA gene using MDA. It was a good proof of principle that the system worked, but to understand the range and depth of microbial biodiversity, researchers like Tanja Woyke, the microbiologist at the Joint Genome Institute, needed to look at the entire genome of a single cell. In theory, the system should work neatly: grab a single cell, amplify its DNA, and then sequence it. But putting all of the steps together and working out the kinks in the system would require years of work.

    Woyke had spent her postdoc at the Joint Genome Institute sequencing DNA from samples not grown in the lab, but drawn directly from the environment, like a scoop of soil. At the time, she was using metagenomics, which amplified and sequenced DNA directly from environmental samples, yielding millions of As, Ts, Gs, and Cs from even a thimble of dirt. Woyke’s problem was determining which genes belonged to which microbe, a key step in assembling a complete genome. Nor was she able to study different strains of the same microbe that were present in a sample because their genomes were just too similar to tell apart using the available sequencing technology. What’s more, the sequences from common species often completely drowned out the data from more rare ones.

    “I kept thinking to myself, wouldn’t it be nice to get the entire genome from just a single cell,” Woyke says. Single-cell genomics would enable her to match a genome and a microbe with near 100% certainty, and it would also allow her to identify species with only a few individuals in any sample. Woyke saw a chance to make her mark with these rare but environmentally important species.

    Soon after that, she read Lasken’s paper and decided to try his technique on microbes she had isolated from the grass sharpshooter Draeculacephala minerva, an important plant pest. One of her biggest challenges was contamination. Pieces of DNA are everywhere—on our hands, on tables and lab benches, and in the water. The short, random primers upon which single-cell sequencing was built could help amplify these fragments of DNA just as easily as they could the microbial genomes Woyke was studying. “If someone in the lab had a cat, it could pick up cat DNA,” Woyke says of the technique.

    In 2010, after more than a year of work, Woyke had her first genome, that of Sulcia bacteria, which had a small genome and could only live inside the grass sharpshooter. Each cell also carried two copies of the genome, which helped make Woyke’s work easier. It was a test case that proved the method, but to shine a spotlight on the world’s hidden microbial biodiversity, Woyke would need to figure out how to sequence the genomes from multiple individual microbes.

    Work with Jonathan Eisen, a microbiologist at UC Davis, on the Genomic Encyclopedia of Bacteria and Archaea Project, known as GEBA, enabled her lab to set up a pipeline to perform single cell sequencing on multiple organisms at once. GEBA, which seeks to sequence thousands of bacterial and archaeal genomes, provided a perfect entry to her Microbial Dark Matter sequencing project. More than half of all known bacterial phyla—the taxonomic rank just below kingdom—were only represented by a single 16S rRNA sequence.

    “We knew that there were far more microbes and a far greater diversity of life than just those organisms being studied in the lab,” says Matthew Kane, a program director at the National Science Foundation and a former microbiologist. Studying the select few organisms that scientists could grow in pure culture was “useful for picking apart how cells work, but not for understanding life on Earth.”

    GEBA was a start, but even the best encyclopedia is no match for even the smallest public library. Woyke’s Microbial Dark Matter project would lay the foundation for the first of those libraries. She didn’t want to fill it with just any sequences, however. Common bacteria like E. coli, Salmonella, and Clostridium were the Dr. Seuss books and Shakespeare plays of the microbial world—every library had copies, though they represented only a tiny slice of all published works. Woyke was after the bacterial and archaeal equivalents of rare, single-edition books. So she began searching in extreme environments including boiling hot springs of caustic acid, volcanic vents at the bottom of the ocean, and deep inside abandoned mines.

    Using the single-celled sequencing techniques that she had perfected at the Joint Genome Institute, Woyke and her colleagues ended up with exactly 201 genomes from these candidate phyla, representing 29 branches on the tree of life that scientists knew nothing about. “For many phyla, this was the first genomic data anyone had seen,” she says.

    The results, published in Nature in 2013, identified some unusual species for which even Woyke wasn’t prepared. Up until that study, all organisms used the same sequence of three DNA nucleotides to signal the stop of a protein, one of the most fundamental components of any organism’s genome. Several of the species of archaea identified by Woyke and her colleagues, however, used a completely different stop signal. The discovery was not unlike traveling to a different country and having the familiar red stop sign replaced by a purple square, she says. Their work also identified other rare and bizarre features of the organisms’ metabolisms that make them unique among Earth’s biodiversity. Other microbial dark matter sequencing projects, both under Woyke’s Microbial Dark Matter project umbrella and other independent ventures, identified microbes from unusual phyla living in our mouths.

    Some of the extremeophile archaea that Woyke and her colleagues identified were so unlike other forms of life that they grouped them into their own superset of phyla, known as DPANN (Diapherotrites, Parvarchaeota, Aenigmarchaeota, Nanohaloarchaeota, and Nanoarchaeota). The only thing that scientists knew about these organisms were the genomes that Woyke had sequenced, isolated from individual organisms. These single-cell sequencing projects are key not just for filling in the foliage on the tree of life, but also for demonstrating just how much remains unknown, and Woyke and her team have been at the forefront of these discoveries, Kane says.

    Sequencing microbes cell by cell, however, isn’t the only method for uncovering Earth’s hidden biodiversity. Just a few miles from Woyke’s lab, microbiologist Jill Banfield at UC Berkeley is taking a different approach that has also produced promising results.

    Studying the Uncultured

    Typically, to study microbes, scientists have grown them in pure culture from a single individual. Pure cultures are useful for studying organisms in the laboratory, but most microbes live in complex communities of many individuals from different species. By the early 2000s, genetic sequencing technologies had advanced to the point where researchers could study this complex array of microbial genomes without necessarily needing to culture each individual organism. Known as metagenomics, the field began with scientists focusing on which genes were found in the wild, which would hint at how each species or strain of microbe could survive in different environments.

    Just as Woyke was doubling down on single-cell sequencing, Banfield began using metagenomics to obtain a more nuanced and detailed picture of microbial ecology. The problems she faced, though very different from Woyke’s, were no less vexing. Like Woyke, Banfield focused on extreme environments: acrid hydrothermal vents at the bottom of the ocean that belched a vile mixture of sulfuric acid and smoke; an aquifer flowing through toxic mine tailings in Rifle, Colorado; a salt flat in Chile’s perpetually parched Atacama Desert; and water found in the Iron Mountain Mine in Northern California that is some of the most acidic found anywhere on Earth. Also like Woyke, Banfield knew that identifying the full range of microbes living in these hellish environments would mean moving away from using the standard set of 16S rRNA primers. The main issue Banfield and colleagues faced was figuring out how to assemble the mixture of genetic material they isolated from their samples into discrete genomes.

    2
    A web of connectivity calculated by Banfield and her collaborators shows how different proteins illustrate relationships between different microbes.
    Credit below.

    The solution wasn’t a new laboratory technique, but a different way of processing the data. Researchers obtain their metagenomic information by drawing a sample from a particular environment, isolating the DNA, and sequencing it. The process of sequencing breaks each genome down into smaller chunks of DNA that computers then reassemble. Reassembling a single genome isn’t unlike assembling a jigsaw puzzle, says Laura Hug, a microbiologist at the University of Waterloo in Ontario, Canada, and a former postdoc in Banfield’s lab.

    When faced with just one puzzle, people generally work out a strategy, like assembling all the corners and edges, grouping the remaining pieces into different colors, and slowly putting it all together. It’s a challenging task with a single genome, but it’s even more difficult in metagenomics. “In metagenomics, you can have hundreds or even thousands of puzzles, many of them might be all blue, and you have no idea what the final picture looks like. The computers have to figure out which blue pieces go together and try to extract a full, accurate puzzle from this jumble,” Hug says. Not surprisingly, the early days of metagenomics were filled with incomplete and misassembled genomes.

    Banfield’s breakthrough helped tame the task. She and her team developed a better method for binning, the formal name for the computer process that sorts through the pile of DNA jigsaw pieces and arranges them into a final product. As her lab made improvements, they were able to survey an increasing range of environments looking for rare and bizarre microbes. Progress was rapid. In the 1980s, most of the bacteria and archaea that scientists knew about fit into 12 major phyla. By 2014, scientists had increased that number to more than 50. But in a single 2015 Nature paper, Banfield and her colleagues added an additional 35 phyla of bacteria to the tree of life.
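    Modern binning leans on several signals at once—read coverage across samples, marker genes, paired-read links—but the core compositional idea can be sketched in a few lines. The toy example below is emphatically not Banfield’s actual algorithm; it simply clusters contigs by their tetranucleotide frequency profiles, since fragments of the same genome tend to share a compositional “accent”:

        from itertools import product
        from collections import Counter
        from sklearn.cluster import KMeans

        KMERS = ["".join(p) for p in product("ACGT", repeat=4)]

        def tetra_profile(contig):
            """Normalized 4-mer frequency vector for one assembled fragment."""
            counts = Counter(contig[i:i + 4] for i in range(len(contig) - 3))
            total = max(sum(counts[k] for k in KMERS), 1)
            return [counts[k] / total for k in KMERS]

        def bin_contigs(contigs, n_bins):
            """Toy composition-based binning: cluster contigs into putative genomes."""
            profiles = [tetra_profile(c) for c in contigs]
            return KMeans(n_clusters=n_bins, n_init=10, random_state=0).fit_predict(profiles)

    Contigs landing in the same cluster are treated as pieces of the same jigsaw puzzle; real binners then check each bin for completeness and contamination before calling it a genome.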

    4
    The latest tree of life was produced when Banfield and her colleagues added another 35 major groups, known as phyla. Credit below.

    Because researchers knew essentially nothing about these bacteria, they dubbed them the “candidate phyla radiation”—or CPR—the bacterial equivalent of Woyke’s DPANN. Like the archaea, these bacteria were grouped together because of their similarities to each other and their stark differences to other bacteria. Banfield and colleagues estimated that the CPR organisms may encompass more than 15% of all bacterial species.

    “This wasn’t like discovering a new species of mammal,” Hug says. “It was like discovering that mammals existed at all, and that they’re all around us and we didn’t know it.”

    Nine months later, in April 2016, Hug, Banfield, and their colleagues used past studies to construct a new tree of life. Their result reaffirmed Woese’s original late-1970s tree, showing humans and, indeed, most plants and animals as mere twigs. This new tree, however, was much fuller, with far more branches and a richer array of foliage. Thanks in no small part to the efforts of Banfield and Woyke, our understanding of life is, perhaps, no longer a newborn sapling but a rapidly maturing young tree on its way to becoming fully rooted.

    Photo credits: Miller et al. 2013/PLOS, Podell et al. 2013/PLOS, Hug et al. 2016/UC Berkeley

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 10:21 am on December 29, 2016 Permalink | Reply
    Tags: , , First CRISPR-Edited Cells Tested in Lung Cancer Patient, , NOVA   

    From NOVA: “First CRISPR-Edited Cells Tested in Lung Cancer Patient” 

    PBS NOVA

    NOVA

    17 Nov 2016 [Where has this been hiding?]
    Tim De Chant

    1
    Geneticists edited the patient’s T-cells to more vigorously attack cancer cells. No image credit.

    In a first, oncologists and geneticists have edited a patient’s own immune cells using CRISPR and injected them as a treatment for an aggressive form of lung cancer.

    The trial, conducted at West China Hospital in Chengdu, is the first of what is expected to be many that will test the safety of using the gene editing technique to alter a person’s cells. U.S. trials are expected to begin in early 2017.

    Both studies will employ what are essentially advanced forms of immunotherapy, where doctors modify cells from a patient’s immune system to attack cancer cells. Because the cells involved are not a part of the reproductive system, their edited genomes cannot be passed on to any children the patients may have after the treatment.

    The patient involved in the Chinese study had been treated unsuccessfully for metastatic non-small-cell lung cancer, an aggressive form of the disease that’s often quickly fatal. The person received the first injection of CRISPR-edited cells on October 28.

    David Cyranoski, reporting for Nature News, has more details on the procedure:

    “The researchers removed immune cells from the recipient’s blood and then disabled a gene in them using CRISPR–Cas9, which combines a DNA-cutting enzyme with a molecular guide that can be programmed to tell the enzyme precisely where to cut. The disabled gene codes for the protein PD-1, which normally puts the brakes on a cell’s immune response: cancers take advantage of that function to proliferate.”

    The edited cells were then injected into the patient. Doctors hope the new cells will be able to exploit their PD-1 mutation to seek out and kill the cancer cells. It’s still too early to tell if the effort was safe or successful.
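    The “molecular guide” is a roughly 20-nucleotide RNA chosen so that Cas9 cuts next to an “NGG” PAM motif in the target gene. A minimal sketch of that selection step is below; the DNA fragment is a made-up stand-in, not the real PD-1 (PDCD1) sequence, and real guide design also weighs off-target matches elsewhere in the genome:

        import re

        def candidate_guides(seq, guide_len=20):
            """List (guide, PAM) pairs: 20-nt protospacers immediately 5' of an NGG motif."""
            guides = []
            for m in re.finditer(r"(?=([ACGT]GG))", seq):
                pam_start = m.start(1)
                if pam_start >= guide_len:
                    guides.append((seq[pam_start - guide_len:pam_start],
                                   seq[pam_start:pam_start + 3]))
            return guides

        # Hypothetical target fragment standing in for part of the gene encoding PD-1.
        fragment = "ATGCAGATCCCACAGGCGCCCTGGCCAGTCGTCTGGGCGGTGCTACAACTGGGCTGGCGG"
        for guide, pam in candidate_guides(fragment):
            print(guide, pam)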

    If the patient shows no ill effects, the plan is to administer a second injection. Eventually, ten patients enrolled in the study will receive up to four injections.

    While scientists are optimistic about CRISPR’s broader potential in medicine, they’re less certain about whether this particular trial will be more effective than existing immunotherapies, which use modified proteins called antibodies that are easier to make in the lab than CRISPR-edited immune cells.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 2:44 pm on December 14, 2016 Permalink | Reply
    Tags: , , , Big Bang or Big Bounce, , NOVA   

    From NOVA: “Did the Universe Start with a Bounce Instead of a Bang?” 

    PBS NOVA

    NOVA

    14 Dec 2016
    Marcus Woo

    1
    A Big Bounce could have happened, some scientists say. iStock

    For a few physicists, the Big Bang wasn’t the beginning of the universe.

    Rather, they say, the universe existed before that point, stretching forever into the past as well as the future. While the universe is expanding today, it was contracting in the time before the Big Bang. In this picture, the Big Bang isn’t so much a bang but a bounce, a moment when a shrinking universe reversed course and began to grow.

    And according to their theory, the universe could bounce again. Today’s expansion could be followed by collapse in the far future, followed by another bounce. Some physicists have suggested this bouncing could be infinite, reviving a cyclic cosmology first proposed in the 1930s.

    But how that infinitely hot and dense point came to be remains an unanswered question. Bounce theories promise to explain the origin of the cosmos. Whether they involve a single bounce or endless bounces, a handful of cosmologists have spent the last couple of decades tinkering with these ideas. But to others, bounce theories are simply speculative and controversial, and to some, they’re discredited and wrong.

    Much of the debate between Big Bang and Big Bounce proponents revolves around the viability of inflation, the mainstream view of how the universe has come to be the way it is today.

    Inflationary Universe. NASA/WMAP

    And although any cosmologist would agree that inflation is, at the very least, incomplete, the vast majority considers it the best model yet. Still, bounce proponents see fundamental flaws in this model.

    “Inflation’s not doing too well,” says Neil Turok, director of the Perimeter Institute for Theoretical Physics. “It’s had its day. It was useful when it was invented in the early 1980s.” But now, he says, we need a new theory, and that theory could be a bouncing universe.

    A Cosmic Growth Spurt

    The standard story of inflation goes like this: shortly after the Big Bang, the universe ballooned rapidly—much faster than its normal expansion. This sudden growth was necessary to create the smooth, flat, and uniform universe that scientists see today.

    Cosmologists first developed inflation in the early 1980s, before balloon-borne experiments and satellites returned increasingly precise data on the state of the early universe. These observations measured the leftover radiation from the Big Bang, a ubiquitous glow called the cosmic microwave background [CMB].

    CMB per ESA/Planck

    The radiation is patchily distributed, with some spots hotter and cooler than others, an auspicious result since the exact nature of this patchiness was precisely what inflation predicted.

    Inflation also predicted the mass density of the universe, also measured from the cosmic microwave background. “We’ve measured the mass density to better than a half percent accuracy, and it agrees perfectly with what inflation predicts—which is just gorgeous,” says Alan Guth, a physicist at MIT and the first to propose inflation, in 1980.

    “It’s really remarkable how much this simple idea of inflation has done,” says Robert Brandenberger, a physicist at McGill University. Although he’s exploring alternatives to inflation, the theory is the most self-consistent one out there, he says. “It’s successful because it predicted many things—and I emphasize predicted. Early in my career, we didn’t have the data. I saw inflation pass many more tests.”

    Still, while these successes have been more than encouraging for inflation, the evidence has yet to convince everyone. One prediction that might quell some dissent would be the detection of primordial gravitational waves, ripples in the fabric of space and time that originated from fluctuations of the gravity field in the early universe. It almost happened: In March 2014, the BICEP2 experiment at the South Pole claimed to have seen these gravitational waves. But that heralded discovery vanished when astronomers realized the signal could have been entirely due to dust in the galaxy.

    Gravitational Wave Background from BICEP 2, quickly discredited.

    Inflation is not without its theoretical issues either. Some critics say that inflation requires initial conditions that are too specialized and contrived to be realistic. To get inflation started, the early universe had to be just right.

    Another point of contention is that inflation could imply the existence of an infinite number of universes. In the early 1980s, physicists discovered that inflation goes on forever, stopping only in some regions of space. But in between these pockets, inflation continues, expanding faster than the speed of light. These bubbles are thus closed off from each other, effectively becoming isolated universes with their own laws of physics. According to this theory, we live in one of these bubbles.

    While inflation proponents embrace this so-called multiverse, detractors say it’s absurd. If anything can happen in these bubble universes, then scientific predictions become meaningless. “If you have a theory that can’t be disproved, you should be dissatisfied with that,” Turok says. “That’s the state with inflation and the multiverse, so I would say this is not a scientific theory.”

    Even ardent supporters of inflation would agree the theory is incomplete. It doesn’t say anything about the moment of the Big Bang itself, for example, when the known laws of physics break down at what’s called a singularity.

    What inflation still lacks is a deeper foundation. Physicists have tried connecting inflation with string theory—the best candidate for a so-called theory of everything. But it’s still a work in progress. “With inflation, we basically add something by hand and we say it works, but we don’t have a more theoretical understanding of where it could come from,” says Steffen Gielen of the Perimeter Institute, who works with Turok on bouncing models.

    Bouncing Ideas

    The suggestion that the Big Bang wasn’t the absolute beginning originates from the first half of the 20th century, when physicists proposed a cyclic universe. But at the time, no one understood the details for how the universe could enter and emerge from each bounce.

    Today’s physicists still have their work cut out for them, but now they have all the tools of modern particle physics and string theory. In 1992, Maurizio Gasperini and Gabriele Veneziano first used these modern ideas to revisit a pre-Big-Bang universe. Ten years later, Turok and Paul Steinhardt, a physicist at Princeton University and one of inflation’s pioneers turned critic, expanded on that work. They have since become two of the most outspoken detractors of inflation and proponents of a bouncing universe.

    A bouncing universe, they argue, could produce the cosmos we see today—but without inflation. The universe doesn’t need a period of super-expansion to reach the smooth, flat state we see today; it can do so while contracting. And because every corner of a shrinking universe would have been in contact with one another, the whole cosmos could settle into a uniform temperature—again, just as we see it today.

    Because so much of the early universe is unknown, theories of cosmology can vary widely. Inflation, for instance, isn’t one particular theory but a class of models, each a bit different in detail. Likewise, physicists have theorized many ways for how a universe can bounce.

    In one case, dubbed a matter bounce, the universe only bounces once. The collapse into the bounce is like a reverse-order Big Bang. Another version, called an ekpyrotic model, can be cyclical, with contraction followed by expansion followed by contraction, and so on. The anamorphic universe might be similarly cyclical.

    Pretty much all models require some sort of new physics. The differences between these models depend on the details, whether it’s new theories or exotic types of matter that halt the inertia of collapse and guide the universe through the bounce. Figuring out what happens at the bounce poses a big challenge, because that point is where the laws of physics fail, just as they do at the start of an inflationary universe.

    At the bounce, the universe collapses into a singularity, in which Einstein’s theory of gravity, general relativity, breaks down. Relativity isn’t currently compatible with quantum mechanics, which is needed at the small scales of the singularity. To unite the two, physicists have been searching for a theory of quantum gravity, which doesn’t yet exist.

    Over the past year, though, physicists have claimed modest progress on how to handle the singularity. Turok and Gielen have outlined how a simplified, toy model of a universe could undergo a quantum bounce. A bouncing universe containing only radiation—not unlike the radiation-dominated cosmos at the Big Bang—could cross the singularity in a way like quantum tunneling: According to quantum mechanics, a particle can spontaneously appear on the other side of a barrier that would otherwise be impenetrable in non-quantum physics. A collapsing universe can act like a particle and tunnel through the barrier-like singularity, appearing on the other side as the expanding universe we know today—and evading the singularity’s problems.
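    For readers who want the analogy in symbols: in ordinary quantum mechanics, a particle with energy E facing a barrier V(x) that it classically cannot cross still leaks through with a probability given roughly by the textbook WKB estimate

    \[ P \;\sim\; \exp\!\left(-\frac{2}{\hbar}\int_{x_1}^{x_2}\!\sqrt{2m\,[V(x)-E]}\;dx\right) \]

    This is only the familiar single-particle formula, not the Turok–Gielen calculation itself, in which the role of the tunneling “particle” is played by the size of a radiation-filled universe.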

    Meanwhile, Steinhardt and Anna Ijjas of Princeton University have proposed a way the universe could bounce without invoking quantum mechanics. They’ve shown that some exotic, negative energy could prevent a universe from collapsing into a singularity in the first place. By avoiding a singularity, the universe never gets small enough for quantum mechanics to come into play, so you don’t need quantum gravity. The universe then proceeds to expand.

    But while these two proposals might be a small advance, neither marks a radical leap from what’s been done before, Brandenberger says. We’re still far from solving the problem of the singularity. “If we solve the singularity problem by evoking exotic matter, the question is just twisted,” he says. In other words, instead of explaining the singularity, you now have to explain the exotic matter.

    Without new physics, a bounce doesn’t seem likely, according to Guth. “One has to adopt rather special features that one would have to assume in the underlying laws of physics to make the bounce possible,” he says. “To me, that doesn’t seem like a good bet.”

    But it’s still too early to judge, Turok says. The theories aren’t mature enough to be testable yet. Eventually, though, models could start making predictions. Future, more detailed measurements of the cosmic microwave background might support a particular model of inflation or a bouncing universe. Perhaps the most promising evidence would come in the form of primordial gravitational waves, which are about the best indicators of what happened in the moments after the Big Bang (or bounce).

    Depending on what these waves look like, researchers can start ruling out models of both bouncing universes and inflation. While the BICEP2 findings in 2014 were a false alarm, researchers hope other instruments will succeed, including its successor, BICEP3. The Atacama B-mode Search is now operating in the Atacama Desert in Chile, and researchers are planning future experiments with names such as the Primordial Inflation Polarization Explorer, Qubic, and Polarbear.

    The Right Path

    In the end, however, it may not simply come down to an either-or choice between bouncing models or inflation, even though proponents of bouncing models sell their idea as an alternative. “What they’re doing is much more closely allied to inflation than they would have you think,” says Andrew Liddle, a cosmologist at the University of Edinburgh. “I don’t think it’s that radical of a departure.” Many of the mathematical tools used in bouncing models are similar to those used for inflation, he says. And when you apply observations like the cosmic microwave background, both bouncing models and inflation give similar results.

    You can even have both a bounce and inflation. “Now, sociologically, many people who study bounce cosmologies do so because they’re interested in finding an alternative to inflation,” says Sean Carroll, a physicist at the California Institute of Technology. “That’s fine, but if you just said, without any preexisting agendas, does the universe have a bounce, and if so, could it also involve inflation? I think you’d say sure.”

    Still, the debates between bounce proponents and the most outspoken inflation supporters can get contentious, each somewhat dismissive of the other side. The conflict is a reminder that science—and perhaps theoretical physics, in particular—is ultimately a human endeavor, filled with egos and subjectivity. Legacies and Nobel Prizes could be at stake.

    “In the absence of data, you’re welcome to your opinion—opinion is all you have,” Carroll says. “All of these ideas have significant challenges and question marks next to them.” While a problem may be a deal-breaker for one person, it’s only a minor stumbling block to another. When blazing a new trail, the right path is often subjective.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     