Recent Updates

  • richardmitnick 7:12 pm on October 21, 2014
    Tags: POLARBEAR Experiment

    From Daily Galaxy: “Astrophysicists Using Big Bang’s Primordial Light to Probe Largest Structures in the Universe” 

    The Daily Galaxy

    October 21, 2014
    The Daily Galaxy via University of California – Berkeley

    An international team of physicists has measured a subtle characteristic in the polarization of the cosmic microwave background radiation that will allow them to map the large-scale structure of the universe, determine the masses of neutrinos and perhaps uncover some of the mysteries of dark matter and dark energy. The POLARBEAR team is measuring the polarization of light that dates from an era 380,000 years after the Big Bang, when the early universe was a high-energy laboratory, a lot hotter and denser than now, with an energy density a trillion times higher than what is produced at CERN’s Large Hadron Collider.

    The cosmic microwave background as mapped by the Planck satellite

    The Large Hadron Collider near Geneva is trying to simulate that early era by slamming together beams of protons to create a hot, dense soup from which researchers hope new particles will emerge, such as the newly discovered Higgs boson. But observing the early universe, as the POLARBEAR group does, may also yield evidence that new physics and new particles exist at ultra-high energies.

    The team uses the light of these primordial photons to probe large-scale gravitational structures in the universe, such as clusters or walls of galaxies that have grown from what initially were tiny fluctuations in the density of the universe. These structures bend the trajectories of microwave background photons through gravitational lensing, distorting their polarization and converting E-modes into B-modes. POLARBEAR images the lensing-generated B-modes to shed light on the intervening universe.

    In a paper published this week in the Astrophysical Journal, the POLARBEAR consortium, led by University of California, Berkeley, physicist Adrian Lee, describes the first successful isolation of a “B-mode” produced by gravitational lensing in the polarization of the cosmic microwave background radiation.

    Polarization is the orientation of the microwave’s electric field, which can be twisted into a “B-mode” pattern as the light passes through the gravitational fields of massive objects, such as clusters of galaxies.


    “We made the first demonstration that you can isolate a pure gravitational lensing B-mode on the sky,” said Lee, POLARBEAR principal investigator, UC Berkeley professor of physics and faculty scientist at Lawrence Berkeley National Laboratory (LBNL). “Also, we have shown you can measure the basic signal that will enable very sensitive searches for neutrino mass and the evolution of dark energy.”

    The POLARBEAR team, which uses microwave detectors mounted on the Huan Tran Telescope in Chile’s Atacama Desert, consists of more than 70 researchers from around the world. They submitted their new paper to the journal one week before the surprising March 17 announcement by a rival group, the BICEP2 (Background Imaging of Cosmic Extragalactic Polarization) experiment, that they had found the holy grail of microwave background research. That team reported finding the signature of cosmic inflation – a rapid ballooning of the universe when it was a fraction of a fraction of a second old – in the polarization pattern of the microwave background radiation.

    Huan Tran Telescope (Kavli IPMU)

    BICEP2 with South Pole Telescope

    Subsequent observations, such as those announced last month by the Planck satellite, have since thrown cold water on the BICEP2 results, suggesting that they did not detect what they claimed to detect.

    While POLARBEAR may eventually confirm or refute the BICEP2 results, so far it has focused on interpreting the polarization pattern of the microwave background to map the distribution of matter along the line of sight, reaching back to the era when the CMB was released, 380,000 years after the Big Bang.

    POLARBEAR’s approach, which is different from that used by BICEP2, may allow the group to determine when dark energy, the mysterious force accelerating the expansion of the universe, began to dominate and overwhelm gravity, which throughout most of cosmic history slowed the expansion.

    BICEP2 and POLARBEAR both were designed to measure the pattern of B-mode polarization, that is, the angle of polarization at each point in an area of sky. BICEP2, based at the South Pole, can only measure variation over large angular scales, which is where theorists predicted they would find the signature of gravitational waves created during the universe’s infancy. Gravitational waves could only have been created by a brief and very rapid expansion, or inflation, of the universe 10^-34 seconds after the Big Bang.

    In contrast, POLARBEAR was designed to measure the polarization at both large and small angular scales. Since first taking data in 2012, the team has focused on small angular scales, and their new paper shows that they can measure B-mode polarization and use it to reconstruct the total mass lying along the line of sight of each photon.

    The polarization of the microwave background records minute density differences from that early era. After the Big Bang, 13.8 billion years ago, the universe was so hot and dense that light bounced endlessly from one particle to another, scattering from and ionizing any atoms that formed. Only when the universe was 380,000 years old was it sufficiently cool to allow an electron and a proton to form a stable hydrogen atom without being immediately broken apart. Suddenly, all the light particles – called photons – were set free.

    “The photons go from bouncing around like balls in a pinball machine to flying straight and basically allowing us to take a picture of the universe from only 380,000 years after the Big Bang,” Lee said. “The universe was a lot simpler then: mainly hydrogen plasma and dark matter.”

    These photons, which today have cooled to roughly 3 kelvins above absolute zero, still retain information about their last interaction with matter. Specifically, the flow of matter due to density fluctuations where the photon last scattered gave that photon a certain polarization (called E-mode polarization).

    “Think of it like this: the photons are bouncing off the electrons, and there is basically a last kiss, they touch the last electron and then they go for 14 billion years until they get to telescopes on the ground,” Lee said. “That last kiss is polarizing.”

    While E-mode polarization contains some information, B-mode polarization contains more, because photons carry this only if matter around the last point of scattering was unevenly or asymmetrically distributed. Specifically, the gravitational waves created during inflation squeezed space and imparted a B-mode polarization that BICEP2 may have detected. POLARBEAR, on the other hand, has detected B-modes that are produced by distortion of the E-modes by gravitational lensing.
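
    For orientation, the conversion can be written down explicitly. To leading order in the flat-sky approximation, the lensed B-mode at multipole $\boldsymbol{\ell}$ is (a standard expression from the CMB-lensing literature, quoted here for context rather than taken from the POLARBEAR paper):

    $$ B^{\rm lens}(\boldsymbol{\ell}) \;\approx\; \int \frac{d^2\boldsymbol{\ell}'}{(2\pi)^2}\, \big[\boldsymbol{\ell}'\cdot(\boldsymbol{\ell}-\boldsymbol{\ell}')\big]\, \phi(\boldsymbol{\ell}-\boldsymbol{\ell}')\, E(\boldsymbol{\ell}')\, \sin 2\big(\varphi_{\boldsymbol{\ell}'}-\varphi_{\boldsymbol{\ell}}\big) $$

    Here $E$ is the primordial E-mode, $\phi$ is the lensing potential (an integral of the intervening mass along the line of sight) and $\varphi_{\boldsymbol{\ell}}$ denotes the azimuthal angle of $\boldsymbol{\ell}$. Because $\phi$ enters directly, measuring the lensing-generated B-modes amounts to measuring the projected mass distribution, which is what makes them sensitive to neutrino masses and to the growth of structure under dark energy.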

    While many scientists suspected that the gravitational-wave B-mode polarization might be too faint to detect easily, the BICEP2 team, led by astronomers at the Harvard-Smithsonian Center for Astrophysics, reported a large signal that fit predictions of gravitational waves. Current doubt about this result centers on whether the team adequately accounted for emission from dust in our galaxy, which would alter the polarization pattern.

    In addition, BICEP2’s measurements of the inflationary signal at smaller angular scales are contaminated by the gravitational-lensing B-mode signal.

    “POLARBEAR’s strong suit is that it also has high angular resolution where we can image this lensing and subtract it out of the inflationary signal to clean it up,” Lee said.

    Two other papers describing related results from POLARBEAR were accepted in the spring by Physical Review Letters.

    One of those papers is about correlating E-mode polarization with B-mode polarization, which “is the most sensitive channel to cosmology; that’s how you can measure neutrino masses, how you might look for early behavior of dark energy,” Lee said.

    The image accompanying the original article shows the scale of a large quasar group (LQG), the largest structure yet seen in the universe and one that runs counter to our current understanding of the universe’s scale. Even traveling at the speed of light, it would take about 4 billion years to cross. It is significant not only because of its size but also because it challenges the Cosmological Principle, widely accepted since [Albert] Einstein: the assumption that the universe, when viewed at a sufficiently large scale, looks the same no matter where you are observing it from.

    See the full article here.

  • richardmitnick 6:24 pm on October 21, 2014
    Tags: Brain Studies

    From Princeton: “Immune proteins moonlight to regulate brain-cell connections” 

    Princeton University

    October 21, 2014
    Morgan Kelly, Office of Communications

    When it comes to the brain, “more is better” seems like an obvious assumption. But in the case of synapses, which are the connections between brain cells, too many or too few can both disrupt brain function.

    Researchers from Princeton University and the University of California-San Diego (UCSD) recently found that an immune-system protein called MHCI, or major histocompatibility complex class I, moonlights in the nervous system to help regulate the number of synapses, which transmit chemical and electrical signals between neurons. The researchers report in the Journal of Neuroscience that in the brain MHCI could play an unexpected role in conditions such as Alzheimer’s disease, type II diabetes and autism.

    MHCI proteins are known for their role in the immune system where they present protein fragments from pathogens and cancerous cells to T cells, which are white blood cells with a central role in the body’s response to infection. This presentation allows T cells to recognize and kill infected and cancerous cells.

    In the brain, however, the researchers found that MHCI immune molecules are one of the only known factors that limit the density of synapses, ensuring that synapses form in the appropriate numbers necessary to support healthy brain function. MHCI limits synapse density by inhibiting insulin receptors, which regulate the body’s sugar metabolism and, in the brain, promote synapse formation.

    Tangled web

    Researchers from Princeton University and the University of California-San Diego recently found that an immune-system protein called MHCI, or major histocompatibility complex class I, moonlights in the nervous system to help regulate the number of synapses, which transmit chemical and electrical signals between neurons. Pictured is a mouse hippocampal neuron studded with thousands of synaptic connections (yellow). The number and location of synapses — not too many or too few — is critical to healthy brain function. The researchers found that MHCI proteins, known for their role in the immune system, also are one of the only known factors that ensure synapse density is not too high. The protein does so by inhibiting insulin receptors, which promote synapse formation. (Image courtesy of Lisa Boulanger, Department of Molecular Biology)

    Senior author Lisa Boulanger, an assistant professor in the Department of Molecular Biology and the Princeton Neuroscience Institute (PNI), said that MHCI’s role in ensuring appropriate insulin signaling and synapse density raises the possibility that changes in the protein’s activity could contribute to conditions such as Alzheimer’s disease, type II diabetes and autism. These conditions have all been associated with a complex combination of disrupted insulin-signaling pathways, changes in synapse density, and inflammation, which activates immune-system molecules such as MHCI.

    Patients with type II diabetes develop “insulin resistance” in which insulin receptors become incapable of responding to insulin, the reason for which is unknown, Boulanger said. Similarly, patients with Alzheimer’s disease develop insulin resistance in the brain that is so pronounced some have dubbed the disease “type III diabetes,” Boulanger said.

    “Our results suggest that changes in MHCI immune proteins could contribute to disorders of insulin resistance,” Boulanger said. “For example, chronic inflammation is associated with type II diabetes, but the reason for this link has remained a mystery. Our results suggest that inflammation-induced changes in MHCI could have consequences for insulin signaling in neurons and maybe elsewhere.”

    This image of a neuron from a mouse hippocampus shows insulin receptors (green) and the protein calbindin (red). In this area of the brain, calbindin is present in dentate granule cells, which form synapses on MHCI-expressing cells. The extensive overlap (yellow) suggests that this neuron, which expresses insulin receptors, is a dentate granule cell neuron. (Image courtesy of Lisa Boulanger, Department of Molecular Biology)

    MHCI levels also are “dramatically altered” in the brains of people with Alzheimer’s disease, Boulanger said. Normal memory depends on appropriate levels of MHCI. Boulanger was senior author on a 2013 paper in the journal Learning and Memory that found that mice bred to produce less functional MHCI proteins exhibited striking changes in the function of the hippocampus, a part of the brain where some memories are formed, and had severe memory impairments.

    “MHCI levels are altered in the Alzheimer’s brain, and altering MHCI levels in mice disrupts memory, reduces synapse number and causes neuronal insulin resistance, all of which are core features of Alzheimer’s disease,” Boulanger said.

    Links between MHCI and autism also are emerging, Boulanger said. People with autism have more synapses than usual in specific brain regions. In addition, several autism-associated genes regulate synapse number, often via a signaling protein known as mTOR (mammalian target of rapamycin). In their study, Boulanger and her co-authors found that mice with reduced levels of MHCI had increased insulin-receptor signaling via the mTOR pathway, and, consequently, more synapses. When elevated mTOR signaling was reduced in MHCI-deficient mice, normal synapse density was restored.

    Thus, Boulanger said, MHCI and autism-associated genes appear to converge on the mTOR-synapse regulation pathway. This is intriguing given that inflammation during pregnancy, which alters MHCI levels in the fetal brain, may slightly increase the risk of autism in genetically predisposed individuals, she said.

    “Up-regulating MHCI is essential for the maternal immune response, but changing MHCI activity in the fetal brain when synaptic connections are being formed could potentially affect synapse density,” Boulanger said.

    Ben Barres, a professor of neurobiology, developmental biology and neurology at the Stanford University School of Medicine, said that while it was known that insulin-receptor signaling increases synapse density and that MHCI signaling decreases it, the researchers are the first to show that MHCI actually acts on insulin receptors to control synapse density.

    “The idea that there could be a direct interaction between these two signaling systems comes as a great surprise,” said Barres, who was not involved in the research. “This discovery not only will lead to new insight into how brain circuitry develops but to new insight into declining brain function that occurs with aging.”

    This section of adult mouse cerebellum shows insulin receptors (green) and calbindin (red), which in this case is present in the cerebellar neurons known as Purkinje cells. Insulin receptors are highly expressed in fibers that form synapses onto Purkinje cells, which express MHCI. Thus both in the cerebellum and hippocampus (previous image), insulin receptors are highly expressed in cells that form synapses onto MHCI-expressing neurons, which suggests MHCI and insulin receptors could interact, either directly or indirectly, in the living brain. (Image courtesy of Lisa Boulanger, Department of Molecular Biology)

    Particularly, the research suggests a possible functional connection between type II diabetes and Alzheimer’s disease, Barres said.

    “Type II diabetes has recently emerged as a risk factor for Alzheimer’s disease but it has not been clear what the connection is to the synapse loss experienced with Alzheimer’s disease,” he said. “Given that type II diabetes is accompanied by decreased insulin responsiveness, it may be that the MHCI signaling becomes able to overcome normal insulin signaling and contribute to synapse decline in this disease.”

    Research during the past 15 years has shown that MHCI lives a prolific double-life in the brain, Boulanger said. The brain is “immune privileged,” meaning the immune system doesn’t respond as rapidly or effectively to perceived threats in the brain. Dozens of studies have shown, however, that MHCI is not only present throughout the healthy brain, but is essential for normal brain development and function, Boulanger said. A 2013 paper from her lab published in the journal Molecular and Cellular Neuroscience showed that MHCI is even present in the fetal-mouse brain, at a stage when the immune system is not yet mature.

    “Many people thought that immune molecules like MHCI must be missing from the brain,” Boulanger said. “It turns out that MHCI immune proteins do operate in the brain — they just do something completely different. The dual roles of these proteins in the immune system and nervous system may allow them to mediate both harmful and beneficial interactions between the two systems.”

    The paper, MHC Class I Limits Hippocampal Synapse Density by Inhibiting Neuronal Insulin Receptor Signaling, was published Aug. 27 in the Journal of Neuroscience. Boulanger worked with Carolyn Tyler, a postdoctoral research fellow in PNI; Julianna Poole, who received her master’s degree in molecular biology from Princeton in 2014; Princeton senior Joseph Park; and Lawrence Fourgeaud and Tracy Dixon-Salazar, both at UCSD. The work was supported by the Whitehall Foundation; the Sloan Foundation; Cure Autism Now; the Princeton Neuroscience Institute Innovation Fund; the Silvio Varon Chair in Neuroregeneration at UCSD; Autism Speaks; and the National Science Foundation.

    See the full article here.

    About Princeton: Overview

    Princeton University is a vibrant community of scholarship and learning that stands in the nation’s service and in the service of all nations. Chartered in 1746, Princeton is the fourth-oldest college in the United States. Princeton is an independent, coeducational, nondenominational institution that provides undergraduate and graduate instruction in the humanities, social sciences, natural sciences and engineering.

    As a world-renowned research university, Princeton seeks to achieve the highest levels of distinction in the discovery and transmission of knowledge and understanding. At the same time, Princeton is distinctive among research universities in its commitment to undergraduate teaching.

    Today, more than 1,100 faculty members instruct approximately 5,200 undergraduate students and 2,600 graduate students. The University’s generous financial aid program ensures that talented students from all economic backgrounds can afford a Princeton education.

  • richardmitnick 4:42 pm on October 21, 2014

    From astrobio.net: “Scientists create possible precursor to life” 

    Astrobiology Magazine

    Oct 21, 2014
    University of Southern Denmark
    Contact: Steen Rasmussen, Professor and Head of the FLINT Center. Email: steen@sdu.dk. Mobile: +45 60112507

    How did life originate? And can scientists create life? These questions not only occupy the minds of scientists interested in the origin of life, but also researchers working with technology of the future. If we can create artificial living systems, we may not only understand the origin of life – we can also revolutionize the future of technology.

    Model of a protocell. Image by Janet Iwasa

    Protocells are the simplest, most primitive living systems you can think of. The oldest ancestor of life on Earth was a protocell, and when we see what it eventually managed to evolve into, we understand why science is so fascinated with protocells. If science can create an artificial protocell, we get a very basic ingredient for creating more advanced artificial life.

    However, creating an artificial protocell is far from simple, and so far no one has managed to do that. One of the challenges is to create the information strings that can be inherited by cell offspring, including protocells. Such information strings are like modern DNA or RNA strings, and they are needed to control cell metabolism and provide the cell with instructions about how to divide.

    Essential for life

    If one daughter cell after a division carries slightly altered information (perhaps giving it a slightly faster metabolism), it may be more fit to survive. It may therefore be selected, and evolution has begun.

    Now researchers from the Center for Fundamental Living Technology (FLINT), Department of Physics, Chemistry and Pharmacy, University of Southern Denmark, describe in the journal Europhysics Letters how, in a virtual computer experiment, they discovered information strings with peculiar properties.

    Professor and head of FLINT, Steen Rasmussen, says: “Finding mechanisms to create information strings is essential for researchers working with artificial life.”

    An autocatalytic network is a network of molecules, which catalyze each other’s production. Each molecule can be formed by at least one chemical reaction in the network, and each reaction can be catalyzed by at least one other molecule in the network. This process will create a network that exhibits a primitive form of metabolism and an information system that replicates itself from generation to generation. Credit University of Southern Denmark.

    Steen Rasmussen and his colleagues know they face two problems:

    Firstly, long molecular strings decompose in water. This means that long information strings quickly “break” in water and turn into many short strings, so it is very difficult to maintain a population of long strings over time.

    Secondly, it is difficult to make these molecules replicate without the use of modern enzymes; it is easier to perform a so-called ligation, in which two shorter strings are connected into a longer string, assisted by another, matching longer string that acts as a template. Ligation is the mechanism used by the SDU researchers.
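
    To make the ligation mechanism concrete, here is a small toy simulation in Python. It is a sketch only: the two-letter alphabet, the rates and the rules are invented for illustration, and it is not the FLINT/SDU model from the Europhysics Letters paper.

        import random

        # Toy sketch of template-directed ligation dynamics (illustration only).
        ALPHABET = "AB"

        def random_string(n):
            return "".join(random.choice(ALPHABET) for _ in range(n))

        def step(pool, p_spontaneous=0.005, p_break=0.02):
            """One update: attempt a ligation, then let long strings hydrolyze."""
            # Ligation: two strings are joined into one longer string.
            if len(pool) >= 3:
                i, j = random.sample(range(len(pool)), 2)
                product = pool[i] + pool[j]
                # Template-assisted: some *other* string contains the would-be product.
                has_template = any(product in s for k, s in enumerate(pool) if k not in (i, j))
                if has_template or random.random() < p_spontaneous:
                    for k in sorted((i, j), reverse=True):
                        pool.pop(k)              # the two reactants are consumed
                    pool.append(product)
            # Hydrolysis: long strings occasionally break apart in water.
            for k in range(len(pool)):
                s = pool[k]
                if len(s) > 2 and random.random() < p_break:
                    cut = random.randrange(1, len(s))
                    pool[k], extra = s[:cut], s[cut:]
                    pool.append(extra)

        def simulate(steps=20000, n_init=300, init_len=2):
            pool = [random_string(init_len) for _ in range(n_init)]
            for _ in range(steps):
                step(pool)
            return pool

        if __name__ == "__main__":
            pool = simulate()
            counts = {}
            for s in pool:
                counts[len(s)] = counts.get(len(s), 0) + 1
            print("string length -> count:", dict(sorted(counts.items())))

    Even in a toy like this, the two competing pressures described above are visible: hydrolysis keeps chopping strings into shorter pieces, while template-assisted ligation preferentially rebuilds products that already occur within strings in the pool.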

    “In our computer simulation – our virtual molecular laboratory – information strings began to replicate quickly and efficiently as expected. However, we were struck to see that the system quickly developed an equal number of short and long information strings and further that a strong pattern selection on the strings had occurred. We could see that only very specific information patterns on the strings were to be seen in the surviving strings. We were puzzled: How could such a coordinated selection of strings occur, when we knew that we had not programmed it. The explanation had to be found in the way the strings interacted with each other”, explains Steen Rasmussen.

    It is like society

    According to Steen Rasmussen, a so-called self-organizing autocatalytic network was created in the virtual pot, into which he and his colleagues poured the ingredients for information strings.

    “An autocatalytic network works like a community; each molecule is a citizen who interacts with other citizens and together they help create a society”, explains Steen Rasmussen.

    This autocatalytic set quickly evolved into a state where strings of all lengths existed in equal concentrations, which is not what is usually found. Further, the selected strings had strikingly similar patterns, which is also unusual.

    “We might have discovered a process similar to the processes that initially sparked the first life. We of course don’t know if life actually was created this way – but it could have been one of the steps. Perhaps a similar process created sufficiently high concentrations of longer information strings when the first protocell was created”, explains Steen Rasmussen.

    Basis for new technology

    The mechanisms underlying the formation and selection of effective information strings are not only interesting for the researchers who are working to create protocells. They are also valuable to researchers working on tomorrow’s technology, as is done at the FLINT Center.

    “We seek ways to develop technology that’s based on living and life-like processes. If we succeed, we will have a world where technological devices can repair themselves, develop new properties and be re-used. For example a computer made of biological materials poses very different – and less environmentally stressful – requirements for production and disposal”, says Steen Rasmussen.

    Ref: http://epljournal.edpsciences.org/articles/epl/abs/2014/14/epl16388/epl16388.html

    See the full article here.

  • richardmitnick 4:19 pm on October 21, 2014
    Tags: USGS

    From livescience: “Earthquake Forecast: 4 California Faults Are Ready to Rupture” 

    Livescience

    October 13, 2014
    Becky Oskin

    With several faults slicing through the San Francisco Bay Area, forecasting the next deadly earthquake becomes a question of when and where, not if.

    Now researchers propose that four faults have built up enough seismic strain (stored energy) to unleash destructive earthquakes, according to a study published today (Oct. 13) in the Bulletin of the Seismological Society of America.

    The quartet includes the Hayward Fault, the Rodgers Creek Fault, the Green Valley Fault and the Calaveras Fault. While all are smaller pieces of California’s San Andreas Fault system, which is more than 800 miles (1,300 kilometers) long, the four faults are a serious threat because they directly underlie cities. [Photo Journal: The Gorgeous San Andreas Fault]

    San Francisco Bay Area earthquake faults are drawn in red.

    USGS diagram of the San Andreas Fault (14 March 2006)

    “The Hayward Fault is just right in the heart of where people live, and the most buildings and the most infrastructure,” said Jim Lienkaemper, lead study author and a research geophysicist at the U.S. Geological Survey’s Earthquake Science Center in Menlo Park, California. “But it’s not just one fault, it’s the whole shopping basket. If you are in the middle of the Bay Area, you are near a whole lot of faults, and I’m concerned about all of them.”

    Lienkaemper and his colleagues gauged the potential for destructive earthquakes by monitoring tiny surface shifts along California faults. Certain faults are in constant motion, creeping steadily by less than 0.4 inches (1 centimeter) each year. These slow movements add up over time, cracking sidewalk curbs and buildings. They also serve as clues to what’s happening deep below ground, where earthquakes strike.

    “If you figure out where faults are creeping, it tells you where they’re locked and how much they’re locked,” Lienkaemper told Live Science.

    Fault creep varies, with some faults sliding at a snail’s pace and others barely budging. Models suggest that the diversity comes from locked zones that are 3 to 6 miles (5 to 10 km) below the surface, where the fault is stuck instead of sliding. For example, the relatively fast-creeping southern Hayward Fault is only about 40 percent locked, on average, while the slow-creeping Rodgers Creek Fault is 89 percent locked, the study reports. When these locked areas build up a critical amount of strain, they break apart in an earthquake.

    Map of Bay Area earthquake faults and creep measurement sites.
    Credit: USGS

    Lienkaemper and his co-author estimated a fault’s future earthquake potential by combining creep measurements with mathematical fault models and other regional data, such as the time since the last earthquake.
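
    As a rough, back-of-the-envelope illustration of that logic (and only an illustration; the published study uses far more careful fault models), one can turn a slip rate, a locked fraction and the time since the last rupture into a stored seismic moment and an equivalent magnitude. The Python sketch below uses placeholder numbers loosely inspired by the Hayward figures quoted in this article, not the study’s actual inputs.

        import math

        SHEAR_MODULUS = 3.0e10   # Pa, a typical crustal rigidity

        def stored_magnitude(slip_rate_mm_yr, locked_fraction, years_since_quake,
                             fault_length_km, locked_depth_km):
            """Moment magnitude equivalent to the slip deficit on a locked fault patch."""
            slip_deficit_m = slip_rate_mm_yr * 1e-3 * locked_fraction * years_since_quake
            area_m2 = (fault_length_km * 1e3) * (locked_depth_km * 1e3)
            moment_nm = SHEAR_MODULUS * area_m2 * slip_deficit_m       # seismic moment, N*m
            return (2.0 / 3.0) * (math.log10(moment_nm) - 9.1)         # Hanks-Kanamori scale

        if __name__ == "__main__":
            # Hayward-like placeholders: ~9 mm/yr deep slip rate, ~40 percent locked,
            # 146 years of accumulation, ~70 km rupture length, ~10 km locked depth.
            mw = stored_magnitude(9.0, 0.40, 146, 70.0, 10.0)
            print(f"stored moment corresponds to roughly Mw {mw:.1f}")

    With these made-up inputs the estimate lands in the mid-to-high magnitude-6 range, the same ballpark as the study’s Hayward figure, but the point of the sketch is the bookkeeping, not the number.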

    The Hayward Fault has banked enough energy for a magnitude-6.8 earthquake, according to the study. The Rodgers Creek Fault could trigger a magnitude-7.1 earthquake, and the Green Valley Fault also has the potential to unleash a magnitude-7.1 shaker. The Northern Calaveras Fault is set for a magnitude-6.8 temblor.

    Of all Bay Area faults, the Hayward Fault is most likely to spawn a damaging earthquake in the next 30 years, scientists think. Its 1868 earthquake was called the Big One until the great 1906 San Francisco quake came along. The Hayward Fault has ruptured about every 140 years for its previous five large earthquakes. The probability of a magnitude-6.7 earthquake on the Hayward Fault is 30 percent in the next 30 years.

    Though 146 years have now passed since the last Hayward earthquake, that doesn’t mean the fault is overdue for another quake, Lienkaemper said. “The average is 160 years, but the uncertainty is plus or minus 100 years, which is almost as big as the time [interval] itself.” The 160-year average comes from an analysis of data collected from trenches dug across the fault that revealed evidence of earthquakes over thousands of years.

    The Rodgers Creek and Green Valley Faults are also closing in on their average repeat times between earthquakes.

    See the full article here.

  • richardmitnick 3:36 pm on October 21, 2014

    From WCG: “1,000,000 years of data processing”


    We didn’t plan it this way, but it couldn’t have been better if we tried. The total runtime donated by our amazing volunteers has reached 1 million years.
    Thank you for making this possible, but even more thanks for what this represents: important, humanitarian research that would have been impossible without your help. And please, sign up today to participate in the new Uncovering Genome Mysteries project!

    “World Community Grid (WCG) brings people together from across the globe to create the largest non-profit computing grid benefiting humanity. It does this by pooling surplus computer processing power. We believe that innovation combined with visionary scientific research and large-scale volunteerism can help make the planet smarter. Our success depends on like-minded individuals – like you.”

    WCG projects run on BOINC software from UC Berkeley.

    BOINC, properly the Berkeley Open Infrastructure for Network Computing, is a leader in the fields of distributed computing, grid computing and citizen cyberscience.

    CAN ONE PERSON MAKE A DIFFERENCE? YOU BETCHA!!

    “Download and install secure, free software that captures your computer’s spare power when it is on, but idle. You will then be a World Community Grid volunteer. It’s that simple!” You can download the software at either WCG or BOINC.

    Please visit the project pages:

    Uncovering Genome Mysteries

    Say No to Schistosoma

    GO Fight Against Malaria

    Drug Search for Leishmaniasis

    Computing for Clean Water

    The Clean Energy Project

    Discovering Dengue Drugs – Together

    Help Cure Muscular Dystrophy

    Help Fight Childhood Cancer

    Help Conquer Cancer

    Human Proteome Folding

    FightAIDS@Home

    World Community Grid is a social initiative of IBM Corporation.


  • richardmitnick 3:08 pm on October 21, 2014
    Tags: Fermilab Scientific Computing

    From FNAL: “Simulation in the 21st century” 


    Fermilab is an enduring source of strength for the US contribution to scientific research worldwide.

    Tuesday, Oct. 21, 2014
    V. Daniel Elvira, Scientific Computing Simulation Department head

    Simulation is not magic, but it can certainly produce the feeling. Although it can’t miraculously replace particle physics experiments, revealing new physics phenomena at the touch of a key, it can help scientists to design detectors for best physics at the minimum cost in time and money.

    This CMS simulated event was created using Geant4 simulation software. Image: CMS Collaboration

    CMS at CERN

    Geant4 is a detector simulation software toolkit originally created at CERN and currently developed by about 100 physicists and computer scientists from all around the world to model the passage of particles through matter and electromagnetic fields. For example, physicists use simulation to optimize detectors and software algorithms with the goal of measuring, with utmost efficiency, the marks that previously unobserved particles predicted by new theories would leave in their experimental devices.

    Particle physics detectors are typically large and complex. Think of them as a set of hundreds of different shapes and materials. Particles coming from accelerator beams or high-energy collisions traverse the detectors, lose energy and transform themselves into showers of more particles as they interact with the detector material. The marks they leave behind are read by detector electronics and reconstructed by software into the original incident particles with their associated energies and trajectories.

    We wouldn’t even dream of starting detector construction, much less asking for the funding to do it, without simulating the detector geometry and magnetic fields, as well as the physics of the interactions of particles with detector material, in exquisite detail. One of the goals of simulation is to demonstrate that the proposed detector would do the job.

    Geant4 includes tools to represent the detector geometry by assembling elements of different shapes, sizes and material, as well as the mathematical expressions to propagate particles and calculate the details of the electromagnetic and nuclear interactions of particles with matter.
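
    Geant4 itself is a large C++ toolkit, so the short Python toy below is in no way a substitute for it; it only illustrates the core idea described above: step a particle through a layered “geometry” while accumulating the energy it deposits. The materials, thicknesses and energy-loss rates are invented for illustration, and real simulations also model showers, scattering and magnetic fields.

        import random

        # (layer name, thickness in cm, mean energy loss in MeV/cm) -- invented values
        LAYERS = [
            ("silicon tracker", 0.1, 3.9),
            ("crystal calorimeter", 23.0, 9.0),
            ("iron yoke", 50.0, 11.4),
        ]

        def transport(energy_mev, step_cm=0.5, straggling=0.1):
            """Step one particle through LAYERS; return energy deposited per layer."""
            deposits = []
            for name, thickness, dedx in LAYERS:
                deposited = 0.0
                travelled = 0.0
                while travelled < thickness and energy_mev > 0.0:
                    step = min(step_cm, thickness - travelled)
                    # Fluctuate the mean loss a little to mimic straggling.
                    loss = max(0.0, random.gauss(dedx * step, straggling * dedx * step))
                    loss = min(loss, energy_mev)
                    deposited += loss
                    energy_mev -= loss
                    travelled += step
                deposits.append((name, deposited))
            return deposits, energy_mev

        if __name__ == "__main__":
            deposits, escaping = transport(energy_mev=2000.0)   # a 2 GeV particle
            for name, dep in deposits:
                print(f"{name:22s} {dep:8.1f} MeV deposited")
            print(f"{'beyond the detector':22s} {escaping:8.1f} MeV escaping")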

    Geant4 is the current incarnation of Geant (Geometry and Tracking, or “giant” in French). It has become extremely popular for physics, medical and space science applications and is the tool of choice for high-energy physics, including CERN’s LHC experiments and Fermilab’s neutrino and muon programs.

    The Fermilab Scientific Computing Simulation Department (SCS) has grown a team of Geant4 experts that participate actively in its core development and maintenance, offering detector simulation support to experiments and projects within Fermilab’s scientific program. The focus of our team is on improving physics and testing tools, as well as time and memory performance. The SCS team also spearheads an exciting R&D program to re-engineer the toolkit to run on modern computer architectures.

    New-generation machines containing chips called coprocessors, or graphics processing units such as those used in game consoles or smart phones, may be used to speed execution times significantly. Software engineers do this by exploiting the benefits of the novel circuit design of the chips, as well as by using parallel programming. For example, a program execution mode called “multi-threading” would allow us to simulate particles from showers of different physics collisions simultaneously, distributing the work across the hundreds or thousands of processor cores contained within these novel computer systems.
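
    The event-level parallelism described here can be sketched in a few lines. Real Geant4 multi-threading is implemented inside the C++ toolkit; this Python stand-in only shows the pattern of farming independent collisions out to many cores.

        from multiprocessing import Pool
        import random

        def simulate_event(seed):
            """Stand-in for tracking all the particles produced by one collision."""
            rng = random.Random(seed)
            return sum(rng.random() for _ in range(100_000))

        if __name__ == "__main__":
            with Pool() as workers:                              # one worker per available core
                totals = workers.map(simulate_event, range(8))   # 8 independent events
            print([round(t) for t in totals])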

    As the high-energy community builds, commissions and runs the experiments of the first half of the 21st century, a world of exciting and promising possibilities is opening in the field of simulation and detector modeling. Our Fermilab SCS team is at the forefront of this effort.

    See the full article here.


    Fermi National Accelerator Laboratory (Fermilab), located just outside Batavia, Illinois, near Chicago, is a US Department of Energy national laboratory specializing in high-energy particle physics.

  • richardmitnick 6:22 am on October 21, 2014

    From SLAC: “Puzzling New Behavior Found in High-Temperature Superconductors” 


    SLAC Lab

    October 20, 2014

    Ultimate Goal: A Super-efficient Way to Conduct Electricity at Room Temperature

    Research by an international team led by SLAC and Stanford scientists has uncovered a new, unpredicted behavior in a copper oxide material that becomes superconducting – conducting electricity without any loss – at relatively high temperatures.

    This new phenomenon – an unforeseen collective motion of electric charges coursing through the material – presents a challenge to scientists seeking to understand its origin and connection with high-temperature superconductivity. Their ultimate goal is to design a superconducting material that works at room temperature.

    “Making a room-temperature superconductor would save the world enormous amounts of energy,” said Thomas Devereaux, leader of the research team and director of the Stanford Institute for Materials and Energy Sciences (SIMES), which is jointly run with SLAC. “But to do that we must understand what’s happening inside the materials as they become superconducting. This result adds a new piece to this long-standing puzzle.”

    The results are published Oct. 19 in Nature Physics.

    Delving Into Doping Differences

    The researchers used an emerging X-ray technique called resonant inelastic X-ray scattering, or RIXS, to measure how the properties of a copper oxide change as extra electrons are added in a process known as doping. The team used the Swiss Light Source’s RIXS instrument, which currently has the world’s highest resolution and can reveal atomic-scale excitations – rapid changes in magnetism, electrical charge and other properties – as they move through the material.

    Copper oxide, a ceramic that normally doesn’t conduct electricity at all, becomes superconducting only when doped with other elements to add or remove electrons and chilled to low temperatures. Intriguingly, the electron-rich version loses its superconductivity when warmed to about 30 degrees above absolute zero (30 kelvins) while the electron-poor one remains superconducting up to 120 kelvins (minus 244 degrees Fahrenheit). One of the goals of the new research is to understand why they behave so differently.

    The experiments revealed a surprising increase of magnetic energy and the emergence of a new collective excitation in the electron-rich compounds, said Wei-sheng Lee, a SLAC staff scientist and lead author on the Nature Physics paper. “It’s very puzzling that these new electronic phenomena are not seen in the electron-poor material,” he said.

    SLAC Staff Scientist Wei-sheng Lee (SLAC National Accelerator Laboratory)

    Lee added that it’s unclear whether the new collective excitation is related to the ability of electrons to pair up and effortlessly conduct electricity – the hallmark of superconductivity – or whether it promotes or limits high-temperature superconductivity. Further insight can be provided by additional experiments using next-generation RIXS instruments that will become available in a few years at synchrotron light sources worldwide.

    A Long, Tortuous Path

    This discovery is the latest step in the long and tortuous path toward understanding high-temperature superconductivity.

    Scientists have known since the late 1950s why certain metals and simple alloys become superconducting when chilled within a few degrees of absolute zero: Their electrons pair up and ride waves of atomic vibrations that act like a virtual glue to hold the pairs together. Above a certain temperature, however, the glue fails as thermal vibrations increase, the electron pairs split up and superconductivity disappears.
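
    For reference, the textbook BCS estimate makes this picture quantitative (a standard result from the superconductivity literature, not from the new paper):

    $$ k_B T_c \;\simeq\; 1.13\, \hbar\omega_D \, e^{-1/(N(0)V)} $$

    where $\hbar\omega_D$ is the characteristic energy of the lattice vibrations that supply the “glue,” $N(0)$ is the density of electronic states at the Fermi level and $V$ is the strength of the pairing interaction. Because the vibration energies are modest and the exponential suppression is strong, this mechanism tops out at a few tens of kelvins, which is why copper oxides that superconduct above 100 kelvins demand a different explanation.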

    Starting in 1986, researchers discovered a number of materials that are superconducting at higher temperatures. By understanding and optimizing how these materials work, they hope to develop superconductors that work at room temperature and above.

    Until recently, the most likely glue holding superconducting electron pairs together at higher temperatures seemed to be strong magnetic excitations created by interactions between electron spins. But a recent theoretical simulation by SLAC and Stanford researchers concluded that these high-energy magnetic interactions are not the sole factor in copper oxide’s high-temperature superconductivity. The new results confirm that prediction, and also complement a 2012 report on the behavior of electron-poor copper oxides by a team that included Lee, Devereaux and several other SLAC/Stanford scientists.

    “Theorists must now incorporate this new ingredient into their explanations of how high-temperature superconductivity works,” said Thorsten Schmitt, leader of the RIXS team at the Paul Scherrer Institute in Switzerland, who collaborated on the study.

    Other researchers involved in the study were from Columbia University, University of Minnesota, AGH University of Science and Technology in Poland, National Synchrotron Radiation Research Center and National Tsing Hua University in Taiwan, and the Chinese Academy of Sciences. Funding for the research came from the DOE Office of Science, U.S. National Science Foundation and Swiss National Science Foundation.

    See the full article, with animation video, here.

    SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford University for the DOE’s Office of Science.

  • richardmitnick 6:49 pm on October 20, 2014

    From NYT: “25 Years Ago, NASA Envisioned Its Own ‘Orient Express’” 

    The New York Times

    OCT. 20, 2014
    KENNETH CHANG

    The National Aero-Space Plane was to be a revolutionary advance beyond the space shuttle.


    In his 1986 State of the Union address, President Ronald Reagan promised “a new Orient Express that could, by the end of the next decade, take off from Dulles Airport and accelerate up to 25 times the speed of sound, attaining low-earth orbit or flying to Tokyo within two hours.”

    On Oct. 3, 1989, an article in Science Times, “Designing a Plane for the Leap to Space (and Back),” reported frenetic activity at NASA and the Defense Department.

    “Scientists and engineers are making rapid progress in developing technologies needed to build a 17,000-mile-an-hour ‘space plane’ that could escape earth’s gravity and circle the globe in 90 minutes,” the article began.

    “Their goal,” it continued, “is a space plane that could take off and land from virtually any airport in the world, carry satellites and other space cargo into orbit cheaply, shuttle between the earth and an orbiting space station, or carry a load of bombs deep into enemy territory as fast as an intercontinental missile.”

    Proponents contended the space plane would be far cheaper to operate than the shuttle.

    Others were dubious. The Air Force, which was providing most of the financing, had already tried to back out, but the National Space Council, headed by Vice President Dan Quayle, recommended continuing work at a slower pace.

    The target for the first flight of the first experimental version, known as the X-30, was originally 1993 but was pushed back to 1997.

    25 YEARS LATER The space plane, able to fly by itself to orbit, never took off. The X-30 died in 1994. Smaller-scale hypersonic programs came and went.

    Was the X-30 technologically feasible?

    “No, and it’s still not,” said Jess Sponable, a program manager in the tactical technology office at Darpa, the Defense Advanced Research Projects Agency. For the X-30 to succeed, infant ideas would have had to be developed into robust, reliable technologies: materials that could survive intense temperatures, air-breathing engines that could fly faster and higher.

    Nonetheless, “absolutely, it was worthwhile,” Mr. Sponable said, although he added perhaps not worth the more than $1.6 billion spent. “We learned a lot.”

    The pendulum for spacecraft design has since swung away from the cutting edge to the tried and true. The Orion craft, which NASA is building for deep-space missions, is a capsule, just like the one used for the Apollo moon missions but bigger. The two private company designs that NASA chose to take future astronauts to the space station are also capsules. (The loser in that competition was a mini-shuttle offering.)

    NASA/Orion

    But the dream of hypersonic space planes continues.

    At Darpa, Mr. Sponable heads the XS-1 space plane project. It is not a do-it-all-at-once effort like the 1980s space plane but a much simpler, unmanned vehicle that would serve as a reusable first stage.

    Mr. Sponable is eager to figure out how to send it up many times, quickly and cheaply; the goal is 10 flights in 10 days.

    “We want operability No. 1,” he said. With the quick launches, the issue of cost “just disappears, because we can’t spend a lot of money from Day 1 to Day 2 to Day 3.”

    Darpa has awarded contracts to three industry teams to develop preliminary designs. Mr. Sponable said the decision of a next step would come next spring.

    The space plane episode illustrates the recurring money woes that have bedeviled NASA for decades: A grandiose plan is announced with fanfare and a burst of financing that fades as delays and cost overruns undercut the optimistic plans. Then a new president or a new NASA administrator changes course.

    Most recently, the Obama administration canceled plans started under President George W. Bush to send astronauts back to the moon and told NASA to consider an asteroid instead.

    If the pattern continues, NASA priorities could zig again after the next president moves into the White House in 2017.

    See the full article here.

  • richardmitnick 3:29 pm on October 20, 2014

    From astrobio.net: “Exomoons Could Be Abundant Sources Of Habitability”

    Astrobiology Magazine

    Oct 20, 2014
    Elizabeth Howell

    With about 4,000 planet candidates from the Kepler Space Telescope data to analyze so far, astronomers are busy trying to figure out questions about habitability. What size planet could host life? How far from its star does it need to be? What would its atmosphere need to be made of?

    NASA/Kepler

    Look at our own solar system, however, and there’s a big gap in the information we need. Most of the planets have moons, so surely at least some of the Kepler finds would have them as well. Tracking down these tiny worlds, however, is a challenge.

    Europa is one of the moons in our solar system that could host life. What about beyond the solar system? Credit: NASA/JPL/Ted Stryk

    A new paper in the journal Astrobiology, called “Formation, Habitability, and Detection of Extrasolar Moons,” surveys this mostly unexplored field of extrasolar research. The scientists give an extensive review of what is known and conjectured about moons beyond the Solar System, and they add intriguing new results.

    A wealth of moons exist in our own solar system that could host life. Icy Europa, which is circling Jupiter, was recently discovered to have plumes of water erupting from its surface. Titan, in orbit around Saturn, is the only known moon with an atmosphere, and could have the precursor elements to life in its hydrocarbon seas that are warmed by Saturn’s heat. Other candidates for extraterrestrial hosts include Jupiter’s moons Callisto and Ganymede, as well as Saturn’s satellite Enceladus.

    Lead author René Heller, an astrophysicist at the Origins Institute at McMaster University, in Ontario, Canada, said some exomoons could be even better candidates for life than many exoplanets.

    “Moons have separate energy sources,” he said. “While the habitability of terrestrial planets is mostly determined by stellar illumination, moons also receive reflected stellar light from the planet as well as thermal emission from the planet itself.”

    Moreover, a planet like Jupiter — which hosts most of the moons in the Solar System that could support life — provides even more potential energy sources, he added. The planet is still shrinking and thereby converts gravitational energy into heat, so that it actually emits more light than it receives from the Sun, providing yet more illumination. Besides that, moons orbiting close to a gas giant are flexed by the planet’s gravity, providing potential tidal heating as an internal, geological heat source.
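
    The tidal contribution can be estimated with a standard leading-order expression for a synchronously rotating moon on a slightly eccentric orbit (a textbook formula, quoted for orientation rather than taken from the Astrobiology paper):

    $$ \dot{E}_{\rm tidal} \;\simeq\; \frac{21}{2}\,\frac{k_2}{Q}\,\frac{G M_p^2 R_m^5\, n\, e^2}{a^6} $$

    where $M_p$ is the planet’s mass, $R_m$ the moon’s radius, $a$, $e$ and $n$ the moon’s orbital semi-major axis, eccentricity and mean motion, and $k_2/Q$ measures how easily the moon deforms and dissipates energy. The steep $a^{-6}$ dependence is why moons orbiting close to a giant planet, such as Io and Europa around Jupiter, can be heated so strongly from within.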

    Triton’s odd, melted appearance hints that the moon was captured and altered by Neptune. Credit: NASA

    Finding the first exomoon

    The first challenge in studying exomoons outside our Solar System is to actually find one. Earlier this year, NASA-funded researchers reported the possible discovery of such a moon, but this claim was ambiguous and can never be confirmed. That’s because it appeared as a one-time event, when one star passed in front of another, acting as a sort of gravitational lens that amplified the background star. Two objects popped out in the gravitational lens in the foreground — either a planet and a star, or a planet and an extremely heavy exomoon.

    For his part, Heller is convinced that exomoons are lurking in the Kepler data, but they have not been discovered yet. Only one project right now is dedicated to searching for exomoons; it is led by David Kipping at the Harvard-Smithsonian Center for Astrophysics. His group has published several papers investigating 20 Kepler planets and candidates in total. The big restriction on their efforts is computational power, as their simulations require supercomputers.

    Another limiting factor is the number of observatories that can search for exomoons. To detect them, at least a handful of transits of the planet-moon system across their common host star would be required to absolutely make sure that the companion is a moon, Heller said. Also, the planet with the moon would have to be fairly far from its star, and decidedly not those close-in hot Jupiters that take only a few days to make an orbit. In that zone, the gravitational drag of the star would fatally perturb any moon’s orbit.
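
    The reason a close-in planet makes a poor moon host can be made quantitative with the planet’s Hill radius, the region in which the planet’s gravity dominates over the star’s; a moon is generally stable only well inside it (roughly half the Hill radius for prograde orbits is a common rule of thumb from dynamical studies). The Python sketch below uses illustrative numbers, not values from the paper.

        G = 6.674e-11        # m^3 kg^-1 s^-2
        M_SUN = 1.989e30     # kg
        M_JUP = 1.898e27     # kg
        AU = 1.496e11        # m

        def hill_radius(a_au, m_planet, m_star=M_SUN, eccentricity=0.0):
            """Hill radius in metres for a planet at semi-major axis a_au."""
            a = a_au * AU
            return a * (1.0 - eccentricity) * (m_planet / (3.0 * m_star)) ** (1.0 / 3.0)

        if __name__ == "__main__":
            for label, a_au in [("hot Jupiter at 0.05 AU", 0.05),
                                ("Jupiter analog at 5.2 AU", 5.2)]:
                r_hill_km = hill_radius(a_au, M_JUP) / 1e3
                stable_km = 0.5 * r_hill_km        # crude prograde stability limit
                print(f"{label:26s} Hill radius ~{r_hill_km:12,.0f} km, "
                      f"stable moon orbits out to ~{stable_km:12,.0f} km")

    Since the Hill radius scales linearly with orbital distance, moving a Jupiter-mass planet from 5.2 AU in to 0.05 AU shrinks the region of stable moon orbits by a factor of about a hundred, and tides from the star then erode even those remaining orbits.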

    Heller estimates that a telescope would need to stare constantly at the same patch of sky for several hundred days, minimum, to pick up an exomoon. Kepler fulfilled that obligation in spades with its four years of data gazing at the same spot in the sky, but astronomers will have to wait again for that opportunity.

    Because two of Kepler’s reaction wheels (pointing devices) have failed, Kepler’s new mission will use the pressure of sunlight to help keep it steady. But it can now point to the same region of the sky for only about 80 days at a time, because the telescope will periodically need to be moved so as not to risk placing its optics too close to the Sun.

    NASA’s forthcoming Transiting Exoplanet Survey Satellite (TESS) is only expected to look at a given field for 70 days. Further into the future, the European Space Agency’s PLAnetary Transits and Oscillations of stars (PLATO) observatory will launch in 2024 for a planned six-year mission looking at several spots in the sky.

    NASA/TESS

    ESA/PLATO

    “PLATO is the next step, with a comparable accuracy to Kepler but a much larger field of view and hopefully a longer field of view coverage,” Heller said.

    Clues in our solar system

    Thousands of exoplanets and exoplanet candidates have been discovered, but astronomers are still searching for exomoons. Credit: ESA – C. Carreau

    Heller characterizes moons as an under-appreciated feature of extrasolar planetary systems. Just by looking around us in the Solar System, he says, astronomers have been able to make crucial explanations about how the moons must have formed and evolved together with their planets. Moons thus carry information about the substructure of planet evolution, which is not accessible by planet observations alone.

    The Earth’s moon, for example, was likely formed when a Mars-sized object collided with the proto-Earth and produced a debris disk. Over time, that debris coalesced into our moon.

    While Heller says the literature mostly focuses on collision scenarios between an Earth-sized object and a Mars-sized object, he doesn’t see any reason why crashes on a bigger scale might not happen. Perhaps an Earth-sized object crashed into an object that was five times the mass of Earth, producing an extrasolar Earth-Earth binary planet system, he suggests.

    Another collision scenario likely took place at Uranus. The gas giant’s rotation is tilted about 90 degrees in its orbit around the Sun. In other words, it is rolling on its side. More intriguing, its two dozen moons follow Uranus’ rotational equator, and they do not orbit in the same plane as Uranus’ track around the Sun. This scenario suggests that Uranus was hit multiple times by huge objects instead of just once, Heller said.

    Examining mighty Jupiter’s moons gives astronomers a sense of how high temperatures were in the disk that formed the gas giant and its satellites, Heller added. Ganymede, for example, is an icy moon. Models indicate that beyond Ganymede’s orbit (at about 15 Jupiter radii) it is sufficiently cold for water to pass from the gas to the solid (ice) stage, so the regular moons in these regions are very water-rich compared to the inner, mostly rocky moons Io and Europa.

    “It sounds a bit technical, but we couldn’t have this information about planetary accretion if we did not have the moons today to observe,” Heller said.

    Some moons could also have been captured, such as Neptune’s large moon, Triton. The moon orbits in a direction opposite to other moons in Neptune‘s system (and in fact, opposite to the direction of other large moons in the Solar System.) Plus, its odd terrain suggests that it used to be a free-floating object that was captured by Neptune’s gravity. Neptune is so huge that it raised tides within the moon, reforming its surface.

    Even comparing the different types of moons around planets in the Solar System reveals different timescales of formation. Jupiter includes four moons similar in size to Earth’s moon (Europa, Callisto, Ganymede and Io), while the next largest planet in our solar system, Saturn, only has one large moon called Titan. Astronomers believe Saturn has only one large moon because the gas that formed objects in our solar system was more plentiful in Jupiter’s system to provide material for the moons to form.

    The gas abundance happened as a consequence of the huge gas giant creating a void in the material surrounding our young Sun, pulling the material in for its moons. Saturn was not quite large enough to do this, resulting in fewer large moons.

    More strange situations could exist beyond our solar system’s boundaries, but it will take a dedicated search to find exomoons. Once they are discovered, however, they will allow planet formation and evolution studies on a completely new level.

    This research was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC); the Center for Exoplanets and Habitable Worlds, which is supported by the Pennsylvania State University; the Pennsylvania Space Grant Consortium; the National Science Foundation (NSF); and the NASA Astrobiology Institute.

    See the full article here.

    NASA

  • richardmitnick 3:04 pm on October 20, 2014 Permalink | Reply
    Tags: , , ,   

    From LLNL: “Supercomputers link proteins to drug side effects” 


    Lawrence Livermore National Laboratory

    10/20/2014
    Kenneth K Ma, LLNL, (925) 423-7602, ma28@llnl.gov

    New medications created by pharmaceutical companies have helped millions of Americans alleviate pain and suffering from their medical conditions. However, the drug creation process often misses many side effects that kill at least 100,000 patients a year, according to the journal Nature.

    Lawrence Livermore National Laboratory researchers have developed a method of using supercomputers to identify proteins that cause medications to have certain adverse drug reactions (ADRs), or side effects. They use high-performance computing (HPC) to run proteins and drug compounds through an algorithm that produces reliable drug-discovery data outside of a laboratory setting.

    The team recently published its findings in the journal PLOS ONE, in a paper titled “Adverse Drug Reaction Prediction Using Scores Produced by Large-Scale Drug-Protein Target Docking on High-Performance Computer Machines.”

    “We need to do something to identify these side effects earlier in the drug development cycle to save lives and reduce costs,” said Monte LaBute, a researcher from LLNL’s Computational Engineering Division and the paper’s lead author.

    It takes pharmaceutical companies roughly 15 years to bring a new drug to the market, at an average cost of $2 billion. A new drug compound entering Phase I (early stage) testing is estimated to have an 8 percent chance of reaching the market, according to the Food and Drug Administration (FDA).

    A typical drug discovery process begins with identifying which proteins are associated with a specific disease. Candidate drug compounds are then combined with these target proteins, the proteins a drug must attach to in order to work, in a process known as binding, which reveals the drug’s effectiveness (efficacy) and any harmful side effects (toxicity).

    While this method can identify side effects involving the known target proteins, there are myriad unknown “off-target” proteins that may also bind to the candidate drug and could cause unanticipated side effects.

    Because it is cost prohibitive to experimentally test a drug candidate against a potentially large set of proteins — and the list of possible off-targets is not known ahead of time — pharmaceutical companies usually only test a minimal set of off-target proteins during the early stages of drug discovery. This results in ADRs remaining undetected through the later stages of drug development, such as clinical trials, and possibly making it to the marketplace.

    There have been several highly publicized medications with off-target protein side effects that have reached the marketplace. For example, Avandia, an anti-diabetic drug, caused heart attacks in some patients; and Vioxx, an anti-inflammatory medication, caused heart attacks and strokes among certain patient populations. Both therapeutics were recalled because of their side effects.

    “There were no indications of side effects of these medications in early testing or clinical trials,” LaBute said. “We need a way to determine the safety of such therapeutics before they reach patients. Our work can help direct such drugs to patients who will benefit the most from them with the least amount of side effects.”

    LaBute and the LLNL research team tackled the problem by using supercomputers and information from public databases of drug compounds and proteins. These included the protein databases DrugBank, UniProt and the Protein Data Bank (PDB), along with drug databases from the FDA and from SIDER, which catalog FDA-approved drugs and their known ADRs.

    The team examined 4,020 off-target proteins from DrugBank and UniProt. Those proteins were indexed against the PDB, which whittled the number down to 409 off-target proteins that have high-quality 3D crystallographic X-ray diffraction structures essential for analysis in a computational setting.
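
    As a rough sketch of that filtering step, the snippet below keeps only proteins that have a high-quality X-ray structure. The input file, column names and resolution cutoff are hypothetical illustrations, not the paper’s actual criteria.

```python
import pandas as pd

# Hypothetical table of candidate off-target proteins, one row per protein,
# with columns describing any PDB structure available for it. The file name,
# column names and the 2.5 angstrom cutoff are illustrative assumptions.
proteins = pd.read_csv("candidate_proteins.csv")

usable = proteins[
    proteins["pdb_id"].notna()                        # a structure is deposited in the PDB
    & (proteins["method"] == "X-RAY DIFFRACTION")     # crystallographic structure
    & (proteins["resolution_angstrom"] <= 2.5)        # assumed quality threshold
]
print(f"{len(usable)} of {len(proteins)} candidate proteins retained for docking")
```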

    The 409 off-target proteins were fed into VinaLC, molecular docking software developed at Livermore for HPC systems, along with 906 FDA-approved drug compounds. VinaLC docked each drug against each protein, and every drug-protein combination received a score assessing whether effective binding occurred.
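
    Conceptually, this step fills a 409-by-906 matrix of binding scores, one row per protein and one column per drug. The sketch below shows only that bookkeeping; run_docking is a stand-in for the actual docking engine, and the random scores it returns are placeholders, not VinaLC output.

```python
import numpy as np

def run_docking(protein_id: str, drug_id: str) -> float:
    """Stand-in for one docking calculation (e.g. a single VinaLC job).
    A real pipeline would launch the docking code on the HPC system and
    parse the binding score from its output; here we fake a score."""
    seed = hash((protein_id, drug_id)) % (2**32)
    return float(np.random.default_rng(seed).normal(-7.0, 1.5))

protein_ids = [f"PROT{i:03d}" for i in range(409)]  # 409 off-target proteins
drug_ids = [f"DRUG{i:03d}" for i in range(906)]     # 906 FDA-approved compounds

# Binding-score matrix: rows are proteins, columns are drugs.
# (Filling it serially like this is slow; a real run farms the jobs out
# across the cluster.)
scores = np.array([[run_docking(p, d) for d in drug_ids] for p in protein_ids])
print(scores.shape)  # (409, 906)
```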

    The binding scores were then fed into another computer program and combined with data on 560 FDA-approved drugs with known side effects. An algorithm used these inputs to determine which proteins were associated with which side effects.
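
    The article does not say which learning algorithm was used, so the sketch below assumes an L1-regularized logistic regression as one plausible choice: each drug is a sample, its binding scores against the 409 proteins form the feature vector, and the label records whether that drug is known to cause a given class of side effect. The data here are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic placeholders: 560 drugs with known ADR labels, each described by
# its binding scores against 409 off-target proteins.
X = rng.normal(-7.0, 1.5, size=(560, 409))  # drugs x proteins binding scores
y = rng.integers(0, 2, size=560)            # 1 = drug carries this ADR category

# An L1 penalty drives most coefficients to zero, so the proteins left with
# nonzero weights are the ones the model associates with the side effect.
# (This is an assumed method; the paper's exact algorithm may differ.)
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X, y)

implicated = np.flatnonzero(model.coef_[0])
print(f"{implicated.size} proteins flagged for this ADR category")
```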

    The Lab team showed that in two categories of disorders — vascular disorders and neoplasms — their computational model of predicting side effects in the early stages of drug discovery using off-target proteins was more predictive than current statistical methods that do not include binding scores.
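
    One way to picture that comparison is a cross-validated test of the same classifier with and without the docking-score features. The sketch below does this on synthetic data, so both numbers will sit near 0.5; it is a generic illustration of the comparison, not the paper’s actual evaluation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_drugs = 560

docking = rng.normal(-7.0, 1.5, size=(n_drugs, 409))  # binding-score features (synthetic)
baseline = rng.normal(0.0, 1.0, size=(n_drugs, 20))   # stand-in for non-docking drug descriptors
y = rng.integers(0, 2, size=n_drugs)                  # label for one ADR category

def mean_auc(features: np.ndarray) -> float:
    """Cross-validated ROC AUC of a logistic-regression model on the given features."""
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, features, y, cv=5, scoring="roc_auc").mean()

print("without docking scores:", round(mean_auc(baseline), 3))
print("with docking scores:   ", round(mean_auc(np.hstack([baseline, docking])), 3))
```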

    In addition to performing better than current prediction methods, the team’s calculations also predicted new potential side effects. For example, they predicted a connection between a protein normally associated with cancer metastasis and vascular disorders such as aneurysms. Their ADR predictions were validated by a thorough review of existing scientific data.

    “We have discovered a very viable way to find off-target proteins that are important for side effects,” LaBute said. “This approach using HPC and molecular docking to find ADRs never really existed before.”

    The team’s findings provide drug companies with a cost-effective and reliable method to screen for side effects, according to LaBute. Their goal is to expand their computational pharmaceutical research to include more off-target proteins for testing and eventually screen every protein in the body.

    “If we can do that, the drugs of tomorrow will have less side effects that can potentially lead to fatalities,” LaBute said. “Optimistically, we could be a decade away from our ultimate goal. However, we need help from pharmaceutical companies, health care providers and the FDA to provide us with patient and therapeutic data.”

    LLNL researchers Monte LaBute (left) and Felice Lightstone (right) were part of a Lab team that recently published an article in PLOS ONE detailing the use of supercomputers to link proteins to drug side effects. Photo by Julie Russell/LLNL

    The LLNL team also includes Felice Lightstone, Xiaohua Zhang, Jason Lenderman, Brian Bennion and Sergio Wong.

    See the full article here.

    LLNL Campus

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration