Tagged: BOINC-Berkeley Open Infrastructure for Network Computing

  • richardmitnick 8:10 am on December 26, 2019
    Tags: BOINC-Berkeley Open Infrastructure for Network Computing

    From SETI@home: “Winter 2019 Newsletter”

    SETI@home
    From SETI@home


    I’ve worked at Berkeley’s SETI Research Center for 25 years and co-founded SETI@home.


    Thank you for your efforts as a member of the SETI@home team in 2019. Our program of searching for intelligent extraterrestrial life continues to expand, but SETI@home still needs your help.

    We are putting the finishing touches on our Nebula software suite, which will analyze all results from both SETI@home and SERENDIP VI. We are focusing our first efforts on a complete analysis of all SETI@home Arecibo data to date.


    NAIC Arecibo Observatory operated by University of Central Florida, Yang Enterprises and UMET, Altitude 497 m (1,631 ft).

    As you can imagine, it is difficult to quantify the quality and sensitivity of our analysis given that there are no known ETIs to use as a reference! So part of the design of Nebula is to generate a large number of synthetic ETI-like signals, called birdies. Our set of birdies ranges from those that model stationary transmitters on far-off planets to transmitters orbiting a variety of planet types. We are also looking into using machine learning for anomaly detection.
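
    To make the idea concrete, here is a minimal sketch of a birdie in code. It is purely illustrative and says nothing about Nebula's internals: the sample rate, amplitude, and linear drift rate are invented numbers, and a transmitter orbiting a planet would impose a periodically varying drift rather than this constant one.

    ```python
    import numpy as np

    # Illustrative birdie generator (not Nebula's actual code): a faint
    # narrowband tone whose frequency drifts linearly, as the Doppler shift
    # of a distant accelerating transmitter would, injected into noise.
    fs = 2_500_000                      # complex sample rate in Hz (invented)
    t = np.arange(2**18) / fs           # ~0.1 s of data
    f0, drift = 100_000.0, 0.5          # start frequency (Hz), drift (Hz/s)

    # Instantaneous phase is the time integral of f(t) = f0 + drift * t.
    birdie = 0.05 * np.exp(2j * np.pi * (f0 * t + 0.5 * drift * t**2))

    rng = np.random.default_rng(42)
    noise = rng.normal(size=t.size) + 1j * rng.normal(size=t.size)
    data = noise + birdie   # how often the pipeline recovers this calibrates sensitivity
    ```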

    Two major papers will come out of this analysis. One will be on SETI@home as an instrument and the other will present the analysis in detail.

    This year saw the further commissioning and improvement of SERENDIP VI / FASTBurst, deployed on the FAST radio telescope in China – now the largest on the planet.

    FAST [Five-hundred-meter Aperture Spherical Telescope] radio telescope, with phased arrays from CSIRO engineers Australia, located in the Dawodang depression in Pingtang County, Guizhou Province, south China

    Our instrument is dual purpose, looking for both ETI and Fast Radio Bursts (FRBs).

    FRB Fast Radio Bursts from NAOJ Subaru, Mauna Kea, Hawaii, USA

    FRBs are transient radio pulses of short duration caused by some as-yet-unknown astrophysical process. During one exciting testing session we detected FRB 121102, a rare repeating FRB. The detection demonstrates the sensitivity of our instrument, as this faint signal is detectable by very few telescopes and instruments.

    We continue to obtain raw data from Berkeley’s Breakthrough Listen program.

    Breakthrough Listen Project


    UC Observatories Lick Automated Planet Finder, a fully robotic 2.4-meter optical telescope at Lick Observatory, situated on the summit of Mount Hamilton, east of San Jose, California, USA




    GBO radio telescope, Green Bank, West Virginia, USA


    CSIRO/Parkes Observatory, located 20 kilometres north of the town of Parkes, New South Wales, Australia


    SKA Meerkat telescope, 90 km outside the small Northern Cape town of Carnarvon, SA

    Newly added

    CfA/VERITAS, a major ground-based gamma-ray observatory with an array of four 12m optical reflectors for gamma-ray astronomy in the GeV – TeV energy range, located at the Fred Lawrence Whipple Observatory, Mount Hopkins, Arizona, USA. Altitude 2,606 m (8,550 ft).

    At Green Bank, observing is about to migrate from looking at stars within our own galaxy to observing other galaxies. Meanwhile, at Parkes, we will be surveying the galactic plane. During this survey the raw “voltage” data from the telescope will be recorded. These data will be ideal for processing by SETI@home volunteers like you.

    To accomplish our goals for next year, SETI@home needs two things. First, we need you, and your friends and family. Please spread the word about SETI@home and encourage people to participate. Second, SETI@home needs the funding to obtain the hardware and develop software required to handle new data sources.

    How to donate

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The science of SETI@home
    SETI (Search for Extraterrestrial Intelligence) is a scientific area whose goal is to detect intelligent life outside Earth. One approach, known as radio SETI, uses radio telescopes to listen for narrow-bandwidth radio signals from space. Such signals are not known to occur naturally, so a detection would provide evidence of extraterrestrial technology.

    Radio telescope signals consist primarily of noise (from celestial sources and the receiver’s electronics) and man-made signals such as TV stations, radar, and satellites. Modern radio SETI projects analyze the data digitally. More computing power enables searches to cover greater frequency ranges with more sensitivity. Radio SETI, therefore, has an insatiable appetite for computing power.
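
    As a rough illustration of why computing power buys sensitivity, consider the core operation of a narrowband search: a long Fourier transform. The parameters below are invented for the demonstration, not SETI@home's actual values; the point is that a longer FFT makes narrower frequency bins, so a weak tone that is invisible sample by sample stands out clearly in its bin.

    ```python
    import numpy as np

    # Toy narrowband search (illustrative parameters, not SETI@home's):
    # a tone far below the per-sample noise level becomes an obvious peak
    # once a long FFT concentrates its power into a single narrow bin.
    fs = 2_500_000                       # sample rate, Hz
    n = 2**20                            # FFT length; bin width = fs/n ≈ 2.4 Hz
    t = np.arange(n) / fs

    rng = np.random.default_rng(0)
    noise = rng.normal(size=n) + 1j * rng.normal(size=n)
    tone = 0.1 * np.exp(2j * np.pi * 123_456.7 * t)   # ~23 dB below the noise

    spectrum = np.abs(np.fft.fft(noise + tone)) ** 2
    peak = np.argmax(spectrum)
    print(f"strongest bin at {peak * fs / n:.1f} Hz (tone injected at 123456.7 Hz)")
    ```

    Doubling the FFT length halves the bin width and doubles the contrast of a drift-free tone, which is why every extra volunteer CPU translates directly into fainter detectable signals.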

    Previous radio SETI projects have used special-purpose supercomputers, located at the telescope, to do the bulk of the data analysis. In 1995, David Gedye proposed doing radio SETI using a virtual supercomputer composed of large numbers of Internet-connected computers, and he organized the SETI@home project to explore this idea. SETI@home was originally launched in May 1999.

    SETI@home is not a part of the SETI Institute

    The SETI@home screensaver image

    SETI@home, a BOINC project that originated in the Space Sciences Laboratory at UC Berkeley

    To participate in this project, download and install the BOINC software on which it runs. Then attach to the project. While you are at BOINC, look at some of the other projects which you might find of interest.

    My BOINC

     
  • richardmitnick 9:50 am on September 22, 2019
    Tags: BOINC-Berkeley Open Infrastructure for Network Computing, Max Planck Institute for Gravitational Physics, The radio pulsar J0952-0607

    From Max Planck Institute for Gravitational Physics: “Pulsating gamma rays from neutron star rotating 707 times a second” 

    From Max Planck Institute for Gravitational Physics

    September 19, 2019

    Media contact

    Dr. Benjamin Knispel
    Press Officer, AEI Hannover
    Phone: +49 511 762-19104
    Fax: +49 511 762-17182
    benjamin.knispel@aei.mpg.de

    Science contacts
    Lars Nieder
    Phone: +49 511 762-17491
    Fax: +49 511 762-2784
    lars.nieder@aei.mpg.de

    Prof. Dr. Bruce Allen
    Director
    Phone: +49 511 762-17148
    Fax: +49 511 762-17182
    bruce.allen@aei.mpg.de

    A black widow pulsar and its small stellar companion, viewed within their orbital plane. Powerful radiation and the pulsar’s “wind” – an outflow of high-energy particles — strongly heat the facing side of the star to temperatures twice as hot as the sun’s surface. The pulsar is gradually evaporating its partner, which fills the system with ionized gas and prevents astronomers from detecting the pulsar’s radio beam most of the time. NASA’s Goddard Space Flight Center/Cruz deWilde

    The second-fastest-spinning radio pulsar known is a gamma-ray pulsar, too. Multi-messenger observations take a close look at the system and raise new questions.

    An international research team led by the Max Planck Institute for Gravitational Physics (Albert Einstein Institute; AEI) in Hannover has discovered that the radio pulsar J0952-0607 also emits pulsed gamma radiation. J0952-0607 spins 707 times in one second, making it the second-fastest-rotating neutron star known. By analyzing about 8.5 years’ worth of data from NASA’s Fermi Gamma-ray Space Telescope, LOFAR radio observations from the past two years, observations from two large optical telescopes, and gravitational-wave data from the LIGO detectors, the team used a multi-messenger approach to study the binary system of the pulsar and its lightweight companion in detail.

    Gran Telescopio Canarias at the Roque de los Muchachos Observatory on the island of La Palma, in the Canaries, Spain, sited on a volcanic peak 2,267 metres (7,438 ft) above sea level

    GTC HiPERCAM mounted on the Gran Telescopio Canarias

    ESO/NTT at Cerro La Silla, Chile, at an altitude of 2400 metres

    ESO La Silla NTT ULTRACAM is an ultra-fast camera capable of capturing some of the most rapid astronomical events. It can take up to 500 pictures a second in three different colours simultaneously. It was designed and built by scientists from the Universities of Sheffield and Warwick (United Kingdom), in collaboration with the UK Astronomy Technology Centre in Edinburgh. ULTRACAM employs the latest in charge-coupled device (CCD) detector technology in order to take, store and analyse data at the required sensitivities and speeds. CCD detectors can be found in digital cameras and camcorders, but the devices used in ULTRACAM are special because they are larger, faster and, most importantly, much more sensitive to light than the detectors used in today’s consumer electronics products. Since it was built, it has operated at the William Herschel Telescope, the New Technology Telescope, and the Very Large Telescope. It is now permanently mounted on the Thai National Telescope.

    NASA/Fermi LAT


    NASA/Fermi Gamma Ray Space Telescope

    ASTRON LOFAR European Map


    ASTRON LOFAR Radio Antenna Bank, Netherlands

    Their study, published today in The Astrophysical Journal, shows that extreme pulsar systems are hiding in the Fermi catalogues and motivates further searches. Despite being very extensive, the analysis also raises new unanswered questions about this system.

    MIT/Caltech Advanced LIGO

    Pulsars are the compact remnants of stellar explosions; they have strong magnetic fields and rotate rapidly.

    Women in STEM – Dame Susan Jocelyn Bell Burnell

    Dame Susan Jocelyn Bell Burnell discovered pulsars with radio astronomy. Jocelyn Bell at the Mullard Radio Astronomy Observatory, Cambridge University, photographed for the Daily Herald newspaper in 1968. She was denied the Nobel Prize.

    Dame Susan Jocelyn Bell Burnell at work on the first pulsar chart, pictured at the Four Acre Array in 1967. Image courtesy of Mullard Radio Astronomy Observatory.

    Dame Susan Jocelyn Bell Burnell 2009

    Dame Susan Jocelyn Bell Burnell (1943 – ), still working. From http://www.famousirishscientists.weebly.com

    They emit radiation like a cosmic lighthouse and can be observed as radio pulsars and/or gamma-ray pulsars depending on their orientation towards Earth.

    The fastest pulsar outside globular clusters

    PSR J0952-0607 (the name denotes the position in the sky) was first discovered in 2017 by radio observations of a source identified by the Fermi Gamma-ray Space Telescope as possibly being a pulsar. No pulsations of the gamma rays in data from the Large Area Telescope (LAT) onboard Fermi had been detected. Observations with the radio telescope array LOFAR identified a pulsating radio source and – together with optical telescope observations – allowed astronomers to measure some properties of the pulsar. It orbits the common center of mass every 6.2 hours with a companion star that has only a fiftieth of the mass of our Sun. The pulsar rotates 707 times in a single second and is therefore the fastest-spinning pulsar in our Galaxy outside the dense stellar environments of globular clusters.

    Searching for extremely faint signals

    Using this prior information on the binary pulsar system, Lars Nieder, a PhD student at the AEI Hannover, set out to see if the pulsar also emitted pulsed gamma rays. “This search is extremely challenging because the Fermi gamma-ray telescope only registered the equivalent of about 200 gamma rays from the faint pulsar over the 8.5 years of observations. During this time the pulsar itself rotated 220 billion times. In other words, only once in every billion rotations was a gamma ray observed!” explains Nieder. “For each of these gamma rays, the search must identify exactly when during each of the 1.4 millisecond rotations it was emitted.”

    This requires combing through the data with very fine resolution in order not to miss any possible signals. The computing power required is enormous. The very sensitive search for faint gamma-ray pulsations would have taken 24 years to complete on a single computer core. By using the Atlas computer cluster at the AEI Hannover it finished in just 2 days.
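
    The heart of such a search is folding each photon’s arrival time on a candidate spin model and testing whether the folded phases cluster into a pulse. The sketch below illustrates just that step, with placeholder numbers rather than the actual J0952-0607 ephemeris; the real search must also scan over sky position and orbital parameters, which is what makes it so computationally expensive.

    ```python
    import numpy as np

    def fold(t, f0, f1):
        """Rotational phase in [0, 1) for arrival times t (s), spin f0 (Hz), spindown f1 (Hz/s)."""
        return (f0 * t + 0.5 * f1 * t**2) % 1.0   # integral of the spin frequency

    def h_test(phases, m_max=20):
        """de Jager H-test: do the folded phases cluster into a pulse?"""
        n, z2, best = len(phases), 0.0, 0.0
        for m in range(1, m_max + 1):
            c = np.cos(2 * np.pi * m * phases).sum()
            s = np.sin(2 * np.pi * m * phases).sum()
            z2 += (2.0 / n) * (c * c + s * s)     # Z^2_m statistic, summed over harmonics
            best = max(best, z2 - 4.0 * (m - 1))
        return best

    # Placeholder numbers, not the real ephemeris: ~200 photons over 8.5 years.
    rng = np.random.default_rng(1)
    arrival_times = np.sort(rng.uniform(0.0, 8.5 * 3.156e7, 200))
    print(h_test(fold(arrival_times, 707.31, -4.6e-15)))
    ```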

    MPG Institute for Gravitational Physics Atlas Computing Cluster

    A strange first detection

    “Our search found a signal, but something was wrong! The signal was very faint and not quite where it was supposed to be. The reason: our detection of gamma rays from J0952-0607 had revealed a position error in the initial optical-telescope observations which we used to target our analysis. Our discovery of the gamma-ray pulsations revealed this error,” explains Nieder. “This mistake was corrected in the publication reporting the radio pulsar discovery. A new and extended gamma-ray search made a rather faint – but statistically significant – gamma-ray pulsar discovery at the corrected position.”

    Having discovered and confirmed the existence of pulsed gamma radiation from the pulsar, the team went back to the Fermi data and used the full 8.5 years from August 2008 until January 2017 to determine the physical parameters of the pulsar and its binary system. Since the gamma radiation from J0952-0607 was so faint, they had to enhance their previously developed analysis method to correctly include all unknowns.

    The pulse profile (distribution of gamma-ray photons during one rotation of the pulsar) of J0952-0607 is shown at the top. Below is the corresponding distribution of the individual photons over the ten years of observations. The greyscale shows the probability (photon weights) for individual photons to originate from the pulsar. From mid 2011 on, the photons line up along tracks corresponding to the pulse profile. This shows the detection of gamma-ray pulsations, which is not possible before mid 2011. L. Nieder/Max Planck Institute for Gravitational Physics.

    Another surprise: no gamma-ray pulsations before July 2011

    The derived solution contained another surprise: it was impossible to detect gamma-ray pulsations in the data from before July 2011. Why the pulsar only seems to show pulsations after that date is unknown. Variations in how many gamma rays it emitted might be one reason, but the pulsar is so faint that it was not possible to test this hypothesis with sufficient accuracy. Changes in the pulsar orbit seen in similar systems might also offer an explanation, but there was not even a hint in the data that this was happening.

    Optical observations raise further questions

    The team also used observations with ESO’s New Technology Telescope at La Silla and the Gran Telescopio Canarias on La Palma to examine the pulsar’s companion star. It is most likely tidally locked to the pulsar, like the Moon to the Earth, so that one side always faces the pulsar and is heated by its radiation. As the companion orbits the binary system’s center of mass, its hot “day” side and cooler “night” side are alternately visible from Earth, and the observed brightness and color vary.

    These observations create another riddle. While the radio observations point to a distance of roughly 4,400 light-years to the pulsar, the optical observations imply a distance about three times larger. If the system were relatively close to Earth, it would feature a never-before-seen, extremely compact, high-density companion, while larger distances are compatible with the densities of known similar pulsar companions. An explanation for this discrepancy might be the existence of shock waves in the wind of particles from the pulsar, which could lead to different heating of the companion. More observations with the Fermi LAT should help answer this question.

    Searching for continuous gravitational waves

    Another group of researchers at the AEI Hannover searched for continuous gravitational-wave emission from the pulsar using LIGO data from the first (O1) and second (O2) observing runs. Pulsars can emit gravitational waves when they have tiny hills or bumps. The search did not detect any gravitational waves, meaning that the pulsar’s shape must be very close to a perfect sphere, with the highest bumps less than a fraction of a millimeter.

    Rapidly rotating neutron stars

    Understanding rapidly spinning pulsars is important because they are probes of extreme physics. How fast neutron stars can spin before they break apart from centrifugal forces is unknown and depends on unknown nuclear physics. Millisecond pulsars like J0952-0607 are rotating so rapidly because they have been spun up by accreting matter from their companion. This process is thought to bury the pulsar’s magnetic field. With the long-term gamma-ray observations, the research team showed that J0952-0607 has one of the ten lowest magnetic fields ever measured for a pulsar, consistent with expectations from theory.

    Einstein@Home searches for test cases of extreme physics

    “We will keep studying this system with gamma-ray, radio, and optical observatories since there are still unanswered questions about it. This discovery also shows once more that extreme pulsar systems are hiding in the Fermi LAT catalogue,” says Prof. Bruce Allen, Nieder’s PhD supervisor and Director at the AEI Hannover. “We are also employing our citizen-science distributed computing project Einstein@Home to look for binary gamma-ray pulsar systems in other Fermi LAT sources and are confident that we will make more exciting discoveries in the future.”

    Einstein@home, a BOINC project

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Max Planck Institute for Gravitational Physics (Albert Einstein Institute) is the largest research institute in the world specializing in general relativity and beyond. The institute is located in Potsdam-Golm and in Hannover, where it is closely affiliated with Leibniz Universität Hannover.

     
  • richardmitnick 4:21 pm on June 26, 2019
    Tags: BOINC-Berkeley Open Infrastructure for Network Computing, BOINC@TACC, Volunteer Citizen Science

    From Texas Advanced Computing Center: “Science enthusiasts, researchers, and students benefit from volunteer computing using BOINC@TACC” 

    TACC bloc

    From Texas Advanced Computing Center

    June 24, 2019
    Faith Singer-Villalobos

    You don’t have to be a scientist to contribute to research projects in fields such as biomedicine, physics, astronomy, artificial intelligence, or earth sciences.

    Using specialized, open-source software from the Berkeley Open Infrastructure for Network Computing project (BOINC), hundreds of thousands of home and work computers are being used for volunteer computing on consumer devices and organizational resources. Developed over the past 17 years with funding primarily from the National Science Foundation (NSF), BOINC is now used by 38 projects running on more than half a million computers around the world.

    David Anderson, BOINC’s founder, is a research scientist at the University of California Berkeley Space Sciences Laboratory. His objective in creating BOINC was to build software to handle the details of distributed computing so that scientists wouldn’t have to.

    “I wanted to create a new way of doing scientific computing as an alternative to grids, clusters, and clouds,” Anderson said. “As a software system, BOINC has been very successful. It’s evolved without too many growing pains to handle multi-core CPUs, all kinds of GPUs, virtual machines and containers, and Android mobile devices.”

    The Texas Advanced Computing Center (TACC) started its own project in 2017 — BOINC@TACC — that supports virtualized, parallel, cloud, and GPU-based applications to allow the public to help solve science problems. BOINC@TACC is the first use of volunteer computing by a major high performance computing (HPC) center.

    “BOINC@TACC is an excellent project for making the general public a key contributor in science and technology projects,” said Ritu Arora, a research scientist at TACC and the project lead.

    “We love engaging with people in the community who can become science enthusiasts and connect with TACC and generate awareness of science projects,” Arora said. “And, importantly for students and researchers, there is always an unmet demand for computing cycles. If there is a way for us to connect these two communities, we’re fulfilling a major need.”

    BOINC volunteer Dick Duggan is a retired IT professional who lives in Massachusetts and has been a volunteer computing enthusiast for more than a decade.

    “I’m a physics nerd. Those tend to be my favorite projects,” he said. “I contribute computing cycles to many projects, including the Large Hadron Collider (LHC). LHC is doing state-of-the-art physics — they’re doing physics on the edge of what we know about the universe and are pushing that edge out.”

    Duggan uses his laptop, desktop, tablet, and Raspberry Pi to provide computing cycles to BOINC@TACC. “When my phone is plugged in and charged, it runs BOINC@TACC, too.”

    Joining BOINC@TACC is simple: Sign up as a volunteer, set up your device, and pick your projects.

    Compute cycles on more than 1,300 computing devices have been volunteered to the BOINC@TACC project, and more than 300 devices have processed jobs submitted through the BOINC@TACC infrastructure. The aggregate computing power available through the CPUs on the volunteered devices is about 3.5 teraflops (3.5 trillion floating-point operations per second).

    Why BOINC@TACC?

    It’s no secret that computational resources are in great demand, and that researchers with the most demanding workloads need supercomputing systems. Access to the most powerful supercomputers in the world, like the resources at TACC, is important for the advancement of science in all disciplines. However, with funding limitations, there is always an unmet need for these resources.

    “BOINC@TACC helps fill a gap in what researchers and students need and what the open-science supercomputing centers can currently provide them,” Arora said.

    Researchers from UT Austin, from any of the 14 UT System institutions, and from around the country through XSEDE, the national advanced computing infrastructure in the U.S., are invited to submit science jobs to BOINC@TACC.

    To help researchers with this unmet need, TACC started a collaboration with Anderson at UC Berkeley to see how the center could outsource high-throughput computing jobs to BOINC.

    When a researcher is ready to submit projects through BOINC@TACC, all they need to do is log in to a TACC system and run a program from their account that will register them for BOINC@TACC, according to Arora. Thereafter, the researcher can continue running programs that will help them (1) decide whether BOINC@TACC is the right infrastructure for running their jobs; and (2) submit the qualified high-throughput computing jobs through the command-line interface. The researchers can also submit jobs through the web interface.

    Instead of the job running on Stampede2, for example, it could run on a volunteer’s home or work computer.

    “Our software matches the type of resources for a job and what’s available in the community,” Arora said. “The tightly-coupled, compute-intensive, I/O-intensive, and memory-intensive applications are not appropriate for running on the BOINC@TACC infrastructure. Therefore, such jobs are filtered out and submitted for running on Stampede2 or Lonestar5 instead of BOINC@TACC,” she clarified.

    A significant number of high-throughput computing jobs are also run on TACC systems in addition to the tightly-coupled MPI jobs. These high-throughput computing jobs consist of large sets of loosely-coupled tasks, each of which can be executed independently and in parallel to other tasks. Some of these high-throughput computing jobs have modest memory and input/output needs, and do not have an expectation of a fixed turnaround time. Such jobs qualify to run on the BOINC@TACC infrastructure.
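
    As a sketch of what such a qualification filter might look like, the snippet below mirrors the criteria described above. The field names and thresholds are hypothetical stand-ins, not TACC’s actual software.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Job:
        tightly_coupled: bool     # MPI-style communication between tasks?
        memory_gb: float          # per-task memory footprint
        io_gb: float              # per-task input/output volume
        fixed_turnaround: bool    # does the user need results by a deadline?

    def route(job: Job) -> str:
        """Hypothetical filter mirroring the criteria described in the article."""
        if (job.tightly_coupled or job.fixed_turnaround
                or job.memory_gb > 4 or job.io_gb > 1):
            return "Stampede2/Lonestar5"          # keep on the supercomputers
        return "BOINC@TACC"                       # loosely coupled, modest needs

    print(route(Job(False, 2.0, 0.5, False)))     # -> BOINC@TACC
    ```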

    “Volunteer computing is well-suited to this kind of workload,” Anderson said. “The idea of BOINC@TACC is to offload these jobs to a BOINC server, freeing up the supercomputers for the tightly-coupled parallel jobs that need them.”

    To start, the TACC team deployed an instance of the BOINC server on a cloud-computing platform. Next, the team developed the software for integrating BOINC with supercomputing and cloud computing platforms. During the process, the project team developed and released innovative software components that can be used by the community to support projects from a variety of domains. For example, a cloud-based shared filesystem and a framework for creating Docker images that was developed in this project can be useful for a variety of science gateway projects.

    As soon as the project became operational, volunteers enthusiastically started signing up. The number of researchers using BOINC@TACC is gradually increasing.

    Carlos Redondo, a senior in Aerospace Engineering at UT Austin, is both a developer on the BOINC@TACC project and a researcher who uses the infrastructure.

    “The incentive for researchers to use volunteer computing is that they save on their project allocation,” Redondo said. “But researchers need to be mindful that the number of cores on volunteer systems is going to be small, and they don’t have the special optimization that servers at TACC have.”

    As a student researcher, Redondo has submitted multiple computational fluid dynamics jobs through BOINC@TACC. In this field, computers are used to simulate the flow of fluids and the interaction of the fluid (liquids and gases) with surfaces. Supercomputers can achieve better solutions and are often required to solve the largest and most complex problems.

    “The results in terms of the numbers produced from the volunteer devices were exactly those expected, and also identical to those running on Stampede2,” he said.

    Since jobs run whenever volunteers’ computers are available, researchers’ turnaround time is longer than that of Stampede2, according to Redondo. “Importantly, if a volunteer decides to stop a job, BOINC@TACC will automatically safeguard the progress, protect the data, and save the results.”

    TACC’s Technical Contribution to BOINC

    BOINC software works out of the box. What it doesn’t include is support for directly accepting jobs from supercomputers.

    “We’re integrating BOINC software with the software that is running on supercomputing devices so these two pieces can talk to each other when we have to route qualified high-throughput computing jobs from supercomputers to volunteer devices. The other piece TACC has contributed is extending BOINC to the cloud computing platforms,” Arora said.

    Unlike other BOINC projects, the BOINC@TACC infrastructure can execute jobs on Virtual Machines (VMs) running on cloud computing systems. These systems are especially useful for GPU jobs and for assuring a certain quality of service to the researchers. “If the pool of the volunteered resources goes down, we’re able to route the jobs to the cloud computing systems and meet the expectations of the researchers. This is another unique contribution of the project,” Arora said.

    BOINC@TACC is also pioneering the use of Docker to package custom-written science applications so that they can run on volunteered resources.

    Furthermore, the project team is planning to collaborate with companies that may have corporate social responsibility programs for soliciting compute-cycles on their office computers or cloud computing systems.

    “We have the capability to harness office desktops and laptops, and also the VMs in the cloud. We’ve demonstrated that we’re capable of routing jobs from Stampede2 to TACC’s cloud computing systems, Chameleon and Jetstream, through the BOINC@TACC infrastructure,” Arora said.

    Anderson concluded, “We hope that BOINC@TACC will provide a success story that motivates other large scientific computing centers to use the same approach. This will benefit thousands of computational scientists and, we hope, will greatly increase the volunteer population.”

    Dick Duggan expressed a sentiment common among BOINC volunteers: people do it for the love of science. “This is the least I can do. I may not be a scientist but I’m accomplishing something…and it’s fun to do,” Duggan said.

    Learn More: The software infrastructure that TACC developed for routing jobs from TACC systems to the volunteer devices and the cloud computing systems is described in this paper.

    BOINC@TACC is funded through NSF award #1664022. The project collaborators are grateful to TACC, XSEDE, and the Science Gateway Community Institute (SGCI) for providing the resources required for implementing this project.

    Computing power
    24-hour average: 17.707 PetaFLOPS.
    Active: 139,613 volunteers, 590,666 computers.
    BOINC is not eligible for the TOP500 list because it is distributed, but it is currently more powerful than No. 9 Titan (17.590 PetaFLOPS) and No. 10 Sequoia (17.173 PetaFLOPS).

    My BOINC

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    The Texas Advanced Computing Center (TACC) designs and operates some of the world’s most powerful computing resources. The center’s mission is to enable discoveries that advance science and society through the application of advanced computing technologies.

    TACC Maverick HP NVIDIA supercomputer

    TACC Lonestar Cray XC40 supercomputer

    Dell PowerEdge U Texas Austin Stampede supercomputer, Texas Advanced Computing Center, 9.6 PF

    TACC HPE Apollo 8000 Hikari supercomputer


    TACC DELL EMC Stampede2 supercomputer


     
  • richardmitnick 10:29 am on May 25, 2019
    Tags: BOINC-Berkeley Open Infrastructure for Network Computing

    From SETI@home via The Ringer: “E.T.’s Home Phone” 

    SETI@home
    From SETI@home

    via

    The Ringer

    May 24, 2019
    Ben Lindbergh

    UC Berkeley’s SETI@home, one of the most significant citizen-science projects of the late 20th century, brought the search for intelligent life to PCs. It hasn’t yet found what it set out to, but there’s still hope.

    Getty Images/Ringer illustration

    Around the time the movie Contact came out in 1997, Kevin D., a governmental IT support and procurement employee in Toronto, saw a notice on a technical news site about a piece of software that was being developed by researchers at the University of California, Berkeley. The scientists were interested in SETI, the Search for Extraterrestrial Intelligence, and courtesy of Contact, so was Kevin D. The moment he heard about the program that would eventually come to be called SETI@home wasn’t as dramatic as Jodie Foster’s portrayal of Dr. Eleanor Arroway discovering a message sent across the universe, but it would make a major impact on the next two decades of his life. It also signaled the advent of a productive and unprecedented citizen-science project that continues today, 20 years after it launched in May 1999.

    Kevin D. aspired to be a scientist as soon as he could read, but financial difficulties forced him to drop out of university, which put an end to that dream. “I could have probably gone with student loans and a few years of eating ramen, but I wasn’t in the right frame of mind anymore,” he says. “SETI@home and other distributed-computing projects have filled that need nicely, allowing me to contribute to science on a scale that would have been unimaginable just a few decades ago.”

    SETI@home was the brainchild of a UC Berkeley grad student named David Gedye, who came up with the concept of using personal computers for scientific purposes in 1995. “That was the point where a lot of home computers were on the internet,” says Berkeley research scientist David Anderson, Gedye’s grad advisor and the cofounder of SETI@home. “Also the point where personal computers were becoming fast enough that they could potentially do number-crunching for scientific purposes.”

    Gedye thought using computers to comb through data recorded by radio telescopes in search of signals sent by intelligent extraterrestrial life would both appeal to the public and demonstrate the potential for public participation to boost the scientific community’s processing power. He and Anderson joined forces with multiple partners in the astronomy and SETI fields, including Eric Korpela, the current director of SETI@home, and Dan Werthimer, the Berkeley SETI Research Center’s chief scientist. Werthimer was a SETI veteran who had been hunting for alien life since the 1970s and oversaw the SERENDIP program, which piggybacks on observations that radio astronomers are already conducting and scours the results for evidence that E.T. is phoning our home. SERENDIP supplied the incipient SETI@home with data from the venerable Arecibo Observatory in Puerto Rico, which until 2016 featured the world’s largest single-aperture radio telescope.

    NAIC Arecibo Observatory operated by University of Central Florida, Yang Enterprises and UMET, Altitude 497 m (1,631 ft).

    Fueled by $50,000 from the Planetary Society and $10,000 from a company backed by Microsoft cofounder and SETI enthusiast Paul Allen, Korpela and Anderson started designing software that would split that data into chunks that could be distributed to personal computers, processed, and sent back to Berkeley for further analysis. By the spring of ’99, SETI@home was ready to launch, despite the difficulty of making it compatible with all kinds of computers and dealing with pre-broadband internet. But its creators weren’t prepared for the outpouring of public interest that propagated through word of mouth and posts on forums and sites such as Slashdot.

    “The biggest issue was not the people on dial-up connections,” Korpela recalls. “It was just the sheer number of people that were interested in SETI@home. When we started SETI@home, we planned or thought that maybe we could get 10,000 people to be interested in doing this. The day we turned it on, we had close to half a million people show up.”

    In 1999, the public portion of the internet was new enough that going viral was a nearly unknown phenomenon. But Korpela says that within a month or two, SETI@home had attracted a couple million active users, which overwhelmed the modest equipment underpinning the project, causing frequent crashes. “We were planning on running our servers from a small desktop machine,” Korpela says. “That didn’t really work.” Sun Microsystems stepped in to donate more powerful hardware, and SETI@home users helped the perpetually underfunded program defray the cost of bandwidth, which was expensive at the time. In 1999, Korpela says, Berkeley was paying $600 a month for each megabit per second, and SETI@home was guzzling about 25 (roughly $15,000 a month).

    On the plus side, the uptick in processing power was immediately apparent. “The main benefit of the SETI@home–type processing is that it gives us about a factor-of-10 increase in sensitivity,” Korpela says. “So we can detect a signal that’s 10 times smaller than we could just using the instrumentation that’s available at the radio telescope.”

    As SETI@home spread, a few of its more zealous acolytes ran afoul of the workplaces where they installed it, which the program’s creators advised users not to do without permission. In 2001, 16 employees of the Tennessee Valley Authority were reprimanded for installing the software on their office computers. (I know the feeling; my mom wasn’t pleased about the electricity costs she claimed I was incurring when she spotted the screensaver on my own early-2000s PC.) In 2002, computer technician David McOwen faced criminal charges and was ultimately put on probation when he installed SETI@home at DeKalb Technical College in Atlanta. And in 2009, network systems administrator Brad Niesluchowski lost his job after installing SETI@home on thousands of computers across an Arizona school district. (Niesluchowski, or “NEZ,” still ranks 17th on the all-time SETI@home leaderboard for data processed.) Korpela has made several SETI@home sightings in the wild, including on point-of-sale cash registers and, once, on a public computer at an Air Force base (which wasn’t Area 51).

    Over the decades, SETI@home’s user base has dwindled to between 100,000 and 150,000 people, operating an average of two computers and six to eight CPUs per person. But the remaining participants’ computers are hundreds or thousands of times more powerful than they were in 1999. “When we started, we designed our work units—our data chunks going out to people—to be something that a typical PC would be able to finish computing in about a week, and a current GPU will do those in a couple of minutes,” Korpela says. SETI@home is now available via an Android app that’s used by about 12,000 participants, and even smartphones smoke turn-of-the-century desktop computers in processing speed.

    The SETI@home software has evolved along with the hardware that hosts it. In the early years, the program ran as a screensaver, which served multiple purposes. First, screensavers were popular, so the software filled a need. Second, the graphical representations of the program’s activities fed users’ scientific curiosity and reassured them that the program was working as intended. And third, it functioned as eye candy that entertained users and caught the attention of anyone within visual range. Now that screensavers have fallen out of favor and more people prefer to turn off their monitors or computers when they’re not in use to save power, Anderson says, “We’ve kind of moved away from the screensaver model to the model of just running invisibly in the background while you’re at your computer.”

    A shortcoming of the original SETI@home software led to a much more significant change—and, indirectly, the greatest legacy of SETI@home, at least so far. In the program’s initial form, the signal-processing logic and the code that handled displaying the screensaver and receiving and transmitting data were a package deal. “Each time we wanted to change the algorithms, to change the scientific part, we had to have all of our users download and install a new program,” Anderson says. “And then we would lose some fraction of our users each time we did that.”

    The solution was separating the science part from the distributed-computing part by building a platform that could update the algorithm without requiring a reinstall. Better yet, that platform could act as a conduit for any number of alternative distributed-computing efforts. In 2002, Anderson built and released that system, which he called Berkeley Open Infrastructure for Network Computing, or BOINC.

    SETI@home, which migrated to BOINC in 2005, has thus far failed in its primary purpose: to detect intelligent alien life. But it’s succeeded in its secondary goal of demonstrating the viability of distributed computing. Other researchers have emulated that model, and BOINC, which is funded primarily by the National Science Foundation, is now home to 38 active projects that are doing useful science, including investigating diseases and identifying drugs that could combat cancer, modeling climate change, and searching for phenomena such as pulsars and gravitational waves. Research conducted by BOINC-based projects has generated 150 scientific papers (and counting), and the network’s collective computing power—about 27 petaflops—makes it more powerful than all but four of the world’s individual supercomputers. Anderson, who believes volunteer computing is still underutilized by the scientific community, says it’s especially “well suited to the general area of physical simulations where you have programs that simulate physical reality, which scales anywhere from the atomic level up to the entire universe.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The science of SETI@home
    SETI (Search for Extraterrestrial Intelligence) is a scientific area whose goal is to detect intelligent life outside Earth. One approach, known as radio SETI, uses radio telescopes to listen for narrow-bandwidth radio signals from space. Such signals are not known to occur naturally, so a detection would provide evidence of extraterrestrial technology.

    Radio telescope signals consist primarily of noise (from celestial sources and the receiver’s electronics) and man-made signals such as TV stations, radar, and satellites. Modern radio SETI projects analyze the data digitally. More computing power enables searches to cover greater frequency ranges with more sensitivity. Radio SETI, therefore, has an insatiable appetite for computing power.

    Previous radio SETI projects have used special-purpose supercomputers, located at the telescope, to do the bulk of the data analysis. In 1995, David Gedye proposed doing radio SETI using a virtual supercomputer composed of large numbers of Internet-connected computers, and he organized the SETI@home project to explore this idea. SETI@home was originally launched in May 1999.

    SETI@home is not a part of the SETI Institute

    The SETI@home screensaver image

    SETI@home, a BOINC project that originated in the Space Sciences Laboratory at UC Berkeley

    To participate in this project, download and install the BOINC software on which it runs. Then attach to the project. While you are at BOINC, look at some of the other projects which you might find of interest.

    My BOINC

     
  • richardmitnick 9:17 am on November 1, 2018
    Tags: BOINC-Berkeley Open Infrastructure for Network Computing

    From Discover Magazine: “Meet the Biochemist Engineering Proteins From Scratch” 

    DiscoverMag

    From Discover Magazine

    October 30, 2018
    Jonathon Keats

    David Baker. Brian Dalbalcon/UW Medicine

    U Washington Dr. David Baker

    In a sleek biochemistry laboratory at the University of Washington, postdoctoral fellow Yang Hsia is watching yellowish goo — the liquefied remains of E. coli — ooze through what looks like a gob of white marshmallow. “This isn’t super exciting,” he says.

    While growing proteins in bacteria and then purifying them, using blobby white resin as a filter, doesn’t make for riveting viewing, the end product is extraordinary. Accumulating in Hsia’s resin is a totally artificial protein, unlike anything seen in nature, that might just be the ideal chassis for the first universal flu vaccine.

    David Baker, Hsia’s adviser, calls this designer protein a “Death Star.” Imaged on his computer, its structure shows some resemblance to the notorious Star Wars superweapon. Though microscopic, by protein standards it’s enormous: a sphere made out of many interlocking pieces.

    The Death Star artificial protein. Institute for Protein Design

    “We’ve figured out a way to put these building blocks together at the right angles to form these very complex nanostructures,” Baker explains. He plans to stud the exterior with proteins from a whole suite of flu strains so that the immune system will learn to recognize them and be prepared to fend off future invaders. A single Death Star will carry 20 different strains of the influenza virus.

    Baker hopes this collection will cover the entire range of possible influenza mutation combinations. This all-in-one preview of present and future flu strains could replace annual shots: Get the Death Star vaccination, and you’ll already have the requisite antibodies in your bloodstream.

    As Baker bets on designer proteins to defeat influenza, others are betting on David Baker.

    After revolutionizing the study of proteins — molecules that perform crucial tasks in every cell of every natural organism — Baker is now engineering them from scratch to improve on nature. In late 2017, the Open Philanthropy Project gave his University of Washington Institute for Protein Design more than $10 million to develop the Death Star and support Rosetta, the software platform he conceived in the 1990s to discover how proteins are assembled. Rosetta has allowed Baker’s lab not only to advance basic science and pioneer new kinds of vaccines, but also to create drugs for genetic disorders, biosensors to detect toxins and enzymes to convert waste into biofuels.

    His team currently numbers about 80 grad students and postdocs, and Baker is in constant contact with all of them. He challenges their assumptions and tweaks their experiments while maintaining an egalitarian environment in which ideas may come from anyone. He calls his operation a “communal brain.” Over the past quarter-century, this brain has generated nearly 450 scientific papers.

    “David is literally creating a new field of chemistry right in front of our eyes,” says Raymond Deshaies, senior vice president for discovery research at the biotech company Amgen and former professor of biology at Caltech. “He’s had one first after another.”

    Nature’s Origami

    When Baker was studying philosophy at Harvard University, he took a biology class that taught him about the so-called “protein folding problem.” The year was 1983, and scientists were still trying to make sense of an experiment, carried out in the early ’60s by biochemist Christian Anfinsen, that revealed the fundamental building blocks of all life on Earth were more complex than anyone imagined.

    The experiment was relatively straightforward. Anfinsen mixed a sample of the protein ribonuclease — which breaks down RNA — with a denaturant, a chemical that deactivated it. Then he allowed the denaturant to evaporate. The protein started to function again as if nothing ever happened.

    What made this simple experiment so striking was the fact that the amino acids in protein molecules are folded in three-dimensional forms that make origami look like child’s play. When the denaturant unfolded Anfinsen’s ribonuclease, there were myriad ways it could refold, resulting in structures as different as an origami crane and a paper airplane. Much as the folds determine whether a piece of paper can fly across a room, only one fold pattern would result in functioning ribonuclease. So the puzzle was this: How do proteins “know” how to refold properly?

    “Anfinsen showed that the information for both structure and activity resided in the sequence of amino acids,” says University of California, Los Angeles, biochemist David Eisenberg, who has been researching protein folding since the 1960s. “There was a hope that it would be possible to use sequence information to get three-dimensional structural information. Well, that proved much more difficult than anticipated.”

    Protein molecules play critical roles in every aspect of life. The way each protein folds determines its function, and the ways to fold are virtually limitless, as shown in this small selection of proteins visualized through the software platform Rosetta, born in Baker’s lab. Institute for Protein Design.

    Baker was interested enough in protein folding and other unsolved mysteries of biology to switch majors and apply to grad school. “I’d never worked in a lab before,” he recalls. He had only a vague notion of what biologists did on a daily basis, but he also sensed that the big questions in science, unlike philosophy, could actually be answered.

    Grad school plunged Baker into the tediousness and frustrations of benchwork, while also nurturing some of the qualities that would later distinguish him. He pursued his Ph.D. under Randy Schekman, who was studying how molecules move within cells, at the University of California, Berkeley. To aid in this research, students were assigned the task of dismantling living cells to observe their internal molecular traffic. Nearly half a dozen of them, frustrated by the assignment’s difficulty, had given up by the time Baker got the job.

    Baker decided to follow his instincts even though it meant going against Schekman’s instructions. Instead of attempting to keep the processes within a cell still functioning as he dissected it under his microscope, Baker concentrated on preserving cell structure. If the cell were a wristwatch, his approach would be equivalent to focusing on the relationship between gears, rather than trying to keep it ticking, while taking it apart.

    “He was completely obsessed,” recalls Deshaies, who was his labmate at the time (and one of the students who’d surrendered). Nobody could stop Baker, or dissuade him. He worked for months until he proved his approach was correct: Cell structure drove function, so maintaining its anatomy preserved the internal transportation network. Deshaies believes Baker’s methodological breakthrough was “at the core of Randy’s Nobel Prize,” awarded in 2013 for working out one of the fundamentals of cellular machinery.

    But Baker didn’t dwell on his achievement, or cell biology for that matter. By 1989, Ph.D. in hand, he’d headed across the Bay to the University of California, San Francisco, where he switched his focus to structural biology and biochemistry. There he built computer models to study the physical properties of the proteins he worked with at the bench. Anfinsen’s puzzle remained unsolved, and when Baker got his first faculty appointment at the University of Washington, he took up the protein-folding problem full time.

    From Baker’s perspective, this progression was perfectly natural: “I was getting to more and more fundamental problems.” Deshaies believes Baker’s tortuous path, from cells to atoms and from test tubes to computers, has been a factor in his success. “He just has greater breadth than most people. And you couldn’t do what he’s done without being somewhat of a polymath.”

    Illustration above: National Science Foundation. Illustrations below: Jay Smith

    Rosetta Milestone

    Every summer for more than a decade, scores of protein-folding experts convene at a resort in Washington’s Cascade Mountains for four days of hiking and shop talk. The only subject on the agenda: how to advance the software platform known as Rosetta.

    David Baker’s Rosetta@home, a project running on BOINC software from UC Berkeley


    Rosetta@home BOINC project



    They call it Rosettacon.

    Rosetta has been the single most important tool in the quest to understand how proteins fold, and to design new proteins based on that knowledge. It is the link between Anfinsen’s ribonuclease experiment and Baker’s Death Star vaccine.

    When Baker arrived at the University of Washington in 1993, researchers knew that a protein’s function was determined by its structure, which was determined by the sequence of its amino acids. Just 20 different amino acids were known to provide all the raw ingredients. (Their particular order — specified by DNA — makes one protein fold into, say, a muscle fiber and another fold into a hormone.) Advances in X-ray crystallography, a technique for imaging molecular structure, had provided images of many proteins in all their folded splendor. Sequencing techniques had also improved, benefitting from the Human Genome Project as well as the exponential increase in raw computing power.

    “There’s a right time for things,” Baker says in retrospect. “To some extent, it’s just luck and historical circumstance. This was definitely the right time for this field.”

    Which is not to say that modeling proteins on a computer was a simple matter of plugging in the data. Proteins fold to their lowest free energy state: All of their amino acids must align in equilibrium. The trouble is that the equilibrium state is just one of hundreds of thousands of options — or millions, if the amino acid sequence is long. That’s far too many possibilities to test one at a time. Nature must have another way of operating, given that folding is almost instantaneous.

    Baker’s initial approach was to study what nature was doing. He broke apart proteins to see how individual pieces behaved, and he found that each fragment was fluctuating among many possible structures. “And then folding would occur when they all happened to be in the right geometry at the same time,” he says. Baker designed Rosetta to simulate this dance for any amino acid sequence.
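
    A toy version of that strategy: propose a fragment substitution at a random position and accept it with the standard Metropolis rule, so the chain anneals toward low energy. The one-dimensional “conformation” and the energy function below are deliberately trivial stand-ins; Rosetta’s real score function models sterics, hydrogen bonding, solvation, and much more.

    ```python
    import math, random

    fragments = [-1.0, -0.5, 0.0, 0.5, 1.0]   # stand-in "fragment library"
    L = 30                                    # residues in a toy chain

    def energy(conf):
        # Trivial stand-in energy: favors smooth, compact chains. A real
        # force field scores physics this toy ignores entirely.
        smooth = sum((a - b) ** 2 for a, b in zip(conf, conf[1:]))
        compact = sum(abs(x) for x in conf)
        return smooth + compact

    random.seed(0)
    conf = [random.choice(fragments) for _ in range(L)]
    e, T = energy(conf), 2.0
    for step in range(20_000):
        i = random.randrange(L)
        trial = conf[:i] + [random.choice(fragments)] + conf[i + 1:]
        dE = energy(trial) - e
        if dE < 0 or random.random() < math.exp(-dE / T):   # Metropolis rule
            conf, e = trial, e + dE
        T = max(0.05, T * 0.9997)                           # slow annealing
    print(f"toy energy after annealing: {e:.2f}")
    ```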

    Baker wasn’t alone in trying to predict how proteins fold. In 1994, the protein research community organized a biennial competition called CASP (Critical Assessment of Protein Structure Prediction). Competitors were given the amino acid sequences of proteins and challenged to anticipate how they would fold.

    The first two contests were a flop. Structures that competitors number-crunched looked nothing like folded proteins, let alone the specific proteins they were meant to predict. Then everything changed in 1998.

    Rosetta’s impressive computational power allows researchers to predict how proteins — long, complex chains of amino acids — will fold; the platform also helps them reverse engineer synthetic proteins to perform specific tasks in medicine and other fields. Brian Dalbalcon/UW Medicine.

    Function Follows Form

    That summer, Baker’s team received 20 sequences from CASP, a considerable number of proteins to model. But Baker was optimistic: Rosetta would transform protein-folding prediction from a parlor game into legitimate science.

    In addition to incorporating fresh insights from the bench, team members — using a janky collection of computers made of spare parts — found a way to run rough simulations tens of thousands of times to determine which fold combinations were most likely.

    They successfully predicted structures for 12 out of the 20 proteins. The predictions were the best yet, but still approximations of actual proteins. In essence, the picture was correct, but blurry.

    Improvements followed rapidly, with increased computing power contributing to higher-resolution models, as well as improved ability to predict the folding of longer amino acid chains. One major leap was the 2005 launch of Rosetta@Home, a screensaver that runs Rosetta on hundreds of thousands of networked personal computers whenever they’re not being used by their owners.

    Yet the most significant source of progress has been RosettaCommons, the community that has formed around Rosetta. Originating in Baker’s laboratory and growing with the ever-increasing number of University of Washington graduates — as well as their students and colleagues — it is Baker’s communal brain writ large.

    Dozens of labs continue to refine the software, adding insights from genetics and methods from machine learning. New ideas and applications are constantly emerging.

    Protein (in green) enveloping fentanyl molecule. Bick et al. eLife 2017.

    The communal brain has answered Anfinsen’s big question — a protein’s specific amino acid alignment creates its unique folding structure — and is now posing even bigger ones.

    “I think the protein-folding problem is effectively solved,” Baker says. “We can’t necessarily predict every protein structure accurately, but we understand the principles.

    “There are so many things that proteins do in nature: light harvesting, energy storage, motion, computation,” he adds. “Proteins that just evolved by pure, blind chance can do all these amazing things. What happens if you actually design proteins intelligently?”

    De Novo Design

    Matthew Bick is trying to coax a protein into giving up its sugar habit for a full-blown fentanyl addiction. His computer screen shows a colorful image of ribbons and swirls representing the protein’s molecular structure. A sort of Technicolor Tinkertoy floats near the center, representing the opioid. “You see how it has really good packing?” he asks me, tracing the ribbons with his finger. “The protein kind of envelops the whole fentanyl molecule like a hot dog bun.”

    A postdoctoral fellow in Baker’s lab, Bick engineers protein biosensors using Rosetta. The project originated with the U.S. Department of Defense. “Back in 2002, Chechen rebels took a bunch of people hostage, and there was a standoff with the Russian government,” he says. The Russians released a gas, widely believed to contain a fentanyl derivative, that killed more than a hundred people. Since then, the Defense Department has been interested in simple ways to detect fentanyl in the environment in case it’s used for chemical warfare in the future.

    Proteins are ideal molecular sensors. In the natural world, they’ve evolved to bind to specific molecules like a lock and key. The body uses this system to identify substances in its environment. Scent is one example; specific volatiles from nutrients and toxins fit into dedicated proteins lining the nose, the first step in alerting the brain to their presence. With protein design, the lock can be engineered to order.

    For the fentanyl project, Bick instructed Rosetta to modify a protein with a natural affinity for the sugar xylotetraose. The software generated hundreds of thousands of designs, each representing a modification of the amino acid sequence predicted to envelop fentanyl instead of sugar molecules. An algorithm then selected the best several hundred options, which Bick evaluated by eye, eventually choosing 62 promising candidates. The protein on Bick’s screen was one of his favorites.
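
    In pipeline terms, this step is generate-then-rank: propose a huge number of sequence variants, score each one, and keep a short list for human inspection. A minimal sketch of that shape, with an invented random placeholder standing in for Rosetta's physics-based scoring and an invented starting sequence:

```python
import heapq
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard residues

def mutate(sequence, n_mutations=3):
    """Propose one design: a few random substitutions in the start protein."""
    seq = list(sequence)
    for pos in random.sample(range(len(seq)), n_mutations):
        seq[pos] = random.choice(AMINO_ACIDS)
    return "".join(seq)

def predicted_binding_score(sequence):
    # Placeholder: the real project uses expensive physics-based prediction
    # of how well the redesigned pocket envelops the target ligand.
    return random.random()

start_protein = "".join(random.choice(AMINO_ACIDS) for _ in range(200))

# Generate a large pool of candidates, score each, keep the best few hundred;
# a human expert then inspects the short list by eye.
candidates = (mutate(start_protein) for _ in range(100_000))
shortlist = heapq.nlargest(300, candidates, key=predicted_binding_score)
print(f"kept {len(shortlist)} of 100,000 designs for inspection")
```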

    “After this, we do the arduous work of testing designs in the lab,” Bick says.

    Cassie Bryan, a senior fellow at Baker’s Institute for Protein Design at the University of Washington, checks on a tube of synthetic proteins. The proteins, not seen in nature, are in the process of thawing and being prepped to test how they perform. Brian Dalbalcon/UW Medicine.

    With another image, he reveals his results. All 62 contenders have been grown in yeast cells infused with synthetic genes that direct the yeasts’ own machinery to assemble the foreign proteins from amino acids. The transgenic yeast cells have been exposed to fentanyl molecules tagged with a fluorescing chemical. By measuring the fluorescence — essentially shining ultraviolet light on the yeast cells to see how many glow with fentanyl — Bick can determine which candidates bind to the opioid with the greatest strength and consistency.
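
    Analytically, that assay boils down to ranking candidates by average fluorescence while checking that replicate wells agree. A toy sketch with invented candidate names and readings:

```python
import statistics

# Hypothetical post-processing of the assay: mean fluorescence per candidate
# (higher glow = more bound fentanyl) across replicate wells.
readings = {
    "design_07": [812, 790, 845],
    "design_23": [430, 455, 442],
    "design_41": [1210, 1188, 1195],
}

ranked = sorted(readings.items(),
                key=lambda kv: statistics.mean(kv[1]),
                reverse=True)

for name, wells in ranked:
    mean = statistics.mean(wells)
    spread = statistics.stdev(wells)   # replicate consistency check
    print(f"{name}: mean fluorescence {mean:.0f} +/- {spread:.0f}")
```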

    Baker’s lab has already leveraged this research to make a practical environmental sensor. Modified to glow when fentanyl binds to the receptor site, Bick’s customized protein can now be grown in a common plant called thale cress. This transgenic weed can cover terrain where chemical weapons might get deployed, and then glow if the dangerous substances are present, providing an early warning system for soldiers and health workers.

    The concept can also be applied to other biohazards. For instance, Bick is now developing a sensor for aflatoxin, a toxin produced by fungi that grow on grain and a cause of liver cancer in people who consume it. He wants the sensor to be expressed in the grain itself, letting people know when their food is unsafe.

    But he’s going about things differently this time around. Instead of modifying an existing protein, he’s starting from scratch. “That way, we can control a lot of things better than in natural proteins,” he explains. His de novo protein can be much simpler, and behave more predictably, because it doesn’t carry many millions of years of evolutionary baggage.

    For Baker, de novo design represents the summit of his quarter-century quest. The latest advances in Rosetta allow him to work backward from a desired function to an appropriate structure to a suitable amino acid sequence. And he can use any amino acids at all — thousands of options, some already synthesized and others waiting to be designed — not only the 20 that are standard in nature for building proteins.

    Without the freedom of de novo protein design, Baker’s Death Star would never have gotten off the ground. His group is now also designing artificial viruses. Like natural viruses, these protein shells can inject genetic material into cells. But instead of infecting you with a pathogen, their imported DNA would patch dangerous inherited mutations. Other projects aim to take on diseases ranging from malaria to Alzheimer’s.

    In Baker’s presence, protein design no longer seems so extraordinary. Coming out of a brainstorming session — his third or fourth of the day — he pulls me aside and makes the case that his calling is essentially the destiny of our species.

    “All the proteins in the world today are the product of natural selection,” he tells me. “But the current world is quite a bit different than the world in which we evolved. We live much longer, so we have a whole new class of diseases. We put all these nasty chemicals into the environment. We have new needs for capturing energy.

    “Novel proteins could solve a lot of the problems that we face today,” he says, already moving to his next meeting. “The goal of protein design is to bring those into existence.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 4:51 pm on September 28, 2018 Permalink | Reply
    Tags: BOINC-Berkeley Open Infrastructure for Network Computing

    From World Community Grid (WCG): “A Graduation, a Paper, and a Continuing Search for the ‘Help Stop TB’ Researchers” 

    From World Community Grid (WCG)

    By: Dr. Anna Croft
    University of Nottingham, UK
    28 Sep 2018

    Summary
    In this update, principal investigator Dr. Anna Croft shares two recent milestones for the Help Stop TB research team, and discusses their continuing search for additional researchers.

    The Help Stop TB (HSTB) project uses the massive computing power of World Community Grid to examine part of the coating of Mycobacterium tuberculosis, the bacterium that causes tuberculosis. We hope that by learning more about the mycolic acids that are part of this coating, we can contribute to the search for better treatments for tuberculosis, which is one of the world’s deadliest diseases.

    Graduation Ceremony for Dr. Athina Meletiou

    In recent news for the HSTB project, Dr. Athina Meletiou has now officially graduated. It was a lovely day, finished off with some Pimm’s and lemonade in the British tradition.

    Athina (center) with supervisors Christof (left) and Anna (right)

    Athina and her scientific “body-guard,” Christof

    Search for New Team Members Continues

    We are still looking for suitably qualified chemists, biochemists, mathematicians, engineers, and computer scientists to join our team, especially to develop new analytical approaches (including machine-learning approaches) for understanding the substantial data generated by World Community Grid volunteers.

    We will be talking to students from our BBSRC-funded doctoral training scheme in the next few days and encouraging them to join the project. Click here for more details.

    Paper Published

    Dr. Wilma Groenwald, one of the founding researchers for the HSTB project, recently published a paper describing some of the precursor work to the project. The paper, which discusses the folding behavior of mycolic acids, is now freely available on ChemRxiv: Revealing Solvent-Dependent Folding Behavior of Mycolic Acids from Mycobacterium Tuberculosis by Advanced Simulation Analysis.

    We hope to have Athina’s first papers with World Community Grid data available later in the year, and will keep you updated.

    Thank you to all volunteers for your support.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Ways to access the blog:
    https://sciencesprings.wordpress.com
    http://facebook.com/sciencesprings
    World Community Grid (WCG) brings people together from across the globe to create the largest non-profit computing grid benefiting humanity. It does this by pooling surplus computer processing power. We believe that innovation combined with visionary scientific research and large-scale volunteerism can help make the planet smarter. Our success depends on like-minded individuals – like you.
    WCG projects run on BOINC software from UC Berkeley.

    BOINC, properly the Berkeley Open Infrastructure for Network Computing, is a leader in the fields of distributed computing, grid computing, and citizen cyberscience.

    CAN ONE PERSON MAKE A DIFFERENCE? YOU BET!!
    “Download and install secure, free software that captures your computer’s spare power when it is on, but idle. You will then be a World Community Grid volunteer. It’s that simple!” You can download the software at either WCG or BOINC.

    Please visit the project pages:
    Microbiome Immunity Project
    FightAIDS@home Phase II
    OpenZika
    Help Stop TB
    Outsmart Ebola Together
    Mapping Cancer Markers
    Uncovering Genome Mysteries
    Say No to Schistosoma
    GO Fight Against Malaria
    Drug Search for Leishmaniasis
    Computing for Clean Water
    The Clean Energy Project
    Discovering Dengue Drugs – Together
    Help Cure Muscular Dystrophy
    Help Fight Childhood Cancer
    Help Conquer Cancer
    Human Proteome Folding
    FightAIDS@Home

    World Community Grid is a social initiative of IBM Corporation.

     
  • richardmitnick 1:49 pm on August 8, 2018 Permalink | Reply
    Tags: BOINC-Berkeley Open Infrastructure for Network Computing, Mapping Cancer Markers project takes on sarcoma

    From World Community Grid (WCG): “Sarcoma Dataset Coming Soon to Mapping Cancer Markers Project” 

    From World Community Grid (WCG)

    8 Aug 2018
    Dr. Igor Jurisica

    In this comprehensive update, the Mapping Cancer Markers team explains how they are determining which genes and gene signatures carry the greatest promise for lung cancer diagnosis. They also introduce the next type of cancer, sarcoma, which will soon be added to the project.

    The Mapping Cancer Markers (MCM) project continues to process work units for the ovarian cancer dataset. As we accumulate these outcomes, we continue to analyze MCM results from the lung cancer dataset. In this update, we discuss preliminary findings from this analysis. In addition, we introduce the sarcoma dataset that will be our focus in the next stage.

    Patterns of gene-family biomarkers in lung cancer

    In cancer, and in human biology in general, multiple groups of biomarkers (genes, proteins, microRNAs, etc.) can have similar patterns of activity and thus similar clinical utility, aiding diagnosis, prognosis, or prediction of treatment outcome. For each cancer subtype, one could find a large number of such groups of biomarkers, each with similar predictive power; yet current statistical and AI-based methods identify only one from a given data set.

    We have two primary goals in MCM: 1) to find good groups of biomarkers for the cancers we study, and 2) to identify how and why these biomarkers form useful groups, so we can build a heuristic approach that will find such groups for any disease without needing months of computation on World Community Grid. The first goal will yield information that, after validation, may be useful in clinical practice; importantly, it will also generate data that we will use to validate our heuristics.

    Illustration 1: Proteins group by similar interactions and similar biological functions.

    Multiple groups of biomarkers exist primarily due to the redundancy and complex wiring of the biological system. For example, the highly interconnected human protein-protein interaction network enables us to see how individual proteins perform diverse molecular functions and together contribute to a specific biological process, as shown above in Illustration 1. Many of these interactions change between healthy and disease states, which in turn affects the functions these proteins carry out. Through these analyses, we aim to build models of these processes that could in turn be used to design new therapeutic approaches.
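
    The idea that densely interconnected proteins tend to share a biological process can be made concrete with standard network analysis. A small sketch using an invented edge list and generic community detection, offered only as an illustration of the principle, not as the team's actual pipeline:

```python
import networkx as nx

# Invented protein-protein interactions: two dense modules plus one sparse link.
interactions = [
    ("EGFR", "GRB2"), ("GRB2", "SOS1"), ("SOS1", "KRAS"), ("EGFR", "SOS1"),
    ("RPL3", "RPL4"), ("RPL4", "RPS6"), ("RPL3", "RPS6"),
    ("KRAS", "RPL3"),
]

g = nx.Graph(interactions)

# Greedy modularity maximization groups proteins into densely connected
# communities, a common proxy for shared function in PPI networks.
communities = nx.algorithms.community.greedy_modularity_communities(g)
for i, module in enumerate(communities, 1):
    print(f"module {i}: {sorted(module)}")
```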

    Two specific groups of biomarkers may appear different from each other yet perform equivalently because their proteins perform similar molecular functions. However, using these groups of biomarkers for patient stratification may not be straightforward. Groups of biomarkers often do not validate in new patient cohorts or when measured by different biological assays, and there are thousands of possible combinations to consider. Some groups of biomarkers may have all reagents available, while others may need reagents to be developed (or may be more expensive); they may also differ in robustness, sensitivity, and accuracy, affecting their potential as clinically useful biomarkers.

    At the present time, there is no effective approach to find all good groups of biomarkers necessary to achieve the defined goal, such as accurately predicting patient risk or response to treatment.

    The first goal of the Mapping Cancer Markers project is to gain a deeper understanding of the “rules” of why and how proteins interact and can be combined to form a group of biomarkers, which is essential to understanding their role and applicability. Therefore, we are using the unique computational resource of World Community Grid to systematically survey the landscape of useful groups of biomarkers for multiple cancers and purposes (diagnosis and prognosis), establishing a benchmark for cancer gene biomarker identification and validation. Simultaneously, we are applying unsupervised learning methods such as hierarchical clustering to group proteins by predictive power and biological function.

    The combination of this clustering and the World Community Grid patterns enables us to identify generalized gene clusters that provide deeper insights to the molecular background of cancers, and give rise to more reliable groups of gene biomarkers for cancer detection and prognosis.

    Currently, we are focusing on the first-phase results from the lung cancer dataset, which involved a systematic exploration of the entire space of potential fixed-length groups of biomarkers.

    Illustration 2: Workflow of the MCM gene-pattern-family search. The results of the World Community Grid analysis, combined with the unsupervised clustering of genes, identify a set of gene-pattern-families generalizing the groups of biomarkers. Finally, the results are evaluated using known cancer biomarkers and functional annotations, such as signaling pathways and gene ontology functions and processes.

    As depicted above in Illustration 2, World Community Grid computed about 10 billion randomly selected groups of biomarkers to help us understand which group sizes and biomarker combinations perform well; we will in turn use this distribution to validate heuristic approaches. Analysis showed that about 45 million groups of biomarkers had high predictive power and passed the quality threshold. This evaluation gives us a detailed and systematic picture of which genes and gene groups carry the most valuable information for lung cancer diagnosis. Adding pathway and protein interaction network data enables us to further interpret how and why these groups of biomarkers perform well, and what processes and functions their proteins carry out.
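
    Schematically, the World Community Grid computation is a sample-score-filter loop over random gene groups. A minimal sketch, with a random placeholder where the project trains and cross-validates a real classifier, and much smaller counts than the real run:

```python
import random

GENES = [f"gene_{i}" for i in range(20000)]
GROUP_SIZE = 20
THRESHOLD = 0.90

def predictive_power(group):
    # Placeholder: the project scores each group by something like the
    # cross-validated accuracy of a classifier restricted to these genes.
    return random.random()

good_groups = []
for _ in range(100_000):            # the real run evaluated ~10 billion groups
    group = random.sample(GENES, GROUP_SIZE)
    score = predictive_power(group)
    if score >= THRESHOLD:
        good_groups.append((score, group))

print(f"{len(good_groups)} groups passed the quality threshold")
```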

    Simultaneously, we used the described lung cancer data to discover groups of similar genes. We assume that these genes or the encoded proteins fulfill similar biological functions or are involved in the same molecular processes.

    Illustration 3: Evaluation of the hierarchical clustering of the lung cancer data, using the complete linkage parameter, for different numbers of groups indicated by the K-values (100 to 1000). The first plot shows the silhouette value, a quality metric for the clustering, i.e., a measure of how well each object relates to its cluster compared to other clusters. The second plot depicts the inter- and intra-cluster distances and the ratio of intra/inter cluster distance.

    To find an appropriate clustering algorithm and the right number of gene groups (clusters), we use different measures to evaluate the quality of each individual clustering. For instance, Illustration 3 (above) shows the evaluation of the hierarchical clustering for different numbers of clusters. To evaluate clustering quality, we used the silhouette value, a method for assessing consistency within clusters of data, i.e., a measure of how well each object relates to its own cluster compared to other clusters. A high silhouette value indicates a good clustering configuration, and the figure shows a large increase in the silhouette value at 700 gene groups. Since this indicates a significant increase in quality, we selected this clustering for further analysis.
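
    This silhouette-based model selection can be reproduced in miniature with standard tools. A sketch on random stand-in data (the study scanned K from 100 to 1000; the scan here is much smaller so it runs quickly):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
expression = rng.normal(size=(500, 40))   # toy matrix: 500 genes x 40 samples

# Complete-linkage hierarchical clustering for several candidate K values;
# pick the K where the silhouette value jumps.
for k in (5, 10, 20, 50):
    labels = AgglomerativeClustering(n_clusters=k,
                                     linkage="complete").fit_predict(expression)
    print(f"K={k}: silhouette={silhouette_score(expression, labels):.3f}")
```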

    Not every combination of biological functions (or the absence of one) leads to cancer development or is biologically important. In the next step, we apply a statistical search to investigate which combinations of clusters are most common among the well-performing biomarkers, and therefore form gene groups or pattern families. Since some gene-pattern-families are likely to occur even at random, we use enrichment analysis to ensure the selection contains only families that occur significantly more often than chance.
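
    The enrichment test is, at bottom, a hypergeometric question: given how common a pattern-family is among all evaluated groups, how surprising is its count among the well-performing ones? A sketch with invented counts:

```python
from scipy.stats import hypergeom

# Toy counts: one pattern-family's frequency overall vs. among top performers.
total_groups    = 1_000_000   # all evaluated biomarker groups
with_family     = 30_000      # groups (anywhere) containing the family
top_groups      = 50_000      # groups passing the performance threshold
top_with_family = 2_400       # top groups containing the family

# One-sided p-value: probability of seeing at least this many family hits
# among the top groups if the family were spread at random.
p_value = hypergeom.sf(top_with_family - 1, total_groups,
                       with_family, top_groups)
print(f"enrichment p-value: {p_value:.3g}")
```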

    In the subsequent step, we validated the selected generalized gene-pattern-families using an independent set of 28 lung cancer data sets. Each of these studies reports one or several groups of biomarkers of up- or down-regulated genes that are indicative of lung cancer.


    Illustration 4: A selection of high-performing pattern families and how they are supported by 28 previously published gene signatures. Each circle in the figure indicates the strength of the support: the size of the circle represents the number of clusters in the family that were found significantly more often in the signature of this study. The color of the circle indicates the average significance calculated for all clusters in the pattern-family.

    Illustration 5: One of the most frequent gene-pattern-families is a combination of clusters 1, 7, and 21. We annotated each cluster with pathways using pathDIP and visualized them using word clouds (the larger the word/phrase, the more frequently it occurs).

    The word cloud visualization indicates that cluster 7 is involved in pathways related to GPCRs (G protein–coupled receptors) and NHRs (nuclear hormone receptors). In contrast, the genes in cluster 1 are highly enriched in EGFR1 (epidermal growth factor receptor) and translational regulation pathways. Mutations affecting the expression of EGFR1, a transmembrane protein, have been shown to result in different types of cancer, in particular lung cancer (as we have shown earlier, e.g., Petschnigg et al., J Mol Biol 2017; Petschnigg et al., Nat Methods 2014). These aberrations increase the kinase activity of EGFR1, leading to hyperactivation of downstream pro-survival signaling pathways and subsequent uncontrolled cell division. The discovery of EGFR1 initiated the development of therapeutic approaches against various cancer types, including lung cancer. The third group of genes comprises common targets of microRNAs: cluster 21 indicates strong involvement with microRNAs, as we and others have shown before (Tokar et al., Oncotarget 2018; Becker-Santos et al., J Pathology, 2016; Cinegaglia et al., Oncotarget 2016).

    Illustration 6: Evaluation of enriched pathways for cluster 1. Here we used our publicly available pathway enrichment analysis portal pathDIP (Rahmati et al., NAR 2017). The network was generated with our network visualization and analysis tool NAViGaTOR 3 (http://ophid.utoronto.ca/navigator).

    The final illustration evaluates the 20 most significantly enriched pathways for cluster 1. The size of each pathway node corresponds to the number of involved genes, and the width of each edge corresponds to the number of genes overlapping between pathways. One can see that all pathways involved in translation overlap heavily. mRNA-related pathways form another highly connected component in the graph. The EGFR1 pathway strongly overlaps with many of the other pathways, indicating that genes affected by those pathways are involved in a similar molecular mechanism.

    Sarcoma

    After lung and ovarian cancers, we will next focus on sarcoma. Sarcomas are a relatively rare, heterogeneous group of malignant tumors, comprising less than 10% of all malignancies (Jain 2010). They are typically categorized according to their morphology and the type of connective tissue in which they arise, including fat, muscle, blood vessels, deep skin tissues, nerves, bones, and cartilage. Sarcomas can occur anywhere in the human body, from head to foot, can develop in patients of any age including children, and often vary in aggressiveness, even within the same organ or tissue subtype (Honore 2015). This suggests that a histological description by organ and tissue type is neither sufficient for categorizing the disease nor helpful in selecting the optimal treatment.

    Diagnosing sarcomas poses a particular dilemma, not only due to their rarity, but also due to their diversity, with more than 70 histological subtypes, and our insufficient understanding of the molecular characteristics of these subtypes (Jain 2010).

    Therefore, recent research studies have focused on molecular classifications of sarcomas based on genetic alterations, such as fusion genes or oncogenic mutations. While research has achieved major advances in local control and limb salvage, the survival rate for “high-risk” soft tissue sarcomas (STSs) has not improved significantly, especially in patients with a large, deep, high-grade sarcoma (stage III) (Kane III 2018).

    For these reasons, in the next phase of World Community Grid analysis, we will focus on evaluating the genomic background of sarcoma. We will utilize different sequencing information and technologies to gain a broader understanding of the relationships between the different levels of genetic aberration and their regulatory implications. We will provide a more detailed description of the data and our aims in the next update.

    Petschnigg J, Kotlyar M, Blair L, Jurisica I, Stagljar I, and Ketteler R, Systematic identification of oncogenic EGFR interaction partners, J Mol Biol, 429(2): 280-294, 2017.
    Petschnigg, J., Groisman, B., Kotlyar, M., Taipale, M., Zheng, Y., Kurat, C., Sayad, A., Sierra, J., Mattiazzi Usaj, M., Snider, J., Nachman, A., Krykbaeva, I., Tsao, M.S., Moffat, J., Pawson, T., Lindquist, S., Jurisica, I., Stagljar, I. Mammalian Membrane Two-Hybrid assay (MaMTH): a novel split-ubiquitin two-hybrid tool for functional investigation of signaling pathways in human cells; Nat Methods, 11(5):585-92, 2014.
    Rahmati, S., Abovsky, M., Pastrello, C., Jurisica, I. pathDIP: An annotated resource for known and predicted human gene-pathway associations and pathway enrichment analysis. Nucl Acids Res, 45(D1): D419-D426, 2017.
    Kane, John M., et al. “Correlation of High-Risk Soft Tissue Sarcoma Biomarker Expression Patterns with Outcome following Neoadjuvant Chemoradiation.” Sarcoma 2018 (2018).
    Jain, Shilpa, et al. “Molecular classification of soft tissue sarcomas and its clinical applications.” International journal of clinical and experimental pathology 3.4 (2010): 416.
    Honore, C., et al. “Soft tissue sarcoma in France in 2015: epidemiology, classification and organization of clinical care.” Journal of visceral surgery 152.4 (2015): 223-230.
    Tokar T, Pastrello C, Ramnarine VR, Zhu CQ, Craddock KJ, Pikor L, Vucic EA, Vary S, Shepherd FA, Tsao MS, Lam WL, Jurisica I. Differentially expressed microRNAs in lung adenocarcinoma invert effects of copy number aberrations of prognostic genes. Oncotarget. 9(10):9137-9155, 2018.
    Becker-Santos, D.D., Thu, K.L, English, J.C., Pikor, L.A., Chari, R., Lonergan, K.M., Martinez, V.D., Zhang, M., Vucic, E.A., Luk, M.T.Y., Carraro, A., Korbelik, J., Piga, D., Lhomme, N.M., Tsay, M.J., Yee, J., MacAulay, C.E., Lockwood, W.W., Robinson, W.P., Jurisica, I., Lam, W.L., Developmental transcription factor NFIB is a putative target of oncofetal miRNAs and is associated with tumour aggressiveness in lung adenocarcinoma, J Pathology, 240(2):161-72, 2016.
    Cinegaglia, N.C., Andrade, S.C.S., Tokar, T., Pinheiro, M., Severino, F. E., Oliveira, R. A., Hasimoto, E. N., Cataneo, D. C., Cataneo, A.J.M., Defaveri, J., Souza, C.P., Marques, M.M.C, Carvalho, R. F., Coutinho, L.L., Gross, J.L., Rogatto, S.R., Lam, W.L., Jurisica, I., Reis, P.P. Integrative transcriptome analysis identifies deregulated microRNA-transcription factor networks in lung, adenocarcinoma, Oncotarget, 7(20): 28920-34, 2016.

    Other news

    We have secured major funding from the Ontario Government for our research: the Next Generation Signalling Biology Platform. The main goal of the project is to develop a novel integrated analytical platform and workflow for precision medicine. This project will create an internationally accessible resource that unifies different types of biological data, including personal health information—unlocking its full potential and making it more usable for research across the health continuum: from genes and proteins to pathways, drugs, and humans.

    We have also published papers describing several tools, portals and applications with our collaborators. Below we list those most related directly or indirectly to work on World Community Grid:

    Wong, S., Pastrello, C., Kotlyar, M., Faloutsos, C., Jurisica, I. SDREGION: Fast spotting of changing communities in biological networks. ACM KDD Proceedings, 2018. In press. BMC Cancer, 18(1):408, 2018.
    Kotlyar, M., Pastrello, C., Rossos, A., Jurisica, I. Protein-protein interaction databases. Eds. Cannataro, M. et al. Encyclopedia of Bioinformatics and Computational Biology, 81, Elsevier. In press. doi.org/10.1016/B978-0-12-811414-8.20495-1
    Rahmati, S., Pastrello, C., Rossos, A., Jurisica, I. Two Decades of Biological Pathway Databases: Results and Challenges, Eds. Cannataro, M. et al. Encyclopedia of Bioinformatics and Computational Biology, 81, Elsevier. In press.
    Hauschild, AC, Pastrello, C., Rossos, A., Jurisica, I. Visualization of Biomedical Networks, Eds. Cannataro, M. et al. Encyclopedia of Bioinformatics and Computational Biology, 81, Elsevier. In press.
    Sivade Dumousseau M, Alonso-López D, Ammari M, Bradley G, Campbell NH, Ceol A, Cesareni G, Combe C, De Las Rivas J, Del-Toro N, Heimbach J, Hermjakob H, Jurisica I, Koch M, Licata L, Lovering RC, Lynn DJ, Meldal BHM, Micklem G, Panni S, Porras P, Ricard-Blum S, Roechert B, Salwinski L, Shrivastava A, Sullivan J, Thierry-Mieg N, Yehudi Y, Van Roey K, Orchard S. Encompassing new use cases – level 3.0 of the HUPO-PSI format for molecular interactions. BMC Bioinformatics, 19(1):134, 2018.
    Minatel BC, Martinez VD, Ng KW, Sage AP, Tokar T, Marshall EA, Anderson C, Enfield KSS, Stewart GL, Reis PP, Jurisica I, Lam WL., Large-scale discovery of previously undetected microRNAs specific to human liver. Hum Genomics, 12(1):16, 2018.
    Tokar T, Pastrello C, Ramnarine VR, Zhu CQ, Craddock KJ, Pikor L, Vucic EA, Vary S, Shepherd FA, Tsao MS, Lam WL, Jurisica, I. Differentially expressed microRNAs in lung adenocarcinoma invert effects of copy number aberrations of prognostic genes. Oncotarget. 9(10):9137-9155, 2018.
    Paulitti A, Corallo D, Andreuzzi E, Bizzotto D, Marastoni S, Pellicani R, Tarticchio G, Pastrello C, Jurisica I, Ligresti G, Bucciotti F, Doliana R, Colladel R, Braghetta P, Di Silvestre A, Bressan G, Colombatti A, Bonaldo P, Mongiat M. Matricellular EMILIN2 protein ablation causes defective vascularization due to impaired EGFR-dependent IL-8 production, Oncogene, Feb 27. doi: 10.1038/s41388-017-0107-x. [Epub ahead of print] 2018.
    Tokar, T., Pastrello, C., Rossos, A., Abovsky, M., Hauschild, A.C., Tsay, M., Lu, R., Jurisica. I. mirDIP 4.1 – Integrative database of human microRNA target predictions, Nucl Acids Res, D1(46): D360-D370, 2018.
    Kotlyar M., Pastrello, C., Rossos, A., Jurisica, I., Prediction of protein-protein interactions, Current Protocols in Bioinf, 60, 8.2.1–8.2.14., 2017.
    Singh, M., Venugopal, C., Tokar, T., Brown, K.B., McFarlane, N., Bakhshinyan, D., Vijayakumar, T., Manoranjan, B., Mahendram, S., Vora, P., Qazi, M., Dhillon, M., Tong, A., Durrer, K., Murty, N., Hallet, R., Hassell, J.A., Kaplan, D., Jurisica, I., Cutz, J-C., Moffat, J., Singh, D.K., RNAi screen identifies essential regulators of human brain metastasis initiating cells, Acta Neuropathologica, 134(6):923-940, 2017.

    Thank you.

    This work would not be possible without the participation of World Community Grid Members. Thank you for generously contributing CPU cycles, and for your interest in this and other World Community Grid projects.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 9:29 am on July 4, 2018 Permalink | Reply
    Tags: BOINC-Berkeley Open Infrastructure for Network Computing, Carbon nanostructures

    From World Community Grid (WCG): “The Expanding Frontiers of Carbon Nanotube Technology” 

    From World Community Grid (WCG)

    3 Jul 2018

    Summary
    The Computing for Clean Water project made an exciting discovery about the possible applications of carbon nanostructures to water purification, biomedical research, and energy research. Dr. Ming Ma, one of the scientists on the project, recently published a paper that summarizes the current status of work in this field.

    The team at Tsinghua University includes (left to right) Ming Ma, Kunqi Wang, Wei Cao, and Jin Wang. Not pictured: Yao Cheng

    Dr. Ming Ma (of the Computing for Clean Water project) at Tsinghua University recently published a paper in the Journal of Micromechanics and Microengineering entitled “Carbon nanostructure based mechano-nanofluidics.” The paper is a thorough survey of all the recent research work on fluid flow in carbon nanostructures, such as carbon nanotubes and graphene sheets.

    Carbon atoms can form single-atom-thick sheets known as graphene. When these are rolled into a tube shape, they are called carbon nanotubes. In recent years, there has been a flurry of research on these nanostructures, so called because their dimensions are measured in nanometers (billionths of a meter). The Computing for Clean Water project is one example of recent research in this area: by using World Community Grid to simulate water flow through carbon nanotubes at an unprecedented level of detail, the project’s research team discovered that under specific conditions, certain natural vibrations of the atoms in the nanotubes can increase the rate of diffusion (a kind of flow) of water through the nanotubes by 300%.
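
    For readers curious how a diffusion rate comes out of such a simulation: in molecular dynamics one typically applies the Einstein relation, fitting the mean squared displacement (MSD) of the water molecules against time. A self-contained toy sketch using a synthetic random-walk trajectory, not the project's actual data:

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps, n_molecules, dt = 2000, 100, 0.002   # toy values; dt in ns

# Synthetic "trajectory": positions of molecules along the tube axis.
steps = rng.normal(scale=0.05, size=(n_steps, n_molecules))
positions = np.cumsum(steps, axis=0)

# Mean squared displacement as a function of time lag.
lags = np.arange(1, 200)
msd = np.array([np.mean((positions[lag:] - positions[:-lag]) ** 2)
                for lag in lags])

# Einstein relation in one dimension: MSD = 2 * D * t, so D is half the slope.
slope = np.polyfit(lags * dt, msd, 1)[0]
D = slope / 2
print(f"estimated axial diffusion coefficient: {D:.4f} (toy units)")
```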

    Among the many surprising properties of these nanostructures is their ability to dramatically enhance the flow of water through or past them. Much research is being conducted to understand how this happens and, ultimately, how to make the best use of this property to purify and desalinate water and to meet other goals in biomedical and energy research. Challenges remain in how to manufacture these materials efficiently and how to adjust their structures to achieve the best results.

    Thanks to everyone who supported this project.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 3:07 pm on June 27, 2018 Permalink | Reply
    Tags: BOINC-Berkeley Open Infrastructure for Network Computing, Help Cure Muscular Dystrophy project

    From World Community Grid (WCG): “Data from Help Cure Muscular Dystrophy Project Helps Shed Light on the Mysteries of Protein Interactions” 

    From World Community Grid (WCG)

    26 Jun 2018
    Dr. Alessandra Carbone
    Sorbonne Université

    Summary
    Protein-protein interactions are the basis of cellular structure and function, and understanding these interactions is key to understanding cell life itself. Dr. Alessandra Carbone and her team continue to analyze data on these interactions from the Help Cure Muscular Dystrophy project, and they recently published a new paper to contribute to the body of knowledge in this field.



    Dr. Alessandra Carbone (principal investigator of the Help Cure Muscular Dystrophy project) and team have published a paper entitled “Hidden partners: Using cross-docking calculations to predict binding sites for proteins with multiple interactions” in the journal Proteins.

    Protein interactions are the basis of most biological functions. How proteins interact with each other and with other compounds in the cell (such as DNA, RNA, and ligands) is key to understanding life and disease. Complicating matters, proteins often interact with more than one other kind of protein. To better understand protein functions, tools are required to uncover these potential interactions.

    Different parts (surfaces) of a protein can serve as binding sites that attract other proteins. This paper describes a methodology the research team developed to better predict these alternative binding sites. A subset of the Help Cure Muscular Dystrophy project data was used to validate the technique, which will subsequently be applied to the whole dataset computed via World Community Grid.
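
    One simple way to turn cross-docking output into binding-site predictions, offered here only as a schematic and not as the paper's actual method, is to count how often each surface residue is contacted across the docked partners. The pose data below are invented:

```python
from collections import Counter

# Residue indices contacted in top-scoring poses against different partners.
poses = {
    "partner_A": {12, 13, 14, 55, 56},
    "partner_B": {12, 13, 57, 58},
    "partner_C": {13, 14, 55, 90},
    "partner_D": {101, 102, 103},
}

contact_counts = Counter()
for contacts in poses.values():
    contact_counts.update(contacts)

# Residues contacted across many partners are flagged as likely binding sites.
n_partners = len(poses)
predicted_site = sorted(res for res, c in contact_counts.items()
                        if c / n_partners >= 0.5)
print("residues predicted to form a binding site:", predicted_site)
```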

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 3:19 pm on June 19, 2018 Permalink | Reply
    Tags: BOINC-Berkeley Open Infrastructure for Network Computing

    From World Community Grid (WCG): “Microbiome Immunity Project Researchers Create Ambitious Plans for Data” 

    From World Community Grid (WCG)

    By: Dr. Tomasz Kościółek and Bryn Taylor
    University of California San Diego
    19 Jun 2018

    Summary
    The Microbiome Immunity Project researchers—from Boston, New York, and San Diego—met in person a few weeks ago to make plans that include a 3D map of the protein universe and other far-ranging uses for the data from the project.


    The research team members pictured above are (from left to right): Vladimir Gligorijevic (Simons Foundation’s Flatiron Institute), Tommi Vatanen (Broad Institute of MIT and Harvard), Tomasz Kosciolek (University of California San Diego), Rob Knight (University of California San Diego), Rich Bonneau (Simons Foundation’s Flatiron Institute), Doug Renfrew (Simons Foundation’s Flatiron Institute), Bryn Taylor (University of California San Diego), Julia Koehler Leman (Simons Foundation’s Flatiron Institute). Visit the project’s Research Participants page for additional team members.

    During the week of May 28, researchers from all Microbiome Immunity Project (MIP) institutions (University of California San Diego, Broad Institute of MIT and Harvard, and the Simons Foundation’s Flatiron Institute) met in San Diego to discuss updates on the project and plan future work.

    Our technical discussions included a complete overview of the practical aspects of the project, including data preparation, pre-processing, grid computations, and post-processing on our machines.

    We were excited to notice that if we keep the current momentum of producing new structures for the project, we will double the universe of known protein structures (compared to the Protein Data Bank) by mid-2019! We also planned how to extract the most useful information from our data, store it effectively for future use, and extend our exploration strategies.

    We outlined three major areas we want to focus on over the next six months.

    Structure-Aided Function Predictions

    We can use the structures of proteins to gain insight into protein function—or what the proteins actually do. Building on research from MIP co-principal investigator Richard Bonneau’s lab, we will extend their state-of-the-art algorithms to predict protein function using structural models generated through MIP. Using this new methodology based on deep learning, akin to IBM’s artificial intelligence algorithms, we hope to see improvements over simpler methods and to provide interesting examples from the microbiome (e.g., discovering new genes that confer antibiotic resistance).
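
    As a rough illustration of structure-aided function prediction, one can featurize each protein model and train a multi-label classifier over function (GO-like) terms. The sketch below uses random stand-in features and labels, and a simple logistic model rather than the deep networks mentioned above:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 64))                      # per-protein structural features (toy)
Y = (rng.random(size=(1000, 5)) < 0.2).astype(int)   # 5 toy function labels per protein

# One binary classifier per function term; real pipelines share structure
# across terms with deep networks instead.
clf = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X[:800], Y[:800])
probabilities = clf.predict_proba(X[800:])           # one probability array per label
print(len(probabilities), probabilities[0].shape)
```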

    Map of the Protein Universe

    Together we produce hundreds of high-quality protein models every month! To help researchers navigate this ever-growing space, we need to place the models in the context of what we already know about protein structures and create a 3D map of the “protein universe.” This map will illustrate how the MIP has eliminated the “dark matter” from this space one structure at a time. It will also be made available as a resource for other researchers to explore interactively.

    Structural and Functional Landscape of the Human Gut Microbiome

    We want to show what is currently known about the gut microbiome in terms of functional annotations and how our function prediction methods can help us bridge the gap in understanding of gene functions. Specifically, we want to follow up with examples from early childhood microbiome cohorts (relevant to Type-1 diabetes, or T1D) and discuss how our methodology can help us to better understand T1D and inflammatory bowel disease.

    The future of the Microbiome Immunity Project is really exciting, thanks to everyone who makes our research possible. Together we are making meaningful contributions to not one, but many scientific problems!

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition


     