Tagged: BOINC

  • richardmitnick 3:57 pm on July 14, 2015
    Tags: BOINC

    From SETI@home: Interview with Dave Anderson 

    SETI@home

    David Anderson talks BOINC, Citizen Science, and Why We Still Need You

    David Anderson is co-creator of SETI@home and the Director of BOINC, the Berkeley Open Infrastructure for Network Computing. David is a computer scientist by trade and a mathematician by training who has had a decades-long interest in distributed computing and volunteer science.

    Download BOINC and join the science!
    http://boinc.berkeley.edu/download.php

    Learn more about David
    https://seti.berkeley.edu/user/46

    Follow us on Twitter:
    https://twitter.com/setiathome
    Facebook:
    https://www.facebook.com/BerkeleySETI

    Watch, enjoy, learn, maybe think about joining up.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The science of SETI@home
    SETI (Search for Extraterrestrial Intelligence) is a scientific area whose goal is to detect intelligent life outside Earth. One approach, known as radio SETI, uses radio telescopes to listen for narrow-bandwidth radio signals from space. Such signals are not known to occur naturally, so a detection would provide evidence of extraterrestrial technology.

    Radio telescope signals consist primarily of noise (from celestial sources and the receiver’s electronics) and man-made signals such as TV stations, radar, and satellites. Modern radio SETI projects analyze the data digitally. More computing power enables searches to cover greater frequency ranges with more sensitivity. Radio SETI, therefore, has an insatiable appetite for computing power.

    Previous radio SETI projects have used special-purpose supercomputers, located at the telescope, to do the bulk of the data analysis. In 1995, David Gedye proposed doing radio SETI using a virtual supercomputer composed of large numbers of Internet-connected computers, and he organized the SETI@home project to explore this idea. SETI@home was originally launched in May 1999.

    SETI@home is not a part of the SETI Institute

    The SETI@home screensaver

    To participate in this project, download and install the BOINC software on which it runs, then attach to the project. While you are at BOINC, look at some of the other projects; you might find others of interest.

    BOINC

     
  • richardmitnick 4:46 pm on March 10, 2015
    Tags: BOINC

    From WCG: “Top distributed computing projects still hard at work fighting the world’s worst health issues” in IT World 

    New WCG Logo


    March 9, 2015
    Andy Patrizio

    This past fall saw the worst Ebola outbreak ever ravage western Africa, and while medical researchers are trying to find a drug to treat or prevent the disease, the process is long and complicated. That’s because you don’t just snap your fingers and produce a drug with a virus like Ebola. What’s needed is a massive amount of trial and error to find chemical compounds that can bind with the proteins in the virus and inhibit replication. In labs, it can take years or decades.

    Thanks to thousands of strangers, Ebola researchers are getting the help and computing power they need to shave off the time needed to find new drugs by a few years.

    Distributed computing is not a new concept, but as it is constituted today, it’s an idea born of the Internet. Contributors download a small app that runs in the background and uses spare PC compute cycles to perform a certain process.

    When you are running a PC and using it for Word, Outlook and browsing, you are using a pittance of the compute power in a modern CPU, maybe 5% total, and that’s only in bursts. Distributed computing programs use the other 95%, or less if you specify, and if you need more compute power for work, the computing clients dial back their work and let you have the CPU power you need. If you leave the PC on when not using it, the application goes full out.
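
    As a rough illustration of the dial-back behavior described above (this is not BOINC’s actual scheduler, and it assumes the third-party psutil package is installed), a volunteer-computing client loop might look like this:

    import time
    import psutil  # third-party: pip install psutil

    BUSY_THRESHOLD = 25.0   # % of CPU used by other programs before we back off
    CHECK_INTERVAL = 5      # seconds to wait when the user needs the machine

    def compute_one_chunk():
        # Stand-in for a small slice of scientific work (part of a "work unit").
        sum(i * i for i in range(100_000))

    while True:
        other_load = psutil.cpu_percent(interval=1)   # CPU busy with other work
        if other_load < BUSY_THRESHOLD:
            compute_one_chunk()         # machine is mostly idle: do science
        else:
            time.sleep(CHECK_INTERVAL)  # user needs the CPU: dial back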

    There is a wide variety of programs, and one of them is aimed at finding drugs to help stop Ebola. It’s part of the World Community Grid (WCG), run by IBM and using software developed at the University of California at Berkeley.

    The WCG has almost 700,000 members with three million devices signed up to crunch away on those projects, according to Dr. Viktors Berstis, architect and chief scientist for the WCG at IBM. All told, WCG is running nearly 30 drug projects.

    WCG uses software developed at Berkeley called BOINC, or the Berkeley Open Infrastructure for Network Computing. Past distributed projects, and even current ones such as Folding@Home, use their own client to do the work. BOINC is a National Science Foundation-funded project to create one distributed computing client that any project can use, sparing researchers from reinventing the wheel and letting them focus on their project rather than the client.

    The program simulates how a potential compound interacts with a target, such as a protein a virus needs in order to survive. With distributed computing, WCG can evaluate millions of compounds against any given target and dramatically cut the research time it would take to do the same work in a lab.
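
    In essence, this kind of virtual screening is a scoring loop over a compound library. The sketch below is only conceptual: dock_and_score is a hypothetical placeholder for a real docking engine, not the code the WCG projects actually run.

    import heapq

    def dock_and_score(compound: str, target: str) -> float:
        """Hypothetical stand-in for a docking engine that returns a
        predicted binding energy (lower = binds the target more tightly)."""
        return (hash((compound, target)) % 1000) / 100.0   # dummy score

    def screen(compounds, target, keep=100):
        # Score every candidate against the target protein and keep the
        # most promising binders for laboratory follow-up.
        scored = ((dock_and_score(c, target), c) for c in compounds)
        return heapq.nsmallest(keep, scored)

    hits = screen((f"compound_{i}" for i in range(1_000_000)), "ebola_glycoprotein")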

    “It’s applicable to anything that takes a lot of CPU time and can be split up into millions of independent-running jobs. The only difference between World Community Grid and a supercomputer is the supercomputer processors can talk to each other,” said Dr. Berstis.

    The Ebola hunt

    In the case of Ebola, WCG has partnered with The Scripps Research Institute, a biomedical research group in La Jolla, Calif., to launch Outsmart Ebola Together [see below]. The project will target multiple hemorrhagic viruses in the Ebola family, according to Dr. Erica Saphire, the researcher heading the program at Scripps.

    The project will target one specific protein that is used to attach the virus to healthy cells in a human to then replicate. This protein is being targeted because unlike other proteins in the viruses, it can’t mutate. “The site they are targeting is the way the virus finds its way into the cell. So if it changes too much in any way, it’s not viable. It’s one of the few places [in the virus] that can’t change. It has to keep that the same. So that makes an ideal drug target,” said Dr. Saphire.

    Dr. Saphire said the FightAIDS@Home group [see below] at Scripps often gets done in a few months what would have taken 10 years otherwise and wants to put the three million devices of WCG to work. “With this massive computational power, we’re asking what can we understand that we’ve never understood before,” she said. “It’s the most fundamentally important thing my lab has ever done. It’s also the biggest.”

    Scripps is a large, well-funded institute and could easily afford supercomputers, but Dr. Saphire said WCG is a better option. “It turns out that having hundreds of thousands of computers in parallel accelerates things more than having a supercomputer here,” she said.

    Success stories

    Dr. Art Olson, a professor of integrative computational and structural biology at Scripps, has used WCG for the FightAIDS@Home project since 2005, and before that worked with a now-defunct company called United Devices, back in 2000, on one of the first distributed biomedical computing projects.

    The first papers published by Dr. Olson’s group examined the mutation of the HIV protease, the part of the virus that handles replication, showing both how researchers can target the protease to stop replication and how the protease develops drug resistance. “That gave us a set of targets to try and find drugs that could be effective against that spanning set of mutants,” said Dr. Olson.

    Like Dr. Saphire, he prefers the massive number of CPUs available via WCG over an in-house supercomputer. “We have very good computing resources here, but we’re not the only people who use the computing resources at Scripps. We can only get 300 CPUs at any given time, whereas on the World Community Grid we can get tens of thousands of CPUs to use at any given time. So it’s a major boost. We would never even try to do the scope of the kinds of dockings we do using just our local institutional resources,” he said.

    There are other WCG successes besides Scripps. Dr. Berstis said one success story was simulations of carbon nanotubes [see below]. Water flows through the tubes 10,000 times more efficiently than thought, so there are now experiments to find less expensive methods of filtering or desalinating water than using the very expensive reverse osmosis filters.

    A recently disclosed project from the Help Fight Childhood Cancer group at WCG [see below] found compounds to cure childhood neuroblastoma, a cancer of the nervous system. Working in conjunction with a group in Japan, the researchers found seven drug candidates with a 95% likelihood of curing the cancer.

    Finally, there was a cancer project that looked at images of biopsies with machine optical scanning. Eventually an algorithm was developed that helped analyze those images to determine if cancer cells are present. “They are as good as humans now so it will help identify if there is cancer present or not much faster,” said Dr. Berstis.

    DIY distributed computing

    The concept of using idle CPU cycles instead of investing millions in supercomputers is not lost on IT departments or companies with big processing tasks. Anecdotal stories of firms setting up their own internal distributed computing networks have been around for several years, although most firms will not discuss them out of concern for giving away a competitive advantage.

    CDx Diagnostics, which develops equipment to detect cancer at its earliest stage, was willing to discuss its efforts. It has built a data center dedicated to processing and also harnesses idle CPU cycles on employee computers, creating its own internal grid computing environment to analyze digitized microscopic slide data for detection of cellular changes that would indicate cancerous and pre-cancerous cells.

    CDx needed an inexpensive system that could process the 590GB of image data generated per pathology slide (and patients can have multiple slides) in less than four minutes. On a single PC, such analysis would normally take four hours. And it’s still no replacement for human eyes: slides are still reviewed by humans, but the grid system can pick up on anomalies, or note that there are none.

    Employees leave their computers on when they go home at night. The client PCs tell the servers their computing capabilities and the servers decide which computers get what kinds of workloads. Faster computers get the higher priority in doing the next task, said Robert Tjon, vice president of engineering and developer of the grid.

    Tjon said the best performance for price is from commodity hardware, which is robust, highly reconfigurable and scalable as long as there is a centralized system that can manage the external resources efficiently so the computers are constantly fed data.

    “One hundred percent utilization of the computer resources will keep the cost of the overall grid down in terms of space, heat, power, and manpower to keep the system up. We also like the fact that Intel invests billions to make the computer cheaper and faster and we only have to pay the price of a regular, popular consumer item,” he said.

    So it could be that your idle PC may one day save your life.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    “World Community Grid (WCG) brings people together from across the globe to create the largest non-profit computing grid benefiting humanity. It does this by pooling surplus computer processing power. We believe that innovation combined with visionary scientific research and large-scale volunteerism can help make the planet smarter. Our success depends on like-minded individuals – like you.”

    WCG projects run on BOINC software from UC Berkeley.

    BOINC, more properly the Berkeley Open Infrastructure for Network Computing, is a leader in the fields of distributed computing, grid computing and citizen cyberscience.

    CAN ONE PERSON MAKE A DIFFERENCE? YOU BETCHA!!

    “Download and install secure, free software that captures your computer’s spare power when it is on, but idle. You will then be a World Community Grid volunteer. It’s that simple!” You can download the software at either WCG or BOINC.

    Please visit the project pages-
    Outsmart Ebola Together

    Mapping Cancer Markers

    Uncovering Genome Mysteries

    Say No to Schistosoma

    GO Fight Against Malaria

    Drug Search for Leishmaniasis

    Computing for Clean Water

    The Clean Energy Project

    Discovering Dengue Drugs – Together

    Help Cure Muscular Dystrophy

    Help Fight Childhood Cancer

    Help Conquer Cancer

    Human Proteome Folding

    FightAIDS@Home

    World Community Grid is a social initiative of IBM Corporation.

    IBM – Smarter Planet

     
  • richardmitnick 3:01 pm on February 12, 2015
    Tags: BOINC

    From Mapping Cancer Markers at WCG: “Using one cancer to help defeat many: Mapping Cancer Markers makes progress” 

    World Community Grid

    Mapping Cancer Markers

    By: The Mapping Cancer Markers research team
    12 Feb 2015

    Summary
    Results from the first stage of the Mapping Cancer Markers project are helping the researchers identify the markers for lung cancer, as well as improve their research methodology as they move on to analyze other cancers.


    Once again, the Mapping Cancer Markers (MCM) team would like to extend a huge thank you to the World Community Grid members. Although we publish this thank you each update, we are truly grateful for your contribution to this project.

    The MCM project has continued to process lung cancer data, exploring fixed-length random gene signatures. This long stage of the project is nearly over, and we are preparing to transition our focus to a narrower set of genes of interest. Target genes will be chosen by a process combining statistics from the initial results with pathway and biological-network analysis.

    Analytics

    In our previous update, we reported the adoption of a new package, the IBM® InfoSphere® Streams real-time analytics platform, to process our World Community Grid data. The majority of our work since the last update has concentrated on continued development and expansion of our Streams system in order to handle the incoming data more robustly and efficiently.

    There are two main reasons why stream-processing design is better for processing MCM results than a batch-computing approach. One reason relates to the nature of World Community Grid: a huge computing resource that continuously consumes work units and produces compute results. Data is best processed as it arrives, to avoid backlogs or storage limitations.

    The second reason is that stream processing makes the design of new work units based on partial results more effective as we transition to the new focus. MCM will soon focus on genes of interest revealed by our broad survey of gene-signature space in the first stage. To narrow the focus, we will take an iterative approach: we design small batches of work units (e.g., 100,000 units), submit them to World Community Grid, analyze the results, and then incorporate the new analysis into the design of the next batch. In this way, we will slowly converge towards the answers we are seeking. Because of the continuous nature of the MCM project, and the volume of data we receive on a daily basis, it is imperative that our analysis system processes results quickly enough to generate the next set of work units.
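
    The iterative loop described above can be sketched schematically as follows; every function, gene name and batch size here is an illustrative placeholder, not the team’s actual Streams pipeline.

    import random

    def design_work_units(genes, batch_size=1_000):
        """Hypothetical: build the next batch of signature-evaluation jobs."""
        return [{"genes": genes, "job_id": i} for i in range(batch_size)]

    def run_on_grid(batch):
        """Hypothetical stand-in for World Community Grid plus the streaming
        analysis: returns a score for every gene seen in the batch."""
        return {g: random.random() for unit in batch for g in unit["genes"]}

    def top_genes(scores, keep=50):
        # Keep only the highest-scoring genes for the next, narrower batch.
        return sorted(scores, key=scores.get, reverse=True)[:keep]

    genes = [f"gene_{i}" for i in range(1000)]    # broad "landscape" gene set
    for round_number in range(5):                 # a few refinement rounds
        batch = design_work_units(genes)
        scores = run_on_grid(batch)
        genes = top_genes(scores)                 # narrower focus each round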

    New stage in lung cancer signature discovery

    The MCM project has continued to process lung cancer data, exploring random fixed-length signatures of between 5 and 25 biomarkers. This computational component of the “landscape” stage is winding down, and we are preparing to transition our focus to a narrower set of genes of interest. Target genes will be selected by integrating results from several methods, carefully combining statistics from the initial results with pathway and biological-network analysis.

    Network analysis/integration of pathway knowledge

    One of the most exciting (and crucial) parts of this project is the integration of other research to help understand the results we are collecting. We already know that in most cancers no single biomarker is sufficient, that we can find thousands of clinically relevant signatures, and, most importantly, that many seemingly weak markers provide highly useful information when combined with others. Therefore, we have been trying to find these “best supporting actors” and then the best signatures through “integrative network analysis”.


    Figure 1: An iterative strategy for biomarker discovery. Work units are processed on World Community Grid. The results are analyzed via a Streams pipeline. This generates a list of high-scoring genes, which, combined with biological network information (NAViGaTOR), are used to design new MCM work units targeting areas of interest in signature space.

    We know that disease is more accurately described in terms of altered signaling cascades (pathways): higher-level patterns composed of multiple genes in a biological network. A pathway can be defined as a series of reactions (“steps”) that result in a certain biochemical process. For example, one could consider the electrical and mechanical systems in a car as a set of interrelated pathways. These systems are important for the overall function of the car; however, some are clearly more important than others. In the same way, a particular cancer occurrence could have a single catastrophic cause (a missing engine block) or smaller, multiple causes affecting the same system (e.g., the bolts holding the exhaust system together).

    Around the world, researchers are continually finding, publishing and curating biological pathways and their building blocks (protein interactions). We are taking this information and applying it to high-scoring genes and gene signatures identified from Mapping Cancer Markers results. For example, if the first part of our landscape study identified a certain gene as a potential target, we can see via our network analysis (NAViGaTOR) as well as other external sources whether that same gene is involved in known pathways. We can then gather information about those pathways and refine our findings by resubmitting work units to World Community Grid. In essence, we are identifying genes of interest by combining top-scoring genes with pathway and network context. Those investigations will continue to refine our search space and converge on better and better solutions. Below, we list some examples of this work. In particular, Kotlyar et al. (Nature Methods, 2015) provides comprehensive in silico prediction of these signaling cascades; Wong et al. (Proteomics, 2015) introduces a systematic approach for deriving important information about cancer-related structures in these networks; and Fortney et al. (PLoS Computational Biology, 2015) uses results of this work to identify potential new treatment options for lung cancer.

    Transition to the targeted stage

    We expect a gradual and seamless transition to the new stage of MCM, with no interruption in the supply of work units, and no changes to the visualization or code. Both stages will overlap for a period as the last statistics from the first stage are gathered, and the initial, targeted work units are sent out. Average work unit run-time should remain the same. The consistency of run-times should remain the same or improve.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Cancer, one of the leading causes of death worldwide, comes in many different types and forms, all characterized by uncontrolled cell growth. Unchecked and untreated, it can spread from an initial site to other parts of the body and ultimately lead to death. The disease is caused by genetic or environmental changes that interfere with biological mechanisms that control cell growth. These changes, as well as normal cell activities, can be detected in tissue samples through the presence of their unique chemical indicators, such as DNA and proteins, which together are known as “markers.” Specific combinations of these markers may be associated with a given type of cancer.

    The pattern of markers can determine whether an individual is susceptible to developing a specific form of cancer, and may also predict the progression of the disease, helping to suggest the best treatment for a given individual. For example, two patients with the same form of cancer may have different outcomes and react differently to the same treatment due to a different genetic profile. While several markers are already known to be associated with certain cancers, there are many more to be discovered, as cancer is highly heterogeneous.

    Mapping Cancer Markers on World Community Grid aims to identify the markers associated with various types of cancer. The project is analyzing millions of data points collected from thousands of healthy and cancerous patient tissue samples. These include tissues with lung, ovarian, prostate, pancreatic and breast cancers. By comparing these different data points, researchers aim to identify patterns of markers for different cancers and correlate them with different outcomes, including responsiveness to various treatment options.

    Some related published work

    Hoeng, J., Peitsch, M.C., Meyer, P. and Jurisica, I. Where are we at regarding Species Translation? A review of the sbv IMPROVER Challenge, Bioinformatics, 2015. In press.

    Fortney, K., Griesman, G., Kotlyar, M., Pastrello, C., Angeli, M., Tsao, M.S., Jurisica, I. Prioritizing therapeutics for lung cancer: An integrative meta-analysis of cancer gene signatures and chemogenomic data, PLoS Comp Biol, 2015, In Press.

    Kotlyar M., Pastrello C., Pivetta, F., Lo Sardo A., Cumbaa, C., Li, H., Naranian, T., Niu Y., Ding Z., Vafaee F., Broackes-Carter F., Petschnigg, J., Mills, G.B., Jurisicova, A., Stagljar, I., Maestro, R., & Jurisica, I. In silico prediction of physical protein interactions and characterization of interactome orphans, Nat Methods, 12(1):79-84, 2015.

    Vucic, E. A., Thu, K. T., Pikor, L. A., Enfield, K. S. S., Yee, J., English, J. C., MacAulay, C. E., Lam, S., Jurisica, I., Lam, W. L. Smoking status impacts microRNA mediated prognosis and lung adenocarcinoma biology, BMC Cancer, 14: 778, 2014. E-pub 2014/10/25

    Lalonde, E., Ishkanian, A. S., Sykes, J., Fraser, M., Ross-Adam, H., Erho, N., Dunning, M., Lamb, A.D., Moon, N.C., Zafarana, G., Warren, A.Y., Meng, A., Thoms, J., Grzadkowski, M.R., Berlin, A., Halim, S., Have, C.L., Ramnarine, V.R., Yao, C.Q., Malloff, C.A., Lam, L. L., Xie, H., Harding, N.J., Mak, D.Y.F., Chu1, K. C., Chong, L.C., Sendorek, D.H., P’ng, C., Collins, C.C., Squire, J.A., Jurisica, I., Cooper, C., Eeles, R., Pintilie, M., Pra, A.D., Davicioni, E., Lam, W. L., Milosevic, M., Neal, D.E., van der Kwast, T., Boutros, P.C., Bristow, R.G., Tumour genomic and microenvironmental heterogeneity for integrated prediction of 5-year biochemical recurrence of prostate cancer: a retrospective cohort study. Lancet Oncology. 15(13):1521-32, 2014.

    Dingar, D., Kalkat, M., Chan, M. P-K, Bailey, S.D., Srikumar, T., Tu, W.B., Ponzielli, R., Kotlyar, M., Jurisica, I., Huang, A., Lupien, M., Penn, L.Z., Raught, B. BioID identifies novel c-MYC interacting partners in cultured cells and xenograft tumors, Proteomics, pii: S1874-3919(14)00462-X, 2014. doi: 10.1016/j.jprot.2014.09.029

    Wong, S. W. H., Cercone, N., Jurisica, I. Comparative network analysis via differential graphlet communities, Special Issue of Proteomics dedicated to Signal Transduction, Proteomics, 15(2-3):608-17, 2015. E-pub 2014/10/07. doi: 10.1002/pmic.201400233

    Berlin, A., Lalonde, E., Sykes, J., Zafarana, G., Chu, K.C., Ramnarine, V.R., Ishkanian, A., Sendorek, D.H.S., Pasic, I., Lam, W.L., Jurisica, I., van der Kwast, T., Milosevic, M., Boutros, P.C., Bristow, R.G.. NBN Gain Is Predictive for Adverse Outcome Following Image-Guided Radiotherapy for Localized Prostate Cancer, Oncotarget, 3:e133, 2014.

    Lapin, V., Shirdel, E., Wei, X., Mason, J., Jurisica, I., Mak, T.W., Kinome-wide screening of HER2+ breast cancer cells for molecules that mediate cell proliferation or sensitize cells to trastuzumab therapy, Oncogenesis, 3, e133; doi:10.1038/oncsis.2014.45, 2014.

    Tu WB, Helander S, Pilstål R, Hickman KA, Lourenco C, Jurisica I, Raught B, Wallner B, Sunnerhagen M, Penn LZ. Myc and its interactors take shape. Biochim Biophys Acta. pii: S1874-9399(14)00154-0.

    This project runs on BOINC software. Visit BOINC or WCG, download and install the software, and attach to the project. While you are at BOINC and WCG, look over the other projects; you may find others of interest.

    WCG

    BOINC

     
  • richardmitnick 5:14 pm on January 28, 2015
    Tags: BOINC

    From WCG: “Using grid computing to understand an underwater world” 

    New WCG Logo

    SustainableWater screensaver

    28 Jan 2015
    By: Gerard P. Learmonth Sr., M.B.A., M.S., Ph.D.
    University of Virginia

    The Computing for Sustainable Water (CFSW) project focused on the Chesapeake Bay watershed in the United States. This is the largest watershed in the US and covers all or part of six states (Virginia, West Virginia, Maryland, Delaware, Pennsylvania, and New York) and Washington, D.C., the nation’s capital. The Bay has been under environmental pressure for many years. Previous efforts to address the problem have been unsuccessful. As a result, the size of the Bay’s anoxic region (dead zone) continues to affect the native blue crab (Callinectes sapidus) population.

    Callinectes sapidus – the blue crab

    The problem is largely a result of nutrient flow (nitrogen and phosphorus) into the Bay that occurs due to agricultural, industrial, and land development activities. Federal, state, and local agencies attempt to control nutrient flow through a set of incentives known as Best Management Practices (BMPs). Entities adopting BMPs typically receive payments. Each BMP is believed to be helpful in some way for controlling nutrient flow. However, the effectiveness of the various BMPs has not been studied on an appropriately large scale. Indeed, there is no clear scientific evidence for the effectiveness of some BMPs that have already been widely adopted.

    The Computing for Sustainable Water project conducted a set of large-scale simulation experiments of the impact of BMPs on nutrient flow into the Chesapeake Bay and the resulting environmental health of the Bay. Table 1 lists the 23 BMPs tested in this project. Initially, a simulation run with no BMPs was produced as a baseline case. Then each individual BMP was run separately and compared with the baseline. Table 2 shows the results of these statistical comparisons.

    Table 1. Best Management Practices employed in the Chesapeake Bay watershed

    Table 2. Statistical results comparing each BMP to a baseline (no-BMPs) simulation experiment.

    Student’s t-tests of individual BMPs compared to the base case of no BMPs: * = significant at α = 0.10; ** = significant at α = 0.05; *** = significant at α = 0.01.
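    For readers who want to see how the per-BMP comparisons in Table 2 are computed, here is a minimal sketch; the nutrient-load numbers are synthetic placeholders, not the project’s simulation output.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Synthetic nutrient-load outputs: many simulation runs per scenario.
    baseline = rng.normal(loc=100.0, scale=10.0, size=500)   # no-BMP baseline
    bmp_runs = {f"BMP {k}": rng.normal(100.0 - k, 10.0, size=500) for k in (4, 7, 23)}

    def stars(p):
        # Significance markers matching the table notes above.
        return "***" if p < 0.01 else "**" if p < 0.05 else "*" if p < 0.10 else ""

    for name, runs in bmp_runs.items():
        t, p = stats.ttest_ind(runs, baseline)               # Student's t-test
        print(f"{name}: t = {t:.2f}, p = {p:.4f} {stars(p)}")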

    These results identify several BMPs that are effective in reducing the corresponding nitrogen and phosphorous loads entering the Chesapeake Bay. In particular, BMPs 4, 7, and 23 are highly effective. These results are very informative for policymakers not only in the Chesapeake Bay watershed but globally as well, because many regions of the world experience similar problems and employ similar BMPs.

    In all, World Community Grid members facilitated over 19.1 million experiments, including runs that combined multiple BMPs to test whether combinations are more effective than individual practices. The analysis of these combination experiments continues.

    We would like to once again express our gratitude to the World Community Grid community. A project of this size and scope simply would not have been possible without your help.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    “World Community Grid (WCG) brings people together from across the globe to create the largest non-profit computing grid benefiting humanity. It does this by pooling surplus computer processing power. We believe that innovation combined with visionary scientific research and large-scale volunteerism can help make the planet smarter. Our success depends on like-minded individuals – like you.”

    WCG projects run on BOINC software from UC Berkeley.

    BOINC, more properly the Berkeley Open Infrastructure for Network Computing, is a leader in the fields of distributed computing, grid computing and citizen cyberscience.

    CAN ONE PERSON MAKE A DIFFERENCE? YOU BETCHA!!

    “Download and install secure, free software that captures your computer’s spare power when it is on, but idle. You will then be a World Community Grid volunteer. It’s that simple!” You can download the software at either WCG or BOINC.

    Please visit the project pages-
    Outsmart Ebola Together

    Mapping Cancer Markers

    Uncovering Genome Mysteries

    Say No to Schistosoma

    GO Fight Against Malaria

    Drug Search for Leishmaniasis

    Computing for Clean Water

    The Clean Energy Project

    Discovering Dengue Drugs – Together

    Help Cure Muscular Dystrophy

    Help Fight Childhood Cancer

    Help Conquer Cancer

    Human Proteome Folding

    FightAIDS@Home

    Computing for Sustainable Water

     
  • richardmitnick 3:37 pm on January 27, 2015
    Tags: BOINC

    From JPL: “Citizen Scientists Lead Astronomers to Mystery Objects in Space” 

    JPL

    January 27, 2015
    Whitney Clavin
    Jet Propulsion Laboratory, Pasadena, California
    818-354-4673
    whitney.clavin@jpl.nasa.gov

    Volunteers using the web-based Milky Way Project brought star-forming features nicknamed “yellowballs” to the attention of researchers, who later showed that they are a phase of massive star formation. The yellow balls, which are several hundred to thousands of times the size of our solar system, are pictured here in the center of this image taken by NASA’s Spitzer Space Telescope. Infrared light has been assigned different colors; yellow occurs where green and red overlap. The yellow balls represent an intermediary stage of massive star formation that takes place before massive stars carve out cavities in the surrounding gas and dust (seen as green-rimmed bubbles with red interiors in this image).

    Infrared light of 3.6 microns is blue; 8-micron light is green; and 24-micron light is red.

    This series of images shows three evolutionary phases of massive star formation, as pictured in infrared images from NASA’s Spitzer Space Telescope. The stars start out in a thick cocoon of dust (left), evolve into hotter features dubbed “yellowballs” (center), and finally blow out cavities in the surrounding dust and gas, resulting in green-rimmed bubbles with red centers (right). The process shown here takes roughly a million years. Even the oldest phase shown here is fairly young, as massive stars live a few million years. Eventually, the stars will migrate away from their birth clouds.

    In this image, infrared light of 3.6 microns is blue; 8-micron light is green; and 24-micron light is red.

    NASA’s Jet Propulsion Laboratory, Pasadena, California, manages the Spitzer Space Telescope mission for NASA’s Science Mission Directorate, Washington. Science operations are conducted at the Spitzer Science Center at the California Institute of Technology in Pasadena. Spacecraft operations are based at Lockheed Martin Space Systems Company, Littleton, Colorado. Data are archived at the Infrared Science Archive housed at the Infrared Processing and Analysis Center at Caltech. Caltech manages JPL for NASA.

    NASA Spitzer Telescope
    Spitzer

    MilkyWay@home

    Milkyway@Home uses the BOINC platform to harness volunteered computing resources, creating a highly accurate three dimensional model of the Milky Way galaxy using data gathered by the Sloan Digital Sky Survey (SDSS). This project enables research in both astroinformatics and computer science.

    SDSS Telescope

    BOINC

    In computer science, the project is investigating different optimization methods that are resilient to the fault-prone, heterogeneous and asynchronous nature of Internet computing, such as evolutionary and genetic algorithms, as well as asynchronous Newton methods. In astroinformatics, MilkyWay@Home is generating highly accurate three-dimensional models of the Sagittarius stream, which provide knowledge about how the Milky Way galaxy was formed and how tidal tails are created when galaxies merge.

    Milkyway@Home is a joint effort between Rensselaer Polytechnic Institute‘s departments of Computer Science and Physics, Applied Physics and Astronomy. Feel free to contact us via our forums, or email astro@cs.lists.rpi.edu.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NASA JPL Campus

    Jet Propulsion Laboratory (JPL) is a federally funded research and development center and NASA field center located in the San Gabriel Valley area of Los Angeles County, California, United States. Although the facility has a Pasadena postal address, it is actually headquartered in the city of La Cañada Flintridge, on the northwest border of Pasadena. JPL is managed by the nearby California Institute of Technology (Caltech) for the National Aeronautics and Space Administration. The Laboratory’s primary function is the construction and operation of robotic planetary spacecraft, though it also conducts Earth-orbit and astronomy missions. It is also responsible for operating NASA’s Deep Space Network.

    Caltech Logo

     
    • academix2015 4:22 pm on January 27, 2015

      Web based Milky Way project would open up new opportunities for amateur astronomers. Thank you.


    • academix2015 4:22 pm on January 27, 2015

      Reblogged this on Academic Avenue and commented:
      How about studying the intricacies of the astronomical processes and phenomena in the Milky Way?


  • richardmitnick 7:51 pm on December 17, 2014
    Tags: BOINC

    From HCC at WCG: “New imaging tools accelerate cancer research” 

    New WCG Logo

    15 Dec 2014
    Help Conquer Cancer research team

    Summary
    The Help Conquer Cancer research team at the Ontario Cancer Institute continues to analyze the millions of protein-crystallization images processed by World Community Grid volunteers, by building new classifiers based on a combination of Grid-processed image features, and deep features learned directly from image pixels. Improvements in image classification, along with new data provided by our collaborators increase possibilities for discovering useful and interesting patterns in protein crystallization.

    Dear World Community Grid volunteers,

    Since our last Help Conquer Cancer (HCC) project update, we have continued to analyze the results that you generated. Here, we provide an update on that analysis work, and new research directions the project is taking.

    Analyzing HCC Results

    Volunteers for the HCC project received raw protein crystallization images and processed each image into a set of over 12,000 numeric image features. These features were implemented by a combination of image-processing algorithms, and refined over several generations of image-processing research leading up to the launch of HCC. The features (HCC-processed images) were then used to train a classifier that would convert each image’s features into a label describing the crystallization reaction captured in the image.

    Importantly, these thousands of features were human-designed. Most protein crystals have straight edges, for example, and so certain features that search for straight lines were incorporated into HCC. This traditional method of building an image classifier involves two types of learning: the crystallographer or image-processing expert (human), who studies the image and designs features, and the classifier (computer model), which learns to predict image labels from the designed features. The image classifier itself never sees the pixels; any improvements to the feature design must come from the human expert.

    More recently, we have applied a powerful computer-vision/machine-learning technology that improves this process by closing the feedback loop between pixels, features and the classifier: deep convolutional neural networks (CNNs). These models learn their own features directly from the image pixels; thus, they could complement human-designed features.

    CrystalNet

    We call our deep convolutional neural network (CNN) CrystalNet. Our preliminary results suggest that it is an accurate and efficient classifier for protein crystallization images.

    In a CNN, multiple filters act like pattern detectors applied across the input image. Each layer-1 feature map shows the activation responses from a single filter. “Deep” CNNs are CNNs with many layers: higher-level filters stacked upon lower-level filters. Information from image pixels at the bottom of the network rises upwards through layers of filters until the deep features emerge from the top. Although the example shown in Figure 1 (below) has only 6 layers, more layers can easily be added. Including other image preprocessing and normalization layers, CrystalNet has 13 layers in total.

    Fig. 1: Diagram of the standard convolutional neural network. For a single feature map, the convolution operation applies the inner product of the same filter across the input image. 2D topography is preserved in the feature-map representation. Spatial pooling performs image down-sampling of the feature maps by a factor of 2. Fully connected layers are the same as standard neural network layers. Outputs are discrete random variables or “1-of-K” codes. Element-wise nonlinearity is applied at every layer of the network.
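
    To make the caption concrete, here is a minimal sketch of this kind of architecture (convolutions, spatial pooling by a factor of 2, fully connected layers, 1-of-K outputs) written in PyTorch. It is only an illustration, not CrystalNet itself; the layer counts, filter sizes and 128x128 grayscale input are assumptions.

    import torch
    import torch.nn as nn

    class TinyCrystalClassifier(nn.Module):
        """Minimal CNN sketch: conv/pool stages feeding fully connected layers."""
        def __init__(self, n_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
                nn.MaxPool2d(2),                 # spatial pooling, factor of 2
                nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 32 * 32, 128), nn.ReLU(),   # assumes 128x128 input
                nn.Linear(128, n_classes),                 # 1-of-K class scores
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    # One fake grayscale crystallization image, 128x128 pixels.
    logits = TinyCrystalClassifier()(torch.randn(1, 1, 128, 128))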

    Figure 2 shows examples of the first-layer filters after training. These filters extract interesting features useful for protein crystallography classification. Note that some of these filters look like segments of straight lines. Others resemble microcrystal-detecting filters previously designed for HCC.

    Fig. 2: Selected examples of the first-layer filters learned by our deep convolutional neural net. These filters have resemblances to human-designed feature extractors such as edge (top row), microcrystal (bottom), texture, and other detectors from HCC and computer vision generally.

    Figure 3 shows CrystalNet’s crystal-detection performance across 10 image classes in the test set. CrystalNet produces an area under the curve (AUC) of 0.9894 for crystal-class classification. At a 5% false-positive rate, our model can accurately detect 98% of the positive cases.

    CrystalNet can effectively provide labels for images generated during the high-throughput process, with a low miss rate and high precision for crystal detection. Moreover, CrystalNet operates in real time: labeling 1,536 images from a single plate requires only approximately 2 seconds. The combination of accuracy and efficiency makes a fully automated high-throughput crystallography pipeline possible, substantially reducing labor-intensive screening.
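
    The quoted metrics (area under the ROC curve, and the detection rate at a fixed 5% false-positive rate) can be computed for any set of classifier scores. The sketch below uses synthetic labels and scores, not the team’s evaluation code.

    import numpy as np
    from sklearn.metrics import roc_curve, auc

    rng = np.random.default_rng(1)
    # Synthetic scores: label 1 = image contains a crystal, 0 = it does not.
    y_true = rng.integers(0, 2, size=5000)
    y_score = y_true * 0.6 + rng.random(5000) * 0.7   # noisy but informative

    fpr, tpr, _ = roc_curve(y_true, y_score)
    print("AUC:", auc(fpr, tpr))
    # True-positive rate at a 5% false-positive rate, as quoted above.
    print("TPR at 5% FPR:", np.interp(0.05, fpr, tpr))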

    New data from collaborators

    Our collaborators at the High-Throughput Screening Lab at the Hauptman-Woodward Medical Research Institute (HWI) supplied the original protein-crystallization image data. They continue to generate more, and are using versions of the image classifiers derived from the HCC project.

    Our research on the predictive science of protein crystallization has been limited by the information we have about the proteins being crystallized. Our research partners at HWI run crystallization trials on proteins supplied by labs all over the world. Often, protein samples are missing the identifying information that allows us to link these samples to global protein databases (e.g., Uniprot). Missing protein identifiers prevent us from integrating these samples into our data-mining system, and thereby linking the protein’s physical and chemical properties to each cocktail and corresponding crystallization response.

    Recently, however, HWI crystallographers were able to compile and share with us a complete record of all crystallization-trial proteins produced by the North-Eastern Structural Genomics (NESG) group. This dataset represents approximately 25% of all proteins processed by HCC volunteers on World Community Grid. Now all our NESG protein records are complete with each protein’s Uniprot ID, amino-acid sequence, and domain signatures.

    With more complete protein/cocktail information, combined with more accurate image labels from improved deep neural-net image classifiers, we anticipate greater success mining our protein-crystallization database. Work is ongoing.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    “World Community Grid (WCG) brings people together from across the globe to create the largest non-profit computing grid benefiting humanity. It does this by pooling surplus computer processing power. We believe that innovation combined with visionary scientific research and large-scale volunteerism can help make the planet smarter. Our success depends on like-minded individuals – like you.”

    WCG projects run on BOINC software from UC Berkeley.

    BOINC, more properly the Berkeley Open Infrastructure for Network Computing, is a leader in the fields of distributed computing, grid computing and citizen cyberscience.

    CAN ONE PERSON MAKE A DIFFERENCE? YOU BETCHA!!

    “Download and install secure, free software that captures your computer’s spare power when it is on, but idle. You will then be a World Community Grid volunteer. It’s that simple!” You can download the software at either WCG or BOINC.

    Please visit the project pages-
    Outsmart Ebola Together

    Mapping Cancer Markers

    Uncovering Genome Mysteries

    Say No to Schistosoma

    GO Fight Against Malaria

    Drug Search for Leishmaniasis

    Computing for Clean Water

    The Clean Energy Project

    Discovering Dengue Drugs – Together

    Help Cure Muscular Dystrophy

    Help Fight Childhood Cancer

    Help Conquer Cancer

    Human Proteome Folding

    FightAIDS@Home

    World Community Grid is a social initiative of IBM Corporation.

    IBM – Smarter Planet

     
  • richardmitnick 10:16 am on December 17, 2014
    Tags: BOINC, TheSkyNet

    BOINC New Project – The SkyNet 


    BOINC, more properly the Berkeley Open Infrastructure for Network Computing, was developed at UC Berkeley and is a leader in the fields of distributed computing, grid computing and citizen cyberscience.

    New Project for BOINC

    theSkyNet

    Have a computer? Want to help astronomers make awesome discoveries and understand our Universe? theSkyNet needs you!

    By connecting hundreds and thousands of computers together through the Internet, it’s possible to simulate a single machine capable of doing some pretty amazing stuff. theSkyNet is a community computing project dedicated to radio astronomy. Astronomers use telescopes to observe the Universe at many different wavelengths. All day, every day, signals from distant galaxies, stars and other cosmic bits and pieces arrive at the Earth in the form of visible (optical) light, radio waves, infrared radiation, ultraviolet radiation and many other types of waves. Once detected by a telescope, the signal is processed by computers and used by scientists to support a theory or inspire a new one.

    When you join theSkyNet, your computer will help astronomers process information and answer some of the big questions we have about the Universe. As part of the theSkyNet community, your computer will be called upon to process small packets of data, but you won’t even notice it’s going on. The key to theSkyNet is to have lots of computers connected, with each doing only a little, but it all adding up to a lot.

    At the heart of theSkyNet is this website, theSkyNet.org, where you’ll see how the alliances you’ve joined stack up against others. The more data you and your alliances process, the higher you’ll climb in the rankings. But that’s not all: as the theSkyNet project evolves, we’ll be adding more features for you to explore. In the pipeline we have visualisation tools to help you understand the data you’re processing and even an opportunity to help identify and catalogue radio wave sources in the sky.

    At the moment theSkyNet has two main science projects for you to contribute to, theSkyNet SourceFinder and theSkyNet POGS. You can find out more about theSkyNet’s science and our two projects at the Science Portals.

    theSkyNet SourceFinder

    TheSkyNet SourceFinder was the first science project on theSkyNet. It’s based on a Java distributed computing technology called Nereus.

    Right now SourceFinder is busy processing an absolutely huge simulated cube of data that’s over one terabyte (1TB) in size.

    Automatically working through a data set this big has never been done before, and ICRAR’s astronomers are eagerly awaiting the results from SourceFinder to prove it’s possible.

    The next generation of amazing radio telescopes, such as CSIRO’s Australian Square Kilometre Array Pathfinder (ASKAP), will produce data cubes just like the one that SourceFinder is currently working on. When ASKAP starts collecting data from the sky in 2015 astronomers need to be ready to process the information it collects – and to find the location of radio sources, like galaxies, within it.

    Australian Square Kilometer Array Pathfinder Project
    ASKAP

    ASKAP will produce many thousands of data sets like the cube SourceFinder is working on, so figuring out how to process them automatically, with maximum efficiency and accuracy is one of the challenges astronomers are facing.

    SourceFinder is using some code called Duchamp that has been built to automatically tell the difference between background noise and real radio sources in data from a radio telescope. By processing this 1TB chunk of data on theSkyNet SourceFinder astronomers are:

    working out the best way to run Duchamp on big data in the future,
    proving that distributed computing is a real solution to process such detailed (and large) radio astronomy data, and
    learning more about how they’re going to get the most information out of telescopes like ASKAP and the SKA.

    The SkyNet POGS


    TheSkyNet POGS is a research project that uses Internet-connected computers to do research in astronomy. We will combine the spectral coverage of GALEX, Pan-STARRS1, and WISE to generate a multi-wavelength UV-optical-NIR galaxy atlas for the nearby Universe. We will measure physical parameters (such as stellar mass surface density, star formation rate surface density, attenuation, and first-order star formation history) on a resolved pixel-by-pixel basis using spectral energy distribution (SED) fitting techniques in a distributed computing mode. You can participate by downloading and running a free program on your computer.
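
    In its simplest form, pixel-by-pixel SED fitting compares each pixel’s measured fluxes in several bands against a library of model spectra and keeps the best match. The toy chi-square version below is only illustrative; the template library and fluxes are invented and bear no relation to the models POGS actually fits.

    import numpy as np

    bands = ["UV", "optical", "NIR"]

    # Hypothetical model library: each template predicts one flux per band.
    templates = {
        "young stellar population": np.array([3.0, 2.0, 1.0]),
        "old stellar population":   np.array([0.5, 2.0, 3.0]),
        "dusty starburst":          np.array([1.0, 1.5, 4.0]),
    }

    def best_fit(pixel_flux, pixel_err):
        """Return the template with the lowest chi-square for one pixel."""
        chi2 = {name: np.sum(((pixel_flux - model) / pixel_err) ** 2)
                for name, model in templates.items()}
        return min(chi2, key=chi2.get)

    # A synthetic 2x2-pixel galaxy image with per-band fluxes and errors.
    rng = np.random.default_rng(2)
    fluxes = rng.uniform(0.5, 4.0, size=(2, 2, len(bands)))
    errors = np.full_like(fluxes, 0.3)
    labels = [[best_fit(fluxes[i, j], errors[i, j]) for j in range(2)] for i in range(2)]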

    NASA Galaxy Telescope
    NASA/GALEX

    Pan-STARRS1 Telescope
    Pan-STARRS1 interior
    Pan-STARRS1

    NASA Wise Telescope
    NASA/WISE

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Visit the BOINC web page, click on Choose projects and check out some of the very worthwhile studies you will find. Then click on Download and run BOINC software / All Versions. Download and install the current software for your 32-bit or 64-bit system, for Windows, Mac or Linux. When you install BOINC, it will install its screen savers on your system as a default; you can choose to run the various project screen savers or turn them off. Once BOINC is installed, in BOINC Manager / Tools, click on “Add project or account manager” to attach to projects. Many BOINC projects are listed there, but not all, and maybe not the one(s) in which you are interested; you can get the proper URL for attaching at each project’s web page. BOINC will never interfere with any other work on your computer.
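
    If you prefer the command line to the BOINC Manager GUI, BOINC also ships a command-line tool, boinccmd, that can attach the running client to a project. The Python sketch below assumes boinccmd is installed and on your PATH; the project URL is only an example, and the account key is a placeholder you receive when you register with a project.

    import subprocess

    PROJECT_URL = "http://setiathome.berkeley.edu/"   # example project URL
    ACCOUNT_KEY = "YOUR_ACCOUNT_KEY_HERE"             # placeholder account key

    # Attach the locally running BOINC client to the project...
    subprocess.run(["boinccmd", "--project_attach", PROJECT_URL, ACCOUNT_KEY], check=True)

    # ...then confirm the attachment and see the project's status.
    subprocess.run(["boinccmd", "--get_project_status"], check=True)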

    MAJOR PROJECTS RUNNING ON BOINC SOFTWARE

    SETI@home The search for extraterrestrial intelligence. “SETI (Search for Extraterrestrial Intelligence) is a scientific area whose goal is to detect intelligent life outside Earth. One approach, known as radio SETI, uses radio telescopes to listen for narrow-bandwidth radio signals from space. Such signals are not known to occur naturally, so a detection would provide evidence of extraterrestrial technology.

    Radio telescope signals consist primarily of noise (from celestial sources and the receiver’s electronics) and man-made signals such as TV stations, radar, and satellites. Modern radio SETI projects analyze the data digitally. More computing power enables searches to cover greater frequency ranges with more sensitivity. Radio SETI, therefore, has an insatiable appetite for computing power.

    Previous radio SETI projects have used special-purpose supercomputers, located at the telescope, to do the bulk of the data analysis. In 1995, David Gedye proposed doing radio SETI using a virtual supercomputer composed of large numbers of Internet-connected computers, and he organized the SETI@home project to explore this idea. SETI@home was originally launched in May 1999.”


    SETI@home is the birthplace of BOINC software. Originally, it only ran as a screensaver when the computer on which it was installed was doing no other work. With the power and memory available today, BOINC can run 24/7 without in any way interfering with other ongoing work.

    The famous SETI@home screen saver, a beauteous thing to behold.

    einstein@home The search for pulsars. “Einstein@Home uses your computer’s idle time to search for weak astrophysical signals from spinning neutron stars (also called pulsars) using data from the LIGO gravitational-wave detectors, the Arecibo radio telescope, and the Fermi gamma-ray satellite. Einstein@Home volunteers have already discovered more than a dozen new neutron stars, and we hope to find many more in the future. Our long-term goal is to make the first direct detections of gravitational-wave emission from spinning neutron stars. Gravitational waves were predicted by Albert Einstein almost a century ago, but have never been directly detected. Such observations would open up a new window on the universe, and usher in a new era in astronomy.”

    MilkyWay@Home “Milkyway@Home uses the BOINC platform to harness volunteered computing resources, creating a highly accurate three dimensional model of the Milky Way galaxy using data gathered by the Sloan Digital Sky Survey. This project enables research in both astroinformatics and computer science.”

    Leiden Classical “Join in and help to build a Desktop Computer Grid dedicated to general Classical Dynamics for any scientist or science student!”

    World Community Grid (WCG) World Community Grid is a special case at BOINC. WCG is part of the social initiative of IBM Corporation and the Smarter Planet. WCG currently has under its umbrella eleven disparate projects at globally wide-ranging institutions and universities. Most projects relate to biological and medical subject matter. There are also projects for clean water and clean renewable energy. WCG projects are each treated, respectfully and respectably, on their own at this blog. Watch for news.

    Rosetta@home “Rosetta@home needs your help to determine the 3-dimensional shapes of proteins in research that may ultimately lead to finding cures for some major human diseases. By running the Rosetta program on your computer while you don’t need it you will help us speed up and extend our research in ways we couldn’t possibly attempt without your help. You will also be helping our efforts at designing new proteins to fight diseases such as HIV, Malaria, Cancer, and Alzheimer’s….”

    GPUGrid.net “GPUGRID.net is a distributed computing infrastructure devoted to biomedical research. Thanks to the contribution of volunteers, GPUGRID scientists can perform molecular simulations to understand the function of proteins in health and disease.” GPUGrid is a special case in that all processor work done by the volunteers is GPU processing. There is no CPU processing, which is the more common processing. Other projects (Einstein, SETI, Milky Way) also feature GPU processing, but they offer CPU processing for those not able to do work on GPU’s.


    These projects are just the oldest and most prominent projects. There are many others from which you can choose.

    There are currently some 300,000 users with about 480,000 computers working on BOINC projects. That is in a world of over one billion computers. We sure could use your help.

    My BOINC


     
  • richardmitnick 10:28 pm on December 3, 2014
    Tags: BOINC

    From isgtw: “Volunteer computing: 10 years of supporting CERN through LHC@home” 


    international science grid this week

    December 3, 2014
    Andrew Purcell

    LHC@home recently celebrated a decade since its launch in 2004. Through its SixTrack project, the LHC@home platform harnesses the power of volunteer computing to model the progress of sub-atomic particles traveling at nearly the speed of light around the Large Hadron Collider (LHC) at CERN, near Geneva, Switzerland. It typically simulates about 60 particles whizzing around the collider’s 27km-long ring for ten seconds, or up to one million loops. Results from SixTrack were used to help the engineers and physicists at CERN design stable beam conditions for the LHC, so today the beams stay on track and don’t cause damage by flying off course into the walls of the vacuum tube. It’s now also being used to carry out simulations relevant to the design of the next phase of the LHC, known as the High-Luminosity LHC.

    CERN LHC Map
    CERN LHC Grand Tunnel
    CERN LHC particles
    LHC at CERN

    “The results of SixTrack played an essential role in the design of the LHC, and the high-luminosity upgrades will naturally require additional development work on SixTrack,” explains Frank Schmidt, who works in CERN’s Accelerators and Beam Physics Group of the Beams Department and is the main author of the SixTrack code. “In addition to its use in the design stage, SixTrack is also a key tool for the interpretation of data taken during the first run of the LHC,” adds Massimo Giovannozzi, who also works in CERN’s Accelerators and Beams Physics Group. “We use it to improve our understanding of particle dynamics, which will help us to push the LHC performance even further over the coming years of operation.” He continues: “Managing a project like SixTrack within LHC@home requires resources and competencies that are not easy to find: Igor Zacharov, a senior scientist at the Particle Accelerator Physics Laboratory (LPAP) of the Swiss Federal Institute of Technology in Lausanne (EPFL), provides valuable support for SixTrack by helping with BOINC integration.”

    Volunteer computing is a type of distributed computing through which members of the public donate computing resources (usually processing power) to aid research projects. Image courtesy Eduardo Diez Viñuela, Flickr (CC BY-SA 2.0).

    Before LHC@home was created, SixTrack was run only on desktop computers at CERN, using a platform called the Compact Physics Screen Saver (CPSS). This proved to be a useful tool for a proof of concept, but it was first with the launch of the LHC@home platform in 2004 that things really took off. “I am surprised and delighted by the support from our volunteers,” says Eric McIntosh, who formerly worked in CERN’s IT Department and is now an honorary member of the Beams Department. “We now have over 100,000 users all over the world and many more hosts. Every contribution is welcome, however small, as our strength lies in numbers.”

    Virtualization to the rescue

    Building on the success of SixTrack, the Virtual LHC@home project (formerly known as Test4Theory) was launched in 2011. It enables users to run simulations of high-energy particle physics using their home computers, with the results submitted to a database used as a common resource by both experimental and theoretical scientists working on the LHC.

    Whereas the code for SixTrack was ported for running on Windows, OS X, and Linux, the high-energy-physics code used by each of the LHC experiments is far too large to port in a similar way. It is also being constantly updated. “The experiments at CERN have their own libraries and they all run on Linux, while the majority of people out there have common-or-garden variety Windows machines,” explains Ben Segal, honorary staff member of CERN’s IT department and chief technology officer of the Citizen Cyberscience Centre. “Virtualization is the way to solve this problem.”

    The birth of the LHC@home platform

    In 2004, Ben Segal and François Grey, who were both members of CERN’s IT department at the time, were asked to plan an outreach event for CERN’s 50th anniversary that would help people around the world to get an impression of the computational challenges facing the LHC. “I had been an early volunteer for SETI@home after it was launched in 1999,” explains Grey. “Volunteer computing was often used as an illustration of what distributed computing means when discussing grid technology. It seemed to me that it ought to be feasible to do something similar for LHC computing and perhaps even combine volunteer computing and grid computing this way.”

    “I contacted David Anderson, the person behind SETI@Home, and it turned out the timing was good, as he was working on an open-source platform called BOINC to enable many projects to use the SETI@home approach,” Grey continues. BOINC (Berkeley Open Infrastructure for Network Computing) is an open-source software platform for computing with volunteered resources. It was first developed at the University of California, Berkeley in the US to manage the SETI@Home project, and uses otherwise idle CPU and GPU cycles on a computer to support scientific research.

    “I vividly remember the day we phoned up David Anderson in Berkeley to see if we could make a SETI-like computing challenge for CERN,” adds Segal. “We needed a CERN application that ran on Windows, as over 90% of BOINC volunteers used that. The SixTrack people had ported their code to Windows and had already built a small CERN-only desktop grid to run it on, as they needed lots of CPU power. So we went with that.”

    A runaway success

    “I was worried that no one would find the LHC as interesting as SETI. Bear in mind that this was well before the whole LHC craziness started with the Angels and Demons movie, and news about possible mini black holes destroying the planet making headlines,” says Grey. “We made a soft launch, without any official announcements, in 2004. To our astonishment, the SETI@home community immediately jumped in, having heard about LHC@home by word of mouth. We had over 1,000 participants in 24 hours, and over 7,000 by the end of the week — our server’s maximum capacity.” He adds: “We’d planned to run the volunteer computing challenge for just three months, at the time of the 50th anniversary. But the accelerator physicists were hooked and insisted the project should go on.”

    Predrag Buncic, who is now coordinator of the offline group within the ALICE experiment, led work to create the CERN Virtual Machine in 2008. He, Artem Harutyunyan (former architect and lead developer of CernVM Co-Pilot), and Segal subsequently adopted this virtualization technology for use within Virtual LHC@home. This has made it significantly easier for the experiments at CERN to create their own volunteer computing applications, since it is no longer necessary for them to port their code. The long-term vision for Virtual LHC@home is to support volunteer-computing applications for each of the large LHC experiments.

    Growth of the platform

    The ATLAS experiment recently launched a project that simulates the creation and decay of supersymmetric bosons and fermions. “ATLAS@Home offers the chance for the wider public to participate in the massive computation required by the ATLAS experiment and to contribute to the greater understanding of our universe,” says David Cameron, a researcher at the University of Oslo in Norway. “ATLAS also gains a significant computing resource at a time when even more resources will be required for the analysis of data from the second run of the LHC.”

    CERN ATLAS New
    ATLAS

    ATLAS@home

    Meanwhile, the LHCb experiment has been running a limited test prototype for over a year now, with an application running Beauty physics simulations set to be launched for the Virtual LHC@home project in the near future. The CMS and ALICE experiments also have plans to launch similar applications.

    CERN LHCb New
    LHCb

    CERN CMS New
    CMS

    CERN ALICE New
    ALICE

    An army of volunteers

    “LHC@home allows CERN to get additional computing resources for simulations that cannot easily be accommodated on regular batch or grid resources,” explains Nils Høimyr, the member of the CERN IT department responsible for running the platform. “Thanks to LHC@home, thousands of CPU years of accelerator beam dynamics simulations for LHC upgrade studies have been done with SixTrack, and billions of events have been simulated with Virtual LHC@home.” He continues: “Furthermore, the LHC@home platform has been an outreach channel, giving publicity to LHC and high-energy physics among the general public.”

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    iSGTW is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, iSGTW is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read iSGTW via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 5:23 pm on November 28, 2014 Permalink | Reply
    Tags: BOINC

    From CERN: “ATLAS@Home looks for CERN volunteers” 

    ATLAS@home

    ATLAS@home

    Mon 01 Dec 2014
    Rosaria Marraffino

    ATLAS@Home is a CERN volunteer computing project that runs simulated ATLAS events. As the project ramps up, the project team is looking for CERN volunteers to test the system before planning a bigger promotion for the public.

    The ATLAS@home outreach website.

    ATLAS@Home is a large-scale research project that runs ATLAS experiment simulation software inside virtual machines hosted by volunteer computers. “People from all over the world offer up their computers’ idle time to run simulation programmes to help physicists extract information from the large amount of data collected by the detector,” explains Claire Adam Bourdarios of the ATLAS@Home project. “The ATLAS@Home project aims to extrapolate the Standard Model at a higher energy and explore what new physics may look like. Everything we’re currently running is preparation for next year’s run.”

    ATLAS@Home became an official BOINC (Berkeley Open Infrastructure for Network Computing) project in May 2014. After a beta test with SUSY events and Z decays, real production started in the summer with inelastic proton-proton interaction events. Since then, the community has grown remarkably and now includes over 10,000 volunteers spread across five continents. “We’re running the full ATLAS simulation and the resulting output files containing the simulated events are integrated with the experiment standard distributed production,” says Bourdarios.

    Compared to other LHC@Home projects, ATLAS@Home is heavier in terms of network traffic and memory requirements. “From the start, we have been successfully challenging the underlying infrastructure of LHC@Home,” says Bourdarios. “Now we’re looking for CERN volunteers to go one step further before doing a bigger public promotion.”

    This simulated event display is created using ATLAS data.

    If you want to join the community and help the ATLAS experiment, you just need to download and run the necessary free software, VirtualBox and BOINC, which are available on NICE. Find out more about the project and how to join on the ATLAS@Home outreach website.

    “This project has huge outreach potential,” adds Bourdarios. “We hope to demonstrate how big discoveries are often unexpected deviations from existing models. This is why we need simulations. We’re also working on an event display, so that people can learn more about the events they have been producing and capture an image of what they have done.”

    If you have any questions about the ATLAS@Home project, e-mail atlas-comp-contact-home@cern.ch.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ATLAS@Home is a research project that uses volunteer computing to run simulations of the ATLAS experiment at CERN. You can participate by downloading and running a free program on your computer.

    ATLAS is a particle physics experiment taking place at the Large Hadron Collider at CERN that searches for new particles and processes using head-on collisions of protons of extraordinarily high energy. Petabytes of data were recorded, processed and analyzed during the first three years of data taking, leading to up to 300 publications covering all aspects of the Standard Model of particle physics, including the discovery of the Higgs boson in 2012.

    Large scale simulation campaigns are a key ingredient for physicists, who permanently compare their data with both “known” physics and “new” phenomena predicted by alternative models of the universe, particles and interactions. This simulation runs on the WLCG Computing Grid and at any one point there are around 150,000 tasks running. You can help us run even more simulation by using your computer’s idle time to run these same tasks.

    No knowledge of particle physics is required, but for those interested in more details, at the moment we simulate the creation and decay of supersymmetric bosons and fermions, new types of particles that we would love to discover next year, as they would help us to shed light on the dark matter mystery!

    This project runs on BOINC software from UC Berkeley.
    Visit BOINC, download and install the software and attach to the project.


     
  • richardmitnick 3:22 pm on November 18, 2014 Permalink | Reply
    Tags: BOINC

    From NOVA: “Why There’s No HIV Cure Yet” 

    [After the NOVA article, I tell you how you and your family, friends, and colleagues can help to find a cure for AIDS and other diseases]

    PBS NOVA

    NOVA

    27 Aug 2014
    Alison Hill

    Over the past two years, the phrase “HIV cure” has flashed repeatedly across newspaper headlines. In March 2013, doctors from Mississippi reported that the disease had vanished in a toddler who was infected at birth. Four months later, researchers in Boston reported a similar finding in two previously HIV-positive men. All three were no longer required to take any drug treatments. The media heralded the breakthrough, and there was anxious optimism among HIV researchers. Millions of dollars of grant funds were earmarked to bring this work to more patients.

    But in December 2013, the optimism evaporated. HIV had returned in both of the Boston men. Then, just this summer, researchers announced the same grim results for the child from Mississippi. The inevitable questions mounted from the baffled public. Will there ever be a cure for this disease? As a scientist researching HIV/AIDS, I can tell you there’s no straightforward answer. HIV is a notoriously tricky virus, one that’s eluded promising treatments before. But perhaps just as problematic is the word “cure” itself.

    Science has its fair share of trigger words. Biologists prickle at the words “vegetable” and “fruit”—culinary terms which are used without a botanical basis—chemists wrinkle their noses at “chemical free,” and physicists dislike calling “centrifugal” a force—it’s not; it only feels like one. If you ask an HIV researcher about a cure for the disease, you’ll almost certainly be chastised. What makes “cure” such a heated word?

    HIV hijacks the body’s immune system by attacking T cells.

    It all started with a promise. In the early 1980s, doctors and public health officials noticed large clusters of previously healthy people whose immune systems were completely failing. The new condition became known as AIDS, for “acquired immunodeficiency syndrome.” A few years later, in 1984, researchers discovered the cause—the human immunodeficiency virus, now known commonly as HIV. On the day this breakthrough was announced, health officials assured the public that a vaccine to protect against the dreaded infection was only two years away. Yet here we are, 30 years later, and there’s still no vaccine. This turned out to be the first of many overzealous predictions about controlling the HIV epidemic or curing infected patients.

    The progression from HIV infection to AIDS and eventual death occurs in over 99% of untreated cases—making it more deadly than Ebola or the plague. Despite being identified only a few decades ago, AIDS has already killed 25 million people and currently infects another 35 million, and the World Health Organization lists it as the sixth leading cause of death worldwide.

    HIV disrupts the body’s natural disease-fighting mechanisms, which makes it particularly deadly and complicates efforts to develop a vaccine against it. Like all viruses, HIV gets inside individual cells in the body and hijacks their machinery to make thousands of copies of itself. HIV replication is especially hard for the body to control because the white blood cells it infects, and eventually kills, are a critical part of the immune system. Additionally, when HIV copies its genes, it does so sloppily. This causes it to quickly mutate into many different strains. As a result, the virus easily outwits the body’s immune defenses, eventually throwing the immune system into disarray. That gives other obscure or otherwise innocuous infections a chance to flourish in the body—a defining feature of AIDS.

    Early Hope

    In 1987, the FDA approved AZT as the first drug to treat HIV. With only two years between when the drug was identified in the lab and when it was available for doctors to prescribe, it was—and remains—the fastest approval process in the history of the FDA. AZT was widely heralded as a breakthrough. But as the movie Dallas Buyers Club poignantly retells, AZT was not the miracle drug many hoped. Early prescriptions often elicited toxic side-effects and only offered a temporary benefit, as the virus quickly mutated to become resistant to the treatment. (Today, the toxicity problems have been significantly reduced, thanks to lower doses.) AZT remains a shining example of scientific bravura and is still an important tool to slow the infection, but it is far from the cure the world had hoped for.

    In three decades, over 25 highly-potent drugs have been developed and FDA-approved to treat HIV.

    Then, in the mid-1990s, some mathematicians began probing the data. Together with HIV scientists, they suggested that by taking three drugs together, we could avoid the problem of drug resistance. The chance that the virus would have enough mutations to allow it to avoid all drugs at once, they calculated, would simply be too low to worry about. When the first clinical trials of these “drug cocktails” began, both mathematical and laboratory researchers watched the levels of virus drop steadily in patients until they were undetectable. They extrapolated this decline downwards and calculated that, after two to three years of treatment, all traces of the virus should be gone from a patient’s body. When that happened, scientists believed, drugs could be withdrawn, and finally, a cure achieved. But when the time came for the first patients to stop their drugs, the virus again seemed to outwit modern medicine. Within a few weeks of the last pill, virus levels in patients’ blood sprang up to pre-treatment levels—and stayed there.
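    A rough back-of-the-envelope version of that calculation, with every number below assumed purely for illustration rather than taken from the article or from clinical data, looks like this:

```python
# Why triple-drug cocktails defeat resistance: a toy calculation.
# Both numbers below are illustrative assumptions, not clinical figures.

p_single = 1e-5          # assumed chance a new virion is resistant to one given drug
virions_per_day = 1e10   # assumed virions produced per day in an untreated patient

for n_drugs in (1, 2, 3):
    p_all_drugs = p_single ** n_drugs                 # assumes independent mutations
    resistant_per_day = virions_per_day * p_all_drugs
    print(f"{n_drugs} drug(s): P(resistant to all) = {p_all_drugs:.0e}, "
          f"expected resistant virions per day = {resistant_per_day:.0e}")

# With these assumed inputs: one drug yields ~100,000 resistant virions every
# day, two drugs about one per day, and three drugs roughly one every 100,000
# days -- the sense in which simultaneous resistance to all three becomes
# "too low to worry about".
```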

    In the three decades since, over 25 more highly-potent drugs have been developed and FDA-approved to treat HIV. When two to five of them are combined into a drug cocktail, the mixture can shut down the virus’s replication, prevent the onset of AIDS, and return life expectancy to a normal level. However, patients must continue taking these treatments for their entire lives. Though better than the alternative, drug regimens are still inconvenient and expensive, especially for patients living in the developing world.

    Given modern medicine’s success in curing other diseases, what makes HIV different? By definition, an infection is cured if treatment can be stopped without the risk of it resurfacing. When you take a week-long course of antibiotics for strep throat, for example, you can rest assured that the infection is on track to be cleared out of your body. But not with HIV.

    A Bad Memory

    The secret to why HIV is so hard to cure lies in a quirk of the type of cell it infects. Our immune system is designed to store information about infections we have had in the past; this property is called “immunologic memory.” That’s why you’re unlikely to be infected with chickenpox a second time or catch a disease you were vaccinated against. When an infection grows in the body, the white blood cells that are best able to fight it multiply repeatedly, perfecting their infection-fighting properties with each new generation. After the infection is cleared, most of these cells will die off, since they are no longer needed. However, to speed the counter-attack if the same infection returns, some white blood cells will transition to a hibernation state. They don’t do much in this state but can live for an extremely long time, thereby storing the “memory” of past infections. If provoked by a recurrence, these dormant cells will reactivate quickly.

    This near-immortal, sleep-like state allows HIV to persist in white blood cells in a patient’s body for decades. White blood cells infected with HIV will occasionally transition to the dormant state before the virus kills them. In the process, the virus also goes temporarily inactive. By the time drugs are started, a typical infected person contains millions of these cells with this “latent” HIV in them. Drug cocktails can prevent the virus from replicating, but they do nothing to the latent virus. Every day, some of the dormant white blood cells wake up. If drug treatment is halted, the latent virus particles can restart the infection.

    Latent HIV’s near-immortal, sleep-like state allows it to persist in white blood cells in a patient’s body for decades.

    HIV researchers call this huge pool of latent virus the “barrier to a cure.” Everyone’s looking for ways to get rid of it. It’s a daunting task, because although a million HIV-infected cells may seem like a lot, there are around a million times that many dormant white blood cells in the whole body. Finding the ones that contain HIV is a true needle-in-a-haystack problem. All that remains of a latent virus is its DNA, which is extremely tiny compared to the entire human genome inside every cell (about 0.001% of the size).

    Defining a Cure

    Around a decade ago, scientists began to talk amongst themselves about what a hypothetical cure could look like. They settled on two approaches. The first would involve purging the body of latent virus so that if drugs were stopped, there would be nothing left to restart the infection. This was often called a “sterilizing cure.” It would have to be done in a more targeted and less toxic way than previous attempts of the late 1990s, which, because they attempted to “wake up” all of the body’s dormant white blood cells, pushed the immune system into a self-destructive overdrive. The second approach would instead equip the body with the ability to control the virus on its own. In this case, even if treatment was stopped and latent virus reemerged, it would be unable to produce a self-sustaining, high-level infection. This approach was referred to as a “functional cure.”

    The functional cure approach acknowledged that latency alone was not the barrier to a cure for HIV. There are other common viruses that have a long-lived latent state, such as the Epstein-Barr virus that causes infectious mononucleosis (“mono”), but they rarely cause full-blown disease when reactivated. HIV is, of course, different because the immune system in most people is unable to control the infection.

    The first hint that a cure for HIV might be more than a pipe-dream came in 2008 in a fortuitous human experiment later known as the “Berlin patient.” The Berlin patient was an HIV-positive man who had also developed leukemia, a blood cancer to which HIV patients are susceptible. His cancer was advanced, so in a last-ditch effort, doctors completely cleared his bone marrow of all cells, cancerous and healthy. They then transplanted new bone marrow cells from a donor.

    Fortunately for the Berlin patient, doctors were able to find a compatible bone marrow donor who carried a unique HIV-resistance mutation in a gene known as CCR5. They completed the transplant with these cells and waited.

    For the last five years, the Berlin patient has remained off treatment without any sign of infection. Doctors still cannot detect any HIV in his body. While the Berlin patient may be cured, this approach cannot be used for most HIV-infected patients. Bone marrow transplants are extremely risky and expensive, and they would never be conducted in someone who wasn’t terminally ill—especially since current anti-HIV drugs are so good at keeping the infection in check.

    Still, the Berlin patient was an important proof-of-principle case. Most of the latent virus was likely cleared out during the transplant, and even if the virus remained, most strains couldn’t replicate efficiently given the new cells with the CCR5 mutation. The Berlin patient case provides evidence that at least one of the two cure methods (sterilizing or functional), or perhaps a combination of them, is effective.

    Researchers have continued to try to find more practical ways to rid patients of the latent virus in safe and targeted ways. In the past five years, they have identified multiple anti-latency drug candidates in the lab. Many have already begun clinical trials. Each time, people grow optimistic that a cure will be found. But so far, the results have been disappointing. None of the drugs have been able to significantly lower levels of latent virus.

    In the meantime, doctors in Boston have attempted to tease out which of the two cure methods was at work in the Berlin patient. They conducted bone marrow transplants on two HIV-infected men with cancer—but this time, since HIV-resistant donor cells were not available, they just used typical cells. Both patients continued their drug cocktails during and after the transplant in the hopes that the new cells would remain HIV-free. After the transplants, no HIV was detectable, but the real test came when these patients volunteered to stop their drug regimens. When they remained HIV-free a few months later, the results were presented at the International AIDS Society meeting in July 2013. News outlets around the world declared that two more individuals had been cured of HIV.

    Latent virus had likely escaped the detection methods available.

    It quickly became clear that everyone had spoken too soon. Six months later, researchers reported that the virus had suddenly and rapidly returned in both individuals. Latent virus had likely escaped the detection methods available—which are not sensitive enough—and persisted at low, but significant levels. Disappointment was widespread. The findings showed that even very small amounts of latent virus could restart an infection. It also meant that the anti-latency drugs in development would need to be extremely potent to give any hope of a cure.

    But there was one more hope—the “Mississippi baby.” A baby was born to an HIV-infected mother who had not received any routine prenatal testing or treatment. Tests revealed high levels of HIV in the baby’s blood, so doctors immediately started the infant on a drug cocktail, to be continued for life.

    The mother and child soon lost touch with their health care providers. When they were located again a few years later, doctors learned that the mother had stopped giving drugs to the child several months prior. The doctors administered all possible tests to look for signs of the virus, both latent and active, but they didn’t find any evidence. They chose not to re-administer drugs, and a year later, when the virus was still nowhere to be found, they presented the findings to the public. It was once again heralded as a cure.

    Again, it was not to be. Just last month, the child’s doctors announced that the virus had sprung back unexpectedly. It seemed that even starting drugs as soon as infection was detected in the newborn could not prevent the infection from returning over two years later.

    Hope Remains

    Despite our grim track record with the disease, HIV is probably not incurable. Although we don’t have a cure yet, we’ve learned many lessons along the way. Most importantly, we should be extremely careful about using the word “cure,” because for now, we’ll never know if a person is cured until they’re not cured.

    Clearing out latent virus may still be a feasible approach to a cure, but the purge will have to be extremely thorough. We need drugs that can carefully reactivate or remove latent HIV, leaving minimal surviving virus while avoiding the problems that befell earlier tests that reactivated the entire immune system. Scientists have proposed multiple, cutting-edge techniques to engineer “smart” drugs for this purpose, but we don’t yet know how to deliver this type of treatment safely or effectively.

    As a result, most investigations focus on traditional types of drugs. Researchers have developed ways to rapidly scan huge repositories of existing medicines for their ability to target latent HIV. These methods have already identified compounds that were previously used to treat alcoholism, cancer, and epilepsy, and researchers are repurposing them to be tested in HIV-infected patients.

    The less latent virus that remains, the less chance there is that the virus will win the game of chance.

    Mathematicians are also helping HIV researchers evaluate new treatments. My colleagues and I use math to take data collected from just a few individuals and fill in the gaps. One question we’re focusing on is exactly how much latent virus must be removed to cure a patient, or at least to let them stop their drug cocktails for a few years. Each cell harboring latent virus is a potential spark that could restart the infection. But we don’t know when the virus will reactivate. Even once a single latent virus awakens, there are still many barriers it must overcome to restart a full-blown infection. The less latent virus that remains, the less chance there is that the virus will win this game of chance. Math allows us to work out these odds very precisely.
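    A minimal sketch of the kind of odds calculation described here, with every rate invented purely for illustration: if each cell carrying latent virus independently manages to reseed the infection at some small constant rate, the chance of staying rebound-free falls off exponentially with both the size of the reservoir and the time spent off treatment.

```python
import math

# Toy model of viral rebound after stopping treatment.
# Assumption (illustrative only): each latently infected cell independently
# restarts the infection at a constant rate r per year, so
#     P(no rebound after t years) = exp(-N * r * t)
# for a reservoir of N such cells.

r = 1e-5   # assumed per-cell, per-year rate of successful reactivation

for reservoir_size in (1_000_000, 10_000, 100):   # latently infected cells remaining
    for years_off_drugs in (0.5, 2, 10):
        p_no_rebound = math.exp(-reservoir_size * r * years_off_drugs)
        print(f"N = {reservoir_size:>9,} cells, {years_off_drugs:>4} yr off drugs: "
              f"P(still no rebound) = {p_no_rebound:.3f}")

# With a million latent cells, rebound within months is nearly certain; shrink
# the reservoir ten-thousand-fold and a decade off drugs becomes plausible --
# which is why "apparent cures" are an expected outcome of these chance
# dynamics rather than a medical anomaly.
```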

    Our calculations show that “apparent cures”—cases in which a patient’s latent virus levels are low enough to escape detection for months or years without treatment—are not a medical anomaly. In fact, math tells us that they are an expected result of these chance dynamics. It can also help researchers determine how good an anti-latency drug should be before it’s worth testing in a clinical trial.

    Many researchers are working to augment the body’s ability to control the infection, providing a functional cure rather than a sterilizing one. Studies are underway to render anyone’s immune cells resistant to HIV, mimicking the CCR5 mutation that gives some people natural resistance. Vaccines that could be given after infection, to boost the immune response or protect the body from the virus’s ill effects, are also in development.

    In the meantime, treating all HIV-infected individuals—which has the added benefit of preventing new transmissions—remains the best way to control the epidemic and reduce mortality. But the promise of “universal treatment” has also not materialized. Currently, even in the U.S., only 25% of HIV-positive people have their viral levels adequately suppressed by treatment. Worldwide, for every two individuals starting treatment, three are newly infected. While there’s no doubt that we’ve made tremendous progress in fighting the virus, we have a long way to go before the word “cure” is not taboo when it comes to HIV/AIDS.

    See the full article here.

    Did you know that you can help in the fight against AIDS? By donating time on your computer to the FightAIDS@Home project of World Community Grid, you can become a part of the solution. The work is called “crunching” because you are crunching computational data, the results of which are then fed back into the necessary lab work. We save researchers literally millions of hours of lab time in this process.
    Visit World Community Grid (WCG) or Berkeley Open Infrastructure for Network Computing (BOINC). Download the BOINC software and install it on your computer. Then visit WCG and attach to the FAAH project. The project will send you computational work units. Your computer will process them and send the results back to the project, and the project will then send you more work units. It is that simple. You do nothing, unless you want to get into the nuts and bolts of the BOINC software. If you take up this work, and if you see it as valuable, please tell your family, friends, and colleagues, anyone with a computer, even an Android tablet. We found out that my wife’s oncologist’s father in Brazil is a cruncher on two projects from WCG.
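    If you prefer the command line to the BOINC Manager, the BOINC client also ships a small tool called boinccmd that can attach a machine to a project. The sketch below is only a rough illustration: it assumes boinccmd is installed and on your PATH, that the --project_attach option behaves as documented for your BOINC version, and that the account key shown is a placeholder you would replace with the key from your own World Community Grid account page.

```python
import subprocess

# Rough sketch of attaching a computer to World Community Grid from a script.
# Assumes the BOINC client (and its boinccmd tool) is already installed and
# running; ACCOUNT_KEY is a placeholder, not a real key.

WCG_URL = "https://www.worldcommunitygrid.org/"   # project URL (check against the WCG site)
ACCOUNT_KEY = "YOUR_ACCOUNT_KEY_HERE"

# Ask the running BOINC client to attach to the project.
subprocess.run(["boinccmd", "--project_attach", WCG_URL, ACCOUNT_KEY], check=True)

# Confirm the attachment and watch work units start to arrive.
subprocess.run(["boinccmd", "--get_project_status"], check=True)
```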

    This is the project’s web site. Take a look.

    While you are visiting BOINC and WCG, look around at all of the very valuable projects being conducted at some of the world’s most distinguished universities and scientific institutions. You can attach to as many as you like, on one or a number of computers. You can only be a help here, participating in Citizen Science.

    This is a look at the present and past projects at WCG:

    Please visit the project pages-

    Mapping Cancer Markers

    Uncovering Genome Mysteries
    Uncovering Genome Mysteries

    Say No to Schistosoma

    GO Fight Against Malaria

    Drug Search for Leishmaniasis

    Computing for Clean Water

    The Clean Energy Project

    Discovering Dengue Drugs – Together

    Help Cure Muscular Dystrophy

    Help Fight Childhood Cancer

    Help Conquer Cancer

    Human Proteome Folding

    FightAIDS@Home

    World Community Grid is a social initiative of IBM Corporation
    IBM Corporation

    IBM – Smarter Planet

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo
    Lenovo

    Dell
    Dell

     