Tagged: Grid Computing

  • richardmitnick 7:52 am on June 9, 2012 Permalink | Reply
    Tags: Grid Computing

    A Look at SETI@home 

    EVERY “CRUNCHER” WORKING ON PROJECTS RUNNING ON BOINC SOFTWARE OWES A DEBT TO THE SETI@HOME PROJECT. NO SETI@HOME, NO BOINC. SO I THINK THAT IT IS FAIR TO SAY THAT ANYONE CRUNCHING ON ANY PROJECT, EVEN THOSE PURIST WCG CRUNCHERS, SHOULD BE REPAYING THAT DEBT BY CRUNCHING FOR SETI@HOME.

    SETI@HOME IS SOMETIMES CONFUSED AND CONFLATED WITH THE SETI INSTITUTE. THEY ARE SEPARATE ORGANIZATIONS.

    Everything in this post was taken directly from the SETI@home web site, with the exception of a wee portion from Wikipedia. There is a lot more information to be found if you access the SETI@home web pages.

    SETI@home (“SETI at home”) is an Internet-based public volunteer computing project employing the BOINC software platform, hosted by the Space Sciences Laboratory at the University of California, Berkeley, in the United States. SETI is an acronym for the Search for Extra-Terrestrial Intelligence. Its purpose is to analyze radio signals, searching for signs of extraterrestrial intelligence, and it is one of many activities undertaken as part of SETI.

    SETI@home was released to the public on May 17, 1999, making it the second large-scale use of distributed computing over the Internet for research purposes (Distributed.net, launched in 1997, was the first). Along with MilkyWay@home and Einstein@home, it is one of three major computing projects of this type whose primary purpose is the investigation of phenomena in interstellar space.

    How SETI@home works

    The Problem — Mountains of Data

    Most of the SETI programs in existence today, including those at UC Berkeley, build large computers that analyze the data from the telescope in real time. None of these computers look very deeply at the data for weak signals, nor do they look for a large class of signal types, because they are limited by the amount of computing power available for data analysis. To tease out the weakest signals, a great amount of computing power is necessary. It would take a monstrous supercomputer to get the job done, and SETI programs could never afford to build or buy that computing power. There is a trade-off they can make: rather than a huge computer to do the job, they could use a smaller computer and just take longer, but then there would be lots of data piling up. What if they used LOTS of small computers, all working simultaneously on different parts of the analysis? Where could the SETI team possibly find the thousands of computers needed to analyze the data continuously streaming from Arecibo?

    The UC Berkeley SETI team has discovered that there are already thousands of computers that might be available for use. Most of these computers sit around most of the time with toasters flying across their screens, accomplishing absolutely nothing and wasting electricity to boot. This is where SETI@home (and you!) come into the picture. The SETI@home project hopes to convince you to allow us to borrow your computer when you aren’t using it and to help us “…search out new life and new civilizations.” We’ll do this with a screen saver that can go get a chunk of data from us over the Internet, analyze that data, and then report the results back to us. When you need your computer back, our screen saver instantly gets out of the way and only continues its analysis when you are finished with your work.

    Screenshot of SETI@home Enhanced BOINC Screensaver (v6.03)

    It’s an interesting and difficult task. There’s so much data to analyze that it seems impossible! Fortunately, the data analysis task can be easily broken up into little pieces that can all be worked on separately and in parallel. None of the pieces depends on the other pieces. Also, there is only a finite amount of sky that can be seen from Arecibo. In the next two years the entire sky as seen from the telescope will be scanned three times. We feel that this will be enough for this project. By the time we’ve looked at the sky three times, there will be new telescopes, new experiments, and new approaches to SETI. We hope that you will be able to participate in them too!

    Breaking Up the Data

    Data will be recorded on high-density tapes at the Arecibo telescope in Puerto Rico, filling about one 35 Gbyte DLT tape per day. Because Arecibo does not have a high-bandwidth Internet connection, the data tape must go by snail mail to Berkeley. The data is then divided into 0.25 Mbyte chunks (which we call “work-units”). These are sent from the SETI@home server over the Internet to people around the world to analyze.

    Extra Credit Section: How the data is broken up

    SETI@home looks at 2.5 MHz of data, centered at 1420 MHz. This is still too broad a spectrum to send to you for analysis, so we break this spectrum up into 256 pieces, each 10 kHz wide (more like 9766 Hz, but we’ll simplify the numbers to make the calculations easier to see). This is done with a software program called the “splitter”. These 10 kHz pieces are now more manageable in size. To record signals up to 10 kHz, you have to sample at 20,000 bits per second (20 kbps); twice the bandwidth is the minimum sampling rate, known as the Nyquist rate. We send you about 107 seconds of this 10 kHz (20 kbps) data. 100 seconds times 20,000 bits per second equals 2,000,000 bits, or about 0.25 megabyte, given that there are 8 bits per byte. Again, we call this 0.25 megabyte chunk a “work-unit.” We also send you lots of additional info about the work-unit, so the total comes out to about 340 kbytes of data.
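
    The arithmetic above is easy to check. Here is a minimal sketch in Python; the figures come straight from the description, while the variable names are mine:

        band_hz = 2.5e6                   # total recorded bandwidth, centered at 1420 MHz
        subband_hz = band_hz / 256        # one splitter piece: 9765.625 Hz, the "10 kHz" of the text
        sample_rate_bps = 2 * 10_000      # Nyquist rate: two 1-bit samples per second per Hz
        seconds = 100                     # the text rounds the ~107 s down to 100 s
        bits = seconds * sample_rate_bps  # 2,000,000 bits
        print(subband_hz)                 # 9765.625
        print(bits / 8 / 1_000_000)       # 0.25 megabytes, i.e. one work-unit
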
    What is Astropulse?

    Snapshot of BOINC SETI@home Astropulse Screensaver.

    Astropulse is a new type of SETI. It expands on the original SETI@home, but does not replace it. The original SETI@home is narrowband, meaning that it is listening for a particular radio frequency. That’s like listening to an orchestra playing, and trying to hear when anyone plays the note “A sharp”. Astropulse listens for short-time pulses. In the orchestra analogy, it’s like listening for a quick drum beat, or a series of drumbeats. Since no one knows what extraterrestrial communications will “sound like,” it seems like a good idea to search for several types of signals. In scientific terms, Astropulse is a sky survey that searches for microsecond transient radio pulses. These pulses could come from ET, or from some other source. I’ll define each of those terms:

    Sky survey: The telescope we use (Arecibo Observatory) scans across the sky, searching for signals everywhere. This differs from a directed SETI search, in which the telescope examines a few stars carefully.
    Microsecond: A millionth of a second. Astropulse is better than previous searches at detecting signals that last for a very short length of time. The shorter the signal, the better Astropulse is at detecting it, down to a lower limit of 0.4 microseconds. Astropulse can detect signals shorter than 0.4 microseconds; it just stops gaining over other searches below that point.
    Transient: A signal is transient if it is short, like a drumbeat. A transient signal can be repeating (it beats over and over again) or single pulse (it beats only once.)
    Radio: The signals are made of the same type of electromagnetic radiation that an AM or FM radio detects. (Actually of substantially higher frequency than that, but still considered “radio.”) Electromagnetic radiation includes radio waves, microwaves, infrared light, visible light, ultraviolet light, x-rays, and gamma rays.

    Sources of pulses

    Where would a microsecond transient radio pulse come from? There are several possibilities, including:
    ET: Previous searches have looked for extraterrestrial communications in the form of narrow-band signals, analogous to our own radio stations. Since we know nothing about how ET might communicate, this might be a bit closed-minded.
    Pulsars and RRATs: Pulsars are rotating neutron stars that can produce signals as short as 100 microseconds, although typically much longer, so 0.4 microseconds seems like a stretch. Astropulse is capable of detecting pulsars, but is unlikely to find any new ones. RRATs are a recently discovered pulsar variant. Perhaps Astropulse will discover a new type of rotating neutron star with a very short duty cycle.
    Exploding primordial black holes: Martin Rees has theorized that a black hole, exploding via Hawking radiation, might produce a signal that’s detectable in the radio.
    Extragalactic pulses: Some scientists recently saw a single transient radio pulse from far outside the Milky Way galaxy. No one knows what caused it, but perhaps there are more of them for Astropulse to find.
    New phenomena: Perhaps the most likely result is that we will discover some unknown astrophysical phenomenon. Any time an astronomer looks at the sky in a new way, he or she may see a new phenomenon, whether it be a type of star, explosion, galaxy, or something else.
    Dispersed pulses

    As a microsecond transient radio pulse comes to us from a distant source in space, it passes through the interstellar medium (ISM). The ISM is a gas of hydrogen atoms that pervades the whole galaxy. There is one big difference between the ISM and ordinary hydrogen gas. Some of the hydrogen atoms in the ISM are ionized, meaning they have no electron attached to them. For each ionized hydrogen atom in the ISM, a free electron is floating off somewhere nearby. A substance composed of free floating, ionized particles is called a plasma.
    The microsecond radio pulse is composed of many different frequencies. As the pulse passes through the ISM plasma, the high-frequency radiation goes slightly faster than the lower-frequency radiation. When the pulse reaches Earth, we look at the parts of the signal ranging from 1418.75 MHz to 1421.25 MHz, a range of 2.5 MHz. The highest-frequency radiation arrives about 0.4 milliseconds to 4 milliseconds earlier than the lowest-frequency radiation, depending on the distance from which the signal originates. This effect is called dispersion.

    In order to see the signal’s true shape, we have to undo this dispersion. That is, we must dedisperse the signal. Dedispersion is the primary purpose of the Astropulse algorithm.
    Not only does dedispersion allow us to see the true shape of the signal, it also reduces the amount of noise that interferes with the signal’s visibility. Noise consists of fluctuations that produce a false signal. There could be electrical noise in the telescope, for instance, creating the illusion of a signal where there is none. Because dispersion spreads a signal out to be up to 10,000 times as long, this can cause 10,000 times as much noise to appear with the signal. (There’s a square root factor due to the math, so there’s really only 100 times as much noise power, but that’s still a lot.)

    The amount of dispersion depends on the amount of ISM plasma between the Earth and the source of the pulse. The dispersion measure (DM) tells us how much plasma there is. DM is measured in “parsecs per centimeter cubed”, written pc cm^-3. To get the DM, multiply the distance to the source of the signal (in parsecs) by the electron density in electrons per cubic centimeter. A parsec is about 3.26 light-years. So if a source is 2 parsecs away, and the space between the Earth and that source is filled with plasma with 3 free electrons per cubic centimeter, then the DM is 6 pc cm^-3. The actual density of free electrons in the ISM is about 0.03 per cubic centimeter.
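
    As a worked example, here is the same arithmetic in Python. The delay calculation uses the standard radio-astronomy dispersion formula (delay in milliseconds = 4.149 * DM * (f_lo^-2 - f_hi^-2), with frequencies in GHz), which is an assumption of mine rather than something stated above, but it reproduces the figures in the text:

        def dispersion_measure(distance_pc, electrons_per_cc):
            # DM (pc cm^-3) = path length (pc) times free-electron density (cm^-3)
            return distance_pc * electrons_per_cc

        def delay_ms(dm, f_lo_ghz, f_hi_ghz):
            # Standard dispersion-delay formula; assumed, not quoted from the text.
            return 4.149 * dm * (f_lo_ghz ** -2 - f_hi_ghz ** -2)

        print(dispersion_measure(2, 3))        # 6 pc cm^-3, the example above
        print(delay_ms(55, 1.41875, 1.42125))  # ~0.4 ms across the 2.5 MHz band
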
    Astropulse algorithm

    Single Pulse Loops
    Astropulse has to analyze the whole workunit at nearly 15,000 different DMs (14,208, to be precise). At each DM, the whole dedispersion algorithm has to be run again for the entire workunit. The lowest DM is 55 pc cm^-3, and the highest is 800 pc cm^-3. Astropulse examines DMs at regular intervals between those two. Without going into detail about how a piece of a workunit is examined at a given DM, here is the organization with which Astropulse handles the data: it divides the DMs to be covered into large DM chunks of 128 DMs each, and then small DM chunks of 16 DMs each. It divides the data into chunks of 4096 bytes and processes them one at a time. Once it has dedispersed the data, Astropulse co-adds the dedispersed data at 10 different levels, meaning that it looks for signals of size 0.4 microseconds, then twice that, 4 times, 8 times, and so on (0.4 microseconds, 0.8, 1.6, 3.2, 6.4, …). On the lowest level of organization, Astropulse looks at individual bins of data. A bin corresponds to 2 bits of the original data, but after dedispersion it requires a floating point number to represent it. Here’s the breakdown of Astropulse’s loops (a sketch of this hierarchy in code follows the list):
    1 workunit => 111 large DM chunks
    1 large DM chunk => 8 small DM chunks
    1 small DM chunk => 2048 data chunks
    1 data chunk => 16 DMs
    1 DM => 10 fold levels
    1 fold level => 16384 bins (or less)
    1 bin = smallest unit
    So each workunit is composed of 111 large DM chunks, each of which is 0.901% of the whole. Each large DM chunk is composed of 8 small DM chunks, each of which is 0.113% of the whole. And so on.
    The number of large DM chunks will probably change before the final version of Astropulse is released.
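
    Here is a minimal Python sketch of that hierarchy. It mirrors only the bookkeeping (no actual dedispersion is performed); the constants come straight from the list above:

        LARGE_CHUNKS = 111      # large DM chunks per workunit
        SMALL_PER_LARGE = 8     # small DM chunks per large chunk
        DATA_CHUNKS = 2048      # data chunks per small DM chunk
        DMS_PER_CHUNK = 16      # DMs examined per data chunk
        FOLD_LEVELS = 10        # co-add levels: 0.4 us, 0.8 us, ...

        # Total DMs searched: 111 * 8 * 16 = 14,208, matching the text.
        print(LARGE_CHUNKS * SMALL_PER_LARGE * DMS_PER_CHUNK)

        # Pulse widths probed at each co-add level, in microseconds.
        print([0.4 * 2 ** level for level in range(FOLD_LEVELS)])

        # Share of the whole workunit per chunk, as quoted above.
        print(100 / LARGE_CHUNKS)                      # ~0.901% per large DM chunk
        print(100 / (LARGE_CHUNKS * SMALL_PER_LARGE))  # ~0.113% per small DM chunk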

    Fast Folding Algorithm
    At the end of each small and large DM chunk, Astropulse performs the Fast Folding Algorithm (FFA). This algorithm checks for repeating pulses over a certain range of periods. (The period is the length of time after which the pulse repeats.) When the FFA is performed after each large DM chunk, it searches over an entire 13-second workunit and looks for repeating signals with a period of 256 times the sample rate (256 * 0.4 microseconds) or more. When the FFA is performed after each small DM chunk, it searches over a small fraction of the workunit and looks for repeating signals with a period of 16 times the sample rate or more.
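
    Folding itself is easy to illustrate. The sketch below is a naive fold at a single trial period, not the real FFA (which reuses partial sums so that many trial periods can be searched cheaply); it just shows why a repeating pulse stands out after folding:

        import numpy as np

        def fold(series, period_bins):
            # Cut the series into rows one trial period long and sum them:
            # a pulse repeating at that period adds coherently, noise averages down.
            usable = len(series) - len(series) % period_bins
            return series[:usable].reshape(-1, period_bins).sum(axis=0)

        rng = np.random.default_rng(0)
        signal = rng.normal(size=65536)    # pure noise...
        signal[::256] += 2.0               # ...plus a weak pulse every 256 bins
        print(fold(signal, 256).argmax())  # 0: the folded profile peaks at the pulse phase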

    SETI People

    David Anderson
    David is a computer scientist, with research interests in volunteer computing, distributed systems, and real-time systems. He also runs the BOINC project.
    David is a rock climber, mountain climber, classical pianist, and father of Noah (born Oct 2005).

    Dan Werthimer
    Dan specializes in signal processing for radio astronomy. He has been doing SETI since 1979, and he runs the SERENDIP, Optical SETI, and CASPER projects.
    Dan dabbles in jazz piano, and is the father of a 4-year old son, William.

    BOINC is a leader in the fields of Distributed Computing, Grid Computing, and Citizen Cyberscience. BOINC is, more properly, the Berkeley Open Infrastructure for Network Computing, developed at UC Berkeley.

    Visit the BOINC web page, click on Choose projects, and check out some of the very worthwhile studies you will find. Then click on Download and run BOINC software / All Versions. Download and install the current software for your 32-bit or 64-bit system, for Windows, Mac or Linux. When you install BOINC, it will install its screen savers on your system as a default; you can choose to run the various project screen savers or turn them off. Once BOINC is installed, in BOINC Manager/Tools, click on “Add project or account manager” to attach to projects. Many BOINC projects are listed there, but not all, and maybe not the one(s) in which you are interested; you can get the proper URL for attaching to a project at the project’s web page. BOINC will never interfere with any other work on your computer.

    MAJOR PROJECTS RUNNING ON BOINC SOFTWARE

    SETI@home The search for extraterrestrial intelligence. “SETI (Search for Extraterrestrial Intelligence) is a scientific area whose goal is to detect intelligent life outside Earth. One approach, known as radio SETI, uses radio telescopes to listen for narrow-bandwidth radio signals from space. Such signals are not known to occur naturally, so a detection would provide evidence of extraterrestrial technology.

    Radio telescope signals consist primarily of noise (from celestial sources and the receiver’s electronics) and man-made signals such as TV stations, radar, and satellites. Modern radio SETI projects analyze the data digitally. More computing power enables searches to cover greater frequency ranges with more sensitivity. Radio SETI, therefore, has an insatiable appetite for computing power.

    Previous radio SETI projects have used special-purpose supercomputers, located at the telescope, to do the bulk of the data analysis. In 1995, David Gedye proposed doing radio SETI using a virtual supercomputer composed of large numbers of Internet-connected computers, and he organized the SETI@home project to explore this idea. SETI@home was originally launched in May 1999.”


    SETI@home is the birthplace of BOINC software. Originally, it ran only as a screensaver, when the computer on which it was installed was doing no other work. With the power and memory available today, BOINC can run 24/7 without in any way interfering with other ongoing work.

    The famous SETI@home screen saver, a beauteous thing to behold.

    Einstein@Home The search for pulsars. “Einstein@Home uses your computer’s idle time to search for weak astrophysical signals from spinning neutron stars (also called pulsars) using data from the LIGO gravitational-wave detectors, the Arecibo radio telescope, and the Fermi gamma-ray satellite. Einstein@Home volunteers have already discovered more than a dozen new neutron stars, and we hope to find many more in the future. Our long-term goal is to make the first direct detections of gravitational-wave emission from spinning neutron stars. Gravitational waves were predicted by Albert Einstein almost a century ago, but have never been directly detected. Such observations would open up a new window on the universe, and usher in a new era in astronomy.”

    MilkyWay@Home “Milkyway@Home uses the BOINC platform to harness volunteered computing resources, creating a highly accurate three dimensional model of the Milky Way galaxy using data gathered by the Sloan Digital Sky Survey. This project enables research in both astroinformatics and computer science.”

    Leiden Classical “Join in and help to build a Desktop Computer Grid dedicated to general Classical Dynamics for any scientist or science student!”

    World Community Grid (WCG) World Community Grid is a special case at BOINC. WCG is part of the social initiative of IBM Corporation and its Smarter Planet program. WCG currently has under its umbrella eleven disparate projects at globally wide-ranging institutions and universities. Most projects relate to biological and medical subject matter. There are also projects for Clean Water and Clean Renewable Energy. WCG projects are each treated on their own at this blog. Watch for news.

    Rosetta@home “Rosetta@home needs your help to determine the 3-dimensional shapes of proteins in research that may ultimately lead to finding cures for some major human diseases. By running the Rosetta program on your computer while you don’t need it you will help us speed up and extend our research in ways we couldn’t possibly attempt without your help. You will also be helping our efforts at designing new proteins to fight diseases such as HIV, Malaria, Cancer, and Alzheimer’s….”

    GPUGrid.net “GPUGRID.net is a distributed computing infrastructure devoted to biomedical research. Thanks to the contribution of volunteers, GPUGRID scientists can perform molecular simulations to understand the function of proteins in health and disease.” GPUGrid is a special case in that all processor work done by the volunteers is GPU processing; there is no CPU processing, which is the more common mode. Other projects (Einstein, SETI, Milky Way) also feature GPU processing, but they offer CPU processing for those not able to do work on GPUs.


    These projects are just the oldest and most prominent projects. There are many others from which you can choose.

    There are currently some 300,000 users with about 480,000 computers working on BOINC projects. That is in a world of over one billion computers. We sure could use your help.


     
  • richardmitnick 1:51 pm on October 18, 2011 Permalink | Reply
    Tags: Grid Computing

    From Livermore Lab: “Lab biophysicist invents improvement to Monte Carlo technique”

    Brian D Johnson
    10/17/2011

    Jerome P. Nilmeier, a biophysicist working in computational biology, is willing to bet his new research will provide a breakthrough in the use of the Monte Carlo probability code in biological simulations.

    Working with Gavin E. Crooks at Lawrence Berkeley National Lab, David D. L. Minh at Argonne, and John D. Chodera, from the University of California, Berkeley, Nilmeier has co-authored a paper that introduces a new class of Monte Carlo moves based on nonequilibrium dynamics. The paper appears in the current issue of Proceedings of the National Academy of Sciences.

    The Monte Carlo technique is one of the most widely used methods to model a system and determine the odds for a variety of different outcomes. The technique was first developed by scientists working on the Manhattan Project who needed to figure out how far neutrons might pass through a variety of different types of shielding materials.

    The Monte Carlo technique harnesses the power of computers to figure out the probable outcomes of equations that have hundreds or thousands of variables. It is a shortcut that, instead of giving a definitive answer, gives a probable answer. Random numbers are put into the equation, the outcome is tested, the probability of the different outcomes is determined, and then a decision can be made about what is the most likely outcome.”
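
    To make the idea concrete, here is a toy Monte Carlo in Python (my illustration, not code from the article): estimating pi by throwing random points at a square. The answer is probabilistic and sharpens as more samples are drawn, which is exactly the trade the technique makes:

        import random

        def estimate_pi(samples=1_000_000):
            # The fraction of random points in the unit square that land inside
            # the quarter-circle approximates pi/4.
            inside = sum(
                1 for _ in range(samples)
                if random.random() ** 2 + random.random() ** 2 <= 1.0
            )
            return 4 * inside / samples

        print(estimate_pi())  # ~3.14, with the error shrinking as samples grow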

    Read the full article here.

    Just a note: “crunchers” in Public Distributed Computing using BOINC software have known about the Monte Carlo technique for quite some time. Many of us are or have been working on QMC@home.



    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration


     
  • richardmitnick 9:25 pm on April 25, 2011 Permalink | Reply
    Tags: Grid Computing

    From Sandia Labs: “Sandia and UNM lead effort to destroy cancers” 

    Boosting medicine with nanotechnology strengthens drug cocktail many times over

    “Melding nanotechnology and medical research, Sandia National Laboratories, the University of New Mexico, and the UNM Cancer Research and Treatment Center have produced an effective strategy that uses nanoparticles to blast cancerous cells with a mélange of killer drugs.

    In the cover article of the May issue of Nature Materials, available online April 17, the researchers describe silica nanoparticles about 150 nanometers in diameter as honeycombed with cavities that can store large amounts and varieties of drugs.

    ‘The enormous capacity of the nanoporous core, with its high surface area, combined with the improved targeting of an encapsulating lipid bilayer [called a liposome], permit a single ‘protocell’ loaded with a drug cocktail to kill a drug-resistant cancer cell,’ says Sandia researcher and UNM professor Jeff Brinker, the principal investigator. ‘That’s a millionfold increase in efficiency over comparable methods employing liposomes alone — without nanoparticles — as drug carriers.’

    The nanoparticles and the surrounding cell-like membranes formed from liposomes together become the combination referred to as a protocell: the membrane seals in the deadly cargo and is modified with molecules (peptides) that bind specifically to receptors overexpressed on the cancer cell’s surface. (Too many receptors is one signal the cell is cancerous.) The nanoparticles provide stability to the supported membrane and contain and release the therapeutic cargo within the cell.”

    The figure on the left (Hep3B) shows a greenly fluoresced cancerous liver cell penetrated by protocells. The small red dots are lipid bilayer wrappings. Their cargo — drug-filled nanoparticles, their pores here filled with white fluorescent dyes for imaging purposes — penetrate the cancerous cell. (Penetration is more clearly seen in the second image.) The normal cell on the right (hepatocyte) shows no penetration. (Images courtesy of Carlee Ashley)

    Sandia post-doctoral fellow Carlee Ashley introduces a buffer into a protocell solution to dilute it as Sandia researcher and University of New Mexico professor Jeff Brinker watches. (Photo by Randy Montoya)

    Read the full article here.

    And, don’t forget, at World Community Grid we have two Cancer projects,
    Help Conquer Cancer

    And
    Help Fight Childhood Cancer

    You can join the 98,000 WCG crunchers in these two projects. Visit WCG, download and install the little piece of BOINC software, then attach to these two projects and look at the rest of what we are doing. You will be amazed.

     
  • richardmitnick 8:45 am on April 22, 2011 Permalink | Reply
    Tags: Grid Computing

    From the WCG Chat Room Forum: Google to Donate 1 Billion Core Hours to Research 


    1 billion core-hours of computational capacity for researchers
    April 07, 2011

    Posted by Dan Belov, Principal Engineer and David Konerding, Software Engineer

    We’re pleased to announce a new academic research grant program: Google Exacycle for Visiting Faculty. Through this program, we’ll award up to 10 qualified researchers with at least 100 million core-hours each, for a total of 1 billion core-hours. The program is focused on large-scale, CPU-bound batch computations in research areas such as biomedicine, energy, finance, entertainment, and agriculture, amongst others. For example, projects developing large-scale genomic search and alignment, massively scaled Monte Carlo simulations, and sky survey image analysis could be an ideal fit.

    Exacycle for Visiting Faculty expands upon our current efforts through University Relations to stimulate advances in science and engineering research, and awardees will participate through the Visiting Faculty Program. We invite full-time faculty members from universities worldwide to apply. All grantees, including those outside of the U.S., will work on-site at specific Google offices in the U.S. or abroad. The exact Google office location will be determined at the time of project selection.

    Technical Specifications and Requirements

    Proposals that are ideal for Google Exacycle include, but are not limited to, research projects like Folding@Home, Rosetta@Home, various [other] BOINC projects, and grid parameter sweeps. Other examples include large-scale genomic search and alignment, protein family modeling and sky survey image analysis.

    The best projects will have a very high number of independent work units, a high CPU-to-I/O ratio, and no inter-process communication (commonly described as Embarrassingly or Pleasantly Parallel). The higher the CPU-to-I/O ratio, the better the match with the system. Programs must be developed in C/C++ and compiled via Native Client. Awardees will be able to consult an on-site engineering team.
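
    For readers new to the term, an embarrassingly parallel workload is one whose pieces share nothing, so they can be farmed out to any number of cores or machines with no coordination. A minimal Python illustration (my sketch, not Exacycle code):

        from multiprocessing import Pool

        def analyze(work_unit):
            # Stand-in for a CPU-bound computation; each work unit is independent,
            # so no inter-process communication is needed.
            return sum(i * i for i in range(work_unit))

        if __name__ == "__main__":
            work_units = [100_000 + n for n in range(64)]
            with Pool() as pool:               # one worker per core by default
                results = pool.map(analyze, work_units)
            print(len(results), "work units done")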

    Preference will be given to projects that are fairly high-risk/high-reward with the potential to drastically transform the scientific landscape. Even projects that yield negative results can still provide public data that the community can continue to analyze. At completion of the project, we recommend, but do not require, that all the researcher’s data be made freely available to the academic community.

    We are excited to accept proposals starting today. The application deadline is 11:59 p.m. PST May 31, 2011. Applicants are encouraged to send in their proposals early as awards will be granted starting in June.

    More information and details on how to apply for a Google Exacycle for Visiting Faculty grant can be found on the Google Exacycle for Visiting Faculty website.

     
  • richardmitnick 4:06 pm on April 10, 2011 Permalink | Reply
    Tags: Grid Computing

    WCG: An Overview 

    World Community Grid

    WCG tells us: “World Community Grid brings people together from across the globe to create the largest non-profit computing grid benefiting humanity. It does this by pooling surplus computer processing power. We believe that innovation combined with visionary scientific research and large-scale volunteerism can help make the planet smarter. Our success depends on like-minded individuals – like you.”

    “We are now partnering with People for a Smarter Planet, a collective of communities that let you make a personal difference in solving some of the world’s toughest challenges. Please show your support by clicking the Like button on their Facebook page.”


    World Community Grid operates under the watchful eye and with the financial support of IBM Corporation.


    Here is what IBM says: “Our World Community Grid initiative utilizes grid computing technology to harness the tremendous power of idle computers to perform specific computations related to critical research around complex biological, environmental and health-related issues. The current projects include Help Fight Childhood Cancer, Clean Energy, Nutritious Rice for the World, FightAIDS@Home, Help Conquer Cancer, AfricanClimate@Home, and a genomics initiative and research on Dengue Fever.”

    Let’s look at some of these projects. All of the text for each project comes from the project’s page at WCG.

    Computing for Clean Water


    Mission
    “The mission of Computing for Clean Water is to provide deeper insight on the molecular scale into the origins of the efficient flow of water through a novel class of filter materials. This insight will in turn guide future development of low-cost and more efficient water filters.

    Significance
    Lack of access to clean water is one of the major humanitarian challenges for many regions in the developing world. It is estimated that 1.2 billion people lack access to safe drinking water, and 2.6 billion have little or no sanitation. Millions of people die annually – estimates are 3,900 children a day – from the results of diseases transmitted through unsafe water, in particular diarrhea.

    Technologies for filtering dirty water exist, but are generally quite expensive. Desalination of sea water, a potentially abundant source of drinking water, is similarly limited by filtering costs. Therefore, new approaches to efficient water filtering are a subject of intense research. Carbon nanotubes, stacked in arrays so that water must pass through the length of the tubes, represent a new approach to filtering water.”

    This project partners with CNMM, the Center for Nano and Micro Mechanics at Tsinghua University in Beijing, China.


    The Clean Energy Project


    Mission
    “The mission of The Clean Energy Project is to find new materials for the next generation of solar cells and later, energy storage devices. By harnessing the immense power of World Community Grid, researchers can calculate the electronic properties of hundreds of thousands of organic materials – thousands of times more than could ever be tested in a lab – and determine which candidates are most promising for developing affordable solar energy technology.

    Significance
    We are living in the Age of Energy. The fossil fuel based economy of the present must give way to the renewable energy based economy of the future, but getting there is one of the greatest challenges humanity faces. Chemistry can help meet this challenge by discovering new materials that efficiently harvest solar radiation, store energy for later use, and reconvert the stored energy when needed.

    The Clean Energy Project uses computational chemistry and the willingness of people to help look for the best molecules possible for: organic photovoltaics to provide inexpensive solar cells, polymers for the membranes used in fuel cells for electricity generation, and how best to assemble the molecules to make those devices. By helping search combinatorially among thousands of potential systems, World Community Grid volunteers are contributing to this effort.”

    Discovering Dengue Drugs – Together


    Mission
    “The mission of Discovering Dengue Drugs – Together – Phase 2 is to identify promising drug candidates to combat the Dengue, Hepatitis C, West Nile, Yellow Fever, and other related viruses. The extensive computing power of World Community Grid will be used to complete the structure-based drug discovery calculations required to identify these drug candidates.

    Significance
    This project will discover promising drug candidates that stop the replication of viruses within the Flaviviridae family. Members of this family, including dengue, hepatitis C, West Nile, and yellow fever viruses, pose significant health threats throughout the developed and developing world. More than 40% of the world’s population is at risk for infection by dengue virus. Annually, ~1.5 million people are treated for dengue fever and dengue hemorrhagic fever. Hepatitis C virus has infected ~2% of the world’s population. Yellow fever and West Nile viruses also have had significant global impact. Unfortunately, there are no drugs that effectively treat these diseases. Consequently, the supportive care necessary to treat these infections and minimize mortality severely strains already burdened health facilities throughout the world. The discovery of both broad-spectrum and specific antiviral drugs is expected to significantly improve global health.”

    Help Cure Muscular Dystrophy


    Help Conquer Cancer


    Mission
    “The mission of Help Conquer Cancer is to improve the results of protein X-ray crystallography, which helps researchers not only annotate unknown parts of the human proteome, but importantly improves their understanding of cancer initiation, progression and treatment.

    Significance
    In order to significantly impact the understanding of cancer and its treatment, novel therapeutic approaches capable of targeting metastatic disease (or cancers spreading to other parts of the body) must not only be discovered, but also diagnostic markers (or indicators of the disease), which can detect early stage disease, must be identified.

    Researchers have been able to make important discoveries when studying multiple human cancers, even when they have limited or no information at all about the involved proteins. However, to better understand and treat cancer, it is important for scientists to discover novel proteins involved in cancer, and their structure and function.

    Scientists are especially interested in proteins that may have a functional relationship with cancer. These are proteins that are either over-expressed or repressed in cancers, or proteins that have been modified or mutated in ways that result in structural changes to them.

    Improving X-ray crystallography will enable researchers to determine the structure of many cancer-related proteins faster. This will lead to improving our understanding of the function of these proteins and enable potential pharmaceutical interventions to treat this deadly disease.”

    Human Proteome Folding


    “Human Proteome Folding Phase 2 (HPF2) continues where the first phase left off. The two main objectives of the project are to: 1) obtain higher resolution structures for specific human proteins and pathogen proteins and 2) further explore the limits of protein structure prediction by further developing Rosetta software structure prediction. Thus, the project will address two very important parallel imperatives, one biological and one biophysical.

    The project, which began at the Institute for Systems Biology and now continues at New York University’s Department of Biology and Computer Science, will refine, using the Rosetta software in a mode that accounts for greater atomic detail, the structures resulting from the first phase of the project. The goal of the first phase was to understand protein function. The goal of the second phase is to increase the resolution of the predictions for a select subset of human proteins. Better resolution is important for a number of applications, including but not limited to virtual screening of drug targets with docking procedures and protein design. By running a handful of well-studied proteins on World Community Grid (like proteins from yeast), the second phase also will serve to improve the understanding of the physics of protein structure and advance the state-of-the-art in protein structure prediction. This also will help the Rosetta developers community to further develop the software and the reliability of its predictions.

    HPF2 will focus on human-secreted proteins (proteins in the blood and the spaces between cells). These proteins can be important for signaling between cells and are often key markers for diagnosis. These proteins have even ended up being useful as drugs (when synthesized and given by doctors to people lacking the proteins). Examples of human secreted proteins turned into therapeutics are insulin and the human growth hormone. Understanding the function of human secreted proteins may help researchers discover the function of proteins of unknown function in the blood and other interstitial fluids.”

    FightAIDS@Home


    What is AIDS?
    “UNAIDS, the Joint United Nations Program on HIV/AIDS, estimated that in 2004 there were more than 40 million people around the world living with HIV, the Human Immunodeficiency Virus. The virus has affected the lives of men, women and children all over the world. Currently, there is no cure in sight, only treatment with a variety of drugs.

    Prof. Arthur J. Olson’s laboratory at The Scripps Research Institute (TSRI) is studying computational ways to design new anti-HIV drugs based on molecular structure. It has been demonstrated repeatedly that the function of a molecule — a substance made up of many atoms — is related to its three-dimensional shape. Olson’s target is HIV protease (“pro-tee-ace”), a key molecular machine of the virus that when blocked stops the virus from maturing. These blockers, known as “protease inhibitors”, are thus a way of avoiding the onset of AIDS and prolonging life. The Olson Laboratory is using computational methods to identify new candidate drugs that have the right shape and chemical characteristics to block HIV protease. This general approach is called “Structure-Based Drug Design”, and according to the National Institutes of Health’s National Institute of General Medical Sciences, it has already had a dramatic effect on the lives of people living with AIDS.

    Even more challenging, HIV is a “sloppy copier,” so it is constantly evolving new variants, some of which are resistant to current drugs. It is therefore vital that scientists continue their search for new and better drugs to combat this moving target.

    Scientists are able to determine by experiment the shapes of a protein and of a drug separately, but not always for the two together. If scientists knew how a drug molecule fit inside the active site of its target protein, chemists could see how they could design even better drugs that would be more potent than existing drugs.

    To address these challenges, World Community Grid’s FightAIDS@Home project runs a software program called AutoDock developed in Prof. Olson’s laboratory. AutoDock is a suite of tools that predicts how small molecules, such as drug candidates, might bind or “dock” to a receptor of known 3D structure.”

    ——————————————–

    There are currently about 98,000 members of this crunching community. We are called crunchers because that is what our computers do. Once we have installed the software and chosen our projects, we are sent small work units to process. The finished data is sent back to WCG and we get new work units. How are we rewarded for our efforts? Really, just with the satisfaction of knowing that we might be helping to save lives. But we do get little gifts: badges based upon our completed work. Some crunchers have organized themselves into teams. The teams compete for points or credits. There are all sorts of teams, from a few people organizing in a church or synagogue, to mega teams of techies building more and more Linux boxes.

    So, 98,000. That is a lot of people, but not in a world with one billion computers. We want and need your help. I am personally crunching 24/7 on five machines – yesterday the sixth, an older PC, died.
    The cost in electricity? About the same as a 100-150 watt light bulb, as long as you have your monitor on a power-save setting.

    All WCG projects run on software developed and continually upgraded at UC Berkeley: the Berkeley Open Infrastructure for Network Computing (BOINC). You can download the little piece of BOINC software that makes this all happen either at WCG or at http://boinc.berkeley.edu/.

    If you choose to download the software at the BOINC page, there you will see a link to a whole other list of wonderful projects which are running independently of WCG.

    So, please, won’t you give us a look?

     
  • richardmitnick 12:43 pm on March 1, 2011 Permalink | Reply
    Tags: Grid Computing

    From Medill Reports Chicago: “Collision crunching at CERN takes a global grid”

    This is copyright-protected material, so, just a taste.

    by Chelsea Whyte and Justin Eure
    Feb 25, 2011

    “Scientists at Switzerland’s CERN are diving into the mysteries of the universe, explaining the origin of mass, seeking dark matter, and looking for extra dimensions never before detected. These puzzles are the purpose of the Large Hadron Collider.

    Realizing that assembling enough computing power in a single place wasn’t a feasible option for the many member countries involved in CERN, the engineers and scientists turned to grid computing as a solution.

    ‘You will get about a billion collisions every second if the machine is operating at full speed,’ said Oliver Keeble, activity leader for the CERN grid.

    At Fermilab alone, ‘…it takes over 100 servers, each with 44 discs…’ to process the data coming in from CERN, said Oliver Gutsche, an application physicist at Fermilab. [Long Live FermiLab]

    When data is recorded and stored at CERN for access to the grid, it is written onto magnetic tapes of the type once used in the Sony Walkman.

    The information on these tapes is accessible by any of the institutions at all tiers of the worldwide LHC grid, located in 34 different countries.


    Grid computing at CERN.

    Read the full article here.

     
  • richardmitnick 2:32 pm on January 26, 2011 Permalink | Reply
    Tags: Grid Computing

    Project Update from WCG’s Fight AIDS At Home Project 

    From World Community Grid’s (WCG) Fight Aids At Home Project (FAAH)

    YOU ARE PROBABLY NOT GOING TO UNDERSTAND A SINGLE TECHNICAL TERM IN THIS UPDATE FROM THE FIGHT AIDS AT HOME PROJECT. BUT, READ THE ACCOUNT ANYWAY. THESE GUYS IN THE OLSON LABORATORY AT THE SCRIPPS RESEARCH INSTITUTE HAVE BEEN IN THE FOREFRONT OF AIDS RESEARCH FOR A VERY LONG TIME.

    “Experiment 35 involves screening the full NCI library of ~ 316,000 compounds against the active site of 8 different versions of HIV protease. Thus, this experiment is similar to Exp. 32, but a different library of compounds is being screened, and one new target has been added. All but two of these target conformations were generated by Dr. Alex L. Perryman’s Molecular Dynamics (MD) simulations of 5 different variants of HIV protease. These 8 targets include 2 snapshots of the V82F/I84V mutant from ALP’s 2004 paper in Protein Science. These 2 snapshots of a multi-drug-resistant “superbug” have semi-open conformations of the flaps, which makes these models good targets for the “eye site” that is located between the tip of a semi-open flap and the top of the wall of the active site. The 3rd target is the equilibration MD (EqMD) output for 1HSI.pdb, which is a semi-open conformation of HIV-2 protease. HIV-2 is the group of strains of HIV that are most common in Africa. We’ll be targeting the “eye site” of 1HSI, as well. The 4th target is the EqMD output from 1MSN.pdb, which was created using a different crystal structure of the V82F/I84V superbug. This model has a closed conformation of the flaps, which means that we’ll be targeting the floor of the active site. The 5th target also has a closed conformation of the flaps, but this EqMD output is from 2R5P.pdb, which is the wild type HIV-1c protease. HIV-1c is the group of strains of HIV that are most commonly found in Asia. The 6th target has semi-open flaps, and it is the EqMD output from 1TW7.pdb, which is a superbug with the mutations L10I/D25N/M36V/M46L/I54V/I62V/L63P/A71V/V82A/I84V/L90M. We’ll be targeting the eye site of this superbug, too.

    The 7th target is a crystal structure of the wild type HIV protease with 5-nitroindole bound in the eye site. This new crystal structure from Prof. C. David Stout’s lab was presented in the Supporting Information for our recent article in Chemical Biology and Drug Design, vol. 75: 257-268 (March 2010). This new research article of ours was recently discussed in a press release on Science Daily and in a news story on KPBS-FM. This paper was recently listed as one of the “most read papers” from Chemical Biology and Drug Design this year! I deleted the 5-nitroindole fragment from this structure before generating the AutoDock input file for this target. We’ll be screening new fragments against this crystal structure’s eye site, as well.

    The 8th target has never been used on FightAIDS@Home before. It is a brand new crystal structure from Assoc. Prof. C. David Stout’s lab of the chimeric “FIV 6s98S” protease, which was developed by our collaborators Ying-Chuan Lin, Prof. Bruce E. Torbett, and Prof. John H. Elder. A paper on this new crystal structure of FIV 6s98S protease is currently being peer-reviewed. This protease enzyme is “chimeric,” because it contains 5 residues from HIV protease that were substituted into the corresponding positions in FIV protease. The 6th residue was also substituted from HIV protease, but it changed into a different residue during serial passage experiments (i.e., during directed evolution studies performed with the presence of different HIV protease drugs). This 6s98S FIV protease has HIV-like drug sensitivity profiles and is a new model system for multi-drug-resistant HIV protease.”

    You can help in this vital project and other very worthwhile projects which are a part of World Community Grid (WCG). Visit WCG, download the BOINC software application, and attach to the WCG project. Build your own “profile” at WCG, selecting which projects you find to be of interest. There are some 97,000 of us “crunching” data for these projects on our home and/or work computers. Most of these projects are in the fields of medical or biological research.

    While you are at it, visit the BOINC site where you will find a whole host of other projects in biology, chemistry, cosmology, mathematics and physics.

    Visit the project home pages, read about the work, and maybe you will also find other attractive projects on which you might wish to lend a hand. All in all, about 303,000 people “crunch” data for all of the BOINC projects combined, including the projects at WCG. Together, we have saved lab scientists literally thousands of hours of research time. Current overall statistics: 303,045 volunteers, 486,047 computers, 24-hour average 5,150.82 TeraFLOPS. So we have just under a half million computers on all of the projects. Think that’s a goodly number? Well, there are close to a billion computers in use in the world, and half a million is only about 0.05% of that: five hundredths of one percent. If you add just one computer to this total, it means a lot.

    The process uses the idle CPU cycles of your computer(s) while they are running. After you attach to projects, you have really no work to do: the BOINC software takes care of everything, downloading “work units” (WUs), processing the WUs, and uploading the finished results. You can, if you wish, become active in the forums maintained by WCG, BOINC, and each project. You can join a team, say at your alma mater or your company; or you can start a team in your company, church, mosque, temple or synagogue, whatever. Some of the projects have really cool screen savers which you can use.
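
    Schematically, the loop the client runs looks like this (everything here is an illustrative stand-in, not the real BOINC client or its API):

        def download_work_unit(n):
            return list(range(n, n + 1000))       # pretend data from the project server

        def process(work_unit):
            return sum(x * x for x in work_unit)  # pretend analysis on idle CPU cycles

        for n in range(3):                        # the real client repeats this indefinitely
            wu = download_work_unit(n)
            result = process(wu)
            print("uploading result:", result)    # sent back for validation and credit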

    I am personally running three Win 7 machines and two Vista machines, 24/7. The cost of running a computer is about the same as a 100-150 watt light bulb, so it is quite cheap.

    I consider this to be the most meaningful thing that I have ever done with my computers. I urge you to take a look, give us a shot.

     
  • richardmitnick 10:18 am on December 14, 2010 Permalink | Reply
    Tags: Grid Computing

    From MIT News: “Building a list of Earth candidates”



    There are several topics in Particle Physics and Cosmology which are gaining real momentum. First, of course, is the search for the Higgs Boson, at Fermilab via the Tevatron, and now also, with greater probability of success, at CERN via the LHC. Second would be the search for Dark Matter and Dark Energy. But not far behind is the search for planets which might have an environment that would support life. Here the SETI Institute is aided and ably abetted by the BOINC project SETI@home. These guys are going about the process sort of in reverse: if they can find evidence of intelligent life in the universe, why then, we will know that there are planets we must locate.

    So, after that preamble, here is a taste of Morgan’s article.

    Morgan Bettex, MIT News Office
    December 14, 2010

    The possibility of discovering a planet that is small, cool, rocky, orbiting a sunlike star and able to host life — an Earth twin, in other words — has made the search for planets outside of our solar system, or exoplanets, one of the hottest research areas in physical science. This three-part series explores MIT researchers’ roles in the quest to find an Earth twin and the effort to make sense of the 500 exoplanets that have been discovered since 1995.

    “In September, researchers announced the discovery of Gliese 581g, a rocky planet with a mass that is just three to four times that of Earth. If the discovery is confirmed with independent data, it could be the closest that planetary scientists have come to finding a planet outside the solar system that resembles our own. Although other planets with nearly the same mass as Earth have been discovered, Gliese 581g is the smallest planet that is also in the “Goldilocks zone,” or at a distance from its host star to make the planet’s temperature cool enough for liquid water to exist on its surface. Astronomers discovered Gliese 581g using two Earth-based telescopes to observe the movements of the planet’s host star that are caused by gravitational tugs from orbiting bodies. Based on these slight tugs, the researchers were able to estimate the planet’s mass…

    “It’s highly likely that a planet that is smaller and even more Earthlike than Gliese 581g will be discovered by Kepler, a NASA satellite that is observing 150,000 stars with the goal of detecting Earth-sized planets located in or near the Goldilocks zone. But although Kepler has delivered promising data to date — data that several MIT researchers are analyzing — the satellite is looking at only a narrow field of the sky. MIT faculty, researchers and students are working on several satellites to complement Kepler’s efforts and scan much more of the sky.

    Kepler in the womb

    What does what on Kepler

    You should read the full article here. Morgan is a very good writer, and there is a lot to learn.

    And, hey, think about what you might be able to contribute to this search. Visit BOINC, download and install the little piece of software that makes BOINC go, and attach to the SETI@home project. While you are at BOINC, look at some of the other very worthwhile projects at august institutions and universities around our own little globe. You might find some of them very attractive. There are about 275,000 “crunchers” in the world, a very small number in a world with about one billion computers. We need all of the help that we can get. Visit also the SETI Institute. There really is a lot going on here.


    If you have gone this far, then, I beseech you, visit a very important member of the BOINC family of projects, World Community Grid (WCG). BOINC calls it one project, but actually WCG, powered by IBM, is the mother ship for eight vital projects with an emphasis on fighting diseases such as AIDS, Cancer, Dengue Fever, and Muscular Dystrophy. There are also projects in Clean Energy and Clean Water, and the iconic Human Proteome Folding project, now in its second phase.

    All of this work goes on in the background on your computer. It never ever interferes with what you are actively doing like work, or listening to music, or watching video, shopping, etc.

    Give us a shot. And, keep following MIT News for the latest news in the search for the possibilities of life on other planets.

     
  • richardmitnick 3:17 pm on November 29, 2010 Permalink | Reply
    Tags: Grid Computing

    From Scientific Computing: GPUs much faster than CPUs


    Speeding up Science
    Many of today’s most difficult scientific computations gain from GPUs.

    By Mike May

    Scientific simulations can tackle larger problems and produce more accurate answers, as computing grows increasingly parallel. This need for parallel processing proves particularly crucial for some questions, including how to make better drugs and how elements formed after the Big Bang. Such simulations may require millions of iterations, with results feeding back into more calculations. In the past, the parallelism came from using more CPUs — either from a cluster or from a multicore chip — but graphics processing units (GPUs) now offer an alternative. As a result, today’s scientific calculations run faster than ever.

    In short, GPUs offer much more parallelism than a multicore CPU. “A GPU has tens or even hundreds of cores,” says Dinesh Manocha, professor of computer science at the University of North Carolina at Chapel Hill. “If a specific algorithm can make good use of GPU-based parallelism, then perhaps you can achieve higher performance on a GPU.”

    The Folding@home project often uses GPUs to compute simulations of the folding of proteins, such as the one displayed here. Courtesy of Vijay Pande.

    This is becoming very important in Grid Computing, aka Public Distributed Computing, as more and more projects running BOINC software add support for GPU applications.

    Read the full article here.

     