Tagged: Supercomputing

  • richardmitnick 10:27 am on July 29, 2015 Permalink | Reply
Tags: Supercomputing

    From isgtw: “Supercomputers listen for extraterrestrial life” 


    international science grid this week

    July 29, 2015
    Lance Farrell

    Last week, NASA’s New Horizons spacecraft thrilled us with images from its close encounter with Pluto.

    NASA New Horizons spacecraft II
    NASA/New Horizons

    New Horizons now heads into the Kuiper belt and to points spaceward. Will it find life?


    Known objects in the Kuiper belt beyond the orbit of Neptune (scale in AU; epoch as of January 2015).

    That’s the question motivating Aline Vidotto, scientific collaborator at the Observatoire de Genève in Switzerland. Her recent study harnesses supercomputers to find out how to tune our radio dials to listen in on other planets.

    Model of an interplanetary medium. Stellar winds stream from the star and interact with the magnetosphere of the hot-Jupiters. Courtesy Vidotto

    Vidotto has been studying interstellar environments for a while now, focusing on the interplanetary atmosphere surrounding so-called hot-Jupiter exoplanets since 2009. Similar in size to our Jupiter, these exoplanets orbit their star up to 20 times as closely as Earth orbits the sun, and are considered ‘hot’ due to the extra irradiation they receive.

    Every star generates a stellar wind, and the characteristics of this wind depend on the star from which it originates. The speed of its rotation, its magnetism, its gravity, or how active it is are among the factors affecting this wind. These variables also modify the effect this wind will have on planets in its path.

    Since the winds of different star systems are likely to be very different from our own, we need computers to help us boldly go where no one has ever gone before. “Observationally, we know very little about the winds and the interplanetary space of other stars,” Vidotto says. “This is why we need models and numerical simulations.”

Vidotto’s research focuses on planets four to nine times closer to their host star than Mercury is to the sun. She takes observations of the magnetic fields around five stars from astronomers at the Canada-France-Hawaii Telescope (CFHT) in Hawaii and the Bernard-Lyot Telescope in France and feeds them into 3D simulations. For her most recent study, she divided the computational load between the Darwin cluster (part of the DiRAC network) at the University of Cambridge (UK) and the Piz Daint supercomputer at the Swiss National Supercomputing Centre.

    Canada-France-Hawaii Telescope
CFHT interior
    CFHT

    Bernard Lyot telescope
    Bernard Lyot telescope interior
    Bernard Lyot

    The Darwin cluster consists of 9,728 cores, with a theoretical peak in excess of 202 teraFLOPS. Piz Daint consists of 5,272 compute nodes with 32 GB of RAM per node, and is capable of 7.8 petaFLOPS — that’s more computation in a day than a typical laptop could manage in a millennium.
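A rough back-of-the-envelope check of that day-versus-millennium comparison (the laptop figure of roughly 20 gigaFLOPS is an assumption for illustration, not a number from the article):

```python
# Back-of-the-envelope check of the "day vs. millennium" claim.
# Assumption (not from the article): a typical laptop sustains ~20 gigaFLOPS.
PIZ_DAINT_FLOPS = 7.8e15                    # 7.8 petaFLOPS peak
LAPTOP_FLOPS = 20e9                         # assumed laptop performance
SECONDS_PER_DAY = 86_400
SECONDS_PER_MILLENNIUM = 1000 * 365.25 * SECONDS_PER_DAY

daint_day = PIZ_DAINT_FLOPS * SECONDS_PER_DAY
laptop_millennium = LAPTOP_FLOPS * SECONDS_PER_MILLENNIUM

print(f"Piz Daint, one day:      {daint_day:.2e} operations")
print(f"Laptop, one millennium:  {laptop_millennium:.2e} operations")
print(f"Ratio:                   {daint_day / laptop_millennium:.2f}")  # close to 1, so the claim holds
```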

    Vidotto’s analysis of the DiRAC simulations reveals a much different interplanetary medium than in our home solar system, with an overall interplanetary magnetic field 100 times larger than ours, and stellar wind pressures at the point of orbit in excess of 10,000 times ours.

This immense pressure means these planets must have a very strong magnetic shield (magnetosphere) or their atmospheres would be blown away by the stellar wind, as we suspect happened on Mars. A planet’s atmosphere is thought to be intimately related to its habitability.

    A planet’s magnetism can also tell us something about the interior properties of the planet such as its thermal state, composition, and dynamics. But since the actual magnetic fields of these exoplanets have not been observed, Vidotto is pursuing a simple hypothesis: What if they were similar to our own Jupiter?

    A model of an exoplanet magnetosphere interacting with an interstellar wind. Knowing the characteristics of the interplanetary medium and the flux of the exoplanet radio emissions in this medium can help us tune our best telescopes to listen for distant signs of life. Courtesy Vidotto.

    If this were the case, then the magnetosphere around these planets would extend five times the radius of the planet (Earth’s magnetosphere extends 10-15 times). Where it mingles with the onrushing stellar winds, it creates the effect familiar to us as an aurora display. Indeed, Vidotto’s research reveals the auroral power in these exoplanets is more impressive than Jupiter’s. “If we were ever to live on one of these planets, the aurorae would be a fantastic show to watch!” she says.

    Knowing this auroral power enables astronomers to realistically characterize the interplanetary medium around the exoplanets, as well as the auroral ovals through which cosmic and stellar particles can penetrate the exoplanet atmosphere. This helps astronomers correctly estimate the flux of exoplanet radio emissions and how sensitive equipment on Earth would have to be to detect them. In short, knowing how to listen is a big step toward hearing.

    Radio emissions from these hot-Jupiters would present a challenge to our current class of radio telescopes, such as the Low Frequency Array for radio astronomy (LOFAR). However, “there is one radio array that is currently being designed where these radio fluxes could be detected — the Square Kilometre Array (SKA),” Vidotto says. The SKA is set for completion in 2023, and in the DiRAC clusters Vidotto finds some of the few supercomputers in the world capable of testing correlation software solutions.

    Lofar radio telescope

    While there’s much more work ahead of us, Vidotto’s research presents a significant advance in radio astronomy and is helping refine our ability to detect signals from beyond. With her 3D exoplanet simulations, the DiRAC computation power, and the ears of SKA, it may not be long before we’re able to hear radio signals from distant worlds.

    Stay tuned!

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    iSGTW is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, iSGTW is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read iSGTW via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 1:47 pm on July 21, 2015 Permalink | Reply
Tags: Supercomputing

    From isgtw: “Simulations reveal a less crowded universe” 


    international science grid this week

    July 15, 2015
    Jan Zverina

    Blue Waters supercomputer

    Simulations conducted on the Blue Waters supercomputer at the National Center for Supercomputing Applications (NCSA) suggest there may be far fewer galaxies in the universe than expected.

    The study, published this week in Astrophysical Journal Letters, shows the first results from the Renaissance Simulations, a suite of extremely high-resolution adaptive mesh refinement calculations of high redshift galaxy formation. Taking advantage of data transferred to SDSC Cloud at the San Diego Supercomputer Center (SDSC), these simulations show hundreds of well-resolved galaxies.

“Most critically, we show that the ultraviolet luminosity function of our simulated galaxies is consistent with observations of high-redshift galaxy populations at the bright end of the luminosity function, but at lower luminosities is essentially flat rather than rising steeply,” says principal investigator and lead author Brian W. O’Shea, an associate professor at Michigan State University.

    This discovery allows researchers to make several novel and verifiable predictions ahead of the October 2018 launch of the James Webb Space Telescope, a new space observatory succeeding the Hubble Space Telescope.

    NASA Webb Telescope
    NASA/Webb

    NASA Hubble Telescope
    NASA/ESA Hubble

    “The Hubble Space Telescope can only see what we might call the tip of the iceberg when it comes to taking inventory of the most distant galaxies,” said SDSC director Michael Norman. “A key question is how many galaxies are too faint to see. By analyzing these new, ultra-detailed simulations, we find that there are 10 to 100 times fewer galaxies than a simple extrapolation would predict.”

    The simulations ran on the National Science Foundation (NSF) funded Blue Waters supercomputer, one of the largest and most powerful academic supercomputers in the world. “These simulations are physically complex and very large — we simulate thousands of galaxies at a time, including their interactions through gravity and radiation, and that poses a tremendous computational challenge,” says O’Shea.

    Blue Waters, based at the University of Illinois, is used to tackle a wide range of challenging problems, from predicting the behavior of complex biological systems to simulating the evolution of the cosmos. The supercomputer has more than 1.5 petabytes of memory — enough to store 300 million images from a digital camera — and can achieve a peak performance level of more than 13 quadrillion calculations per second.
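A quick sanity check of that memory comparison (the implied photo size is simply what the numbers work out to, not a figure from the article):

```python
# 1.5 petabytes of memory divided across 300 million photos.
memory_bytes = 1.5e15
photos = 300e6
print(f"Implied size per photo: {memory_bytes / photos / 1e6:.1f} MB")  # 5.0 MB, a plausible camera image
```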

    “The flattening at lower luminosities is a key finding and significant to researchers’ understanding of the reionization of the universe, when the gas in the universe changed from being mostly neutral to mostly ionized,” says John H. Wise, Dunn Family assistant professor of physics at the Georgia Institute of Technology.

    Matter overdensity (top row) and ionized fraction (bottom row) for the regions simulated in the Renaissance Simulations. The red triangles represent locations of galaxies detectable with the Hubble Space Telescope. The James Webb Space Telescope will detect many more distant galaxies, shown by the blue squares and green circles. These first galaxies reionized the universe shown in the image with blue bubbles around the galaxies. Courtesy Brian W. O’Shea (Michigan State University), John H. Wise (Georgia Tech); Michael Norman and Hao Xu (UC San Diego). Click for larger image.

The term ‘reionized’ is used because the universe was ionized immediately after the fiery big bang. During that time, ordinary matter consisted mostly of hydrogen atoms with positively charged protons stripped of their negatively charged electrons. Eventually, the universe cooled enough for electrons and protons to combine and form neutral hydrogen. These neutral atoms didn’t give off any optical or UV light — and without it, conventional telescopes are of no use in finding traces of how the cosmos evolved during these Dark Ages. The light returned when reionization began.

An earlier paper, based on previous simulations, concluded that the universe was 20 percent ionized about 300 million years after the big bang; 50 percent ionized at 550 million years after; and fully ionized at 860 million years after its creation.

    “Our work suggests that there are far fewer faint galaxies than one could previously infer,” says O’Shea. “Observations of high redshift galaxies provide poor constraints on the low-luminosity end of the galaxy luminosity function, and thus make it challenging to accurately account for the full budget of ionizing photons during that epoch.”

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    iSGTW is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, iSGTW is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read iSGTW via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 3:49 pm on July 10, 2015 Permalink | Reply
Tags: Supercomputing

    From BNL: “Big PanDA and Titan Merge to Tackle Torrent of LHC’s Full-Energy Collision Data” 

    Brookhaven Lab

    July 7, 2015
    Karen McNulty Walsh

    Workload handling software has broad potential to maximize use of available supercomputing resources

    The PanDA workload management system developed at Brookhaven Lab and the University of Texas, Arlington, has been integrated on the Titan supercomputer at the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory.

    With the successful restart of the Large Hadron Collider (LHC), now operating at nearly twice its former collision energy, comes an enormous increase in the volume of data physicists must sift through to search for new discoveries.

    CERN LHC Map
    CERN LHC Grand Tunnel
    CERN LHC particles
    LHC at CERN

    Thanks to planning and a pilot project funded by the offices of Advanced Scientific Computing Research and High-Energy Physics within the Department of Energy’s Office of Science, a remarkable data-management tool developed by physicists at DOE’s Brookhaven National Laboratory and the University of Texas at Arlington is evolving to meet the big-data challenge.

    The workload management system, known as PanDA (for Production and Distributed Analysis), was designed by high-energy physicists to handle data analysis jobs for the LHC’s ATLAS collaboration.

    CERN ATLAS New
    CERN/ATLAS

    During the LHC’s first run, from 2010 to 2013, PanDA made ATLAS data available for analysis by 3000 scientists around the world using the LHC’s global grid of networked computing resources. The latest rendition, known as Big PanDA, schedules jobs opportunistically on Titan—the world’s most powerful supercomputer for open scientific research, located at the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility at Oak Ridge National Laboratory—in a manner that does not conflict with Titan’s ability to schedule its traditional, very large, leadership-class computing jobs.

    This integration of the workload management system on Titan—the first large-scale use of leadership class supercomputing facilities fully integrated with PanDA to assist in the analysis of experimental high-energy physics data—will have immediate benefits for ATLAS.

    “Titan is ready to help with new discoveries at the LHC,” said Brookhaven physicist Alexei Klimentov, a leader on the development of Big PanDA.

    The workload management system will likely also help meet big data challenges in many areas of science by maximizing the use of limited supercomputing resources.

    “As a DOE leadership computing facility, OLCF was designed to tackle large complex computing problems that cannot be readily performed using smaller facilities—things like modeling climate and nuclear fusion,” said Jack Wells, Director of Science for the National Center for Computational Science at ORNL. OLCF prioritizes the scheduling of these leadership jobs, which can take up 20, 60, or even greater than 90 percent of Titan’s computational resources. One goal is to make the most of the available running time and get as close to 100 percent utilization of the system as possible.

    “But even when Titan is fully loaded and large jobs are standing in the queue to run, we are typically using about 90 percent of the machine averaging over long periods of time,” Wells said. “That means, on average, there’s 10 percent of the machine that we are unable to use that could be made available to handle a mix of smaller jobs, essentially ‘filling in the cracks’ between the very large jobs.”

    As Klimentov explained, “Applications from high-energy physics don’t require a huge allocation of resources on a supercomputer. If you imagine a glass filled with stones to represent the supercomputing capacity and how much ‘space’ is taken up by the big computing jobs, we use the small spaces between the stones.”

    A workload-management system like PanDA could help fill those spaces with other types of jobs as well.
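The backfill idea itself is easy to sketch. The toy example below (an illustration of the concept, not PanDA's actual scheduler; all names and numbers are invented) packs small jobs into idle gaps left between large leadership-class reservations:

```python
# Toy illustration of opportunistic backfill (not PanDA's actual scheduler):
# pack small jobs into idle "gaps" left between large leadership-class jobs.
from dataclasses import dataclass

@dataclass
class Gap:
    free_cores: int      # idle cores in this window
    minutes_left: int    # time until the next large job needs the nodes

@dataclass
class SmallJob:
    name: str
    cores: int
    minutes: int

def backfill(gaps, queue):
    """Greedily place queued small jobs into gaps they fit in."""
    placements = []
    for job in queue:
        for gap in gaps:
            if job.cores <= gap.free_cores and job.minutes <= gap.minutes_left:
                gap.free_cores -= job.cores
                placements.append((job.name, gap))
                break
    return placements

gaps = [Gap(free_cores=3000, minutes_left=45), Gap(free_cores=800, minutes_left=120)]
queue = [SmallJob("atlas_sim_001", 500, 30), SmallJob("atlas_sim_002", 1000, 90)]
for name, gap in backfill(gaps, queue):
    print(f"{name} backfilled into a gap ({gap.free_cores} cores still idle)")
```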

    Brookhaven physicists Alexei Klimentov and Torre Wenaus have helped to design computational strategies for handling a torrent of data from the ATLAS experiment at the LHC.

    New territory for experimental physicists

    While supercomputers have been absolutely essential for the complex calculations of theoretical physics, distributed grid resources have been the workhorses for analyzing experimental high-energy physics data. PanDA, as designed by Kaushik De, a professor of physics at UT, Arlington, and Torre Wenaus of Brookhaven Lab, helped to integrate these worldwide computing centers by introducing common workflow protocols and access to the entire ATLAS data set.

    But as the volume of data increases with the LHC collision energy, so does the need for running simulations that help scientists interpret their experimental results, Klimentov said. These simulations are perfectly suited for running on supercomputers, and Big PanDA makes it possible to do so without eating up valuable computing time.

    The cutting-edge prototype Big PanDA software, which has been significantly modified from its original design, “backfills” simulations of the collisions taking place at the LHC into spaces between typically large supercomputing jobs.

    “We can insert jobs at just the right time and in just the right size chunks so they can run without competing in any way with the mission leadership jobs, making use of computing power that would otherwise sit idle,” Wells said.

In early June, as the LHC ramped up to a record collision energy of 13 trillion electron volts, Titan ramped up to 10,000 processor cores simultaneously calculating LHC collisions, and has tested scalability successfully up to 90,000 concurrent cores.

    “These simulations provide a clear path to understanding the complex physical phenomena recorded by the ATLAS detector,” Klimentov said.

    He noted that during one 10-day period just after the LHC restart, the group ran ATLAS simulations on Titan for 60,000 Titan core-hours in backfill mode. (30 Titan cores used over a period of one hour consume 30 Titan core-hours of computing resource.)
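Using that definition, a quick calculation shows the modest average footprint such a campaign implies if its 60,000 core-hours were spread evenly over the ten days:

```python
core_hours = 60_000
days = 10
average_cores = core_hours / (days * 24)      # cores kept busy on average, if spread evenly
print(f"Average concurrent Titan cores: {average_cores:.0f}")  # 250, a sliver of Titan's ~300,000 cores
```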

    “This is a great achievement of the pilot program,” said De of UT Arlington, co-leader of the Big PanDA project.

    “We’ll be able to reach far greater heights when the pilot matures into daily operations at Titan in the next phase of this project,” he added.

    The Big PanDA team is now ready to bring its expertise to advancing the use of supercomputers for fields beyond high-energy physics. Already they have plans to use Big PanDA to help tackle the data challenges presented by the LHC’s nuclear physics research using the ALICE detector—a program that complements the exploration of quark-gluon plasma and the building blocks of visible matter at Brookhaven’s Relativistic Heavy Ion Collider (RHIC).

    ALICE - EMCal supermodel
    CERN/ALICE

    Brookhaven RHIC
    BNL/RHIC

    But they see widespread applicability in other data-intensive fields, including molecular dynamics simulations and studies of genes and proteins in biology, the development of new energy technologies and materials design, and understanding global climate change.

    “Our goal is to work with Jack and our other colleagues at OLCF to develop Big PanDA as a general workload tool available to all users of Titan and other supercomputers to advance fundamental discovery and understanding in a broad range of scientific and engineering disciplines,” Klimentov said. Supercomputing groups in the Czech Republic, UK, and Switzerland have already been making inquiries.

    Brookhaven’s role in this work was supported by the DOE Office of Science. The Oak Ridge Leadership Computing Facility is supported by the DOE Office of Science.

    Brookhaven National Laboratory and Oak Ridge National Laboratory are supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition
    BNL Campus

One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

     
  • richardmitnick 1:56 pm on July 9, 2015 Permalink | Reply
Tags: Network Computing, Supercomputing

    From Symmetry: “More data, no problem” 

    Symmetry

    July 09, 2015
    Katie Elyce Jones

    Scientists are ready to handle the increased data of the current run of the Large Hadron Collider.

    Photo by Reidar Hahn, Fermilab

    Physicist Alexx Perloff, a graduate student at Texas A&M University on the CMS experiment, is using data from the first run of the Large Hadron Collider for his thesis, which he plans to complete this year.

    CERN LHC Map
    CERN LHC Grand Tunnel
    CERN LHC particles
    LHC

    CERN CMS Detector
    CMS

    When all is said and done, it will have taken Perloff a year and a half to conduct the computing necessary to analyze all the information he needs—not unusual for a thesis.

    But had he used the computing tools LHC scientists are using now, he estimates he could have finished his particular kind of analysis in about three weeks. Although Perloff represents only one scientist working on the LHC, his experience shows the great leaps scientists have made in LHC computing by democratizing their data, becoming more responsive to popular demand and improving their analysis software.

    A deluge of data

Scientists estimate the current run of the LHC could create up to 10 times more data than the first one. CERN already routinely stores 6 gigabytes (6 billion bytes) of data per second, up from 1 gigabyte per second in the first run.

    The second run of the LHC is more data-intensive because the accelerator itself is more intense: The collision energy is 60 percent greater, resulting in “pile-up” or more collisions per proton bunch. Proton bunches are also injected into the ring closer together, resulting in more collisions per second.

    On top of that, the experiments have upgraded their triggers, which automatically choose which of the millions of particle events per second to record. The CMS trigger will now record more than twice as much data per second as it did in the previous run.

    Had CMS and ATLAS scientists relied only on adding more computers to make up for the data hike, they would likely have needed about four to six times more computing power in CPUs and storage than they used in the first run of the LHC.

    CERN ATLAS New
    ATLAS

    To avoid such a costly expansion, they found smarter ways to share and analyze the data.

    Flattening the hierarchy

    Over a decade ago, network connections were less reliable than they are today, so the Worldwide LHC Computing Grid was designed to have different levels, or tiers, that controlled data flow.

    All data recorded by the detectors goes through the CERN Data Centre, known as Tier-0, where it is initially processed, then to a handful of Tier-1 centers in different regions across the globe.

    CERN DATA Center
    One view of the Cern Data Centre

    During the last run, the Tier-1 centers served Tier-2 centers, which were mostly the smaller university computing centers where the bulk of physicists do their analyses.

    “The experience for a user on Run I was more restrictive,” says Oliver Gutsche, assistant head of the Scientific Computing Division for Science Workflows and Operations at Fermilab, the US Tier-1 center for CMS*. “You had to plan well ahead.”

    Now that the network has proved reliable, a new model “flattens” the hierarchy, enabling a user at any ATLAS or CMS Tier-2 center to access data from any of their centers in the world. This was initiated in Run I and is now fully in place for Run II.

    Through a separate upgrade known as data federation, users can also open a file from another computing center through the network, enabling them to view the file without going through the process of transferring it from center to center.

    Another significant upgrade affects the network stateside. Through its Energy Sciences Network, or ESnet, the US Department of Energy increased the bandwidth of the transatlantic network that connects the US CMS and ATLAS Tier-1 centers to Europe. A high-speed network, ESnet transfers data 15,000 times faster than the average home network provider.

    Dealing with the rush

    One of the thrilling things about being a scientist on the LHC is that when something exciting shows up in the detector, everyone wants to talk about it. The downside is everyone also wants to look at it.

    “When data is more interesting, it creates high demand and a bottleneck,” says David Lange, CMS software and computing co-coordinator and a scientist at Lawrence Livermore National Laboratory. “By making better use of our resources, we can make more data available to more people at any time.”

    To avoid bottlenecks, ATLAS and CMS are now making data accessible by popularity.

    “For CMS, this is an automated system that makes more copies when popularity rises and reduces copies when popularity declines,” Gutsche says.
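A minimal sketch of such popularity-driven replication (an illustration of the idea, not the actual CMS system; the thresholds and dataset names are invented):

```python
# Minimal sketch of popularity-driven replication (not the actual CMS system).
def target_replicas(accesses_last_week, min_copies=1, max_copies=10):
    """More accesses -> more copies, within fixed bounds."""
    return max(min_copies, min(max_copies, 1 + accesses_last_week // 100))

def rebalance(datasets):
    """Return how many copies to add (+) or remove (-) for each dataset."""
    plan = {}
    for name, info in datasets.items():
        desired = target_replicas(info["accesses"])
        plan[name] = desired - info["replicas"]
    return plan

datasets = {
    "/HotHiggsSample": {"accesses": 950, "replicas": 3},   # popular -> add copies
    "/OldControlData": {"accesses": 12,  "replicas": 4},   # idle -> free up space
}
print(rebalance(datasets))   # {'/HotHiggsSample': 7, '/OldControlData': -3}
```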

    Improving the algorithms

    One of the greatest recent gains in computing efficiency for the LHC relied on the physicists who dig into the data. By working closely with physicists, software engineers edited the algorithms that describe the physics playing out in the LHC, thereby significantly improving processing time for reconstruction and simulation jobs.

    “A huge amount of effort was put in, primarily by physicists, to understand how the physics could be analyzed while making the computing more efficient,” says Richard Mount, senior research scientist at SLAC National Accelerator Laboratory who was ATLAS computing coordinator during the recent LHC upgrades.

    CMS tripled the speed of event reconstruction and halved simulation time. Similarly, ATLAS quadrupled reconstruction speed.

    Algorithms that determine data acquisition on the upgraded triggers were also improved to better capture rare physics events and filter out the background noise of routine (and therefore uninteresting) events.

    “More data” has been the drumbeat of physicists since the end of the first run, and now that it’s finally here, LHC scientists and students like Perloff can pick up where they left off in the search for new physics—anytime, anywhere.

    *While not noted in the article, I believe that Brookhaven National Laboratory is the Tier 1 site for Atlas in the United States.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.


     
  • richardmitnick 7:33 pm on April 15, 2015 Permalink | Reply
Tags: Supercomputing

    From isgtw: “Supercomputing enables researchers in Norway to tackle cancer” 


    international science grid this week

    April 15, 2015
    Yngve Vogt

    Cancer researchers are using the Abel supercomputer at the University of Oslo in Norway to detect which versions of genes are only found in cancer cells. Every form of cancer, even every tumour, has its own distinct variants.

    “This charting may help tailor the treatment to each patient,” says Rolf Skotheim, who is affiliated with the Centre for Cancer Biomedicine and the research group for biomedical informatics at the University of Oslo, as well as the Department of Molecular Oncology at Oslo University Hospital.

    “Charting the versions of the genes that are only found in cancer cells may help tailor the treatment offered to each patient,” says Skotheim. Image courtesy Yngve Vogt.

    His research group is working to identify the genes that cause bowel and prostate cancer, which are both common diseases. There are 4,000 new cases of bowel cancer in Norway every year. Only six out of ten patients survive the first five years. Prostate cancer affects 5,000 Norwegians every year. Nine out of ten survive.

    Comparisons between healthy and diseased cells

    In order to identify the genes that lead to cancer, Skotheim and his research group are comparing genetic material in tumours with genetic material in healthy cells. In order to understand this process, a brief introduction to our genetic material is needed:

    Our genetic material consists of just over 20,000 genes. Each gene consists of thousands of base pairs, represented by a specific sequence of the four building blocks, adenine, thymine, guanine, and cytosine, popularly abbreviated to A, T, G, and C. The sequence of these building blocks is the very recipe for the gene. Our whole DNA consists of some six billion base pairs.

    The DNA strand carries the molecular instructions for activity in the cells. In other words, DNA contains the recipe for proteins, which perform the tasks in the cells. DNA, nevertheless, does not actually produce proteins. First, a copy of DNA is made: this transcript is called RNA and it is this molecule that is read when proteins are produced.

RNA is copied from only a small part of the DNA, namely its active constituents. Most of DNA is inactive: only 1–2% of the DNA strand is active.

    In cancer cells, something goes wrong with the RNA transcription. There is either too much RNA, which means that far too many proteins of a specific type are formed, or the composition of base pairs in the RNA is wrong. The latter is precisely the area being studied by the University of Oslo researchers.

    Wrong combinations

    All genes can be divided into active and inactive parts. A single gene may consist of tens of active stretches of nucleotides (exons). “RNA is a copy of a specific combination of the exons from a specific gene in DNA,” explains Skotheim. There are many possible combinations, and it is precisely this search for all of the possible combinations that is new in cancer research.

    Different cells can combine the nucleotides in a single gene in different ways. A cancer cell can create a combination that should not exist in healthy cells. And as if that didn’t make things complicated enough, sometimes RNA can be made up of stretches of nucleotides from different genes in DNA. These special, complex genes are called fusion genes.
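A toy count shows how quickly the combinations multiply, even before fusion genes are considered (the gene and exon labels below are invented for illustration):

```python
# Toy count of possible exon combinations (illustrative only; real splicing
# is constrained by biology, and these gene/exon labels are invented).
from itertools import combinations

def splice_variants(exons):
    """All ordered subsets of exons that keep at least two exons."""
    variants = []
    for k in range(2, len(exons) + 1):
        variants.extend(combinations(exons, k))   # order along the gene is preserved
    return variants

gene_a = ["A1", "A2", "A3", "A4", "A5"]
gene_b = ["B1", "B2", "B3"]

print(len(splice_variants(gene_a)))                 # 26 in-gene combinations
# A "fusion" transcript joins exons from two different genes:
fusion = [a + b for a in splice_variants(gene_a) for b in splice_variants(gene_b)]
print(len(fusion))                                  # 26 * 4 = 104 cross-gene combinations
```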

    “We need powerful computers to crunch the enormous amounts of raw data,” says Skotheim. “Even if you spent your whole life on this task, you would not be able to find the location of a single nucleotide.”

    In other words, researchers must look for errors both inside genes and between the different genes. “Fusion genes are usually found in cancer cells, but some of them are also found in healthy cells,” says Skotheim. In patients with prostate cancer, researchers have found some fusion genes that are only created in diseased cells. These fusion genes may then be used as a starting-point in the detection of and fight against cancer.

    The researchers have also found fusion genes in bowel cells, but they were not cancer-specific. “For some reason, these fusion genes can also be found in healthy cells,” adds Skotheim. “This discovery was a let-down.”

Improving treatment

    There are different RNA errors in the various cancer diseases. The researchers must therefore analyze the RNA errors of each disease.

    Among other things, the researchers are comparing RNA in diseased and healthy tissue from 550 patients with prostate cancer. The patients that make up the study do not receive any direct benefits from the results themselves. However, the research is important in order to be able to help future patients.

    “We want to find the typical defects associated with prostate cancer,” says Skotheim. “This will make it easier to understand what goes wrong with healthy cells, and to understand the mechanisms that develop cancer. Once we have found the cancer-specific molecules, they can be used as biomarkers.” In some cases, the biomarkers can be used to find cancer, determine the level of severity of the cancer and the risk of spreading, and whether the patient should be given a more aggressive treatment.

    Even though the researchers find deviations in the RNA, there is no guarantee that there is appropriate, targeted medicine available. “The point of our research is to figure out more of the big picture,” says Skotheim. “If we identify a fusion gene that is only found in cancer cells, the discovery will be so important in itself that other research groups around the world will want to begin working on this straight away. If a cure is found that counteracts the fusion genes, this may have enormous consequences for the cancer treatment.”

    Laborious work

    Recreating RNA is laborious work. The set of RNA molecules consists of about 100 million bases, divided into a few thousand bases from each gene.

The laboratory machine reads millions of short sequence fragments, each only about 100 base pairs long. In order for the researchers to be able to place them in the right location, they must run large statistical analyses. The RNA analysis of a single patient can take a few days.

All of these fragments must be matched against the DNA strand. Unfortunately, the researchers do not have the DNA sequence of each patient. In order to learn where the base pairs come from in the DNA strand, they must therefore use the reference genome of the human species. “This is not ideal, because there are individual differences,” explains Skotheim. The future potentially lies in fully sequencing the DNA of each patient when conducting medical experiments.
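The core matching step can be sketched very naively (real aligners use indexed data structures and tolerate sequencing errors; the reference and reads below are invented):

```python
# Naive placement of short sequenced fragments ("reads") on a reference genome.
# Real aligners are indexed and mismatch-tolerant; these sequences are invented.
def map_reads(reference, reads):
    positions = {}
    for read in reads:
        positions[read] = reference.find(read)   # -1 means "not placed"
    return positions

reference = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"
reads = ["GGCCATTGTA", "GCCCGATAG", "TTTTTTTTTT"]
print(map_reads(reference, reads))
# {'GGCCATTGTA': 2, 'GCCCGATAG': 30, 'TTTTTTTTTT': -1}
```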

Supercomputing

    There is no way this research could be carried out using pen and paper. “We need powerful computers to crunch the enormous amounts of raw data. Even if you spent your whole life on this task, you would not be able to find the location of a single nucleotide. This is a matter of millions of nucleotides that must be mapped correctly in the system of coordinates of the genetic material. Once we have managed to find the RNA versions that are only found in cancer cells, we will have made significant progress. However, the work to get that far requires advanced statistical analyses and supercomputing,” says Skotheim.

    The analyses are so demanding that the researchers must use the University of Oslo’s Abel supercomputer, which has a theoretical peak performance of over 250 teraFLOPS. “With the ability to run heavy analyses on such large amounts of data, we have an enormous advantage not available to other cancer researchers,” explains Skotheim. “Many medical researchers would definitely benefit from this possibility. This is why they should spend more time with biostatisticians and informaticians. RNA samples are taken from the patients only once. The types of analyses that can be run are only limited by the imagination.”

    “We need to be smart in order to analyze the raw data.” He continues: “There are enormous amounts of data here that can be interpreted in many different ways. We just got started. There is lots of useful information that we have not seen yet. Asking the right questions is the key. Most cancer researchers are not used to working with enormous amounts of data, and how to best analyze vast data sets. Once researchers have found a possible answer, they must determine whether the answer is chance or if it is a real finding. The solution is to find out whether they get the same answers from independent data sets from other parts of the world.”

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    iSGTW is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, iSGTW is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read iSGTW via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 9:35 am on March 17, 2015 Permalink | Reply
Tags: Supercomputing

From CBS: “Scientists mapping Earth in 3D, from the inside out”

    CBS News

    CBS News

    March 16, 2015
    Michael Casey

Using a technique that is similar to a medical CT (“CAT”) scan, researchers at Princeton are using seismic waves from earthquakes to create images of the Earth’s subterranean structures — such as tectonic plates, magma reservoirs and mineral deposits — which will help scientists better understand how earthquakes and volcanoes occur. Ebru Bozdağ, University of Nice Sophia Antipolis, and David Pugmire, Oak Ridge National Laboratory

    The wacky adventures of scientists traveling to the Earth’s core have been a favorite plot line in Hollywood over the decades, but actually getting there is mostly science fiction.

    Now, a group of scientists is using some of the world’s most powerful supercomputers to do what could be the next best thing.

    Princeton’s Jeroen Tromp and colleagues are eavesdropping on the seismic vibrations produced by earthquakes, and using the data to create a map of the Earth’s mantle, the semisolid rock that stretches to a depth of 1,800 miles, about halfway down to the planet’s center and about 300 times deeper than humans have drilled. The research could help understand and predict future earthquakes and volcanic eruptions.

    “We need to scour the maps for interesting and unexpected features,” Tromp told CBS News. “But it’s really a 3D mapping expedition.”

    To do this, Tromp and his colleagues will exploit an interesting phenomenon related to seismic activity below the surface of the Earth. As seismic waves travel, they change speed depending on the density, temperature and type of rock they’re moving through, for instance slowing down when traveling through an underground aquifer or magma.

    This three-dimensional image displays contours of locations where seismic wave speeds are faster than average.
    Ebru Bozdağ, University of Nice Sophia Antipolis, and David Pugmire, Oak Ridge National Laboratory

Thousands of seismographic stations worldwide make recordings, or seismograms, that detail the movement produced by seismic waves, which typically travel at speeds of several miles per second and last several minutes. By combining seismographic readings of roughly 3,000 quakes of magnitude 5.5 and greater, the geologists can produce a three-dimensional model of the structures under the Earth’s surface.
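The principle can be illustrated with a deliberately simple travel-time calculation (the layer thicknesses and wave speeds below are round, illustrative values):

```python
# Simplified travel-time calculation for a vertically travelling wave through
# flat rock layers (thicknesses in km, speeds in km/s; values are illustrative).
layers = [
    {"name": "crust",        "thickness": 35.0,   "speed": 6.5},
    {"name": "upper mantle", "thickness": 375.0,  "speed": 8.5},
    {"name": "lower mantle", "thickness": 1490.0, "speed": 12.0},
]

travel_time = sum(layer["thickness"] / layer["speed"] for layer in layers)
print(f"One-way travel time to ~1,900 km depth: {travel_time:.0f} s")
# Tomography inverts this relationship: many observed travel times from many
# earthquakes constrain the 3D pattern of wave speeds inside the Earth.
```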

    For the task, Tromp’s team will use the supercomputer called Titan, which can perform more than 20 quadrillion calculations per second and is located at the Department of Energy’s Oak Ridge National Laboratory in Tennessee.

    ORNL Titan Supercomputer
    TITAN at ORNL

    The technique, called seismic tomography, has been compared to the computerized tomography used in medical CAT scans, in which a scanner captures a series of X-ray images from different viewpoints, creating cross-sectional images that can be combined into 3D images.

    Tromp acknowledged he doesn’t think his research could one day lead to a scientist actually reaching the mantle. But he said it could help seismologists do a better job of predicting the damage from future earthquakes and the possibility of volcanic activity.

    For example, they might find a fragment of a tectonic plate that broke off and sank into the mantle. The resulting map could tell seismologists more about the precise locations of underlying tectonic plates, which can trigger earthquakes when they shift or slide against each other. The maps could also reveal the locations of magma that, if it comes to the surface, causes volcanic activity.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 5:30 pm on February 18, 2015 Permalink | Reply
Tags: Enzyme studies, Supercomputing

    From UCSD: “3D Enzyme Model Provides New Tool for Anti-Inflammatory Drug Development” 

    UC San Diego bloc

    UC San Diego

    January 26, 2015
    Heather Buschman

    Researchers develop first computer models of phospholipase A2 enzymes extracting their substrates out of the cell membrane, an early step in inflammation

    Phospholipase A2 (PLA2) enzymes are known to play a role in many inflammatory diseases, including asthma, arthritis and atherosclerosis. It then stands to reason that PLA2 inhibitors could represent a new class of anti-inflammatory medication. To better understand PLA2 enzymes and help drive therapeutic drug development, researchers at University of California, San Diego School of Medicine developed 3D computer models that show exactly how two PLA2 enzymes extract their substrates from cellular membranes. The new tool is described in a paper published online the week of Jan. 26 by the Proceedings of the National Academy of Sciences.

    Phospholipase Cleavage Sites. Note that an enzyme that displays both PLA1 and PLA2 activities is called a Phospholipase B

    “This is the first time experimental data and supercomputing technology have been used to visualize an enzyme interacting with a membrane,” said Edward A. Dennis, PhD, Distinguished Professor of Pharmacology, chemistry and biochemistry and senior author of the study. “In doing so, we discovered that binding the membrane triggers a conformational change in PLA2 enzymes and activates them. We also saw several important differences between the two PLA2 enzymes we studied — findings that could influence the design and development of specific PLA2 inhibitor drugs for each enzyme.”

    The computer simulations of PLA2 enzymes developed by Dennis and his team, including first author Varnavas D. Mouchlis, PhD, show the specific molecular interactions between PLA2 enzymes and their substrate, arachidonic acid, as the enzymes suck it up from cellular membranes.

    Make no mistake, though — the animations of PLA2 in action are not mere cartoons. They are sophisticated molecular dynamics simulations based upon previously published deuterium exchange mass spectrometry (DXMS) data on PLA2. DXMS is an experimental laboratory technique that provides molecular information about the interactions of these enzymes with membranes.

    “The combination of rigorous experimental data and in silico [computer] models is a very powerful tool — the experimental data guided the development of accurate 3D models, demonstrating that these two scientific fields can inform one another,” Mouchlis said.

    The liberation of arachidonic acid by PLA2 enzymes, as shown in these simulations, sets off a cascade of molecular events that result in inflammation. Aspirin and many other anti-inflammatory drugs work by inhibiting enzymes in this cascade that rely on PLA2 enzymes to provide them with arachidonic acid. That means PLA2 enzymes could potentially also be targeted to dampen inflammation at an earlier point in the process.

    Co-authors include Denis Bucher, UC San Diego, and J. Andrew McCammon, UC San Diego and Howard Hughes Medical Institute.

    This research was funded, in part, by the National Institute of General Medical Sciences at the National Institutes of Health (grants GM20501 and P41GM103712-S1), National Science Foundation (grant ACI-1053575) and Howard Hughes Medical Institute.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    UC San Diego Campus

The University of California, San Diego (also referred to as UC San Diego or UCSD), is a public research university located in the La Jolla area of San Diego, California, in the United States. The university occupies 2,141 acres (866 ha) near the coast of the Pacific Ocean with the main campus resting on approximately 1,152 acres (466 ha). Established in 1960 near the pre-existing Scripps Institution of Oceanography, UC San Diego is the seventh oldest of the 10 University of California campuses and offers over 200 undergraduate and graduate degree programs, enrolling about 22,700 undergraduate and 6,300 graduate students. UC San Diego is one of America’s Public Ivy universities, which recognizes top public research universities in the United States. UC San Diego was ranked 8th among public universities and 37th among all universities in the United States, and rated the 18th Top World University by U.S. News & World Report’s 2015 rankings.

     
  • richardmitnick 4:31 am on February 18, 2015 Permalink | Reply
Tags: Supercomputing

    From LBL: “Bigger steps: Berkeley Lab researchers develop algorithm to make simulation of ultrafast processes possible” 

    Berkeley Logo

    Berkeley Lab

    February 17, 2015
    Rachel Berkowitz

    When electronic states in materials are excited during dynamic processes, interesting phenomena such as electrical charge transfer can take place on quadrillionth-of-a-second, or femtosecond, timescales. Numerical simulations in real-time provide the best way to study these processes, but such simulations can be extremely expensive. For example, it can take a supercomputer several weeks to simulate a 10 femtosecond process. One reason for the high cost is that real-time simulations of ultrafast phenomena require “small time steps” to describe the movement of an electron, which takes place on the attosecond timescale – a thousand times faster than the femtosecond timescale.

    Model of ion (Cl) collision with atomically thin semiconductor (MoSe2). Collision region is shown in blue and zoomed in; red points show initial positions of Cl. The simulation calculates the energy loss of the ion based on the incident and emergent velocities of the Cl.

To combat the high cost associated with the small time steps, Lin-Wang Wang, senior staff scientist at the Lawrence Berkeley National Laboratory (Berkeley Lab), and visiting scholar Zhi Wang from the Chinese Academy of Sciences, have developed a new algorithm which increases the small time step from about one attosecond to about half a femtosecond. This allows them to simulate ultrafast phenomena for systems of around 100 atoms.

    “We demonstrated a collision of an ion [Cl] with a 2D material [MoSe2] for 100 femtoseconds. We used supercomputing systems for ten hours to simulate the problem – a great increase in speed,” says L.W. Wang. That represents a reduction from 100,000 time steps down to only 500. The results of the study were reported in a Physical Review Letters paper titled Efficient real-time time-dependent DFT method and its application to a collision of an ion with a 2D material.

    Conventional computational methods cannot be used to study systems in which electrons have been excited from the ground state, as is the case for ultrafast processes involving charge transfer. But using real-time simulations, an excited system can be modeled with time-dependent quantum mechanical equations that describe the movement of electrons.

    The traditional algorithms work by directly manipulating these equations. Wang’s new approach is to expand the equations into individual terms, based on which states are excited at a given time. The trick, which he has solved, is to figure out the time evolution of the individual terms. The advantage is that some terms in the expanded equations can be eliminated.

    Zhi Wang (left) and Berkeley Lab’s Lin-Wang Wang (right).

“By eliminating higher energy terms, you significantly reduce the dimension of your problem, and you can also use a bigger time step,” explains Wang, describing the key to the algorithm’s success. Solving the equations in bigger time steps reduces the computational cost and increases the speed of the simulations.

    Comparing the new algorithm with the old, slower algorithm yields similar results, e.g., the predicted energies and velocities of an atom passing through a layer of material are the same for both models. This new algorithm opens the door for efficient real-time simulations of ultrafast processes and electron dynamics, such as excitation in photovoltaic materials and ultrafast demagnetization following an optical excitation.
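To get a feel for why truncating the expansion helps, here is a toy sketch of the general idea (an illustration only, not the published algorithm): when the dynamics is dominated by low-energy states, a reduced expansion allows one large step where a brute-force integrator needs thousands of small ones.

```python
import numpy as np

# Toy sketch of the general idea (not the published Berkeley method): if the
# dynamics is dominated by low-energy states, expanding the state in those
# states lets you take one large step instead of thousands of tiny ones.
rng = np.random.default_rng(0)
N = 200
A = rng.standard_normal((N, N))
H = (A + A.T) / 2                              # random Hermitian "Hamiltonian"
E, V = np.linalg.eigh(H)                       # energies and eigenstates

psi0 = V[:, :5] @ rng.standard_normal(5)       # start in a mix of low-energy states
psi0 = psi0 / np.linalg.norm(psi0)
t_final = 5.0

# Brute force: integrate d(psi)/dt = -i H psi with many small RK4 steps.
def deriv(psi):
    return -1j * (H @ psi)

dt, psi = 0.002, psi0.astype(complex)
for _ in range(int(t_final / dt)):
    k1 = deriv(psi)
    k2 = deriv(psi + 0.5 * dt * k1)
    k3 = deriv(psi + 0.5 * dt * k2)
    k4 = deriv(psi + dt * k3)
    psi = psi + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# Reduced description: keep only the 20 lowest-energy states, where the time
# evolution is just a phase factor, so a single step of any size is allowed.
k = 20
coeff = V[:, :k].T @ psi0
psi_reduced = V[:, :k] @ (np.exp(-1j * E[:k] * t_final) * coeff)

print(f"Overlap of the two results: {abs(np.vdot(psi_reduced, psi)):.6f}")  # ~1.0
```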

    The work was supported by the Department of Energy’s Office of Science and used the resources of the National Energy Research Scientific Computing center (NERSC).

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal

     
  • richardmitnick 11:43 am on January 24, 2015 Permalink | Reply
Tags: Supercomputing

    From isgtw: “Unlocking the secrets of vertebrate evolution” 


    international science grid this week

    January 21, 2015
    Lance Farrell

    Conventional wisdom holds that snakes evolved a particular form and skeleton by losing regions in their spinal column over time. These losses were previously explained by a disruption in Hox genes responsible for patterning regions of the vertebrae.

    Paleobiologists P. David Polly, professor of geological sciences at Indiana University, US, and Jason Head, assistant professor of earth and atmospheric sciences at the University of Nebraska-Lincoln, US, overturned that assumption. Recently published in Nature, their research instead reveals that snake skeletons are just as regionalized as those of limbed vertebrates.

Using Quarry [being taken out of service Jan 30, 2015 and replaced by Karst], a supercomputer at Indiana University, Polly and Head arrived at a compelling new explanation for why snake skeletons are so different: Vertebrates like mammals, birds, and crocodiles evolved additional skeletal regions independently from ancestors like snakes and lizards.

    Karst
    Karst

    Despite having no limbs and more vertebrae, snake skeletons are just as regionalized as lizards’ skeletons.

    “Our study finds that snakes did not require extensive modification to their regulatory gene systems to evolve their elongate bodies,” Head notes.

    P. David Polly. Photo courtesy Indiana University.

    Polly and Head had to overcome challenges in collection and analysis to arrive at this insight. “If you are sequencing a genome all you really need is a little scrap of tissue, and that’s relatively easy to get,” Polly says. “But if you want to do something like we have done, you not only need an entire skeleton, but also one for a whole lot of species.”

To arrive at their conclusion, Head and Polly sampled 56 skeletons from collections worldwide. They began by photographing and digitizing the bones, then chose specific landmarks on each spinal segment. Using the digital coordinates of each vertebra, they then applied a technique called geometric morphometrics, a multivariate analysis that plots x and y coordinates to analyze an object’s shape.

    Armed with shape information, the scientists then fit a series of regressions and tracked each vertebra’s gradient over the entire spine. This led to a secondary challenge — with 36,000 landmarks applied to 3,000 digitized vertebrae, the regression analyses required to peer into the snake’s past called for a new analytical tool.

    “The computations required iteratively fitting four or more segmented regression models, each with 10 to 83 parameters, for every regional permutation of up to 230 vertebrae per skeleton. The amount of computational power required is well beyond any desktop system,” Head observes.

    Researchers like Polly and Head increasingly find quantitative analyses of data sets this size require the computational resources to match. With 7.2 million different models making up the data for their study, nothing less than a supercomputer would do.
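The segmented-regression fitting at the heart of the analysis can be sketched in miniature (a toy reconstruction of the approach with synthetic data, not the authors' code):

```python
import numpy as np

# Sketch of the segmented-regression idea (not the authors' actual code):
# fit separate straight lines to two regions of a shape gradient along the
# spine and pick the boundary vertebra that minimizes the total squared error.
rng = np.random.default_rng(1)
position = np.arange(1, 61)                                # vertebra number 1..60
true_break = 25
shape = np.where(position <= true_break,
                 0.5 * position,                           # one trend in region 1
                 12.5 + 1.5 * (position - true_break))     # a steeper trend in region 2
shape = shape + rng.normal(0, 0.8, size=position.size)     # measurement noise

def sse_of_fit(x, y):
    """Sum of squared errors of an ordinary least-squares line through (x, y)."""
    coeffs = np.polyfit(x, y, 1)
    return float(np.sum((np.polyval(coeffs, x) - y) ** 2))

best = min(
    ((b, sse_of_fit(position[:b], shape[:b]) + sse_of_fit(position[b:], shape[b:]))
     for b in range(5, 56)),
    key=lambda item: item[1],
)
print(f"Estimated region boundary: vertebra {best[0]}, SSE = {best[1]:.1f}")
```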

    Jason Head with ball python. Photo courtesy Craig Chandler, University of Nebraska-Lincoln.

    “Our supercomputing environments serve a broad base of users and purposes,” says David Hancock, manager of IU’s high performance systems. “We often support the research done in the hard sciences and math such as Polly’s, but we also see analytics done for business faculty, marketing and modeling for interior design projects, and lighting simulations for theater productions.”

    Analyses of the scale Polly and Head needed would have been unapproachable even a decade ago, and without US National Science Foundation support remain beyond the reach of most institutions. “A lot of the big jobs ran on Quarry,” says Polly. “To run one of these exhaustive models on a single snake took about three and a half days. Ten years ago we could barely have scratched the surface.”

    As high-performance computing resources reshape the future, scientists like Polly and Head have greater abilities to look into the past and unlock the secrets of evolution.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    iSGTW is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, iSGTW is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read iSGTW via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 5:00 pm on January 21, 2015 Permalink | Reply
Tags: Simulation Astronomy, Supercomputing

    From isgtw: “Exploring the universe with supercomputing” 


    international science grid this week

    January 21, 2015
    Andrew Purcell

    The Center for Computational Astrophysics (CfCA) in Japan recently upgraded its ATERUI supercomputer, doubling the machine’s theoretical peak performance to 1.058 petaFLOPS. Eiichiro Kokubo, director of the center, tells iSGTW how supercomputers are changing the way research is conducted in astronomy.

    What’s your research background?

    I investigate the origin of planetary systems. I use many-body simulations to study how planets form and I also previously worked on the development of the Gravity Pipe, or ‘GRAPE’ supercomputer.

    Why is it important to use supercomputers in this work?

In the standard scenario of planet formation, small solid bodies — known as ‘planetesimals’ — interact with one another and this causes their orbits around the sun to evolve. Collisions between these building blocks lead to the formation of rocky planets like the Earth. To understand this process, you really need to do very-large-scale many-body simulations. This is where the high-performance computing comes in: supercomputers act as telescopes for phenomena we wouldn’t otherwise be able to see.

    The scales of mass, energy, and time are generally huge in astronomy. However, as supercomputers have become ever more powerful, we’ve become able to program the relevant physical processes — motion, fluid dynamics, radiative transfer, etc. — and do meaningful simulation of astronomical phenomena. We can even conduct experiments by changing parameters within our simulations. Simulation is numerical exploration of the universe!
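A minimal many-body sketch conveys the flavor of such simulations (a toy direct-summation integrator, nothing like a production planet-formation code; all values are illustrative):

```python
import numpy as np

# Minimal direct-summation N-body sketch (illustrative only; production
# planet-formation codes use far more sophisticated integrators and physics).
G = 1.0                      # gravitational constant in code units
SOFTENING = 1e-3             # avoids singularities in close encounters

def accelerations(pos, mass):
    """Pairwise gravitational accelerations for all bodies (O(N^2))."""
    diff = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]        # r_j - r_i
    dist2 = np.sum(diff ** 2, axis=-1) + SOFTENING ** 2
    inv_r3 = dist2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                               # no self-force
    return G * np.sum(diff * (mass[np.newaxis, :, None] * inv_r3[:, :, None]), axis=1)

def leapfrog(pos, vel, mass, dt, steps):
    """Kick-drift-kick leapfrog integration."""
    acc = accelerations(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc
    return pos, vel

# A "sun" plus a handful of planetesimals on rough circular orbits (toy values).
rng = np.random.default_rng(2)
n = 6
mass = np.array([1.0] + [1e-6] * (n - 1))
radius = np.linspace(1.0, 2.0, n - 1)
angle = rng.uniform(0, 2 * np.pi, n - 1)
pos = np.vstack([[0.0, 0.0], np.column_stack([radius * np.cos(angle), radius * np.sin(angle)])])
vel = np.vstack([[0.0, 0.0], np.column_stack([-np.sin(angle), np.cos(angle)]) * np.sqrt(G / radius)[:, None]])
pos, vel = leapfrog(pos, vel, mass, dt=0.01, steps=1000)
print("Final planetesimal distances from the star:", np.round(np.linalg.norm(pos[1:] - pos[0], axis=1), 3))
```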

    How has supercomputing changed the way research is carried out?

‘Simulation astronomy’ has now become a third major methodological approach within the field, alongside observational and theoretical astronomy. Telescopes rely on electromagnetic radiation, but there are still many things that we cannot see even with today’s largest telescopes. Supercomputers enable us to use complex physical calculations to visualize phenomena that would otherwise remain hidden to us. Their use also gives us the flexibility to simulate phenomena across a vast range of spatial and temporal scales.

    Simulation can be used to simply test hypotheses, but it can also be used to explore new worlds that are beyond our current imagination. Sometimes you get results from a simulation that you really didn’t expect — this is often the first step on the road to making new discoveries and developing new astronomical theories.

    ATERUI has made the leap to become a petaFLOPS-scale supercomputer. Image courtesy NAOJ/Makoto Shizugami (VERA/CfCA, NAOJ).

    In astronomy, there are three main kinds of large-scale simulation: many-body, fluid dynamics, and radiative transfer. These problems can all be parallelized effectively, meaning that massively parallel computers — like the Cray XC30 system we’ve installed — are ideally suited to performing these kinds of simulations.

    “Supercomputers act as telescopes for phenomena we wouldn’t otherwise be able to see,” says Kokubo.

What research problems will ATERUI enable you to tackle?

    There are over 100 users in our community and they are tackling a wide variety of problems. One project, for example, is looking at supernovae: having very high-resolution 3D simulations of these explosions is vital to improving our understanding. Another project is looking at the distribution of galaxies throughout the universe, and there is a whole range of other things being studied using ATERUI too.

Since ATERUI was installed, it’s been used at over 90% of its capacity, in terms of the number of CPUs running at any given time. Basically, it’s almost full every single day!

    Don’t forget, we also have the K computer here in Japan. The National Astronomical Observatory of Japan, of which the CfCA is part, is actually one of the consortium members of the K supercomputer project. As such, we also have plenty of researchers using that machine, as well. High-end supercomputers like K are absolutely great, but it is also important to have middle-class supercomputers dedicated to specific research fields available.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    iSGTW is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, iSGTW is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read iSGTW via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     