Tagged: isgtw

  • richardmitnick 7:33 pm on April 15, 2015
    Tags: isgtw

    From isgtw: “Supercomputing enables researchers in Norway to tackle cancer” 


    international science grid this week

    April 15, 2015
    Yngve Vogt

    Cancer researchers are using the Abel supercomputer at the University of Oslo in Norway to detect which versions of genes are only found in cancer cells. Every form of cancer, even every tumour, has its own distinct variants.

    “This charting may help tailor the treatment to each patient,” says Rolf Skotheim, who is affiliated with the Centre for Cancer Biomedicine and the research group for biomedical informatics at the University of Oslo, as well as the Department of Molecular Oncology at Oslo University Hospital.

    “Charting the versions of the genes that are only found in cancer cells may help tailor the treatment offered to each patient,” says Skotheim. Image courtesy Yngve Vogt.

    His research group is working to identify the genes that cause bowel and prostate cancer, which are both common diseases. There are 4,000 new cases of bowel cancer in Norway every year. Only six out of ten patients survive the first five years. Prostate cancer affects 5,000 Norwegians every year. Nine out of ten survive.

    Comparisons between healthy and diseased cells

    In order to identify the genes that lead to cancer, Skotheim and his research group are comparing genetic material in tumours with genetic material in healthy cells. In order to understand this process, a brief introduction to our genetic material is needed:

    Our genetic material consists of just over 20,000 genes. Each gene consists of thousands of base pairs, represented by a specific sequence of the four building blocks, adenine, thymine, guanine, and cytosine, popularly abbreviated to A, T, G, and C. The sequence of these building blocks is the very recipe for the gene. Our whole DNA consists of some six billion base pairs.

    The DNA strand carries the molecular instructions for activity in the cells. In other words, DNA contains the recipe for proteins, which perform the tasks in the cells. DNA, nevertheless, does not actually produce proteins. First, a copy of DNA is made: this transcript is called RNA and it is this molecule that is read when proteins are produced.

    RNA corresponds to only a small part of the DNA, namely its active constituents. Most of the DNA is inactive: only 1–2% of the DNA strand is active.

    In cancer cells, something goes wrong with the RNA transcription. There is either too much RNA, which means that far too many proteins of a specific type are formed, or the composition of base pairs in the RNA is wrong. The latter is precisely the area being studied by the University of Oslo researchers.

    Wrong combinations

    All genes can be divided into active and inactive parts. A single gene may consist of tens of active stretches of nucleotides (exons). “RNA is a copy of a specific combination of the exons from a specific gene in DNA,” explains Skotheim. There are many possible combinations, and it is precisely this search for all of the possible combinations that is new in cancer research.

    Different cells can combine the exons of a single gene in different ways. A cancer cell can create a combination that should not exist in healthy cells. And as if that didn’t make things complicated enough, sometimes RNA can be made up of stretches of nucleotides from different genes in DNA. These special, complex genes are called fusion genes.
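
    To make the combinatorics concrete, here is a small, hypothetical Python sketch (not the researchers' actual pipeline): it enumerates the splice variants that can be built from a gene's exons and the additional fusion transcripts that arise when exon runs from two genes are joined. The gene names and exon sequences are invented for illustration.

```python
# Toy illustration (not the researchers' actual pipeline): enumerating which
# exon combinations a transcript could represent, including fusions of two genes.
from itertools import combinations

# Hypothetical exon sequences for two genes (real genes have many more, longer exons).
GENE_A = {"A1": "ATGGCT", "A2": "GGTACC", "A3": "TTGACA"}
GENE_B = {"B1": "CCATGG", "B2": "AAGCTT"}

def splice_variants(gene):
    """All non-empty ordered subsets of a gene's exons, keeping genomic order."""
    names = list(gene)  # dict insertion order stands in for genomic order
    variants = []
    for k in range(1, len(names) + 1):
        for combo in combinations(names, k):
            variants.append(("+".join(combo), "".join(gene[n] for n in combo)))
    return variants

def fusion_transcripts(gene_a, gene_b):
    """Join any exon run of gene A to any exon run of gene B (A part first)."""
    return [(f"{na}|{nb}", sa + sb)
            for na, sa in splice_variants(gene_a)
            for nb, sb in splice_variants(gene_b)]

if __name__ == "__main__":
    for name, seq in splice_variants(GENE_A):
        print("variant", name, seq)
    print(len(fusion_transcripts(GENE_A, GENE_B)), "possible A-B fusion transcripts")
```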

    “We need powerful computers to crunch the enormous amounts of raw data,” says Skotheim. “Even if you spent your whole life on this task, you would not be able to find the location of a single nucleotide.”

    In other words, researchers must look for errors both inside genes and between the different genes. “Fusion genes are usually found in cancer cells, but some of them are also found in healthy cells,” says Skotheim. In patients with prostate cancer, researchers have found some fusion genes that are only created in diseased cells. These fusion genes may then be used as a starting-point in the detection of and fight against cancer.

    The researchers have also found fusion genes in bowel cells, but they were not cancer-specific. “For some reason, these fusion genes can also be found in healthy cells,” adds Skotheim. “This discovery was a let-down.”
    Improving treatment

    There are different RNA errors in the various cancer diseases. The researchers must therefore analyze the RNA errors of each disease.

    Among other things, the researchers are comparing RNA in diseased and healthy tissue from 550 patients with prostate cancer. The patients that make up the study do not receive any direct benefits from the results themselves. However, the research is important in order to be able to help future patients.

    “We want to find the typical defects associated with prostate cancer,” says Skotheim. “This will make it easier to understand what goes wrong with healthy cells, and to understand the mechanisms that develop cancer. Once we have found the cancer-specific molecules, they can be used as biomarkers.” In some cases, the biomarkers can be used to find cancer, determine the level of severity of the cancer and the risk of spreading, and whether the patient should be given a more aggressive treatment.

    Even though the researchers find deviations in the RNA, there is no guarantee that there is appropriate, targeted medicine available. “The point of our research is to figure out more of the big picture,” says Skotheim. “If we identify a fusion gene that is only found in cancer cells, the discovery will be so important in itself that other research groups around the world will want to begin working on this straight away. If a cure is found that counteracts the fusion genes, this may have enormous consequences for the cancer treatment.”

    Laborious work

    Recreating RNA is laborious work. The set of RNA molecules consists of about 100 million bases, divided into a few thousand bases from each gene.

    The laboratory machine reads millions of short sequence fragments, each only about 100 base pairs long. In order for the researchers to be able to place them in the right location, they must run large statistical analyses. The RNA analysis of a single patient can take a few days.

    All of the nucleotides must be matched with the DNA strand. Unfortunately the researchers do not have the DNA strands of each patient. In order to learn where the base pairs come from in the DNA strand, they must therefore use the reference genome of the human species. “This is not ideal, because there are individual differences,” explains Skotheim. The future potentially lies in fully sequencing the DNA of each patient when conducting medical experiments.
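
    The placement step described above is, at heart, a matching problem: each short fragment has to be located on a reference sequence. The toy Python sketch below shows the basic idea using exact k-mer seeding; real RNA-seq aligners use far more sophisticated indexing, tolerate mismatches and sequencing errors, and handle splicing. Everything in the sketch, including the reference string and reads, is illustrative.

```python
# Minimal sketch of the idea behind read mapping: place short sequenced fragments
# onto a reference sequence. Real aligners build compressed indexes, allow
# mismatches, and handle splicing; this exact-match toy only shows the concept.
from collections import defaultdict

def build_index(reference, k=4):
    """Index every k-mer of the reference by its starting positions."""
    index = defaultdict(list)
    for i in range(len(reference) - k + 1):
        index[reference[i:i + k]].append(i)
    return index

def map_read(read, reference, index, k=4):
    """Return reference positions where the read matches exactly, seeded by its first k-mer."""
    hits = []
    for pos in index.get(read[:k], []):
        if reference[pos:pos + len(read)] == read:
            hits.append(pos)
    return hits

if __name__ == "__main__":
    reference = "ATGGCTGGTACCTTGACACCATGGAAGCTT"   # stand-in for a reference genome
    index = build_index(reference)
    for read in ["GGTACC", "TTGACA", "GGGGGG"]:
        print(read, "->", map_read(read, reference, index))
```
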
    Supercomputing

    There is no way this research could be carried out using pen and paper. “We need powerful computers to crunch the enormous amounts of raw data. Even if you spent your whole life on this task, you would not be able to find the location of a single nucleotide. This is a matter of millions of nucleotides that must be mapped correctly in the system of coordinates of the genetic material. Once we have managed to find the RNA versions that are only found in cancer cells, we will have made significant progress. However, the work to get that far requires advanced statistical analyses and supercomputing,” says Skotheim.

    The analyses are so demanding that the researchers must use the University of Oslo’s Abel supercomputer, which has a theoretical peak performance of over 250 teraFLOPS. “With the ability to run heavy analyses on such large amounts of data, we have an enormous advantage not available to other cancer researchers,” explains Skotheim. “Many medical researchers would definitely benefit from this possibility. This is why they should spend more time with biostatisticians and informaticians. RNA samples are taken from the patients only once. The types of analyses that can be run are only limited by the imagination.”

    “We need to be smart in order to analyze the raw data.” He continues: “There are enormous amounts of data here that can be interpreted in many different ways. We just got started. There is lots of useful information that we have not seen yet. Asking the right questions is the key. Most cancer researchers are not used to working with enormous amounts of data, and how to best analyze vast data sets. Once researchers have found a possible answer, they must determine whether the answer is chance or if it is a real finding. The solution is to find out whether they get the same answers from independent data sets from other parts of the world.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    iSGTW is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, iSGTW is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read iSGTW via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 4:07 am on April 2, 2015
    Tags: isgtw

    From isgtw: “Supporting research with grid computing and more” 


    international science grid this week

    April 1, 2015
    Andrew Purcell

    “In order for researchers to be able to collaborate and share data with one another efficiently, the underlying IT infrastructures need to be in place,” says Gomes. “With the amount of data produced by research collaborations growing rapidly, this support is of paramount importance.”

    Jorge Gomes is the principal investigator of the computing group at the Portuguese Laboratory of Instrumentation and Experimental Particle Physics (LIP) in Lisbon and a member of the European Grid Infrastructure (EGI) executive board. As the technical coordinator of the Portuguese national grid infrastructure (INCD), he is also responsible for Portugal’s contribution to the Worldwide LHC Computing Grid (WLCG).

    iSGTW speaks to Gomes about the importance of supporting researchers through a variety of IT infrastructures ahead of the EGI Conference in Lisbon from 18 to 22 May 2015.

    What’s the main focus of your work at LIP?

    I’ve been doing research in the field of grid computing since 2001. LIP participates in both the ATLAS and CMS experiments on the Large Hadron Collider (LHC) at CERN, which is why we’ve been working on research and development projects for the grid computing infrastructure that supports these experiments.

    [Images: the ATLAS and CMS detectors, the LHC ring and tunnel, and the CERN Control Centre]

    Here in Portugal, we now have a national ‘road map’ for research infrastructures, which includes IT infrastructures. Our work in the context of the Portuguese national grid infrastructure now involves supporting a wide range of research communities, not just high-energy physics. Today, we support research in fields such as astrophysics, life sciences, chemistry, civil engineering, and environmental modeling, among others. For us, it’s very important to support as wide a range of communities as possible.

    So, when you talk about supporting researchers by providing ‘IT infrastructures’, it’s about much more than grid computing, right?

    Yes, today we’re engaged in cloud computing, high-performance computing, and a wide range of data-related services. This larger portfolio of services has evolved to match the needs of the Portuguese research community.

    Cloud computing metaphor: For a user, the network elements representing the provider-rendered services are invisible, as if obscured by a cloud.

    Why is it important to provide IT infrastructures to support research?

    Research is no longer done by isolated individuals; instead, it is increasingly common for it to be carried out by large collaborations, often on an international or even an intercontinental basis. So, in order for researchers to be able to collaborate and share data with one another efficiently, the underlying IT infrastructures need to be in place. With the amount of data produced by research collaborations growing rapidly, this support is of paramount importance.

    Here in Portugal, we have a lot of communities that don’t yet have access to these services, but they really do need them. Researchers don’t want to have to set up their own IT infrastructures, they want to concentrate on doing research in their own specialist field. This is why it’s important for IT specialists to provide them with these underlying services.

    Also, particularly in relatively small countries like Portugal, it’s important that resources scattered across universities and other research institutions can be integrated, in order to extract the maximum possible value.

    When it comes to encouraging researchers to make use of the IT infrastructures you provide, what are the main challenges you face?

    Trust, in particular, is a very important aspect. For researchers to build scientific software on top of IT infrastructures, they need to have confidence that the infrastructures will still be there several years down the line. This is also connected to challenges like ‘vendor lock in’ and standards in relation to cloud computing infrastructure. We need to have common solutions so that if a particular IT infrastructure provider — either public or private — fails, users can move to other available resources.

    Another challenge is related to the structure of some research communities. The large, complex experimental apparatuses involved in high-energy physics mean that these research communities are very structured and there is often a high degree of collaboration between research groups. In other domains, however, where it is common to have much smaller research groups, this is often not the case, which means it can be much more difficult to develop standard IT solutions and to achieve agreement on a framework for sharing IT resources.

    Why do you believe it is important to provide grid computing infrastructure at a European scale, through EGI, rather than just at a national scale?

    More and more research groups are working internationally, so it’s no longer enough to provide IT infrastructures at a national level. That’s why we also collaborate with our colleagues in Spain to provide IberGrid.

    EGI is of great strategic importance to research in Europe. We’re now exploring a range of exciting opportunities through the European Strategy Forum on Research Infrastructures (ESFRI) to support large flagship European research projects.

    The theme for the upcoming EGI conference is ‘engaging the research community towards an open science commons’. What’s the role of EGI in helping to establish this commons?

    In Europe we still have a fragmented ecosystem of services provided by many entities with interoperability issues. A better level of integration and sharing is needed to take advantage of the growing amounts of scientific data available. EGI proposes an integrated vision that encompasses data, instruments, ICT services, and knowledge to reduce the barriers to scientific collaboration and result sharing.

    EGI is in a strategic position to integrate services at the European level and to enable access to open data, thus promoting knowledge sharing. By gathering key players, next month’s conference will be an excellent opportunity to further develop this vision.

    Finally, what are you most looking forward to about the conference?

    The conference is a great opportunity for users, developers, and resource providers to meet and exchange experiences and ideas at all levels. It’s also an excellent opportunity for researchers to discuss their requirements and to shape the development of future IT infrastructures. I look forward to seeing a diverse range of people at the event!

    See the full article here.


     
  • richardmitnick 11:43 am on January 24, 2015
    Tags: isgtw

    From isgtw: “Unlocking the secrets of vertebrate evolution” 


    international science grid this week

    January 21, 2015
    Lance Farrell

    Conventional wisdom holds that snakes evolved a particular form and skeleton by losing regions in their spinal column over time. These losses were previously explained by a disruption in Hox genes responsible for patterning regions of the vertebrae.

    Paleobiologists P. David Polly, professor of geological sciences at Indiana University, US, and Jason Head, assistant professor of earth and atmospheric sciences at the University of Nebraska-Lincoln, US, overturned that assumption. Recently published in Nature, their research instead reveals that snake skeletons are just as regionalized as those of limbed vertebrates.

    Using Quarry [taken out of service on 30 January 2015 and replaced by Karst], a supercomputer at Indiana University, Polly and Head arrived at a compelling new explanation for why snake skeletons are so different: vertebrates like mammals, birds, and crocodiles evolved additional skeletal regions independently from ancestors like snakes and lizards.


    “Our study finds that snakes did not require extensive modification to their regulatory gene systems to evolve their elongate bodies,” Head notes.

    Despite having no limbs and more vertebrae, snake skeletons are just as regionalized as lizards’ skeletons.

    P. David Polly. Photo courtesy Indiana University.

    Polly and Head had to overcome challenges in collection and analysis to arrive at this insight. “If you are sequencing a genome all you really need is a little scrap of tissue, and that’s relatively easy to get,” Polly says. “But if you want to do something like we have done, you not only need an entire skeleton, but also one for a whole lot of species.”

    To arrive at their conclusion, Head and Polly sampled 56 skeletons from collections worldwide. They began by photographing and digitizing the bones, then chose specific landmarks on each spinal segment. Using the digital coordinates of each vertebra, they then applied a technique called geometric morphometrics, a multivariate analysis that uses x and y coordinates to analyze an object’s shape.

    Armed with shape information, the scientists then fit a series of regressions and tracked each vertebra’s gradient over the entire spine. This led to a secondary challenge — with 36,000 landmarks applied to 3,000 digitized vertebrae, the regression analyses required to peer into the snake’s past called for a new analytical tool.

    “The computations required iteratively fitting four or more segmented regression models, each with 10 to 83 parameters, for every regional permutation of up to 230 vertebrae per skeleton. The amount of computational power required is well beyond any desktop system,” Head observes.
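
    As a rough illustration of what iteratively fitting segmented regression models can look like, the hedged Python sketch below fits piecewise linear models to a one-dimensional shape gradient along a toy "spine" and searches candidate breakpoints for the partition with the lowest residual error. The real analysis used full landmark shape data and vastly more models; all numbers here are synthetic.

```python
# Hedged sketch of a segmented-regression search: split a 1-D gradient measured
# along the spine into contiguous regions, fit a separate linear model per region,
# and keep the partition with the lowest total squared error. Illustrative only.
import numpy as np
from itertools import combinations

def fit_segment(x, y):
    """Least-squares line fit; returns the residual sum of squares."""
    A = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sum((y - A @ coef) ** 2))

def best_partition(x, y, n_regions):
    """Exhaustively search breakpoints giving n_regions contiguous segments."""
    n = len(x)
    best = (np.inf, None)
    for cuts in combinations(range(2, n - 1), n_regions - 1):
        bounds = [0, *cuts, n]
        rss = sum(fit_segment(x[a:b], y[a:b]) for a, b in zip(bounds, bounds[1:]))
        if rss < best[0]:
            best = (rss, bounds)
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.arange(40, dtype=float)          # vertebra index along a toy spine
    true = np.piecewise(x, [x < 12, (x >= 12) & (x < 28), x >= 28],
                        [lambda t: 0.5 * t, lambda t: 6 + 0.1 * t, lambda t: -20 + 1.0 * t])
    y = true + rng.normal(0, 0.3, x.size)   # noisy shape "gradient"
    rss, bounds = best_partition(x, y, n_regions=3)
    print("estimated region boundaries:", bounds, "RSS:", round(rss, 2))
```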

    Researchers like Polly and Head increasingly find quantitative analyses of data sets this size require the computational resources to match. With 7.2 million different models making up the data for their study, nothing less than a supercomputer would do.

    Jason Head with ball python. Photo courtesy Craig Chandler, University of Nebraska-Lincoln.

    “Our supercomputing environments serve a broad base of users and purposes,” says David Hancock, manager of IU’s high performance systems. “We often support the research done in the hard sciences and math such as Polly’s, but we also see analytics done for business faculty, marketing and modeling for interior design projects, and lighting simulations for theater productions.”

    Analyses of the scale Polly and Head needed would have been unapproachable even a decade ago, and without US National Science Foundation support remain beyond the reach of most institutions. “A lot of the big jobs ran on Quarry,” says Polly. “To run one of these exhaustive models on a single snake took about three and a half days. Ten years ago we could barely have scratched the surface.”

    As high-performance computing resources reshape the future, scientists like Polly and Head have greater abilities to look into the past and unlock the secrets of evolution.

    See the full article here.


     
  • richardmitnick 5:00 pm on January 21, 2015
    Tags: isgtw, Simulation Astronomy

    From isgtw: “Exploring the universe with supercomputing” 


    international science grid this week

    January 21, 2015
    Andrew Purcell

    The Center for Computational Astrophysics (CfCA) in Japan recently upgraded its ATERUI supercomputer, doubling the machine’s theoretical peak performance to 1.058 petaFLOPS. Eiichiro Kokubo, director of the center, tells iSGTW how supercomputers are changing the way research is conducted in astronomy.

    What’s your research background?

    I investigate the origin of planetary systems. I use many-body simulations to study how planets form and I also previously worked on the development of the Gravity Pipe, or ‘GRAPE’ supercomputer.

    Why is it important to use supercomputers in this work?

    In the standard scenario of planet formation, small solid bodies — known as ‘planetesimals’ — interact with one another and this causes their orbits around the sun to evolve. Collisions between these building blocks lead to the formation of rocky planets like the Earth. To understand this process, you really need to do very-large-scale many-body simulations. This is where high-performance computing comes in: supercomputers act as telescopes for phenomena we wouldn’t otherwise be able to see.

    The scales of mass, energy, and time are generally huge in astronomy. However, as supercomputers have become ever more powerful, we’ve become able to program the relevant physical processes — motion, fluid dynamics, radiative transfer, etc. — and do meaningful simulation of astronomical phenomena. We can even conduct experiments by changing parameters within our simulations. Simulation is numerical exploration of the universe!
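
    A minimal many-body sketch helps make the idea concrete. The Python code below integrates a handful of mutually gravitating toy "planetesimals" with a leapfrog scheme and direct O(N²) force summation. Particle counts, units, and the softening length are arbitrary illustrative choices; production planet-formation codes handle millions of bodies, collisions, and parallel or special-purpose hardware.

```python
# Minimal direct-summation many-body sketch (leapfrog integrator). Everything is
# in toy units and at toy scale; this is a conceptual illustration, not a
# production planet-formation code.
import numpy as np

G = 1.0           # gravitational constant in toy units
SOFTENING = 1e-3  # avoids singular forces at zero separation

def accelerations(pos, mass):
    """Pairwise gravitational accelerations by direct summation, O(N^2)."""
    diff = pos[None, :, :] - pos[:, None, :]                  # r_j - r_i
    dist3 = (np.sum(diff ** 2, axis=-1) + SOFTENING ** 2) ** 1.5
    np.fill_diagonal(dist3, np.inf)                           # no self-force
    return G * np.sum(mass[None, :, None] * diff / dist3[:, :, None], axis=1)

def leapfrog(pos, vel, mass, dt, steps):
    """Kick-drift-kick time stepping."""
    acc = accelerations(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc
    return pos, vel

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 200                                  # toy "planetesimal" count
    pos = rng.normal(0, 1, (n, 3))
    vel = rng.normal(0, 0.1, (n, 3))
    mass = np.full(n, 1.0 / n)
    pos, vel = leapfrog(pos, vel, mass, dt=1e-3, steps=100)
    print("final centre of mass:", np.round(pos.mean(axis=0), 4))
```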

    How has supercomputing changed the way research is carried out?

    ‘Simulation astronomy’ has now become a third major methodological approach within the field, alongside observational and theoretical astronomy. Telescopes rely on electromagnetic radiation, but there are still many things that we cannot see even with today’s largest telescopes. Supercomputers enable us to use complex physical calculations to visualize phenomena that would otherwise remain hidden to us. Their use also gives us the flexibility to simulate phenomena across a vast range of spatial and temporal scales.

    Simulation can be used to simply test hypotheses, but it can also be used to explore new worlds that are beyond our current imagination. Sometimes you get results from a simulation that you really didn’t expect — this is often the first step on the road to making new discoveries and developing new astronomical theories.

    ATERUI has made the leap to become a petaFLOPS-scale supercomputer. Image courtesy NAOJ/Makoto Shizugami (VERA/CfCA, NAOJ).

    In astronomy, there are three main kinds of large-scale simulation: many-body, fluid dynamics, and radiative transfer. These problems can all be parallelized effectively, meaning that massively parallel computers — like the Cray XC30 system we’ve installed — are ideally suited to performing these kinds of simulations.

    “Supercomputers act as telescopes for phenomena we wouldn’t otherwise be able to see,” says Kokubo.

    What research problems will ATERUI enable you to tackle?

    There are over 100 users in our community and they are tackling a wide variety of problems. One project, for example, is looking at supernovae: having very high-resolution 3D simulations of these explosions is vital to improving our understanding. Another project is looking at the distribution of galaxies throughout the universe, and there is a whole range of other things being studied using ATERUI too.

    Since installing ATERUI, it’s been used at over 90% of its capacity, in terms of the number of CPUs running at any given time. Basically, it’s almost full every single day!

    Don’t forget, we also have the K computer here in Japan. The National Astronomical Observatory of Japan, of which the CfCA is part, is actually one of the consortium members of the K supercomputer project. As such, we also have plenty of researchers using that machine, as well. High-end supercomputers like K are absolutely great, but it is also important to have middle-class supercomputers dedicated to specific research fields available.

    See the full article here.


     
  • richardmitnick 6:51 pm on January 20, 2015
    Tags: isgtw

    From isgtw and Sandia Lab: “8 Mind-Blowing Scientific Research Machines” 

    ISGTW

    Sandia Lab

    Scientific innovation and discovery are defining characteristics of humanity’s innate curiosity. Mankind has developed advanced scientific research machines to help us better understand the universe. They constitute some of the greatest human endeavors for the sake of technological and scientific progress. These projects also connect people of many nations and cultures, and inspire future generations of engineers and scientists.

    Apart from the last two experiments that are under construction, the images in this article are not fake or altered; they are real and showcase machines on the frontier of scientific innovation and discovery. Read on to learn more about the machines, what the images show, and how NI technology helps make them possible.

    Borexino, a solar neutrino experiment, recently confirmed the energy output of the sun has not changed in 100,000 years. Its large underground spherical detector contains 2,000 soccer-ball-sized photomultiplier tubes.

    Borexino and DarkSide

    Gran Sasso National Laboratory, Assergi, Italy

    PMTs are contained inside the Liquid Scintillator Veto spherical tank, a component of the DarkSide Experiment used to actively suppress background events from radiogenic and cosmogenic neutrons.

    Borexino and DarkSide are located 1.4 km (0.87 miles) below the earth’s surface in the world’s largest underground laboratory for experiments in particle astrophysics. Only a tiny fraction of the contents of the universe is visible matter; the rest is thought to be composed of dark matter and dark energy. A leading hypothesis for dark matter is that it comprises Weakly Interacting Massive Particles (WIMPs). The DarkSide experiment attempts to detect these particles to better understand the nature of dark matter and its interactions.

    These experiments use NI oscilloscopes to acquire electrical signals resulting from scintillation light captured by the photomultiplier tubes (PMTs). In DarkSide, 200 high-speed, high-resolution channels need to be tightly synchronized to make time-of-flight measurements of photons. Watch the NIWeek 2013 keynote or view a technical presentation for more information.
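
    As a purely illustrative aside (this is not DarkSide or NI code), the sketch below shows why tight channel synchronization matters for time-of-flight work: with a shared clock, per-channel photon arrival times can be compared directly, and even nanosecond-scale offsets correspond to tens of centimetres of apparent path difference. The channel names and timestamps are invented.

```python
# Toy illustration of comparing synchronised PMT timestamps for one event.
# Not detector software; the numbers below are hypothetical.
C_MM_PER_NS = 299.79  # speed of light in mm/ns (vacuum value, for illustration)

def relative_tof(timestamps_ns):
    """Arrival times of one scintillation event relative to the earliest channel."""
    t0 = min(timestamps_ns.values())
    return {ch: t - t0 for ch, t in timestamps_ns.items()}

if __name__ == "__main__":
    # Hypothetical synchronised timestamps (ns) from four PMT channels for one event.
    event = {"ch012": 104.6, "ch057": 103.9, "ch101": 105.3, "ch144": 104.1}
    for ch, dt in sorted(relative_tof(event).items(), key=lambda kv: kv[1]):
        print(f"{ch}: +{dt:.2f} ns  (~{dt * C_MM_PER_NS:.0f} mm of extra photon path)")
```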

    Joint European Torus (JET)

    Culham Centre for Fusion Energy (CCFE), Oxfordshire, United Kingdom

    Plasma is contained and heated in a torus within the interior of the JET tokamak.

    Currently the largest experimental tokamak fusion reactor in the world, JET uses magnetic confinement to contain plasma at around 100 million degrees Celsius, nearly seven times the temperature of the sun’s core (15 million degrees Celsius). Nuclear fusion is the process that powers the sun. Harnessing this type of energy can help solve the world’s growing energy demand. This facility is crucial to the research and development for future larger fusion reactors.

    Large Hadron Collider (LHC)
    CERN, Geneva, Switzerland

    The A Toroidal LHC ApparatuS (ATLAS) is LHC’s largest particle detector involved in the recent discovery of the Higgs boson.

    The LHC is the largest and most powerful particle accelerator in the world, located in a 27 km (16.78 mile) ring tunnel underneath Switzerland and France. The experiment recently discovered the Higgs boson, deemed the “God Particle” that gives everything its mass. CERN is set to reopen the upgraded LHC in early 2015 at much higher energies to help physicists probe deeper into the nature of the universe and address the questions of supersymmetry and dark matter.

    National Ignition Facility (NIF)
    Lawrence Livermore National Laboratory (LLNL), California, USA

    The image looks up into NIF’s 10 m (33 ft) diameter spherical target chamber with the target held on the protruding pencil-shaped arm.

    NIF is the largest inertial confinement fusion device in the world. The experiment converges the beams of 192 high-energy lasers on a single fuel-filled target, producing a 500 TW flash of light to trigger nuclear fusion. The aim of this experiment is to produce a condition known as ignition, in which the fusion reaction becomes self-sustaining. The machine was also used as the set for the warp drive in the latest Star Trek movie.

    Z Machine
    Sandia National Laboratories, Albuquerque, New Mexico, USA

    The Z Machine creates residual lightning as it releases 350 TW of stored energy.

    The world’s largest X-ray generator is used for various high-pulsed power experiments requiring extreme temperatures and pressures. This includes inertial confinement fusion research. The extremely high voltages are achieved by rapidly discharging huge capacitors in a large insulated bath of oil and water onto a central target.

    European Extremely Large Telescope (E-ELT)

    European Southern Observatory (ESO), Cerro Armazones, Chile

    This artist’s rendition of the E-ELT shows it at its high-altitude Atacama Desert site.

    The E-ELT is the largest optical/near-infrared ground-based telescope being built by ESO in northern Chile. It will allow astronomers to probe deep into space and investigate many unanswered questions about the universe. Images from E-ELT will be 16 times sharper than those from the Hubble Space Telescope, allowing astronomers to study the creation and atmospheres of extrasolar planets. The primary M1 mirror (shown in the image) is nearly 40 m (131 ft) in diameter, consisting of about 800 hexagonal segments.


    International Thermonuclear Experimental Reactor (ITER)
    ITER Organization, Cadarache, France

    This cutaway computer model shows ITER with plasma at its core. A technician is shown to demonstrate the machine’s size.

    ITER is an international effort to build the largest experimental fusion tokamak in the world, a critical step toward future fusion power plants. The European Union, India, Japan, China, Russia, South Korea, and the United States are collaborating on the project, which is currently under construction in southern France.

     
  • richardmitnick 5:55 pm on December 10, 2014
    Tags: isgtw

    From isgtw: “Supercomputer compares modern and ancient DNA” 


    international science grid this week

    December 10, 2014
    Jorge Salazar, Texas Advanced Computing Center

    What if you researched your family’s genealogy, and a mysterious stranger turned out to be an ancestor? A team of scientists who peered back into Europe’s murky prehistoric past thousands of years ago had the same surprise. With sophisticated genetic tools, supercomputing simulations, and modeling, they traced the origins of modern Europeans to three distinct populations. The international research team’s results are published in the journal Nature.

    The Stuttgart skull, from a 7,000-year-old skeleton found in Germany among artifacts from the first widespread farming culture of central Europe. Right: Blue eyes and dark skin – how the European hunter-gatherer appeared 7,000 years ago. Artist depiction based on La Braña 1, whose remains were recovered at La Braña-Arintero site in León, Spain. Images courtesy Consejo Superior de Investigaciones Cientificas.

    “Europeans seem to be a mixture of three different ancestral populations,” says study co-author Joshua Schraiber, a National Science Foundation postdoctoral fellow at the University of Washington, in Seattle, US. Schraiber says the results surprised him because the prevailing view among scientists held that only two distinct groups mixed between 7,000 and 8,000 years ago in Europe, as humans first started to adopt agriculture.

    Scientists have only a handful of ancient remains well preserved enough for genome sequencing. An 8,000-year-old skull discovered in Loschbour, Luxembourg provided DNA evidence for the study. The remains were found at the caves of Loschbour, La Braña, Stuttgart, a ritual site at Motala, and at Mal’ta.

    The third mystery group that emerged from the data is ancient northern Eurasians. “People from the Siberia area is how I conceptualize it,” says Schraiber. “We don’t know too much anthropologically about who these people are. But the genetic evidence is relatively strong because we do have ancient DNA from an individual that’s very closely related to that population, too.”

    The individual is a three-year-old boy whose remains were found near Lake Baikal in Siberia at the Mal’ta site. Scientists determined his arm bone to be 24,000 years old. They then sequenced his genome, making it the second-oldest modern human genome sequenced. Interestingly enough, in late 2013 scientists used the Mal’ta genome to find that about one-third of Native American ancestry originated through gene flow from these ancient North Eurasians.

    The researchers took the genomes from these ancient humans and compared them to those from 2,345 modern-day Europeans. “I used the POPRES data set, which had been used before to ask similar questions just looking at modern Europeans,” Schraiber says. “Then I used software called Beagle, which was written by Brian Browning and Sharon Browning at the University of Washington, which computationally detects these regions of identity by descent.”
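
    For readers unfamiliar with identity-by-descent analysis, the toy Python sketch below conveys the basic idea: scan a pair of haplotypes for long runs of matching alleles. It is not how Beagle works internally; real IBD detection is statistical and accounts for genotyping error and allele frequencies, and the example haplotypes here are invented.

```python
# Very rough illustration of identity-by-descent detection: find long runs where
# two haplotypes carry the same alleles. Real tools (e.g. Beagle) are statistical;
# this toy only reports exact shared runs above a length threshold.
def shared_segments(hap1, hap2, min_len=8):
    """Return (start, end) index pairs where the haplotypes match for >= min_len markers."""
    segments, start = [], None
    for i, (a, b) in enumerate(zip(hap1, hap2)):
        if a == b:
            start = i if start is None else start
        else:
            if start is not None and i - start >= min_len:
                segments.append((start, i))
            start = None
    if start is not None and len(hap1) - start >= min_len:
        segments.append((start, len(hap1)))
    return segments

if __name__ == "__main__":
    # Hypothetical 0/1 allele strings for an "ancient" and a "modern" haplotype.
    ancient = "0110100110101101011010011010"
    modern  = "0110100110101001011010011010"
    print(shared_segments(ancient, modern, min_len=8))
```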

    The National Science Foundation’s XSEDE (Extreme Science and Engineering Discovery Environment) and Stampede supercomputer at the Texas Advanced Computing Center provided computational resources used in the study. The research was funded in part by the National Cancer Institute of the National Institutes of Health.

    See the full article here.


     
  • richardmitnick 10:28 pm on December 3, 2014
    Tags: isgtw

    From isgtw: “Volunteer computing: 10 years of supporting CERN through LHC@home” 


    international science grid this week

    December 3, 2014
    Andrew Purcell

    LHC@home recently celebrated a decade since its launch in 2004. Through its SixTrack project, the LHC@home platform harnesses the power of volunteer computing to model the progress of sub-atomic particles traveling at nearly the speed of light around the Large Hadron Collider (LHC) at CERN, near Geneva, Switzerland. It typically simulates about 60 particles whizzing around the collider’s 27km-long ring for ten seconds, or up to one million loops. Results from SixTrack were used to help the engineers and physicists at CERN design stable beam conditions for the LHC, so today the beams stay on track and don’t cause damage by flying off course into the walls of the vacuum tube. It’s now also being used to carry out simulations relevant to the design of the next phase of the LHC, known as the High-Luminosity LHC.
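
    To give a flavour of this kind of long-term tracking (without pretending to be SixTrack), the Python sketch below pushes single particles through a simplified one-turn map, a linear betatron rotation plus a small nonlinear kick, for many turns and flags those whose amplitude exceeds a notional aperture. The tune, kick strength, and aperture are made-up illustrative values.

```python
# Toy single-particle tracking sketch in the spirit of dynamic-aperture studies.
# This is not SixTrack; all parameters are arbitrary illustrative choices.
import numpy as np

TUNE = 0.31       # fractional betatron tune per turn (arbitrary)
KICK = 0.8        # strength of a sextupole-like nonlinearity (arbitrary)
APERTURE = 1.0    # loss threshold on |x| (arbitrary units)

def track(x, px, turns):
    """Return the number of turns survived before |x| exceeds the aperture."""
    c, s = np.cos(2 * np.pi * TUNE), np.sin(2 * np.pi * TUNE)
    for turn in range(turns):
        x, px = c * x + s * px, -s * x + c * px   # linear rotation in phase space
        px += KICK * x * x                        # simple nonlinear kick
        if abs(x) > APERTURE:
            return turn
    return turns

if __name__ == "__main__":
    turns = 100_000
    for amp in (0.05, 0.2, 0.4, 0.6):
        survived = track(amp, 0.0, turns)
        status = "stable" if survived == turns else f"lost after {survived} turns"
        print(f"initial amplitude {amp:.2f}: {status}")
```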


    “The results of SixTrack played an essential role in the design of the LHC, and the high-luminosity upgrades will naturally require additional development work on SixTrack,” explains Frank Schmidt, who works in CERN’s Accelerators and Beam Physics Group of the Beams Department and is the main author of the SixTrack code. “In addition to its use in the design stage, SixTrack is also a key tool for the interpretation of data taken during the first run of the LHC,” adds Massimo Giovannozzi, who also works in CERN’s Accelerators and Beams Physics Group. “We use it to improve our understanding of particle dynamics, which will help us to push the LHC performance even further over the coming years of operation.” He continues: “Managing a project like SixTrack within LHC@home requires resources and competencies that are not easy to find: Igor Zacharov, a senior scientist at the Particle Accelerator Physics Laboratory (LPAP) of the Swiss Federal Institute of Technology in Lausanne (EPFL), provides valuable support for SixTrack by helping with BOINC integration.”

    Volunteer computing is a type of distributed computing through which members of the public donate computing resources (usually processing power) to aid research projects. Image courtesy Eduardo Diez Viñuela, Flickr (CC BY-SA 2.0).

    Before LHC@home was created, SixTrack was run only on desktop computers at CERN, using a platform called the Compact Physics Screen Saver (CPSS). This proved to be a useful tool for a proof of concept, but it was first with the launch of the LHC@home platform in 2004 that things really took off. “I am surprised and delighted by the support from our volunteers,” says Eric McIntosh, who formerly worked in CERN’s IT Department and is now an honorary member of the Beams Department. “We now have over 100,000 users all over the world and many more hosts. Every contribution is welcome, however small, as our strength lies in numbers.”

    Virtualization to the rescue

    Building on the success of SixTrack, the Virtual LHC@home project (formerly known as Test4Theory) was launched in 2011. It enables users to run simulations of high-energy particle physics using their home computers, with the results submitted to a database used as a common resource by both experimental and theoretical scientists working on the LHC.

    Whereas the code for SixTrack was ported for running on Windows, OS X, and Linux, the high-energy-physics code used by each of the LHC experiments is far too large to port in a similar way. It is also being constantly updated. “The experiments at CERN have their own libraries and they all run on Linux, while the majority of people out there have common-or-garden variety Windows machines,” explains CERN honorary staff member of the IT department and chief technology officer of the Citizen Cyberscience Centre Ben Segal. “Virtualization is the way to solve this problem.”

    The birth of the LHC@home platform

    In 2004, Ben Segal and François Grey, who were both members of CERN’s IT department at the time, were asked to plan an outreach event for CERN’s 50th anniversary that would help people around the world to get an impression of the computational challenges facing the LHC. “I had been an early volunteer for SETI@home after it was launched in 1999,” explains Grey. “Volunteer computing was often used as an illustration of what distributed computing means when discussing grid technology. It seemed to me that it ought to be feasible to do something similar for LHC computing and perhaps even combine volunteer computing and grid computing this way.”

    “I contacted David Anderson, the person behind SETI@Home, and it turned out the timing was good, as he was working on an open-source platform called BOINC to enable many projects to use the SETI@home approach,” Grey continues. BOINC (Berkeley Open Infrastructure for Network Computing) is an open-source software platform for computing with volunteered resources. It was first developed at the University of California, Berkeley in the US to manage the SETI@Home project, and uses the unused CPU and GPU cycles on a computer to support scientific research.

    “I vividly remember the day we phoned up David Anderson in Berkeley to see if we could make a SETI-like computing challenge for CERN,” adds Segal. “We needed a CERN application that ran on Windows, as over 90% of BOINC volunteers used that. The SixTrack people had ported their code to Windows and had already built a small CERN-only desktop grid to run it on, as they needed lots of CPU power. So we went with that.”

    A runaway success

    “I was worried that no one would find the LHC as interesting as SETI. Bear in mind that this was well before the whole LHC craziness started with the Angels and Demons movie, and news about possible mini black holes destroying the planet making headlines,” says Grey. “We made a soft launch, without any official announcements, in 2004. To our astonishment, the SETI@home community immediately jumped in, having heard about LHC@home by word of mouth. We had over 1,000 participants in 24 hours, and over 7,000 by the end of the week — our server’s maximum capacity.” He adds: “We’d planned to run the volunteer computing challenge for just three months, at the time of the 50th anniversary. But the accelerator physicists were hooked and insisted the project should go on.”

    Predrag Buncic, who is now coordinator of the offline group within the ALICE experiment, led work to create the CERN Virtual Machine in 2008. He, Artem Harutyunyan (former architect and lead developer of CernVM Co-Pilot), and Segal subsequently adopted this virtualization technology for use within Virtual LHC@home. This has made it significantly easier for the experiments at CERN to create their own volunteer computing applications, since it is no longer necessary for them to port their code. The long-term vision for Virtual LHC@home is to support volunteer-computing applications for each of the large LHC experiments.
    Growth of the platform

    The ATLAS experiment recently launched a project that simulates the creation and decay of supersymmetric bosons and fermions. “ATLAS@Home offers the chance for the wider public to participate in the massive computation required by the ATLAS experiment and to contribute to the greater understanding of our universe,” says David Cameron, a researcher at the University of Oslo in Norway. “ATLAS also gains a significant computing resource at a time when even more resources will be required for the analysis of data from the second run of the LHC.”


    Meanwhile, the LHCb experiment has been running a limited test prototype for over a year now, with an application running Beauty physics simulations set to be launched for the Virtual LHC@home project in the near future. The CMS and ALICE experiments also have plans to launch similar applications.


    An army of volunteers

    “LHC@home allows CERN to get additional computing resources for simulations that cannot easily be accommodated on regular batch or grid resources,” explains Nils Høimyr, the member of the CERN IT department responsible for running the platform. “Thanks to LHC@home, thousands of CPU years of accelerator beam dynamics simulations for LHC upgrade studies have been done with SixTrack, and billions of events have been simulated with Virtual LHC@home.” He continues: “Furthermore, the LHC@home platform has been an outreach channel, giving publicity to LHC and high-energy physics among the general public.”

    See the full article here.


     
  • richardmitnick 2:59 pm on October 22, 2014
    Tags: isgtw

    From isgtw: “Laying the groundwork for data-driven science” 


    international science grid this week

    October 22, 2014
    Amber Harmon

    The ability to collect and analyze massive amounts of data is rapidly transforming science, industry, and everyday life — but many of the benefits of big data have yet to surface. Interoperability, tools, and hardware are still evolving to meet the needs of diverse scientific communities.

    Image courtesy istockphoto.com.

    One of the US National Science Foundation’s (NSF’s) goals is to improve the nation’s capacity in data science by investing in the development of infrastructure, building multi-institutional partnerships to increase the number of data scientists, and augmenting the usefulness and ease of using data.

    As part of that effort, the NSF announced $31 million in new funding to support 17 innovative projects under the Data Infrastructure Building Blocks (DIBBs) program. Now in its second year, the 2014 DIBBs awards support research in 22 states and touch on research topics in computer science, information technology, and nearly every field of science supported by the NSF.

    “Developed through extensive community input and vetting, NSF has an ambitious vision and strategy for advancing scientific discovery through data,” says Irene Qualters, division director for Advanced Cyberinfrastructure. “This vision requires a collaborative national data infrastructure that is aligned to research priorities and that is efficient, highly interoperable, and anticipates emerging data policies.”

    Of the 17 awards, two support early implementations of research projects that are more mature; the others support pilot demonstrations. Each is a partnership between researchers in computer science and other science domains.

    One of the two early implementation grants will support a research team led by Geoffrey Fox, a professor of computer science and informatics at Indiana University, US. Fox’s team plans to create middleware and analytics libraries that enable large-scale data science on high-performance computing systems. Fox and his team plan to test their platform with several different applications, including geospatial information systems (GIS), biomedicine, epidemiology, and remote sensing.

    “Our innovative architecture integrates key features of open source cloud computing software with supercomputing technology,” Fox says. “And our outreach involves ‘data analytics as a service’ with training and curricula set up in a Massive Open Online Course or MOOC.” Among others, US institutions collaborating on the project include Arizona State University in Phoenix; Emory University in Atlanta, Georgia; and Rutgers University in New Brunswick, New Jersey.

    Ken Koedinger, professor of human computer interaction and psychology at Carnegie Mellon University in Pittsburgh, Pennsylvania, US, leads the other early implementation project. Koedinger’s team concentrates on developing infrastructure that will drive innovation in education.

    The team will develop a distributed data infrastructure, LearnSphere, that will make more educational data accessible to course developers, while also motivating more researchers and companies to share their data with the greater learning sciences community.

    “We’ve seen the power that data has to improve performance in many fields, from medicine to movie recommendations,” Koedinger says. “Educational data holds the same potential to guide the development of courses that enhance learning while also generating even more data to give us a deeper understanding of the learning process.”

    The DIBBs program is part of a coordinated strategy within NSF to advance data-driven cyberinfrastructure. It complements other major efforts like the DataOne project, the Research Data Alliance, and Wrangler, a groundbreaking data analysis and management system for the national open science community.

    See the full article here.

     
  • richardmitnick 6:57 pm on July 23, 2014
    Tags: isgtw

    From isgtw: “A case for computational mechanics in medicine” 

    international science grid this week

    July 23, 2014
    Monica Kortsha

    Members of the US National Committee on Theoretical and Applied Mechanics and collaborators, including Thomas Hughes, director of the computational mechanics group at the Institute for Computational Engineering and Sciences (ICES) at The University of Texas at Austin, US, and Shaolie Hossain, ICES research fellow and research scientist at the Texas Heart Institute, have published an article reviewing the new opportunities computational mechanics is creating in medicine.

    New treatments for tumor growth and heart disease are just two opportunities presenting themselves. The article is published in the Journal of the Royal Society Interface. “This journal truly serves as an interface between medicine and science,” Hossain says. “If physicians are looking for computational research advancements, the article is sure to grab their attention.”

    The article presents three research areas where computational medicine has already made important progress, and will likely continue to do so: nano- and microdevices; biomedical devices, including diagnostic systems and organ models; and cellular mechanics.

    “[Disease is a] multi-scale [phenomenon] and investigators research diverse aspects of it,” says Hossain, explaining that although disease may be perceived at the organ level, treatments usually function at the molecular and cellular scales.

    Hughes and Hossain’s research on vulnerable plaques (VPs), a category of atherosclerosis responsible for 70% of all lethal heart attacks, is an example of applied research incorporating all three notable areas.

    two
    Hughes and Hossain pictured next to a simulation of a vulnerable plaque within an artery. Current medical techniques cannot effectively detect vulnerable plaques. However, Hughes and Hossain say that nano-particles and computational modeling technologies offer diagnostic and treatment solutions. Image courtesy the Institute for Computational Engineering and Sciences at The University of Texas at Austin, US.

    “The detection and treatment of VPs represents an enormous unmet clinical need,” says Hughes. “Progress on this has the potential to save innumerable lives. Computational mechanics combined with high-performance computing provides new and unique technologies for investigating disease, unlike anything that has been traditionally used in medical research.”

    heart
    HeartFlow uses anatomic data from coronary artery CT scans to create a 3D model of the coronary arteries. Coronary blood flow and pressure are computed by applying the principles of coronary physiology and computational fluid dynamics. Fractional flow reserve (FFRCT) is calculated as the ratio of distal coronary pressure to proximal aortic pressure, under conditions simulating maximal coronary hyperemia. The image demonstrates a stenosis (narrowing) of the left anterior descending coronary artery with an FFRCT of 0.58 distal to the stenosis (in red). FFR values ≤0.80 are hemodynamically significant (meaning they obstruct blood flow) and indicate that the patient may benefit from coronary revascularization (removing or bypassing blockages). Image courtesy HeartFlow.
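
    The FFRCT quantity described in the caption reduces to a simple ratio. Below is a minimal sketch of that arithmetic in Python; the pressure values are hypothetical, chosen only to reproduce the 0.58 figure mentioned above, and HeartFlow's actual pipeline obtains these pressures by solving computational fluid dynamics on the patient-specific 3D model rather than taking them as inputs.

```python
# Toy FFR calculation: FFR is the ratio of mean distal coronary pressure (Pd)
# to mean proximal aortic pressure (Pa) under simulated maximal hyperemia.
# Values <= 0.80 are considered hemodynamically significant.
# The pressure values below are invented for illustration.

def fractional_flow_reserve(p_distal_mmhg: float, p_aortic_mmhg: float) -> float:
    """Return FFR = Pd / Pa."""
    if p_aortic_mmhg <= 0:
        raise ValueError("aortic pressure must be positive")
    return p_distal_mmhg / p_aortic_mmhg

def is_hemodynamically_significant(ffr: float, threshold: float = 0.80) -> bool:
    """Flag lesions that obstruct blood flow (FFR at or below the threshold)."""
    return ffr <= threshold

# Example roughly matching the stenosis in the image: FFRCT of about 0.58.
ffr = fractional_flow_reserve(p_distal_mmhg=52.0, p_aortic_mmhg=90.0)
print(f"FFR = {ffr:.2f}, significant: {is_hemodynamically_significant(ffr)}")
```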

    The high mortality rate attributed to VPs stems from their near clinical invisibility; conventional plaque detection techniques such as MRI and CT scanning do not register VPs because significant vascular narrowing is not present. Hughes and Hossain, however, have developed a computational toolset that can aid in making the plaques visible through targeted delivery of functionalized nanoparticles.

    Their computational models draw on patient-specific data to predict how well nanoparticles can adhere to a potential plaque, thus enabling researchers to test and refine site-specific treatments. If a VP is detected, the same techniques can be employed to send nanoparticles containing medicine directly to the VP.
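
    To make the idea of scoring adhesion concrete, the sketch below is a purely illustrative toy model, not the group's actual solver: it assumes, hypothetically, that the probability of a nanoparticle firmly adhering at a wall site grows with the local receptor density it can bind to and falls off with wall shear stress, and it ranks candidate sites accordingly. All parameter names and values are invented.

```python
import math

# Hypothetical toy model:
#   P(adhere) = (1 - exp(-k * receptor_density)) * exp(-wall_shear / shear_scale)
# More receptors at an inflamed site raise adhesion; higher wall shear stress
# tends to wash particles away before they can bind.

def adhesion_probability(receptor_density: float, wall_shear: float,
                         k: float = 0.5, shear_scale: float = 10.0) -> float:
    binding_term = 1.0 - math.exp(-k * receptor_density)   # saturates as density grows
    detachment_term = math.exp(-wall_shear / shear_scale)  # decays with shear
    return binding_term * detachment_term

# Rank a few candidate wall sites using made-up "patient-specific" inputs.
sites = {
    "site_A": {"receptor_density": 8.0, "wall_shear": 4.0},
    "site_B": {"receptor_density": 2.0, "wall_shear": 12.0},
    "site_C": {"receptor_density": 6.0, "wall_shear": 20.0},
}
for name, params in sorted(sites.items(),
                           key=lambda kv: adhesion_probability(**kv[1]),
                           reverse=True):
    print(name, round(adhesion_probability(**params), 3))
```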

    The models are being applied at the Texas Heart Institute, where Hossain is a research scientist and assistant professor. “Early intervention and prevention of heart attacks are where we certainly want to go and we are excited about the possibilities for computational mechanics being a vehicle to get us there safely and more rapidly,” says James Willerson, Texas Heart Institute president.

    Other computationally aided models are already being used to help physicians evaluate and treat patients. HeartFlow, a company founded by Charles Taylor, uses CT scan data to create patient-specific models of arteries, which can be used to diagnose coronary artery disease.

    Despite its success and demonstrated potential, computational mechanics in the medical field is still a new concept for scientists and physicians alike, says Hossain. “The potential that we have, in my opinion, hasn’t been tapped to the fullest because of the gap in knowledge.”

    To help integrate medicine into a field that has historically focused on more traditional engineering domains, the article advocates for incorporating biology and chemistry questions into computational mechanics classes, as well as offering classes that can benefit both medical and computational science students.

    See the full article here.



    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 6:02 pm on October 2, 2013 Permalink | Reply
    Tags: isgtw,   

    From isgtw: “Preparing for tomorrow’s big data” 

    Until recently, the large CERN experiments ATLAS and CMS owned and controlled the US computing infrastructure they ran on, and they accessed data only when it was locally available on that hardware. However, [Frank] Würthwein of UC San Diego explains that, with data-taking rates set to increase dramatically by the end of LS1 in 2015, the current operational model can no longer satisfy peak processing needs. Instead, he argues, large-scale processing centers need to be created dynamically to cope with spikes in demand. To this end, Würthwein and colleagues carried out a successful proof-of-concept study in which the Gordon Supercomputer at the San Diego Supercomputer Center was dynamically and seamlessly integrated into the CMS production system to process a 125-terabyte data set.
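
    The “dynamic” part of that proof of concept can be illustrated with a minimal bursting sketch: watch the backlog of pending work and request external capacity only when the locally owned resources cannot clear it in time. This is an illustration of the scheduling idea only; the throughput numbers, node counts, and function name are hypothetical, and it is not the actual CMS/Gordon integration software.

```python
# Minimal sketch of demand-driven "bursting" onto an external resource.
# All rates and names below are hypothetical.

LOCAL_THROUGHPUT_JOBS_PER_HOUR = 500   # what the locally owned resources can process
EXTERNAL_NODE_JOBS_PER_HOUR = 50       # assumed throughput of one borrowed node
DEADLINE_HOURS = 48                    # time in which the backlog must be cleared

def external_nodes_needed(pending_jobs: int) -> int:
    """Number of external nodes to request so the backlog clears within the deadline."""
    local_capacity = LOCAL_THROUGHPUT_JOBS_PER_HOUR * DEADLINE_HOURS
    shortfall = max(0, pending_jobs - local_capacity)
    if shortfall == 0:
        return 0
    per_node_capacity = EXTERNAL_NODE_JOBS_PER_HOUR * DEADLINE_HOURS
    return -(-shortfall // per_node_capacity)  # ceiling division

# A normal week needs no bursting; a reprocessing spike triggers a request.
for pending in (10_000, 200_000):
    print(pending, "pending jobs ->", external_nodes_needed(pending), "external nodes")
```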

    gordon
    SDSC’s Gordon Supercomputer. Photo: Alan Decker. Gordon is part of the National Science Foundation’s (NSF) Extreme Science and Engineering Discovery Environment (XSEDE) program, a nationwide partnership comprising 16 supercomputers and high-end visualization and data analysis resources.

    See the full article here.



    ScienceSprings is powered by MAINGEAR computers

     