Tagged: Extreme Science and Engineering Discovery Environment (XSEDE)

  • richardmitnick 9:50 pm on December 23, 2016
    Tags: Extreme Science and Engineering Discovery Environment (XSEDE)

    From Science Node: “Supercomputing an earthquake-ready building” 


    19 Dec, 2016
    Tristan Fitzpatrick

    Preparing for an earthquake takes more than luck, thanks to natural hazard engineers and their supercomputers.

    Courtesy Ellen Rathje.

    If someone is inside a building during an earthquake, there isn’t much they can do except duck under a table and hope for the best.

    That’s why designing safe buildings is an important priority for natural hazards researchers.

    Natural hazards engineering involves experimentation, numerical simulation, and data analysis to improve seismic design practices.

    To facilitate this research, the US National Science Foundation (NSF) has invested in the DesignSafe cyberinfrastructure so that researchers can fully harness the vast amount of data available in natural hazards engineering.

    Led by Ellen Rathje at the University of Texas and developed by the Texas Advanced Computing Center (TACC), DesignSafe includes an interactive web interface, repositories to share data sets, and a cloud-based workspace for researchers to perform simulation, computation, data analysis, and other tasks.


    For example, scientists may use a device known as a shake table to simulate earthquake motion and measure how buildings respond to it.

    “From a shaking table test we can measure the movements of a building due to a certain seismic loading,” Rathje says, “and then we can develop a numerical model of that building subjected to the same earthquake loading.”

    Researchers then compare the simulation to experimental data that’s been collected previously from observations in the field.

    “In natural hazards engineering, we take advantage of a lot of experimental data,” Rathje says, “and try to couple it with numerical simulations, as well as field data from observations, and bring it all together to make advances.”
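    To make that coupling concrete, here is a minimal Python sketch of the kind of comparison described above. It is not DesignSafe's actual tooling: the single-degree-of-freedom building model, its parameters, the synthetic ground motion, and the misfit metric are all illustrative assumptions.

```python
# Minimal sketch (illustrative only, not DesignSafe software): integrate the
# motion of an idealized single-degree-of-freedom "building" driven by a ground
# acceleration record, then compare the simulated response with measurements.
import numpy as np

def sdof_response(ground_acc, dt, mass=1.0, freq_hz=2.0, damping=0.05):
    """Relative displacement history via explicit Newmark (beta=0, gamma=1/2)."""
    omega = 2.0 * np.pi * freq_hz            # natural circular frequency (rad/s)
    k = mass * omega**2                      # stiffness
    c = 2.0 * damping * mass * omega         # viscous damping coefficient
    n = len(ground_acc)
    u = np.zeros(n)                          # relative displacement
    v = np.zeros(n)                          # relative velocity
    a = np.zeros(n)                          # relative acceleration
    a[0] = -ground_acc[0]                    # structure at rest before shaking
    for i in range(n - 1):
        # Equation of motion: m*a + c*v + k*u = -m*ag
        u[i + 1] = u[i] + dt * v[i] + 0.5 * dt**2 * a[i]
        a[i + 1] = (-mass * ground_acc[i + 1]
                    - c * (v[i] + 0.5 * dt * a[i])
                    - k * u[i + 1]) / (mass + 0.5 * dt * c)
        v[i + 1] = v[i] + 0.5 * dt * (a[i] + a[i + 1])
    return u

# Synthetic demo in place of a real shake-table record.
dt = 0.01
t = np.arange(0.0, 20.0, dt)
ground_acc = 0.3 * 9.81 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.2 * t)
simulated = sdof_response(ground_acc, dt)
measured = simulated + np.random.normal(0.0, 0.002, simulated.size)  # stand-in for test data
rms_misfit = np.sqrt(np.mean((simulated - measured) ** 2))
print(f"RMS misfit between model and 'measurement': {rms_misfit:.4f} m")
```

    Real studies use far more detailed structural models and instrument records, but the loop is the same: simulate the loading, compare with what the shake table measured, and refine the model.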

    The computational resources of Extreme Science and Engineering Discovery Environment (XSEDE) make these simulations possible. DesignSafe facilitates the use of these resources within the natural hazards engineering research community.

    Taming the tsunami? The 2011 Tohoku tsunami caused severe structural damage and the loss of many lives — almost 16,000 dead, over 6,000 injured, and 2,500 missing. Natural hazards engineers use supercomputer simulations and shake tables to minimize damage by designing safer buildings. Courtesy EPA.

    According to Rathje, the collaboration between XSEDE and TACC benefits both organizations, as well as researchers interested in natural hazards engineering.

    Rathje has previously studied disasters such as the 2010 Haiti earthquake and earthquakes in Japan. While the collaboration is a step forward for natural hazards research, she says it is only one step toward making buildings safer during earthquakes.

    “There’s still a lot of work to be done in natural hazards engineering,” she admits, “but we’ve been able to bring it all under one umbrella so that natural hazards researchers can come to one place to get the data they need for their research.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 12:44 pm on August 13, 2016
    Tags: Extreme Science and Engineering Discovery Environment (XSEDE)

    From Science Node: “Opening the spigot at XSEDE” 


    09 Aug, 2016
    Ken Chiacchia

    A boost from sequencing technologies and computational tools is in store for scientists studying how cells change which of their genes are active.

    Researchers using the Extreme Science and Engineering Discovery Environment (XSEDE) collaboration of supercomputing centers have reported advances in reconstructing cells’ transcriptomes — the genes activated by ‘transcribing’ them from DNA into RNA.

    The work aims to clarify the best practices in assembling transcriptomes, which ultimately can aid researchers throughout the biomedical sciences.

    Digital detectives. Researchers from Texas A&M are using XSEDE resources to manage the data from transcriptome assembly. Studying transcriptomes will offer critical clues of how cells change their behavior in response to disease processes.

    “It’s crucial to determine the important factors that affect transcriptome reconstruction,” says Noushin Ghaffari of AgriLife Genomics and Bioinformatics, at Texas A&M University. “This work will particularly help generate more reliable resources for scientists studying non-model species” — species not previously well studied.

    Ghaffari is principal investigator in an ongoing project whose preliminary findings and computational aspects were presented at the XSEDE16 conference in Miami in July. She is leading a team of students and supercomputing experts from Texas A&M, Indiana University, and the Pittsburgh Supercomputing Center (PSC).

    The scientists sought to improve the quality and efficiency of assembling transcriptomes, and they tested their work on two real data sets from the Sequencing Quality Control Consortium (SEQC) RNA-Seq data: one from cancer cell lines and one from brain tissues from 23 human donors.

    What’s in a transcriptome?

    The transcriptome of a cell at a given moment changes as it reacts to its environment. Transcriptomes offer critical clues of how cells change their behavior in response to disease processes like cancer, or normal bodily signals like hormones.

    Assembling a transcriptome is a big undertaking with current technology, though. Scientists must start with samples containing tens or hundreds of thousands of RNA molecules that are each thousands of RNA ‘base units’ long. Trouble is, most of the current high-speed sequencing technologies can only read a couple hundred bases at one time.

    So researchers must first chemically cut the RNA into small pieces, sequence it, remove RNA not directing cell activity, and then match the overlapping fragments to reassemble the original RNA molecules.

    Harder still, they must identify and correct sequencing mistakes, and deal with repetitive sequences that make the origin and number of repetitions of a given RNA sequence unclear.
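    As a toy illustration of the overlap-matching step (not one of the assemblers used in the study), the Python sketch below greedily merges short reads by their longest suffix-prefix overlap to rebuild a made-up transcript. Production assemblers face millions of error-prone reads and repetitive sequence, which is what drives the memory and compute demands described below.

```python
# Toy greedy overlap assembler: repeatedly merge the pair of reads with the
# longest suffix-prefix overlap until no overlaps remain. Illustrative only.
def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that matches a prefix of b."""
    for length in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:length]):
            return length
    return 0

def greedy_assemble(reads, min_len=3):
    reads = list(reads)
    while len(reads) > 1:
        best_len, best_i, best_j = 0, None, None
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    olen = overlap(a, b, min_len)
                    if olen > best_len:
                        best_len, best_i, best_j = olen, i, j
        if best_len == 0:                       # no overlaps left: disjoint contigs
            break
        merged = reads[best_i] + reads[best_j][best_len:]
        for idx in sorted((best_i, best_j), reverse=True):
            reads.pop(idx)                      # drop the two merged reads
        reads.append(merged)                    # keep the new contig
    return reads

# Fragments of the made-up transcript AUGGCUACGUUAGC, read a few bases at a time.
fragments = ["AUGGCUA", "GCUACGU", "ACGUUAG", "GUUAGC"]
print(greedy_assemble(fragments))               # ['AUGGCUACGUUAGC']
```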

    While software tools exist to undertake all of these tasks, Ghaffari’s report was the most comprehensive yet to examine a variety of factors that affect assembly speed and accuracy when these tools are combined in a start-to-finish workflow.

    Heavy lifting

    The most comprehensive study of its kind, the report used data from SEQC to assemble a transcriptome, incorporating many quality control steps to ensure results were accurate. The process required vast amounts of computer memory, made possible by PSC’s high-memory supercomputers Blacklight, Greenfield, and now the new Bridges system’s 3-terabyte ‘large memory nodes.’

    Blacklight supercomputer at the Pittsburgh Supercomputing Center.

    Bridges HPE/Intel supercomputer.

    Bridges, a new PSC supercomputer, is designed for unprecedented flexibility and ease of use. It will include database and web servers to support gateways, collaboration, and powerful data management functions. Courtesy Pittsburgh Supercomputing Center.

    “As part of this work, we are running some of the largest transcriptome assemblies ever done,” says coauthor Philip Blood of PSC, an expert in XSEDE’s Extended Collaborative Support Service. “Our effort focused on running all these big data sets many different ways to see what factors are important in getting the best quality. Doing this required the large memory nodes on Bridges, and a lot of technical expertise to manage the complexities of the workflow.”

    During the study, the team concentrated on optimizing the speed of data movement from storage to memory to the processors and back.

    They also incorporated new verification steps to avoid perplexing errors that arise when wrangling big data through complex pipelines. Future work will include the incorporation of ‘checkpoints’ — storing the computations regularly so that work is not lost if a software error happens.
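    The checkpointing idea can be sketched as follows; the stage names and the pickle-on-disk mechanism here are hypothetical simplifications, not the team's actual pipeline. Each stage saves its result when it finishes, so a crashed run resumes from the last completed stage instead of starting over.

```python
# Hedged sketch of stage-level checkpointing for a long-running pipeline.
# Stage names and functions are hypothetical stand-ins, not the real workflow.
import os
import pickle

CHECKPOINT_DIR = "checkpoints"

def run_stage(name, func, *args):
    """Run a pipeline stage, or load its saved result if it already finished."""
    os.makedirs(CHECKPOINT_DIR, exist_ok=True)
    path = os.path.join(CHECKPOINT_DIR, f"{name}.pkl")
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)          # resume: this stage already completed
    result = func(*args)
    with open(path, "wb") as f:
        pickle.dump(result, f)             # checkpoint the finished stage
    return result

# Hypothetical stand-ins for real steps (read trimming, assembly, quality control).
reads = run_stage("trim", lambda raw: [r.strip() for r in raw], ["ACGU\n", "GGCU\n"])
contigs = run_stage("assemble", lambda rs: ["".join(rs)], reads)
report = run_stage("qc", lambda cs: {"contigs": len(cs)}, contigs)
print(report)
```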

    Ultimately, Blood adds, the scientists would like to put all the steps of the process into an automated workflow that will make it easy for other biomedical researchers to replicate.

    The work promises a better understanding of how living organisms respond to disease, environmental, and evolutionary changes, the scientists reported.

    See the full article here.


     