Tagged: Computational research

  • richardmitnick 8:06 pm on December 19, 2014 Permalink | Reply
    Tags: Computational research

    From LBL: “A Standard for Neuroscience Data” 


    December 16, 2014
    Linda Vu, +1 510 495 2402, lvu@lbl.gov

    Thanks to standardized image file formats—like JPEG, PNG or TIFF—which store information every time you take a digital photo, you can easily share selfies and other pictures with anybody connected to a computer, mobile phone or the Internet. Nobody needs to download any special software to see your picture.

    But in many science fields—like neuroscience—sharing data isn’t that simple because no standard data format exists. So in November 2014, the Neurodata without Borders initiative—which is supported by the Kavli Foundation, GE, Janelia Farm, Allen Institute for Brain Science and the International Neuroinformatics Coordinating Facility (INCF)—hosted a hackathon to consolidate ideas for designing and implementing a standard neuroscience file format. And BrainFormat, a neuroscience data standardization framework developed at the Lawrence Berkeley National Laboratory (Berkeley Lab), is among the candidates selected for further investigation. It is now a strong contender to contribute to a community-wide data format and storage standard for the neuroscience research community. BrainFormat is free to use, and can be downloaded here: https://bitbucket.org/oruebel/brainformat.

    “This issue of standardizing data formats and sharing files isn’t unique to neuroscience. Many science areas, including the global climate community, have grappled with this,” says Oliver Ruebel, the Berkeley Lab computational scientist who developed BrainFormat. “Sharing data allows researchers to do larger, more comprehensive studies. This in turn increases confidence in scientific results and ultimately leads to breakthroughs.”

    In conjunction with this work, Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC) is also working with Jeff Teeters and Fritz Sommer of the Redwood Center for Theoretical Neuroscience at UC Berkeley on the Collaborative Research Computational Neuroscience (CRCNS) data-sharing portal, which will allow neuroscience researchers worldwide to easily share files without having to download any special software.

    Both BrainFormat and CRCNS are being developed as part of a tri-institutional partnership between Berkeley Lab, UC Berkeley and UC San Francisco (UCSF). The computational tools could also help facilitate the White House’s Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative.

    Dealing With the Deluge of Brain Data

    Image Credit: Wikimedia Commons

    In 2013, President Barack Obama challenged the neuroscience community to gain fundamental insights into how the mind develops and functions, and discover new ways to address brain diseases and trauma. He called this the BRAIN Initiative.

    This work is expected to generate a deluge of data for the neuroscience community. After all, measuring activity from a fraction of neurons in the brain of a single mouse could generate almost as much data as the Large Hadron Collider, which is 17 miles in circumference. So before researchers can even begin taking measurements, they must first develop a standard format for labeling and organizing data, sharing files, and scaling up analytical and visualization methods and software to handle massive amounts of information.

    “Neuroscience is currently a field of individual principal investigators, doing individual experiments, and analyzing that data on customized software. This means that data is stored in many different formats and described in different ways, which hinders community access to data,” says Kristofer Bouchard, a neuroscientist at Berkeley Lab. “As data volumes grow, we are going to need more people to look at the same data in different ways.”

    Berkeley Lab is actively seeking ways to expand its contribution to the BRAIN Initiative, and as a scientist in the Computational Research Division (CRD), Ruebel is familiar with helping scientists from a variety of disciplines organize, store, access, analyze and share massive, complex datasets.

    To come up with a convention for labeling, organizing, storing and accessing neuroscience data, Ruebel worked closely with Bouchard, on applications from UCSF neurosurgeon Edward Chang, and with Berkeley Lab physicist Peter Denes to design BrainFormat using open-source Hierarchical Data Format (HDF) technologies. Over the last 15 years, HDF has helped a variety of scientific disciplines organize and share their data. One prominent user of HDF is NASA’s Earth Observing System, the primary data repository for understanding global climate change.

    In addition to data format standardization, HDF is also optimized to run on supercomputers. So by building BrainFormat on this technology, neuroscientists will be able to use supercomputers to process and analyze their massive datasets.
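
    The article doesn’t spell out BrainFormat’s internal layout, but the kind of hierarchical, self-describing organization HDF provides is easy to illustrate. Below is a minimal sketch in Python using the h5py library; the file name, group paths, attribute names and data shapes are invented for the example and are not the actual BrainFormat specification.

    ```python
    # Minimal sketch of storing an electrophysiology recording in HDF5 with
    # h5py. The group layout and attribute names here are invented for
    # illustration; they are not the actual BrainFormat specification.
    import h5py
    import numpy as np

    # Fake data standing in for a multi-electrode voltage recording.
    n_channels, n_samples = 64, 30000
    voltages = np.random.randn(n_channels, n_samples).astype(np.float32)

    with h5py.File("example_recording.h5", "w") as f:
        session = f.create_group("session_001")
        session.attrs["subject"] = "mouse_42"          # hypothetical metadata
        session.attrs["sampling_rate_hz"] = 30000.0

        # Chunked, compressed storage lets tools read slices of huge arrays
        # without loading everything -- one reason HDF suits big datasets.
        ephys = session.create_dataset(
            "ephys/voltages", data=voltages,
            chunks=(n_channels, 1024), compression="gzip")
        ephys.attrs["units"] = "millivolts"

    # Any HDF5-aware tool can now open the file and browse the hierarchy,
    # much as any image viewer can open a PNG.
    with h5py.File("example_recording.h5", "r") as f:
        print(f["session_001/ephys/voltages"].shape)   # (64, 30000)
    ```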

    “This work really highlights the unique strength of a Berkeley Lab, UC Berkeley and UCSF partnership,” says Denes. “UCSF is renowned for its clinical and experimental neuroscience experience with in vivo cortical electrophysiology; UC Berkeley contributes world-class expertise in theoretical neuroscience, statistical learning and data analysis; and Berkeley Lab brings supercomputing and applied mathematics expertise together with electronics and micro- and nano-fabrication.”

    Denes heads Berkeley Lab’s contingent of the tri-institutional partnership to develop instrumentation and computational methods for recording neuroscience data. In addition to tools for dealing with the data deluge, the BRAIN Initiative will require new hardware to collect more data at higher resolution and process it in real time, as well as novel algorithms for analyzing that data. The tri-institutional partnership is leveraging tools and expertise from different areas of science to tackle these challenges.

    “Berkeley Lab’s strength has always been in science of scale,” says Prabhat, a Berkeley Lab computational scientist. “Over the years, many science areas have struggled with issues of file format standardization, as well as managing and sharing massive datasets, and our staff built similar infrastructures for them. This isn’t a new problem; with BrainFormat and the CRCNS portal we’ve simply extended these solutions to the field of neuroscience.”

    About Berkeley Lab Computing Sciences

    The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy’s research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe. ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 5,500 scientists at national laboratories and universities, including those at Berkeley Lab’s Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California


     
  • richardmitnick 3:33 pm on November 21, 2014 Permalink | Reply
    Tags: Computational research

    From BNL: “Women @ Energy: Meifeng Lin” 

    November 14, 2014
    Joe Gettler


    Meifeng Lin is a theoretical particle physicist and a computational scientist at the Computational Science Center of Brookhaven National Laboratory. Her research focuses on advancing scientific discovery through high performance computing. One such area of her focus is lattice gauge theory, in which large-scale Monte Carlo simulations of the strong interaction between the sub-atomic particles called quarks and gluons are performed to study fundamental symmetries of the universe and internal structure of hadronic matter. She obtained her Bachelor of Science degree in Physics from Peking University in Beijing, China. After getting her PhD in theoretical particle physics from Columbia University, she held postdoctoral positions at MIT, Yale University and Boston University. Prior to joining BNL in 2013, she was an assistant computational scientist at Argonne Leadership Computing Facility.

    1) What inspired you to work in STEM?

    I always like to solve problems and figure out how things work. Being a farm girl in a small village in China, I was very close to nature and had a lot of opportunities to see physics at work in daily life, even though I didn’t realize it then. For example, in the starch-making process, farmers would drain the water out of the barrels using the siphon principle. Such experiences fostered my curiosity, and later on, when I learned physics and could make such connections, I was quite fascinated. I guess I also inherited the “curiosity” genes from my parents who, although they did not have the chance to get much education, were always trying to figure out how things work and fix everything by themselves. My father, in particular, also accidentally cultivated my interest in math and logic through things like puzzles and Chinese chess when I was a little kid.

    But the realization that I would like to work in STEM came gradually, and the fact that I do is more a happy accident than determination. There wasn’t an “aha” moment that made me decide to choose science as my career. Growing up, I always wanted to be a writer. Sort of by chance I was admitted to the Physics Department at Peking University. Once I started studying physics as a major, I grew to love the problem-solving aspects of it and was amazed by the mathematical simplicity of the laws of physics. Even more importantly, I saw intelligence, dedication and a constant hunger for new knowledge in my professors and colleagues throughout the years. And I enjoyed working and learning with them very much. I think that’s what got me to work in STEM eventually and stay with it.

    2) What excites you about your work at the Energy Department?

    Working in a field that strives to understand the most fundamental properties of our universe gives me this feeling that I am making a small contribution to the advancement of human knowledge, and that is very satisfying for me. At the Energy Department, I am surrounded by some of the smartest people and constantly exposed to new ideas and new technologies. It makes my work both challenging and exciting. Now that I am in an interdisciplinary research center, I am excited to have the opportunity to learn from my colleagues about their areas of interest and hopefully expand my research horizon.

    3) How can our country engage more women, girls, and other underrepresented groups in STEM?

    For young girls who are thinking about entering the field, some guidance and encouragement from teachers, both male and female, will certainly help a great deal. When I was in high school, I had female teachers telling me that I just needed to marry well. But I was lucky to have several male teachers who saw my potential in math and physics, offered me very generous support and guided me through difficult times. Without them I would probably have followed a more stereotypical path for girls. This may be less of an issue in the US now, but we still need to be careful not to typecast girls and minorities.

    On the other hand, we need to have a more supportive system which can retain women and underrepresented groups already working in STEM. I almost gave up working in STEM at one point, because it was so hard to find a job in my field that would allow me and my husband to stay in one place—the notorious “two-body problem”. I was fortunate enough to have some very understanding and supportive supervisors and colleagues. At both Boston University and Argonne, I was given the green light to work from home most of the time. I am immensely grateful for this arrangement, as it gave me the necessary transition to eventually get my current job which is close to where my husband works. Of course other people in STEM may have more constraints due to the nature of their work and don’t have the luxury of working remotely. But some flexibility and understanding will go a long way.

    4) Do you have tips you’d recommend for someone looking to enter your field of work?

    Take your time to find a field that interests and excites you. I always thought I wanted to be an experimental condensed matter physicist, but after a few summers in the labs, it turned out I did not like to do the experiments or be in the clean room. But I enjoyed writing computer programs to control the instruments or do simulations and data analysis. Then I found the field of lattice gauge theory where theoretical physics and supercomputers meet, which is perfect for me.

    For lattice gauge theory, and for computational sciences in general, the requirements on both mathematical and computational skills are pretty high. So it is important to have a solid mathematical foundation from early on. Some experience with scientific computing will be helpful. It probably sounds harder than it really is. Just don’t expect to know everything from the beginning. Nobody does. A lot of the skills, especially programming skills, can be picked up and improved on the job. As long as this is something you are interested in, be passionate, persevere, and don’t be afraid to ask for help.

    5) When you have free time, what are your hobbies?

    I enjoy reading, jogging, traveling and just checking out new neighborhoods with my husband. Occasionally when the mood strikes, I also like to write. I still hope someday I will be able to write a book or two. But with my first baby on the way, all this may change. Time will tell.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

    ScienceSprings relies on technology from MAINGEAR, Lenovo and Dell.

     
  • richardmitnick 4:46 pm on November 19, 2014 Permalink | Reply
    Tags: Computational research

    From LLNL: “Lawrence Livermore tops Graph 500” 



    Nov. 19, 2014

    Don Johnston
    johnston19@llnl.gov
    925-784-3980

    Lawrence Livermore National Laboratory scientists’ search for new ways to solve large, complex national security problems has led to the top ranking on the Graph 500 list and to new techniques for solving large graph problems on small high performance computing (HPC) systems, all the way down to a single server.

    “To fulfill our missions in national security and basic science, we explore different ways to solve large, complex problems, most of which include the need to advance data analytics,” said Dona Crawford, associate director for Computation at Lawrence Livermore. “These Graph 500 achievements are a product of that work performed in collaboration with our industry partners. Furthermore, these innovations are likely to benefit the larger scientific computing community.”

    Photo, from left: Robin Goldstone, Dona Crawford and Maya Gokhale with the Graph 500 certificate. Not pictured: Scott Futral.

    Lawrence Livermore’s Sequoia supercomputer, a 20-petaflop IBM Blue Gene/Q system, achieved the world’s best performance on the Graph 500 data analytics benchmark, announced Tuesday at SC14. LLNL and IBM computer scientists attained the No. 1 ranking by completing the largest problem scale ever attempted — scale 41 — with a performance of 23.751 teraTEPS (trillions of traversed edges per second). The team employed a technique developed by IBM.

    LLNL’s Sequoia supercomputer, a 20-petaflop IBM Blue Gene/Q system

    The Graph 500 offers performance metrics for data-intensive computing, or ‘big data,’ an area of growing importance to the HPC community.

    In addition to achieving the top Graph 500 ranking, Lawrence Livermore computer scientists also have demonstrated scalable Graph 500 performance on small clusters and even a single node. To achieve these results, Livermore computational researchers have combined innovative research in graph algorithms and data-intensive runtime systems.

    Robin Goldstone, a member of LLNL’s HPC Advanced Technologies Office, said: “These are really exciting results that highlight our approach of leveraging HPC to solve challenging large-scale data science problems.”

    The results achieved demonstrate, at two different scales, the ability to solve very large graph problems on modest-sized computing platforms by integrating flash storage into the memory hierarchy of these systems. Enabling technologies were provided through collaborations with Cray, Intel, Saratoga Speed and Mellanox.

    A scale-40 graph problem, containing 17.6 trillion edges, was solved on 300 nodes of LLNL’s Catalyst cluster. Catalyst, designed in partnership with Intel and Cray, augments a standard HPC architecture with additional capabilities targeted at data-intensive computing. Each Catalyst compute node features 128 gigabytes (GB) of dynamic random access memory (DRAM) plus an additional 800 GB of high-performance flash storage, and uses the LLNL DI-MMAP runtime that integrates flash into the memory hierarchy. With the HavoqGT graph traversal framework, Catalyst was able to store and process the 217 TB scale-40 graph, a feat that is otherwise only achievable on the world’s largest supercomputers. The Catalyst run was No. 4 in size on the list.

    DI-MMAP and HavoqGT also were used to solve a smaller, but equally impressive, scale-37 graph problem on a single server with 50 TB of network-attached flash storage. The server, equipped with four Intel E7-4870 v2 processors and 2 TB of DRAM, was connected to two Altamont XP all-flash arrays from Saratoga Speed Inc. over a high-bandwidth Mellanox FDR InfiniBand interconnect. The other scale-37 entries on the Graph 500 list required clusters of 1,024 nodes or larger to process the 2.2 trillion edges.
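
    For readers wondering where these edge counts come from: under the Graph 500 rules, a scale-S problem has 2^S vertices and an edge factor of 16, so roughly 16 × 2^S edges. A quick sanity check of the figures quoted above, as a small Python sketch (not part of the benchmark code itself):

    ```python
    # Graph 500 problem sizes: a scale-S graph has 2**S vertices and an
    # edge factor of 16, i.e. roughly 16 * 2**S edges.
    EDGE_FACTOR = 16

    def edges(scale: int) -> float:
        return EDGE_FACTOR * 2**scale

    for scale in (37, 40, 41):
        print(f"scale {scale}: {edges(scale) / 1e12:.1f} trillion edges")
    # scale 37: 2.2 trillion   (the single-server and 1,024-node runs)
    # scale 40: 17.6 trillion  (Catalyst, 300 nodes)
    # scale 41: 35.2 trillion  (Sequoia's record run)

    # Performance is reported in TEPS, traversed edges per second; Sequoia's
    # 23.751 teraTEPS is about 23.8 trillion edge traversals every second.
    ```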

    “Our approach really lowers the barrier of entry for people trying to solve very large graph problems,” said Roger Pearce, a researcher in LLNL’s Center for Applied Scientific Computing (CASC).

    “These results collectively demonstrate LLNL’s preeminence as a full service data intensive HPC shop, from single server to data intensive cluster to world class supercomputer,” said Maya Gokhale, LLNL principal investigator for data-centric computing architectures.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration.

    ScienceSprings relies on technology from MAINGEAR, Lenovo and Dell.

     
  • richardmitnick 1:47 pm on November 11, 2014 Permalink | Reply
    Tags: Computational research

    From DDDT at WCG: “Discovering Dengue Drugs – Together” 


    10 Nov 2014
    By Stan Watowich, PhD
    University of Texas Medical Branch (UTMB) in Galveston, Texas

    Summary
    For week five of our decade of discovery celebrations we’re looking back at the Discovering Dengue Drugs – Together project, which helped researchers at the University of Texas Medical Branch at Galveston search for drugs to help combat dengue – a debilitating tropical disease that threatens 40% of the world’s population. Thanks to World Community Grid volunteers, researchers have identified a drug lead that has the potential to stop the virus in its tracks.


    Dengue fever, also known as “breakbone fever”, causes excruciating joint and muscle pain, high fever and headaches. Severe dengue, known as “dengue hemorrhagic fever”, has become a leading cause of hospitalization and death among children in many Asian and Latin American countries. According to the World Health Organization (WHO), over 40% of the world’s population is at risk from dengue; another study estimated there were 390 million cases in 2010 alone.

    The disease is a mosquito-borne infection found in tropical and sub-tropical regions, primarily in the developing world. The dengue virus belongs to the flavivirus family, together with the hepatitis C, West Nile and yellow fever viruses.

    Although dengue represents a critical global health concern, it received limited attention from affluent countries until recently and is widely considered a neglected tropical disease. Currently, no approved vaccines or treatments exist for the disease. We launched Discovering Dengue Drugs – Together on World Community Grid in 2007 to search for drugs to treat dengue infections using a computer-based discovery approach.

    In the first phase of the project, we aimed to identify compounds that could be used to develop dengue drugs. Thanks to the computing power donated by World Community Grid volunteers, my fellow researchers and I at the University of Texas Medical Branch in Galveston, Texas, screened around three million chemical compounds to determine which ones would bind to the dengue virus and disable it.

    By 2009 we had found several thousand promising compounds to take to the next stage of testing. We began identifying the strongest compounds from the thousands of potentials, with the goal of turning these into molecules that could be suitable for human clinical trials.
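
    The docking computations themselves ran on volunteers’ machines; as a loose illustration of the triage step that narrows millions of screened compounds down to a shortlist, one can filter predicted binding energies against a cutoff. The CSV layout, field names and the -9.0 kcal/mol threshold below are invented for the sketch, not the project’s actual criteria.

    ```python
    # Loose sketch of post-screening triage: keep compounds whose predicted
    # binding energy to the target enzyme beats a cutoff. The CSV layout,
    # field names and -9.0 kcal/mol threshold are invented for illustration,
    # not the project's actual criteria.
    import csv

    def shortlist(results_csv: str, cutoff_kcal: float = -9.0) -> list[str]:
        hits = []
        with open(results_csv, newline="") as f:
            for row in csv.DictReader(f):
                # More negative docking scores mean tighter predicted binding.
                if float(row["binding_energy_kcal"]) <= cutoff_kcal:
                    hits.append(row["compound_id"])
        return hits

    # e.g. shortlist("dengue_docking_results.csv") might reduce ~3 million
    # screened compounds to the few thousand leads mentioned above.
    ```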

    We have recently made an exciting discovery using insights from Discovering Dengue Drugs – Together to guide additional calculations on our web portal for advanced computer-based drug discovery, DrugDiscovery@TACC. A molecule has demonstrated success in binding to and disabling a key dengue enzyme that is necessary for the virus to replicate.

    Furthermore, it also shows signs of being able to effectively disable related flaviviruses, such as the West Nile virus. Importantly, our newly discovered drug lead also demonstrates no negative side effects such as adverse toxicity, carcinogenicity or mutagenicity risks, making it a promising antiviral drug candidate for dengue and potentially other flaviviruses. We are working with medicinal chemists to synthesize variants of this exciting candidate molecule with the goal of improving its activity for planned pre-clinical and clinical trials.

    I’d like to express my gratitude for the dedication of World Community Grid volunteers. The advances we are making, and our improved understanding of drug discovery software and its current limitations, would not have been possible without your donated computing power.

    If you’d like to help researchers make more ground-breaking discoveries like this – and have the chance of winning some fantastic prizes – take part in our decade of discovery competition by encouraging your friends to sign up to World Community Grid today. There’s a week left and the field is wide open – get started today!

    See the full article here.

    World Community Grid (WCG) brings people together from across the globe to create the largest non-profit computing grid benefiting humanity. It does this by pooling surplus computer processing power. We believe that innovation combined with visionary scientific research and large-scale volunteerism can help make the planet smarter. Our success depends on like-minded individuals – like you.

    WCG projects run on BOINC software from UC Berkeley.

    BOINC, more properly the Berkeley Open Infrastructure for Network Computing, is a leader in the fields of distributed computing, grid computing and citizen cyberscience.

    CAN ONE PERSON MAKE A DIFFERENCE? YOU BETCHA!!

    “Download and install secure, free software that captures your computer’s spare power when it is on, but idle. You will then be a World Community Grid volunteer. It’s that simple!” You can download the software at either WCG or BOINC.

    Please visit the project pages-

    Mapping Cancer Markers

    Uncovering Genome Mysteries

    Say No to Schistosoma

    GO Fight Against Malaria

    Drug Search for Leishmaniasis

    Computing for Clean Water

    The Clean Energy Project

    Discovering Dengue Drugs – Together

    Help Cure Muscular Dystrophy

    Help Fight Childhood Cancer

    Help Conquer Cancer

    Human Proteome Folding

    FightAIDS@Home

    World Community Grid is a social initiative of IBM Corporation.

    ScienceSprings relies on technology from MAINGEAR, Lenovo and Dell.

     
  • richardmitnick 2:59 pm on October 22, 2014 Permalink | Reply
    Tags: Computational research

    From isgtw: “Laying the groundwork for data-driven science” 


    international science grid this week

    October 22, 2014
    Amber Harmon

    The ability to collect and analyze massive amounts of data is rapidly transforming science, industry, and everyday life — but many of the benefits of big data have yet to surface. Interoperability, tools, and hardware are still evolving to meet the needs of diverse scientific communities.

    Image courtesy istockphoto.com.

    One of the US National Science Foundation’s (NSF’s) goals is to improve the nation’s capacity in data science by investing in the development of infrastructure, building multi-institutional partnerships to increase the number of data scientists, and augmenting the usefulness and ease of using data.

    As part of that effort, the NSF announced $31 million in new funding to support 17 innovative projects under the Data Infrastructure Building Blocks (DIBBs) program. Now in its second year, the 2014 DIBBs awards support research in 22 states and touch on research topics in computer science, information technology, and nearly every field of science supported by the NSF.

    “Developed through extensive community input and vetting, NSF has an ambitious vision and strategy for advancing scientific discovery through data,” says Irene Qualters, division director for Advanced Cyberinfrastructure. “This vision requires a collaborative national data infrastructure that is aligned to research priorities and that is efficient, highly interoperable, and anticipates emerging data policies.”

    Of the 17 awards, two support early implementations of research projects that are more mature; the others support pilot demonstrations. Each is a partnership between researchers in computer science and other science domains.

    One of the two early implementation grants will support a research team led by Geoffrey Fox, a professor of computer science and informatics at Indiana University, US. Fox’s team plans to create middleware and analytics libraries that enable large-scale data science on high-performance computing systems. Fox and his team plan to test their platform with several different applications, including geospatial information systems (GIS), biomedicine, epidemiology, and remote sensing.

    “Our innovative architecture integrates key features of open source cloud computing software with supercomputing technology,” Fox said. “And our outreach involves ‘data analytics as a service’ with training and curricula set up in a Massive Open Online Course or MOOC.” Among others, US institutions collaborating on the project include Arizona State University in Phoenix; Emory University in Atlanta, Georgia; and Rutgers University in New Brunswick, New Jersey.

    Ken Koedinger, professor of human computer interaction and psychology at Carnegie Mellon University in Pittsburgh, Pennsylvania, US, leads the other early implementation project. Koedinger’s team concentrates on developing infrastructure that will drive innovation in education.

    The team will develop a distributed data infrastructure, LearnSphere, that will make more educational data accessible to course developers, while also motivating more researchers and companies to share their data with the greater learning sciences community.

    “We’ve seen the power that data has to improve performance in many fields, from medicine to movie recommendations,” Koedinger says. “Educational data holds the same potential to guide the development of courses that enhance learning while also generating even more data to give us a deeper understanding of the learning process.”

    The DIBBs program is part of a coordinated strategy within NSF to advance data-driven cyberinfrastructure. It complements other major efforts like the DataOne project, the Research Data Alliance, and Wrangler, a groundbreaking data analysis and management system for the national open science community.

    See the full article here.

    iSGTW is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, iSGTW is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read iSGTW via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo
    Lenovo

    Dell
    Dell

     
  • richardmitnick 10:33 am on August 8, 2013 Permalink | Reply
    Tags: Computational research

    From SLAC: “New Analysis Shows How Proteins Shift Into Working Mode” 

    August 8, 2013
    Mike Ross

    “In an advance that will help scientists design and engineer proteins, a team including researchers from SLAC and Stanford has found a way to identify how protein molecules flex into specific atomic arrangements required to catalyze chemical reactions essential for life.

    The achievement, published Sunday (Aug. 4, 2013) in Nature Methods, uses a new computer algorithm to analyze data from X-ray studies of crystallized proteins. Scientists were able to identify cascades of atomic adjustments that shift protein molecules into new shapes, or conformations.

    This 3-D figure of the enzyme dihydrofolate reductase (DHFR) shows the nine different areas where a small fluctuation in one part of this flexible molecule causes a sequence of atomic movements to propagate like falling dominoes. A new computer algorithm, CONTACT, identified these areas, which are colored red, yellow, green, orange, salmon, grey, light blue, dark blue and purple.

    ‘Proteins need to move around to do their part in keeping the organism alive,’ said Henry van den Bedem, first author on the paper and a researcher with the Structure Determination Core of the Joint Center for Structural Genomics (JCSG) at the SSRL Directorate of SLAC. ‘But often these movements are very subtle and difficult to discern. Our research is aimed at identifying those fluctuations from X-ray data and linking them to a protein’s biological functions. Our work provides important new insights, which will eventually allow us to re-engineer these molecular machines.'”

    Henry van den Bedem. (Matt Beardsley/SLAC)

    Central to the new technique is a computer algorithm called CONTACT, which analyzes protein structures determined by room-temperature X-ray crystallography. Built upon an earlier algorithm created by van den Bedem, CONTACT detects how subtle features in the experimental data produced by changing conformations propagate through the protein, and identifies regions within the protein where these cascades of small changes are likely to result in stable conformations.
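
    The published algorithm is considerably more involved than this article conveys, but the core idea (residues whose alternative conformations are sterically coupled chain into pathways) can be illustrated with a toy contact network. Everything below, from the residue names to the clash pairs, is invented; this is a sketch of the concept, not CONTACT itself.

    ```python
    # Toy version of the idea behind CONTACT: if residue A's alternate
    # conformation clashes with residue B, B must shift too, so coupled
    # residues chain into pathways. This is a sketch of the concept only,
    # not the published algorithm; all residue pairs are invented.
    import networkx as nx

    clashes = [("ASP27", "PHE31"), ("PHE31", "MET42"),   # one cascade
               ("GLY95", "SER118")]                      # an independent one

    network = nx.Graph(clashes)

    # Each connected component is a candidate cascade of coupled motions,
    # analogous to the nine colored regions CONTACT found in DHFR.
    for cascade in nx.connected_components(network):
        print(sorted(cascade))
    # ['ASP27', 'MET42', 'PHE31'] and ['GLY95', 'SER118']
    ```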

    The research team also included scientists from the University of California, San Francisco and The Scripps Research Institute in La Jolla.

    See the full article here.

    SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford University for the DOE’s Office of Science.


    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 11:39 am on July 3, 2013 Permalink | Reply
    Tags: Computational research

    From Fermilab: “Synergia pushes the state of the art” 

    Fermilab is an enduring source of strength for the US contribution to scientific research worldwide.

    Wednesday, July 3, 2013

    Jim Amundson, deputy head of the Computational Physics Department and leader of the Computational Physics for Accelerators Group, wrote this column.

    “In an era when the field of particle physics is looking to decide what accelerator projects to pursue, accelerator modeling expertise is of tremendous importance. Fermilab’s Synergia simulation tool is helping accelerator experts here and at CERN optimize their machines and plan for the future.

    A little over 10 years ago, a small accelerator modeling team at Fermilab received its first grant from the then newly established Scientific Discovery through Advanced Computing (SciDAC) program. The thrust of this grant was to combine state-of-the-art space charge calculations with similarly advanced software for single-particle beam dynamics—a capability that did not exist in this field at that point. This work requires the sort of advanced high-performance computing (HPC) platforms championed by the SciDAC program. We named our new program Synergia (Συνεργια)—the Greek word for synergy.

    Our first application of Synergia in 2002 was to improve the modeling and, ultimately, the performance of the Fermilab Booster, when the delivery of protons for the Tevatron collider experiments and MiniBooNE was among the lab’s highest priorities. After our initial successes modeling the Booster, we have continued to use SciDAC to enhance Synergia, through both funding and collaboration with other physicists and computer scientists. In the past decade Synergia has evolved into a general framework for the calculation of intensity-dependent effects in beam dynamics. And while running Synergia efficiently on 128 parallel processors used to seem like a major accomplishment, we now have demonstrated efficient running on 131,072 cores, keeping us at the leading edge of the rapidly changing field of HPC.”
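
    Synergia’s actual solvers are far more sophisticated, but the split-operator idea behind combining single-particle beam dynamics with collective space-charge calculations can be sketched in a few lines of Python. All lattice and beam parameters below are invented toy values, not anything from Synergia itself.

    ```python
    # Toy split-operator tracking: alternate single-particle optics with a
    # collective space-charge kick computed from the whole macroparticle
    # distribution. Synergia's real solvers (3D Poisson, full lattices, RF)
    # go far beyond this; every number here is an invented toy value.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000                            # macroparticles
    x = rng.normal(0.0, 1e-3, n)           # transverse position [m]
    xp = rng.normal(0.0, 1e-4, n)          # transverse angle [rad]

    K_FOCUS = 30.0   # external linear focusing strength (toy lattice)
    K_SC = 5.0       # space-charge defocusing strength (toy beam current)
    DS = 0.01        # integration step along the beamline [m]

    for _ in range(1000):
        x += xp * DS / 2                   # half drift (single-particle)
        xp -= K_FOCUS * x * DS             # external focusing kick
        xp += K_SC * (x - x.mean()) * DS   # collective space-charge kick
        x += xp * DS / 2                   # half drift

    print(f"rms beam size after 10 m: {x.std():.2e} m")
    ```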

    See the full article here.

    Fermilab campus

    Fermi National Accelerator Laboratory (Fermilab), located just outside Batavia, Illinois, near Chicago, is a US Department of Energy national laboratory specializing in high-energy particle physics.


    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 7:33 pm on May 3, 2013 Permalink | Reply
    Tags: Computational research

    From Berkeley Lab: “Brain Visualization Prototype Holds Promise for Precision Medicine” 


    From the Computational Research Division

    Berkeley Lab, UCSF and Oblong Industries Show Brain Browser at Summit

    “The ability to combine all of a patient’s neurological test results into one detailed, interactive “brain map” could help doctors diagnose and tailor treatment for a range of neurological disorders, from autism to epilepsy. But before this can happen, researchers need a suite of automated tools and techniques to manage and make sense of these massive complex datasets.

    Computational researchers from Berkeley Lab used existing computational tools to translate laboratory data collected at UCSF into 3D visualizations of brain structures and activity.

    To get an idea of what these tools would look like, computational researchers from the Lawrence Berkeley National Laboratory (Berkeley Lab) are working with neuroscientists from the University of California, San Francisco (UCSF). So far, the Berkeley Lab team has used existing computational tools to translate UCSF laboratory data into 3D visualizations of brain structures and activity. Earlier this year, Los Angeles-based Oblong Industries joined the collaboration and implemented a state-of-the-art, gesture-based navigation interface that allows researchers to interactively explore 3D brain visualizations with hand poses and movements.

    This is terrific new science.

    See the full article here.

    A U.S. Department of Energy National Laboratory Operated by the University of California



    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 10:14 am on February 13, 2013 Permalink | Reply
    Tags: Computational research

    From ESA Technology: “Silicon brains to oversee satellites” 

    European Space Agency

    13 February 2013
    No Writer Credit

    A beautiful and expensive sight: upwards of €6 million worth of silicon wafers, crammed with the complex integrated circuits that sit at the heart of each and every ESA mission. Years of meticulous design work went into these tiny brains, empowering satellites with intelligence.

    Silicon wafers etched with integrated circuits for space missions. No image credit.

    The image shows a collection of six silicon wafers that contain some 14 different chip designs developed by several European companies during the last eight years with ESA’s financial and technical support.

    Each of these 20 cm-diameter wafers contains between 30 and 80 replicas of each chip, each one carrying up to about 10 million transistors or basic circuit switches.
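
    Those replica counts are consistent with simple wafer geometry. A standard dies-per-wafer estimate, combined with several designs sharing each wafer as described below, lands in the quoted 30-80 range; the die areas and designs-per-wafer in this Python sketch are assumptions chosen purely for illustration.

    ```python
    # Rough dies-per-wafer estimate for a 20 cm (200 mm) wafer:
    #   usable dies ~ wafer_area/die_area - pi*diameter/sqrt(2*die_area)
    # Die areas and designs-per-wafer below are assumptions chosen only to
    # show the quoted 30-80 replicas per design are geometrically plausible.
    import math

    def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
        d = wafer_diameter_mm
        gross = math.pi * (d / 2) ** 2 / die_area_mm2          # area ratio
        edge_loss = math.pi * d / math.sqrt(2 * die_area_mm2)  # partial dies
        return int(gross - edge_loss)

    for die_area, designs in ((100.0, 4), (200.0, 2)):         # assumed values
        per_design = dies_per_wafer(200.0, die_area) // designs
        print(f"{die_area:.0f} mm^2 dies shared by {designs} designs: "
              f"~{per_design} replicas of each chip")
    # ~67 and ~62 replicas: inside the 30-80 range quoted above.
    ```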

    To save money on the high cost of fabrication, various chips designed by different companies and destined for multiple ESA projects are crammed onto the same silicon wafers, etched into place at specialised semiconductor manufacturing plants or ‘fabs’, in this case LFoundry (formerly Atmel) in France.

    Once manufactured, the chips, still on the wafer, are tested. The wafers are then chopped up. They become ready for use when placed inside protective packages – just like standard terrestrial microprocessors – and undergo final quality tests.

    Through little metal pins or balls sticking out of their packages these miniature brains are then connected to other circuit elements – such as sensors, actuators, memory or power systems – used across the satellite.

    To save the time and money needed to develop complex chips like these, ESA’s Microelectronics section maintains a catalogue of chip designs, known as Intellectual Property (IP) cores, available to European industry through ESA licence.

    See the full article here.

    The European Space Agency (ESA), established in 1975, is an intergovernmental organization dedicated to the exploration of space, currently with 19 member states. Headquartered in Paris, ESA has a staff of more than 2,000. ESA’s space flight program includes human spaceflight, mainly through the participation in the International Space Station program, the launch and operations of unmanned exploration missions to other planets and the Moon, Earth observation, science, telecommunication as well as maintaining a major spaceport, the Guiana Space Centre at Kourou, French Guiana, and designing launch vehicles. ESA science missions are based at ESTEC in Noordwijk, Netherlands, Earth Observation missions at ESRIN in Frascati, Italy, ESA Mission Control (ESOC) is in Darmstadt, Germany, the European Astronaut Centre (EAC) that trains astronauts for future missions is situated in Cologne, Germany, and the European Space Astronomy Centre is located in Villanueva de la Cañada, Spain.

    ESA Technology


    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 6:12 pm on February 10, 2013 Permalink | Reply
    Tags: Computational research

    From Argonne Lab: “New classes of magnetoelectric materials promise advances in computing technology” 

    News from Argonne National Laboratory

    February 7, 2013
    Jared Sagoff

    Although scientists have been aware that magnetism and electricity are two sides of the same proverbial coin for almost 150 years, researchers are still trying to find new ways to use a material’s electric behavior to influence its magnetic behavior, or vice versa.

    An illustration of a titanium-europium oxide cage lattice studied in the experiment. Image by Renee Carlson.

    Thanks to new research by an international team of researchers led by the U.S. Department of Energy’s Argonne National Laboratory, physicists have developed new methods for controlling magnetic order in a particular class of materials known as “magnetoelectrics.”

    Magnetoelectrics get their name from the fact that their magnetic and electric properties are coupled to each other. Because this physical link potentially allows control of their magnetic behavior with an electrical signal or vice versa, scientists have taken a special interest in magnetoelectric materials.

    ‘Electricity and magnetism are intrinsically coupled – they’re the same entity,’ said Philip Ryan, a physicist at Argonne’s Advanced Photon Source. ‘Our research is designed to accentuate the coupling between the electric and magnetic parameters by subtly altering the structure of the material.

    This new approach to cross-coupling magnetoelectricity could prove a key step toward the development of next-generation memory storage, improved magnetic field sensors, and many other applications long dreamed about. Unfortunately, scientists still have a ways to go in translating these findings into commercial devices.’

    ‘Instead of having just a ‘0’ or a ‘1,’ you could have a broader range of different values,’ Ryan said. ‘A lot of people are looking into what that kind of logic would look like.’

    A paper based on the research, “Reversible control of magnetic interactions by electric field in a single-phase material,” was published in Nature Communications.

    See the full article here.


    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science


    ScienceSprings is powered by MAINGEAR computers

     