Tagged: Computation

  • richardmitnick 4:05 pm on July 10, 2014 Permalink | Reply
    Tags: Computation

    From SLAC: “Uncertainty Gives Scientists New Confidence in Search for Novel Materials”


    SLAC Lab

    July 10, 2014
    Andrew Gordon, agordon@slac.stanford.edu, (650) 926-2282

    Scientists at Stanford University and the Department of Energy’s SLAC National Accelerator Laboratory have found a way to estimate uncertainties in computer calculations that are widely used to speed the search for new materials for industry, electronics, energy, drug design and a host of other applications. The technique, reported in the July 11 issue of Science, should quickly be adopted in studies that produce some 30,000 scientific papers per year.

    “Over the past 10 years our ability to calculate the properties of materials and chemicals, such as reactivity and mechanical strength, has increased enormously. It’s totally exploded,” said Jens Nørskov, a professor at SLAC and Stanford and director of the SUNCAT Center for Interface Science and Catalysis, who led the research.

    “As more and more researchers use computer simulations to predict which materials have the interesting properties we’re looking for – part of a process called ‘materials by design’ – knowing the probability for error in these calculations is essential,” he said. “It tells us exactly how much confidence we can put in our results.”

    Nørskov and his colleagues have been at the forefront of developing this approach, using it to find better and cheaper catalysts to speed ammonia synthesis and generate hydrogen gas for fuel, among other things. But the technique they describe in the paper can be broadly applied to all kinds of scientific studies.

    This image shows the results of calculations aimed at determining which of six chemical elements would make the best catalyst for promoting an ammonia synthesis reaction. Researchers at SLAC and Stanford used Density Functional Theory (DFT) to calculate the strength of the bond between nitrogen atoms and the surfaces of the catalysts. The bond strength, plotted on the horizontal axis, is a key factor in determining the reaction speed, plotted on the vertical axis. Based on thousands of these calculations, which yielded a range of results (colored dots) that reveal the uncertainty involved, researchers estimated an 80 percent chance that ruthenium (Ru, in red) will be a better catalyst than iron (Fe, in orange). (Andrew Medford and Aleksandra Vojvodic/SUNCAT, Callie Cullum)

    Speeding the Material Design Cycle

    The set of calculations involved in this study is known as DFT, for Density Functional Theory. It predicts bond energies between atoms based on the principles of quantum mechanics. DFT calculations allow scientists to predict hundreds of chemical and materials properties, from the electronic structures of compounds to density, hardness, optical properties and reactivity.

    Because researchers use approximations to simplify the calculations – otherwise they’d take too much computer time – each of these calculated material properties could be off by a fairly wide margin.

    To estimate the size of those errors, the team applied a statistical method: They calculated each property thousands of times, each time tweaking one of the variables to produce slightly different results. That variation in results represents the possible range of error.
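    In other words, the team re-ran essentially the same calculation under many slightly different approximations and read the uncertainty off the spread of the results. The short Python sketch below only illustrates that idea; the function, the Gaussian perturbation and every numerical value are invented stand-ins, not the actual DFT machinery used in the study.

      import numpy as np

      rng = np.random.default_rng(0)

      def calculated_bond_energy(perturbation):
          # Stand-in for one full DFT calculation of a nitrogen-surface bond
          # energy (eV). A real run would re-solve the electronic structure
          # with a slightly tweaked approximation; here the dependence is
          # faked with a simple linear response around a nominal value.
          nominal = -0.55        # hypothetical best-estimate bond energy, eV
          sensitivity = 0.20     # hypothetical response to the tweak
          return nominal + sensitivity * perturbation

      # Repeat the "calculation" thousands of times, each time tweaking one
      # underlying variable; the spread of the results estimates the error.
      samples = np.array([calculated_bond_energy(rng.normal()) for _ in range(5000)])
      print(f"bond energy = {samples.mean():.2f} +/- {samples.std():.2f} eV")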

    “Even with the estimated uncertainties included, when we compared the calculated properties of different materials we were able to see clear trends,” said Andrew J. Medford, a graduate student with SUNCAT and first author of the study. “We could predict, for instance, that ruthenium would be a better catalyst for synthesizing ammonia than cobalt or nickel, and say what the likelihood is of our prediction being right.”
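    Given two such sets of results, the likelihood Medford mentions can be read off as the fraction of paired samples in which one material comes out ahead of the other. Again, this is only a hedged sketch: the ensembles, the “optimum” value and the comparison rule below are made up for illustration and are not the study’s actual analysis.

      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical ensembles of a rate-controlling property (e.g. a
      # nitrogen binding energy, eV); the means and spreads are invented.
      ruthenium = rng.normal(loc=-0.60, scale=0.15, size=5000)
      iron      = rng.normal(loc=-0.45, scale=0.20, size=5000)

      optimum = -0.65   # hypothetical ideal value at the peak of the volcano curve

      # Fraction of samples in which ruthenium lands closer to the optimum.
      p_ru_better = np.mean(np.abs(ruthenium - optimum) < np.abs(iron - optimum))
      print(f"estimated probability that Ru beats Fe: {p_ru_better:.2f}")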

    An Essential New Tool for Thousands of Studies

    DFT calculations are used in the Materials Genome Initiative to search through millions of solids and compounds, and they are also widely used in drug design, said Kieron Burke, a professor of chemistry and physics at the University of California-Irvine who was not involved in the study.

    “There were roughly 30,000 papers published last year using DFT,” he said. “I believe the technique they’ve developed will become absolutely necessary for these kinds of calculations in all fields in a very short period of time.”

    Thomas Bligaard, a senior staff scientist in charge of theoretical method development at SUNCAT, said the team has a lot of work ahead in implementing these ideas, especially in calculations attempting to make predictions of new phenomena or new functional materials.

    Other researchers involved in the study were Jess Wellendorff, Aleksandra Vojvodic, Felix Studt, and Frank Abild-Pedersen of SUNCAT and Karsten W. Jacobsen of the Technical University of Denmark. Funding for the research came from the DOE Office of Science.

    See the full article here.

    SLAC Campus
    SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford University for the DOE’s Office of Science.


    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 1:00 pm on January 7, 2013 Permalink | Reply
    Tags: Computation

    From ESA Space Engineering: “LEON: the space chip that Europe built” 

    European Space Agency

    7 January 2013
    No Writer Credit

    Just like home computers, the sophisticated capabilities of today’s space missions are made possible by the power of their processor chips. ESA’s upcoming Alphasat telecom satellite, the Proba-V microsatellite, the Earth-monitoring Sentinel family and the BepiColombo mission to Mercury are among the first missions to use an advanced 32-bit microprocessor – engineered and built in Europe.

    Layout of the LEON2-FT chip, alias AT697

    All of them incorporate the new LEON2-FT chip, commercially known as the AT697. Engineered to operate within spacecraft computers, this microprocessor is manufactured by Atmel in France but originally designed by ESA.

    The underlying LEON design has also been made available to Europe’s space industry as the basis for company-owned ‘system-on-chip’ microprocessors optimised for dedicated tasks. For instance, Astrium is using it to create a space-based GPS/Galileo satnav receiver.

    LEON2-FT chip within Proba-2’s computer

    “Independence from non-European parts is also a driver of our European Components Initiative, in place for the last decade, which is working with European industry to bring new components to market.”

    See the full article here.

    The European Space Agency (ESA), established in 1975, is an intergovernmental organization dedicated to the exploration of space, currently with 19 member states. Headquartered in Paris, ESA has a staff of more than 2,000. ESA’s spaceflight program includes human spaceflight, mainly through participation in the International Space Station program; the launch and operation of unmanned exploration missions to other planets and the Moon; Earth observation, science and telecommunication missions; the design of launch vehicles; and the operation of a major spaceport, the Guiana Space Centre at Kourou, French Guiana. ESA science missions are based at ESTEC in Noordwijk, Netherlands; Earth observation missions at ESRIN in Frascati, Italy; ESA Mission Control (ESOC) in Darmstadt, Germany; the European Astronaut Centre (EAC), which trains astronauts for future missions, in Cologne, Germany; and the European Space Astronomy Centre in Villanueva de la Cañada, Spain.


    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 11:37 am on January 3, 2013 Permalink | Reply
    Tags: Computation

    From Berkeley Lab: “How Computers Push on the Molecules They Simulate” 


    Berkeley Lab

    Berkeley Lab bioscientists and their colleagues decipher a far-reaching problem in computer simulations

    January 03, 2013
    Paul Preuss

    Because modern computers depict the real world with digital representations of numbers rather than physical analogues, they can only simulate the continuous passage of time by digitizing it into small slices. This kind of simulation is essential in disciplines ranging from medical and biological research to new materials and fundamental questions of quantum mechanics, and the fact that it inevitably introduces errors is an ongoing problem for scientists.
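    A toy example makes the point, though it has nothing to do with the water-box simulations in the study itself: stepping a simple harmonic oscillator forward with finite time slices and comparing against the exact solution shows how the size of the slice alone changes the answer. The Python sketch below uses the crude explicit Euler scheme purely because it exposes the error clearly.

      import math

      def simulate_oscillator(dt, t_end=10.0):
          # Unit-mass, unit-frequency harmonic oscillator stepped forward
          # with the explicit Euler scheme -- the simplest way of slicing
          # continuous time into discrete steps.
          x, v = 1.0, 0.0
          for _ in range(int(round(t_end / dt))):
              x, v = x + dt * v, v - dt * x
          return x

      exact = math.cos(10.0)   # the analytic position at t = 10
      for dt in (0.1, 0.01, 0.001):
          error = abs(simulate_oscillator(dt) - exact)
          print(f"time step {dt:6.3f}  ->  error in x(10) = {error:.4f}")

    The error shrinks as the time step shrinks, but it never vanishes for any finite step, which is exactly the kind of artifact the Berkeley Lab work sets out to characterize.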

    Dynamic computer simulations of molecular systems depend on finite time steps, but these introduce apparent extra work that pushes the molecules around. Using models of water molecules in a box, researchers have learned to separate this shadow work from the protocol work explicitly modeled in the simulations. No image credit.

    Scientists at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) have now identified and characterized the source of tenacious errors and come up with a way to separate the realistic aspects of a simulation from the artifacts of the computer method. The research was done by David Sivak and his advisor Gavin Crooks in Berkeley Lab’s Physical Biosciences Division and John Chodera, a colleague at the California Institute of Quantitative Biosciences (QB3) at the University of California at Berkeley. The three report their results in Physical Review X.

    See the full and very informative article here.
    A U.S. Department of Energy National Laboratory Operated by the University of California



    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 10:18 am on August 2, 2012 Permalink | Reply
    Tags: Computation

    From Berkeley Lab: “A Direct Look at Graphene” 


    Berkeley Lab

    August 01, 2012
    Lynn Yarris

    Perhaps no other material is generating as much excitement in the electronics world as graphene, sheets of pure carbon just one atom thick through which electrons can race at nearly the speed of light – 100 times faster than they move through silicon.

    This zoom-in STM topograph shows one of the cobalt trimers placed on graphene for the creation of Coulomb potentials – charged impurities – to which electrons and holes could respond. (Image courtesy of Crommie group)

    Superthin, superstrong, superflexible and superfast as an electrical conductor, graphene has been touted as a potential wonder material for a host of electronic applications, starting with ultrafast transistors. For the vast potential of graphene to be fully realized, however, scientists must first learn more about what makes graphene so super. The latest step in this direction has been taken by researchers with the U.S. Department of Energy (DOE)’s Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of California (UC) Berkeley.

    Michael Crommie, a physicist who holds joint appointments with Berkeley Lab’s Materials Sciences Division and UC Berkeley’s Physics Department, led a study that recorded the first direct observations, at microscopic length scales, of how electrons and holes respond to a charged impurity – a single Coulomb potential – placed on a gated graphene device. The results provide experimental support for the theory that interactions between electrons are critical to graphene’s extraordinary properties.

    Michael Crommie is a physicist who holds joint appointments with Berkeley Lab’s Materials Sciences Division and UC Berkeley’s Physics Department. (Photo by Roy Kaltschmidt)

    See the full article here.

    A U.S. Department of Energy National Laboratory Operated by the University of California


     
  • richardmitnick 3:42 pm on October 27, 2010 Permalink | Reply
    Tags: Computation

    Sandia computational researchers awarded $2.6 million in grants from DOE Office of Science 


    October 27, 2010

    ALBUQUERQUE, N.M. — Four Sandia researchers have been awarded three-year grants totaling $2.6 million from the Department of Energy’s Office of Science to pursue computational research proposals that would help create an exascale computer.

    An exascale computer would be 1,000 times faster than a petascale computer, the fastest now available, which operates at a quadrillion operations per second.
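    In round numbers, “peta” means 10^15 and “exa” means 10^18, so the jump is a factor of a thousand. A quick back-of-the-envelope check in Python (the workload figure is a made-up example):

      peta = 1e15   # petascale: roughly a quadrillion operations per second
      exa = 1e18    # exascale: a quintillion operations per second

      workload = 3.6e21   # hypothetical job: 3.6 sextillion operations
      print(workload / peta / 3600, "hours at petascale")   # -> 1000.0 hours
      print(workload / exa / 3600, "hours at exascale")     # -> 1.0 hour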

    The faster technology would be useful in computation-intensive areas that include basic research, engineering, earth science, biology, materials science, energy issues and national security.

    The winners are:

    * Robert Armstrong, $834,000 for his project, “COMPOSE-HPC: Software Composition for Extreme Scale Computational Science and Engineering,” in the category of X-Stack Software Research. (HPC is an abbreviation for high-performance computing.) X-Stack refers to the scientific software stack that supports extreme-scale scientific computing, from operating systems to development environments.
    * Kenneth Moreland, $598,406 for “A Pervasive Parallel Processing Framework for Data Visualization and Analysis at Extreme Scale.” Current visualization systems rely on a coarse partitioning of data for their units to operate in parallel and thus cannot scale to an exascale system. The intent of this project is to design a better programming paradigm able to guide visualization algorithms in all parts of the algorithm and its implementation.
    * Ronald Minnich, $615,000 for “A Fault-Oblivious Extreme-Scale Execution Environment,” in the category of X-Stack Software Research. The Fault-Oblivious X-stack project is led by Sandia and includes Los Alamos National Laboratory, Pacific Northwest National Laboratory, Ohio State University, Boston University, IBM, and Bell Labs. The goal is to build an HPC system software stack that runs correctly even as faults occur.
    * Kevin Pedretti, $572,500 for “Enabling Exascale Hardware and Software Design through Scalable System Virtualization,” in the category of X-Stack Software Research. The objective of this project is to apply scalable system virtualization techniques to enable the wide range of innovation necessary to realize productive exascale computing. In addition to Sandia, the project involves researchers at the University of New Mexico, Northwestern University and Oak Ridge National Laboratory.

    Sandia National Laboratories is a multiprogram laboratory operated and managed by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy’s National Nuclear Security Administration. With main facilities in Albuquerque, N.M., and Livermore, Calif., Sandia has major R&D responsibilities in national security, energy and environmental technologies, and economic competitiveness.

     