Tagged: ORNL

  • richardmitnick 3:19 pm on May 9, 2017 Permalink | Reply
    Tags: $3.9 Million to Help Industry Address High Performance Computing Challenges, ORNL

    From ORNL via energy.gov: “Energy Department Announces $3.9 Million to Help Industry Address High Performance Computing Challenges” 

    Oak Ridge National Laboratory

    ENERGY.GOV

    May 8, 2017
    Today, the U.S. Department of Energy announced nearly $3.9 million for 13 projects designed to stimulate the use of high performance supercomputing in U.S. manufacturing. The Office of Energy Efficiency and Renewable Energy (EERE) Advanced Manufacturing Office’s High Performance Computing for Manufacturing (HPC4Mfg) program enables innovation in U.S. manufacturing through the adoption of high performance computing (HPC) to advance applied science and technology relevant to manufacturing. HPC4Mfg aims to increase the energy efficiency of manufacturing processes, advance energy technology, and reduce energy’s impact on the environment through innovation.

    The 13 new project partnerships will apply the world-class computing resources and expertise of the national laboratories, including Lawrence Livermore National Laboratory, Oak Ridge National Laboratory, Lawrence Berkeley National Laboratory, National Renewable Energy Laboratory, and Argonne National Laboratory. These projects, proposed in partnership with companies, will address key challenges in U.S. manufacturing and improve energy efficiency across the manufacturing industry through applied research and development of energy technologies.

    Each of the 13 newly selected projects will receive up to $300,000 to support work performed by the national lab partners and allow the partners to use HPC compute cycles.

    The 13 projects selected for awards are led by:

    7AC Technologies
    8 Rivers Capital
    Applied Materials, Inc.
    Arconic Inc.*
    Ford Motor Company
    General Electric Global Research Center*
    LanzaTech
    Samsung Semiconductor, Inc.
    Sierra Energy
    The Timken Company
    United Technologies Research Corporation

    *Awarded two projects

    Read more about the individual projects.

    The Advanced Manufacturing Office (AMO) recently published a draft of its Multi-year Program Plan that identifies the technology, research and development, outreach, and crosscutting activities that AMO plans to focus on over the next five years. Some of the technical focus areas in the plan align with the high-priority, energy-related manufacturing activities that the HPC4Mfg program also aims to address.

    Led by Lawrence Livermore National Laboratory, with Lawrence Berkeley National Laboratory and Oak Ridge National Laboratory as strong partners, the HPC4Mfg program has a diverse portfolio of small and large companies, consortiums, and institutes within varying industry sectors that span the country. Established in 2015, it currently supports 28 projects that range from improved turbine blades for aircraft engines and reduced heat loss in electronics, to steel-mill energy efficiency and improved fiberglass production.

    ORNL Cray XK7 Titan Supercomputer

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.

     
  • richardmitnick 10:08 am on April 26, 2017 Permalink | Reply
    Tags: Building the Bridge to Exascale, ECP - Exascale Computing Project, ORNL

    From OLCF at ORNL: “Building the Bridge to Exascale” 

    Oak Ridge National Laboratory

    OLCF

    April 18, 2017 [Where was this hiding?]
    Katie Elyce Jones

    Building an exascale computer—a machine that could solve complex science problems at least 50 times faster than today’s leading supercomputers—is a national effort.

    To oversee the rapid research and development (R&D) of an exascale system by 2023, the US Department of Energy (DOE) created the Exascale Computing Project (ECP) last year. The project brings together experts in high-performance computing from six DOE laboratories with the nation’s most powerful supercomputers—including Oak Ridge, Argonne, Lawrence Berkeley, Lawrence Livermore, Los Alamos, and Sandia—and project members work closely with computing facility staff from the member laboratories.

    ORNL IBM Summit supercomputer depiction.

    At the Exascale Computing Project’s (ECP’s) annual meeting in February 2017, Oak Ridge Leadership Computing Facility (OLCF) staff discussed OLCF resources that could be leveraged for ECP research and development, including the facility’s next flagship supercomputer, Summit, expected to go online in 2018.

    At the first ECP annual meeting, held January 29–February 3 in Knoxville, Tennessee, about 450 project members convened to discuss collaboration in breakout sessions focused on project organization and upcoming R&D milestones for applications, software, hardware, and exascale systems focus areas. During facility-focused sessions, senior staff from the Oak Ridge Leadership Computing Facility (OLCF) met with ECP members to discuss opportunities for the project to use current petascale supercomputers, test beds, prototypes, and other facility resources for exascale R&D. The OLCF is a DOE Office of Science User Facility located at DOE’s Oak Ridge National Laboratory (ORNL).

    “The ECP’s fundamental responsibilities are to provide R&D to build exascale machines more efficiently and to prepare the applications and software that will run on them,” said OLCF Deputy Project Director Justin Whitt. “The facilities’ responsibilities are to acquire, deploy, and operate the machines. We are currently putting advanced test beds and prototypes in place to evaluate technologies and enable R&D efforts like those in the ECP.”

    ORNL has a unique connection to the ECP. The Tennessee-based laboratory is the location of the project office that manages collaboration within the ECP and among its facility partners. ORNL’s Laboratory Director Thom Mason delivered the opening talk at the conference, highlighting the need for coordination in a project of this scope.

    On behalf of facility staff, Mark Fahey, director of operations at the Argonne Leadership Computing Facility, presented the latest delivery and deployment plans for upcoming computing resources during a plenary session. From the OLCF, Project Director Buddy Bland and Director of Science Jack Wells provided a timeline for the availability of Summit, OLCF’s next petascale supercomputer, which is expected to go online in 2018; it will be at least 5 times more powerful than the OLCF’s 27-petaflop Titan supercomputer.

    ORNL Cray XK7 Titan Supercomputer.

    “Exascale hardware won’t be around for several more years,” Wells said. “The ECP will need access to Titan, Summit, and other leadership computers to do the work that gets us to exascale.”

    Wells said he was able to highlight the spring 2017 call for Innovative and Novel Computational Impact on Theory and Experiment, or INCITE, proposals, which will give 2-year projects the first opportunity for computing time on Summit. OLCF staff also introduced a handful of computing architecture test beds—including the developmental environment for Summit known as Summitdev, NVIDIA’s deep learning and accelerated analytics system DGX-1, an experimental cluster of ARM 64-bit compute nodes, and a Cray XC40 cluster of 168 nodes known as Percival—that are now available for OLCF users.

    In addition to leveraging facility resources for R&D, the ECP must understand the future needs of facilities to design an exascale system that is ready for rigorous computational science simulations. Facilities staff can offer insight about the level of performance researchers will expect from science applications on exascale systems and estimate the amount of space and electrical power that will be available in the 2023 timeframe.

    “Getting to capable exascale systems will require careful coordination between the ECP and the user facilities,” Whitt said.

    One important collaboration so far was the development of a request for information, or RFI, for exascale R&D that the ECP released in February to industry vendors. The RFI enables the ECP to evaluate potential software and hardware technologies for exascale systems—a step in the R&D process that facilities often undertake. Facilities will later release requests for proposals when they are ready to begin building exascale systems.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.

    The Oak Ridge Leadership Computing Facility (OLCF) was established at Oak Ridge National Laboratory in 2004 with the mission of accelerating scientific discovery and engineering progress by providing outstanding computing and data management resources to high-priority research and development projects.

    ORNL’s supercomputing program has grown from humble beginnings to deliver some of the most powerful systems in the world. On the way, it has helped researchers deliver practical breakthroughs and new scientific knowledge in climate, materials, nuclear science, and a wide range of other disciplines.

    The OLCF delivered on that original promise in 2008, when its Cray XT “Jaguar” system ran the first scientific applications to exceed 1,000 trillion calculations a second (1 petaflop). Since then, the OLCF has continued to expand the limits of computing power, unveiling Titan in 2013, which is capable of 27 petaflops.


    ORNL Cray XK7 Titan Supercomputer

    Titan is one of the first hybrid-architecture systems—a combination of graphics processing units (GPUs) and the more conventional central processing units (CPUs) that have served as number crunchers in computers for decades. The parallel structure of GPUs makes them uniquely suited to processing an enormous number of simple computations quickly, while CPUs are capable of tackling more sophisticated computational algorithms. The complementary combination of CPUs and GPUs allows Titan to reach its peak performance.

    The OLCF gives the world’s most advanced computational researchers an opportunity to tackle problems that would be unthinkable on other systems. The facility welcomes investigators from universities, government agencies, and industry who are prepared to perform breakthrough research in climate, materials, alternative energy sources and energy storage, chemistry, nuclear physics, astrophysics, quantum mechanics, and the gamut of scientific inquiry. Because it is a unique resource, the OLCF focuses on the most ambitious research projects—projects that provide important new knowledge or enable important new technologies.

     
  • richardmitnick 10:31 am on March 29, 2017 Permalink | Reply
    Tags: A Seismic Mapping Milestone, ORNL

    From ORNL: “A Seismic Mapping Milestone” 

    Oak Ridge National Laboratory

    March 28, 2017

    Jonathan Hines
    hinesjd@ornl.gov
    865.574.6944

    This visualization is the first global tomographic model constructed based on adjoint tomography, an iterative full-waveform inversion technique. The model is a result of data from 253 earthquakes and 15 conjugate gradient iterations with transverse isotropy confined to the upper mantle. Credit: David Pugmire, ORNL

    When an earthquake strikes, the release of energy creates seismic waves that often wreak havoc for life at the surface. Those same waves, however, present an opportunity for scientists to peer into the subsurface by measuring vibrations passing through the Earth.

    Using advanced modeling and simulation, seismic data generated by earthquakes, and one of the world’s fastest supercomputers, a team led by Jeroen Tromp of Princeton University is creating a detailed 3-D picture of Earth’s interior. Currently, the team is focused on imaging the entire globe from the surface to the core–mantle boundary, a depth of 1,800 miles.

    These high-fidelity simulations add context to ongoing debates related to Earth’s geologic history and dynamics, bringing prominent features like tectonic plates, magma plumes, and hotspots into view. In September 2016, the team published a paper in Geophysical Journal International on its first-generation global model. Created using data from 253 earthquakes captured by seismograms scattered around the world, the team’s model is notable for its global scope and high scalability.

    “This is the first global seismic model where no approximations—other than the chosen numerical method—were used to simulate how seismic waves travel through the Earth and how they sense heterogeneities,” said Ebru Bozdag, a coprincipal investigator of the project and an assistant professor of geophysics at the University of Nice Sophia Antipolis. “That’s a milestone for the seismology community. For the first time, we showed people the value and feasibility of running these kinds of tools for global seismic imaging.”

    The project’s genesis can be traced to a seismic imaging theory first proposed in the 1980s. To fill in gaps within seismic data maps, the theory posited a method called adjoint tomography, an iterative full-waveform inversion technique. This technique leverages more information than competing methods, using forward waves that travel from the quake’s origin to the seismic receiver and adjoint waves, which are mathematically derived waves that travel from the receiver to the quake.

    The problem with testing this theory? “You need really big computers to do this,” Bozdag said, “because both forward and adjoint wave simulations are performed in 3-D numerically.”
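
    To make the forward/adjoint pairing concrete, here is a small, self-contained sketch in C (an illustration under stated assumptions, not the team's actual solver). The "Earth model" is a short vector, the forward simulation is a toy linear operator standing in for a full 3-D wave solve, and the adjoint simulation is its transpose, which converts data residuals at the receivers into a gradient for updating the model:

    /* Toy adjoint-style inversion: illustrative only.
     * Real adjoint tomography replaces forward() and adjoint()
     * with full 3-D seismic wave simulations run per earthquake. */
    #include <stdio.h>

    #define NM 3  /* model parameters (stand-in for wave speeds) */
    #define ND 4  /* data samples (stand-in for seismograms) */

    static const double G[ND][NM] = { {1,2,0}, {0,1,1}, {2,0,1}, {1,1,1} };

    static void forward(const double m[NM], double d[ND]) {  /* source -> receivers */
        for (int i = 0; i < ND; i++) {
            d[i] = 0.0;
            for (int j = 0; j < NM; j++) d[i] += G[i][j] * m[j];
        }
    }

    static void adjoint(const double r[ND], double g[NM]) {  /* receivers -> model */
        for (int j = 0; j < NM; j++) {
            g[j] = 0.0;
            for (int i = 0; i < ND; i++) g[j] += G[i][j] * r[i];
        }
    }

    int main(void) {
        const double m_true[NM] = {1.0, -2.0, 0.5};
        double d_obs[ND], m[NM] = {0.0, 0.0, 0.0};
        forward(m_true, d_obs);                /* synthetic "recorded" seismograms */

        for (int it = 0; it < 200; it++) {     /* iterative inversion loop */
            double d_syn[ND], r[ND], g[NM], misfit = 0.0;
            forward(m, d_syn);                 /* forward simulation */
            for (int i = 0; i < ND; i++) {
                r[i] = d_syn[i] - d_obs[i];    /* residual at each receiver */
                misfit += 0.5 * r[i] * r[i];
            }
            adjoint(r, g);                     /* adjoint simulation yields gradient */
            for (int j = 0; j < NM; j++) m[j] -= 0.05 * g[j];  /* model update */
            if (it % 50 == 0) printf("iteration %d, misfit %.6f\n", it, misfit);
        }
        printf("recovered model: %.3f %.3f %.3f\n", m[0], m[1], m[2]);
        return 0;
    }

    In the real workflow, each of the 253 earthquakes contributes its own forward and adjoint solves on every iteration, which is exactly why the "really big computers" Bozdag mentions are unavoidable.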

    In 2012, just such a machine arrived in the form of the Titan supercomputer, a 27-petaflop Cray XK7 managed by the US Department of Energy’s (DOE’s) Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility located at Oak Ridge National Laboratory.


    ORNL Cray XK7 Titan Supercomputer

    After trying out its method on smaller machines, Tromp’s team gained access to Titan in 2013. Working with OLCF staff, the team continues to push the limits of computational seismology to deeper depths.

    Stitching Together Seismic Slices

    As quake-induced seismic waves travel, seismograms can detect variations in their speed. These changes provide clues about the composition, density, and temperature of the medium the wave is passing through. For example, waves move slower when passing through hot magma, such as mantle plumes and hotspots, than they do when passing through colder subduction zones, locations where one tectonic plate slides beneath another.

    Each seismogram represents a narrow slice of the planet’s interior. By stitching many seismograms together, researchers can produce a 3-D global image, capturing everything from magma plumes feeding the Ring of Fire, to Yellowstone’s hotspots, to subducted plates under New Zealand.

    This process, called seismic tomography, works in a manner similar to imaging techniques employed in medicine, where 2-D x-ray images taken from many perspectives are combined to create 3-D images of areas inside the body.

    In the past, seismic tomography techniques have been limited in the amount of seismic data they can use. Traditional methods forced researchers to make approximations in their wave simulations and restrict observational data to major seismic phases only. Adjoint tomography based on 3-D numerical simulations employed by Tromp’s team isn’t constrained in this way. “We can use the entire data—anything and everything,” Bozdag said.

    Digging Deeper

    To improve its global model further, Tromp’s team is experimenting with model parameters on Titan. For example, the team’s second-generation model will introduce anisotropic inversions, which are calculations that better capture the differing orientations and movement of rock in the mantle. This new information should give scientists a clearer picture of mantle flow, composition, and crust–mantle interactions.

    Additionally, team members Dimitri Komatitsch of Aix-Marseille University in France and Daniel Peter of King Abdullah University of Science and Technology in Saudi Arabia are leading efforts to simulate higher-frequency seismic waves. This would allow the team to model finer details in the Earth’s mantle and even begin mapping the Earth’s core.

    To make this leap, Tromp’s team is preparing for Summit, the OLCF’s next-generation supercomputer.


    ORNL IBM Summit supercomputer depiction

    Set to arrive in 2018, Summit will provide at least five times the computing power of Titan. As part of the OLCF’s Center for Accelerated Application Readiness, Tromp’s team is working with OLCF staff to take advantage of Summit’s computing power upon arrival.

    “With Summit, we will be able to image the entire globe from crust all the way down to Earth’s center, including the core,” Bozdag said. “Our methods are expensive—we need a supercomputer to carry them out—but our results show that these expenses are justified, even necessary.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.

     
  • richardmitnick 10:20 am on October 22, 2016 Permalink | Reply
    Tags: From greenhouse gas to usable ethanol, ORNL

    From Science Node: “From greenhouse gas to usable ethanol” 

    Science Node

    19 Oct, 2016
    Morgan McCorkle

    ORNL scientists find a way to use nano-spike catalysts to convert carbon dioxide directly into ethanol.

    In a new twist to waste-to-fuel technology, scientists at the Department of Energy’s Oak Ridge National Laboratory (ORNL) have developed an electrochemical process that uses tiny spikes of carbon and copper to turn carbon dioxide, a greenhouse gas, into ethanol. Their finding, which involves nanofabrication and catalysis science, was serendipitous.


    Access mp4 video here.
    Serendipitous science. Looking to understand a chemical reaction, scientists accidentally discovered a method for converting combustion waste products into ethanol. The chance discovery may revolutionize the ability to use variable energy sources. Courtesy ORNL.

    “We discovered somewhat by accident that this material worked,” said ORNL’s Adam Rondinone, lead author of the team’s study published in ChemistrySelect. “We were trying to study the first step of a proposed reaction when we realized that the catalyst was doing the entire reaction on its own.”

    The team used a catalyst made of carbon, copper, and nitrogen and applied voltage to trigger a complicated chemical reaction that essentially reverses the combustion process. With the help of the nanotechnology-based catalyst, which contains multiple reaction sites, the solution of carbon dioxide dissolved in water turned into ethanol with a yield of 63 percent. Typically, this type of electrochemical reaction results in a mix of several different products in small amounts.

    “We’re taking carbon dioxide, a waste product of combustion, and we’re pushing that combustion reaction backwards with very high selectivity to a useful fuel,” Rondinone said. “Ethanol was a surprise — it’s extremely difficult to go straight from carbon dioxide to ethanol with a single catalyst.”
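
    For context, the net cathode half-reaction for reducing carbon dioxide to ethanol is commonly written as a 12-electron process (this is the standard textbook form, not an equation quoted from the ORNL paper):

    2CO2 + 12H+ + 12e- → C2H5OH + 3H2O

    Transferring 12 electrons and forming a carbon-carbon bond in a single pass is why a lone catalyst rarely reaches ethanol; most electrocatalysts stop at simpler two-electron products such as carbon monoxide or formate.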

    The catalyst’s novelty lies in its nanoscale structure, consisting of copper nanoparticles embedded in carbon spikes. This nano-texturing approach avoids the use of expensive or rare metals such as platinum that limit the economic viability of many catalysts.

    “By using common materials, but arranging them with nanotechnology, we figured out how to limit the side reactions and end up with the one thing that we want,” Rondinone said.

    The researchers’ initial analysis suggests that the spiky textured surface of the catalysts provides ample reactive sites to facilitate the carbon dioxide-to-ethanol conversion.

    “They are like 50-nanometer lightning rods that concentrate electrochemical reactivity at the tip of the spike,” Rondinone said.

    Given the technique’s reliance on low-cost materials and an ability to operate at room temperature in water, the researchers believe the approach could be scaled up for industrially relevant applications. For instance, the process could be used to store excess electricity generated from variable power sources such as wind and solar.

    “A process like this would allow you to consume extra electricity when it’s available to make and store as ethanol,” Rondinone said. “This could help to balance a grid supplied by intermittent renewable sources.”

    The researchers plan to refine their approach to improve the overall production rate and further study the catalyst’s properties and behavior.

    ORNL’s Yang Song, Rui Peng, Dale Hensley, Peter Bonnesen, Liangbo Liang, Zili Wu, Harry Meyer III, Miaofang Chi, Cheng Ma, Bobby Sumpter and Adam Rondinone are coauthors on the study.

    The work was supported by DOE’s Office of Science and used resources at ORNL’s Center for Nanophase Materials Sciences, which is a DOE Office of Science User Facility.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 11:58 am on October 12, 2016 Permalink | Reply
    Tags: "Oak Ridge Scientists Are Writing Code That Not Even The World's Fastest Computers Can Run (Yet), Department of Energy’s Exascale Computing Project, ORNL, , Summit supercomputer,   

    From ORNL via Nashville Public Radio: “Oak Ridge Scientists Are Writing Code That Not Even The World’s Fastest Computers Can Run (Yet)” 

    Oak Ridge National Laboratory

    Nashville Public Radio

    Oct 10, 2016
    Emily Siner

    The current supercomputer at Oak Ridge National Lab, Titan, will be replaced by what could be the fastest computer in the world, Summit — and even that won’t be fast enough for some of the programs being written at the lab. Oak Ridge National Laboratory, U.S. Dept. of Energy

    ORNL IBM Summit supercomputer depiction

    Scientists at Oak Ridge National Laboratory are starting to build applications for a supercomputer that might not go live for another seven years.

    The lab recently received more than $5 million from the Department of Energy to start developing several long-term projects.

    Thomas Evans’s research is among those funded, and it’s a daunting task: His team is trying to predict how small sections of particles inside a nuclear reactor will behave over a long period of time.

    The more precisely they can simulate nuclear reactors on a computer, the better engineers can build them in real life.

    “Analysts can use that [data] to design facilities, experiments and working engineering platforms,” Evans says.

    But these very elaborate simulations that Evans is creating take so much computing power that they cannot run on Oak Ridge’s current supercomputer, Titan — nor will they be able to run on the lab’s new supercomputer, Summit, which could be the fastest in the world when it goes live in two years.

    So Evans is thinking ahead, he says, “to ultimately harness the power of the next generation — technically two generations from now — of supercomputing.

    “And of course, the challenge is, that machine doesn’t exist yet.”

    The current estimate is that this exascale computer, as it’s called, will be several times faster than Summit and go live around 2023. And it could very well take that long for Evans’s team to write code for it.

    The machine won’t just be faster, Evans says. It’s also going to work in a totally new way, which changes how applications are written.

    “In other words, I can’t take a simulation code that we’ve been using now and just drop it in the new machine and expect it to work,” he says.

    The computer will not necessarily be housed at Oak Ridge, but Tennessee researchers are playing a major role in the Department of Energy’s Exascale Computing Project. In addition to Evans’ nuclear reactor project, scientists at Oak Ridge will be leading the development of two other applications, including one that will simulate complex 3D printing. They’ll also assist in developing nine other projects.

    Doug Kothe, who leads the lab’s exascale application development, says the goal is not just to think ahead to 2023. The code that the researchers write should be able to run on any supercomputer built in the next several decades, he says.

    Despite the difficulty, working on incredibly fast computers is also an exciting prospect, Kothe says.

    “For a lot of very inquisitive scientists who love challenges, it’s just a way cool toy that you can’t resist.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.

     
  • richardmitnick 8:54 am on October 6, 2016 Permalink | Reply
    Tags: Geothermal heat pump (GHP) technology, ORNL

    From ORNL: “Xiaobing Liu: Making geothermal heat pump technology a household name” 

    Oak Ridge National Laboratory

    October 5, 2016
    Bill Cabage, Communications
    cabagewh@ornl.gov
    865.574.4399

    ORNL researcher Xiaobing Liu works in the laboratory’s Building Technologies Research and Integration Center.

    As a boy growing up in China, Xiaobing Liu knew all about Oak Ridge and the World War II Manhattan Project. He had no idea that he would one day work at DOE’s Oak Ridge National Laboratory, the Secret City’s successor.

    Liu is a lead researcher in geothermal heat pump (GHP) technology, developing software and smart controls and performing characterization and modeling of GHPs at both the component and system levels. Accessing energy stored in the Earth’s crust to heat and cool buildings is a no-brainer to Liu. His years of research have made him confident, if not passionate, about GHPs as a viable source of clean, renewable energy.

    “Living in China, I saw how people made homes in the sides of hills—they’re called Yao Dong— and the temperature inside was always comfortable, both in summer and winter,” said Liu. “Visiting my grandpa at age six, I was aware of the comfort in these cave-like structures compared to the outside temperatures.”

    Perhaps his early exposure to these Chinese abodes is what fueled his drive to tackle geothermal energy in his career, so much so that his division director introduces him as the “evangelist of geothermal heat pumps.”

    After receiving his bachelor’s and master’s degrees in mechanical engineering at Tongji University in Shanghai, Liu came to the U.S. for his Ph.D. at Oklahoma State University, which is the epicenter of GHP technology. After earning his Ph.D., Liu went to work as the system engineering manager at ClimateMaster, which, at the time, was the largest GHP manufacturer in North America. At ClimateMaster, Liu developed software used in designing, analyzing, and optimizing GHP systems. The software is still in wide use today. He also helped design more than 30 GHP systems in the United States and other countries.

    In 2009, Liu joined the ORNL Building Technologies Research and Integration Center.

    “It was a difficult decision for me but very hard to refuse,” said Liu. “I had admired and respected the scientists at ORNL for a long time and was familiar with their work.” His boyhood knowledge of Oak Ridge’s history also enticed him to the laboratory.

    Liu was the technical lead of building equipment research in the US-China Clean Energy Research Center for Building Energy Efficiency (CERC BEE), a partnership that was just renewed for another five years. He also serves as the research chair for the International Ground Source Heat Pump Association (IGSHPA) and the research chair for the TC 6.8 Geothermal Heat Pump and Energy Recovery Applications technical committee of the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE).

    What would Liu like to see in GHP development if given more opportunities?

    “First, invest R&D in ground heat exchanger designs, installation, and drilling. If we can send people to the moon, we can make drilling cheap,” he says. “Geothermal systems need to be cheap and reliable.”

    Second, enable wider adoption of the GHP technology. Liu said people need to be able to easily and accurately measure the energy savings they achieve when using GHP.

    And third, Liu says the industry needs software tools and cost effective systems for performance monitoring, as well as real-time diagnosis and optimization, to ensure GHP systems are running at optimal performance at all times.

    Liu makes a strong case for conditioning buildings with GHP technology with one simple comparison. “We use a flame that burns at approximately 3,000°F to heat our homes when we only need 76–80°F. Why do we do that?”
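
    Liu’s flame comparison is at heart a point about thermodynamic headroom: a heat pump moves heat rather than making it, so its ideal heating coefficient of performance (COP) is bounded only by the Carnot limit. A back-of-envelope calculation in C (my illustrative numbers, not figures from the article) shows why mild ground temperatures are so favorable:

    /* Rough Carnot-limit estimate for a ground-source heat pump.
     * The temperatures here are assumptions for illustration only. */
    #include <stdio.h>

    static double f_to_kelvin(double f) { return (f - 32.0) / 1.8 + 273.15; }

    int main(void) {
        double t_indoor = f_to_kelvin(78.0); /* midpoint of the 76-80 F comfort range */
        double t_ground = f_to_kelvin(55.0); /* assumed year-round ground temperature */

        /* Ideal (Carnot) heating COP = T_hot / (T_hot - T_cold), absolute temps */
        double cop_ideal = t_indoor / (t_indoor - t_ground);
        printf("Ideal heating COP: %.1f\n", cop_ideal);  /* about 23 */
        return 0;
    }

    Real GHP systems achieve a COP of roughly 3 to 5, meaning several units of heat delivered per unit of electricity, whereas a flame can never deliver more than one unit of heat per unit of fuel energy, no matter how hot it burns.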

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.

     
  • richardmitnick 9:10 am on September 30, 2016 Permalink | Reply
    Tags: MAESTRO code for supercomputing, OLCF Team Resolves Performance Bottleneck in OpenACC Code, ORNL

    From ORNL: “OLCF Team Resolves Performance Bottleneck in OpenACC Code” 

    Oak Ridge National Laboratory

    Oak Ridge Leadership Computing Facility

    September 28, 2016
    Elizabeth Rosenthal

    By improving its MAESTRO code, a team led by Michael Zingale of Stony Brook University is modeling astrophysical phenomena with improved fidelity. Pictured above: a three-dimensional simulation of Type I x-ray bursts, recurring explosive events triggered by the buildup of hydrogen and helium on the surface of a neutron star.

    For any high-performance computing code, the goal is performance that is both effective and efficient: producing high-quality results while using as little power as possible. Performance bottlenecks can arise within these codes, however, hindering projects and forcing researchers to search for the underlying problem.

    A team at the Oak Ridge Leadership Computing Facility (OLCF), a US Department of Energy (DOE) Office of Science User Facility located at DOE’s Oak Ridge National Laboratory, recently addressed a performance bottleneck in one portion of an OLCF user’s application. Because of its efforts, the user’s team saw a sixfold performance improvement in the code. Team members for this project include Frank Winkler (OLCF), Oscar Hernandez (OLCF), Adam Jacobs (Stony Brook University), Jeff Larkin (NVIDIA), and Robert Dietrich (Dresden University of Technology).

    “If the code runs faster, then you need less power. Everything is better, more efficient,” said Winkler, performance tools specialist at the OLCF. “That’s why we have performance analysis tools.”

    Known as MAESTRO, the astrophysics code in question models the burning of exploding stars and other stellar phenomena. The code is accelerated using OpenACC, a directive-based approach meant to simplify programming for systems that combine CPUs and GPUs. The OLCF team worked specifically with the piece of the algorithm that models the physics of nuclear burning.

    Initially, that portion of MAESTRO did not perform as well as expected because the GPUs could not access the data quickly. To remedy the situation, the team used diagnostic analysis tools to discover the reason for the delay. Winkler explained that Score-P, a performance measurement tool, traces the application, whereas VAMPIR, a performance visualization tool, visualizes the trace file, allowing users to see a timeline of activity within the code.

    “When you trace the code, you record each significant event in sequence,” Winkler said.

    By analyzing the results, the team found that although data moving from CPUs to GPUs performed adequately, the code was significantly slower when sending data from GPUs back to CPUs. Larkin, an NVIDIA software engineer, suggested using a compiler flag—an option that changes how the compiler builds the code—to store data in a location more convenient for the GPUs, which resulted in the code’s dramatic speedup.
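
    The article does not show MAESTRO’s source, but the underlying pattern is easy to sketch. In OpenACC, an explicit data region controls when arrays move between CPU and GPU memory, and compilers of that era (e.g., PGI) also offered flags such as -ta=tesla:managed to place data in unified memory instead. A minimal C illustration, assuming nothing about MAESTRO itself:

    /* Illustrative OpenACC data-residency sketch; not MAESTRO code.
     * The data region keeps arrays on the GPU across all time steps,
     * so results travel back to the CPU once rather than every step.
     * Example build (PGI, ca. 2016): pgcc -acc -ta=tesla sketch.c */
    #include <stdio.h>
    #define N 1000000

    int main(void) {
        static double y[N], rate[N];
        for (int i = 0; i < N; i++) { y[i] = 1.0; rate[i] = 1.0e-3 * (i % 7); }

        /* copyin: host-to-device once; copy: device-to-host once at exit */
        #pragma acc data copyin(rate[0:N]) copy(y[0:N])
        for (int step = 0; step < 100; step++) {
            #pragma acc parallel loop present(y[0:N], rate[0:N])
            for (int i = 0; i < N; i++)
                y[i] += rate[i] * y[i];   /* stand-in for a nuclear-burning update */
        }

        printf("y[42] = %f\n", y[42]);
        return 0;
    }

    The slowdown the team diagnosed is the classic failure mode this pattern avoids: data bouncing between GPU and CPU on every step instead of staying resident on the device.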

    Jacobs, an astrophysicist working on a PhD at Stony Brook, brought the OpenACC code to the OLCF in June to get expert assistance. Jacobs is a member of a research group led by Michael Zingale, also of Stony Brook.

    During the week Jacobs spent at the OLCF, the team ran MAESTRO on the Titan supercomputer, the OLCF’s flagship hybrid system.

    ORNL Cray Titan Supercomputer

    By leveraging tools like Score-P and VAMPIR on this system, the team employed problem-solving skills and computational analysis to resolve the bottleneck—and did so after just a week of working with the code. Both Winkler and Jacobs stressed that their rapid success depended on collaboration; the individuals involved, as well as the OLCF, provided the necessary knowledge and resources to reach a mutually beneficial outcome.

    “We are working with technology in a way that was not possible a year ago,” Jacobs said. “I am so grateful that the OLCF hosted me and gave me their time and experience.”

    Because of these improvements, the MAESTRO code can run the latest nuclear burning models faster and perform higher-level physics than before—capabilities that are vital to computational astrophysicists’ investigation of astronomical events like supernovas and x-ray bursts.

    “There are two main benefits to this performance improvement,” Jacobs said. “First, your code is now getting to a solution faster, and second, you can now spend a similar amount of time working on something much more complicated.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.

     
  • richardmitnick 3:05 pm on September 20, 2016 Permalink | Reply
    Tags: Innovation Crossroads accelerator, ORNL

    From ORNL: “ORNL launches new accelerator for energy tech entrepreneurs” 

    Oak Ridge National Laboratory

    September 20, 2016
    Morgan McCorkle, Communications
    mccorkleml@ornl.gov
    865.574.7308

    The nation’s top innovators will soon have the opportunity to advance their promising energy technology ideas at the Department of Energy’s (DOE’s) Oak Ridge National Laboratory (ORNL) in a new program called Innovation Crossroads. Up to five entrepreneurs will receive a fellowship that covers living costs, benefits and a travel stipend for up to two years, plus up to $350,000 to use on collaborative research and development at ORNL. The first cohort is expected to start the program in early 2017.

    A growing global population and increased industrialization require new approaches to energy that are reliable, affordable and carbon neutral. While important progress has been made in cost reduction and deployment of clean energy technologies, a new program at DOE’s Office of Energy Efficiency and Renewable Energy (EERE) will invest in the next generation of first-time clean energy entrepreneurs to accelerate the pace of innovation.

    Innovation Crossroads is the most recent clean energy accelerator to launch at a DOE national laboratory and the first located in the Southeast. ORNL is the nation’s largest science and energy laboratory, with expertise and resources in clean energy, computing, neutron science, advanced materials, and nuclear science.

    “There is a huge opportunity and need to develop an emerging American energy ecosystem where cleantech entrepreneurs can thrive,” said Mark Johnson, director of EERE’s Advanced Manufacturing Office (AMO). “This program gives the next generation of clean energy innovators a chance to make a transformative impact on the way we generate, process and use our energy resources. Innovation Crossroads will play an important role in strengthening the Southeast region’s entrepreneurial ecosystem.”

    Based on ORNL’s main campus, Innovation Crossroads will give entrepreneurs access to ORNL’s world-class research talent and DOE facilities including the Manufacturing Demonstration Facility, the National Transportation Research Center, the Oak Ridge Leadership Computing Facility and the Spallation Neutron Source. Through a partnership with mentor organizations in the Southeast, participants will also receive assistance with developing business strategies, conducting market research, and finding long-term financing and commercial partners.

    “ORNL has an excellent reputation for collaborating with industry and moving innovation to the commercial marketplace,” said ORNL Director Thom Mason. “We look forward to expanding our focus to include clean energy entrepreneurship. We recognize that growing new energy technology companies is not easy: entrepreneurs need to develop and validate technologies, build prototypes, secure customers, and raise several rounds of capital. Support from Innovation Crossroads can significantly improve the prospects for promising new energy ventures.”

    Innovation Crossroads is part of EERE’s Lab-Embedded Entrepreneurship Program (LEEP), sponsored by EERE’s Advanced Manufacturing Office (AMO) and co-managed by EERE’s Technology-to-Market Program. LEEP includes Lawrence Berkeley National Laboratory’s Cyclotron Road and Chain Reaction Innovations, which launched at Argonne National Laboratory earlier this year. Innovation Crossroads will be led by Tom Rogers, ORNL Director of Industrial Partnerships and Economic Development.

    “LEEP is a new model for energy R&D,” said Johanna Wolfson, director of EERE’s Technology-to-Market Program. “The combination of having top technical talent embedded in a world-class R&D facility, and maintaining a laser focus on entrepreneurial endeavors is creating a new generation of energy entrepreneurs working to bring really challenging solutions to fruition.”

    Interested entrepreneurs can learn about Innovation Crossroads at innovationcrossroads.ornl.gov and submit a pre-application.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.

     
  • richardmitnick 5:44 am on September 10, 2016 Permalink | Reply
    Tags: Electron beam microscope directly writes nanoscale features in liquid with metal ink, ORNL

    From ORNL: “Electron beam microscope directly writes nanoscale features in liquid with metal ink” 

    Oak Ridge National Laboratory

    September 9, 2016
    Dawn Levy, Communications
    levyd@ornl.gov
    865.576.6448

    To direct-write the logo of the Department of Energy’s Oak Ridge National Laboratory, scientists started with a gray-scale image. They used the electron beam of an aberration-corrected scanning transmission electron microscope to induce palladium from a solution to deposit as nanocrystals. Image credit: Oak Ridge National Laboratory, U.S. Dept. of Energy.

    Scientists at the Department of Energy’s Oak Ridge National Laboratory are the first to harness a scanning transmission electron microscope (STEM) to directly write tiny patterns in metallic “ink,” forming features in liquid that are finer than half the width of a human hair.

    The automated process is controlled by weaving a STEM instrument’s electron beam through a liquid-filled cell to spur deposition of metal onto a silicon microchip. The patterns created are “nanoscale,” or on the size scale of atoms or molecules.

    Usually fabrication of nanoscale patterns requires lithography, which employs masks to prevent material from accumulating on protected areas. ORNL’s new direct-write technology is like lithography without the mask.

    Details of this unique capability are published online in Nanoscale, a journal of the Royal Society of Chemistry, and researchers are applying for a patent. The technique may provide a new way to tailor devices for electronics and other applications.

    “We can now deposit high-purity metals at specific sites to build structures, with tailored material properties for a specific application,” said lead author Raymond Unocic of the Center for Nanophase Materials Sciences (CNMS), a DOE Office of Science User Facility at ORNL. “We can customize architectures and chemistries. We’re only limited by systems that are dissolvable in the liquid and can undergo chemical reactions.”

    The experimenters used grayscale images to create nanoscale templates. Then they beamed electrons into a cell filled with a solution containing palladium chloride. Pure palladium separated out and deposited wherever the electron beam passed.

    Liquid environments are a must for chemistry. Researchers first needed a way to encapsulate the liquid so the extreme dryness of the vacuum inside the microscope would not evaporate the liquid. The researchers started with a cell made of microchips with a silicon nitride membrane to serve as a window through which the electron beam could pass.

    Then they needed to elicit a new capability from a STEM instrument. “It’s one thing to utilize a microscope for imaging and spectroscopy. It’s another to take control of that microscope to perform controlled and site-specific nanoscale chemical reactions,” Unocic said. “With other techniques for electron-beam lithography, there are ways to interface that microscope where you can control the beam. But this isn’t the way that aberration-corrected scanning transmission electron microscopes are set up.”

    Enter Stephen Jesse, leader of CNMS’s Directed Nanoscale Transformations theme. This group looks at tools that scientists use to see and understand matter and its nanoscale properties in a new light, and explores whether those tools can also transform matter one atom at a time and build structures with specified functions. “Think of what we are doing as working in nanoscale laboratories,” Jesse said. “This means being able to induce and stop reactions at will, as well as monitor them while they are happening.”

    Jesse had recently developed a system that serves as an interface between a nanolithography pattern and a STEM’s scan coils, and ORNL researchers had already used it to selectively transform solids. The microscope focuses the electron beam to a fine point, which microscopists could move just by taking control of the scan coils. Unocic, with Andrew Lupini, Albina Borisevich and Sergei Kalinin, integrated Jesse’s scan control/nanolithography system within the microscope so that they could control the beam entering the liquid cell. David Cullen performed subsequent chemical analysis.

    “This beam-induced nanolithography relies critically on controlling chemical reactions in nanoscale volumes with a beam of energetic electrons,” said Jesse. The system controls electron-beam position, speed and dose. The dose—how many electrons are being pumped into the system—governs how fast chemicals are transformed.
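
    The pipeline the researchers describe, a grayscale image in and site-specific deposition out, amounts to mapping pixel intensity to electron dose. Here is a hedged sketch in C of what such a rasterizer could look like (the names and the linear intensity-to-dwell-time mapping are my assumptions, not ORNL’s control software):

    /* Illustrative grayscale-to-dose rasterizer; not ORNL's system.
     * Brighter pixel -> longer beam dwell -> larger electron dose ->
     * more palladium deposited at that spot. */
    #include <stdio.h>

    #define W 4
    #define H 4

    int main(void) {
        /* Toy 4x4 grayscale template, 0-255 (0 = no deposition) */
        const unsigned char image[H][W] = {
            {  0, 128, 128,   0},
            {128, 255, 255, 128},
            {128, 255, 255, 128},
            {  0, 128, 128,   0},
        };
        const double max_dwell_us = 50.0; /* assumed dwell time at full dose */
        const double pitch_nm     = 40.0; /* pixel size reported for the technique */

        /* Raster the beam: emit (x, y, dwell) commands for the scan coils */
        for (int row = 0; row < H; row++)
            for (int col = 0; col < W; col++) {
                double dwell = max_dwell_us * image[row][col] / 255.0;
                if (dwell > 0.0)
                    printf("move %6.1f nm, %6.1f nm   dwell %5.1f us\n",
                           col * pitch_nm, row * pitch_nm, dwell);
            }
        return 0;
    }

    In the real instrument, the delivered dose also depends on beam current and scan speed, which is why the system controls position, speed and dose together.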

    This nanoscale technology is similar to larger-scale activities, such as using electron beams to transform materials for 3D printing at ORNL’s Manufacturing Demonstration Facility. In that case, an electron beam melts powder so that it solidifies, layer by layer, to create an object.

    “We’re essentially doing the same thing, but within a liquid,” Unocic said. “Now we can create structures from a liquid-phase precursor solution in the shape that we want and the chemistry that we want, tuning the physiochemical properties for a given application.”

    Precise control of the beam position and the electron dose produces tailored architectures. Encapsulating different liquids and sequentially flowing them during patterning customizes the chemistry too.

    The current resolution of metallic “pixels” the liquid ink can direct-write is 40 nanometers, about half the width of an influenza virus. In future work, Unocic and colleagues would like to push the resolution down to approach the state of the art of conventional nanolithography, 10 nanometers. They would also like to fabricate multi-component structures.

    The title of the paper is “Direct-write liquid phase transformations with a scanning transmission electron microscope.”

    This research was conducted at the Center for Nanophase Materials Sciences, a DOE Office of Science User Facility at ORNL. The DOE Office of Science supported the work. ORNL Laboratory Directed Research and Development funds supported a portion of the work.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.

     
  • richardmitnick 3:33 pm on August 16, 2016 Permalink | Reply
    Tags: Energy Department to invest $16 million in computer design of materials, ORNL

    From ORNL: “Energy Department to invest $16 million in computer design of materials” 

    Oak Ridge National Laboratory

    August 16, 2016
    Dawn Levy, Communications
    levyd@ornl.gov
    865.576.6448

    Paul Kent of Oak Ridge National Laboratory directs the Center for Predictive Simulation of Functional Materials.

    The U.S. Department of Energy announced today that it will invest $16 million over the next four years to accelerate the design of new materials through use of supercomputers.

    Two four-year projects—one team led by DOE’s Oak Ridge National Laboratory (ORNL), the other team led by DOE’s Lawrence Berkeley National Laboratory (LBNL)—will take advantage of superfast computers at DOE national laboratories by developing software to design fundamentally new functional materials destined to revolutionize applications in alternative and renewable energy, electronics, and a wide range of other fields. The research teams include experts from universities and other national labs.

    The new grants—part of DOE’s Computational Materials Sciences (CMS) program begun in 2015 as part of the U.S. Materials Genome Initiative—reflect the enormous recent growth in computing power and the increasing capability of high-performance computers to model and simulate the behavior of matter at the atomic and molecular scales.

    The teams are expected to develop sophisticated and user-friendly open-source software that captures the essential physics of relevant systems and can be used by the broader research community and by industry to accelerate the design of new functional materials.

    “Given the importance of materials to virtually all technologies, computational materials science is a critical area in which the United States needs to be competitive in the twenty-first century and beyond through global leadership in innovation,” said Cherry Murray, director of DOE’s Office of Science, which is funding the research. “These projects will both harness DOE existing high-performance computing capabilities and help pave the way toward ever-more sophisticated software for future generations of machines.”

    “ORNL researchers will partner with scientists from national labs and universities to develop software to accurately predict the properties of quantum materials with novel magnetism, optical properties and exotic quantum phases that make them well-suited to energy applications,” said Paul Kent of ORNL, director of the Center for Predictive Simulation of Functional Materials, which includes partners from Argonne, Lawrence Livermore, Oak Ridge and Sandia National Laboratories and North Carolina State University and the University of California–Berkeley. “Our simulations will rely on current petascale and future exascale capabilities at DOE supercomputing centers. To validate the predictions about material behavior, we’ll conduct experiments and use the facilities of the Advanced Photon Source [ANL/APS], Spallation Neutron Source and the Nanoscale Science Research Centers.”

    ANL/APS

    ORNL Spallation Neutron Source

    Said the center’s thrust leader for prediction and validation, Olle Heinonen, “At Argonne, our expertise in combining state-of-the-art, oxide molecular beam epitaxy growth of new materials with characterization at the Advanced Photon Source and the Center for Nanoscale Materials will enable us to offer new and precise insight into the complex properties important to materials design. We are excited to bring our particular capabilities in materials, as well as expertise in software, to the center so that the labs can comprehensively tackle this challenge.”

    Researchers are expected to make use of the 30-petaflop/s Cori supercomputer now being installed at the National Energy Research Scientific Computing Center (NERSC) at Berkeley Lab, the 27-petaflop/s Titan computer at the Oak Ridge Leadership Computing Facility (OLCF) and the 10-petaflop/s Mira computer at the Argonne Leadership Computing Facility (ALCF).

    NERSC CRAY Cori supercomputer

    ORNL Cray Titan Supercomputer

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    OLCF, ALCF and NERSC are all DOE Office of Science User Facilities. One petaflop/s is 10¹⁵, or a million times a billion, floating-point operations per second.

    In addition, a new generation of machines is scheduled for deployment between 2016 and 2019 that will take peak performance as high as 200 petaflops. Ultimately the software produced by these projects is expected to evolve to run on exascale machines, capable of 1,000 petaflops and projected for deployment in the mid-2020s.

    LLNL IBM Sierra supercomputer

    ORNL IBM Summit supercomputer depiction

    ANL Cray Aurora supercomputer

    Research will combine theory and software development with experimental validation, drawing on the resources of multiple DOE Office of Science User Facilities, including the Advanced Light Source [ALS] at LBNL, the Advanced Photon Source at Argonne National Laboratory (ANL), the Spallation Neutron Source at ORNL, and several of the five Nanoscience Research Centers across the DOE National Laboratory complex.

    LBL/ALS

    The new research projects will begin in Fiscal Year 2016. They expand the ongoing CMS research effort, which began in FY 2015 with three initial projects, led respectively by ANL, Brookhaven National Laboratory and the University of Southern California.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.

     