Tagged: D.O.E.

  • richardmitnick 3:03 pm on July 24, 2019 Permalink | Reply
Tags: D.O.E., Director Doug Kothe sat down with Mike Bernhardt ECP communications manager., Red Team Review

    From Exascale Computing Project: Assessing Progress, Considering ECP’s Enduring Legacy, and More 

    From Exascale Computing Project


    Exascale Computing Project (ECP) Director Doug Kothe sat down with Mike Bernhardt, ECP communications manager, this month to talk about a variety of topics. Covered in the discussion were the aspects of ensuring that a capable exascale computing ecosystem will come to fruition in conjunction with the arrival of the nation’s first exascale systems, objectively assessing whether the project’s efforts are on track, correcting course and instilling confidence through Red Team reviews, addressing the challenges posed by hardware accelerators, consolidating projects for greater technical synergy, acknowledging behind-the-scenes leadership, tracking costs and schedule performance, and reflecting on ECP’s enduring legacy.


    The following is a transcript of the interview.

    Bernhardt: Doug, now that plans for the first exascale systems have been formally and officially announced—Aurora at Argonne and Frontier at Oak Ridge—and we know that the El Capitan system at Lawrence Livermore is right around the corner.

    Depiction of ANL ALCF Cray Intel SC18 Shasta Aurora exascale supercomputer

    ORNL Cray Frontier Shasta based Exascale supercomputer with Slingshot interconnect featuring high-performance AMD EPYC CPU and AMD Radeon Instinct GPU technology

    Kothe: Right. Right.

    Bernhardt: The announcement of the nation’s first exascale systems is such a huge milestone for this country and for the Department of Energy. What do we do to ensure that we will have a capable exascale ecosystem, the software stack, and the exascale-ready applications when these systems are actually standing up?

    Kothe: A good question. As we’ve talked in the past, Mike, we—the ECP team and the staff—knew enough in terms of what we thought was coming to where we weren’t shooting in the dark, so to speak, with regard to building our software stack and our apps. But now with these announcements coming out, we have a more refined, focused target and for the most part, it’s not surprising; it’s a matter of expectations, and I feel like our preparations have us on a good path.

    I do believe that both architectures, as we know of them—Aurora at Argonne and Frontier at Oak Ridge—are very exciting, have tremendous upsides, and are consistent with our overall preparations, meaning the nodes feature what we call accelerators, which is hardware acceleration of certain floating-point operations; so, it allows us to exploit those accelerators for our good. Overall, we’re really excited. We are three years in and we have very specific targets now. The announced systems really met with our expectations, and from what we can tell in terms of the speeds and feeds and the overall design, they really do look very solid, very exciting, and I’m very confident we’re going to be able to deliver on those systems.

    Bernhardt: And as you mentioned, we’re into our third year with the project. If you think back three years ago, where you thought the project would be at this point in time, are we on track? Have there been any surprises that have come across that changed scheduling in your vision?

Kothe: First of all, we are on track. There are always surprises in R&D and many of them are good. Some are, I would say, setbacks or things you have to really prepare for. So, relative to three years ago, we have teams now that have really figured out how to fit well into our project structure. We have defined very specific metrics that are quantitative but also directly reflect the overall goals and objectives and, in particular, the science and engineering, and national security objectives.

    What we are seeing over the past year and now is that we have a really good sense of how to track performance. So when we say that, it’s not really just a subjective answer. We really have a lot of the objective evidence for being on track and being a formal project with very specific metrics helps us to make collective decisions about what matters and what doesn’t, and it’s all about achieving our objectives, which we’ve mapped into these specific metrics.

    So, yeah, we are definitely on track. That doesn’t mean that it’s not going to be a challenging, tough road ahead. I think we understand what the risks are, and certainly with Aurora and Frontier being announced, many of the unknown unknowns become known unknowns and so I think we’re even better prepared moving forward.

    Bernhardt: You’ve mentioned that it’s not just simply a subjective view of whether or not we’re on track. As I understand it, the team just went through something called a Red Team Review.

    Kothe: Right.

    Bernhardt: Could you explain for our followers what is a Red Team Review and what’s the significance and the impact to the project?


Kothe: Yeah. And so Red Team, for me many years ago, 30-plus years ago in the DOE, was a new term. One could Google it and kind of see the historical aspects. I think it’s a term that’s readily used in business, in large organizations, in the military. For us it has a very specific meaning that at a high level is not too dissimilar from other organizations and agencies, and that is, we bring in a team that is there specifically to poke, to prod, to find any sort of flaw or hole in our plan, and the team is there to help us. They’re not there to be punitive, but they’re also not friends and family. They are there to help us and specifically find problems in our plan, in our objectives. So, typically we go through at least one formal review a year by our Department of Energy sponsors, and so what we do with the Red Team is, two to three months before that formal review, we have a Red Team Review. You could view it as an internal review where we essentially mimic the formal DOE review in terms of what we’re going to do, in terms of presentations, and breakout sessions, et cetera, but with an external, independent, separate team with no conflict of interest with ECP, meaning folks working on ECP aren’t a part of this. So it is very formal and independent.

    These reviews at times are painful, because it requires a lot of work and a lot of heads-down focus, but in the end, they always help us in terms of finding areas where we need to do better and make necessary course corrections. In the end, at post review, I think we all sit back and go, boy, that was painful, but we’re glad we went through it because we’re better off now; we have a better plan; we have corrective actions in place.

    We did recently go through a Red Team Review and design review. Our next big review with DOE is this December, and so we intend to not wait for the last minute to really be ready to show DOE that we’re on a good path or on track, as you say.

    Bernhardt: Got it. So, the Red Team Reviews go a long way in helping instill confidence with you, the leadership team, and the program office, your sponsors, that things are, in fact, on track, that you’ve identified the risk, you’re taking all of the mitigation steps, et cetera?

Kothe: They do. I would say that the outcomes of a Red Team Review are typically, hey, we recommend you consider doing this, this, and this; or, fix this, this, and this. So, we call those formal recommendations. Typically we give ourselves two or three months to respond to those recommendations, to make those fixes. Obviously, if there is a systemic problem found by the Red Team that takes longer to fix, that’s a problem for us. In the end, our Red Team reviews have been very successful, suggesting kind of minor tweaks and, for the most part, relaying to us that we’re on track. And so right now we are actually making a few course corrections and a few changes in our plans as we prepare for our next DOE review; but I do feel like they’re all necessary and needed, and probably the best news is they’re consistent with our expectations as to where we needed to work. So, we’ve not heard things that were sort of orthogonal to our own internal assessment. Having independent assessments is good, and it’s even better when the results are consistent with our own view of where we still need to put some work in.

    Bernhardt: Awesome. So Doug, I’d like to dive into one discussion just quickly here, and it’s in reference to something we’ve heard recently at a number of conferences. Would it be accurate to say that accelerated computing and the implementation of GPUs is going to play a key role in delivering the necessary performance of our DOE exascale systems, and if that is the case, what’s ECP doing to prepare the community for this?

    Kothe: Good question. It is accurate. I think we’re going to see more and more of this, and maybe it’s disingenuous to even call them GPUs, because they’re very purpose-fit, hardware accelerators for specific floating-point operations, or specific operations that may not be floating point. The way I like to think about it is, in ECP—and this is an aspect of co-design—we’re working on hardware-driven algorithm design, but we’re also working on algorithm-driven hardware design; and so there is really a give and take there. Based on our experience with Summit and Sierra, Summit at Oak Ridge, Sierra at Lawrence Livermore, and the coming Perlmutter system, and certainly Titan at Oak Ridge, we have seen, and will continue to see, hardware acceleration on a node. That doesn’t mean it’s easy. The point is we’ve been through this. I think we know what to expect. It is a tremendous potential, this sort of design. So there’s a lot of concurrency, local concurrency, that we can exploit with an accelerator.

    LLNL IBM NVIDIA Mellanox ATS-2 Sierra Supercomputer, NO.2 on the TOP500

    ORNL IBM AC922 SUMMIT supercomputer, No.1 on the TOP500. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

Cray Shasta Perlmutter SC18 AMD Epyc Nvidia pre-exascale supercomputer

    ORNL Cray XK7 Titan Supercomputer, once the fastest in the world, to be decommissioned

I can now embody my simulation with richer physics, with broader, deeper physical phenomena, with higher-confidence results, because I can now afford to offload some additional physics onto the hardware accelerators. And if the current algorithms I have in place don’t adapt well to the accelerator, I’ve got to redesign and rethink my algorithms. And so, we’ve been doing that, and the recent announcements of Aurora and Frontier basically tell us that we’re on a good path.

    I think with regard to acceleration moving into the future, my own opinion is we’ll continue to see this post exascale and it could be even more purpose fit, more along the line of ASICs that are very specific to current algorithms. And again, I think what we’re doing in ECP now is really hardware-driven algorithm design, meaning we know accelerators are here. We are figuring out how to best exploit them. In many cases it’s rethinking of our algorithms. I think the hardest part is to figure out how do I change my data structures, how do I rework my algorithms, and so in some cases it’s a wholesale restructure of an application or a software technology. In some cases it’s very surgical for the compute-intensive portions.

In the end, the hard part is rethinking your algorithms; the implementation of those reworked algorithms is often much easier than the rethinking itself. So whether we’re looking at accelerators from NVIDIA, or AMD, or Intel, the programming models won’t be as dissimilar as one might think.
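[Editor’s note: As a rough illustration of the data-structure rethinking Kothe describes, the short Python/NumPy sketch below (illustrative only, not ECP code) contrasts an array-of-structures layout with a structure-of-arrays layout. The field names and the toy calculation are invented for the example, and it runs with NumPy on a CPU; the point is that the structure-of-arrays form exposes long, contiguous arrays, which is the kind of memory layout GPU-style accelerators reward.]

# Minimal sketch (not ECP code): array-of-structures (AoS) vs structure-of-arrays (SoA).
import numpy as np

n = 1_000_000

# AoS: one record per particle; the fields are interleaved in memory.
aos = np.zeros(n, dtype=[("x", "f8"), ("y", "f8"), ("z", "f8"), ("m", "f8")])

# SoA: one contiguous array per field.
soa = {k: np.zeros(n) for k in ("x", "y", "z", "m")}

def toy_energy_aos(p):
    # Strided access: each field is a non-contiguous view into the records.
    return 0.5 * p["m"] * (p["x"]**2 + p["y"]**2 + p["z"]**2)

def toy_energy_soa(p):
    # Contiguous access: each operand is a dense array the hardware can stream.
    return 0.5 * p["m"] * (p["x"]**2 + p["y"]**2 + p["z"]**2)

# Same arithmetic, same result; only the memory layout differs, and that layout
# choice is exactly the kind of restructuring accelerators reward.
_ = toy_energy_aos(aos)
_ = toy_energy_soa(soa)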


    The real challenge is rethinking your algorithms and we’ve been doing that since the start of ECP. So, not that we’re not going to have some challenges and hurdles, but I do think that these recent announcements have pretty much met with where we thought things were going to go, and so in that sense I do believe we really are on track relative to our objectives.

Bernhardt: Recent comments from some conferences indicate that folks in the application development community think that this (the wider use of accelerators) is a pretty big, heavy lift. It’s a learning curve that they’re going to have to go through with the growing use of accelerators. Is that the proper way to frame it, do you think?

    Kothe: It certainly isn’t easy, and I don’t want to downplay the fact that this can be difficult and challenging. I think it requires conceptual rethinking of algorithms. Now, in ECP we have a whole spectrum of application software maturity relative to the accelerators. We have many applications in software technology products that have already reworked their algorithm design and are achieving fantastic performance on, say, Summit.

    And so we would anticipate, I think with fairly low risk, that moving that implementation from Summit to Aurora or Frontier may not be seamless, but won’t be a heavy lift, so to speak.

    We have other applications and software technology products that are not quite there yet in terms of rethinking and redesigning their algorithms, and so these comments certainly do apply to some aspect of ECP.

    In terms of, say, our upcoming DOE review, one key aspect of this review is determining if we are prepared to really help those teams move along more quickly, more with a sense of urgency. Can we take the successful experiences of some applications and apply those lessons learned and best practices to others, and I think we can.

    In our three focus areas—software, applications, and hardware and integration—we have a number of projects that have more or less a direct line of sight to essentially figuring out the techniques for exploiting those hardware accelerations. So I feel that in terms of the way we’re scoped, we have the efforts in place to help bring along everybody and, you know, the fact that we’re a large project with lots of teams allows us to cross-fertilize and share experiences and lessons learned, and that helps reduce risk with regard to moving things along.

So, I think when you’re first exposed to these accelerators, you have to sit back and go, okay, wow; this is a tremendous opportunity, but I’ve also got to rethink how I’ve been doing things. In many cases it’s back to the future. Some of the algorithms designed for Cray vector machines in the ’70s and ’80s are now apropos and work well on accelerator-based systems such as Summit. We have direct evidence that this is not necessarily reinventing or inventing from whole cloth. It might be sort of accessing an algorithm that was used successfully in the past and is again useful now.

    Bernhardt: Just another tool in the application developer’s bag of tricks, huh?

Kothe: That’s right. Indeed it is, and I think the teams realize that they’re not going to succeed on the path that we have in front of us by closing their door and trying to do all of this on their own. And so, we really are managing and tracking and forcing, frankly, integration of efforts, especially the software stack: key products that applications need to not just be aware of, but actually use. And so in many cases the applications are, to some extent, passing the risk or the challenge of exploiting on-node accelerators to the software technology products, and that makes a lot of sense. And in many cases as well, they’re not doing that, for good reason. So, this is one of the advantages of having a large project where we can plug pieces together to make basically the whole greater than the sum of the parts.

    Bernhardt: Got it. Yeah. It makes a lot of sense. So, within ECP, some efforts that I’ve noticed have been expanding and some have been consolidating. Maybe you could give us a few of the current stats to frame where we are today for the listeners, more like ECP by the numbers.

    Kothe: Okay. So, we always are taking a hard look at how we’re organized and trying to see if there is a simpler way to put our organization together in terms of managing. Really, it’s not about the boxology so much, because the challenges are always managing at the interfaces, but we have worked hard to consolidate and simplify where possible and where it makes sense. So right now in ECP we have 81 R&D projects and that’s come down from about 100. So, where we found areas where we could consolidate, we did that, and it wasn’t oil and water. We didn’t force it just for the sake of trying to decrease the number; but in every case that we’ve done this, it has helped. So, let me give an example: In software technology, led by Mike Heroux at Sandia and Jonathan Carter at Berkeley, they recognized that there were several smaller projects, say, looking at I/O and by putting them together there were synergies there that we could take advantage of where they could adopt and use each other’s approaches, and we could move toward maybe one API for a particular I/O instance. And so the consolidation wasn’t just, hey, let’s reduce the number of projects—this is too hard to manage. It was really driven by what makes technical sense, and so right now I think we’re in really good shape to move into what we call our performance baseline period, which will be this fall and early next year, meaning our current structure of 81 teams, still over 1,000 researchers across the HPC and computational science community and industry, academia, and DOE labs; but I think this restructuring has us in really good position for the stretch run as we see Aurora and Frontier delivered.

    Bernhardt: You mentioned a few of the folks there, and that leads into what I wanted to get to next. ECP’s success, in fact, the Nation’s success with exascale and bringing it to life depends on a very, very large group of people and it’s more than just the ECP. You know, the collaborating agencies, the collaborating universities, the technical vendor community that ultimately will stand up the systems. I know it’s difficult to single out just a few individuals when there are so many that are making these important contributions, but perhaps you could take a few minutes to acknowledge at least some of the folks, maybe from the leadership team level and so forth that often work behind the scenes a fair amount and don’t get the recognition they deserve.

Kothe: Yeah. That’s a good point. Let me start first with our Department of Energy sponsors: Barb Helland in the Advanced Scientific Computing Research (ASCR) office of the Office of Science (SC), Thuc Hoang in the Advanced Simulation and Computing (ASC) program of the National Nuclear Security Administration (NNSA), and Dan Hogue, our Federal Project Director, in the Oak Ridge National Lab site office here. They have been fantastic in their support, and that doesn’t always mean it’s a thumbs up, “team, you guys are doing great.” It could mean “you guys need to work on this,” and so they give us a good, honest, objective assessment and they’re always there. We speak to them daily, weekly, all of the time. So our sponsors have been fantastic in making sure we’re on the right course and giving us the support that we need.

    Our leadership team, again, consists of about 30 or so—I think 32 by last count—DOE staff across six labs; and we’ve been really fortunate to have leaders in the community with a proven track record and the trust and respect of their colleagues. We’ve been together now as a team for most of the time ECP has been in existence, meaning there hasn’t been a lot of turnover—not that that’s bad—but people are all in; they’re committed; they have the passion and the energy.

    Many people, to quote some of our leaders, feel like this is the culmination of their careers, feeling like their whole career was built for this, and so, you know, that really helps during, say, tough times where you’re trying to prepare for a review when you realize that this is something that I feel like my whole career was built around. We have many people who feel that way.

To single out some names, across our three focus areas: software technology, led by Mike Heroux at Sandia National Lab and Jonathan Carter at Lawrence Berkeley, really is up and running on all cylinders. Mike and Jonathan have made a lot of very productive and useful changes in how things are run and organized. They work hard to make sure our software products have a line of sight up into software development kits and are released and deployed on the facilities.

Terri Quinn at Lawrence Livermore and Susan Coghlan at Argonne National Lab run our hardware and integration focus area. Both Terri and Susan—I don’t know if people appreciate this—are dual-hatted, in that Terri is really on point for a large part of the El Capitan procurement and deployment at Lawrence Livermore, and Susan for Aurora; and so, we’re really fortunate to have two leaders in the field of procuring, deploying, and operating HPC systems who are also leading our staff in terms of what it takes to make sure that products and applications are production quality and get deployed and used on these systems. So, their feet are sort of on both sides of the fence there.

    And then in the applications area, Andrew Siegel at Argonne and Eric Draeger at Livermore lead that area, and they’ve really taken our applications from what looked like, say, three years ago some interesting may-work sort of R&D efforts to the applications now that have very specific challenge problems.

We’re assessing them annually. They have very specific metrics and they’re really, for the most part, all on track. So, these folks have been fantastic in leading these efforts. And I said there were over 30 leaders, so Andrew and Eric, for example, have a team of five or six leads, each of whom oversees over half a dozen of these R&D projects. But the 81 R&D projects all have principal investigators leading them who are, for the most part, senior people with proven track records. I try to, and I think our leadership team does as well, call out these PIs, because that’s really where the work is getting done; and we’re lucky to have these PIs, who are all in, just like the leadership team, to make sure we succeed.

    Bernhardt: And a lot of the behind-the-scenes, heavy lifting that takes place is with the project office, which happens to be housed at Oak Ridge.

Kothe: Yes, and I’m glad you brought that up. They are. This really isn’t a customer-client relationship between the PhD scientists and the project office. They really are our peers. The PhD scientists responsible for leading the technical areas have learned a lot from the project office about what good project management looks like; what is our responsibility; how do we need to track costs and schedule performance. It’s a tremendous responsibility with the budget we have. And so the project office is in itself a small organization that’s made up of people who care about risk, project controls, budget, procurement. All of these things are day-to-day sort of contact sports, so to speak, with regard to our technical leaders. So, I sit personally at Oak Ridge National Lab, and I think this lab in particular, as many other labs, has a very good track record in project management and leading and executing on large projects. So, we’re fortunate to have a project office staffed almost entirely here at Oak Ridge that has been through the trenches in running and being part of large projects, and knows what to expect. This is a unique gig in ECP, but I think we’ve figured out how to really tailor formal project management to fit in and around doing more exploratory, high-risk research.

    Bernhardt: Great. This has been a good update, Doug. I’d like to wrap up with one topic that I know is near and dear to you. Talk a little bit about, if you could, the enduring legacy of the Exascale Computing Project.

    Kothe: Very good point. I wouldn’t be here, and I don’t think the leadership team or the staff would be here if we didn’t think that there was going to be an enduring legacy. The beauty of a seven-year project is it allows you to have a sense of urgency, and a sprint, and you pay attention to metrics, and you really make sure you can dot I’s and cross T’s, but a project would fail if the leave-behind wasn’t useful. So, let me take you through applications, for example.

Enduring legacy translates to having dozens of application technologies that will be used to tackle some of the toughest problems in DOE and the nation, and so the applications are now going to be positioned to address their challenge problems and in many cases help solve them or be a part of the solution. So, an enduring legacy for us is that the applications are now going to be ready at exascale to tackle currently intractable problems, and when I say tackle, I mean that many, many program offices in DOE—by last count there were ten of them—and other federal agencies are going to essentially use these as their science and engineering tools, so that’s an important legacy. In software technology I think what we’re seeing with the leadership of Mike and Jonathan is the genesis of a probably multi-decade software stack that’s going to be used and deployed on many HPC systems, well beyond Aurora, Frontier, and El Capitan. And I think that by paying attention to what it takes to containerize and package things up, make them production quality, and make them basically adhere to application and hardware requirements, we’re going to see a software stack that I think DOE will continue to support, maintain, and require on HPC systems in the future. Time will tell post ECP. But we wouldn’t be involved in the ECP if we didn’t expect and, frankly, require our efforts to really have a line of sight well beyond 2023.

    Bernhardt: Great. That’s all I have. Is there anything else that you’d like to throw out there for the community at this point in time?

    Kothe: Just that we appreciate the support, the engagement of the HPC, R&D, and computational science community. I’m not going to claim that we always have all of the answers, so we encourage the community to feel free to touch base with us, myself personally, or the leadership team. There are ways that you can collaborate and work with us. There are certainly ways that you can engage and help us move forward. We’re really lucky to be a part of this big project and always happy to hear about new suggestions and new possibilities from the community at large.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    About ECP

The ECP is a collaborative effort of two DOE organizations – the Office of Science and the National Nuclear Security Administration. As part of the National Strategic Computing Initiative, ECP was established to accelerate delivery of a capable exascale ecosystem, encompassing applications, system software, hardware technologies and architectures, and workforce development to meet the scientific and national security mission needs of DOE in the early-2020s time frame.

    About the Office of Science

    DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit https://science.energy.gov/.

    About NNSA

    Established by Congress in 2000, NNSA is a semi-autonomous agency within the DOE responsible for enhancing national security through the military application of nuclear science. NNSA maintains and enhances the safety, security, and effectiveness of the U.S. nuclear weapons stockpile without nuclear explosive testing; works to reduce the global danger from weapons of mass destruction; provides the U.S. Navy with safe and effective nuclear propulsion; and responds to nuclear and radiological emergencies in the United States and abroad. https://nnsa.energy.gov

    The Goal of ECP’s Application Development focus area is to deliver a broad array of comprehensive science-based computational applications that effectively utilize exascale HPC technology to provide breakthrough simulation and data analytic solutions for scientific discovery, energy assurance, economic competitiveness, health enhancement, and national security.

Awareness of ECP and its mission is growing and resonating—and for good reason. ECP is an incredible effort focused on advancing areas of key importance to our country: economic competitiveness, breakthrough science and technology, and national security. And, fortunately, ECP has a foundation that bodes extremely well for the prospects of its success, with the demonstrably strong commitment of the US Department of Energy (DOE) and the talent of some of America’s best and brightest researchers.

    ECP is composed of about 100 small teams of domain, computer, and computational scientists, and mathematicians from DOE labs, universities, and industry. We are tasked with building applications that will execute well on exascale systems, enabled by a robust exascale software stack, and supporting necessary vendor R&D to ensure the compute nodes and hardware infrastructure are adept and able to do the science that needs to be done with the first exascale platforms.

     
  • richardmitnick 9:51 am on May 23, 2019 Permalink | Reply
Tags: D.O.E., Sandia is planning another pair of launches this August.

    From Sandia Lab: “Sandia launches a bus into space” 

    From Sandia Lab

    May 23, 2019

    HOT SHOT sounding rocket program picks up flight pace.
    A sounding rocket designed and launched by Sandia National Laboratories lifts off from the Kauai Test Facility in Hawaii on April 24. (Photo by Mike Bejarano and Mark Olona)

    Sandia National Laboratories recently launched a bus into space. Not the kind with wheels that go round and round, but the kind of device that links electronic devices (a USB cable, short for “universal serial bus,” is one common example).

    The bus was among 16 total experiments aboard two sounding rockets that were launched as part of the National Nuclear Security Administration’s HOT SHOT program, which conducts scientific experiments and tests developing technologies on non-weaponized rockets. The respective flights took place on April 23 and April 24 at the Kauai Test Facility in Hawaii.

    The pair of flights marked an increase in the program’s tempo.

    “Sandia’s team was able to develop, fabricate, and launch two distinct payloads in less than 11 months,” said Nick Leathe, who oversaw the payload development. The last HOT SHOT flight — a single rocket launched in May 2018 — took 16 months to develop.

    Sandia, Lawrence Livermore National Laboratory, Kansas City National Security Campus, and the U.K.-based Atomic Weapons Establishment provided experiments for this series of HOT SHOTs.

    The rockets also featured several improvements over the previous one launched last year, including new sensors to measure pressure, temperature, and acceleration. These additions provided researchers more details about the conditions their experiments endured while traveling through the atmosphere.

The experimental bus, for example, was tested to find out whether its components would be robust enough to operate during a rocket launch. The new technology was designed expressly for power distribution in national security applications and could make other electronics easier to upgrade. It includes Sandia-developed semiconductors and was made to withstand intense radiation.

    Sandia is planning another pair of launches this August. The name HOT SHOT comes from the term “high operational tempo,” which refers to the relatively high frequency of flights. A brisk flight schedule allows scientists and engineers to perform multiple tests in a highly specialized test environment in quick succession.

    For the recent flight tests, one Sandia team prepared two experiments, one for each flight, to observe in different ways the dramatic temperature and pressure swings that are normal in rocketry but difficult to reproduce on the ground. The researchers are aiming to improve software that models these conditions for national security applications, and they are now analyzing the flight data for discrepancies between what they observed and what their software predicted. Differences could lead to scientific insights that would help refine the program.

    Some experiments also studied potential further improvements for HOT SHOT itself, including additively manufactured parts that could be incorporated into future flights and instruments measuring rocket vibration.

    The sounding rockets are designed to achieve an altitude of about 1.2 million feet and to fly about 220 nautical miles down range into the Pacific Ocean. Sandia uses refurbished, surplus rocket engines, making these test flights more economical than conventional flight tests common at the end of a technology’s development.

    The HOT SHOT program enables accelerated cycles of learning for engineers and experimentalists. “Our goal is to take a 10-year process and truncate it to three years without losing quality in the resulting technologies. HOT SHOT is the first step in that direction,” said Todd Hughes, NNSA’s HOT SHOT Federal Program Manager.

See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Sandia Campus
    Sandia National Laboratory

Sandia National Laboratories is a multiprogram laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration. With main facilities in Albuquerque, N.M., and Livermore, Calif., Sandia has major R&D responsibilities in national security, energy and environmental technologies, and economic competitiveness.


     
  • richardmitnick 11:38 am on September 29, 2018 Permalink | Reply
Tags: Actinide chemistry, Computational chemistry, D.O.E., Microsoft Quantum Development Kit, NWChem an open source high-performance computational chemistry tool funded by DOE

    From Pacific Northwest National Lab: “PNNL’s capabilities in quantum information sciences get boost from DOE grant and new Microsoft partnership” 

    From Pacific Northwest National Lab

    September 28, 2018
    Susan Bauer, PNNL,
    susan.bauer@pnnl.gov
    (509) 372-6083


    On Monday, September 24, the U.S. Department of Energy announced $218 million in funding for dozens of research awards in the field of Quantum Information Science. Nearly $2 million was awarded to DOE’s Pacific Northwest National Laboratory for a new quantum computing chemistry project.

    “This award will be used to create novel computational chemistry tools to help solve fundamental problems in catalysis, actinide chemistry, and materials science,” said principal investigator Karol Kowalski. “By collaborating with the quantum computing experts at Lawrence Berkeley National Laboratory, Oak Ridge National Laboratory, and the University of Michigan, we believe we can help reshape the landscape of computational chemistry.”

    Kowalski’s proposal was chosen along with 84 others to further the nation’s research in QIS and lay the foundation for the next generation of computing and information processing as well as an array of other innovative technologies.

While Kowalski’s work will take place over the next three years, computational chemists everywhere will experience a more immediate upgrade to their capabilities, made possible by a new PNNL-Microsoft partnership.

    “We are working with Microsoft to combine their quantum computing software stack with our expertise on high-performance computing approaches to quantum chemistry,” said Sriram Krishnamoorthy who leads PNNL’s side of this collaboration.

    Microsoft will soon release an update to the Microsoft Quantum Development Kit which will include a new chemical simulation library developed in collaboration with PNNL. The library is used in conjunction with NWChem, an open source, high-performance computational chemistry tool funded by DOE. Together, the chemistry library and NWChem will help enable quantum solutions and allow researchers and developers a higher level of study and discovery.

    “Researchers everywhere will be able to tackle chemistry challenges with an accuracy and at a scale we haven’t experienced before,” said Nathan Baker, director of PNNL’s Advanced Computing, Mathematics, and Data Division. Wendy Shaw, the lab’s division director for physical sciences, agrees with Baker. “Development and applications of quantum computing to catalysis problems has the ability to revolutionize our ability to predict robust catalysts that mimic features of naturally occurring, high-performing catalysts, like nitrogenase,” said Shaw about the application of QIS to her team’s work.

    PNNL’s aggressive focus on quantum information science is driven by a research interest in the capability and by national priorities. In September, the White House published the National Strategic Overview for Quantum Information Science and hosted a summit on the topic. Through their efforts, researchers hope to unleash quantum’s unprecedented processing power and challenge traditional limits for scaling and performance.

    In addition to the new DOE funding, PNNL is also pushing work in quantum conversion through internal investments. Researchers are determining which software architectures allow for efficient use of QIS platforms, designing QIS systems for specific technologies, imagining what scientific problems can best be solved using QIS systems, and identifying materials and properties to build quantum systems. The effort is cross-disciplinary; PNNL scientists from its computing, chemistry, physics, and applied mathematics domains are all collaborating on quantum research and pushing to apply their discoveries. “The idea for this internal investment is that PNNL scientists will take that knowledge to build capabilities impacting catalysis, computational chemistry, materials science, and many other areas,” said Krishnamoorthy.

    Krishnamoorthy wants QIS to be among the priorities that researchers think about applying to all of PNNL’s mission areas. With continued investment from the DOE and partnerships with industry leaders like Microsoft, that just might happen.

See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Pacific Northwest National Laboratory (PNNL) is one of the United States Department of Energy National Laboratories, managed by the Department of Energy’s Office of Science. The main campus of the laboratory is in Richland, Washington.

    PNNL scientists conduct basic and applied research and development to strengthen U.S. scientific foundations for fundamental research and innovation; prevent and counter acts of terrorism through applied research in information analysis, cyber security, and the nonproliferation of weapons of mass destruction; increase the U.S. energy capacity and reduce dependence on imported oil; and reduce the effects of human activity on the environment. PNNL has been operated by Battelle Memorial Institute since 1965.


     
  • richardmitnick 12:44 pm on March 9, 2018 Permalink | Reply
Tags: D.O.E., ECP LANL Cray XC 40 Trinity supercomputer

    From ECP: What is Exascale Computing and Why Do We Need It? 


    Exascale Computing Project


    Los Alamos National Lab


The Trinity supercomputer, with both Intel Xeon Haswell and Xeon Phi Knights Landing processors, is the seventh-fastest supercomputer on the TOP500 list and number three on the High Performance Conjugate Gradients (HPCG) Benchmark project.

    Meeting national security science challenges with reliable computing

    As part of the National Strategic Computing Initiative (NSCI), the Exascale Computing Project (ECP) was established to develop a capable exascale ecosystem, encompassing applications, system software, hardware technologies and architectures, and workforce development to meet the scientific and national security mission needs of the U.S. Department of Energy (DOE) in the mid-2020s time frame.

    The goal of ECP is to deliver breakthrough modeling and simulation solutions that analyze more data in less time, providing insights and answers to the most critical U.S. challenges in scientific discovery, energy assurance, economic competitiveness and national security.

    The Trinity Supercomputer at Los Alamos National Laboratory was recently named as a top 10 supercomputer on two lists: it made number three on the High Performance Conjugate Gradients (HPCG) Benchmark project, and is number seven on the TOP500 list.

“Trinity has already made unique contributions to important national security challenges, and we look forward to Trinity having a long tenure as one of the most powerful supercomputers in the world,” said John Sarrao, associate director for Theory, Simulation and Computation at Los Alamos.

    Trinity, a Cray XC40 supercomputer at the Laboratory, was recently upgraded with Intel “Knights Landing” Xeon Phi processors, which propelled it from 8.10 petaflops six months ago to 14.14 petaflops.

The Trinity Supercomputer Phase II project was completed during the summer of 2017, and the computer became fully operational during an unclassified “open science” run; it has now transitioned to classified mode. Trinity is designed to provide increased computational capability for the National Nuclear Security Administration in support of increasing geometric and physics fidelities in nuclear weapons simulation codes, while maintaining expectations for total time to solution.

    The capabilities of Trinity are required for supporting the NNSA Stockpile Stewardship program’s certification and assessments to ensure that the nation’s nuclear stockpile is safe, secure and effective.

    The Trinity project is managed and operated by Los Alamos National Laboratory and Sandia National Laboratories under the Alliance for Computing at Extreme Scale (ACES) partnership. The system is located at the Nicholas Metropolis Center for Modeling and Simulation at Los Alamos and covers approximately 5,200 square feet of floor space.

See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    LANL campus

    What is exascale computing?
Exascale computing refers to computing systems capable of at least one exaflop or a billion billion calculations per second (10^18). That is 50 times faster than the most powerful supercomputers being used today and represents a thousand-fold increase over the first petascale computer that came into operation in 2008. How we use these large-scale simulation resources is the key to solving some of today’s most pressing problems, including clean energy production, nuclear reactor lifetime extension and nuclear stockpile aging.
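[Editor’s note: restating the scales above as simple arithmetic:]

1 exaflop/s = 10^18 floating-point operations per second
10^18 / 10^15 = 10^3, i.e., a thousand times the roughly 10^15 flop/s of the first petascale system (2008)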

    The Los Alamos role

    In the run-up to developing exascale systems, at Los Alamos we will be taking the lead on a co-design center, the Co-Design Center for Particle-Based Methods: From Quantum to Classical, Molecular to Cosmological. The ultimate goal is the creation of scalable open exascale software platforms suitable for use by a variety of particle-based simulations.

    Los Alamos is leading the Exascale Atomistic capability for Accuracy, Length and Time (EXAALT) application development project. EXAALT will develop a molecular dynamics simulation platform that will fully utilize the power of exascale. The platform will allow users to choose the point in accuracy, length or time-space that is most appropriate for the problem at hand, trading the cost of one over another. The EXAALT project will be powerful enough to address a wide range of materials problems. For example, during its development, EXAALT will examine the degradation of UO2 fission fuel and plasma damage in tungsten under fusion first-wall conditions.

    In addition, Los Alamos and partnering organizations will be involved in key software development proposals that cover many components of the software stack for exascale systems, including programming models and runtime libraries, mathematical libraries and frameworks, tools, lower-level system software, data management and I/O, as well as in situ visualization and data analysis.

    A collaboration of partners

    ECP is a collaborative effort of two DOE organizations—the Office of Science and the National Nuclear Security Administration (NNSA). DOE formalized this long-term strategic effort under the guidance of key leaders from six DOE and NNSA National Laboratories: Argonne, Lawrence Berkeley, Lawrence Livermore, Los Alamos, Oak Ridge and Sandia. The ECP leads the formalized project management and integration processes that bridge and align the resources of the DOE and NNSA laboratories, allowing them to work with industry more effectively.

     
  • richardmitnick 11:57 am on November 8, 2017 Permalink | Reply
Tags: At higher temperature snapshots at different times show the moments pointing in different random directions, D.O.E., Magnetic moments, Rules of attraction

    From ORNL OLCF via D.O.E.: “Rules of attraction” 


    Oak Ridge National Laboratory

    OLCF

    November 8, 2017
    No writer credit

A depiction of magnetic moments inside nickel (Ni), obtained using the hybrid WL-LSMS modeling technique, as the temperature is increased from left to right. At low temperature (left), the magnetic moments of the Ni atoms all point in one direction and align. At higher temperature (right), snapshots at different times show the moments pointing in different, random directions, and the individual atoms’ moments no longer perfectly align. Image courtesy of Oak Ridge National Laboratory.

    The atoms inside materials are not always perfectly ordered, as usually depicted in models. In magnetic, ferroelectric (or showing electric polarity) and alloy materials, there is competition between random arrangement of the atoms and their desire to align in a perfect pattern. The change between these two states, called a phase transition, happens at a specific temperature.

    Markus Eisenbach, a computational scientist at the Department of Energy’s Oak Ridge National Laboratory, heads a group of researchers who’ve set out to model the behavior of these materials using first principles – from fundamental physics without preset conditions that fit external data.

    “We’re just scratching the surface of comprehending the underlying physics of these three classes of materials, but we have an excellent start,” Eisenbach says. “The three are actually overlapping in that their modes of operation involve disorder, thermal excitations and resulting phase transitions – from disorder to order – to express their behavior.”

    Eisenbach says he’s fascinated by “how magnetism appears and then disappears at varying temperatures. Controlling magnetism from one direction to another has implications for magnetic recording, for instance, and all sorts of electric machines – for example, motors in automobiles or generators in wind turbines.”

The researchers’ models also could help find strong, versatile magnets that don’t use rare earth elements as an ingredient. Located at the bottom of the periodic table, these 17 elements come almost exclusively from China and, because of their limited supply, are considered critical. They are a mainstay in the composition of many strong magnets.

Eisenbach and his collaborators, a group that includes his ORNL team and Yang Wang of the Pittsburgh Supercomputing Center, are in the second year of a DOE INCITE (Innovative and Novel Computational Impact on Theory and Experiment) award to model all three materials at the atomic level. They’ve been awarded 100 million processor hours on ORNL’s Titan supercomputer and already have impressive results in magnetics and alloys. Titan is housed at the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science user facility.

The researchers tease out atomic-scale behavior using, at times, a hybrid code that combines Wang-Landau (WL) Monte Carlo and locally self-consistent multiple scattering (LSMS) methods. WL is a statistical approach that samples the atomic energy landscape to capture finite-temperature effects; LSMS computes the energy of each configuration from first principles. With LSMS alone, they’ve calculated the ground-state magnetic properties of an iron-platinum particle. And without making any assumption beyond the chemical composition, they’ve determined the temperature at which copper-zinc alloy goes from a disordered state to an ordered one.
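[Editor’s note: To give a flavor of the Wang-Landau half of that hybrid, here is a minimal, self-contained Python sketch of Wang-Landau sampling on a toy 2D Ising lattice. It is illustrative only: the real WL-LSMS code replaces the toy nearest-neighbor energy function below with a first-principles LSMS energy evaluation running on Titan, and production runs use far stricter flatness and convergence criteria.]

# Illustrative Wang-Landau sampling on a toy 2D Ising lattice (not the WL-LSMS code).
import numpy as np

rng = np.random.default_rng(0)
L = 8                                # 8x8 toy lattice; WL-LSMS handles thousands of atoms
N = L * L
spins = rng.choice([-1, 1], size=(L, L))

def energy(s):
    # Nearest-neighbor Ising energy with periodic boundaries (stand-in for LSMS).
    return -int(np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1))))

def bin_of(E):
    # Energies lie in [-2N, 2N] in steps of 4, so this maps E onto N+1 histogram bins.
    return (E + 2 * N) // 4

log_g = np.zeros(N + 1)              # running estimate of ln(density of states)
hist = np.zeros(N + 1)
f = 1.0                              # ln of the Wang-Landau modification factor
E = energy(spins)

while f > 1e-3:                      # loose tolerance so the sketch finishes quickly
    for _ in range(20_000):
        i, j = rng.integers(L), rng.integers(L)
        nn = spins[(i + 1) % L, j] + spins[(i - 1) % L, j] \
           + spins[i, (j + 1) % L] + spins[i, (j - 1) % L]
        dE = 2 * spins[i, j] * nn    # energy change from flipping one spin
        b_old, b_new = bin_of(E), bin_of(E + dE)
        # Accept with probability min(1, g(E_old)/g(E_new)): rarely visited energies win.
        if np.log(rng.random()) < log_g[b_old] - log_g[b_new]:
            spins[i, j] *= -1
            E += dE
            b_old = b_new
        log_g[b_old] += f            # update the density-of-states estimate and histogram
        hist[b_old] += 1
    visited = hist > 0
    if hist[visited].min() > 0.8 * hist[visited].mean():
        f /= 2.0                     # histogram is roughly flat: refine and start over
        hist[:] = 0

print("ln g(E) estimated over", int(visited.sum()), "energy levels")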

    Moreover, Eisenbach has co-authored two materials science papers in the past year, one in Leadership Computing, the other a letter in Nature, in which he and colleagues reported using the three-dimensional coordinates of a real iron-platinum nanoparticle with 6,560 iron and 16,627 platinum atoms to find its magnetic properties.

    “We’re combining the efficiency of WL sampling, the speed of the LSMS and the computing power of Titan to provide a solid first-principles thermodynamics description of magnetism,” Eisenbach says. “The combination also is giving us a realistic treatment of alloys and functional materials.”

Alloys are composed of at least two metals. Brass, for instance, is an alloy of copper and zinc. Magnets, of course, are used in everything from credit cards to MRI machines and in electric motors. Ferroelectric materials, such as barium titanate and zirconium titanate, form what’s known as an electric moment in a transition phase, when temperatures drop beneath the ferroelectric Curie temperature – the point where the dipoles align, producing a spontaneous polarization (or, in a magnet, a spontaneous magnetization). The term – named after the French physicist Pierre Curie, who in the late 19th century described how magnetic materials respond to temperature changes – applies to both ferroelectric and ferromagnetic transitions. Eisenbach and his collaborators are interested in both phenomena.

    Eisenbach is particularly intrigued by high-entropy alloys, a relatively new sub-class discovered a decade ago that may hold useful mechanical properties. Conventional alloys have a dominant element – for instance, iron in stainless steel. High-entropy alloys, on the other hand, evenly spread out their elements on a crystal lattice. They don’t get brittle when chilled, remaining pliable at extremely low temperatures.

    To understand the configuration of high-entropy alloys, Eisenbach uses the analogy of a chess board sprinkled with black and white beads. In an ordered material, black beads occupy black squares and white beads, white squares. In high-entropy alloys, however, the beads are scattered randomly across the lattice regardless of color until the material reaches a low temperature, much lower than normal alloys, when it almost grudgingly orders itself.

    Eisenbach and his colleagues have modelled a material as large as 100,000 atoms using the Wang-Landau/LSMS method. “If I want to represent disorder, I want a simulation that calculates for hundreds if not thousands of atoms, rather than just two or three,” he says.

To model an alloy, the researchers first deploy the Schrödinger equation to determine the state of electrons in the atoms. “Solving the equation lets you understand the electrons and their interactions, which is the glue that holds the material together and determines their physical properties.”

All of a material’s properties and energies are calculated through many hundreds of thousands of calculations over many possible configurations and over varying temperatures, giving a rendering from which modelers can determine at what temperature a material loses or gains its magnetism, or at what temperature an alloy goes from a disordered state to a perfectly ordered one.
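[Editor’s note: In textbook form (standard quantum and statistical mechanics, not anything specific to this project), the two ingredients of that workflow are the electronic-structure problem, which yields the energy E_i of each configuration, and the temperature-weighted average over configurations, written here in LaTeX notation:]

\hat{H}\,\Psi_i = E_i\,\Psi_i, \qquad
\langle A \rangle(T) = \frac{\sum_i A_i\, e^{-E_i/k_B T}}{\sum_i e^{-E_i/k_B T}}

The Wang-Landau procedure estimates how many configurations share each energy, so averages like the one on the right can be evaluated at any temperature once the energies are in hand.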

Eisenbach eagerly awaits the arrival of the Summit supercomputer – five to six times more powerful than Titan – at the OLCF in late 2018.

Two views of Summit:

    ORNL IBM Summit Supercomputer

    ORNL IBM Summit supercomputer depiction

    “Ultimately, we can do larger simulations and possibly look at even more complex disordered materials with more components and widely varying compositions, where the chemical disorder might lead to qualitatively new physical behaviors.”

See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


    The Oak Ridge Leadership Computing Facility (OLCF) was established at Oak Ridge National Laboratory in 2004 with the mission of accelerating scientific discovery and engineering progress by providing outstanding computing and data management resources to high-priority research and development projects.

    ORNL’s supercomputing program has grown from humble beginnings to deliver some of the most powerful systems in the world. On the way, it has helped researchers deliver practical breakthroughs and new scientific knowledge in climate, materials, nuclear science, and a wide range of other disciplines.

    The OLCF delivered on that original promise in 2008, when its Cray XT “Jaguar” system ran the first scientific applications to exceed 1,000 trillion calculations a second (1 petaflop). Since then, the OLCF has continued to expand the limits of computing power, unveiling Titan in 2013, which is capable of 27 petaflops.


    ORNL Cray XK7 Titan Supercomputer

Titan is one of the first hybrid-architecture systems—a combination of graphics processing units (GPUs) and the more conventional central processing units (CPUs) that have served as number crunchers in computers for decades. The parallel structure of GPUs makes them uniquely suited to process an enormous number of simple computations quickly, while CPUs are capable of tackling more sophisticated computational algorithms. The complementary combination of CPUs and GPUs allows Titan to reach its peak performance.

    The OLCF gives the world’s most advanced computational researchers an opportunity to tackle problems that would be unthinkable on other systems. The facility welcomes investigators from universities, government agencies, and industry who are prepared to perform breakthrough research in climate, materials, alternative energy sources and energy storage, chemistry, nuclear physics, astrophysics, quantum mechanics, and the gamut of scientific inquiry. Because it is a unique resource, the OLCF focuses on the most ambitious research projects—projects that provide important new knowledge or enable important new technologies.

     
  • richardmitnick 6:15 pm on September 18, 2017 Permalink | Reply
Tags: D.O.E.

    From BNL: “Three Brookhaven Lab Scientists Selected to Receive Early Career Research Program Funding” 

    Brookhaven Lab

    August 15, 2017 [Just caught up with this via social media.]
    Karen McNulty Walsh,
    kmcnulty@bnl.gov
    (631) 344-8350
    Peter Genzer,
    genzer@bnl.gov
    (631) 344-3174

    Three scientists at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory have been selected by DOE’s Office of Science to receive significant research funding through its Early Career Research Program.

    The program, now in its eighth year, is designed to bolster the nation’s scientific workforce by providing support to exceptional researchers during the crucial early career years, when many scientists do their most formative work. The three Brookhaven Lab recipients are among a total of 59 recipients selected this year after a competitive review of about 700 proposals.

    The scientists are each expected to receive grants of up to $2.5 million over five years to cover their salary plus research expenses. A list of the 59 awardees, their institutions, and titles of research projects is available on the Early Career Research Program webpage.

    This year’s Brookhaven Lab awardees include:

    Sanjaya Senanayake

    Brookhaven Lab chemist Sanjaya D. Senanayake was selected by DOE’s Office of Basic Energy Sciences to receive funding for “Unraveling Catalytic Pathways for Low Temperature Oxidative Methanol Synthesis from Methane.” His overarching goal is to study and improve catalysts that enable the conversion of methane (CH4), the primary component of natural gas, directly into methanol (CH3OH), a valuable chemical intermediate and potential renewable fuel.

    This research builds on the recent discovery of a single-step catalytic process for this reaction that proceeds at low temperatures and pressures using inexpensive, earth-abundant catalysts. The reaction promises to be more efficient than current multi-step processes, which are energy-intensive, and a significant improvement over other attempts at one-step reactions, in which higher temperatures convert most of the useful hydrocarbon building blocks into carbon monoxide and carbon dioxide rather than methanol. With Early Career funding, Senanayake’s team will explore the nature of the reaction and pursue ways to further improve catalytic performance and specificity.

    The project will exploit unique capabilities of facilities at Brookhaven Lab, particularly the National Synchrotron Light Source II (NSLS-II), that make it possible to study catalysts in real-world reaction environments (in situ) using x-ray spectroscopy, electron imaging, and other techniques.

    BNL NSLS-II

    Experiments using well-defined model surfaces and powders will reveal atomic-level catalytic structures and reaction dynamics. When combined with theoretical modeling, these studies will help the scientists identify the essential interactions that take place on the surface of the catalyst. Of particular interest are the key features that activate stable methane molecules through “soft” oxidative activation of C-H bonds so methane can be converted to methanol using oxygen (O2) and water (H2O) as co-reactants.
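
    For orientation, the net transformation the team is targeting can be written as a simplified overall stoichiometry (a schematic summary of the net reaction only; the catalytic cycle under study is multi-step and, as noted above, involves water as a co-reactant):

        \mathrm{CH_4} + \tfrac{1}{2}\,\mathrm{O_2} \;\longrightarrow\; \mathrm{CH_3OH}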

    This work will establish and experimentally validate principles that can be used to design improved catalysts for synthesizing fuel and other industrially relevant chemicals from abundant natural gas.

    “I am grateful for this funding and the opportunity to pursue this promising research,” Senanayake said. “These fundamental studies are an essential step toward overcoming key challenges for the complex conversion of methane into valued chemicals, and for transforming the current model catalysts into practical versions that are inexpensive, durable, selective, and efficient for commercial applications.”

    Sanjaya Senanayake earned his undergraduate degree in materials science and his Ph.D. in chemistry from the University of Auckland in New Zealand in 2001 and 2006, respectively. He worked as a research associate at Oak Ridge National Laboratory from 2005 to 2008, and served as a local scientific contact at beamline U12a of the National Synchrotron Light Source (NSLS) at Brookhaven Lab from 2005 to 2009. He joined the Brookhaven staff as a research associate in 2008 and was promoted to assistant chemist and then associate chemist in 2014, while serving as the spokesperson for NSLS beamline X7B. He has co-authored over 100 peer-reviewed publications in the fields of surface science and catalysis, and has expertise in the synthesis, characterization, and reactivity of catalysts and reactions essential for energy conversion. He is an active member of the American Chemical Society, the North American Catalysis Society, the American Association for the Advancement of Science, and the New York Academy of Sciences.

    Alessandro Tricoli

    Brookhaven Lab physicist Alessandro Tricoli will receive Early Career Award funding from DOE’s Office of High Energy Physics for a project titled “Unveiling the Electroweak Symmetry Breaking Mechanism at ATLAS and at Future Experiments with Novel Silicon Detectors.”

    CERN/ATLAS detector

    His work aims to improve, through precision measurements, the search for exciting new physics beyond what is currently described by the Standard Model [SM], the reigning theory of particle physics.

    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    The discovery of the Higgs boson at the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) in Switzerland confirmed how the quantum field associated with this particle generates the masses of other fundamental particles, providing key insights into electroweak symmetry breaking—the mass-generating “Higgs mechanism.”

    CERN ATLAS Higgs Event

    But at the same time, despite direct searches for “new physics” signals that cannot be explained by the SM, scientists have yet to observe any evidence of such phenomena at the LHC—even though they know the SM is incomplete (for example, it does not include an explanation for gravity).

    Tricoli’s research aims to make precision measurements to test fundamental predictions of the SM to identify anomalies that may lead to such discoveries. He focuses on the analysis of data from the LHC’s ATLAS experiment to comprehensively study electroweak interactions between the Higgs and particles called W and Z bosons. Any discovery of anomalies in such interactions could signal new physics at very high energies, not directly accessible by the LHC.

    This method of probing physics beyond the SM will become even more stringent once the high-luminosity upgrade of ATLAS, currently underway, is completed for longer-term LHC operations planned to begin in 2026.

    Tricoli’s work will play an important role in the upgrade of ATLAS’s silicon detectors, using novel state-of-the-art technology capable of precision particle tracking and timing so that the detector will be better able to identify primary particle interactions and tease out signals from background events. Designing these next-generation detector components could also have a profound impact on the development of future instruments that must operate in high-radiation environments, such as in future colliders or in space.

    “This award will help me build a strong team around a research program I feel passionate about at ATLAS and the LHC, and for future experiments,” Tricoli said.

    “I am delighted and humbled by the challenge given to me with this award to take a step forward in science.”

    Alessandro Tricoli received his undergraduate degree in physics from the University of Bologna, Italy, in 2001, and his Ph.D. in particle physics from Oxford University in 2007. He worked as a research associate at Rutherford Appleton Laboratory in the UK from 2006 to 2009, and as a research fellow and then staff member at CERN from 2009 to 2015, receiving commendations on his excellent performance from both institutions. He joined Brookhaven Lab as an assistant physicist in 2016. A co-author on multiple publications, he has expertise in silicon tracker and detector design and development, as well as the analysis of physics and detector performance data at high-energy physics experiments. He has extensive experience tutoring and mentoring students, as well as coordinating large groups of physicists involved in research at ATLAS.

    Chao Zhang

    Brookhaven Lab physicist Chao Zhang was selected by DOE’s Office of High Energy Physics to receive funding for a project titled, “Optimization of Liquid Argon TPCs for Nucleon Decay and Neutrino Physics.” Liquid Argon TPCs (for Time Projection Chambers) form the heart of many large-scale particle detectors designed to explore fundamental mysteries in particle physics.

    Among the most compelling is the question of why there’s a predominance of matter over antimatter in our universe. Though scientists believe matter and antimatter were created in equal amounts during the Big Bang, equal amounts would have annihilated one another, leaving only light. The fact that we now have a universe made almost entirely of matter means something must have tipped the balance.

    A US-hosted international experiment scheduled to start collecting data in the mid-2020s, called the Deep Underground Neutrino Experiment (DUNE), aims to explore this mystery through the search for two rare but necessary conditions for the imbalance: 1) evidence that some processes produce an excess of matter over antimatter, and 2) a sizeable difference in the way matter and antimatter behave.

    FNAL LBNF/DUNE from FNAL to SURF, Lead, South Dakota, USA


    FNAL DUNE Argon tank at SURF


    Surf-Dune/LBNF Caverns at Sanford



    SURF building in Lead SD USA

    The DUNE experiment will look for signs of these conditions by studying how protons (one of the two “nucleons” that make up atomic nuclei) decay as well as how elusive particles called neutrinos oscillate, or switch identities, among three known types.

    The DUNE experiment will make use of four massive 10-kiloton detector modules, each with a Liquid Argon Time Projection Chamber (LArTPC) at its core. Chao’s aim is to optimize the performance of the LArTPCs to fully realize their potential to track and identify particles in three dimensions, with a particular focus on making them sensitive to the rare proton decays. His team at Brookhaven Lab will establish a hardware calibration system to ensure their ability to extract subtle signals using specially designed cold electronics that will sit within the detector. They will also develop software to reconstruct the three-dimensional details of complex events, and analyze data collected at a prototype experiment (ProtoDUNE, located at Europe’s CERN laboratory) to verify that these methods are working before incorporating any needed adjustments into the design of the detectors for DUNE.
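
    Although the full reconstruction software is far more involved, the core geometric idea of a time projection chamber is simple: ionization electrons drift at a known, calibrated velocity toward the readout planes, so a signal's arrival time encodes its position along the drift direction. The Python snippet below sketches only that one step; the function name, drift velocity value, and example numbers are illustrative assumptions, not taken from DUNE's actual software.

        # Minimal sketch of the drift-time-to-position step in a liquid argon TPC.
        # The constant and names are illustrative assumptions, not DUNE code.
        DRIFT_VELOCITY_MM_PER_US = 1.6   # roughly the electron drift speed in LAr at ~500 V/cm

        def drift_coordinate(hit_time_us, trigger_time_us=0.0,
                             velocity=DRIFT_VELOCITY_MM_PER_US):
            """Convert a recorded hit time into a coordinate along the drift axis (mm)."""
            return (hit_time_us - trigger_time_us) * velocity

        # Example: a hit read out 300 microseconds after the event trigger
        print(f"{drift_coordinate(300.0):.0f} mm from the readout plane")  # ~480 mm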

    “I am honored and thrilled to receive this distinguished award,” said Chao. “With this support, my colleagues and I will be able to develop many new techniques to enhance the performance of LArTPCs, and we are excited to be involved in the search for answers to one of the most intriguing mysteries in science, the matter-antimatter asymmetry in the universe.”

    Chao Zhang received his B.S. in physics from the University of Science and Technology of China in 2002 and his Ph.D. in physics from the California Institute of Technology in 2010, continuing as a postdoctoral scholar there until joining Brookhaven Lab as a research associate in 2011. He was promoted to physics associate III in 2015. He has actively worked on many high-energy neutrino physics experiments, including DUNE, MicroBooNE, Daya Bay, PROSPECT, JUNO, and KamLAND, co-authoring more than 40 peer-reviewed publications with a total of over 5,000 citations. He has expertise in neutrino oscillations, reactor neutrinos, nucleon decays, liquid scintillator and water-based liquid scintillator detectors, and liquid argon time projection chambers. He is an active member of the American Physical Society.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition
    BNL Campus

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

     
  • richardmitnick 4:35 pm on July 26, 2017 Permalink | Reply
    Tags: D.O.E.,   

    From ESnet: “SLAC, AIC and Zettar Move Petabyte Datasets at Unprecedented Speed via ESnet” 


    ESnet

    2017-07-26
    ESNETWORK

    Twice a year, ESnet staff meet with managers and researchers associated with each of the DOE Office of Science program offices to look toward the future of networking requirements and then take the planning steps to keep networking capabilities out in front of those demands.

    Network engineers and researchers at DOE national labs take a similar forward-looking approach. Earlier this year, DOE’s SLAC National Accelerator Laboratory (SLAC) teamed up with AIC and Zettar and tapped into ESnet’s 100G backbone network to repeatedly transfer 1-petabyte datasets in 1.4 days over a 5,000-mile portion of ESnet’s production network. Even with the transfer bandwidth capped at 80 Gbps, the milestone demo achieved transfer rates five times faster than other technologies. The demo data accounted for a third of all ESnet traffic during the tests. Les Cottrell from SLAC presented the results at the ESnet Site Coordinators Committee (ESCC) meeting held at Lawrence Berkeley National Laboratory in May 2017.
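
    A quick back-of-the-envelope calculation shows how the reported numbers relate; the few lines of Python below are just that arithmetic (they are not part of the Zettar transfer software).

        # Back-of-the-envelope check on the 1 PB / 80 Gbps / 1.4-day figures.
        payload_bits = 1e15 * 8          # 1 petabyte expressed in bits
        cap_bps = 80e9                   # the 80 Gbps bandwidth cap

        seconds_at_cap = payload_bits / cap_bps
        print(f"at a sustained 80 Gbps: {seconds_at_cap / 86400:.2f} days")   # ~1.16 days

        reported_days = 1.4
        effective_gbps = payload_bits / (reported_days * 86400) / 1e9
        print(f"effective rate over 1.4 days: {effective_gbps:.0f} Gbps")     # ~66 Gbps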


    Read the AICCI/Zettar news release.

    Read the story in insideHPC.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Created in 1986, the U.S. Department of Energy’s (DOE’s) Energy Sciences Network (ESnet) is a high-performance network built to support unclassified science research. ESnet connects more than 40 DOE research sites—including the entire National Laboratory system, supercomputing facilities and major scientific instruments—as well as hundreds of other science networks around the world and the Internet.

     
  • richardmitnick 3:19 pm on May 9, 2017 Permalink | Reply
    Tags: $3.9 Million to Help Industry Address High Performance Computing Challenges, D.O.E.,   

    From ORNL via energy.gov: “Energy Department Announces $3.9 Million to Help Industry Address High Performance Computing Challenges” 


    Oak Ridge National Laboratory

    ENERGY.GOV

    May 8, 2017
    Today, the U.S. Department of Energy announced nearly $3.9 million for 13 projects designed to stimulate the use of high performance supercomputing in U.S. manufacturing. The Office of Energy Efficiency and Renewable Energy (EERE) Advanced Manufacturing Office’s High Performance Computing for Manufacturing (HPC4Mfg) program enables innovation in U.S. manufacturing through the adoption of high performance computing (HPC) to advance applied science and technology relevant to manufacturing. HPC4Mfg aims to increase the energy efficiency of manufacturing processes, advance energy technology, and reduce energy’s impact on the environment through innovation.

    The 13 new project partnerships apply the world-class computing resources and expertise of the national laboratories, including Lawrence Livermore National Laboratory, Oak Ridge National Laboratory, Lawrence Berkeley National Laboratory, the National Renewable Energy Laboratory, and Argonne National Laboratory. These projects, proposed in partnership with companies, will address key challenges in U.S. manufacturing and improve energy efficiency across the industry through applied research and development of energy technologies.

    Each of the 13 newly selected projects will receive up to $300,000 to support work performed by the national lab partners and allow the partners to use HPC compute cycles.

    The 13 projects selected for awards are led by:

    7AC Technologies
    8 Rivers Capital
    Applied Materials, Inc.
    Arconic Inc.*
    Ford Motor Company
    General Electric Global Research Center*
    LanzaTech
    Samsung Semiconductor, Inc.
    Sierra Energy
    The Timken Company
    United Technologies Research Corporation

    *Awarded two projects

    Read more about the individual projects.

    The Advanced Manufacturing Office (AMO) recently published a draft of its Multi-year Program Plan that identifies the technology, research and development, outreach, and crosscutting activities that AMO plans to focus on over the next five years. Some of the technical focus areas in the plan align with the high-priority, energy-related manufacturing activities that the HPC4Mfg program also aims to address.

    Led by Lawrence Livermore National Laboratory, with Lawrence Berkeley National Laboratory and Oak Ridge National Laboratory as strong partners, the HPC4Mfg program has a diverse portfolio of small and large companies, consortiums, and institutes within varying industry sectors that span the country. Established in 2015, it currently supports 28 projects that range from improved turbine blades for aircraft engines and reduced heat loss in electronics, to steel-mill energy efficiency and improved fiberglass production.

    ORNL Cray XK7 Titan Supercomputer

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


     
  • richardmitnick 1:37 pm on January 12, 2017 Permalink | Reply
    Tags: Argo project, , D.O.E., , Hobbes project, , , XPRESS project   

    From ASCRDiscovery via D.O.E.: “Upscale computing” 

    DOE Main

    Department of Energy

    ASCRDiscovery
    January 2017
    No writer credit

    National labs lead the push for operating systems that let applications run at exascale.

    Image courtesy of Sandia National Laboratories.

    For high-performance computing (HPC) systems to reach exascale – a billion billion calculations per second – hardware and software must cooperate, with orchestration by the operating system (OS).
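
    Written out numerically, "a billion billion" is 10^18 operations per second; the quick Python comparison below puts that next to the 27-petaflop peak of Titan, the Cray XK7 discussed later in this post.

        # "A billion billion" calculations per second, written out.
        exaflops = 1e9 * 1e9             # 10**18 floating-point operations per second
        titan_peak = 27e15               # Titan's peak: 27 petaflops

        print(f"1 exaflop/s = {exaflops:.0e} operations per second")
        print(f"that is about {exaflops / titan_peak:.0f} times Titan's peak")   # ~37x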

    But getting from today’s computing to exascale requires an adaptable OS – maybe more than one. Computer applications “will be composed of different components,” says Ron Brightwell, R&D manager for scalable systems software at Sandia National Laboratories. “There may be a large simulation consuming lots of resources, and some may integrate visualization or multi-physics.” That is, applications might not use all of an exascale machine’s resources in the same way. Plus, an OS aimed at exascale also must deal with changing hardware. HPC “architecture is always evolving,” often mixing different kinds of processors and memory components in heterogeneous designs.

    As computer scientists consider scaling up hardware and software, there’s no easy answer for when an OS must change. “It depends on the application and what needs to be solved,” Brightwell explains. On top of that variability, he notes, “scaling down is much easier than scaling up.” So rather than try to grow an OS from a laptop to an exascale platform, Brightwell thinks the other way. “We should try to provide an exascale OS and runtime environment on a smaller scale – starting with something that works at a higher scale and then scale down.”

    To explore the needs of an OS and conditions to run software for exascale, Brightwell and his colleagues conducted a project called Hobbes, which involved scientists at four national labs – Oak Ridge (ORNL), Lawrence Berkeley, Los Alamos and Sandia – plus seven universities. To perform the research, Brightwell – with Terry Jones, an ORNL computer scientist, and Patrick Bridges, a University of New Mexico associate professor of computer science – earned an ASCR Leadership Computing Challenge allocation of 30 million processor hours on Titan, ORNL’s Cray XK7 supercomputer.

    ORNL Cray XK7 Titan Supercomputer

    The Hobbes OS supports multiple software stacks working together, as indicated in this diagram of the Hobbes co-kernel software stack. Image courtesy of Ron Brightwell, Sandia National Laboratories.

    Brightwell made a point of including the academic community in developing Hobbes. “If we want people in the future to do OS research from an HPC perspective, we need to engage the academic community to prepare the students and give them an idea of what we’re doing,” he explains. “Generally, OS research is focused on commercial things, so it’s a struggle to get a pipeline of students focusing on OS research in HPC systems.”

    The Hobbes project involved a variety of components, but for the OS side, Brightwell describes it as trying to understand applications as they become more sophisticated. They may have more than one simulation running in a single OS environment. “We need to be flexible about what the system environment looks like,” he adds, so with Hobbes, the team explored using multiple OSs in applications running at extreme scale.

    As an example, Brightwell notes that the Hobbes OS envisions multiple software stacks working together. The OS, he says, “embraces the diversity of the different stacks.” An exascale system might let data analytics run on multiple software stacks, but still provide the efficiency needed in HPC at extreme scales. This requires a computer infrastructure that supports simultaneous use of multiple, different stacks and provides extreme-scale mechanisms, such as reducing data movement.
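
    One way to picture multiple stacks cooperating while keeping data movement down is a simulation component and an analytics component running side by side on the same machine and handing data over in memory rather than through files. The Python sketch below is a deliberately simplified, hypothetical illustration of that coupling idea; it is not the Hobbes co-kernel interface, and every name in it is invented.

        # Hypothetical sketch: a "simulation" process feeding an "analytics" process
        # through an in-memory queue instead of files, to cut data movement.
        # This is NOT the Hobbes co-kernel API; the structure is purely illustrative.
        from multiprocessing import Process, Queue

        def simulation(out_q, steps=5):
            for step in range(steps):
                field = [0.1 * step * i for i in range(1000)]   # stand-in for simulation output
                out_q.put((step, field))
            out_q.put(None)                                      # signal completion

        def analytics(in_q):
            while True:
                item = in_q.get()
                if item is None:
                    break
                step, field = item
                print(f"step {step}: mean field value {sum(field) / len(field):.3f}")

        if __name__ == "__main__":
            q = Queue()
            sim = Process(target=simulation, args=(q,))
            ana = Process(target=analytics, args=(q,))
            sim.start(); ana.start()
            sim.join(); ana.join()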

    Part of Hobbes also studied virtualization, which uses a subset of a larger machine to simulate a different computer and operating system. “Virtualization has not been used much at extreme scale,” Brightwell says, “but we wanted to explore it and the flexibility that it could provide.” Results from the Hobbes project indicate that virtualization for extreme scale can provide performance benefits at little cost.

    Other HPC researchers besides Brightwell and his colleagues are exploring OS options for extreme-scale computing. For example, Pete Beckman, co-director of the Northwestern-Argonne Institute of Science and Engineering at Argonne National Laboratory, runs the Argo project.

    A team of 25 collaborators from Argonne, Lawrence Livermore National Laboratory, and Pacific Northwest National Laboratory, plus four universities, created Argo, an OS that starts with a single Linux-based OS and adapts it to extreme scale.

    When comparing the Hobbes OS to Argo, Brightwell says, “we think that without getting in that Linux box, we have more freedom in what we do, other than design choices already made in Linux. Both of these OSs are likely trying to get to the same place but using different research vehicles to get there.” One distinction: The Hobbes project uses virtualization to explore the use of multiple OSs working on the same simulation at extreme scale.

    As the scale of computation increases, an OS must also support new ways of managing a system’s resources. To explore some of those needs, Thomas Sterling, director of Indiana University’s Center for Research in Extreme Scale Technologies, developed ParalleX, an advanced execution model for computations. Brightwell leads a separate project called XPRESS to support the ParalleX execution model. Rather than computing’s traditional static methods, ParalleX implementations use dynamic adaptive techniques.
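
    The difference between static and dynamic execution can be illustrated with ordinary Python: a static scheme carves the work into fixed chunks up front, while a dynamic scheme hands out tasks as workers free up, adapting to uneven task times. The sketch below is only a conceptual analogue of that distinction, not the ParalleX model or the XPRESS software.

        # Conceptual contrast between static partitioning and dynamic scheduling.
        # This is an analogy for the distinction only, not ParalleX/XPRESS code.
        from concurrent.futures import ThreadPoolExecutor, as_completed
        import random
        import time

        def task(i):
            time.sleep(random.uniform(0.0, 0.02))   # tasks take unpredictable amounts of time
            return i * i

        work = list(range(100))

        # Static: pre-assign fixed chunks to 4 workers; an early finisher sits idle.
        chunks = [work[i::4] for i in range(4)]
        with ThreadPoolExecutor(max_workers=4) as pool:
            chunk_futures = [pool.submit(lambda c=c: [task(i) for i in c]) for c in chunks]
            static_results = [r for f in chunk_futures for r in f.result()]

        # Dynamic: submit tasks individually; whichever worker is free takes the next one.
        with ThreadPoolExecutor(max_workers=4) as pool:
            futures = [pool.submit(task, i) for i in work]
            dynamic_results = [f.result() for f in as_completed(futures)]

        print(len(static_results), len(dynamic_results))   # both 100; only the scheduling differs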

    More work is always necessary as computation works toward extreme scales. “The important thing in going forward from a runtime and OS perspective is the ability to evaluate technologies that are developing in terms of applications,” Brightwell explains. “For high-end applications to pursue functionality at extreme scales, we need to build that capability.” That’s just what Hobbes and XPRESS – and the ongoing research that follows them – aim to do.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The mission of the Energy Department is to ensure America’s security and prosperity by addressing its energy, environmental and nuclear challenges through transformative science and technology solutions.

     
  • richardmitnick 8:37 pm on September 14, 2015 Permalink | Reply
    Tags: , , D.O.E., ,   

    From D.O.E.: “Women @ Energy: Ingrid Fang” 

    DOE Main

    Department of Energy


    Ingrid Fang is an engineering analyst at Fermi National Accelerator Laboratory (Fermilab). She attended Northern Illinois University, where she earned a master of science degree in mechanical engineering. Ingrid says she enjoys her part in helping make history.

    1) What inspired you to work in STEM?

    I had 10 years of professional ballet training, starting when I was 4. I traded my ballet slippers for an engineering degree to please my dad. Dad is the biggest fan of Fermilab and its groundbreaking work. I turn an idea into a reality by analyzing complex experimental equipment under mechanical and thermal loads.

    2) What excites you about your work at the Department of Energy?

    Working with ultra-dedicated and competent professionals.

    3) How can our country engage more women, girls, and other underrepresented groups in STEM?

    Studies have shown that when told that men score better on math tests than women, women tend to score worse. When told that this isn’t true, the two genders score equally well. I think women and girls should not focus on this internal bias toward underrating their intelligence. There are so many role models to follow if they empower themselves with an intense desire. Madame Curie and her daughter set a good example for me.

    4) Do you have tips you’d recommend for someone looking to enter your field of work?

    A career can be intimidating at first, a great unknown hidden by your own anxiety and inexperience. But with passion and determination, you can succeed. Through this process you will build strong character, which money cannot buy. And when you look back on your life, those memories will be your most treasured, because those experiences defined who you really are.

    5) When you have free time, what are your hobbies?

    I like to read and study everything about life and its meaning. I am always searching to better myself, so I can make a positive contribution to our society, and in turn make a better life for everyone around me. I love to dance, and have volunteered to teach ballet to girls six years old and up. I have helped grownups get on the dance floor to feel young and alive again. I’ve helped people in poor health to find a meaning in their lives. I love to help people in need.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The mission of the Energy Department is to ensure America’s security and prosperity by addressing its energy, environmental and nuclear challenges through transformative science and technology solutions.

     