Tagged: TACC – Texas Advanced Computing Center

  • richardmitnick 4:40 pm on January 24, 2020 Permalink | Reply
    Tags: "Simulations Reveal Galaxy Clusters Details", , Astrophysicists have developed cosmological computer simulations called RomulusC where the ‘C' stands for galaxy cluster., , , , RomulusC has produced some of the highest resolution simulations ever of galaxy clusters which can contain hundreds or even thousands of galaxies., , TACC - Texas Advanced Computer Center   

    From Texas Advanced Computing Center: “Simulations Reveal Galaxy Clusters Details” 

    TACC bloc

    From Texas Advanced Computing Center

    January 23, 2020
    Jorge Salazar

    Galaxy clusters probed with Stampede2, Comet supercomputers [and others-see below]

    RomulusC has produced some of the highest resolution simulations ever of galaxy clusters, which can contain hundreds or even thousands of galaxies. The galaxy cluster simulations generated by supercomputers are helping scientists map the unknown universe. Credit: Butsky et al.

    Inspired by the science fiction of the spacefaring Romulans of Star Trek, astrophysicists have developed cosmological computer simulations called RomulusC, where the ‘C’ stands for galaxy cluster. With a focus on black hole physics, RomulusC has produced some of the highest resolution simulations ever of galaxy clusters, which can contain hundreds or even thousands of galaxies.

    On Star Trek, the Romulans powered their spaceships with an artificial black hole. In reality, it turns out that black holes can drive the formation of stars and the evolution of whole galaxies. And this galaxy cluster work is helping scientists map the unknown universe.

    An October 2019 study yielded results from RomulusC simulations, published in the Monthly Notices of the Royal Astronomical Society. It probed the ionized gas of mainly hydrogen and helium within and surrounding the intracluster medium, which fills the space between galaxies in a galaxy cluster.

    Hot, dense gas of more than a million degrees Kelvin fills the inner cluster with roughly uniform metallicity. Cool-warm gas between ten thousand and a million degrees Kelvin lurks in patchy distributions at the outskirts, with greater variety of metals. Looking like the tail of a jellyfish, the cool-warm gas traces the process of galaxies falling into the cluster and losing their gas. The gas gets stripped from the falling galaxy and eventually mixes with the inner region of the galaxy cluster.

    “We find that there’s a substantial amount of this cool-warm gas in galaxy clusters,” said study co-author Iryna Butsky, a PhD student in the Department of Astronomy at the University of Washington. “We see that this cool-warm gas traces extremely different and complementary structures compared to the hot gas. And we also predict that this cool-warm component can be observed now with existing instruments like the Hubble Space Telescope Cosmic Origins Spectrograph.”

    Scientists are just beginning to probe the intracluster medium, which is so diffuse that its emissions are invisible to any current telescopes. Scientists are using RomulusC to help see clusters indirectly using the ultraviolet (UV) light from quasars, which act like a beacon shining through the gas. The gas absorbs UV light, and the resulting spectrum yields density, temperature, and metallicity profiles when analyzed with instruments like the Cosmic Origins Spectrograph aboard the Hubble Space Telescope (HST).
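The absorption-line analysis described above has a standard back-of-envelope form: for an optically thin line, the column density of the absorbing ion follows from the measured equivalent width via the linear part of the curve of growth. A minimal sketch, with illustrative numbers (the 50 mÅ equivalent width is an assumption, not a measurement from the study):

```python
def linear_column_density(ew_angstrom, f_osc, wavelength_angstrom):
    """Column density (cm^-2) from equivalent width in the optically thin
    (linear curve-of-growth) regime: N = 1.13e20 * W / (f * lambda^2),
    with W and lambda in Angstroms."""
    return 1.13e20 * ew_angstrom / (f_osc * wavelength_angstrom ** 2)

# Illustrative numbers for the O VI 1032 A line (oscillator strength ~0.133):
# a 50 mA absorption feature implies a column of a few times 10^13 cm^-2,
# typical of the cool-warm gas traced in the study.
n_ovi = linear_column_density(0.05, 0.133, 1031.93)
print(f"N(O VI) ~ {n_ovi:.2e} cm^-2")
```

In practice, saturated lines leave this linear regime and the full curve of growth (or Voigt-profile fitting) is needed, which is part of why simulations like RomulusC are used to connect spectra back to 3-D gas properties.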

    NASA Hubble Cosmic Origins Spectrograph

    NASA/ESA Hubble Telescope

    A 5×5 megaparsec (~16.3 million light-years) snapshot of the RomulusC simulation at redshift z = 0.31. The top row shows density-weighted projections of gas density, temperature, and metallicity. The bottom row shows the integrated X-ray intensity, O VI column density, and H I column density. Credit: Butsky et al.
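For scale, a parsec is about 3.26 light-years, so the 5-megaparsec box in the snapshot spans roughly 16 million light-years:

```python
LY_PER_PARSEC = 3.2616  # one parsec in light-years

mpc = 5.0
light_years = mpc * 1e6 * LY_PER_PARSEC
print(f"{mpc} Mpc = {light_years:.3g} light-years")  # ~1.63e7
```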

    “One really cool thing about simulations is that we know what’s going on everywhere inside the simulated box,” Butsky said. “We can make some synthetic observations and compare them to what we actually see in absorption spectra and then connect the dots and match the spectra that’s observed and try to understand what’s really going on in this simulated box.”

    They applied a software tool called Trident developed by Cameron Hummels of Caltech and colleagues that takes the synthetic absorption line spectra and adds a bit of noise and instrument quirks known about the HST.

    “The end result is a very realistic looking spectrum that we can directly compare to existing observations,” Butsky said. “But what we can’t do with observations is reconstruct three-dimensional information from a one-dimensional spectrum. That’s what’s bridging the gap between observations and simulations.”
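The noise-injection step Butsky describes can be illustrated with a toy version (this is a stand-in for what Trident does, not Trident's actual implementation; the SNR value is an assumption):

```python
import math
import random

def add_gaussian_noise(flux, snr, seed=42):
    """Degrade a normalized spectrum with Gaussian noise at a target
    signal-to-noise ratio -- roughly the step tools like Trident perform
    before comparing synthetic spectra to real HST/COS data."""
    rng = random.Random(seed)
    return [f + rng.gauss(0.0, 1.0 / snr) for f in flux]

clean = [1.0] * 2000  # flat, normalized continuum
noisy = add_gaussian_noise(clean, snr=20)
scatter = math.sqrt(sum((f - 1.0) ** 2 for f in noisy) / len(noisy))
print(f"pixel scatter ~ {scatter:.3f}")  # ~0.05 for SNR = 20
```

Real instrument simulation also folds in the line-spread function and wavelength-dependent sensitivity of the spectrograph, which is what makes the synthetic spectra "very realistic looking."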

    One key assumption behind the RomulusC simulations supported by the latest science is that the gas making up the intracluster medium originates at least partly in the galaxies themselves. “We have to model how that gas gets out of the galaxies, which is happening through supernovae going off, and supernovae coming from young stars,” said study co-author Tom Quinn, a professor of astronomy at the University of Washington. That means a dynamic range of more than a billion to contend with.

    What’s more, clusters don’t form in isolation, so their environment has to be accounted for.

    Then there’s a computational challenge that’s particular to clusters. “Most of the computational action is happening in the very center of the cluster. Even though we’re simulating a much larger volume, most of the computation is happening at a particular spot. There’s a challenge of, as you’re trying to simulate this on a large supercomputer with tens of thousands of cores, how do you distribute that computation across those cores?” Quinn said.
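The imbalance Quinn describes can be seen in a toy 1-D decomposition: splitting the box into equal-volume slabs overloads the cores that own the dense cluster center, while splitting by particle count balances the work. This is only a sketch of the problem, not the scheme RomulusC's code actually uses:

```python
import random

rng = random.Random(1)
# Toy "cluster": most particles concentrated near the center of a [0, 1) box.
positions = [min(0.999, max(0.0, rng.gauss(0.5, 0.05))) for _ in range(100_000)]

def max_load(counts):
    """Worst-core load relative to the average (1.0 = perfectly balanced)."""
    return max(counts) / (sum(counts) / len(counts))

n_cores = 4

# Equal-volume slabs: each core owns a fixed quarter of the box.
volume_counts = [0] * n_cores
for x in positions:
    volume_counts[min(int(x * n_cores), n_cores - 1)] += 1

# Equal-count split: each core owns the same number of particles.
count_counts = [len(positions) // n_cores] * n_cores

print(max_load(volume_counts), max_load(count_counts))  # badly skewed vs. 1.0
```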

    Quinn is no stranger to computational challenges. Since 1995, he’s used computing resources funded by the National Science Foundation (NSF), most recently those that are part of XSEDE, the Extreme Science and Engineering Discovery Environment.

    “Over the course of my career, NSF’s ability to provide high-end computing has helped the overall development of the simulation code that produced this,” said Quinn. “These parallel codes take a while to develop. And XSEDE has been supporting me throughout that development period. Access to a variety of high-end machines has helped with the development of the simulation code.”

    RomulusC started out as a proof-of-concept with friendly user time on the Stampede2 [below] system at the Texas Advanced Computing Center (TACC), when the Intel Xeon Phi (“Knights Landing”) processors first became available. “I got help from the TACC staff on getting the code up and running on the many-core, 68-cores-per-chip machines,” Quinn said.

    Quinn and colleagues eventually scaled up RomulusC to 32,000 processors and completed the simulation on the Blue Waters system of the National Center for Supercomputing Applications.

    NCSA U Illinois Urbana-Champaign Blue Waters Cray Linux XE/XK hybrid machine supercomputer

    Along the way, they also used the NASA Pleiades supercomputer and the XSEDE-allocated Comet system at the San Diego Supercomputer Center, an Organized Research Unit of the University of California San Diego.

    NASA SGI Intel Advanced Supercomputing Center Pleiades Supercomputer

    SDSC Dell Comet supercomputer at San Diego Supercomputer Center (SDSC)

    “Comet fills a particular niche,” Quinn said. “It has large memory nodes available. Particular aspects of the analysis, for example identifying the galaxies, are not easily done on a distributed memory machine. Having the large shared memory machine available was very beneficial. In a sense, we didn’t have to completely parallelize that particular aspect of the analysis.”

    The Stampede2 supercomputer at the Texas Advanced Computing Center (left) and the Comet supercomputer at the San Diego Supercomputer Center (right) are allocated resources of the Extreme Science and Engineering Discovery Environment (XSEDE) funded by the National Science Foundation (NSF). Credit: TACC, SDSC.

    “Without XSEDE, we couldn’t have done this simulation,” Quinn recounted. “It’s essentially a capability simulation. We needed the capability to actually do the simulation, but also the capability of the analysis machines.”

    The next generation of simulations is being produced on the NSF-funded Frontera [below] system, the fastest academic supercomputer and currently the #5 fastest system in the world, according to the November 2019 Top500 List.

    “Right now on Frontera, we’re doing runs at higher resolution of individual galaxies,” Quinn said. “Since we started these simulations, we’ve been working on improving how we model star formation. And of course we have more computational power, so just purely higher mass resolution, again, to make our simulations of individual galaxies more realistic. More and bigger clusters would be good too,” Quinn added.

    Said Butsky: “What I think is really cool about using supercomputers to model the universe is that they play a unique role in allowing us to do experiments. In many of the other sciences, you have a lab where you can test your theories. But in astronomy, you can come up with a pen and paper theory and observe the universe as it is. But without simulations, it’s very hard to run these tests because it’s hard to reproduce some of the extreme phenomena in space, like temporal scales and getting the temperatures and densities of some of these extreme objects. Simulations are extremely important in being able to make progress in theoretical work.”

    The study, “Ultraviolet Signatures of the Multiphase Intracluster and Circumgalactic Media in the RomulusC Simulation,” was published in October 2019 in the Monthly Notices of the Royal Astronomical Society. The study co-authors are Iryna S. Butsky, Thomas R. Quinn, and Jessica K. Werk of the University of Washington; Joseph N. Burchett of UC Santa Cruz; and Daisuke Nagai and Michael Tremmel of Yale University. Study funding came from the NSF and NASA.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Education Coalition

    The Texas Advanced Computing Center (TACC) designs and operates some of the world’s most powerful computing resources. The center’s mission is to enable discoveries that advance science and society through the application of advanced computing technologies.

    TACC Maverick HP NVIDIA supercomputer

    TACC Lonestar Cray XC40 supercomputer

    Dell PowerEdge Stampede supercomputer at UT Austin, Texas Advanced Computing Center (9.6 PF)

    TACC HPE Apollo 8000 Hikari supercomputer


    TACC DELL EMC Stampede2 supercomputer

    TACC Frontera Dell EMC supercomputer fastest at any university

  • richardmitnick 4:21 pm on June 26, 2019 Permalink | Reply
    Tags: BOINC@TACC, TACC – Texas Advanced Computing Center, Volunteer Citizen Science

    From Texas Advanced Computing Center: “Science enthusiasts, researchers, and students benefit from volunteer computing using BOINC@TACC” 

    TACC bloc

    From Texas Advanced Computing Center

    June 24, 2019
    Faith Singer-Villalobos

    You don’t have to be a scientist to contribute to research projects in fields such as biomedicine, physics, astronomy, artificial intelligence, or earth sciences.

    Using specialized, open-source software from the Berkeley Open Infrastructure for Network Computing project (BOINC), hundreds of thousands of home and work computers contribute volunteer computing cycles from consumer devices and organizational resources. Developed over the past 17 years with funding primarily from the National Science Foundation (NSF), BOINC is now used by 38 projects running on more than half a million computers around the world.

    David Anderson, BOINC’s founder, is a research scientist at the University of California Berkeley Space Sciences Laboratory. His objective in creating BOINC was to build software to handle the details of distributed computing so that scientists wouldn’t have to.

    “I wanted to create a new way of doing scientific computing as an alternative to grids, clusters, and clouds,” Anderson said. “As a software system, BOINC has been very successful. It’s evolved without too many growing pains to handle multi-core CPUs, all kinds of GPUs, virtual machines and containers, and Android mobile devices.”

    The Texas Advanced Computing Center (TACC) started its own project in 2017 — BOINC@TACC — that supports virtualized, parallel, cloud, and GPU-based applications to allow the public to help solve science problems. BOINC@TACC is the first use of volunteer computing by a major high performance computing (HPC) center.

    “BOINC@TACC is an excellent project for making the general public a key contributor in science and technology projects,” said Ritu Arora, a research scientist at TACC and the project lead.

    “We love engaging with people in the community who can become science enthusiasts and connect with TACC and generate awareness of science projects,” Arora said. “And, importantly for students and researchers, there is always an unmet demand for computing cycles. If there is a way for us to connect these two communities, we’re fulfilling a major need.”

    BOINC volunteer Dick Duggan is a retired IT professional who lives in Massachusetts, and a volunteer computing enthusiast for more than a decade.

    “I’m a physics nerd. Those tend to be my favorite projects,” he said. “I contribute computing cycles to many projects, including the Large Hadron Collider (LHC). LHC is doing state-of-the-art physics — they’re doing physics on the edge of what we know about the universe and are pushing that edge out.”

    Duggan uses his laptop, desktop, tablet, and Raspberry Pi to provide computing cycles to BOINC@TACC. “When my phone is plugged in and charged, it runs BOINC@TACC, too.”

    Joining BOINC@TACC is simple: sign up as a volunteer, set up your device, and pick your projects.

    Compute cycles on more than 1,300 computing devices have been volunteered for the BOINC@TACC project and more than 300 devices have processed the jobs submitted using the BOINC@TACC infrastructure. The aggregate computer power available through the CPUs on the volunteered devices is about 3.5 teraflops (or 3.5 trillion floating point operations per second).
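A quick back-of-envelope check on those figures shows the aggregate works out to a few gigaflops per volunteered device, consistent with consumer CPUs:

```python
total_flops = 3.5e12  # ~3.5 teraflops aggregate, per the article
devices = 1_300       # devices volunteered to BOINC@TACC

per_device = total_flops / devices
print(f"~{per_device / 1e9:.1f} gigaflops per volunteered device on average")
```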


    It’s no secret that computational resources are in great demand, and that researchers with the most demanding computational requirements require supercomputing systems. Access to the most powerful supercomputers in the world, like the resources at TACC, is important for the advancement of science in all disciplines. However, with funding limitations, there is always an unmet need for these resources.

    “BOINC@TACC helps fill a gap in what researchers and students need and what the open-science supercomputing centers can currently provide them,” Arora said.

    Researchers from UT Austin; any of the 14 UT System institutions; and researchers around the country through XSEDE, the national advanced computing infrastructure in the U.S., are invited to submit science jobs to BOINC@TACC.

    To help researchers with this unmet need, TACC started a collaboration with Anderson at UC Berkeley to see how the center could outsource high-throughput computing jobs to BOINC.

    When a researcher is ready to submit projects through BOINC@TACC, all they need to do is log in to a TACC system and run a program from their account that will register them for BOINC@TACC, according to Arora. Thereafter, the researcher can continue running programs that will help them (1) decide whether BOINC@TACC is the right infrastructure for running their jobs; and (2) submit the qualified high-throughput computing jobs through the command-line interface. The researchers can also submit jobs through the web interface.

    Instead of the job running on Stampede2, for example, it could run on a volunteer’s home or work computer.

    “Our software matches the type of resources for a job and what’s available in the community,” Arora said. “The tightly-coupled, compute-intensive, I/O-intensive, and memory-intensive applications are not appropriate for running on the BOINC@TACC infrastructure. Therefore, such jobs are filtered out and submitted for running on Stampede2 or Lonestar5 instead of BOINC@TACC,” she clarified.

    A significant number of high-throughput computing jobs are also run on TACC systems in addition to the tightly-coupled MPI jobs. These high-throughput computing jobs consist of large sets of loosely-coupled tasks, each of which can be executed independently and in parallel to other tasks. Some of these high-throughput computing jobs have modest memory and input/output needs, and do not have an expectation of a fixed turnaround time. Such jobs qualify to run on the BOINC@TACC infrastructure.
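The qualification criteria above can be sketched as a simple predicate. This is an illustration of the filtering logic the article describes, not the actual BOINC@TACC code; the field names and thresholds are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Job:
    # Hypothetical job descriptor; the fields are illustrative, not the
    # actual BOINC@TACC schema.
    tightly_coupled: bool       # e.g. MPI ranks that must run together
    memory_gb: float
    io_gb: float
    needs_fixed_turnaround: bool

def boinc_eligible(job: Job) -> bool:
    """Only loosely coupled jobs with modest memory/IO needs and no
    fixed-turnaround expectation go to volunteers; the rest run on
    Stampede2 or Lonestar5."""
    return (not job.tightly_coupled
            and job.memory_gb <= 4
            and job.io_gb <= 1
            and not job.needs_fixed_turnaround)

print(boinc_eligible(Job(False, 2, 0.1, False)))  # True: high-throughput task
print(boinc_eligible(Job(True, 64, 10, True)))    # False: stays on Stampede2
```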

    “Volunteer computing is well-suited to this kind of workload,” Anderson said. “The idea of BOINC@TACC is to offload these jobs to a BOINC server, freeing up the supercomputers for the tightly-coupled parallel jobs that need them.”

    To start, the TACC team deployed an instance of the BOINC server on a cloud-computing platform. Next, the team developed the software for integrating BOINC with supercomputing and cloud computing platforms. During the process, the project team developed and released innovative software components that can be used by the community to support projects from a variety of domains. For example, a cloud-based shared filesystem and a framework for creating Docker images that was developed in this project can be useful for a variety of science gateway projects.

    As soon as the project became operational, volunteers enthusiastically started signing up. The number of researchers using BOINC@TACC is gradually increasing.

    Carlos Redondo, a senior in Aerospace Engineering at UT Austin, is both a developer on the BOINC@TACC project and a researcher who uses the infrastructure.

    “The incentive for researchers to use volunteer computing is that they save on their project allocation,” Redondo said. “But researchers need to be mindful that the number of cores on volunteer systems is going to be small, and they don’t have the special optimizations that servers at TACC have.”

    As a student researcher, Redondo has submitted multiple computational fluid dynamics jobs through BOINC@TACC. In this field, computers are used to simulate the flow of fluids and the interaction of the fluid (liquids and gases) with surfaces. Supercomputers can achieve better solutions and are often required to solve the largest and most complex problems.

    “The results in terms of the numbers produced from the volunteer devices were exactly those expected, and also identical to those running on Stampede2,” he said.

    Since jobs run whenever volunteers’ computers are available, researchers’ turnaround time is longer than that of Stampede2, according to Redondo. “Importantly, if a volunteer decides to stop a job, BOINC@TACC will automatically safeguard the progress, protect the data, and save the results.”
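The safeguard-and-resume behavior Redondo describes amounts to checkpointing: persist progress as you go, so an interrupted run picks up where it stopped rather than restarting. A toy sketch of the idea (not BOINC's actual mechanism):

```python
import json
import os
import tempfile

def run_with_checkpoint(path, total_steps, interrupt_at=None):
    """Process steps, persisting progress after each one so an interrupted
    run resumes where it left off."""
    done = 0
    if os.path.exists(path):
        with open(path) as f:
            done = json.load(f)["done"]  # resume from saved progress
    for step in range(done, total_steps):
        # ... real work for this step would happen here ...
        with open(path, "w") as f:
            json.dump({"done": step + 1}, f)
        if interrupt_at is not None and step + 1 == interrupt_at:
            return step + 1  # simulate the volunteer shutting the machine down
    return total_steps

ckpt = os.path.join(tempfile.mkdtemp(), "job.json")
first = run_with_checkpoint(ckpt, total_steps=10, interrupt_at=4)  # stops early
second = run_with_checkpoint(ckpt, total_steps=10)                 # resumes at 4
print(first, second)  # 4 10
```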

    TACC’s Technical Contribution to BOINC

    BOINC software works out of the box. What it does not support natively, however, is accepting jobs directly from supercomputers.

    “We’re integrating BOINC software with the software that is running on supercomputing devices so these two pieces can talk to each other when we have to route qualified high-throughput computing jobs from supercomputers to volunteer devices. The other piece TACC has contributed is extending BOINC to the cloud computing platforms,” Arora said.

    Unlike other BOINC projects, the BOINC@TACC infrastructure can execute jobs on Virtual Machines (VMs) running on cloud computing systems. These systems are especially useful for GPU jobs and for assuring a certain quality of service to the researchers. “If the pool of the volunteered resources goes down, we’re able to route the jobs to the cloud computing systems and meet the expectations of the researchers. This is another unique contribution of the project,” Arora said.

    BOINC@TACC is also pioneering the use of Docker to package custom-written science applications so that they can run on volunteered resources.

    Furthermore, the project team is planning to collaborate with companies that may have corporate social responsibility programs for soliciting compute-cycles on their office computers or cloud computing systems.

    “We have the capability to harness office desktops and laptops, and also the VMs in the cloud. We’ve demonstrated that we’re capable of routing jobs from Stampede2 to TACC’s cloud computing systems, Chameleon and Jetstream, through the BOINC@TACC infrastructure,” Arora said.

    Anderson concluded, “We hope that BOINC@TACC will provide a success story that motivates other large scientific computing centers to use the same approach. This will benefit thousands of computational scientists and, we hope, will greatly increase the volunteer population.”

    Dick Duggan expressed a common sentiment of BOINC volunteers that people want to do it for the love of science. “This is the least I can do. I may not be a scientist but I’m accomplishing something…and it’s fun to do,” Duggan said.

    Learn More: The software infrastructure that TACC developed for routing jobs from TACC systems to the volunteer devices and the cloud computing systems is described in this paper.

    BOINC@TACC is funded through NSF award #1664022. The project collaborators are grateful to TACC, XSEDE, and the Science Gateway Community Institute (SGCI) for providing the resources required for implementing this project.

    Computing power
    24-hour average: 17.707 PetaFLOPS.
    Active: 139,613 volunteers, 590,666 computers.
    BOINC is not considered for the TOP500 list because it is distributed, but it is currently more powerful than No. 9 Titan (17.590 PetaFLOPS) and No. 10 Sequoia (17.173 PetaFLOPS).
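Dividing the 24-hour average throughput by the number of active computers gives a sense of scale, roughly 30 gigaflops per machine:

```python
flops_24h = 17.707e15  # 24-hour average of 17.707 PetaFLOPS, per the article
computers = 590_666    # active computers

per_machine = flops_24h / computers
print(f"~{per_machine / 1e9:.0f} gigaflops per active computer")  # ~30
```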


    See the full article here.


  • richardmitnick 4:07 pm on March 21, 2019 Permalink | Reply
    Tags: Quantum Corp., TACC – Texas Advanced Computing Center, TACC has selected Quantum StorNext as their archive file system with a Quantum Scalar i6000 tape library providing dedicated Hierarchical Storage Management (HSM)

    From insideHPC: “TACC to power HSM Archives with Quantum Corp Tape Libraries” 

    From insideHPC

    Today Quantum Corp. announced the Texas Advanced Computing Center (TACC) has selected Quantum StorNext as their archive file system, with a Quantum Scalar i6000 tape library providing dedicated Hierarchical Storage Management (HSM).

    “Our ability to archive data is vital to TACC’s success, and the combination of StorNext as our archive file system managing Quantum hybrid storage, Scalar tape and our DDN primary disk will enable us to meet our commitments to the talented researchers who depend on TACC now and in the future,” said Tommy Minyard, Director of Advanced Computing at TACC.

    Tackling the Archive Challenge for Scientific Data

    TACC designs and operates some of the world’s most powerful computing resources. The center’s mission is to enable discoveries that advance science and society through the application of advanced computing technologies. TACC’s environment includes a comprehensive cyberinfrastructure ecosystem of leading-edge resources in high performance computing (HPC), visualization, data analysis, storage, archive, cloud, data-driven computing, connectivity, tools, APIs, algorithms, consulting, and software. TACC experts work with thousands of researchers on more than 3,000 projects each year.

    Researchers from around the globe leverage TACC’s computing resources for projects that span pure research and include partnerships with industry, generating an enormous volume of data which must be archived and remain accessible for future use. The Quantum system combined with DDN SFA14KX primary storage replaces TACC’s original Oracle solution for migrating files to and from tape archive. The new system will utilize LTO technologies, taking an open approach to archiving that is designed for future growth without the limitations of proprietary tape.
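At its core, an HSM setup like this decides which files to migrate from primary disk to the tape tier based on policy rules. A hypothetical age-and-size policy for illustration only (StorNext's real policy engine is configured quite differently):

```python
import time

def select_for_tape(files, min_age_days=90, min_size_mb=100):
    """Pick files to migrate from primary disk to tape: old enough that
    they are unlikely to be re-read soon, and large enough to be worth
    a tape mount. Thresholds here are illustrative assumptions."""
    cutoff = time.time() - min_age_days * 86_400
    return [name for name, (mtime, size_mb) in files.items()
            if mtime < cutoff and size_mb >= min_size_mb]

now = time.time()
catalog = {
    "run042/output.h5": (now - 200 * 86_400, 5_000),  # old and large -> tape
    "run099/output.h5": (now - 10 * 86_400, 5_000),   # recent -> stays on disk
    "notes.txt":        (now - 200 * 86_400, 1),      # tiny -> stays on disk
}
print(select_for_tape(catalog))  # ['run042/output.h5']
```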

    “TACC’s focus on constant innovation creates an environment that places tremendous stress on storage and Quantum has long been at the forefront in managing solutions that meet the most extreme reliability, accessibility and massive scalability requirements,” said Eric Bassier, Senior Director of Product Marketing at Quantum. “Combining Scalar tape with StorNext data management capabilities creates an HSM solution that is capable of delivering under the demanding conditions of the TACC environment.”

    See the full article here.



    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

  • richardmitnick 8:13 am on March 14, 2019 Permalink | Reply
    Tags: "Can computing change the world?", Advanced Computing for Social Change, Computing4Change, , TACC - Texas Advanced Computer Center   

    From Science Node: “Can computing change the world?” 

    Science Node bloc
    From Science Node

    13 Mar, 2019
    Ellen Glover

    Last November, sixteen undergraduate students from around the world came together in Texas to combine their skills and tackle the issue of violence.

    The event was part of Computing4Change, a program that brings undergraduate students together for a 48-hour intensive competition applying computing to urgent social issues, and that is dedicated to empowering students of all races, genders, and backgrounds to implement change through advanced computing and research. The 2018 topic was “Resisting Cultural Acceptance of Violence.”

    The challenge was developed by Kelly Gaither and Rosalia Gomez from the Texas Advanced Computing Center (TACC), and Linda Akli of the Southeastern Universities Research Association.

    Three years ago, as conference chair of the 2016 XSEDE conference in Miami, Gaither wanted to ensure that she authentically represented students’ voices to other conference attendees. Akli and Gomez led the student programs at the conference, bringing together a large, diverse group of students from Miami and the surrounding area.

    So she asked the students what issues they cared about. “It was shocking that most of the issues had nothing to do with their school life and everything to do with the social conditions that they deal with every day,” Gaither says.

    After that, Gaither, Gomez, and Akli promised that they would start a larger program to give students a platform for the issues they found important. They brought in Ruby Mendenhall from the University of Illinois Urbana-Champaign and Sue Fratkin, a public policy analyst concentrating on technology and communication issues.

    48-hour challenge. The student competitors had only 48 hours to do all of their research and come up with a 30-minute presentation before a panel of judges at the SC18 conference in Dallas, TX. Courtesy Computing4Change.

    Out of that collaboration came Advanced Computing for Social Change, a program that gave students a platform to use computing to investigate hot-button topics like Black Lives Matter and immigration. The inaugural competition was held at SC16 and was supported by the conference and by the National Science Foundation-funded XSEDE project.

    “The students at the SC16 competition were so empowered by being able to work on Black Lives Matter that they actually asked if they could work overnight and do the presentations later the next day,” Gaither says. “They felt like there was more work that needed to be done. I have never before seen that kind of enthusiasm for a given problem.”

    In 2018, Gaither, Gomez, and Akli made some big changes to the program and partnered with the Special Interest Group for High Performance Computing (SIGHPC). As a result of SIGHPC’s sponsorship, the program was renamed Computing4Change. Applications were opened up to national and international undergraduate students to ensure a diverse group of participants.

    “We know that the needle is not shifting with respect to diversity. We know that the pipeline is not coming in any more diverse, and we are losing diverse candidates when they do come into the pipeline,” Gaither says.

    The application included questions about what issues the applicants found important: What topics were they most passionate about and why? How did they see technology fitting into solutions?

    Within weeks, the program received almost 300 applicants for 16 available spots. An additional four students from Chaminade University of Honolulu were brought in to participate in the competition.

    In the months leading up to the conference, Gaither, Gomez, and Akli hosted a series of webinars teaching everything from data analytics to public speaking and understanding differences in personality types.

    All expenses, including flight, hotel, meals, and conference fees were covered for each student. “For some of these kids, this is the first time they’ve ever traveled on an airplane. We had a diverse set of academic backgrounds. For example, we had a student from Yale and a community college student,” says Gaither. “Their backgrounds span the gamut, but they all come in as equals.”

    Although they interacted online, the students didn’t meet in person until they showed up to the conference. That’s when they were assigned to their group of four and the competition topic of violence was revealed. The students had to individually decide what direction to take with the research and how that would mesh with their other group members’ choices.

    “Each of those kids had to have their individual hypothesis so that no one voice was more dominant than the other,” Gaither says. “And then they had to work together to find out what the common theme might be. We worked with them to assist with scope, analytics, and messaging.”

    The teams had 48 hours to do all of their research and come up with a 30-minute presentation to present to a panel of judges at the SC18 conference in Dallas, TX.

    All mentors stayed with the students, making sure they approached their research from a more personal perspective and worked through any unexpected roadblocks—just like they would have to in a real-world research situation.

    For example, one student wanted to find data on why people leave Honduras and seek asylum in the United States. Little explicit data exists on that topic, but there is data on why people from all countries seek asylum. The mentors encouraged her to look there for correlations.

    “That was a process of really trying to be creative about getting to the answer,” Gaither says. “But that’s life. With real data, that’s life.”

    The Computing4Change mentors also coached the students to analyze their data and present it clearly to the judges. Gaither hopes the students leave the program not only knowing more about advanced computing, but also more aware of their power to effect change. She says it’s easy to teach someone a skill, but it’s much more impactful to help them find a personal passion within that skill.

    “If you’re passionate about something, you’ll stick with it,” Gaither says. “You can plug into very large, complex problems that are relevant to all of us.”

    The next Computing4Change event will be held in Denver, CO, co-located with the SC19 conference, Nov 16-22, 2019. Travel, housing, meals, and SC19 conference registration are covered for the 20 students selected. The application deadline is April 8, 2019. Apply here.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

  • richardmitnick 12:43 pm on March 11, 2019 Permalink | Reply
    Tags: , , Frontera at TACC, , TACC - Texas Advanced Computer Center   

    From insideHPC: “New Texascale Magazine from TACC looks at HPC for the Endless Frontier” 

    From insideHPC


    March 11, 2019

    This feature story describes how the computational power of Frontera will be a game changer for research. Late last year, the Texas Advanced Computing Center announced plans to deploy Frontera, the world’s fastest supercomputer in academia.

    TACC Maverick HP NVIDIA supercomputer

    TACC Lonestar Cray XC40 supercomputer

    Dell PowerEdge Stampede supercomputer, The University of Texas at Austin, Texas Advanced Computing Center, 9.6 PF

    TACC HPE Apollo 8000 Hikari supercomputer


    TACC DELL EMC Stampede2 supercomputer

    TACC Frontera Dell EMC supercomputer fastest at any university

    To prepare for launch, TACC just published the inaugural edition of Texascale, an annual magazine with stories that highlight the people, science, systems, and programs that make TACC one of the leading academic computing centers in the world.

    In an inconspicuous-looking data center on The University of Texas at Austin’s J. J. Pickle Research Campus, construction is underway on one of the world’s most powerful supercomputers.

    The Frontera system (Spanish for “frontier”) will allow the nation’s academic scientists and engineers to probe questions both cosmic and commonplace — What is the universe composed of? How can we produce enough food to feed the Earth’s growing population? — that cannot be addressed in a lab or in the field; that require the number-crunching power equivalent to a small city’s worth of computers to solve; and that may be critical to the survival of our species.

    The name Frontera pays homage to the “endless frontier” of science envisioned by Vannevar Bush and presented in a report to President Harry Truman calling for a national strategy for scientific progress. The report led to the founding of the National Science Foundation (NSF) — the federal agency that funds fundamental research and education in science and engineering. It paved the way for investments in basic and applied research that laid the groundwork for our modern world, and inspired the vision for Frontera.

    “Whenever a new technological instrument emerges that can solve previously intractable problems, it has the potential to transform science and society,” said Dan Stanzione, executive director of TACC and one of the designers behind the new machine. “We believe that Frontera will have that kind of impact.”

    The Quest for Computing Greatness

    The pursuit of Frontera formally began in May 2017 when NSF issued an invitation for proposals for a new leadership-class computing facility, the top tier of high performance computing systems funded by the agency. The program would award $60 million to construct a supercomputer that could satisfy the needs of a scientific and engineering community that increasingly relies on computation.

    “For over three decades, NSF has been a leader in providing the computing resources our nation’s researchers need to accelerate innovation,” explained NSF Director France Córdova. “Keeping the U.S. at the forefront of advanced computing capabilities and providing researchers across the country access to those resources are key elements in maintaining our status as a global leader in research and education.”

    “The Frontera project is not just about the system; our proposal is anchored by an experienced team of partners and vendors with a community-leading track record of performance.” — Dan Stanzione, TACC

    Meet the Architects

    When TACC proposed Frontera, it didn’t simply offer to build a fastest-in-its-class supercomputer. It put together an exceptional team of supercomputer experts and power users who together have internationally recognized expertise in designing, deploying, configuring, and operating HPC systems at the largest scale. Learn more about principal investigators who led the charge.

    NSF’s invitation for proposals indicated that the initial system would only be the beginning. In addition to enabling cutting-edge computations, the supercomputer would serve as a platform for designing a future leadership-class facility to be deployed five years later that would be 10 times faster still — more powerful than anything that exists in the world today.

    TACC has deployed major supercomputers several times in the past with support from NSF. Since 2006, TACC has operated three supercomputers that debuted among the Top 15 most powerful in the world — Ranger (2008-2013, #4), Stampede1 (2012-2017, #7), and Stampede2 (2017-present, #12) — and three more systems that rose to the Top 25. These systems established TACC, which was founded in 2001, as one of the world leaders in advanced computing.

    TACC solidified its reputation when, on August 28, 2018, NSF announced that the center had won the competition to design, build, deploy, and run the most capable system they had ever commissioned.

    “This award is an investment in the entire U.S. research ecosystem that will enable leap-ahead discoveries,” NSF Director Córdova said at the time.

    Frontera represents a further step for TACC into the upper echelons of supercomputing — the Formula One race cars of the scientific computing world. When Frontera launches in 2019, it will be the fastest supercomputer at any U.S. university and one of the fastest in the world — a powerful, all-purpose tool for science and engineering.

    “Many of the frontiers of research today can be advanced only by computing,” Stanzione said. “Frontera will be an important tool to solve Grand Challenges that will improve our nation’s health, well-being, competitiveness, and security.”

    Supercomputers Expand the Mission

    Supercomputers have historically had very specific uses in the world of research, performing virtual experiments and analyses of problems that can’t be easily physically experimented upon or solved with smaller computers.

    Since 1945, when the ENIAC (Electronic Numerical Integrator and Computer) at the University of Pennsylvania first calculated artillery firing tables for the United States Army’s Ballistic Research Laboratory, the uses of large-scale computing have grown dramatically.

    Today, every discipline has problems that require advanced computing. Whether it’s cellular modeling in biology, the design of new catalysts in chemistry, black hole simulations in astrophysics, or Internet-scale text analyses in the social sciences, the details change, but the need remains the same.

    “Computation is arguably the most critical tool we possess to reach more deeply into the endless frontier of science,” Stanzione says. “While specific subfields of science need equipment like radio telescopes, MRI machines, and electron microscopes, large computers span multiple fields. Computing is the universal instrument.”

    In the past decade, the uses of high performance computing have expanded further. Massive amounts of data from sensors, wireless devices, and the Internet opened up an era of big data, for which supercomputers are well suited. More recently, machine and deep learning have provided a new way of not just analyzing massive datasets, but of using them to derive new hypotheses and make predictions about the future.

    As the problems that can be solved by supercomputers expanded, NSF’s vision for cyberinfrastructure — the catch-all term for the set of information technologies and people needed to provide advanced computing to the nation — evolved as well. Frontera represents the latest iteration of that vision.

    Data-Driven Design

    TACC’s leadership knew they had to design something innovative from the ground up to win the competition for Frontera. Taking a data-driven approach to the planning process, they investigated the usage patterns of researchers on Stampede1, as well as on Blue Waters — the previous NSF-funded leadership-class system — and in the Department of Energy (DOE)’s large-scale scientific computing program, INCITE, and analyzed the types of problems that scientists need supercomputers to solve.

    They found that Stampede1 usage was dominated by 15 commonly used applications. Together these accounted for 63 percent of Stampede1’s computing hours in 2016. Some 2,285 additional applications utilized the remaining 37 percent of the compute cycles. (These trends were consistent on Blue Waters and DOE systems as well.) Digging deeper, they determined that, of the top 15 applications, 97 percent of the usage solved equations that describe the motions of bodies in the universe, the interactions of atoms and molecules, or electrons and fluids in motion.

    Frontera will be the fastest supercomputer at a U.S. university and likely Top 5 in the world when it launches in 2019. It will support simulation, data analysis and AI on the largest scales.

    “We did a careful analysis to understand the questions our community was using our supercomputers to solve and the codes and equations they used to solve them,” said TACC’s director of High Performance Computing, Bill Barth. “This narrowed the pool of problems that Frontera would need to excel in solving.”

    But past use wasn’t the only factor they considered. “It was also important to consider emerging uses of advanced computing resources for which Frontera will be critical,” Stanzione said. “Prominent among these are data-driven and data-intensive applications, as well as machine and deep learning.”

    Though still small in terms of their overall use of Stampede2, and other current systems, these areas are growing quickly and offer new ways to solve enduring problems.

    Whereas researchers traditionally wrote HPC codes in programming languages like C++ and Fortran, data-intensive problems often require non-traditional software or frameworks, such as R, Python, or TensorFlow.

    “The coming decade will see significant efforts to integrate physics-driven and data-driven approaches to learning,” said Tommy Minyard, TACC director of Advanced Computing Systems. “We designed Frontera with the capability to address very large problems in these emerging communities of computation and serve a wide range of both simulation-based and data-driven science.”

    The Right Chips for the Right Jobs

    Anyone following computer hardware trends in recent years has noticed the blossoming of options in terms of computer processors. Today’s landscape includes a range of chip architectures, from low energy ARM processors common in cell phones, to adaptable FPGAs (field-programmable gate arrays), to many varieties of CPU, GPUs and AI-accelerating chips.

    The team considered a wide range of system options for Frontera before concluding that a CPU-based primary system with powerful Intel Xeon x86 nodes and a fast network would be the most useful platform for most applications.


    Once built, the main compute system is expected to achieve 35 to 40 petaflops of peak performance. For comparison, Frontera will be twice as powerful as Stampede2 (currently the fastest university supercomputer) and 70 times as fast as Ranger, which operated at TACC until 2013.

    To match what Frontera will compute in just one second, a person would have to perform one calculation every second for one billion years.
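The billion-year comparison can be checked with quick back-of-envelope arithmetic, taking roughly 38 petaflops as a midpoint of the quoted 35-40 petaflop peak (an assumption for illustration):

```python
# Sanity-check the billion-year comparison, assuming ~38 petaflops
# as a midpoint of the quoted 35-40 petaflop peak range.
frontera_peak_flops = 38e15            # operations per second
seconds_per_year = 365.25 * 24 * 3600

# Years a person would need, at one calculation per second,
# to match one second of Frontera:
years = frontera_peak_flops / seconds_per_year
print(f"{years:.2e} years")            # on the order of a billion years
```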

    In addition to its main system, Frontera will include a subsystem made up of graphics processing units (GPUs) that have proven particularly effective for deep learning and molecular dynamics problems.

    “For certain application classes that can make effective use of GPUs, the subsystem will provide a cost-efficient path to high performance for those in the community that can fully exploit it,” Stanzione said.

    Designing a Complete Ecosystem

    The effectiveness of a supercomputer depends on more than just its processors. Storage, networking, power, and cooling are all critical as well.

    Frontera will include a storage subsystem from DataDirect Networks with almost 53 petabytes of capacity and nearly 2 terabytes per second of aggregate bandwidth. Of this, 50 petabytes will use disk-based, distributed storage, while 3 petabytes will employ a new type of very fast storage known as Non-volatile Memory Express storage, broadening the system’s usefulness for the data science community.
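To put those storage figures in perspective, a rough calculation (idealized, since real workloads won't sustain peak aggregate bandwidth) shows how long streaming the entire filesystem once would take:

```python
# Back-of-envelope reading of the storage specs: time to stream the
# whole ~53 PB filesystem once at the full ~2 TB/s aggregate bandwidth.
# Idealized figure; real workloads won't sustain the peak.
capacity_bytes = 53e15       # ~53 petabytes
bandwidth_bps = 2e12         # ~2 terabytes per second, aggregate
hours = capacity_bytes / bandwidth_bps / 3600
print(f"{hours:.1f} hours")  # roughly 7.4 hours
```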

    Supercomputing applications often employ many compute nodes, or devices, at once, which requires passing data and instructions from one part of the system to another. Mellanox InfiniBand interconnects will provide 100 Gigabits per second (Gbps) connectivity to each node, and 200 Gbps between the central switches.

    These components will be integrated via servers from Dell EMC, which has partnered with TACC since 2003 on massive systems, including Stampede1 and Stampede2.

    “The new Frontera system represents the next phase in the long-term relationship between TACC and Dell EMC, focused on applying the latest technical innovation to truly enable human potential,” said Thierry Pellegrino, vice president of Dell EMC High Performance Computing.

    Though a top system in its own right, Frontera won’t operate as an island. Users will have access to TACC’s other supercomputers — Stampede2, Lonestar, Wrangler, and many more, each with a unique architecture — and storage resources, including Stockyard, TACC’s global file system; Corral, TACC’s data collection repository; and Ranch, a tape-based long-term archival system.

    Together, they compose an ecosystem for scientific computing that is arguably unmatched in the world.


    New Models of Access & Use

    Researchers traditionally interact with supercomputers through the command line — a text-only program that takes instructions and passes them on to the computer’s operating system to run.

    The bulk of a supercomputer’s time (roughly 90 percent of the cycles on Stampede2) is consumed by researchers using the system in this way. But as computing becomes more complex, having a lower barrier to entry and offering an end-to-end solution to access data, software, and computing services has grown in importance.

    Science gateways offer streamlined, user-friendly interfaces to cyberinfrastructure services. In recent years, TACC has become a leader in building these accessible interfaces for science.

    “Visual interfaces can remove much of the complexity of traditional HPC, and lower this entry barrier,” Stanzione said. “We’ve deployed more than 20 web-based gateways, including several of the most widely used in the world. On Frontera, we’ll allow any community to build their own portals, applications, and workflows, using the system as the engine for computations.”

    Though they use a minority of computing cycles, a majority of researchers actually access supercomputers through portals and gateways. To serve this group, Frontera will support high-level languages like Python, R, and Julia, and offer a set of RESTful APIs (application program interfaces) that will make the process of building community-wide tools easier.
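As a rough illustration of the gateway idea, a portal might translate a researcher's form input into a JSON job description and POST it to such an API. The field names and application name below are hypothetical assumptions for the sketch, not TACC's actual interface:

```python
import json

# Hypothetical sketch: a science gateway turns a form submission into
# a JSON job description for a REST jobs endpoint. Field names and the
# application name are illustrative assumptions, not TACC's actual API.
def build_job_request(app, nodes, inputs):
    """Build the JSON body a portal might POST to a jobs endpoint."""
    return json.dumps({
        "application": app,
        "nodeCount": nodes,
        "inputs": inputs,
    })

body = build_job_request("namd", nodes=4, inputs={"structure": "protein.psf"})
print(body)
```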

    “We’re committed to delivering the transformative power of computing to a wide variety of domains from science and engineering to the humanities,” said Maytal Dahan, TACC’s director of Advanced Computing Interfaces. “Expanding into disciplines unaccustomed to computing from the command line means providing access in a way that abstracts the complexity and technology and lets researchers focus on their scientific impact and discoveries.”


    The Cloud

    For some years, there has been a debate in the advanced computing community about whether supercomputers or “the cloud” are more useful for science. The TACC team believes it’s not about which is better, but how they might work together. By design, Frontera takes a bold step towards bridging this divide by partnering with the nation’s largest cloud providers — Microsoft, Amazon, and Google — to provide cloud services that complement TACC’s existing offerings and have unique advantages.

    These cloud services include long-term storage for sharing datasets with collaborators; access to additional types of computing processors and architectures that will appear after Frontera launches; cloud-based services like image classification; and Virtual Desktop Interfaces that allow a cloud-based filesystem to look like one’s home computer.

    It’s no secret that supercomputers use a lot of power. Frontera will require more than 5.5 megawatts to operate — the equivalent of powering more than 3,500 homes. To limit the expense and environmental impact of running Frontera, TACC will employ a number of energy-saving measures with the new system. Some were put in place years ago; others will be deployed at TACC for the first time. All told, TACC expects one-third of the power for Frontera to come from renewable sources.

    “The modern scientific computing landscape is changing rapidly,” Stanzione said. “Frontera’s computing ecosystem will be enhanced by playing to the unique strengths of the cloud, rather than competing with them.”

    Software & Containers

    When the applications that researchers rely on are not available on HPC systems, it creates a barrier to large-scale science. For that reason, Frontera will support the widest catalog of applications of any large-scale scientific computing system in history.

    TACC will work with application teams to support highly-tuned versions of several dozen of the most widely used applications and libraries. Moreover, Frontera will provide support for container-based virtualization, which sidesteps the challenges of adapting tools to a new system while enabling entirely new types of computation.

    With containers, user communities develop and test their programs on laptops or in the cloud, and then transfer those same workflows to HPC systems using programs like Singularity. This facilitates the development of event-driven workflows, which automate computations in response to external events like natural disasters, or for the collection of data from large-scale instruments and experiments.
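The event-driven pattern described here can be sketched as a small handler that reacts to a new-data event by running the same container image developed elsewhere. The image name, command, and file path below are placeholders, not a real TACC workflow:

```python
import subprocess

# Sketch of an event-driven workflow: a new-data event triggers the
# same containerized analysis developed on a laptop. The image name,
# command, and path are placeholders, not an actual TACC workflow.
def on_new_data(path, runner=subprocess.run):
    cmd = ["singularity", "exec", "analysis.sif", "process", path]
    return runner(cmd)

# For illustration, capture the command instead of executing it:
calls = []
on_new_data("/data/sensor_feed.h5", runner=calls.append)
print(calls[0])
```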

    “Frontera will be a more modern supercomputer, not just in the technologies it uses, but in the way people will access it,” Stanzione said.

    A Frontier System to Solve Frontier Challenges

    Talking about a supercomputer in terms of its chips and access modes is a bit like talking about a telescope in terms of its lenses and mounts. The technology is important, but the ultimate question is: what can it do that other systems can’t?

    Entirely new problems and classes of research will be enabled by Frontera. Examples of projects Frontera will tackle in its first year include efforts to explore models of the Universe beyond the Standard Model in collaboration with researchers from the Large Hadron Collider; research that uses deep learning and simulation to predict in advance when a major disruption may occur within a fusion reactor to prevent damaging these incredibly expensive systems; and data-driven genomics studies to identify the right species of crops to plant in the right place at the right time to maximize production and feed the planet. [See more about each project in the box below.]

    The LHC modeling effort, fusion disruption predictions, and genomic analyses represent the types of ‘frontier,’ Grand Challenge research problems Frontera will help address.

    “Many phenomena that were previously too complex to model with the hardware of just a few years ago are within reach for systems with tens of petaflops,” said Stanzione.

    A review committee made up of computational and domain experts will ultimately select the projects that will run on Frontera, with a small percentage of time reserved for emergencies (as in the case of hurricane forecasting), industry collaborations, or discretionary use.

    It’s impossible to say what the exact impact of Frontera will be, but for comparison, Stampede1, which was one quarter as powerful as Frontera, enabled research that led to nearly 4,000 journal articles. These include confirmations of gravitational wave detections by LIGO that contributed to a Nobel Prize in Physics in 2017; discoveries of FDA-approved drugs that have been successful in treating cancer; and a greater understanding of DNA interactions, enabling the design of faster and cheaper gene sequencers.

    From new machine learning techniques to diagnose and treat diseases to fundamental mathematical and computer science research that will be the basis for the next generation of scientists’ discoveries, Frontera will have an outsized impact on science nationwide.


    Physics Beyond the Standard Model

    The NSF program that funds Frontera is titled, Towards a Leadership-Class Computing Facility. This phrasing is important because, as powerful as Frontera is, NSF sees it as a step toward even greater support for the nation’s scientists and engineers. In fact, the program not only funds the construction and operation of Frontera — the fastest system NSF has ever deployed — it also supports the planning, experimentation, and design required to build a system in five years that will be 10 times more capable than Frontera.

    “We’ll be planning for the next generation of computational science and what that means in terms of hardware, architecture, and applications,” Stanzione said. “We’ll start with science drivers — the applications, workflows, and codes that will be used — and use those factors to determine the architecture and the balance between storage, networks, and compute needed in the future.”

    Much like the data-driven design process that influenced the blueprint for Frontera, the TACC team will employ a “design — operate — evaluate” cycle on Frontera to plan Phase 2.

    TACC has assembled a Frontera Science Engagement Team, consisting of more than a dozen leading computational scientists from a range of disciplines and universities, to help determine the “workload of the future” — the science drivers and requirements for the next generation of systems. The team will also act as liaisons to the broader community in their respective fields, presenting at major conferences, convening discussions, and recruiting colleagues to participate in the planning.

    Fusion physicist William Tang joined the Frontera Science Engagement Team in part because he believed in TACC’s vision for cyberinfrastructure. “AI and deep learning are huge areas of growth. TACC definitely saw that and encouraged that a lot more. That played a significant part in the winning proposal, and I’m excited to join the activities going forward,” Tang said.

    A separate technology assessment team will use a similar strategy to identify critical emerging technologies, evaluate them, and ultimately develop some as testbed systems.

    TACC will upgrade and make available their FPGA testbed, which investigates new ways of using interconnected FPGAs as computational accelerators. They also hope to add an ARM testbed and other emerging technologies.

    Other testbeds will be built offsite in collaboration with partners. TACC will work with Stanford University and Atos to deploy a quantum simulator that will allow them to study quantum systems. Partnerships with the cloud providers Microsoft, Google, and Amazon will allow TACC to track AMD (Advanced Micro Devices) solutions, neuromorphic prototypes, and tensor processing units.

    Finally, TACC will work closely with Argonne National Laboratory to assess the technologies that will be deployed in the Aurora21 system, which will enter production in 2021. TACC will have early access to the same compute and storage technologies that will be deployed in Aurora21, as well as Argonne’s simulators, prototypes, software tools, and application porting efforts, which TACC will evaluate for the academic research community.

    “The primary compute elements of Frontera represent a relatively conservative approach to scientific computing,” Minyard said. “While this may remain the best path forward through the mid-2020s and beyond, a serious evaluation of a Phase 2 system will require not only projections and comparisons, but hands-on access to future technologies. TACC will provide the testbed systems not only for our team and Phase 2 partners, but to our full user community as well.”

    Using the “design — operate — evaluate” process, TACC will develop a quantitative understanding of present and future application performance. It will build performance models for the processors, interconnects, storage, software, and modes of computing that will be relevant in the Phase 2 timeframe.

    “It’s a push/pull process,” Stanzione said. “Users must have an environment in which they can be productive today, but that also incentivizes them to continuously modernize their applications to take advantage of emerging computational technologies.”

    The deployment of two to three small scale systems at TACC will allow the assessment team to evaluate the performance of the system against their model and gather specific feedback from the NSF science user community on usability. From this process, the design of the Phase 2 leadership class system will emerge.

    With Great Power Comes Great Responsibility

    The design process will culminate some years in the future. Meanwhile, in the coming months, Frontera’s server racks will begin to roll into TACC’s data center. From January to March 2019, TACC will integrate the system with hundreds of miles of networking cables and install the software stack. In the spring, TACC will host an early user period where experienced researchers will test the system and work out any bugs. Full production will begin in the summer of 2019.

    “We want it to be one of the most useable and accessible systems in the world,” Stanzione said. “Our design is not uniquely brilliant by us. It’s the logical next step — smart engineering choices by experienced operators.”

    It won’t be TACC’s first rodeo. Over 17 years, the team has developed and deployed more than two dozen HPC systems totaling more than $150 million in federal investment. The center has grown to nearly 150 professionals, including more than 75 PhD computational scientists and engineers, and earned a stellar reputation for providing reliable resources and superb user service. Frontera will provide a unique resource for science and engineering, capable of scaling to the very largest capability jobs, running the widest array of jobs, and supporting science in all forms.

    The project represents the achievement of TACC’s mission of “Powering Discoveries to Change the World.”

    “Computation is a key element to scientific progress, to engineering new products, to improving human health, and to our economic competitiveness. This system will be the NSF’s largest investment in computing in the next several years. For that reason, we have an enormous responsibility to our colleagues all around the U.S. to deliver a system that will enable them to be successful,” Stanzione said. “And if we succeed, we can change the world.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

  • richardmitnick 10:43 am on September 20, 2018 Permalink | Reply
    Tags: , Cornell’s Center for Advanced Computing (CAC) was named a training partner on a $60 million National Science Foundation-funded project to build the fastest supercomputer at any U.S. university and o, , , TACC - Texas Advanced Computer Center   

    From Cornell Chronicle: “Cornell writing the (how-to) book on new supercomputer” 

    Cornell Bloc

    From Cornell Chronicle

    September 18, 2018
    Melanie Lefkowitz

    Cornell’s Center for Advanced Computing (CAC) was named a training partner on a $60 million, National Science Foundation-funded project to build the fastest supercomputer at any U.S. university and one of the most powerful in the world.


    CAC will develop training materials to help users get the most out of the Frontera supercomputer, to be deployed in summer 2019 at the Texas Advanced Computing Center at the University of Texas at Austin.

    Texas Advanced Computer Center

    “Computers don’t do great work unless you have people ready to use them for great research. Being able to be the on-ramp for a system like this is really valuable,” said Rich Knepper, CAC’s deputy director. “This represents the next step in leadership computing, and it’s an opportunity for Cornell to be a very integral part of that.”

    CAC, which provides high-performance computing and cloud computing services to the Cornell community and beyond, will receive $1 million from the NSF over the next five years to create Cornell Virtual Workshops – online content explaining how to use Frontera.

    The Texas Advanced Computing Center will build the supercomputer, with the primary computing system provided by Dell EMC and powered by Intel processors. Other partners in the project are the California Institute of Technology, Princeton University, Stanford University, the University of Chicago, the University of Utah, the University of California, Davis, Ohio State University, the Georgia Institute of Technology and Texas A&M University.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Once called “the first American university” by educational historian Frederick Rudolph, Cornell University represents a distinctive mix of eminent scholarship and democratic ideals. Adding practical subjects to the classics and admitting qualified students regardless of nationality, race, social circumstance, gender, or religion was quite a departure when Cornell was founded in 1865.

    Today’s Cornell reflects this heritage of egalitarian excellence. It is home to the nation’s first colleges devoted to hotel administration, industrial and labor relations, and veterinary medicine. Both a private university and the land-grant institution of New York State, Cornell University is the most educationally diverse member of the Ivy League.

    On the Ithaca campus alone nearly 20,000 students representing every state and 120 countries choose from among 4,000 courses in 11 undergraduate, graduate, and professional schools. Many undergraduates participate in a wide range of interdisciplinary programs, play meaningful roles in original research, and study in Cornell programs in Washington, New York City, and the world over.

  • richardmitnick 11:23 am on January 8, 2018 Permalink | Reply
    Tags: , Scientist's Work May Provide Answer to Martian Mountain Mystery, , TACC - Texas Advanced Computer Center,   

    From U Texas Dallas: “Scientist’s Work May Provide Answer to Martian Mountain Mystery” 

    U Texas Dallas

    Jan. 8, 2018
    Stephen Fontenot, UT Dallas
    (972) 883-4405

    By seeing which way the wind blows, a University of Texas at Dallas fluid dynamics expert has helped propose a solution to a Martian mountain mystery.

    Dr. William Anderson

    Dr. William Anderson, an assistant professor of mechanical engineering in the Erik Jonsson School of Engineering and Computer Science, co-authored a paper published in the journal Physical Review E that explains the common Martian phenomenon of a mountain positioned downwind from the center of an ancient meteorite impact zone.

    Anderson’s co-author, Dr. Mackenzie Day, worked on the project as part of her doctoral research at The University of Texas at Austin, where she earned her PhD in geology in May 2017. Day is a postdoctoral scholar at the University of Washington in Seattle.

    Gale Crater was formed by meteorite impact early in the history of Mars, and it was subsequently filled with sediments transported by flowing water. This filling preceded massive climate change on the planet, which introduced the arid, dusty conditions that have been prevalent for the past 3.5 billion years. This chronology indicates wind must have played a role in sculpting the mountain.

    “On Mars, wind has been the only driver of landscape change for over 3 billion years,” Anderson said. “This makes Mars an ideal planetary laboratory for aeolian morphodynamics — wind-driven movement of sediment and dust. We’re studying how Mars’ swirling atmosphere sculpted its surface.”

    Wind vortices blowing across the crater slowly formed a radial moat in the sediment, eventually leaving only the off-center Mount Sharp, a 3-mile-high peak similar in height to the rim of the crater. The mountain was skewed to one side of the crater because the wind excavated one side faster than the other, the research suggests.

    Day and Anderson first advanced the concept in an initial publication on the topic in Geophysical Research Letters. Now, they have shown via computer simulation that, given more than a billion years, Martian winds were capable of digging up tens of thousands of cubic kilometers of sediment from the crater — largely thanks to turbulence, the swirling motion within the wind stream.

    A digital elevation model of Gale Crater shows the pattern of mid-latitude Martian craters with interior sedimentary mounds.

    “The role of turbulence cannot be overstated,” Anderson said. “Since sediment movement increases non-linearly with drag imposed by the aloft winds, turbulent gusts literally amplify sediment erosion and transport.”
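Anderson’s point about non-linearity can be illustrated with a Bagnold-type flux law commonly used in aeolian transport studies. This is a generic sketch, not the paper’s actual model; the coefficient and threshold friction velocity below are made-up illustrative values.

```python
# Illustrative Bagnold/Lettau-style aeolian flux law (not the paper's model):
# sand flux grows roughly with the cube of the wind friction velocity u*,
# so turbulent gusts contribute far more transport than the mean wind alone.

def sand_flux(u_star, u_threshold=0.2, C=1.5):
    """Sediment mass flux in arbitrary units; zero below the threshold."""
    if u_star <= u_threshold:
        return 0.0
    return C * u_star**3 * (1 - (u_threshold / u_star) ** 2)

steady = sand_flux(0.4)   # mean wind
gusty = sand_flux(0.8)    # a gust with double the friction velocity
print(gusty / steady)     # doubling u* yields far more than double the flux
```

Because the response is cubic rather than linear, intermittent gusts dominate the long-term erosion budget, which is the sense in which turbulence "amplifies" sediment transport.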

    The location — and mid-latitude Martian craters in general — became of interest as NASA’s Curiosity rover landed in Gale Crater in 2012, where it has gathered data since then.

    “The rover is digging and cataloging data housed within Mount Sharp,” Anderson said. “The basic science question of what causes these mounds has long existed, and the mechanism we simulated has been hypothesized. It was through high-fidelity simulations and careful assessment of the swirling eddies that we could demonstrate efficacy of this model.”

    The theory Anderson and Day tested via computer simulations involves counter-rotating vortices — picture in your mind horizontal dust devils — spiraling around the crater to dig up sediment that had filled the crater in a warmer era, when water flowed on Mars.

    “These helical spirals are driven by winds in the crater, and, we think, were foremost in churning away at the dry Martian landscape and gradually scooping sediment from within the craters, leaving behind these off-center mounds,” Anderson said.

    The simulations’ demonstration that wind erosion could explain these geographical features offers insight into Mars’ distant past, as well as context for the samples collected by Curiosity.

    “It’s further indication that turbulent winds in the atmosphere could have excavated sediment from the craters,” Anderson said. “The results also provide guidance on how long different surface samples have been exposed to Mars’ thin, dry atmosphere.”

    This understanding of the long-term power of wind can be applied to Earth as well, although there are more variables on our home planet than Mars, Anderson said.

    “Swirling, gusty winds in Earth’s atmosphere affect problems at the nexus of landscape degradation, food security and epidemiological factors affecting human health,” Anderson said. “On Earth, however, landscape changes are also driven by water and plate tectonics, which are now absent on Mars. These drivers of landscape change generally dwarf the influence of air on Earth.”

    Computational resources for the study were provided by the Texas Advanced Computing Center at UT Austin.

    TACC Maverick HP NVIDIA supercomputer

    TACC Lonestar Cray XC40 supercomputer

    Dell Poweredge U Texas Austin Stampede Supercomputer. Texas Advanced Computer Center 9.6 PF

    TACC HPE Apollo 8000 Hikari supercomputer

    TACC DELL EMC Stampede2 supercomputer

    Day’s role in the research was supported by a Graduate Research Fellowship from the National Science Foundation.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The University of Texas at Dallas is a Carnegie R1 classification (Doctoral Universities – Highest research activity) institution, located in a suburban setting 20 miles north of downtown Dallas. The University enrolls more than 27,600 students — 18,380 undergraduate and 9,250 graduate —and offers a broad array of bachelor’s, master’s, and doctoral degree programs.

    Established by Eugene McDermott, J. Erik Jonsson and Cecil Green, the founders of Texas Instruments, UT Dallas is a young institution driven by the entrepreneurial spirit of its founders and their commitment to academic excellence. In 1969, the public research institution joined The University of Texas System and became The University of Texas at Dallas.

    A high-energy, nimble, innovative institution, UT Dallas offers top-ranked science, engineering and business programs and has gained prominence for a breadth of educational paths from audiology to arts and technology. UT Dallas’ faculty includes a Nobel laureate, six members of the National Academies and more than 560 tenured and tenure-track professors.

  • richardmitnick 10:32 am on December 20, 2017 Permalink | Reply
    Tags: , Computation combined with experimentation helped advance work in developing a model of osteoregeneration, Genes could be activated in human stem cells that initiate biomineralization a key step in bone formation, , , Silk has been shown to be a suitable scaffold for tissue regeneration, Silky Secrets to Make Bones, Stampede1, , TACC - Texas Advanced Computer Center,   

    From TACC: “Silky Secrets to Make Bones” 

    TACC bloc

    Texas Advanced Computing Center

    December 19, 2017
    Jorge Salazar

    Scientists used supercomputers and fused golden orb weaver spider web silk with silica to activate genes in human stem cells that initiated biomineralization, a key step in bone formation. (devra/flickr)

    Some secrets to repair our skeletons might be found in the silky webs of spiders, according to recent experiments guided by supercomputers. Scientists involved say their results will help understand the details of osteoregeneration, or how bones regenerate.
    A study found that genes could be activated in human stem cells that initiate biomineralization, a key step in bone formation. Scientists achieved these results with engineered silk derived from the dragline of golden orb weaver spider webs, which they combined with silica. The study appeared in September 2017 in the journal Advanced Functional Materials and is the result of a combined effort by three institutions: Tufts University, the Massachusetts Institute of Technology and Nottingham Trent University.

    XSEDE supercomputers Stampede at TACC and Comet at SDSC helped study authors simulate the head piece domain of the cell membrane protein receptor integrin in solution, based on molecular dynamics modeling. (Davoud Ebrahimi)

    SDSC Dell Comet supercomputer

    Study authors used the supercomputers Stampede1 at the Texas Advanced Computing Center (TACC) and Comet at the San Diego Supercomputer Center (SDSC) at the University of California San Diego through an allocation from XSEDE, the eXtreme Science and Engineering Discovery Environment, funded by the National Science Foundation. The supercomputers helped scientists model how the cell membrane protein receptor called integrin folds and activates the intracellular pathways that lead to bone formation. The research will help larger efforts to cure bone growth diseases such as osteoporosis or calcific aortic valve disease.

    “This work demonstrates a direct link between silk-silica-based biomaterials and intracellular pathways leading to osteogenesis,” said study co-author Zaira Martín-Moldes, a post-doctoral scholar at the Kaplan Lab at Tufts University. She researches the development of new biomaterials based on silk. “The hybrid material promoted the differentiation of human mesenchymal stem cells, the progenitor cells from the bone marrow, to osteoblasts as an indicator of osteogenesis, or bone-like tissue formation,” Martín-Moldes said.

    “Silk has been shown to be a suitable scaffold for tissue regeneration, due to its outstanding mechanical properties,” Martín-Moldes explained. It’s biodegradable. It’s biocompatible. And it’s fine-tunable through bioengineering modifications. The experimental team at Tufts University modified the genetic sequence of silk from golden orb weaver spiders (Nephila clavipes), fusing it with the silica-promoting peptide R5, derived from the silaffin gene of the diatom Cylindrotheca fusiformis.

    The bone formation study targeted biomineralization, a critical process in materials biology. “We would love to generate a model that helps us predict and modulate these responses both in terms of preventing the mineralization and also to promote it,” Martín-Moldes said.

    “High performance supercomputing simulations are utilized along with experimental approaches to develop a model for the integrin activation, which is the first step in the bone formation process,” said study co-author Davoud Ebrahimi, a postdoctoral associate at the Laboratory for Atomistic and Molecular Mechanics of the Massachusetts Institute of Technology.

    Integrin embeds itself in the cell membrane and mediates signals between the inside and the outside of cells. In its dormant state, the head unit sticking out of the membrane is bent over like a nodding sleeper. This inactive state prevents cellular adhesion. In its activated state, the head unit straightens out and is available for chemical binding at its exposed ligand region.

    “Sampling different states of the conformation of integrins in contact with silicified or non-silicified surfaces could predict activation of the pathway,” Ebrahimi explained. Sampling the folding of proteins remains a classically computationally expensive problem, despite recent and large efforts in developing new algorithms.

    The derived silk–silica chimera they studied weighed in around a hefty 40 kilodaltons. “In this research, what we did in order to reduce the computational costs, we have only modeled the head piece of the protein, which is getting in contact with the surface that we’re modeling,” Ebrahimi said. “But again, it’s a big system to simulate and can’t be done on an ordinary system or ordinary computers.”

    The computational team at MIT used the molecular dynamics package Gromacs, chemical simulation software available on both the Stampede1 and Comet supercomputing systems. “We could perform those large simulations by having access to XSEDE computational clusters,” he said.
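The production runs used Gromacs, but the core of any molecular dynamics engine is a time-integration loop like the one sketched below: a velocity-Verlet step for a single harmonic "bond" with made-up unit parameters, not the silk-silica system. Real simulations repeat such steps for millions of atoms over billions of femtosecond timesteps, which is why sampling protein conformations demands supercomputers.

```python
# Toy velocity-Verlet integrator for one harmonic degree of freedom
# (illustrative only; Gromacs does this for millions of interacting atoms).

def velocity_verlet(x, v, k=1.0, m=1.0, dt=0.01, steps=1000):
    a = -k * x / m                        # acceleration from the spring force
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt**2     # position update
        a_new = -k * x / m                # force at the new position
        v += 0.5 * (a + a_new) * dt       # velocity update, averaged forces
        a = a_new
    return x, v

x, v = velocity_verlet(1.0, 0.0)
# A good integrator nearly conserves total energy, here 0.5*k*x^2 + 0.5*m*v^2
print(0.5 * x**2 + 0.5 * v**2)
```

The cost scales with both the number of atoms (force evaluation) and the number of timesteps (sampling), so even modeling only the integrin head piece, as the team did, remains a large computation.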

    “I have a very long-standing positive experience using XSEDE resources,” said Ebrahimi. “I’ve been using them for almost 10 years now for my projects during my graduate and post-doctoral experiences. And the staff at XSEDE are really helpful if you encounter any problems. If you need software that should be installed and it’s not available, they help and guide you through the process of doing your research. I remember exchanging a lot of emails the first time I was trying to use the clusters, and I was not so familiar. I got a lot of help from XSEDE resources and people at XSEDE. I really appreciate the time and effort that they put in order to solve computational problems that we usually encounter during our simulation,” Ebrahimi reflected.

    Computation combined with experimentation helped advance work in developing a model of osteoregeneration. “We propose a mechanism in our work,” explained Martín-Moldes, “that starts with the silica-silk surface activating a specific cell membrane protein receptor, in this case integrin αVβ3.” She said this activation triggers a cascade in the cell through three mitogen-activated protein kinase (MAPK) pathways, the main one being the c-Jun N-terminal kinase (JNK) cascade.

    Proposed mechanism for hMSC osteogenesis induction on silica surfaces. A) The binding of integrin αVβ3 to the silica surface promotes its activation, which triggers a cascade involving the three MAPK pathways, ERK, p38 and mainly JNK (reflected as the wider arrow); JNK promotes AP-1 activation and translocation to the nucleus to activate the Runx2 transcription factor. Runx2 is ultimately responsible for the induction of bone extracellular matrix proteins and other osteoblast differentiation genes. B) In the presence of a neutralizing antibody against αVβ3, there is no activation and induction of MAPK cascades, thus no induction of bone extracellular matrix genes and hence no differentiation. (Davoud Ebrahimi)

    She added that other factors are also involved in this process such as Runx2, the main transcription factor related to osteogenesis. According to the study, the control system did not show any response, and neither did the blockage of integrin using an antibody, confirming its involvement in this process. “Another important outcome was the correlation between the amount of silica deposited in the film and the level of induction of the genes that we analyzed,” Martín-Moldes said. “These factors also provide an important feature to control in future material design for bone-forming biomaterials.”

    “We are doing a basic research here with our silk-silica systems,” Martín-Moldes explained. “But we are helping in building the pathway to generate biomaterials that could be used in the future. The mineralization is a critical process. The final goal is to develop these models that help design the biomaterials to optimize the bone regeneration process, when the bone is required to regenerate or to minimize it when we need to reduce the bone formation.”

    These results help advance the research and are useful in larger efforts to help cure and treat bone diseases. “We could help in curing disease related to bone formation, such as calcific aortic valve disease or osteoporosis, where we need to know the pathway to control the amount of bone formed, to either reduce or increase it,” Ebrahimi said.

    “Intracellular Pathways Involved in Bone Regeneration Triggered by Recombinant Silk–Silica Chimeras,” DOI: 10.1002/adfm.201702570, appeared September 2017 in the journal Advanced Functional Materials. The National Institutes of Health funded the study, and the National Science Foundation through XSEDE provided computational resources. The study authors are Zaira Martín-Moldes, Nina Dinjaski, David L. Kaplan of Tufts University; Davoud Ebrahimi and Markus J. Buehler of the Massachusetts Institute of Technology; Robyn Plowright and Carole C. Perry of Nottingham Trent University.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The Texas Advanced Computing Center (TACC) designs and operates some of the world’s most powerful computing resources. The center’s mission is to enable discoveries that advance science and society through the application of advanced computing technologies.


  • richardmitnick 9:49 am on October 30, 2017 Permalink | Reply
    Tags: , , TACC - Texas Advanced Computer Center,   

    From University of Texas at Austin: “UT Is Now Home to the Fastest Supercomputer at Any U.S. University” 

    U Texas Austin bloc

    University of Texas at Austin

    October 27, 2017
    Anna Daugherty

    The term “medical research” might bring to mind a sterile room with white lab coats, goggles, and vials. But for cutting-edge researchers, that picture is much more high-tech: it’s a room filled with row after row of metal racks housing 300,000 computer processors, each blinking green, wires connecting each processor, and the deafening sound of a powerful machine at work. It’s a room like the one housing the 4,000-square-foot supercomputer Stampede2 at The University of Texas’ J.J. Pickle Research Campus.


    At peak performance, Stampede2, the flagship supercomputer at UT Austin’s Texas Advanced Computing Center (TACC), will be capable of performing 18 quadrillion operations per second (18 petaflops, in supercomputer lingo). That’s more powerful than 100,000 desktops. As the fastest supercomputer at any university in the U.S., it’s a level of computing that the average citizen can’t comprehend. Most people do their computing on phones the size of their hands—but then again, most aren’t mining cancer data, predicting earthquakes, or analyzing black holes.
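The "more powerful than 100,000 desktops" comparison can be sanity-checked with back-of-envelope arithmetic. The desktop figure below is an assumption (roughly 180 gigaflops of peak performance), not a number from the article.

```python
# Back-of-envelope check of the 100,000-desktop comparison.
stampede2_peak = 18e15   # 18 petaflops = 18 quadrillion operations/second
desktop_peak = 180e9     # assumed ~180 gigaflops for a typical desktop

print(stampede2_peak / desktop_peak)  # → 100000.0
```

Under that assumption the ratio comes out to exactly 100,000, consistent with the article's claim.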

    Funded by a $30 million grant from the National Science Foundation, Stampede2 replaces the original Stampede system, which went live in 2013. Designed to be twice as powerful while using half the energy of the older system, Stampede2 is already being used by researchers around the country. In June 2017, Stampede2 went public with 12 petaflops and was ranked as the 12th most powerful computer in the world. Phase two added six petaflops in September and phase three will complete the system in 2018 by adding a new type of memory capacity to the computer.

    For researchers like Rommie Amaro, professor of chemistry at the University of California, San Diego, a tool like Stampede2 is essential. As the director of the National Biomedical Computation Resource, Amaro says nearly all of their drug research is done on supercomputers.

    Most of her work with the original Stampede system focused on a protein called p53, which prevents tumor growth; the protein is mutated in approximately half of all cancer patients. Due to the nature of p53, it’s difficult to track with standard imaging tools, so Amaro’s team took available images of the protein to supercomputers and turned them into a simulation showing how the 1.6 million atoms in p53 move. Using Stampede, they were able to find weaknesses in p53 and simulate interactions with more than a million compounds; several hundred seemed capable of restoring p53. More than 30 proved successful in labs and are now being tested by a pharmaceutical company.

    “The first Stampede gave us really outstanding, breakthrough research for cancer,” Amaro says. “And we already have some really interesting preliminary data on what Stampede2 is going to give us.”

    And it’s not just the medical field that benefits. Stampede has created weather phenomena models that have shown new ways to measure tornado strength, and produced seismic hazard maps that predict the likelihood of earthquakes in California. It has also helped increase the accuracy of hurricane predictions by 20–25 percent. During Hurricane Harvey in August, researchers used TACC supercomputers to forecast how high water would rise near the coast and to predict flooding in rivers and creeks in its aftermath.

    Aaron Dubrow, strategic communications specialist at TACC, says supercomputer users either use publicly available programs or create an application from the mathematics of the problem they are researching. “You take an idea like how cells divide and turn that into a computer algorithm and it becomes a program of sorts,” he says. Researchers can log into the supercomputer remotely or send their program to TACC staff. Stampede2 also has web portals for smaller problems in topics like drug discovery or natural disasters.
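Dubrow’s example of turning "how cells divide" into a computer algorithm can be shown in miniature. The toy model below is purely hypothetical (not TACC code): each cell divides once per cycle until a carrying-capacity cap is reached.

```python
# Hypothetical toy model of Dubrow's "cells divide -> algorithm" example:
# the biological rule (each cell divides per cycle, up to a capacity limit)
# becomes a simple simulation loop.

def grow(cells, capacity, cycles):
    """Return the population after each division cycle."""
    history = [cells]
    for _ in range(cycles):
        cells = min(cells * 2, capacity)  # every cell divides, capped
        history.append(cells)
    return history

print(grow(1, 100, 8))  # → [1, 2, 4, 8, 16, 32, 64, 100, 100]
```

A research-grade version replaces the doubling rule with measured division rates and runs across thousands of processors, but the translation step from biology to loop is the same.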

    For Dan Stanzione, executive director at the TACC, some of the most important research isn’t immediately applied. “Basic science has dramatic impacts on the world, but you might not see that until decades from now.” He points to Einstein’s 100-year-old theory of gravitational waves, which was recently confirmed with the help of supercomputers across the nation, including Stampede. “You might wonder why we care about gravitational waves. But now we have satellite, TV, and instant communications around the world because of Einstein’s theories about gravitational waves 100 years ago.”

    According to Stanzione, the first Stampede had nearly 40,000 users and approximately 3,500 completed projects. Like Stampede, the new Stampede2 is expected to have a four-year lifespan. “Your smartphone starts to feel old and slow after four or five years, and supercomputers are the same,” he says. “They may still be fast, but it’s made out of four-year-old processors. The new ones are faster and more power efficient to run.” The old processors don’t go to waste, though; most will be donated to state institutions across Texas.

    In order to use a supercomputer, researchers must submit proposals to an NSF board, which then allocates hours of usage. Stanzione says there are requests for nearly a billion processor hours every quarter, several times more than is available nationwide. And while nearly every university now has some sort of supercomputer, Stanzione says, the U.S. still lags behind China in computing power. The world’s top two computers are both Chinese, and the first is nearly five times more powerful than the largest in the United States.

    Regardless, Stampede2 will still manage to serve researchers from more than 400 universities. Other users include private businesses, such as Firefly Space Company in nearby Cedar Park, and some government users like the Department of Energy and the U.S. Department of Agriculture. Stanzione says all work done on Stampede2 must be public and published research.

    “Being the leader in large-scale computational sciences and engineering means we can attract the top researchers who need these resources,” he says. “It helps attract those top scholars to UT. And then hopefully once they’re here, it helps them reach these innovations a little faster.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    In 1839, the Congress of the Republic of Texas ordered that a site be set aside to meet the state’s higher education needs. After a series of delays over the next several decades, the state legislature reinvigorated the project in 1876, calling for the establishment of a “university of the first class.” Austin was selected as the site for the new university in 1881, and construction began on the original Main Building in November 1882. Less than one year later, on Sept. 15, 1883, The University of Texas at Austin opened with one building, eight professors, one proctor, and 221 students — and a mission to change the world. Today, UT Austin is a world-renowned higher education, research, and public service institution serving more than 51,000 students annually through 18 top-ranked colleges and schools.

  • richardmitnick 10:43 am on October 9, 2017 Permalink | Reply
    Tags: A free flexible and secure way to provide multiple factors of authentication to your community, OpenMFA, , TACC - Texas Advanced Computer Center   

    From TACC: “A free, flexible, and secure way to provide multiple factors of authentication to your community” 

    TACC bloc

    Texas Advanced Computing Center

    TACC develops multi-factor authentication solution, makes it available open-source.

    Published on October 9, 2017 by Aaron Dubrow

    How does a supercomputing center enable tens of thousands of researchers to securely access its high-performance computing systems while still allowing ease of use? And how can it be done affordably?

    These are questions that the Texas Advanced Computing Center (TACC) asked themselves when they sought to upgrade their system security. They had previously relied on usernames and passwords for access, but with a growing focus on hosting confidential health data and the increased compliance standards that entails, they realized they needed a more rigorous solution.

    In 2015, TACC began looking for an appropriate multi-factor authentication (MFA) solution that would provide an extra layer of protection against brute-force attacks. What they quickly discovered was that the available commercial solutions would cost them tens to hundreds of thousands of dollars per year to provide to their large community of users.

    Moreover, most MFA systems lacked the flexibility needed to allow diverse researchers to access TACC systems in a variety of ways — from the command line, through science gateways (which perform computations without requiring researchers to directly access HPC systems), and using automated workflows.

    So, they did what any group of computing experts and software developers would do: they built their own MFA system, which they call OpenMFA.

    They didn’t start from scratch. Instead they scoured the pool of state-of-the-art open source tools available. Among them was LinOTP, a one-time password platform developed and maintained by KeyIdentity GmbH, a German software company. To this, they added the standard networking protocols RADIUS and HTTPS, and glued it all together using custom pluggable authentication modules (PAM) that they developed in-house.

    TACC Token App generating token code.

    This approach integrates cleanly with common data transfer protocols, adds flexibility to the system (in part, so they could create whitelists that include the IP addresses that should be exempted), and supports opt-in or mandatory deployments. Researchers can use the TACC-developed OpenMFA system in three ways: via a software token, an SMS, or a low-cost hardware token.
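OpenMFA builds on LinOTP’s one-time passwords; software tokens of this kind typically implement the TOTP algorithm (RFC 6238), in which server and token share a secret and independently derive a short-lived code from the current time. The sketch below is a minimal illustration of that standard, not OpenMFA’s actual code, and uses the RFC 6238 SHA-1 test secret.

```python
import hashlib
import hmac
import struct
import time

# Minimal TOTP (RFC 6238) sketch of how a software token derives its codes.
# Illustration only -- this is NOT OpenMFA's implementation.

def totp(secret: bytes, t=None, step=30, digits=6):
    t = int(time.time() if t is None else t)
    counter = struct.pack(">Q", t // step)            # 64-bit time-step counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: shared secret "12345678901234567890", T = 59 seconds
print(totp(b"12345678901234567890", t=59))  # → "287082"
```

SMS and hardware tokens deliver an equivalent short-lived code over a different channel; in every case the server verifies a login by recomputing the code for the current time window and comparing.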

    Over three months, they transitioned 10,000 researchers to OpenMFA, while giving them the opportunity to test the new system at their leisure. In October 2016, use of the MFA became mandatory for TACC users.

    Since that time, OpenMFA has recorded more than half a million logins and counting. TACC has also open-sourced the tool for free, public use. The Extreme Science and Engineering Discovery Environment (XSEDE) is considering OpenMFA for its large user base, and many other universities and research centers have expressed interest in using the tool.

    TACC developed OpenMFA to suit the center’s needs and to save money. But in the end, the tool will also help many other taxpayer-funded institutions improve their security while maintaining research productivity. This allows funding to flow into other efforts, thus increasing the amount of science that can be accomplished, while making that research more secure.

    TACC staff will present the details of OpenMFA’s development at this year’s Internet2 Technology Exchange and at The International Conference for High Performance Computing, Networking, Storage and Analysis (SC17).

    To learn more about OpenMFA or explore the code, visit the Github repository.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The Texas Advanced Computing Center (TACC) designs and operates some of the world’s most powerful computing resources. The center’s mission is to enable discoveries that advance science and society through the application of advanced computing technologies.

