Tagged: TACC – Texas Advanced Computer Center

  • richardmitnick 4:07 pm on March 21, 2019 Permalink | Reply
    Tags: Quantum Corp., TACC - Texas Advanced Computer Center, TACC has selected Quantum StorNext as their archive file system with a Quantum Scalar i6000 tape library providing dedicated Hierarchical Storage Management (HSM).

    From insideHPC: “TACC to power HSM Archives with Quantum Corp Tape Libraries” 

    From insideHPC

    Today Quantum Corp. announced that the Texas Advanced Computing Center (TACC) has selected Quantum StorNext as its archive file system, with a Quantum Scalar i6000 tape library providing dedicated Hierarchical Storage Management (HSM).

    “Our ability to archive data is vital to TACC’s success, and the combination of StorNext as our archive file system managing Quantum hybrid storage, Scalar tape and our DDN primary disk will enable us to meet our commitments to the talented researchers who depend on TACC now and in the future,” said Tommy Minyard, Director of Advanced Computing at TACC.

    Tackling the Archive Challenge for Scientific Data

    TACC designs and operates some of the world’s most powerful computing resources.

    TACC Maverick HP NVIDIA supercomputer

    TACC Lonestar Cray XC40 supercomputer

    TACC Stampede Dell PowerEdge supercomputer at The University of Texas at Austin (9.6 PF)

    TACC Hikari HPE Apollo 8000 supercomputer

    TACC Stampede2 Dell EMC supercomputer

    TACC Frontera Dell EMC supercomputer, the fastest at any university

    The center’s mission is to enable discoveries that advance science and society through the application of advanced computing technologies. TACC’s environment includes a comprehensive cyberinfrastructure ecosystem of leading-edge resources in high performance computing (HPC), visualization, data analysis, storage, archive, cloud, data-driven computing, connectivity, tools, APIs, algorithms, consulting, and software. TACC experts work with thousands of researchers on more than 3,000 projects each year.

    Researchers from around the globe leverage TACC’s computing resources for projects that span pure research and include partnerships with industry, generating an enormous volume of data that must be archived and kept accessible for future use. The Quantum system, combined with DDN SFA14KX primary storage, replaces TACC’s original Oracle solution for migrating files to and from the tape archive. The new system will use LTO tape technology, an open approach to archiving designed for future growth without the limitations of proprietary tape.
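
    For readers unfamiliar with HSM, the core idea is a policy engine that migrates infrequently accessed files from fast primary disk to cheaper tape while keeping them visible in the file system, so they can be recalled on demand. The Python sketch below is a minimal, hypothetical illustration of that tiering logic only; it is not StorNext's actual policy engine, and the paths and age threshold are invented for the example.

    import os
    import shutil
    import time

    # Hypothetical tiers: fast primary disk and a directory standing in for tape.
    PRIMARY = "/primary/archive"          # assumed path, for illustration only
    TAPE_TIER = "/tape/archive"           # stand-in for the tape library tier
    AGE_THRESHOLD = 90 * 24 * 3600        # migrate files idle for 90 days (arbitrary)

    def migrate_cold_files():
        """Move files not accessed recently to the slower tier, leaving a stub
        behind so the data can be located later (a crude sketch of HSM migration)."""
        now = time.time()
        for root, _dirs, files in os.walk(PRIMARY):
            for name in files:
                path = os.path.join(root, name)
                if now - os.path.getatime(path) < AGE_THRESHOLD:
                    continue  # still "hot" -- keep on primary disk
                dest = os.path.join(TAPE_TIER, os.path.relpath(path, PRIMARY))
                os.makedirs(os.path.dirname(dest), exist_ok=True)
                shutil.move(path, dest)
                # Record where the data went, so a later "recall" step could
                # stage it back to primary storage on demand.
                with open(path + ".stub", "w") as stub:
                    stub.write(dest + "\n")

    if __name__ == "__main__":
        migrate_cold_files()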

    “TACC’s focus on constant innovation creates an environment that places tremendous stress on storage and Quantum has long been at the forefront in managing solutions that meet the most extreme reliability, accessibility and massive scalability requirements,” said Eric Bassier, Senior Director of Product Marketing at Quantum. “Combining Scalar tape with StorNext data management capabilities creates an HSM solution that is capable of delivering under the demanding conditions of the TACC environment.”

    See the full article here.


    Please help promote STEM in your local schools.

    STEM Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     
  • richardmitnick 8:13 am on March 14, 2019 Permalink | Reply
    Tags: "Can computing change the world?", Advanced Computing for Social Change, Computing4Change, , TACC - Texas Advanced Computer Center   

    From Science Node: “Can computing change the world?” 

    From Science Node

    13 Mar, 2019
    Ellen Glover

    Last November, sixteen undergraduate students from around the world came together in Texas to combine their skills and tackle the issue of violence.


    The Computing4Change program brings together undergraduate students for a 48-hour intensive competition to apply computing to urgent social issues. The 2018 topic was “Resisting Cultural Acceptance of Violence.”

    This was part of Computing4Change, a program dedicated to empowering students of all races, genders, and backgrounds to implement change through advanced computing and research.

    The challenge was developed by Kelly Gaither and Rosalia Gomez from the Texas Advanced Computing Center (TACC), and Linda Akli of the Southeastern Universities Research Association.

    Three years ago, as the conference chair at the 2016 XSEDE conference in Miami, Gaither wanted to ensure that she authentically represented students’ voices to other conference attendees. Akli and Gomez led the student programs at the conference, bringing together a large, diverse group of students from Miami and the surrounding area.

    So she asked the students what issues they cared about. “It was shocking that most of the issues had nothing to do with their school life and everything to do with the social conditions that they deal with every day,” Gaither says.

    After that, Gaither, Gomez, and Akli promised that they would start a larger program to give students a platform for the issues they found important. They brought in Ruby Mendenhall from the University of Illinois Urbana-Champaign and Sue Fratkin, a public policy analyst concentrating on technology and communication issues.

    48-hour challenge. The student competitors had only 48 hours to do all of their research and come up with a 30-minute presentation before a panel of judges at the SC18 conference in Dallas, TX. Courtesy Computing4Change.

    Out of that collaboration came Advanced Computing for Social Change, a program that gave students a platform to use computing to investigate hot-button topics like Black Lives Matter and immigration. The inaugural competition was held at SC16 and was supported by the conference and by the National Science Foundation-funded XSEDE project.

    “The students at the SC16 competition were so empowered by being able to work on Black Lives Matter that they actually asked if they could work overnight and do the presentations later the next day,” Gaither says. “They felt like there was more work that needed to be done. I have never before seen that kind of enthusiasm for a given problem.”

    In 2018, Gaither, Gomez, and Akli made some big changes to the program and partnered with the Special Interest Group for High Performance Computing (SIGHPC). As a result of SIGHPC’s sponsorship, the program was renamed Computing4Change. Applications were opened up to national and international undergraduate students to ensure a diverse group of participants.

    “We know that the needle is not shifting with respect to diversity. We know that the pipeline is not coming in any more diverse, and we are losing diverse candidates when they do come into the pipeline,” Gaither says.

    The application included questions about what issues the applicants found important: What topics were they most passionate about and why? How did they see technology fitting into solutions?

    Within weeks, the program received almost 300 applications for 16 available spots. An additional four students from Chaminade University of Honolulu were brought in to participate in the competition.

    In the months leading up to the conference, Gaither, Gomez, and Akli hosted a series of webinars teaching everything from data analytics to public speaking and understanding differences in personality types.

    All expenses, including flight, hotel, meals, and conference fees were covered for each student. “For some of these kids, this is the first time they’ve ever traveled on an airplane. We had a diverse set of academic backgrounds. For example, we had a student from Yale and a community college student,” says Gaither. “Their backgrounds span the gamut, but they all come in as equals.”

    Although they interacted online, the students didn’t meet in person until they showed up to the conference. That’s when they were assigned to their group of four and the competition topic of violence was revealed. The students had to individually decide what direction to take with the research and how that would mesh with their other group members’ choices.

    “Each of those kids had to have their individual hypothesis so that no one voice was more dominant than the other,” Gaither says. “And then they had to work together to find out what the common theme might be. We worked with them to assist with scope, analytics, and messaging.”

    The teams had 48 hours to do all of their research and prepare a 30-minute presentation for a panel of judges at the SC18 conference in Dallas, TX.

    All mentors stayed with the students, making sure they approached their research from a more personal perspective and worked through any unexpected roadblocks—just like they would have to in a real-world research situation.

    For example, one student wanted to find data on why people leave Honduras and seek asylum in the United States. Little explicit data exists on that topic, but there is data on why people from all countries seek asylum. The mentors encouraged her to look there for correlations.

    “That was a process of really trying to be creative about getting to the answer,” Gaither says. “But that’s life. With real data, that’s life.”

    The Computing4Change mentors also coached the students to analyze their data and present it clearly to the judges. Gaither hopes the students leave the program not only knowing more about advanced computing, but also more aware of their power to effect change. She says it’s easy to teach someone a skill, but it’s much more impactful to help them find a personal passion within that skill.

    “If you’re passionate about something, you’ll stick with it,” Gaither says. “You can plug into very large, complex problems that are relevant to all of us.”

    The next Computing4Change event will be held in Denver, CO, co-located with the SC19 conference Nov 16-22, 2019. Travel, housing, meals, and SC19 conference registration will be covered for the 20 students selected. The application deadline is April 8, 2019. Apply here.

    See the full article here.


    Please help promote STEM in your local schools.

    STEM Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 12:43 pm on March 11, 2019 Permalink | Reply
    Tags: Frontera at TACC, TACC - Texas Advanced Computer Center

    From insideHPC: “New Texascale Magazine from TACC looks at HPC for the Endless Frontier” 

    From insideHPC

    https://www.tacc.utexas.edu/documents/1084364/1705087/Texascale-2018.pdf/

    March 11, 2019

    This feature story describes how the computational power of Frontera will be a game changer for research. Late last year, the Texas Advanced Computing Center announced plans to deploy Frontera, the world’s fastest supercomputer in academia.

    TACC Maverick HP NVIDIA supercomputer

    TACC Lonestar Cray XC40 supercomputer

    TACC Stampede Dell PowerEdge supercomputer at The University of Texas at Austin (9.6 PF)

    TACC Hikari HPE Apollo 8000 supercomputer

    TACC Stampede2 Dell EMC supercomputer

    TACC Frontera Dell EMC supercomputer, the fastest at any university

    To prepare for launch, TACC just published the inaugural edition of Texascale, an annual magazine with stories that highlight the people, science, systems, and programs that make TACC one of the leading academic computing centers in the world.

    In an inconspicuous-looking data center on The University of Texas at Austin’s J. J. Pickle Research Campus, construction is underway on one of the world’s most powerful supercomputers.

    The Frontera system (Spanish for “frontier”) will allow the nation’s academic scientists and engineers to probe questions both cosmic and commonplace — What is the universe composed of? How can we produce enough food to feed the Earth’s growing population? — that cannot be addressed in a lab or in the field; that require the number-crunching power equivalent to a small city’s worth of computers to solve; and that may be critical to the survival of our species.

    The name Frontera pays homage to the “endless frontier” of science envisioned by Vannevar Bush and presented in a report to President Harry Truman calling for a national strategy for scientific progress. The report led to the founding of the National Science Foundation (NSF) — the federal agency that funds fundamental research and education in science and engineering. It paved the way for investments in basic and applied research that laid the groundwork for our modern world, and inspired the vision for Frontera.

    “Whenever a new technological instrument emerges that can solve previously intractable problems, it has the potential to transform science and society,” said Dan Stanzione, executive director of TACC and one of the designers behind the new machine. “We believe that Frontera will have that kind of impact.”

    The Quest for Computing Greatness

    The pursuit of Frontera formally began in May 2017 when NSF issued an invitation for proposals for a new leadership-class computing facility, the top tier of high performance computing systems funded by the agency. The program would award $60 million to construct a supercomputer that could satisfy the needs of a scientific and engineering community that increasingly relies on computation.

    “For over three decades, NSF has been a leader in providing the computing resources our nation’s researchers need to accelerate innovation,” explained NSF Director France Córdova. “Keeping the U.S. at the forefront of advanced computing capabilities and providing researchers across the country access to those resources are key elements in maintaining our status as a global leader in research and education.”

    “The Frontera project is not just about the system; our proposal is anchored by an experienced team of partners and vendors with a community-leading track record of performance.” — Dan Stanzione, TACC

    Meet the Architects

    When TACC proposed Frontera, it didn’t simply offer to build a fastest-in-its-class supercomputer. It put together an exceptional team of supercomputer experts and power users who together have internationally recognized expertise in designing, deploying, configuring, and operating HPC systems at the largest scale. Learn more about principal investigators who led the charge.

    NSF’s invitation for proposals indicated that the initial system would only be the beginning. In addition to enabling cutting-edge computations, the supercomputer would serve as a platform for designing a future leadership-class facility to be deployed five years later that would be 10 times faster still — more powerful than anything that exists in the world today.

    TACC has deployed major supercomputers several times in the past with support from NSF. Since 2006, TACC has operated three supercomputers that debuted among the Top 15 most powerful in the world — Ranger (2008-2013, #4), Stampede1 (2012-2017, #7) and Stampede2 (2017-present, #12) — and three more systems that rose to the Top 25. These systems established TACC, which was founded in 2001, as one of the world leaders in advanced computing.

    TACC solidified its reputation when, on August 28, 2018, NSF announced that the center had won the competition to design, build, deploy, and run the most capable system they had ever commissioned.

    “This award is an investment in the entire U.S. research ecosystem that will enable leap-ahead discoveries,” NSF Director Córdova said at the time.

    Frontera represents a further step for TACC into the upper echelons of supercomputing — the Formula One race cars of the scientific computing world. When Frontera launches in 2019, it will be the fastest supercomputer at any U.S. university and one of the fastest in the world — a powerful, all-purpose tool for science and engineering.

    “Many of the frontiers of research today can be advanced only by computing,” Stanzione said. “Frontera will be an important tool to solve Grand Challenges that will improve our nation’s health, well-being, competitiveness, and security.”

    Supercomputers Expand the Mission

    Supercomputers have historically had very specific uses in the world of research, performing virtual experiments and analyses of problems that can’t be easily physically experimented upon or solved with smaller computers.

    Since 1945, when the ENIAC (Electronic Numerical Integrator and Computer) at the University of Pennsylvania first calculated artillery firing tables for the United States Army’s Ballistic Research Laboratory, the uses of large-scale computing have grown dramatically.

    Today, every discipline has problems that require advanced computing. Whether it’s cellular modeling in biology, the design of new catalysts in chemistry, black hole simulations in astrophysics, or Internet-scale text analyses in the social sciences, the details change, but the need remains the same.

    “Computation is arguably the most critical tool we possess to reach more deeply into the endless frontier of science,” Stanzione says. “While specific subfields of science need equipment like radio telescopes, MRI machines, and electron microscopes, large computers span multiple fields. Computing is the universal instrument.”

    In the past decade, the uses of high performance computing have expanded further. Massive amounts of data from sensors, wireless devices, and the Internet opened up an era of big data, for which supercomputers are well suited. More recently, machine and deep learning have provided a new way of not just analyzing massive datasets, but of using them to derive new hypotheses and make predictions about the future.

    As the problems that can be solved by supercomputers expanded, NSF’s vision for cyberinfrastructure — the catch-all term for the set of information technologies and people needed to provide advanced computing to the nation — evolved as well. Frontera represents the latest iteration of that vision.

    Data-Driven Design

    TACC’s leadership knew they had to design something innovative from the ground up to win the competition for Frontera. Taking a data-driven approach to the planning process, they investigated the usage patterns of researchers on Stampede1, as well as on Blue Waters — the previous NSF-funded leadership-class system — and in the Department of Energy (DOE)’s large-scale scientific computing program, INCITE, and analyzed the types of problems that scientists need supercomputers to solve.

    They found that Stampede1 usage was dominated by 15 commonly used applications. Together these accounted for 63 percent of Stampede1’s computing hours in 2016. Some 2,285 additional applications utilized the remaining 37 percent of the compute cycles. (These trends were consistent on Blue Waters and DOE systems as well.) Digging deeper, they determined that, of the top 15 applications, 97 percent of the usage solved equations that describe motions of bodies in the universe, the interactions of atoms and molecules, or electrons and fluids in motion.

    Frontera will be the fastest supercomputer at a U.S. university and likely Top 5 in the world when it launches in 2019. It will support simulation, data analysis and AI on the largest scales.

    “We did a careful analysis to understand the questions our community was using our supercomputers to solve and the codes and equations they used to solve them,” said TACC’s director of High Performance Computing, Bill Barth. “This narrowed the pool of problems that Frontera would need to excel in solving.”

    But past use wasn’t the only factor they considered. “It was also important to consider emerging uses of advanced computing resources for which Frontera will be critical,” Stanzione said. “Prominent among these are data-driven and data-intensive applications, as well as machine and deep learning.”

    Though still small in terms of their overall use of Stampede2 and other current systems, these areas are growing quickly and offer new ways to solve enduring problems.

    Whereas researchers traditionally wrote HPC codes in programming languages like C++ and Fortran, data-intensive problems often require non-traditional software or frameworks, such as R, Python, or TensorFlow.

    “The coming decade will see significant efforts to integrate physics-driven and data-driven approaches to learning,” said Tommy Minyard, TACC director of Advanced Computing Systems. “We designed Frontera with the capability to address very large problems in these emerging communities of computation and serve a wide range of both simulation-based and data-driven science.”

    The Right Chips for the Right Jobs

    Anyone following computer hardware trends in recent years has noticed the blossoming of options in terms of computer processors. Today’s landscape includes a range of chip architectures, from low-energy ARM processors common in cell phones, to adaptable FPGAs (field-programmable gate arrays), to many varieties of CPUs, GPUs, and AI-accelerating chips.

    The team considered a wide range of system options for Frontera before concluding that a CPU-based primary system with powerful Intel Xeon x86 nodes and a fast network would be the most useful platform for most applications.


    Once built, TACC expects that the main compute system will achieve 35 to 40 petaflops of peak performance. For comparison, Frontera will be twice as powerful as Stampede2 (currently the fastest university supercomputer) and 70 times as fast as Ranger, which operated at TACC until 2013.

    To match what Frontera will compute in just one second, a person would have to perform one calculation every second for one billion years.
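
    That comparison is simple arithmetic to verify: 35 petaflops sustained for one second is about 3.5 x 10^16 operations, and there are roughly 3.2 x 10^16 seconds in a billion years. A quick back-of-the-envelope check in Python, using the low end of the peak estimate quoted above:

    # Back-of-the-envelope check of the "one calculation per second for a
    # billion years" comparison, using the low end of the quoted peak rate.
    peak_flops = 35e15                      # ~35 petaflops
    ops_in_one_second = peak_flops * 1.0    # operations in a single second

    seconds_per_year = 365.25 * 24 * 3600   # ~3.16e7 seconds
    ops_at_one_per_second = 1e9 * seconds_per_year  # a billion years at 1 op/s

    print(f"Frontera in 1 second:  {ops_in_one_second:.2e} operations")
    print(f"1 op/s for 1e9 years:  {ops_at_one_per_second:.2e} operations")
    # Both values land near 3e16, so the comparison holds to within rounding.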

    In addition to its main system, Frontera will also include a subsystem made up of graphics processing units (GPUs) that have proven particularly effective for deep learning and molecular dynamics problems.

    “For certain application classes that can make effective use of GPUs, the subsystem will provide a cost-efficient path to high performance for those in the community that can fully exploit it,” Stanzione said.

    Designing a Complete Ecosystem

    The effectiveness of a supercomputer depends on more than just its processors. Storage, networking, power, and cooling are all critical as well.

    Frontera will include a storage subsystem from DataDirect Networks with almost 53 petabytes of capacity and nearly 2 terabytes per second of aggregate bandwidth. Of this, 50 petabytes will use disk-based, distributed storage, while 3 petabytes will employ a new type of very fast storage known as Non-volatile Memory Express storage, broadening the system’s usefulness for the data science community.

    Supercomputing applications often employ many compute nodes, or devices, at once, which requires passing data and instructions from one part of the system to another. Mellanox InfiniBand interconnects will provide 100 Gigabits per second (Gbps) connectivity to each node, and 200 Gbps between the central switches.

    These components will be integrated via servers from Dell EMC, which has partnered with TACC since 2003 on massive systems, including Stampede1 and Stampede2.

    “The new Frontera system represents the next phase in the long-term relationship between TACC and Dell EMC, focused on applying the latest technical innovation to truly enable human potential,” said Thierry Pellegrino, vice president of Dell EMC High Performance Computing.

    Though a top system in its own right, Frontera won’t operate as an island. Users will have access to TACC’s other supercomputers — Stampede2, Lonestar, Wrangler, and many more, each with a unique architecture — and storage resources, including Stockyard, TACC’s global file system; Corral, TACC’s data collection repository; and Ranch, a tape-based long-term archival system.

    Together, they compose an ecosystem for scientific computing that is arguably unmatched in the world.


    New Models of Access & Use

    Researchers traditionally interact with supercomputers through the command line — a text-only program that takes instructions and passes them on to the computer’s operating system to run.

    The bulk of a supercomputer’s time (roughly 90 percent of the cycles on Stampede2) is consumed by researchers using the system in this way. But as computing becomes more complex, having a lower barrier to entry and offering an end-to-end solution to access data, software, and computing services has grown in importance.

    Science gateways offer streamlined, user-friendly interfaces to cyberinfrastructure services. In recent years, TACC has become a leader in building these accessible interfaces for science.

    “Visual interfaces can remove much of the complexity of traditional HPC, and lower this entry barrier,” Stanzione said. “We’ve deployed more than 20 web-based gateways, including several of the most widely used in the world. On Frontera, we’ll allow any community to build their own portals, applications, and workflows, using the system as the engine for computations.”

    Though they use a minority of computing cycles, a majority of researchers actually access supercomputers through portals and gateways. To serve this group, Frontera will support high-level languages like Python, R, and Julia, and offer a set of RESTful APIs (application program interfaces) that will make the process of building community-wide tools easier.
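
    As a rough illustration of what building on such interfaces can look like, the sketch below submits a job description to a web service over HTTP using Python. The endpoint URL, access token, and payload fields are entirely hypothetical placeholders; the article does not document Frontera's actual API, so treat this only as the general shape of a gateway-style submission.

    import requests  # third-party HTTP client

    # Hypothetical gateway endpoint and token -- placeholders, not a real TACC API.
    BASE_URL = "https://gateway.example.org/api/jobs"
    TOKEN = "REPLACE_WITH_ACCESS_TOKEN"

    job = {
        "name": "demo-simulation",
        "application": "my-solver",   # illustrative application name
        "nodes": 4,
        "wall_time": "02:00:00",
        "inputs": ["input.conf"],
    }

    response = requests.post(
        BASE_URL,
        json=job,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    print("Submitted job id:", response.json().get("id"))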

    “We’re committed to delivering the transformative power of computing to a wide variety of domains from science and engineering to the humanities,” said Maytal Dahan, TACC’s director of Advanced Computing Interfaces. “Expanding into disciplines unaccustomed to computing from the command line means providing access in a way that abstracts the complexity and technology and lets researchers focus on their scientific impact and discoveries.”


    The Cloud

    For some years, there has been a debate in the advanced computing community about whether supercomputers or “the cloud” are more useful for science. The TACC team believes it’s not about which is better, but how they might work together. By design, Frontera takes a bold step towards bridging this divide by partnering with the nation’s largest cloud providers — Microsoft, Amazon, and Google — to provide cloud services that complement TACC’s existing offerings and have unique advantages.

    It’s no secret that supercomputers use a lot of power. Frontera will require more than 5.5 megawatts to operate — the equivalent of powering more than 3,500 homes. To limit the expense and environmental impact of running Frontera, TACC will employ a number of energy-saving measures with the new system. Some were put in place years ago; others will be deployed at TACC for the first time. All told, TACC expects one-third of the power for Frontera to come from renewable sources.

    These cloud services include long-term storage for sharing datasets with collaborators; access to additional types of computing processors and architectures that will appear after Frontera launches; cloud-based services like image classification; and Virtual Desktop Interfaces that allow a cloud-based filesystem to look like one’s home computer.

    “The modern scientific computing landscape is changing rapidly,” Stanzione said. “Frontera’s computing ecosystem will be enhanced by playing to the unique strengths of the cloud, rather than competing with them.”

    Software & Containers

    When the applications that researchers rely on are not available on HPC systems, it creates a barrier to large-scale science. For that reason, Frontera will support the widest catalog of applications of any large-scale scientific computing system in history.

    TACC will work with application teams to support highly-tuned versions of several dozen of the most widely used applications and libraries. Moreover, Frontera will provide support for container-based virtualization, which sidesteps the challenges of adapting tools to a new system while enabling entirely new types of computation.

    With containers, user communities develop and test their programs on laptops or in the cloud, and then transfer those same workflows to HPC systems using programs like Singularity. This facilitates the development of event-driven workflows, which automate computations in response to external events like natural disasters, or for the collection of data from large-scale instruments and experiments.
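
    As a simplified picture of that workflow, the sketch below watches a directory and runs a containerized analysis on each new file that arrives, a minimal stand-in for an event-driven pipeline. The image name, watch directory, and analysis command are hypothetical; only the general "singularity exec <image> <command>" pattern reflects how Singularity containers are typically invoked.

    import subprocess
    import time
    from pathlib import Path

    WATCH_DIR = Path("/scratch/incoming")   # hypothetical staging directory
    IMAGE = "analysis.sif"                  # container image built and tested off-site
    seen = set()

    def run_in_container(data_file: Path) -> None:
        """Run the containerized analysis on one input file.
        'singularity exec <image> <command>' executes a command inside the image."""
        subprocess.run(
            ["singularity", "exec", IMAGE, "python", "analyze.py", str(data_file)],
            check=True,
        )

    while True:
        # Poll for new data files and process each exactly once -- a crude
        # stand-in for a workflow triggered automatically by arriving data.
        for path in sorted(WATCH_DIR.glob("*.dat")):
            if path not in seen:
                seen.add(path)
                run_in_container(path)
        time.sleep(10)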

    “Frontera will be a more modern supercomputer, not just in the technologies it uses, but in the way people will access it,” Stanzione said.

    A Frontier System to Solve Frontier Challenges

    Talking about a supercomputer in terms of its chips and access modes is a bit like talking about a telescope in terms of its lenses and mounts. The technology is important, but the ultimate question is: what can it do that other systems can’t?

    Entirely new problems and classes of research will be enabled by Frontera. Examples of projects Frontera will tackle in its first year include efforts to explore models of the Universe beyond the Standard Model in collaboration with researchers from the Large Hadron Collider; research that uses deep learning and simulation to predict in advance when a major disruption may occur within a fusion reactor to prevent damaging these incredibly expensive systems; and data-driven genomics studies to identify the right species of crops to plant in the right place at the right time to maximize production and feed the planet. [See more about each project in the box below.]

    The LHC modeling effort, fusion disruption predictions, and genomic analyses represent the types of ‘frontier,’ Grand Challenge research problems Frontera will help address.

    “Many phenomena that were previously too complex to model with the hardware of just a few years ago are within reach for systems with tens of petaflops,” said Stanzione.

    A review committee made up of computational and domain experts will ultimately select the projects that will run on Frontera, with a small percentage of time reserved for emergencies (as in the case of hurricane forecasting), industry collaborations, or discretionary use.

    It’s impossible to say what the exact impact of Frontera will be, but for comparison, Stampede1, which was one quarter as powerful as Frontera, enabled research that led to nearly 4,000 journal articles. These include confirmations of gravitational wave detections by LIGO that contributed to the 2017 Nobel Prize in Physics; discoveries of FDA-approved drugs that have been successful in treating cancer; and a greater understanding of DNA interactions enabling the design of faster and cheaper gene sequencers.

    From new machine learning techniques to diagnose and treat diseases to fundamental mathematical and computer science research that will be the basis for the next generation of scientists’ discoveries, Frontera will have an outsized impact on science nationwide.

    Frontera will be the most powerful supercomputer at any U.S. university and likely top 10 in the world when it launches in 2019. It will support simulation, data analysis, and AI on the largest scales.

    Towards a Leadership-Class Computing Facility

    The NSF program that funds Frontera is titled Towards a Leadership-Class Computing Facility. This phrasing is important because, as powerful as Frontera is, NSF sees it as a step toward even greater support for the nation’s scientists and engineers. In fact, the program not only funds the construction and operation of Frontera — the fastest system NSF has ever deployed — it also supports the planning, experimentation, and design required to build a system in five years that will be 10 times more capable than Frontera.

    “We’ll be planning for the next generation of computational science and what that means in terms of hardware, architecture, and applications,” Stanzione said. “We’ll start with science drivers — the applications, workflows, and codes that will be used — and use those factors to determine the architecture and the balance between storage, networks, and compute needed in the future.”

    Much like the data-driven design process that influenced the blueprint for Frontera, the TACC team will employ a “design — operate — evaluate” cycle on Frontera to plan Phase 2.

    TACC has assembled a Frontera Science Engagement Team, consisting of more than a dozen leading computational scientists from a range of disciplines and universities, to help determine the “workload of the future” — the science drivers and requirements for the next generation of systems. The team will also act as liaisons to the broader community in their respective fields, presenting at major conferences, convening discussions, and recruiting colleagues to participate in the planning.

    Fusion physicist William Tang joined the Frontera Science Engagement Team in part because he believed in TACC’s vision for cyberinfrastructure. “AI and deep learning are huge areas of growth. TACC definitely saw that and encouraged that a lot more. That played a significant part in the winning proposal, and I’m excited to join the activities going forward,” Tang said.

    A separate technology assessment team will use a similar strategy to identify critical emerging technologies, evaluate them, and ultimately develop some as testbed systems.

    TACC will upgrade and make available their FPGA testbed, which investigates new ways of using interconnected FPGAs as computational accelerators. They also hope to add an ARM testbed and other emerging technologies.

    Other testbeds will be built offsite in collaboration with partners. TACC will work with Stanford University and Atos to deploy a quantum simulator that will allow them to study quantum systems. Partnerships with the cloud providers Microsoft, Google, and Amazon will allow TACC to track AMD (Advanced Micro Devices) solutions, neuromorphic prototypes, and tensor processing units.

    Finally, TACC will work closely with Argonne National Laboratory to assess the technologies that will be deployed in the Aurora21 system, which will enter production in 2021. TACC will have early access to the same compute and storage technologies that will be deployed in Aurora21, as well as Argonne’s simulators, prototypes, software tools, and application porting efforts, which TACC will evaluate for the academic research community.

    “The primary compute elements of Frontera represent a relatively conservative approach to scientific computing,” Minyard said. “While this may remain the best path forward through the mid-2020s and beyond, a serious evaluation of a Phase 2 system will require not only projections and comparisons, but hands-on access to future technologies. TACC will provide the testbed systems not only for our team and Phase 2 partners, but to our full user community as well.”

    Using the “design — operate — evaluate” process, TACC will develop a quantitative understanding of present and future application performance. It will build performance models for the processors, interconnects, storage, software, and modes of computing that will be relevant in the Phase 2 timeframe.

    “It’s a push/pull process,” Stanzione said. “Users must have an environment in which they can be productive today, but that also incentivizes them to continuously modernize their applications to take advantage of emerging computational technologies.”

    The deployment of two to three small scale systems at TACC will allow the assessment team to evaluate the performance of the system against their model and gather specific feedback from the NSF science user community on usability. From this process, the design of the Phase 2 leadership class system will emerge.

    With Great Power Comes Great Responsibility

    The design process will culminate some years in the future. Meanwhile, in the coming months, Frontera’s server racks will begin to roll into TACC’s data center. From January to March 2019, TACC will integrate the system with hundreds of miles of networking cables and install the software stack. In the spring, TACC will host an early user period where experienced researchers will test the system and work out any bugs. Full production will begin in the summer of 2019.

    “We want it to be one of the most useable and accessible systems in the world,” Stanzione said. “Our design is not uniquely brilliant by us. It’s the logical next step — smart engineering choices by experienced operators.”

    It won’t be TACC’s first rodeo. Over 17 years, the team has developed and deployed more than two dozen HPC systems totaling more than $150 million in federal investment. The center has grown to nearly 150 professionals, including more than 75 PhD computational scientists and engineers, and earned a stellar reputation for providing reliable resources and superb user service. Frontera will provide a unique resource for science and engineering, capable of scaling to the very largest capability jobs, running the widest array of jobs, and supporting science in all forms.

    The project represents the achievement of TACC’s mission of “Powering Discoveries to Change the World.”

    “Computation is a key element to scientific progress, to engineering new products, to improving human health, and to our economic competitiveness. This system will be the NSF’s largest investment in computing in the next several years. For that reason, we have an enormous responsibility to our colleagues all around the U.S. to deliver a system that will enable them to be successful,” Stanzione said. “And if we succeed, we can change the world.”

    See the full article here.


    Please help promote STEM in your local schools.

    STEM Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     
  • richardmitnick 10:43 am on September 20, 2018 Permalink | Reply
    Tags: Cornell’s Center for Advanced Computing (CAC) was named a training partner on a $60 million National Science Foundation-funded project to build the fastest supercomputer at any U.S. university and o, TACC - Texas Advanced Computer Center

    From Cornell Chronicle: “Cornell writing the (how-to) book on new supercomputer” 


    From Cornell Chronicle

    September 18, 2018
    Melanie Lefkowitz
    mll9@cornell.edu

    Cornell’s Center for Advanced Computing (CAC) was named a training partner on a $60 million, National Science Foundation-funded project to build the fastest supercomputer at any U.S. university and one of the most powerful in the world.


    CAC will develop training materials to help users get the most out of the Frontera supercomputer, to be deployed in summer 2019 at the Texas Advanced Computing Center at the University of Texas at Austin.


    “Computers don’t do great work unless you have people ready to use them for great research. Being able to be the on-ramp for a system like this is really valuable,” said Rich Knepper, CAC’s deputy director. “This represents the next step in leadership computing, and it’s an opportunity for Cornell to be a very integral part of that.”

    CAC, which provides high-performance computing and cloud computing services to the Cornell community and beyond, will receive $1 million from the NSF over the next five years to create Cornell Virtual Workshops – online content explaining how to use Frontera.

    The Texas Advanced Computing Center will build the supercomputer, with the primary computing system provided by Dell EMC and powered by Intel processors. Other partners in the project are the California Institute of Technology, Princeton University, Stanford University, the University of Chicago, the University of Utah, the University of California, Davis, Ohio State University, the Georgia Institute of Technology and Texas A&M University.

    See the full article here.



    Please help promote STEM in your local schools.

    STEM Education Coalition

    Once called “the first American university” by educational historian Frederick Rudolph, Cornell University represents a distinctive mix of eminent scholarship and democratic ideals. Adding practical subjects to the classics and admitting qualified students regardless of nationality, race, social circumstance, gender, or religion was quite a departure when Cornell was founded in 1865.

    Today’s Cornell reflects this heritage of egalitarian excellence. It is home to the nation’s first colleges devoted to hotel administration, industrial and labor relations, and veterinary medicine. Both a private university and the land-grant institution of New York State, Cornell University is the most educationally diverse member of the Ivy League.

    On the Ithaca campus alone nearly 20,000 students representing every state and 120 countries choose from among 4,000 courses in 11 undergraduate, graduate, and professional schools. Many undergraduates participate in a wide range of interdisciplinary programs, play meaningful roles in original research, and study in Cornell programs in Washington, New York City, and the world over.

     
  • richardmitnick 11:23 am on January 8, 2018 Permalink | Reply
    Tags: Scientist's Work May Provide Answer to Martian Mountain Mystery, TACC - Texas Advanced Computer Center

    From U Texas Dallas: “Scientist’s Work May Provide Answer to Martian Mountain Mystery” 


    Jan. 8, 2018
    Stephen Fontenot, UT Dallas
    (972) 883-4405
    stephen.fontenot@utdallas.edu

    By seeing which way the wind blows, a University of Texas at Dallas fluid dynamics expert has helped propose a solution to a Martian mountain mystery.

    Dr. William Anderson

    Dr. William Anderson, an assistant professor of mechanical engineering in the Erik Jonsson School of Engineering and Computer Science, co-authored a paper published in the journal Physical Review E that explains the common Martian phenomenon of a mountain positioned downwind from the center of an ancient meteorite impact zone.

    Anderson’s co-author, Dr. Mackenzie Day, worked on the project as part of her doctoral research at The University of Texas at Austin, where she earned her PhD in geology in May 2017. Day is a postdoctoral scholar at the University of Washington in Seattle.

    Gale Crater was formed by meteorite impact early in the history of Mars, and it was subsequently filled with sediments transported by flowing water. This filling preceded massive climate change on the planet, which introduced the arid, dusty conditions that have been prevalent for the past 3.5 billion years. This chronology indicates wind must have played a role in sculpting the mountain.

    “On Mars, wind has been the only driver of landscape change for over 3 billion years,” Anderson said. “This makes Mars an ideal planetary laboratory for aeolian morphodynamics — wind-driven movement of sediment and dust. We’re studying how Mars’ swirling atmosphere sculpted its surface.”

    Wind vortices blowing across the crater slowly formed a radial moat in the sediment, eventually leaving only the off-center Mount Sharp, a 3-mile-high peak similar in height to the rim of the crater. The mountain was skewed to one side of the crater because the wind excavated one side faster than the other, the research suggests.

    Day and Anderson first advanced the concept in an initial publication on the topic in Geophysical Research Letters. Now, they have shown via computer simulation that, given more than a billion years, Martian winds were capable of digging up tens of thousands of cubic kilometers of sediment from the crater — largely thanks to turbulence, the swirling motion within the wind stream.

    A digital elevation model of Gale Crater shows the pattern of mid-latitude Martian craters with interior sedimentary mounds.

    “The role of turbulence cannot be overstated,” Anderson said. “Since sediment movement increases non-linearly with drag imposed by the aloft winds, turbulent gusts literally amplify sediment erosion and transport.”

    The location — and mid-latitude Martian craters in general — became of interest as NASA’s Curiosity rover landed in Gale Crater in 2012, where it has gathered data since then.

    “The rover is digging and cataloging data housed within Mount Sharp,” Anderson said. “The basic science question of what causes these mounds has long existed, and the mechanism we simulated has been hypothesized. It was through high-fidelity simulations and careful assessment of the swirling eddies that we could demonstrate efficacy of this model.”

    The theory Anderson and Day tested via computer simulations involves counter-rotating vortices — picture in your mind horizontal dust devils — spiraling around the crater to dig up sediment that had filled the crater in a warmer era, when water flowed on Mars.

    “These helical spirals are driven by winds in the crater, and, we think, were foremost in churning away at the dry Martian landscape and gradually scooping sediment from within the craters, leaving behind these off-center mounds,” Anderson said.

    The fact that simulations have demonstrated that wind erosion could explain these geographical features offers insight into Mars’ distant past, as well as context for the samples collected by Curiosity.

    “It’s further indication that turbulent winds in the atmosphere could have excavated sediment from the craters,” Anderson said. “The results also provide guidance on how long different surface samples have been exposed to Mars’ thin, dry atmosphere.”

    This understanding of the long-term power of wind can be applied to Earth as well, although there are more variables on our home planet than Mars, Anderson said.

    “Swirling, gusty winds in Earth’s atmosphere affect problems at the nexus of landscape degradation, food security and epidemiological factors affecting human health,” Anderson said. “On Earth, however, landscape changes are also driven by water and plate tectonics, which are now absent on Mars. These drivers of landscape change generally dwarf the influence of air on Earth.”

    Computational resources for the study were provided by the Texas Advanced Computing Center at UT Austin.

    TACC Maverick HP NVIDIA supercomputer

    TACC Lonestar Cray XC40 supercomputer

    TACC Stampede Dell PowerEdge supercomputer at The University of Texas at Austin (9.6 PF)

    TACC Hikari HPE Apollo 8000 supercomputer

    TACC Stampede2 Dell EMC supercomputer


    Day’s role in the research was supported by a Graduate Research Fellowship from the National Science Foundation.

    See the full article here.

    Please help promote STEM in your local schools.


    STEM Education Coalition

    The University of Texas at Dallas is a Carnegie R1 classification (Doctoral Universities – Highest research activity) institution, located in a suburban setting 20 miles north of downtown Dallas. The University enrolls more than 27,600 students — 18,380 undergraduate and 9,250 graduate —and offers a broad array of bachelor’s, master’s, and doctoral degree programs.

    Established by Eugene McDermott, J. Erik Jonsson and Cecil Green, the founders of Texas Instruments, UT Dallas is a young institution driven by the entrepreneurial spirit of its founders and their commitment to academic excellence. In 1969, the public research institution joined The University of Texas System and became The University of Texas at Dallas.

    A high-energy, nimble, innovative institution, UT Dallas offers top-ranked science, engineering and business programs and has gained prominence for a breadth of educational paths from audiology to arts and technology. UT Dallas’ faculty includes a Nobel laureate, six members of the National Academies and more than 560 tenured and tenure-track professors.

     
  • richardmitnick 10:32 am on December 20, 2017 Permalink | Reply
    Tags: Computation combined with experimentation helped advance work in developing a model of osteoregeneration, Genes could be activated in human stem cells that initiate biomineralization a key step in bone formation, Silk has been shown to be a suitable scaffold for tissue regeneration, Silky Secrets to Make Bones, Stampede1, TACC - Texas Advanced Computer Center

    From TACC: “Silky Secrets to Make Bones” 


    Texas Advanced Computing Center

    December 19, 2017
    Jorge Salazar

    Scientists used supercomputers and fused golden orb weaver spider web silk with silica to activate genes in human stem cells that initiated biomineralization, a key step in bone formation. (devra/flickr)

    Some secrets to repair our skeletons might be found in the silky webs of spiders, according to recent experiments guided by supercomputers. Scientists involved say their results will help researchers understand the details of osteoregeneration, or how bones regenerate.

    A study found that genes could be activated in human stem cells that initiate biomineralization, a key step in bone formation. Scientists achieved these results with engineered silk derived from the dragline of golden orb weaver spider webs, which they combined with silica. The study appeared in September 2017 in the journal Advanced Functional Materials and is the result of a combined effort from three institutions: Tufts University, the Massachusetts Institute of Technology, and Nottingham Trent University.

    XSEDE supercomputers Stampede at TACC and Comet at SDSC helped study authors simulate the head piece domain of the cell membrane protein receptor integrin in solution, based on molecular dynamics modeling. (Davoud Ebrahimi)

    SDSC Dell Comet supercomputer

    Study authors used the supercomputers Stampede1 at the Texas Advanced Computing Center (TACC) and Comet at the San Diego Supercomputer Center (SDSC) at the University of California San Diego through an allocation from XSEDE, the eXtreme Science and Engineering Discovery Environment, funded by the National Science Foundation. The supercomputers helped scientists model how the cell membrane protein receptor called integrin folds and activates the intracellular pathways that lead to bone formation. The research will help larger efforts to cure bone growth diseases such as osteoporosis or calcific aortic valve disease.

    “This work demonstrates a direct link between silk-silica-based biomaterials and intracellular pathways leading to osteogenesis,” said study co-author Zaira Martín-Moldes, a post-doctoral scholar at the Kaplan Lab at Tufts University. She researches the development of new biomaterials based on silk. “The hybrid material promoted the differentiation of human mesenchymal stem cells, the progenitor cells from the bone marrow, to osteoblasts as an indicator of osteogenesis, or bone-like tissue formation,” Martín-Moldes said.

    “Silk has been shown to be a suitable scaffold for tissue regeneration, due to its outstanding mechanical properties,” Martín-Moldes explained. It’s biodegradable. It’s biocompatible. And it’s fine-tunable through bioengineering modifications. The experimental team at Tufts University modified the genetic sequence of silk from golden orb weaver spiders (Nephila clavipes) and fused it with the silica-promoting peptide R5, which is derived from the silaffin gene of the diatom Cylindrotheca fusiformis.

    The bone formation study targeted biomineralization, a critical process in materials biology. “We would love to generate a model that helps us predict and modulate these responses both in terms of preventing the mineralization and also to promote it,” Martín-Moldes said.

    “High performance supercomputing simulations are utilized along with experimental approaches to develop a model for the integrin activation, which is the first step in the bone formation process,” said study co-author Davoud Ebrahimi, a postdoctoral associate at the Laboratory for Atomistic and Molecular Mechanics of the Massachusetts Institute of Technology.

    Integrin embeds itself in the cell membrane and mediates signals between the inside and the outside of cells. In its dormant state, the head unit sticking out of the membrane is bent over like a nodding sleeper. This inactive state prevents cellular adhesion. In its activated state, the head unit straightens out and is available for chemical binding at its exposed ligand region.

    “Sampling different states of the conformation of integrins in contact with silicified or non-silicified surfaces could predict activation of the pathway,” Ebrahimi explained. Sampling the folding of proteins remains a classically computationally expensive problem, despite recent and large efforts in developing new algorithms.

    The derived silk–silica chimera they studied weighed in around a hefty 40 kilodaltons. “In this research, what we did in order to reduce the computational costs, we have only modeled the head piece of the protein, which is getting in contact with the surface that we’re modeling,” Ebrahimi said. “But again, it’s a big system to simulate and can’t be done on an ordinary system or ordinary computers.”

    The computational team at MIT used GROMACS, a molecular dynamics package for chemical simulation available on both the Stampede1 and Comet supercomputing systems. “We could perform those large simulations by having access to XSEDE computational clusters,” he said.
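
    For orientation, a GROMACS simulation is typically driven in two command-line steps: gmx grompp assembles the run input from parameters, coordinates, and topology, and gmx mdrun then integrates the equations of motion. The Python wrapper below shows that generic two-step pattern in schematic form; the file names are placeholders, and the authors' actual run scripts and simulation parameters are not described in the article.

    import subprocess

    def run_md(prefix: str = "integrin_headpiece") -> None:
        """Schematic two-step GROMACS workflow; file names are illustrative only."""
        # Assemble a portable run input (.tpr) from parameters, coordinates, topology.
        subprocess.run(
            ["gmx", "grompp",
             "-f", "md.mdp",      # simulation parameters
             "-c", "system.gro",  # starting coordinates
             "-p", "topol.top",   # system topology
             "-o", f"{prefix}.tpr"],
            check=True,
        )
        # Run the molecular dynamics engine on the assembled input.
        subprocess.run(["gmx", "mdrun", "-deffnm", prefix], check=True)

    if __name__ == "__main__":
        run_md()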

    “I have a very long-standing positive experience using XSEDE resources,” said Ebrahimi. “I’ve been using them for almost 10 years now for my projects during my graduate and post-doctoral experiences. And the staff at XSEDE are really helpful if you encounter any problems. If you need software that should be installed and it’s not available, they help and guide you through the process of doing your research. I remember exchanging a lot of emails the first time I was trying to use the clusters, and I was not so familiar. I got a lot of help from XSEDE resources and people at XSEDE. I really appreciate the time and effort that they put in order to solve computational problems that we usually encounter during our simulation,” Ebrahimi reflected.

    Computation combined with experimentation helped advance work in developing a model of osteoregeneration. “We propose a mechanism in our work,” explained Martín-Moldes, “that starts with the silica-silk surface activating a specific cell membrane protein receptor, in this case integrin αVβ3.” She said this activation triggers a cascade in the cell through three mitogen-activated protein kinase (MAPK) pathways, the main one being the c-Jun N-terminal kinase (JNK) cascade.

    Proposed mechanism for hMSC osteogenesis induction on silica surfaces. A) The binding of integrin αVβ3 to the silica surface promotes its activation, which triggers an activation cascade involving the three MAPK pathways, ERK, p38 and, mainly, JNK (reflected as the wider arrow); JNK promotes AP-1 activation and translocation to the nucleus to activate the Runx2 transcription factor. Runx2 is ultimately responsible for the induction of bone extracellular matrix proteins and other osteoblast differentiation genes. B) In the presence of a neutralizing antibody against αVβ3, there is no activation of the MAPK cascades, thus no induction of bone extracellular matrix genes and hence no differentiation. (Davoud Ebrahimi)

    She added that other factors are also involved in this process, such as Runx2, the main transcription factor related to osteogenesis. According to the study, neither the control samples nor the cells in which integrin was blocked with an antibody showed a response, confirming the receptor’s involvement in the process. “Another important outcome was the correlation between the amount of silica deposited in the film and the level of induction of the genes that we analyzed,” Martín-Moldes said. “These factors also provide an important feature to control in future material design for bone-forming biomaterials.”

    “We are doing basic research here with our silk-silica systems,” Martín-Moldes explained. “But we are helping in building the pathway to generate biomaterials that could be used in the future. The mineralization is a critical process. The final goal is to develop these models that help design the biomaterials to optimize the bone regeneration process, when the bone is required to regenerate, or to minimize it when we need to reduce the bone formation.”

    These results help advance the research and feed into larger efforts to treat and cure bone diseases. “We could help in curing diseases related to bone formation, such as calcific aortic valve disease or osteoporosis, for which we need to know the pathway to control the amount of bone formed, to either reduce or increase it,” Ebrahimi said.

    “Intracellular Pathways Involved in Bone Regeneration Triggered by Recombinant Silk–Silica Chimeras,” DOI: 10.1002/adfm.201702570, appeared September 2017 in the journal Advanced Functional Materials. The National Institutes of Health funded the study, and the National Science Foundation through XSEDE provided computational resources. The study authors are Zaira Martín-Moldes, Nina Dinjaski, David L. Kaplan of Tufts University; Davoud Ebrahimi and Markus J. Buehler of the Massachusetts Institute of Technology; Robyn Plowright and Carole C. Perry of Nottingham Trent University.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The Texas Advanced Computing Center (TACC) designs and operates some of the world’s most powerful computing resources. The center’s mission is to enable discoveries that advance science and society through the application of advanced computing technologies.

    TACC Maverick HP NVIDIA supercomputer

    TACC Lonestar Cray XC40 supercomputer

    Dell Poweredge U Texas Austin Stampede Supercomputer. Texas Advanced Computer Center 9.6 PF

    TACC HPE Apollo 8000 Hikari supercomputer

    TACC DELL EMC Stampede2 supercomputer

     
  • richardmitnick 9:49 am on October 30, 2017 Permalink | Reply
    Tags: , , TACC - Texas Advanced Computer Center,   

    From University of Texas at Austin: “UT Is Now Home to the Fastest Supercomputer at Any U.S. University” 

    U Texas Austin bloc

    University of Texas at Austin

    October 27, 2017
    Anna Daugherty

    The term “medical research” might bring to mind a sterile room with white lab coats, goggles, and vials. But for cutting-edge researchers, that picture is much more high-tech: it’s a room filled with row after row of metal racks housing 300,000 computer processors, each blinking green and wired to the next, and the deafening sound of a powerful machine at work. It’s a room like the one housing the 4,000-square-foot supercomputer Stampede2 at The University of Texas’ J.J. Pickle Research Campus.

    TACC Maverick HP NVIDIA supercomputer

    TACC Lonestar Cray XC40 supercomputer

    Dell Poweredge U Texas Austin Stampede Supercomputer. Texas Advanced Computer Center 9.6 PF

    TACC HPE Apollo 8000 Hikari supercomputer

    TACC DELL EMC Stampede2 supercomputer

    At peak performance, Stampede2, the flagship supercomputer at UT Austin’s Texas Advanced Computing Center (TACC), will be capable of performing 18 quadrillion operations per second (18 petaflops, in supercomputer lingo). That’s more powerful than 100,000 desktops. As the fastest supercomputer at any university in the U.S., it’s a level of computing that the average citizen can’t comprehend. Most people do their computing on phones the size of their hands—but then again, most aren’t mining cancer data, predicting earthquakes, or analyzing black holes.
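    The “100,000 desktops” comparison is easy to sanity-check. Assuming a desktop CPU peaks at very roughly 180 gigaflops (an assumption made here for illustration, not a figure from the article), the arithmetic works out as follows.

        # Back-of-envelope check of the "more powerful than 100,000 desktops" claim.
        # The desktop figure of ~180 GFLOPS is an assumption for illustration only.
        stampede2_peak = 18e15    # 18 petaflops = 18 quadrillion operations per second
        desktop_peak = 180e9      # assumed peak of a typical desktop CPU, in FLOPS

        print(f"Stampede2 is roughly {stampede2_peak / desktop_peak:,.0f}x a desktop")
        # -> Stampede2 is roughly 100,000x a desktop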

    Funded by a $30 million grant from the National Science Foundation, Stampede2 replaces the original Stampede system, which went live in 2013. Designed to be twice as powerful while using half the energy of the older system, Stampede2 is already being used by researchers around the country. In June 2017, Stampede2 went public with 12 petaflops and was ranked as the 12th most powerful computer in the world. Phase two added six petaflops in September and phase three will complete the system in 2018 by adding a new type of memory capacity to the computer.

    For researchers like Rommie Amaro, professor of chemistry at the University of California, San Diego, a tool like Stampede2 is essential. As the director of the National Biomedical Computation Resource, Amaro says nearly all of their drug research is done on supercomputers.

    Most of her work with the original Stampede system focused on a protein called p53, which prevents tumor growth; the protein is mutated in approximately half of all cancer patients. Due to the nature of p53, it’s difficult to track with standard imaging tools, so Amaro’s team took available images of the protein to supercomputers and turned them into a simulation showing how the 1.6 million atoms in p53 move. Using Stampede, they were able to find weaknesses in p53 and simulate interactions with more than a million compounds; several hundred seemed capable of restoring p53. More than 30 proved successful in labs and are now being tested by a pharmaceutical company.
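    The screening step Amaro describes boils down to scoring a very large library of compounds and keeping only the best candidates. The toy sketch below shows that ranking idea in miniature; the scores, the cutoff, and the compound names are invented, and this is not the Amaro lab’s actual pipeline.

        # Toy sketch of the compound-ranking step in a virtual screen. The docking
        # scores, cutoff, and compound names are invented; real pipelines use
        # dedicated docking and MD software on HPC systems.
        import random

        random.seed(0)
        # Pretend every compound already has a docking score (more negative = tighter binding).
        compounds = {f"cmpd_{i:07d}": random.gauss(-6.0, 1.5) for i in range(1_000_000)}

        CUTOFF = -10.0  # hypothetical score threshold for calling something a "hit"
        hits = sorted((score, name) for name, score in compounds.items() if score <= CUTOFF)

        print(f"{len(hits):,} of {len(compounds):,} compounds pass the cutoff")
        for score, name in hits[:5]:
            print(f"{name}: {score:.2f}")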

    “The first Stampede gave us really outstanding, breakthrough research for cancer,” Amaro says. “And we already have some really interesting preliminary data on what Stampede2 is going to give us.”

    And it’s not just the medical field that benefits. Stampede has created weather phenomena models that have shown new ways to measure tornado strength, and produced seismic hazard maps that predict the likelihood of earthquakes in California. It has also helped increase the accuracy of hurricane predictions by 20–25 percent. During Hurricane Harvey in August, researchers used TACC supercomputers to forecast how high water would rise near the coast and to predict flooding in rivers and creeks in its aftermath.

    Aaron Dubrow, strategic communications specialist at TACC, says supercomputer users either use publicly available programs or create an application from the mathematics of the problem they are researching. “You take an idea like how cells divide and turn that into a computer algorithm and it becomes a program of sorts,” he says. Researchers can log into the supercomputer remotely or send their program to TACC staff. Stampede2 also has web portals for smaller problems in topics like drug discovery or natural disasters.
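    Dubrow’s cell-division example can be taken almost literally: in its very simplest form, “turning an idea into an algorithm” might look like the toy program below, where the division probability per time step is an arbitrary assumption chosen just to make the point.

        # Purely illustrative: turning "how cells divide" into a tiny algorithm.
        # The per-step division probability is an arbitrary assumption.
        import random

        random.seed(42)

        def simulate_division(n_cells=1, steps=10, p_divide=0.5):
            """Each step, every cell divides with probability p_divide."""
            history = [n_cells]
            for _ in range(steps):
                n_cells += sum(1 for _ in range(n_cells) if random.random() < p_divide)
                history.append(n_cells)
            return history

        print(simulate_division())  # roughly exponential growth, e.g. [1, 2, 3, 5, 8, ...]

    A research code scales the same idea up to millions of cells and far more realistic rules, which is where a machine like Stampede2 comes in.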

    For Dan Stanzione, executive director at the TACC, some of the most important research isn’t immediately applied. “Basic science has dramatic impacts on the world, but you might not see that until decades from now.” He points to Einstein’s 100-year-old theory of gravitational waves, which was recently confirmed with the help of supercomputers across the nation, including Stampede. “You might wonder why we care about gravitational waves. But now we have satellite, TV, and instant communications around the world because of Einstein’s theories about gravitational waves 100 years ago.”

    According to Stanzione, the first Stampede had nearly 40,000 users and approximately 3,500 completed projects. Similar to Stampede, the new Stampede2 is expected to have a four-year lifespan. “Your smartphone starts to feel old and slow after four or five years, and supercomputers are the same,” he says. “They may still be fast, but they’re made out of four-year-old processors. The new ones are faster and more power efficient to run.” The old processors don’t go to waste though—most will be donated to state institutions across Texas.

    In order to use a supercomputer, researchers must submit proposals to an NSF board, which then allocates hours of usage. Stanzione says there are requests for nearly a billion processor hours every quarter, several times more than what is available nationwide. While Stanzione says nearly every university has some sort of supercomputer now, the U.S. still lags behind China in computing power. The world’s top two computers are both Chinese, and the first is nearly five times more powerful than the largest in the United States.

    Regardless, Stampede2 will still manage to serve researchers from more than 400 universities. Other users include private businesses, such as Firefly Space Company in nearby Cedar Park, and some government users like the Department of Energy and the U.S. Department of Agriculture. Stanzione says all work done on Stampede2 must be public and published research.

    “Being the leader in large-scale computational sciences and engineering means we can attract the top researchers who need these resources,” he says. “It helps attract those top scholars to UT. And then hopefully once they’re here, it helps them reach these innovations a little faster.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    U Texas Arlington Campus

    In 1839, the Congress of the Republic of Texas ordered that a site be set aside to meet the state’s higher education needs. After a series of delays over the next several decades, the state legislature reinvigorated the project in 1876, calling for the establishment of a “university of the first class.” Austin was selected as the site for the new university in 1881, and construction began on the original Main Building in November 1882. Less than one year later, on Sept. 15, 1883, The University of Texas at Austin opened with one building, eight professors, one proctor, and 221 students — and a mission to change the world. Today, UT Austin is a world-renowned higher education, research, and public service institution serving more than 51,000 students annually through 18 top-ranked colleges and schools.

     
  • richardmitnick 10:43 am on October 9, 2017 Permalink | Reply
    Tags: A free flexible and secure way to provide multiple factors of authentication to your community, OpenMFA, , TACC - Texas Advanced Computer Center   

    From TACC: “A free, flexible, and secure way to provide multiple factors of authentication to your community” 

    TACC bloc

    Texas Advanced Computing Center

    TACC develops multi-factor authentication solution, makes it available open-source.

    Published on October 9, 2017 by Aaron Dubrow

    How does a supercomputing center enable tens of thousands of researchers to securely access its high-performance computing systems while still allowing ease of use? And how can it be done affordably?

    These are questions that the Texas Advanced Computing Center (TACC) asked themselves when they sought to upgrade their system security. They had previously relied on usernames and passwords for access, but with a growing focus on hosting confidential health data and the increased compliance standards that entails, they realized they needed a more rigorous solution.

    In October 2016, use of the MFA became mandatory for TACC users. Since that time, OpenMFA has recorded more than half a million logins and counting.

    In 2015, TACC began looking for an appropriate multi-factor authentication (MFA) solution that would provide an extra layer of protection against brute-force attacks. What they quickly discovered was that the available commercial solutions would cost them tens to hundreds of thousands of dollars per year to provide to their large community of users.

    Moreover, most MFA systems lacked the flexibility needed to allow diverse researchers to access TACC systems in a variety of ways — from the command line, through science gateways (which perform computations without requiring researchers to directly access HPC systems), and using automated workflows.

    So, they did what any group of computing experts and software developers would do: they built their own MFA system, which they call OpenMFA.

    They didn’t start from scratch. Instead they scoured the pool of state-of-the-art open source tools available. Among them was LinOTP, a one-time password platform developed and maintained by KeyIdentity GmbH, a German software company. To this, they added the standard networking protocols RADIUS and HTTPS, and glued it all together using custom pluggable authentication modules (PAM) that they developed in-house.

    TACC Token App generating token code.

    This approach integrates cleanly with common data transfer protocols, adds flexibility to the system (in part, so they could create whitelists that include the IP addresses that should be exempted), and supports opt-in or mandatory deployments. Researchers can use the TACC-developed OpenMFA system in three ways: via a software token, an SMS, or a low-cost hardware token.
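    The article doesn’t publish OpenMFA’s internals, but software tokens of this kind generally implement the standard time-based one-time password algorithm (TOTP, RFC 6238), which LinOTP supports. The sketch below shows how a generic TOTP code is produced; it is not TACC’s code, and the base32 secret is a made-up example.

        # Generic TOTP (RFC 6238) sketch: how a software token derives a 6-digit code
        # from a shared secret and the current time. Not TACC's OpenMFA code; the
        # secret below is a made-up example value.
        import base64, hashlib, hmac, struct, time

        def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
            key = base64.b32decode(secret_b32, casefold=True)
            counter = int(time.time()) // interval          # 30-second time step
            msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
            digest = hmac.new(key, msg, hashlib.sha1).digest()
            offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
            code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
            return str(code % 10 ** digits).zfill(digits)

        print(totp("JBSWY3DPEHPK3PXP"))  # e.g. "492039"; the value changes every 30 seconds

    On the server side the same computation is repeated within a small window of time steps to verify the code, which in TACC’s stack is presumably what the PAM and RADIUS glue hands off to LinOTP.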

    Over three months, they transitioned 10,000 researchers to OpenMFA, while giving them the opportunity to test the new system at their leisure. In October 2016, use of the MFA became mandatory for TACC users.

    Since that time, OpenMFA has recorded more than half a million logins and counting. TACC has also open-sourced the tool for free, public use. The Extreme Science and Engineering Discovery Environment (XSEDE) is considering OpenMFA for its large user base, and many other universities and research centers have expressed interest in using the tool.

    TACC developed OpenMFA to suit the center’s needs and to save money. But in the end, the tool will also help many other taxpayer-funded institutions improve their security while maintaining research productivity. This allows funding to flow into other efforts, thus increasing the amount of science that can be accomplished, while making that research more secure.

    TACC staff will present the details of OpenMFA’s development at this year’s Internet2 Technology Exchange and at The International Conference for High Performance Computing, Networking, Storage and Analysis (SC17).

    To learn more about OpenMFA or explore the code, visit the Github repository.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The Texas Advanced Computing Center (TACC) designs and operates some of the world’s most powerful computing resources. The center’s mission is to enable discoveries that advance science and society through the application of advanced computing technologies.

    TACC Maverick HP NVIDIA supercomputer

    TACC Lonestar Cray XC40 supercomputer

    Dell Poweredge U Texas Austin Stampede Supercomputer. Texas Advanced Computer Center 9.6 PF

    TACC HPE Apollo 8000 Hikari supercomputer

    TACC DELL EMC Stampede2 supercomputer

     
  • richardmitnick 7:53 pm on June 25, 2017 Permalink | Reply
    Tags: , Burmese pythons (as well as other snakes) massively downregulate their metabolic and physiological functions during extended periods of fasting During this time their organs atrophy saving energy Howe, Evolution takes eons but it leaves marks on the genomes of organisms that can be detected with DNA sequencing and analysis, Researchers use Supercomputer to Uncover how Pythons Regenerate Their Organs, , TACC - Texas Advanced Computer Center, The Role of Supercomputing in Genomics Research, understanding the mechanisms by which Burmese pythons regenerate their organs including their heart liver kidney and small intestines after feeding, Within 48 hours of feeding Burmese pythons can undergo up to a 44-fold increase in metabolic rate and the mass of their major organs can increase by 40 to 100 percent   

    From UT Austin: “Researchers use Supercomputer to Uncover how Pythons Regenerate Their Organs” 

    U Texas Austin bloc

    University of Texas at Austin

    06/22/2017
    No writer credit found

    A Burmese python superimposed on an analysis of gene expression that uncovers how the species changes in its organs upon feeding. Credit: Todd Castoe

    Evolution takes eons, but it leaves marks on the genomes of organisms that can be detected with DNA sequencing and analysis.

    As methods for studying and comparing genetic data improve, scientists are beginning to decode these marks to reconstruct the evolutionary history of species, as well as how variants of genes give rise to unique traits.

    A research team at the University of Texas at Arlington led by assistant professor of biology Todd Castoe has been exploring the genomes of snakes and lizards to answer critical questions about these creatures’ evolutionary history. For instance, how did they develop venom? How do they regenerate their organs? And how do evolutionarily-derived variations in genes lead to variations in how organisms look and function?

    “Some of the most basic questions drive our research. Yet trying to understand the genetic explanations of such questions is surprisingly difficult considering most vertebrate genomes, including our own, are made up of literally billions of DNA bases that can determine how an organism looks and functions,” says Castoe. “Understanding these links between differences in DNA and differences in form and function is central to understanding biology and disease, and investigating these critical links requires massive computing power.”

    To uncover new insights that link variation in DNA with variation in vertebrate form and function, Castoe’s group uses supercomputing and data analysis resources at the Texas Advanced Computing Center or TACC, one of the world’s leading centers for computational discovery.

    TACC Maverick HP NVIDIA supercomputer

    TACC Lonestar Cray XC40 supercomputer

    Dell Poweredge U Texas Austin Stampede Supercomputer. Texas Advanced Computer Center 9.6 PF

    TACC HPE Apollo 8000 Hikari supercomputer

    Recently, they used TACC’s supercomputers to understand the mechanisms by which Burmese pythons regenerate their organs — including their heart, liver, kidney, and small intestines — after feeding.

    Burmese pythons (as well as other snakes) massively downregulate their metabolic and physiological functions during extended periods of fasting. During this time their organs atrophy, saving energy. However, upon feeding, the size and function of these organs, along with their ability to generate energy, dramatically increase to accommodate digestion.

    Within 48 hours of feeding, Burmese pythons can undergo up to a 44-fold increase in metabolic rate and the mass of their major organs can increase by 40 to 100 percent.

    Writing in BMC Genomics in May 2017, the researchers described their efforts to compare gene expression in pythons that were fasting, one day post-feeding and four days post-feeding. They sequenced pythons in these three states and identified 1,700 genes that were significantly different pre- and post-feeding. They then performed statistical analyses to identify the key drivers of organ regeneration across different types of tissues.
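    As a heavily simplified picture of what such a comparison involves, the sketch below runs a per-gene test of fasted versus post-feeding expression values on random placeholder data. The published analysis used dedicated RNA-seq statistics rather than the plain t-test and fold-change filter shown here.

        # Heavily simplified sketch of a pre- vs. post-feeding expression comparison
        # on random placeholder data. The published study used dedicated RNA-seq
        # statistics, not a plain t-test; this only illustrates the shape of the step.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        n_genes, n_reps = 5000, 3
        base = rng.lognormal(mean=5.0, sigma=1.0, size=(n_genes, 1))       # gene-level expression
        fasted = base * rng.lognormal(0.0, 0.2, size=(n_genes, n_reps))    # replicate noise
        fed_1day = base * rng.lognormal(0.0, 0.2, size=(n_genes, n_reps))

        shifted = rng.random(n_genes) < 0.10      # pretend 10% of genes respond to feeding
        fed_1day[shifted] *= 4.0

        t_stat, p_val = stats.ttest_ind(fasted, fed_1day, axis=1)
        log2_fc = np.log2(fed_1day.mean(axis=1) / fasted.mean(axis=1))

        significant = (p_val < 0.05) & (np.abs(log2_fc) > 1.0)  # crude p-value + fold-change filter
        print(f"{significant.sum()} genes flagged as differentially expressed")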

    What they found was that a few sets of genes were influencing the wholesale change of pythons’ internal organ structure. Key proteins, produced and regulated by these important genes, activated a cascade of diverse, tissue-specific signals that led to regenerative organ growth.

    Intriguingly, even mammalian cells have been shown to respond to serum produced by post-feeding pythons, suggesting that the signaling function is conserved across species and could one day be used to improve human health.

    “We’re interested in understanding the molecular basis of this phenomenon to see what genes are regulated related to the feeding response,” says Daren Card, a doctoral student in Castoe’s lab and one of the authors of the study. “Our hope is that we can leverage our understanding of how snakes accomplish organ regeneration to one day help treat human diseases.”

    Making Evolutionary Sense of Secondary Contact

    Castoe and his team used a similar genomic approach to understand gene flow in two closely related species of western rattlesnakes with an intertwined genetic history.

    The two species live on opposite sides of the Continental Divide in Mexico and the U.S. They were separated for thousands of years and evolved in response to different climates and habitat. However, over time their geographic ranges came back together to the point that the rattlesnakes began to crossbreed, leading to hybrids, some of which live in a region between the two distinct climates.

    The work was motivated by a desire to understand what forces generate and maintain distinct species, and how shifts in the ranges of species (for example, due to global change) may impact species and speciation.

    The researchers compared thousands of genes in the rattlesnakes’ nuclear DNA to study genomic differentiation between the two lineages. Their comparisons revealed a relationship between genetic traits that are most important in evolution during isolation and those that are most important during secondary contact, with greater-than-expected overlap between genes in these two scenarios.

    However, they also found regions of the rattlesnake genome that are important in only one of these two scenarios. For example, genes functioning in venom composition and in reproductive differences — distinct traits that are important for adaptation to the local habitat — likely diverged under selection when these species were isolated. They also found other sets of genes that were not originally important for diversification of form and function, that later became important in reducing the viability of hybrids. Overall, their results provide a genome-scale perspective on how speciation might work that can be tested and refined in studies of other species.

    The team published their results in the April 2017 issue of Ecology and Evolution.

    The Role of Supercomputing in Genomics Research

    The studies performed by members of the Castoe lab rely on advanced computing for several aspects of the research. First, they use advanced computing to create genome assemblies — putting millions of small chunks of DNA in the correct order.

    “Vertebrate genomes are typically on the larger side, so it takes a lot of computational power to assemble them,” says Card. “We use TACC a lot for that.”
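    Production assemblies rely on dedicated assemblers running on TACC hardware, but the core idea of putting overlapping chunks of sequence back in the correct order can be shown with a toy greedy-overlap merge on a few short, error-free reads. The reads below are invented.

        # Toy illustration of "putting millions of small chunks of DNA in the correct
        # order", shrunk to three invented, error-free reads. Real assemblies use
        # dedicated assemblers on HPC systems.
        def overlap(a, b, min_len=3):
            """Length of the longest suffix of a that is also a prefix of b."""
            for n in range(min(len(a), len(b)), min_len - 1, -1):
                if a.endswith(b[:n]):
                    return n
            return 0

        def greedy_assemble(reads):
            reads = list(reads)
            while len(reads) > 1:
                n, a, b = max(((overlap(x, y), x, y)
                               for x in reads for y in reads if x is not y),
                              key=lambda t: t[0])
                if n == 0:
                    break
                reads.remove(a); reads.remove(b)
                reads.append(a + b[n:])   # merge the best-overlapping pair
            return reads[0]

        reads = ["GATTACAGG", "ACAGGTTCA", "GTTCAGCGT"]
        print(greedy_assemble(reads))     # -> GATTACAGGTTCAGCGT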

    Next, the researchers use advanced computing to compare the results among many different samples, from multiple lineages, to identify subtle differences and patterns that would not be distinguishable otherwise.

    Castoe’s lab has their own in-house computers, but they fall short of what is needed to perform all of the studies the group is interested in working on.

    “In terms of genome assemblies and the very intensive analyses we do, accessing larger resources from TACC is advantageous,” Card says. “Certain things benefit substantially from the general output from TACC machines, but they also allow us to run 500 jobs at the same time, which speeds up the research process considerably.”

    A third computer-driven approach lets the team simulate the process of genetic evolution over millions of generations using synthetic biological data to deduce the rules of evolution, and to identify genes that may be important for adaptation.

    For one such project, the team developed a new software tool called GppFst that allows researchers to differentiate genetic drift – a neutral process whereby genes and gene sequences naturally change due to random mating within a population – from genetic variations that are indicative of evolutionary changes caused by natural selection.

    The tool uses simulations to statistically determine which changes are meaningful and can help biologists better understand the processes that underlie genetic variation. They described the tool in the May 2017 issue of Bioinformatics.
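    The idea behind GppFst can be illustrated without its statistical machinery: simulate the amount of between-population differentiation (Fst) expected from drift alone, then flag observed loci that fall outside that neutral envelope. Everything in the sketch below, from the drift model to the observed values, is invented for illustration and is much cruder than the published method.

        # Illustration only of separating drift from selection: simulate Fst values
        # expected under neutral drift, then flag observed loci beyond the neutral
        # envelope. All parameters and "observed" values are invented; the real
        # GppFst tool uses a proper model of the two populations' history.
        import numpy as np

        rng = np.random.default_rng(7)

        def simulate_neutral_fst(n_loci=100_000, n_ind=20, drift_sd=0.05):
            p_anc = rng.uniform(0.05, 0.95, n_loci)    # ancestral allele frequencies
            p1 = np.clip(p_anc + rng.normal(0, drift_sd, n_loci), 0.01, 0.99)
            p2 = np.clip(p_anc + rng.normal(0, drift_sd, n_loci), 0.01, 0.99)
            f1 = rng.binomial(2 * n_ind, p1) / (2 * n_ind)   # sampled frequencies, pop 1
            f2 = rng.binomial(2 * n_ind, p2) / (2 * n_ind)   # sampled frequencies, pop 2
            p_bar = (f1 + f2) / 2
            return ((f1 - f2) ** 2 / 4) / (p_bar * (1 - p_bar) + 1e-9)

        neutral = simulate_neutral_fst()
        cutoff = np.quantile(neutral, 0.99)                  # 99th percentile of neutral Fst

        observed = np.array([0.02, 0.10, 0.45, 0.80])        # hypothetical observed loci
        print("candidate selection outliers:", observed > cutoff)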

    Lab members are able to access TACC resources through a unique initiative, called the University of Texas Research Cyberinfrastructure, which gives researchers from the state’s 14 public universities and health centers access to TACC’s systems and staff expertise.

    “It’s been integral to our research,” said Richard Adams, another doctoral student in Castoe’s group and the developer of GppFst. “We simulate large numbers of different evolutionary scenarios. For each, we want to have hundreds of replicates, which are required to fully vet our conclusions. There’s no way to do that on our in-house systems. It would take 10 to 15 years to finish what we would need to do with our own machines — frankly, it would be impossible without the use of TACC systems.”

    Though the roots of evolutionary biology can be found in field work and close observation, today, the field is deeply tied to computing, since the scale of genetic material — tiny but voluminous — cannot be viewed with the naked eye or put in order by an individual.

    “The massive scale of genomes, together with rapid advances in gathering genome sequence information, has shifted the paradigm for many aspects of life science research,” says Castoe.

    “The bottleneck for discovery is no longer the generation of data, but instead is the analysis of such massive datasets. Data that takes less than a few weeks to generate can easily take years to analyze, and flexible shared supercomputing resources like TACC have become more critical than ever for advancing discovery in our field, and broadly for the life sciences.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    U Texas Arlington Campus

    In 1839, the Congress of the Republic of Texas ordered that a site be set aside to meet the state’s higher education needs. After a series of delays over the next several decades, the state legislature reinvigorated the project in 1876, calling for the establishment of a “university of the first class.” Austin was selected as the site for the new university in 1881, and construction began on the original Main Building in November 1882. Less than one year later, on Sept. 15, 1883, The University of Texas at Austin opened with one building, eight professors, one proctor, and 221 students — and a mission to change the world. Today, UT Austin is a world-renowned higher education, research, and public service institution serving more than 51,000 students annually through 18 top-ranked colleges and schools.

     
  • richardmitnick 3:27 pm on June 25, 2017 Permalink | Reply
    Tags: , , , , TACC - Texas Advanced Computer Center, TACC Lonestar supercomputer, TACC Stampede supercomputer   

    From Science Node: “Computer simulations and big data advance cancer immunotherapy” 

    Science Node bloc
    Science Node

    09 Jun, 2017 [Where has this been?]
    Aaron Dubrow

    Supercomputers help classify immune response, design clinical trials, and analyze immune repertoire data.

    Scanning electron micrograph of a human T lymphocyte (also called a T cell) from the immune system of a healthy donor. Immunotherapy fights cancer by supercharging the immune system’s natural defenses (including T cells) or contributing additional immune elements that can help the body kill cancer cells. [Credit: NIAID]

    The body has a natural way of fighting cancer – it’s called the immune system, and it is tuned to defend our cells against outside infections and internal disorder. But occasionally, it needs a helping hand.

    In recent decades, immunotherapy has become an important tool in treating a wide range of cancers, including breast cancer, melanoma and leukemia.

    But alongside its successes, scientists have discovered that immunotherapy sometimes has powerful — even fatal — side-effects.

    Identifying patient-specific immune treatments

    Not every immune therapy works the same on every patient. Differences in an individual’s immune system may mean one treatment is more appropriate than another. Furthermore, tweaking that system might heighten the efficacy of certain treatments.

    Researchers from Wake Forest School of Medicine and Zhejiang University in China developed a novel mathematical model to explore the interactions between prostate tumors and common immunotherapy approaches, individually and in combination.

    In a study published in Nature Scientific Reports, they used their model to predict how prostate cancer would react to four common immunotherapies.

    The researchers incorporated data from animal studies into their complex mathematical models and simulated tumor responses to the treatments using the Stampede supercomputer at the Texas Advanced Computing Center (TACC).

    Dell Poweredge U Texas Austin Stampede Supercomputer. Texas Advanced Computer Center 9.6 PF

    “We do a lot of modeling which relies on millions of simulations,” says Jing Su, a researcher at the Center for Bioinformatics and Systems Biology at Wake Forest School of Medicine and assistant professor in the Department of Diagnostic Radiology.

    “To get a reliable result, we have to repeat each computation at least 100 times. We want to explore the combinations and effects and different conditions and their results.”
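    The published model is far more detailed than anything that fits here, but the general shape of such a study is a small system of differential equations for tumor and immune cells, integrated over and over across combinations of treatment parameters. The two-equation system and every number below are generic placeholders, not the Wake Forest and Zhejiang model.

        # Generic sketch of a tumor-immune simulation sweep (not the published model).
        # A toy two-equation ODE system is integrated for each combination of two
        # hypothetical treatment parameters, mimicking a parameter sweep in miniature.
        import itertools
        from scipy.integrate import solve_ivp

        def tumor_immune(t, y, r, k, kill, boost):
            T, E = y                                   # tumor cells, effector immune cells
            dT = r * T * (1 - T / k) - kill * E * T    # logistic growth minus immune killing
            dE = boost + 0.001 * T - 0.1 * E           # therapy boost + recruitment - decay
            return [dT, dE]

        results = {}
        for kill, boost in itertools.product([0.0, 0.005, 0.02], [0.0, 0.5, 1.0]):
            sol = solve_ivp(tumor_immune, (0, 300), [100.0, 10.0],
                            args=(0.2, 1e4, kill, boost))
            results[(kill, boost)] = sol.y[0, -1]      # final tumor burden for this combination

        for (kill, boost), tumor in sorted(results.items()):
            print(f"kill={kill:.3f} boost={boost:.1f} -> final tumor ~ {tumor:,.0f}")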

    TACC’s high performance computing resources allowed the researchers to highlight a potential therapeutic strategy that may manage prostate tumor growth more effectively.

    Designing more efficient clinical trials

    Biological agents used in immunotherapy — including those that target a specific tumor pathway, aim for DNA repair, or stimulate the immune system to attack a tumor — function differently from radiation and chemotherapy.

    Because traditional dose-finding designs are not suitable for trials of biological agents, novel designs that consider both the toxicity and efficacy of these agents are imperative.

    Chunyan Cai, assistant professor of biostatistics at UT Health Science Center (UTHSC)’s McGovern Medical School, uses TACC systems to design new kinds of dose-finding trials for combinations of immunotherapies.

    Writing in the Journal of the Royal Statistical Society: Series C (Applied Statistics), Cai and her collaborators Ying Yuan and Yuan Ji described efforts to identify biologically optimal dose combinations for agents that target the PI3K/AKT/mTOR signaling pathway, which has been associated with several genetic aberrations related to the promotion of cancer.

    After 2,000 simulations on the Lonestar supercomputer for each of six proposed dose-finding designs, they discovered the optimal combination gives higher priority to trying new doses in the early stage of the trial.

    TACC Lonestar Cray XC40 supercomputer

    The best case also assigns patients to the most effective dose that is safe toward the end of the trial.

    “Extensive simulation studies show that the design proposed has desirable operating characteristics in identifying the biologically optimal dose combination under various patterns of dose–toxicity and dose–efficacy relationships,” Cai concludes.
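    The designs in the paper are Bayesian and adaptive, but what “2,000 simulations per design” means in principle can be shown with a much cruder stand-in: generate virtual trials under assumed toxicity and efficacy probabilities and count how often each dose is declared optimal. All probabilities and the selection rule below are invented.

        # Crude stand-in for simulating a dose-finding design: virtual trials are
        # generated under assumed (invented) toxicity/efficacy probabilities and we
        # tally how often each dose is selected. The real designs are Bayesian and
        # adaptive; this only illustrates what "operating characteristics" measure.
        import random
        from collections import Counter

        random.seed(3)
        # Assumed true (toxicity, efficacy) probabilities for four dose levels.
        DOSES = {1: (0.05, 0.20), 2: (0.10, 0.35), 3: (0.25, 0.50), 4: (0.45, 0.60)}
        MAX_TOX = 0.30   # highest acceptable observed toxicity rate

        def run_trial(n_per_dose=30):
            """Treat a cohort at each dose, then pick the most effective acceptable dose."""
            best, best_eff = None, -1.0
            for dose, (p_tox, p_eff) in DOSES.items():
                tox = sum(random.random() < p_tox for _ in range(n_per_dose)) / n_per_dose
                eff = sum(random.random() < p_eff for _ in range(n_per_dose)) / n_per_dose
                if tox <= MAX_TOX and eff > best_eff:
                    best, best_eff = dose, eff
            return best

        selections = Counter(run_trial() for _ in range(2000))   # 2,000 simulated trials
        print(selections)   # how often each dose is declared "biologically optimal"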

    Whether in support of population-level immune response studies, clinical dosing trials, or community-wide efforts, TACC’s advanced computing resources are helping scientists put the immune system to work to better fight cancer.

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     