Tagged: Supercomputing

  • richardmitnick 1:19 pm on September 13, 2017 Permalink | Reply
    Tags: , , , , , PHENIX (Python-based Hierarchical ENvironment for Integrated Xtallography), Supercomputing, TFIIH-Transcription factor IIH   

    From LBNL: “Berkeley Lab Scientists Map Key DNA Protein Complex at Near-Atomic Resolution” 

    Berkeley Logo

    Berkeley Lab

    September 13, 2017
    Sarah Yang
    scyang@lbl.gov
    (510) 486-4575

    1
    The cryo-EM structure of transcription factor IIH (TFIIH). The atomic coordinate model, colored according to the different TFIIH subunits, is shown inside the semi-transparent cryo-EM map. (Credit: Basil Greber/Berkeley Lab and UC Berkeley)

    Chalking up another success for a new imaging technology that has energized the field of structural biology, researchers at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) obtained the highest resolution map yet of a large assembly of human proteins that is critical to DNA function.

    The scientists are reporting their achievement today in an advanced online publication of the journal Nature. They used cryo-electron microscopy (cryo-EM) to resolve the 3-D structure of a protein complex called transcription factor IIH (TFIIH) at 4.4 angstroms, or near-atomic resolution. This protein complex is used to unzip the DNA double helix so that genes can be accessed and read during transcription or repair.

    “When TFIIH goes wrong, DNA repair can’t occur, and that malfunction is associated with severe cancer propensity, premature aging, and a variety of other defects,” said study principal investigator Eva Nogales, faculty scientist at Berkeley Lab’s Molecular Biophysics and Integrated Bioimaging Division. “Using this structure, we can now begin to place mutations in context to better understand why they give rise to misbehavior in cells.”

    TFIIH’s critical role in DNA function has made it a prime target for research, but it is considered a difficult protein complex to study, especially in humans.

    ___________________________________________________________________
    How to Capture a Protein
    1
    It takes a large store of patience and persistence to prepare specimens of human transcription factor IIH (TFIIH) for cryo-EM. Because TFIIH exists in such minute amounts in a cell, the researchers had to grow 50 liters of human cells in culture to yield a few micrograms of the purified protein.

    Human TFIIH is particularly fragile and prone to falling apart in the flash-freezing process, so researchers need to use an optimized buffer solution to help protect the protein structure.

    “These compounds that protect the proteins also work as antifreeze agents, but there’s a trade-off between protein stability and the ability to produce a transparent film of ice needed for cryo-EM,” said study lead author Basil Greber.

    Once Greber obtains a usable sample, he settles down for several days at the cryo-electron microscope at UC Berkeley’s Stanley Hall for imaging.

    “Once you have that sample inside the microscope, you keep collecting data as long as you can,” he said. “The process can take four days straight.”
    ___________________________________________________________________

    Mapping complex proteins

    “As organisms get more complex, these proteins do, too, taking on extra bits and pieces needed for regulatory functions at many different levels,” said Eva Nogales, who is also a UC Berkeley professor of molecular and cell biology and a Howard Hughes Medical Institute investigator. “The fact that we resolved this protein structure from human cells makes this even more relevant to disease research. There’s no need to extrapolate the protein’s function based upon how it works in other organisms.”

    Biomolecules such as proteins are typically imaged using X-ray crystallography, but that method requires a large amount of stable sample for the crystallization process to work. The challenge with TFIIH is that it is hard to produce and purify in large quantities, and once obtained, it may not form crystals suitable for X-ray diffraction.

    Enter cryo-EM, which can work even when sample amounts are very small. Electrons are sent through purified samples that have been flash-frozen at ultracold temperatures to prevent crystalline ice from forming.

    Cryo-EM has been around for decades, but major advances over the past five years have led to a quantum leap in the quality of high-resolution images achievable with this technique.

    “When your goal is to get resolutions down to a few angstroms, the problem is that any motion gets magnified,” said study lead author Basil Greber, a UC Berkeley postdoctoral fellow at the California Institute for Quantitative Biosciences (QB3). “At high magnifications, the slight movement of the specimen as electrons move through leads to a blurred image.”

    Making movies

    The researchers credit the explosive growth in cryo-EM to advanced detector technology that Berkeley Lab engineer Peter Denes helped develop. Instead of a single picture taken for each sample, the direct detector camera shoots multiple frames in a process akin to recording a movie. The frames are then put together to create a high-resolution image. This approach resolves the blur from sample movement. The improved images contain higher quality data, and they allow researchers to study the sample in multiple states, as they exist in the cell.
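
    As a rough illustration of the idea (not the actual software used in the study), the minimal Python sketch below aligns each movie frame to the first by cross-correlation and then sums them; production motion-correction tools also apply dose weighting and finer, patch-based corrections.

        import numpy as np

        def align_and_sum(frames):
            # frames: sequence of 2-D detector frames from one cryo-EM movie.
            # Toy drift correction: estimate each frame's whole-frame shift
            # relative to the first frame with an FFT cross-correlation,
            # undo that shift, and sum the aligned frames into one image.
            ref = frames[0].astype(float)
            total = ref.copy()
            ref_ft = np.fft.fft2(ref)
            for frame in frames[1:]:
                xcorr = np.fft.ifft2(ref_ft * np.conj(np.fft.fft2(frame))).real
                dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
                total += np.roll(frame.astype(float), shift=(dy, dx), axis=(0, 1))
            return total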

    Since shooting a movie generates far more data than a single frame, and thousands of movies are being collected during a microscopy session, the researchers needed the processing punch of supercomputers at the National Energy Research Scientific Computing Center (NERSC) at Berkeley Lab.

    NERSC Cray Cori II supercomputer

    LBL NERSC Cray XC30 Edison supercomputer

    NERSC Hopper Cray XE6 supercomputer

    The output from these computations was a 3-D map that required further interpretation.

    “When we began the data processing, we had 1.5 million images of individual molecules to sort through,” said Greber. “We needed to select particles that are representative of an intact complex. After 300,000 CPU hours at NERSC, we ended up with 120,000 images of individual particles that were used to compute the 3-D map of the protein.”
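
    For a rough sense of scale (the core count here is an assumption for illustration, not a figure from the study):

        300,000 CPU hours / 5,000 cores ≈ 60 hours of wall-clock time
        300,000 CPU hours / 120,000 particles = 2.5 CPU hours per retained particle image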

    To obtain an atomic model of the protein complex based on this 3-D map, the researchers used PHENIX (Python-based Hierarchical ENvironment for Integrated Xtallography), a software program whose development is led by Paul Adams, director of Berkeley Lab’s Molecular Biophysics and Integrated Bioimaging Division and a co-author of this study.

    Not only does this structure improve basic understanding of DNA repair, the information could be used to help visualize how specific molecules are binding to target proteins in drug development.

    “In studying the physics and chemistry of these biological molecules, we’re often able to determine what they do, but how they do it is unclear,” said Nogales. “This work is a prime example of what structural biologists do. We establish the framework for understanding how the molecules function. And with that information, researchers can develop finely targeted therapies with more predictive power.”

    Other co-authors on this study are Pavel Afonine and Thi Hoang Duong Nguyen, both of whom have joint appointments at Berkeley Lab and UC Berkeley; and Jie Fang, a researcher at the Howard Hughes Medical Institute.

    NERSC is a DOE Office of Science User Facility located at Berkeley Lab. In addition to NERSC, the researchers used the Lawrencium computing cluster at Berkeley Lab. This work was funded by the National Institute of General Medical Sciences and the Swiss National Science Foundation.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal

     
  • richardmitnick 2:10 pm on August 30, 2017 Permalink | Reply
    Tags: ALCF, , , Dealing with massive data, Supercomputing,   

    From ANL: “Big Bang – The Movie” 

    Argonne Lab
    News from Argonne National Laboratory

    August 24, 2017
    Jared Sagoff
    Austin Keating

    If you have ever had to wait those agonizing minutes in front of a computer for a movie or large file to load, you’ll likely sympathize with the plight of cosmologists at the U.S. Department of Energy’s (DOE) Argonne National Laboratory. But instead of watching TV dramas, they are trying to transfer, as fast and as accurately as possible, the huge amounts of data that make up movies of the universe – computationally demanding and highly intricate simulations of how our cosmos evolved after the Big Bang.

    In a new approach to enable scientific breakthroughs, researchers linked together supercomputers at the Argonne Leadership Computing Facility (ALCF) and at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign (UI). This link enabled scientists to transfer massive amounts of data and to run two different types of demanding computations in a coordinated fashion – referred to technically as a workflow.

    What distinguishes the new work from typical workflows is the scale of the computation, the associated data generation and transfer and the scale and complexity of the final analysis. Researchers also tapped the unique capabilities of each supercomputer: They performed cosmological simulations on the ALCF’s Mira supercomputer, and then sent huge quantities of data to UI’s Blue Waters, which is better suited to perform the required data analysis tasks because of its processing power and memory balance.

    ANL ALCF MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    U Illinois Blue Waters Cray supercomputer

    For cosmology, observations of the sky and computational simulations go hand in hand, as each informs the other. Cosmological surveys are becoming ever more complex as telescopes reach deeper into space and time, mapping out the distributions of galaxies at farther and farther distances, at earlier epochs of the evolution of the universe.

    The very nature of cosmology precludes carrying out controlled lab experiments, so scientists rely instead on simulations to provide a unique way to create a virtual cosmological laboratory. “The simulations that we run are a backbone for the different kinds of science that can be done experimentally, such as the large-scale experiments at different telescope facilities around the world,” said Argonne cosmologist Katrin Heitmann. “We talk about building the ‘universe in the lab,’ and simulations are a huge component of that.”

    Not just any computer is up to the immense challenge of generating and dealing with datasets that can exceed many petabytes a day, according to Heitmann. “You really need high-performance supercomputers that are capable of not only capturing the dynamics of trillions of different particles, but also doing exhaustive analysis on the simulated data,” she said. “And sometimes, it’s advantageous to run the simulation and do the analysis on different machines.”

    Typically, cosmological simulations can only output a fraction of the frames of the computational movie as it is running because of data storage restrictions. In this case, Argonne sent every data frame to NCSA as soon as it was generated, allowing Heitmann and her team to greatly reduce the storage demands on the ALCF file system. “You want to keep as much data around as possible,” Heitmann said. “In order to do that, you need a whole computational ecosystem to come together: the fast data transfer, having a good place to ultimately store that data and being able to automate the whole process.”

    In particular, Argonne transferred the data produced immediately to Blue Waters for analysis. The first challenge was to set up the transfer to sustain the bandwidth of one petabyte per day.
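
    That target is easier to picture as a sustained rate:

        1 PB/day = 10^15 bytes / 86,400 s ≈ 1.16 × 10^10 bytes/s ≈ 11.6 GB/s (roughly 93 gigabits per second), around the clock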

    Once Blue Waters performed the first pass of data analysis, it reduced the raw data – with high fidelity – into a manageable size. At that point, researchers sent the data to a distributed repository at Argonne, the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory and the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. Cosmologists can access and further analyze the data through a system built by researchers in Argonne’s Mathematics and Computer Science Division in collaboration with Argonne’s High Energy Physics Division.

    Argonne and the University of Illinois built one such central repository on the Supercomputing ’16 conference exhibition floor in November 2016, with memory units supplied by DDN Storage. The data moved over 1,400 miles to the conference’s SCinet network. The link between the computers used high-speed networking through the Department of Energy’s Energy Science Network (ESnet). Researchers sought, in part, to take full advantage of the fast SCinet infrastructure to do real science; typically it is used for demonstrations of technology rather than for solving real scientific problems.

    “External data movement at high speeds significantly impacts a supercomputer’s performance,” said Brandon George, systems engineer at DDN Storage. “Our solution addresses that issue by building a self-contained data transfer node with its own high-performance storage that takes in a supercomputer’s results and the responsibility for subsequent data transfers of said results, leaving supercomputer resources free to do their work more efficiently.”

    The full experiment ran successfully for 24 hours without interruption and led to a valuable new cosmological data set that Heitmann and other researchers started to analyze on the SC16 show floor.

    Argonne senior computer scientist Franck Cappello, who led the effort, likened the software workflow that the team developed to accomplish these goals to an orchestra. In this “orchestra,” Cappello said, the software connects individual sections, or computational resources, to make a richer, more complex sound.

    He added that his collaborators hope to improve the performance of the software to make the production and analysis of extreme-scale scientific data more accessible. “The SWIFT workflow environment and the Globus file transfer service were critical technologies to provide the effective and reliable orchestration and the communication performance that were required by the experiment,” Cappello said.
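
    The orchestration pattern itself is straightforward to sketch. The toy Python pipeline below strings together "simulate", "transfer" and "analyze" stages with queues; it is only an illustration of the pattern, and all names in it are made up, since the production workflow used SWIFT and Globus on real supercomputers.

        import queue
        import threading

        def simulate(n_frames, out_q):
            # Stand-in for the cosmology code: emit one "frame" at a time.
            for i in range(n_frames):
                out_q.put(f"frame_{i:05d}")
            out_q.put(None)  # sentinel: no more frames

        def transfer(in_q, out_q):
            # Stand-in for the wide-area transfer step (Globus in the real workflow).
            while (frame := in_q.get()) is not None:
                out_q.put(frame)  # pretend the frame now lives at the analysis site
            out_q.put(None)

        def analyze(in_q, results):
            # Stand-in for the analysis pass run at the remote center.
            while (frame := in_q.get()) is not None:
                results.append(f"summary_of_{frame}")

        sim_to_xfer, xfer_to_ana, results = queue.Queue(), queue.Queue(), []
        stages = [
            threading.Thread(target=simulate, args=(10, sim_to_xfer)),
            threading.Thread(target=transfer, args=(sim_to_xfer, xfer_to_ana)),
            threading.Thread(target=analyze, args=(xfer_to_ana, results)),
        ]
        for s in stages:
            s.start()
        for s in stages:
            s.join()
        print(len(results), "frames analyzed")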

    “The idea is to have data centers like we have for the commercial cloud. They will hold scientific data and will allow many more people to access and analyze this data, and develop a better understanding of what they’re investigating,” said Cappello, who also holds an affiliate position at NCSA and serves as director of the international Joint Laboratory on Extreme Scale Computing, based in Illinois. “In this case, the focus was cosmology and the universe. But this approach can aid scientists in other fields in reaching their data just as well.”

    Argonne computer scientist Rajkumar Kettimuthu and David Wheeler, lead network engineer at NCSA, were instrumental in establishing the configuration that actually reached this performance. Maxine Brown from University of Illinois provided the Sage environment to display the analysis result at extreme resolution. Justin Wozniak from Argonne developed the whole workflow environment using SWIFT to orchestrate and perform all operations.

    The Argonne Leadership Computing Facility, the Oak Ridge Leadership Computing Facility, the Energy Science Network and the National Energy Research Scientific Computing Center are DOE Office of Science User Facilities. Blue Waters is the largest leadership-class supercomputer funded by the National Science Foundation. Part of this work was funded by DOE’s Office of Science.

    The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign provides supercomputing and advanced digital resources for the nation’s science enterprise. At NCSA, University of Illinois faculty, staff, students, and collaborators from around the globe use advanced digital resources to address research grand challenges for the benefit of science and society. NCSA has been advancing one third of the Fortune 50 for more than 30 years by bringing industry, researchers, and students together to solve grand challenges at rapid speed and scale.

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition
    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    The Advanced Photon Source at Argonne National Laboratory is one of five national synchrotron radiation light sources supported by the U.S. Department of Energy’s Office of Science to carry out applied and basic research to understand, predict, and ultimately control matter and energy at the electronic, atomic, and molecular levels, provide the foundations for new energy technologies, and support DOE missions in energy, environment, and national security. To learn more about the Office of Science X-ray user facilities, visit http://science.energy.gov/user-facilities/basic-energy-sciences/.

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 7:59 am on August 25, 2017 Permalink | Reply
    Tags: , , , Caty Pilachowski, , , , , Supercomputing,   

    From Science Node: Women in Stem -“A Hoosier’s view of the heavens” Caty Pilachowski 

    Science Node bloc
    Science Node

    24 Aug, 2017
    Tristan Fitzpatrick

    6
    Caty Pilachowski

    1
    Courtesy Emily Sterneman; Indiana University.

    “An eclipse violates our sense of what’s right.”

    So says Caty Pilachowski. Pilachowski, past president of the American Astronomical Society and now the Kirkwood Chair in Astronomy at Indiana University, has just returned from Hopkinsville, Kentucky, where she observed the eclipse on the path of totality and watched the phenomena associated with a solar eclipse.

    “There are all kinds of effects that we can see during an eclipse,” says Pilachowski. “For example, we’re able to see the corona, which we can never see during the daytime without special equipment.”

    The surface of the sun, Pilachowski explains, has a temperature of roughly 5,780 kelvins (10,000º Fahrenheit). The thin gas that makes up the corona far above the sun, however, is much hotter: over a million kelvins.

    “That process of transporting energy into the highest atmosphere of the sun is not well understood,” she observes. “It’s the region just above the bright lower atmosphere of the sun that we’re best able to see during the eclipse, and that’s where the energy transport occurs.”

    Smile for the camera

    But the star in our own neighborhood isn’t the only one Pilachowski is keeping her eye on.

    When they’re not watching eclipses, Pilachowski and her colleagues at the IU Department of Astronomy use the One Degree Imager (ODI) on the WIYN 3.5M Observatory at Kitt Peak outside Tucson, Arizona.

    2
    One Degree Imager (ODI) on the WIYN 3.5M Observatory


    NOAO WIYN 3.5 meter telescope at Kitt Peak, AZ, USA

    The ODI was designed to image one square degree of sky at a time (the full moon takes up about half a square degree). Each image produced with the ODI is potentially 1 – 2 gigabytes in size.


    Kitt Peak outside of Tucson, Arizona hosts the 3.5 meter WIYN telescope, the primary research telescope for IU astronomers. Courtesy IU Astronomy; UITS Advanced Visualization Laboratory.

    IU astronomers collect thousands of these images, creating huge datasets that need to be examined quickly for scholarly insight.
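
    The volumes add up quickly. As an illustration (the image count is assumed, not from the article):

        5,000 images × 1.5 GB per image ≈ 7.5 TB for a single observing program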

    “Datasets from the ODI are much larger than can be handled with methods astronomers previously used, such as a CD-ROM or a portable hard drive,” says Arvind Gopu, manager of the Scalable Compute Archive team.

    This is where IU’s computationally rich resources are critically important.

    The ODI Portal, Pipeline, and Archive (ODI-PPA) leverages the Karst, Big Red II, and Carbonate supercomputers at IU to quickly process these large amounts of data for analysis.

    3
    Karst supercomputer

    4
    Big Red II supercomputer

    These HPC tools allow researchers to perform statistical analysis and source extraction from the original image data. With these resources, they can determine if they’ve located stars, galaxies, or other items of interest from the large slice of the universe they’ve been viewing.
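
    For readers curious what a basic source-extraction step looks like, here is a minimal sketch using the open-source photutils and astropy packages (assuming a recent photutils); the ODI-PPA pipeline uses its own tooling, and the detection parameters below are arbitrary.

        import numpy as np
        from astropy.stats import sigma_clipped_stats
        from photutils.detection import DAOStarFinder

        def find_sources(image):
            # image: 2-D numpy array of pixel values from a calibrated frame.
            # Estimate the sky level and noise with sigma clipping, then
            # detect point sources a few sigma above that background.
            mean, median, std = sigma_clipped_stats(image, sigma=3.0)
            finder = DAOStarFinder(fwhm=3.0, threshold=5.0 * std)
            return finder(image - median)  # table of x, y positions and fluxes (or None)

        # Quick check on synthetic data: pure noise should yield few or no detections.
        rng = np.random.default_rng(0)
        fake_sky = rng.normal(loc=100.0, scale=5.0, size=(256, 256))
        print(find_sources(fake_sky))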

    “The advantage of using ODI-PPA is that you don’t have to have a lot of supercomputing experience,” says Gopu. “The idea is for astronomers to do the astronomy, and for us at UITS Research Technologies to do the computer science for them.”

    This makes the workflow on the ODI much faster than for other optical instruments. When collecting images of the universe, some instruments run into the crowded field problem, where stars are so close to each other they blend together when imaged. Teasing them apart requires a lot of computational heft.

    Another advantage ODI-PPA offers is its user-friendly web portal that makes it easy for researchers to view out-of-this-world images on their own machines, without requiring multiple trips to Kitt Peak.

    “Without the portal, IU astronomers would be dead in the water,” Pilachowski admits. “Lots and lots of data, with no way to get the science done.”

    Out of the fire and into the frying pan

    Pilachowski is also a principal investigator on the Blanco DECam Bulge Survey (BDBS). A three-year US National Science Foundation-funded project, BDBS uses the Dark Energy Camera (DECam) attached to the Blanco Telescope in Chile to map the bulge at the heart of the Milky Way.

    Dark Energy Survey


    Dark Energy Camera [DECam], built at FNAL


    NOAO/CTIO Victor M Blanco 4m Telescope, which houses DECam, at Cerro Tololo, Chile, at an altitude of 7,200 feet

    Like the yolk of a fried egg rising above egg whites in a frying pan, billions of stars orbit together to form a bulge that rises out of the galactic center.

    With the help of the DECam, Pilachowski can analyze populations of stars in the Milky Way’s bulge to study their properties.

    Astronomers use three different variables to catalog stars: how much hydrogen a star has, how much helium it has, and how much it has of the ‘metals’ (in astronomers’ shorthand, all the elements that aren’t hydrogen or helium).
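
    The third of those quantities is usually quoted as a metallicity relative to the sun, for example the iron abundance

        [Fe/H] = log10(N_Fe / N_H)_star - log10(N_Fe / N_H)_sun

    so a star with [Fe/H] = -1 has one-tenth of the sun’s iron-to-hydrogen ratio.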

    When the data from the survey is processed, Pilachowski can explore a large amount of information about stars in the bulge, giving her clues about how the Milky Way’s central star system formed.

    “Most large astronomical catalogues are in the range of 500 million stars,” says Michael Young, astronomer and senior developer analyst at UITS Research Technologies. “When we’re done with this project, we should have a catalog of about a billion stars for researchers to use.”

    Journey of two eclipses

    As a child of the atomic age, Pilachowski grew up devouring books about the evolution of stars. She read as many books as she could about how they were formed, what stages they went through, and how they died.

    “That interest in stars has been a lifelong love for me,” Pilachowski says. “It’s neat to me that what I found exciting as a kid is what I get to spend my whole career studying.”

    She observed the last total solar eclipse in the continental US on February 26, 1979, an event she says further inspired her research in astronomy.

    “For me that eclipse was a combination of, ‘Wow, this is so amazing,’” Pilachowski recalls.

    “On the other hand, the observer in me saw cool things that were present, like planets that were visible right near the sun in the day time.”

    Regardless of whether scientists get closer to answering why the sun’s outer atmosphere is much hotter than its surface, Pilachowski says the eclipse has an eerie, unnerving effect on viewers.

    “We have this deep, ingrained understanding that the sun rises every morning and sets every evening,” says Pilachowski. “Things are as they’re supposed to be. An eclipse is something so rare and counter to our intuition that it just affects us deeply.”

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 12:09 pm on August 14, 2017 Permalink | Reply
    Tags: , , , , , New 3-D Simulations Show How Galactic Centers Cool Their Jets, Supercomputing   

    From LBNL: “New 3-D Simulations Show How Galactic Centers Cool Their Jets” 

    Berkeley Logo

    Berkeley Lab

    August 14, 2017
    Glenn Roberts Jr
    geroberts@lbl.gov
    (510) 486-5582

    1
    This rendering illustrates magnetic kink instability in simulated jets beaming from a galaxy’s center. The jets are believed to be associated with supermassive black holes. The magnetic field line (white) in each jet is twisted as the central object (black hole) rotates. As the jets contact higher-density matter the magnetic fields build up and become unstable. The irregular bends and asymmetries of the magnetic field lines are symptomatic of kink instability. The instability dissipates the magnetic fields into heat with the change in density, leading them to become less tightly wound. (Credit: Berkeley Lab, Purdue University, NASA).

    Some of the most extreme outbursts observed in the universe are the mysterious jets of energy and matter beaming from the center of galaxies at nearly the speed of light. These narrow jets, which typically form in opposing pairs, are believed to be associated with supermassive black holes and other exotic objects, though the mechanisms that drive and dissipate them are not well understood.

    Now, a small team of researchers has developed theories supported by 3-D simulations to explain what’s at work.

    Finding common causes for instabilities in space jets

    “These jets are notoriously hard to explain,” said Alexander “Sasha” Tchekhovskoy, a former NASA Einstein fellow who co-led the new study as a member of the Nuclear Science Division at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab), and the Astronomy and Physics departments and Theoretical Astrophysics Center at UC Berkeley. “Why are they so stable in some galaxies and in others they just fall apart?”

    As much as half of the jets’ energy can escape in the form of X-rays and stronger forms of radiation. The researchers showed how two different mechanisms – both related to the jets’ interaction with surrounding matter, known as the “ambient medium” – serve to reduce about half of the energy of these powerful jets.

    “The exciting part of this research is that we are now coming to understand the full range of dissipation mechanisms that are working in the jet,” no matter the size or type of jet, he said.

    2
    An animation showing magnetic field instabilities in two jets of radiation and matter beaming from a supermassive black hole (center). The magnetic field (white) is twisted by the black hole’s spin. (Credit: Berkeley Lab, Purdue University)

    The study that Tchekhovskoy co-led with Purdue University scientists Rodolfo Barniol Duran and Dimitrios Giannios is published in the Aug. 21 edition of Monthly Notices of the Royal Astronomical Society. The study concludes that the ambient medium itself has a lot to do with how the jets release energy.

    “We were finally able to simulate jets that start from the black hole and propagate to very large distances – where they bump into the ambient medium,” said Duran, formerly a postdoctoral research associate at Purdue University who is now a faculty member at California State University, Sacramento.

    Tchekhovskoy, who has studied these jets for over a decade, said that an effect known as magnetic kink instability, which causes a sudden bend in the direction of some jets, and another effect that triggers a series of shocks within other jets, appear to be the primary mechanisms for energy release. The density of the ambient medium that the jets encounter serves as the key trigger for each type of release mechanism.

    “For a long time, we have speculated that shocks and instabilities trigger the spectacular light displays from jets. Now these ideas and models can be cast on a much firmer theoretical ground,” said Giannios, assistant professor of physics and astronomy at Purdue.

    The length and intensity of the jets can illuminate the properties of their associated black holes, such as their age and size and whether they are actively “feeding” on surrounding matter. The longest jets extend for millions of light years into surrounding space.

    “When we look at black holes, the first things we notice are the central streaks of these jets. You can make images of these streaks and measure their lengths, widths, and speeds to get information from the very center of the black hole,” Tchekhovskoy noted. “Black holes tend to eat in binges of tens and hundreds of millions of years. These jets are like the ‘burps’ of black holes – they are determined by the black holes’ diet and frequency of feeding.”

    3
    This animation shows the propagation of a jet of high-energy radiation and matter from a black hole (at the base of the animation) in a simulation, at four different time points. The frames show what happens as the jet contacts denser matter as it reaches out into surrounding space. (Credit: Berkeley Lab, Purdue University)

    While nothing – not even light – can escape a black hole’s interior, the jets somehow manage to draw their energy from the black hole. The jets are driven by a sort of accounting trick, he explained, like writing a check for a negative amount and having money appear in your account. In the black hole’s case, it’s the laws of physics rather than a banking loophole that allow black holes to spew energy and matter even as they suck in surrounding matter.

    The incredible friction and heating of gases spiraling in toward the black hole cause extreme temperatures and compression in magnetic fields, resulting in an energetic backlash and an outflow of radiation that escapes the black hole’s strong pull.

    A tale of magnetic kinks and sequenced shocks

    Earlier studies had shown how magnetic instabilities (kinks) in the jets can occur when jets run into the ambient medium. This instability is like a magnetic spring. If you squish the spring from both ends between your fingers, the spring will fly sideways out of your hand. Likewise, a jet experiencing this instability can change direction when it rams into matter outside of the black hole’s reach.

    The same type of instability frustrated scientists working on early machines that attempted to create and harness a superhot, charged state of matter known as a plasma in efforts to develop fusion energy, which powers the sun. The space jets, also known as active galactic nuclei (AGN) jets, also are a form of plasma.

    The latest study found that in cases where an earlier jet had “pre-drilled” a hole in the ambient medium surrounding a black hole and the matter impacted by the newly formed jet was less dense, a different process is at work in the form of “recollimation” shocks.

    These shocks form as matter and energy in the jet bounce off the sides of the hole. The jet, while losing energy from every shock, immediately reforms a narrow column until its energy eventually dissipates to the point that the beam loses its tight focus and spills out into a broad area.

    “With these shocks, the jet is like a phoenix. It comes out of the shock every time,” though with gradually lessening energy, Tchekhovskoy said. “This train of shocks cumulatively can dissipate quite a substantial amount of the total energy.”

    The researchers designed the models to smash against different densities of matter in the ambient medium to create instabilities in the jets that mimic astrophysical observations.

    Peering deeper into the source of jets

    New, higher-resolution images of regions in space where supermassive black holes are believed to exist – from the Event Horizon Telescope (EHT), for example – should help to inform and improve models and theories explaining jet behavior, Tchekhovskoy said, and future studies could also include more complexity in the jet models, such as a longer sequence of shocks.

    “It would be really interesting to include gravity into these models,” he said, “and to see the dynamics of buoyant cavities that the jet fills up with hot magnetized plasma as it drills a hole” in the ambient medium.

    4
    Side-by-side comparison of density “snapshots” produced in a 3-D simulation of jets beaming out from a black hole (at the base of images). Red shows higher density and blue shows lower density. The black directional lines show magnetic field streamlines. The perturbed magnetic lines reflect both the emergence of irregular magnetic fields in the jets and the large-scale deviations of the jets out of the image plane, both caused by the 3-D magnetic kink instability. (Credit: Berkeley Lab, Purdue University)

    He added, “Seeing deeper into where the jets come from – we think the jets start at the black hole’s event horizon (a point of no return for matter entering the black hole) – would be really helpful to see in nature these ‘bounces’ in repeating shocks, for example. The EHT could resolve this structure and provide a nice test of our work.”

    This work was supported by NASA through the Astrophysics Theory Program and Einstein Fellowship, the National Science Foundation through an XSEDE supercomputer allocation, the NASA High-End Computing Program through the NASA Advanced Supercomputing Division at Ames Research Center, Purdue University, and UC Berkeley through the Theoretical Astrophysics Center fellowship and access to the Savio supercomputer.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal

     
  • richardmitnick 5:22 pm on August 2, 2017 Permalink | Reply
    Tags: , , New Simulations Could Help in Hunt for Massive Mergers of Neutron Stars and Black Holes, Supercomputing   

    From LBNL: “New Simulations Could Help in Hunt for Massive Mergers of Neutron Stars, Black Holes” 

    Berkeley Logo

    Berkeley Lab

    August 2, 2017
    Glenn Roberts Jr
    geroberts@lbl.gov

    1
    This image, from a computerized simulation, shows the formation of an inner disk of matter and a wide, hot disk of matter 5.5 milliseconds after the merger of a neutron star and a black hole. (Credit: Classical and Quantum Gravity)

    Now that scientists can detect the wiggly distortions in space-time created by the merger of massive black holes, they are setting their sights on the dynamics and aftermath of other cosmic duos that unify in catastrophic collisions.

    Working with an international team, scientists at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) have developed new computer models to explore what happens when a black hole joins with a neutron star – the superdense remnant of an exploded star.

    Using supercomputers to rip open neutron stars

    The simulations, carried out in part at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC), are intended to help detectors home in on the gravitational-wave signals.

    NERSC

    NERSC Cray Cori II supercomputer

    LBL NERSC Cray XC30 Edison supercomputer

    NERSC Hopper Cray XE6 supercomputer

    Telescopes, too, can search for the brilliant bursts of gamma-rays and the glow of the radioactive matter that these exotic events can spew into surrounding space.

    In separate papers published in a special edition of the scientific journal Classical and Quantum Gravity, Berkeley Lab and other researchers present the results of detailed simulations.

    One of the studies models the first milliseconds (thousandths of a second) of the merger of a black hole and a neutron star; the other details separate simulations that model the formation of a disk of material within seconds of the merger and the evolution of the matter ejected in the merger.

    2
    Early “snapshots” from a simulation of a neutron star-black hole merger. This entire animated sequence occurs within 43 milliseconds (43 thousandths of a second). (Credit: Classical and Quantum Gravity)

    That ejected matter likely includes gold and platinum and a range of radioactive elements that are heavier than iron.

    Any new information scientists can gather about how neutron stars rip apart in these mergers can help to unlock their secrets, as their inner structure and their likely role in seeding the universe with heavy elements are still shrouded in mystery.

    “We are steadily adding more realistic physics to the simulations,” said Foucart, who served as a lead author for one of the studies as a postdoctoral researcher in Berkeley Lab’s Nuclear Science Division.

    “But we still don’t know what’s happening inside neutron stars. The complicated physics that we need to model make the simulations very computationally intensive.”

    Finding signs of a black hole–neutron star merger

    Foucart, who will soon be an assistant professor at the University of New Hampshire, added, “We are trying to move more toward actually making models of the gravitational-wave signals produced by these mergers,” which create a rippling in space-time that researchers hope can be detected with improvements in the sensitivity of experiments including Advanced LIGO, the Laser Interferometer Gravitational-Wave Observatory.


    Caltech/MIT Advanced aLigo Hanford, WA, USA installation


    Caltech/MIT Advanced aLigo detector installation Livingston, LA, USA

    Cornell SXS, the Simulating eXtreme Spacetimes (SXS) project


    Gravitational waves. Credit: MPI for Gravitational Physics/W.Benger-Zib

    ESA/eLISA the future of gravitational wave research

    In February 2016, LIGO scientists confirmed the first detection of a gravitational wave, believed to be generated by the merger of two black holes, each with masses about 30 times larger than the sun.

    The signals of a neutron star merging with black holes or another neutron star are expected to generate gravitational waves that are slightly weaker but similar to those of black hole–black hole mergers, Foucart said.

    Radioactive ‘waste’ in space

    Daniel Kasen, a scientist in the Nuclear Science Division at Berkeley Lab and associate professor of physics and astronomy at UC Berkeley who participated in the research, said that inside neutron stars “there may be exotic states of matter unlike anything realized anywhere else in the universe.”

    In some computer simulations the neutron stars were swallowed whole by the black hole, while in others there was a fraction of matter coughed up into space. This ejected matter is estimated to range up to about one-tenth of the mass of the sun.

    While much of the matter gets sucked into the larger black hole that forms from the merger, “the material that gets flung out eventually turns into a kind of radioactive ‘waste,’” he said. “You can see the radioactive glow of that material for a period of days or weeks, from more than a hundred million light years away.” Scientists refer to this observable radioactive glow as a “kilonova.”

    The simulations use different sets of calculations to help scientists visualize how matter escapes from these mergers. By modeling the speed, trajectory, amount and type of matter, and even the color of the light it gives off, astrophysicists can learn how to track down actual events.

    The weird world of neutron stars

    The size range of neutron stars is set by the ultimate limit on how densely matter can be compacted, and neutron stars are among the most superdense objects we know about in the universe.

    Neutron stars have been observed to have masses up to at least two times that of our sun but measure only about 12 miles in diameter, on average, while our own sun has a diameter of about 865,000 miles. At large enough masses, perhaps about three times the mass of the sun, scientists expect that neutron stars must collapse to form black holes.

    A cubic inch of matter from a neutron star is estimated to weigh up to 10 billion tons. As their name suggests, neutron stars are thought to be composed largely of the neutrally charged subatomic particles called neutrons, and some models expect them to contain long strands of matter – known as “nuclear pasta” – formed by atomic nuclei that bind together.

    Neutron stars are also expected to be almost perfectly spherical, with a rigid and incredibly smooth crust and an ultrapowerful magnetic field. They can spin at a rate of about 43,000 revolutions per minute (rpm), or about five times faster than a NASCAR race car engine.
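
    Converted to the units in which pulsar spins are usually quoted:

        43,000 rev/min ÷ 60 s/min ≈ 717 rev/s, i.e. a spin period of about 1/717 s ≈ 1.4 milliseconds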

    The aftermath of neutron star mergers

    The researchers’ simulations showed that the radioactive matter that first escapes the black hole mergers may be traveling at speeds of about 20,000 to 60,000 miles per second, or up to about one-third the speed of light, as it is swung away in a long “tidal tail.”
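
    A quick unit check on the upper end of that range:

        60,000 miles/s × 1.609 km/mile ≈ 96,500 km/s ≈ 0.32 c   (with c ≈ 299,792 km/s)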

    “This would be strange material that is loaded with neutrons,” Kasen said. “As that expanding material cools and decompresses, the particles may be able to combine to build up into the heaviest elements.” This latest research shows how scientists might find these bright bundles of heavy elements.

    “If we can follow up LIGO detections with telescopes and catch a radioactive glow, we may finally witness the birthplace of the heaviest elements in the universe,” he said. “That would answer one of the longest-standing questions in astrophysics.”

    Most of the matter in a black hole–neutron star merger is expected to be sucked up by the black hole within a millisecond of the merger, and other matter that is not flung away in the merger is likely to form an extremely dense, thin, donut-shaped halo of matter.

    The thin, hot disk of matter that is bound by the black hole is expected to form within about 10 milliseconds of the merger, and to be concentrated within about 15 to 70 miles of it, the simulations showed. This first 10 milliseconds appears to be key in the long-term evolution of these disks.

    Over timescales ranging from tens of milliseconds to several seconds, the hot disk spreads out and launches more matter into space. “A number of physical processes – from magnetic fields to particle interactions and nuclear reactions – combine in complex ways to drive the evolution of the disk,” said Rodrigo Fernández, an assistant professor of physics at the University of Alberta in Canada who led one of the studies.

    Simulations carried out on NERSC’s Edison supercomputer were crucial in understanding how the disk ejects matter and in providing clues for how to observe this matter, said Fernández, a former UC Berkeley postdoctoral researcher.

    What’s next?

    Eventually, it may be possible for astronomers scanning the night sky to find the “needle in a haystack” of radioactive kilonovae from neutron star mergers that had been missed in the LIGO data, Kasen said.

    “With improved models, we are better able to tell the observers exactly which flashes of light are the signals they are looking for,” he said. Kasen is also working to build increasingly sophisticated models of neutron star mergers and supernovae through his involvement in the DOE Exascale Computing Project.

    As the sensitivity of gravitational-wave detectors improves, Foucart said, it may be possible to detect a continuous signal produced by even a tiny bump on the surface of a neutron star, for example, or signals from theorized one-dimensional objects known as cosmic strings.

    “This could also allow us to observe events that we have not even imagined,” he said.

    5

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal

     
  • richardmitnick 4:13 pm on July 28, 2017 Permalink | Reply
    Tags: , , Dell EMC Stampede2 supercomputer, , Supercomputing,   

    From NSF: “Stampede2 forges new frontier in advanced computing” 

    nsf
    National Science Foundation

    July 28, 2017

    The National Science Foundation (NSF) today realized the initial phase of its $30 million investment to upgrade the nation’s computational research infrastructure through the dedication of Stampede2, one of the most powerful supercomputing systems in the world. Based at the Texas Advanced Computing Center (TACC) at The University of Texas at Austin, this strategic national resource will serve tens of thousands of researchers and educators across the U.S.

    TACC Maverick HP NVIDIA supercomputer

    TACC Lonestar Cray XC40 supercomputer

    Dell Poweredge U Texas Austin Stampede Supercomputer. Texas Advanced Computer Center 9.6 PF

    TACC HPE Apollo 8000 Hikari supercomputer

    TACC DELL EMC Stampede2 supercomputer

    “Building on the success of the initial Stampede system, the Stampede team has partnered with other institutions as well as industry to bring the latest in forward-looking computing technologies combined with deep computational and data science expertise to take on some of the most challenging science and engineering frontiers,” said Irene Qualters, director of NSF’s Office of Advanced Cyberinfrastructure.

    Stampede2 is the newest strategic supercomputing resource for the nation’s research and education community, enabling scientists and engineers across the U.S., from multiple disciplines, to answer questions at the forefront of science and engineering. Importantly, Stampede2 leverages NSF’s existing investments in computational and data science, as well as user services, allowing academic researchers access to capabilities beyond the reach of a single institution while complementing other national high-performance computing infrastructure.

    Further, Stampede2 builds upon the initial Stampede system, also funded by NSF, which processed more than eight million successful jobs and delivered over three billion core hours of computation since it became operational in 2013.

    Stampede2 will offer more than twice the overall memory, storage capacity, bandwidth and system performance of the initial Stampede system. Yet Stampede2 will consume only half as much power and occupy just half the physical space of its predecessor. Innovations in how the supercomputer is cooled also resulted in efficiencies: Stampede2 is connected to a chilled water system that cools more cost-effectively and with less impact to the power grid than the standard air-conditioned approach.

    Once additional hardware and processors are added in the summer, Stampede2 will be able to process jobs at 18 petaflops, or 18 quadrillion mathematical operations per second, at peak performance. When Stampede2 is fully operational later this fall, the system will have roughly the processing power of 100,000 desktop computers; this increased speed and power will allow scientists and engineers to tackle larger, more complex problems that were not previously possible.
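
    That comparison is consistent with a back-of-the-envelope check (the per-desktop figure is an assumption, not from NSF):

        18 petaflops / 100,000 desktops = 1.8 × 10^11 flops per desktop ≈ 180 gigaflops, a plausible peak rate for a desktop processor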

    Computational scientists and engineers pursuing a wide range of applications — from researchers who conduct large-scale simulations and data analyses on large swaths of the system, to those who interact with the system through web-based community platforms — will access Stampede2 through the NSF-supported eXtreme Science and Engineering Discovery Environment (XSEDE).

    Researchers have already started using the system to conduct large-scale scientific studies. Some preliminary findings from early user projects include:

    Tumor identification from magnetic resonance imaging (MRI) data at The University of Texas at Austin.
    Real-time weather forecasting at the University of Oklahoma that has helped direct storm-chaser trucks.
    Earthquake predictions for the Southern California region at the University of California, San Diego that achieved a fivefold performance improvement over previously reported results.
    Teams from Stephen Hawking’s cosmology research laboratory at Cambridge University, leveraging Stampede2, achieved unprecedented comparisons of previously performed simulations with gravitational wave data observed by the NSF-funded Laser Interferometer Gravitational-wave Observatory (LIGO).

    Several leading universities are collaborating with TACC to enable Stampede2, including Clemson University, Cornell University, Indiana University, The Ohio State University and the University of Colorado at Boulder. They are joined by industry partners Dell EMC, Intel Corporation and Seagate Technology, who are providing cyberinfrastructure expertise and services for the project.

    Stampede2 is expected to serve the scientific community through 2021, supporting tens of thousands of researchers during this period. An additional NSF award for $24 million was recently granted to cover upcoming operations and maintenance costs for the system.

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    The National Science Foundation (NSF) is an independent federal agency created by Congress in 1950 “to promote the progress of science; to advance the national health, prosperity, and welfare; to secure the national defense…we are the funding source for approximately 24 percent of all federally supported basic research conducted by America’s colleges and universities. In many fields such as mathematics, computer science and the social sciences, NSF is the major source of federal backing.

    seal

     
  • richardmitnick 2:33 pm on July 13, 2017 Permalink | Reply
    Tags: , Intel “Skylake” Xeon Scalable processors, Supercomputing   

    From HPC Wire: “Intel Skylake: Xeon Goes from Chip to Platform” 

    HPC Wire

    1
    No image caption or credit

    July 13, 2017
    Doug Black

    With yesterday’s New York unveiling of the new “Skylake” Xeon Scalable processors, Intel made multiple runs at multiple competitive threats and strategic markets. Skylake will carry Intel’s flag in the fight for leadership in the emerging advanced data center encompassing highly demanding network workloads, cloud computing, real time analytics, virtualized infrastructures, high-performance computing and artificial intelligence.

    Most interestingly, Skylake takes a big step toward accommodating what one industry analyst has called “the wild west of technology disaggregation,” life in the post-CPU-centric era.

    “What surprised me most is how much platform goodness Intel brought to the table,” said industry watcher Patrick Moorhead, Moor Insights & Strategy, soon after the launch announcement. “I wasn’t expecting so many enhancements outside of the CPU chip itself.”

    In fact, Moorhead said, Skylake turns Xeon into a platform, one that “consists of CPUs, chipset, internal and external accelerators, SSD flash and software stacks.”

    The successor to the Intel Xeon processor E5 and E7 product lines, Skylake has up to 28 high-performance cores and provides platform features with, according to Intel, significant performance increases, including:

    Artificial Intelligence: Delivers 2.2x higher deep learning training and inference compared to the previous generation, according to Intel, and 113x deep learning performance gains compared to a three-year-old non-optimized server system when combined with software optimizations accelerating delivery of AI-based services.
    Networking: Delivers up to 2.5x increased IPSec forwarding rate for networking applications compared to the previous generation when using Intel QuickAssist and Deep Platform Development Kit.
    Virtualization: Operates up to approximately 4.2x more virtual machines versus a four-year-old system for faster service deployment, server utilization, lower energy costs and space efficiency.
    High Performance Computing: Provides up to a 2x FLOPs/clock improvement with Intel AVX-512 (the 512-bit extensions to the 256-bit Advanced Vector Extensions SIMD instructions for the x86 instruction set architecture) as well as integrated Intel Omni-Path Architecture ports, delivering improved compute capability, I/O flexibility and memory bandwidth, Intel said.
    Storage: Processes up to 5x more IOPS while reducing latency by up to 70 percent versus out-of-the-box NVMe SSDs when combined with Intel Optane SSDs and Storage Performance Development Kit, making data more accessible for advanced analytics.

    2
    No image caption or credit.

    Overall, Intel said, Skylake delivers a performance increase of up to 1.65x versus the previous generation of Intel processors, and up to a 5x improvement on OLTP warehouse workloads versus the current installed base.

    The company also introduced Intel Select Solutions, aimed at simplifying deployment of data center and network infrastructure, with initial solutions delivery on Canonical Ubuntu, Microsoft SQL 16 and VMware vSAN 6.6. Intel said this is an expansion of the Intel Builders ecosystem collaborations and will offer Intel-verified configurations for specific workloads, such as machine learning inference, and is then sold and marketed as a package by OEMs and ODMs under the “Select Solution” sub-brand.

    Intel said Xeon Scalable platform is supported by hundreds of ecosystem of partners, more than 480 Intel builders and 7,000-plus software vendors, including support from Amazon, AT&T, BBVA, Google, Microsoft, Montefiore, Technicolor and Telefonica.

    But it’s Intel’s support for multiple processing architectures that drew the most attention.

    Moorhead said Skylake enables heterogeneous compute in several ways. “First off, Intel provides the host processor, a Xeon, as you can’t boot to an accelerator. Inside of Xeon, they provide accelerators like AVX-512. Inside Xeon SoCs, Intel has added FPGAs. The PCH contains a QAT accelerator. Intel also has PCIe accelerator cards for QAT and FPGAs.”

    In the end, Moorhead said, the Skylake announcement is directed at datacenter managers “who want to run their apps and do inference on the same machines using the new Xeons.” He cited Amazon’s support for this approach, “so it has merit.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    HPCwire is the #1 news and information resource covering the fastest computers in the world and the people who run them. With a legacy dating back to 1987, HPCwire has enjoyed a tradition of world-class editorial and top-notch journalism, making it the portal of choice for science, technology and business professionals interested in high performance and data-intensive computing. For topics ranging from late-breaking news and emerging technologies in HPC to new trends, expert analysis and exclusive features, HPCwire delivers it all and remains the HPC community’s most reliable and trusted resource. Don’t miss a thing – subscribe now to HPCwire’s weekly newsletter recapping the previous week’s HPC news, analysis and information at: http://www.hpcwire.com.

     
  • richardmitnick 2:16 pm on July 13, 2017 Permalink | Reply
    Tags: , Supercomputing   

    From HPC Wire: “Satellite Advances, NSF Computation Power Rapid Mapping of Earth’s Surface” 

    HPC Wire

    July 13, 2017
    Ken Chiacchia
    Tiffany Jolley

    New satellite technologies have completely changed the game in mapping and geographical data gathering, reducing costs and placing a new emphasis on time series and timeliness in general, according to Paul Morin, director of the Polar Geospatial Center at the University of Minnesota.

    In the second plenary session of the PEARC conference in New Orleans on July 12, Morin described how access to the DigitalGlobe satellite constellation, the NSF XSEDE network of supercomputing centers and the Blue Waters supercomputer at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign has enabled his group to map Antarctica—an area of 5.4 million square miles, compared with the 3.7 million square miles of the “lower 48” United States—at 1-meter resolution in two years.

    U Illinois Blue Waters Cray supercomputer

    Nine months later, then-President Barack Obama announced a joint White House initiative involving the NSF and the National Geospatial-Intelligence Agency (NGA) in which Morin’s group mapped a similar area in the Arctic, including the entire state of Alaska, in two years.

    “If I wrote this story in a single proposal I wouldn’t have been able to write [any proposals] afterward,” Morin said. “It’s that absurd.” But the leaps in technology have made what used to be multi-decadal mapping projects—when they could be done at all—into annual events, with even more frequent updates soon to come.

    The inaugural Practice and Experience in Advanced Research Computing (PEARC) conference—with the theme Sustainability, Success and Impact—stresses key objectives for those who manage, develop and use advanced research computing throughout the U.S. and the world. Organizations supporting this new HPC conference include the Advancing Research Computing on Campuses: Best Practices Workshop (ARCC), the Extreme Science and Engineering Development Environment (XSEDE), the Science Gateways Community Institute, the Campus Research Computing (CaRC) Consortium, the Advanced CyberInfrastructure Research and Education Facilitators (ACI-REF) consortium, the National Center for Supercomputing Applications’ Blue Waters project, ESnet, Open Science Grid, Compute Canada, the EGI Foundation, the Coalition for Academic Scientific Computation (CASC) and Internet2.

    Follow the Poop

    One project made possible with the DigitalGlobe constellation—a set of Hubble-like multispectral orbiting telescopes “pointed the other way”—was a University of Minnesota census of emperor penguin populations in Antarctica.

    “What’s the first thing you do if you get access to a bunch of sub-meter-resolution [orbital telescopes covering] Antarctica?” Morin asked. “You point them at penguins.”

    Thanks in part to a lack of predators, the birds over-winter on the ice, huddling in colonies for warmth. Historically, these colonies were discovered by accident: Morin’s project enabled the first continent-wide survey to find and estimate the population size of all the colonies.

    The researchers realized that they had a relatively easy way to spot the colonies in the DigitalGlobe imagery: Because the penguins eat beta-carotene-rich krill, their excrement stains the ice red.

    “You can identify their location by looking for poo,” Morin said. The project enabled the first complete population count of emperor penguins: 595,000 birds, ±14 percent.
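
    The detection idea reduces to a simple spectral test: guano-stained ice is noticeably redder than clean ice. Below is a minimal sketch of such a test in Python; it illustrates the principle only, not the group’s actual classification pipeline, and the band layout, 0–1 reflectance scale and threshold are assumptions.

        import numpy as np

        def flag_guano_pixels(red, green, blue, ice_mask, red_excess=0.05):
            """Flag candidate guano-stained pixels in a multispectral ice scene.

            red, green, blue: 2-D arrays of surface reflectance scaled to 0-1.
            ice_mask: boolean array marking the sea-ice area being searched.
            red_excess: how much the red band must exceed the brighter of
                        green/blue for a pixel to count as stained.
            """
            reddish = (red - np.maximum(green, blue)) > red_excess
            return reddish & ice_mask   # candidate colony pixels for closer review

    Flagged clusters still need a human (or a trained classifier) to confirm, but a per-pixel test of this kind is what makes it feasible to scan a continent’s worth of imagery for colonies.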

    “We started to realize we were onto something,” he added. His group began to wonder if they could leverage the sub-meter-resolution, multispectral, stereo view of the constellation’s WorldView I, II and III satellites to derive the topography of the Antarctic, and later the Arctic. One challenge, he knew, would be finding the computational power to extract topographic data from the stereo images in a reasonable amount of time. He found his answer at the NSF and the NGA.

    “We proposed to a science agency and a combat support agency that we were going to map the topography of 30 degrees of the globe in 24 months.”
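
    Turning stereo imagery into topography comes down to measuring, for each ground point, how far it shifts between the two views (the parallax) and converting that shift into height. A rough rule of thumb, sketched below in Python, is that the height precision of a stereo-derived elevation model is about the ground pixel size divided by the base-to-height ratio of the two views. The parameter values are illustrative assumptions, not actual WorldView acquisition geometry, and a production pipeline uses rigorous sensor models and dense image matching.

        def stereo_height_precision(ground_sample_m, base_to_height_ratio,
                                    matching_precision_px=1.0):
            """Approximate height precision of a stereo-derived elevation model.

            A matching error of `matching_precision_px` pixels maps to a height
            error of roughly (pixel size on the ground) * (matching precision)
            divided by the base-to-height ratio of the stereo pair.
            """
            return ground_sample_m * matching_precision_px / base_to_height_ratio

        # Illustrative numbers: ~0.5 m pixels and a moderate convergence angle.
        print(stereo_height_precision(ground_sample_m=0.5, base_to_height_ratio=0.6))
        # -> ~0.83 m, i.e. sub-meter imagery supports meter-class topography

    The expensive part is not this formula but finding the matching pixel: correlating image patches for trillions of ground points is what pushed the work onto large HPC systems.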

    Blue Waters on the Ice

    Morin and his collaborators found themselves in the middle of a seismic shift in topographic technology.

    “Eight years ago, people were doing [this] from the ground,” with a combination of land-based surveys and accurate but expensive LIDAR mapping from aircraft, he said. These methods made sense in places where population and industrial density made the cost worthwhile. But they had left the Antarctic and Arctic largely unmapped.

    Deriving topographic information from the photographs posed a computational problem well beyond the capabilities of a campus cluster. The group did initial computations at the Ohio Supercomputer Center, but needed to expand for the final data analysis.

    Ohio Supercomputer Center

    Ohio Oakley HP supercomputer

    Ohio Ruby HP supercomputer

    Ohio Dell Owens supercomputer

    From 2014 to 2015, Morin used XSEDE resources, most notably Gordon at the San Diego Supercomputer Center and XSEDE’s Extended Collaborative Support Service, to carry out his initial computations.

    SDSC home built Gordon-Simons supercomputer

    XSEDE then helped his group acquire an allocation on Blue Waters, an NSF-funded Cray Inc. system at NCSA at the University of Illinois with 49,000 CPUs and a peak performance of 13.3 petaFLOPS.

    Collecting the equivalent area of California daily, a now-expanded group of subject experts made use of the polar-orbiting satellites and Blue Waters to derive elevation data. They completed a higher-resolution map of Alaska—the earlier version of which had taken the U.S. Geological Survey 50 years—in a year. While the initial images are licensed for U.S. government use only, the group was able to release the resulting topographic data for public use.

    Mapping Change

    Thanks to the one-meter resolution of their initial analysis, the group quickly found they could identify many man-made structures on the surface. They could also spot vegetation changes such as clearcutting. They could even quantify vegetation regrowth after replanting.

    “We’re watching individual trees growing here.”

    Another set of images he showed in his PEARC17 presentation were before-and-after topographic maps of Nuugaatsiaq, Greenland, which was devastated by a tsunami last month. The Greenland government is using the images, which show both human structures and the landslide that caused the 10-meter tsunami, to plan recovery efforts.

    The activity of the regions’ ice sheets was a striking example of the technology’s capabilities.

    “Ice is a mineral that flows,” Morin said, and so the new topographic data offer far more frequent information than was previously available about ice-sheet changes driven by climate change. “We not only have an image of the ice but we know exactly how high it is.”

    Morin also showed an image of the Larsen C Ice Shelf revealing a crack that had appeared in the ice. The real news, though, was that the crack—which created an iceberg the size of the big island of Hawaii—was less than 24 hours old. It had appeared sometime after midnight on July 12.

    “We [now] have better topography for Siberia than we have for Montana,” he noted.

    New Directions

    While the large, high-resolution satellites have already transformed the field, Morin said innovations now on the way could create another shift.

    “This is not your father’s topography,” he noted. “Everything has changed; everything is time sensitive; everything is on demand.” In an interview later that morning, he added, “XSEDE, Blue Waters and NSF have changed how earth science happens now.”

    One advance won’t require new technology, just a little more time. While the current topographic dataset is at 1-meter resolution, the data can be refined further with more computation. The satellite images actually have 30-centimeter resolution, which would allow the project to shift from resolving objects the size of automobiles to those the size of a coffee table.

    At that point, he said, “instead of [just the] presence or absence of trees we’ll be able to tell what species of tree. It doesn’t take recollection of imagery; it just takes reprocessing.”
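
    The cost of that reprocessing is easy to estimate: going from 1-meter to 30-centimeter postings multiplies the number of elevation cells by roughly (1 / 0.3)^2, or about 11. A quick sketch using the Antarctic area quoted earlier in the article:

        area_antarctica_mi2 = 5.4e6        # square miles, as quoted above
        m2_per_mi2 = 2.59e6                # square meters per square mile

        cells_1m = area_antarctica_mi2 * m2_per_mi2 / 1.0**2
        cells_30cm = area_antarctica_mi2 * m2_per_mi2 / 0.3**2
        print(f"{cells_1m:.1e} cells at 1 m, {cells_30cm:.1e} at 0.3 m, "
              f"{cells_30cm / cells_1m:.0f}x more to process")
        # -> ~1.4e13 cells at 1 m, ~1.6e14 at 0.3 m, about 11x more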

    The new, massive constellations of CubeSats, such as the Planet company’s toaster-sized Dove satellites now being launched, promise an even more disruptive advance. A swarm of these satellites will provide much more frequent coverage of the entire Earth’s surface than is possible with the large telescopes.

    “The quality isn’t as good, but right now we’re talking about coverage,” Morin said. His group’s work has taken advantage of a system that allows mapping of a major portion of the Earth in a year. “What happens when we have monthly coverage?”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    HPCwire is the #1 news and information resource covering the fastest computers in the world and the people who run them. With a legacy dating back to 1987, HPCwire has enjoyed a tradition of world-class editorial and top-notch journalism, making it the portal of choice for science, technology and business professionals interested in high performance and data-intensive computing. For topics ranging from late-breaking news and emerging technologies in HPC to new trends, expert analysis and exclusive features, HPCwire delivers it all and remains the HPC community’s most reliable and trusted resource. Don’t miss a thing – subscribe now to HPCwire’s weekly newsletter recapping the previous week’s HPC news, analysis and information at: http://www.hpcwire.com.

     
  • richardmitnick 12:20 pm on July 13, 2017 Permalink | Reply
    Tags: Chinese Sunway TaihuLight supercomputer currently #1 on the TOP500 list of supercomputers, How supercomputers are uniting the US and China, , Supercomputing   

    From Science Node: “How supercomputers are uniting the US and China” 

    Science Node bloc
    Science Node

    12 July 2017
    Tristan Fitzpatrick

    Thirty-eight years ago, US President Jimmy Carter and Chinese Vice Premier Deng Xiaoping signed the US-China Agreement on Cooperation in Science and Technology, outlining broad opportunities to promote science and technology research.

    Since then, the two nations have worked together on a variety of projects, including energy and climate research. Now, however, both countries are working toward another goal: the pursuit of exascale computing.

    At the PEARC17 conference in New Orleans, Louisiana, representatives from the high-performance computing communities in the US and China participated in the first international workshop on American and Chinese collaborations in experience and best practice in supercomputing.

    Both countries face the same challenges in implementing and managing HPC resources across a large nation-state. The hardware and software technologies are rapidly evolving, the user base is ever-expanding, and the technical requirements for maintaining these large, fast machines are escalating.

    Reaching exascale would be a major coup for either country’s scientific prowess, as it is believed to approach the processing scale of the human brain at the neural level. Initiatives like the Human Brain Project consider it a hallmark for advancing computational power.

    “It’s less like an arms race between the two countries to see who gets there first and more like the Olympics,” says Dan Stanzione, executive director at the Texas Advanced Computing Center (TACC).

    TACC Maverick HP NVIDIA supercomputer

    TACC Lonestar Cray XC40 supercomputer

    Dell PowerEdge U Texas Austin Stampede supercomputer. Texas Advanced Computing Center, 9.6 PF

    TACC HPE Apollo 8000 Hikari supercomputer

    “We’d like to win and get the gold medal but hearing what China is doing with exascale research is going to help us get closer to this goal.”

    ___________________________________________________________________

    Exascale refers to computing systems that can perform a billion billion calculations per second — at least 50 times faster than the fastest supercomputers in the US. (A quick sanity check of this figure follows below.)

    ___________________________________________________________________
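
    That comparison is easy to sanity-check with a short Python sketch; the “fastest US system” number below is an approximate mid-2017 order of magnitude, not a quoted benchmark result.

        exaflops = 1.0e18          # a billion billion calculations per second
        fastest_us_2017 = 2.0e16   # roughly 20 petaflops, approximate
        print(f"{exaflops / fastest_us_2017:.0f}x")   # ~50x, matching the figure above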

    Despite the bona fides that would be awarded to whoever achieves the milestone first, TACC data mining and statistics group manager Weijia Xu stresses that collaboration is a greater motivator for both the US and China than a race to see who gets there first.

    “I don’t think it’s really a competition,” Xu says. “It’s more of a common goal we all want to reach eventually. How you reach the goal is not exactly clear to everyone yet. Furthermore, there are many challenges ahead, such as how systems can be optimized for various applications.”

    The computational resources at China’s disposal could make it a great ally in the pursuit of exascale power. As of June 2017, China has the two fastest supercomputers on the TOP500 list, while the United States has five entries in the top ten.

    1
    Chinese Sunway TaihuLight supercomputer, currently #1 on the TOP500 list of supercomputers.

    “While China has the top supercomputer in the world, China and the US probably have about fifty percent each of those top 500 machines besides the European countries,” says Si Liu, HPC software tools researcher at TACC. “We really believe if we have some collaboration between the US and China, we could do some great projects together and benefit the whole HPC community.”

    Besides pursuing the elusive exascale goal, Stanzione says the workshop opened up other ideas for how to improve the overall performance of HPC efforts in both nations. Participants spoke on topics including in situ simulation, artificial intelligence and deep learning, among others.

    “We also ask questions like how do we run HPC systems, what do we run on them, and how it’s going to change in the next few years,” Stanzione says. “It’s a great time to get together and talk about details of processors, speeds, and feeds.”

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 1:21 pm on July 8, 2017 Permalink | Reply
    Tags: , , , , , , , , Supercomputing, UCSD Comet supercomputer   

    From Science Node: “Cracking the CRISPR clock” 

    Science Node bloc
    Science Node

    05 Jul, 2017
    Jan Zverina

    SDSC Dell Comet supercomputer

    Capturing the motion of gyrating proteins at time intervals up to one thousand times greater than previous efforts, a team led by University of California, San Diego (UCSD) researchers has identified the myriad structural changes that activate and drive CRISPR-Cas9, the innovative gene-splicing technology that’s transforming the field of genetic engineering.

    By shedding light on the biophysical details governing the mechanics of CRISPR-Cas9 (clustered regularly interspaced short palindromic repeats) activity, the study provides a fundamental framework for designing a more efficient and accurate genome-splicing technology that doesn’t yield the ‘off-target’ DNA breaks currently frustrating the potential of the CRISPR-Cas9 system, particularly for clinical uses.


    Shake and bake. Gaussian accelerated molecular dynamics simulations and state-of-the-art supercomputing resources reveal the conformational change of the HNH domain (green) from its inactive to active state. Courtesy Giulia Palermo, McCammon Lab, UC San Diego.

    “Although the CRISPR-Cas9 system is rapidly revolutionizing life sciences toward a facile genome editing technology, structural and mechanistic details underlying its function have remained unknown,” says Giulia Palermo, a postdoctoral scholar with the UC San Diego Department of Pharmacology and lead author of the study [PNAS].
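
    The Gaussian accelerated molecular dynamics (GaMD) approach named in the figure caption reaches long effective timescales by adding a harmonic “boost” to the potential energy whenever the system’s energy drops below a threshold, smoothing barriers between conformations. Below is a minimal sketch of that boost term as described in the GaMD method literature; it reflects the general method, not code or parameters from the UCSD study, and the values in the example are illustrative.

        def gamd_boost(V, E, k):
            """Harmonic boost potential of Gaussian accelerated MD.

            V: potential energy of the current configuration (kcal/mol)
            E: threshold energy; no boost is applied once V reaches E
            k: harmonic force constant (in 1/(kcal/mol))
            The simulation runs on the modified potential V + gamd_boost(V, E, k).
            """
            return 0.5 * k * (E - V) ** 2 if V < E else 0.0

        # Illustrative values only:
        print(gamd_boost(V=-1000.0, E=-995.0, k=0.1))   # 1.25 kcal/mol of boost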

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     