Tagged: Science Node

  • richardmitnick 6:37 am on April 12, 2018 Permalink | Reply
    Tags: Burçin Mutlu-Pakdil, Burçin’s Galaxy - PGC 1000714, Carnegie’s Las Campanas Observatory Chile over 2500 m (8200 ft) high, Science Node

    From Science Node: Women in STEM – “Burçin’s galaxy” Burçin Mutlu-Pakdil 

    Science Node bloc
    Science Node

    30 Mar, 2018
    Ellen Glover

    Burçin Mutlu-Pakdil

    As a little girl growing up in Turkey, Burçin Mutlu-Pakdil loved the stars.


    Burçin’s galaxy, AKA PGC 1000714, is a unique, double-ringed, Hoag-type galaxy exhibiting features never observed before. Courtesy North Carolina Museum of Natural Sciences.

    “How is it possible not to fall in love with stars?” wonders Mutlu-Pakdil. “I find it very difficult not to be curious about the Universe, about the Milky Way and how everything got together. I really want to learn more. I love my job because of that.”

    Young or old? The object’s blue outer ring suggests it may have formed more recently than the center.

    Her job is at The University of Arizona’s Steward Observatory, one of the world’s premier astronomy facilities, where she works as a postdoctoral astrophysics research associate.

    U Arizona Steward Observatory at Kitt Peak, AZ, USA, altitude 2,096 m (6,877 ft)

    Just a few years ago, while earning her Ph.D. at the University of Minnesota, Mutlu-Pakdil and her colleagues discovered PGC 1000714, a galaxy with qualities so rare they’ve never been observed anywhere else. For now, it’s known as Burçin’s Galaxy.

    The object was originally detected by Patrick Treuthardt, who was observing a different galaxy when he spotted it in the background. It piqued the astronomers’ interest because of an initial resemblance to Hoag’s Object, a rare galaxy known for its yellow-orange center surrounded by a detached outer ring.

    “Our object looks very similar to Hoag’s Object. It has a very symmetric central body with a very symmetric outer ring,” explains Mutlu-Pakdil. “But my work showed that there is actually a second ring on this object. This makes it much more complex.”

    Through extensive imaging and analysis, Mutlu-Pakdil found that, unlike Hoag’s Object, this new galaxy has two rings with no visible material connecting them, a phenomenon not seen before. Hers was the first-ever observation and description of a double-ringed elliptical galaxy.

    Eye on the universe. Sophisticated instruments like the 8.2 meter optical-infrared Subaru Telescope on the summit of Mauna Kea in Hawaii allow astronomers to peer ever further into the stars – and into the origins of the universe.


    NAOJ/Subaru Telescope at Mauna Kea, Hawaii, USA, 4,207 m (13,802 ft) above sea level

    Since spotting the intriguing galaxy, Mutlu-Pakdil and her team have evaluated it in several ways. They initially observed it via the Irénée du Pont two-meter telescope at the Las Campanas Observatory in Chile. And they recently captured infrared images with the Magellan 6.5-meter telescope, also at Las Campanas.


    Carnegie Las Campanas du Pont telescope, Atacama Desert, over 2,500 m (8,200 ft) high, approximately 100 kilometres (62 mi) northeast of the city of La Serena, Chile

    Carnegie 6.5 meter Magellan Baade and Clay Telescopes located at Carnegie’s Las Campanas Observatory, Chile, over 2,500 m (8,200 ft) high

    The optical images reveal that the components of Burçin’s Galaxy have different histories. Some parts of the galaxy are significantly older than others. The blue outer ring suggests a newer formation, while the red inner ring indicates the presence of older stars.

    Mutlu-Pakdil and her colleagues suspect that this galaxy was formed as some material accumulated into one massive object through gravitational attraction, AKA an accretion event.

    However, beyond that, PGC 1000714’s unique qualities largely remain a mystery. There are about three trillion galaxies in our observable universe, and more are being found all the time.

    “In such a vast universe, finding these rare objects is really important,” says Mutlu-Pakdil. “We are trying to create a complete picture of how the Universe works. These peculiar systems challenge our understanding. So far, we don’t have any theory that can explain the existence of this particular object, so we still have a lot to learn.”

    Challenging norms and changing lives

    In a way, Mutlu-Pakdil has been challenging the norms of science all her life.

    Though her parents weren’t educated beyond elementary school, they supported her desire to pursue her dreams of the stars.

    “When I was in college, I was the only female in my class, and I remember I felt so much like an outsider. I felt like I wasn’t fitting in,” she recalls of her time studying physics at Bilkent University in Ankara, Turkey.

    Bilkent University

    Astronomical ambassador. Mutlu-Pakdil believes in sharing her fascination for space and works to encourage students from all backgrounds to explore astronomy and other STEM fields.

    Throughout her education and career, Mutlu-Pakdil has experienced being a minority in an otherwise male-dominated field. It hasn’t slowed her down, but it has made her more passionate about promoting diversity in science and being a mentor to young people.

    “I realized, it is not about me, it is society that needs to change,” she says. “Now I really want to inspire people to do similar things. So kids from all backgrounds will be able to understand they can do science, too.”

    That’s why she serves as an ambassador for the American Astronomical Society and volunteers to mentor children in low-income neighborhoods to encourage them to pursue college and, hopefully, a career in STEM.

    She was also recently selected to be a 2018 TED Fellow and will present a TED talk about her discoveries and career on April 10.

    Through her work, Mutlu-Pakdil hopes to show people how important it is to learn about our universe. It behooves us all to take an interest in the night sky and the groundbreaking discoveries being made by astronomers like her around the world.

    “We are a part of this Universe, and we need to know what is going on in it. We have strong theories about how common galaxies form and evolve, but, for rare ones, we don’t have much information,” says Mutlu-Pakdil. “Those unique objects present the extreme cases, so they really give us a big picture for the Universe’s evolution — they stretch our understanding of everything.”

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    STEM Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 6:50 am on March 29, 2018 Permalink | Reply
    Tags: Science Node

    From Science Node: “CERN pushes back the frontiers of physics” 

    Science Node bloc
    Science Node

    27 Mar, 2018
    Maria Girone
    CERN openlab Chief Technology Officer

    “Researchers at the European Organization for Nuclear Research (CERN) are probing the fundamental structure of the universe. They use the world’s largest and most complex scientific machines to study the basic constituents of matter — the fundamental particles.

    These particles are made to collide at close to the speed of light. This process gives physicists clues about how the particles interact, and provides insights into the laws of nature.

    CERN is home to the Large Hadron Collider (LHC), the world’s most powerful particle accelerator.

    LHC

    CERN/LHC Map

    CERN LHC Tunnel

    CERN LHC particles

    It consists of a 27 km ring of superconducting magnets, combined with accelerating structures to boost the energy of the particles prior to the collisions. Special detectors — similar to large, 3D digital cameras built in cathedral-sized caverns — observe and record the results of these collisions.

    One billion collisions per second

    Up to about 1 billion particle collisions can take place every second inside the LHC experiments’ detectors. It is not possible to examine all of these events. Hardware and software filtering systems are used to select potentially interesting events for further analysis.

    Even after filtering, the CERN data center processes hundreds of petabytes (PB) of data every year. Around 150 PB are stored on disk at the site in Switzerland, with over 200 PB on tape — the equivalent of about 2,000 years of HD video.
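
    As a rough sanity check of that tape figure: at an assumed HD bitrate of about 25 Mbit/s (the article does not state one), 200 PB does work out to roughly 2,000 years of footage. A quick sketch in Python:

        # Back-of-the-envelope check of the "2,000 years of HD video" claim.
        # The ~25 Mbit/s HD bitrate is an assumption, not from the article.
        TAPE_BYTES = 200e15                      # 200 PB stored on tape
        HD_BITRATE_BPS = 25e6                    # assumed HD bitrate, bits/s
        SECONDS_PER_YEAR = 365.25 * 24 * 3600

        bytes_per_year = HD_BITRATE_BPS / 8 * SECONDS_PER_YEAR
        print(f"~{TAPE_BYTES / bytes_per_year:,.0f} years of HD video")  # ~2,028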

    Physicists must sift through the 30-50 PB of data produced annually by the LHC experiments to determine if the collisions have revealed any interesting physics. The Worldwide LHC Computing Grid (WLCG), a distributed computing infrastructure arranged in tiers, gives a community of thousands of physicists near-real-time access to LHC data.

    Power up. The planned upgrades to the Large Hadron Collider. Image courtesy CERN.

    With 170 computing centers in 42 countries, the WLCG is the most sophisticated data-taking and analysis system ever built for science. It runs more than two million jobs per day.

    The LHC has been designed to follow a carefully planned program of upgrades. The LHC typically produces particle collisions for a period of around three years (known as a ‘run’), followed by a period of about two years for upgrade and maintenance work (known as a ‘long shutdown’).

    The High-Luminosity Large Hadron Collider (HL-LHC), scheduled to come online around 2026, will crank up the performance of the LHC and increase the potential for discoveries. The higher the luminosity, the more collisions, and the more data the experiments can gather.

    An increased rate of collision events means that digital reconstruction becomes significantly more complex. At the same time, the LHC experiments plan to employ new, more flexible filtering systems that will collect a greater number of events.

    This will drive a huge increase in computing needs. Using current software, hardware, and analysis techniques, the estimated computing capacity required would be around 50-100 times higher than today. Data storage needs are expected to be on the order of exabytes by this time.

    Technology advances over the next seven to ten years will likely yield an improvement of approximately a factor of ten in both the amount of processing and storage available at the same cost, but will still leave a significant resource gap. Innovation is therefore vital; we are exploring new technologies and methodologies together with the world’s leading information and communications technology (ICT) companies.

    Tackling tomorrow’s challenges today

    CERN openlab works to develop and test the new ICT techniques that help to make groundbreaking physics discoveries possible. Established in 2001, the unique public-private partnership provides a framework through which CERN collaborates with leading companies to accelerate the development of cutting-edge technologies.

    My colleagues and I have been busy working to identify the key challenges that will face the LHC research community in the coming years. Last year, we carried out an in-depth consultation process, involving workshops and discussions with representatives of the LHC experiments, the CERN IT department, our collaborators from industry, and other ‘big science’ projects.

    Based on our findings, we published the CERN openlab white paper on future ICT challenges in scientific research. We identified 16 ICT challenge areas, grouped into major R&D topics that are ripe for tackling together with industry collaborators.

    In data-center technologies, we need to ensure that data-center architectures are flexible and cost effective and that cloud computing resources can be used in a scalable, hybrid manner. New technologies for solving storage capacity issues must be thoroughly investigated, and long-term data-storage systems should be reliable and economically viable.

    We also need modernized code to ensure that maximum performance can be achieved on the new hardware platforms. Successfully translating the huge potential of machine learning into concrete solutions will play a role in monitoring the accelerator chain, optimizing the use of IT resources, and even hunting for new physics.

    Several IT challenges are common across research disciplines. With ever more research fields adopting methodologies driven by big data, it’s vital that we collaborate with research communities such as astrophysics, biomedicine, and Earth sciences.

    As well as sharing tools and learning from one another’s experience, working together to address common challenges can increase our ability to ensure that leading ICT companies are producing solutions that meet our common needs.

    These challenges must be tackled over the coming years in order to ensure that physicists across the globe can exploit CERN’s world-leading experimental infrastructure to its maximum potential. We believe that working together with industry leaders through CERN openlab can play a key role in overcoming these challenges, for the benefit of both the high-energy physics community and wider society.”

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    STEM Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 11:33 am on February 14, 2018 Permalink | Reply
    Tags: 1. Sunway TaihuLight (China), 2. Tianhe-2 (China), 3. Piz Daint (Switzerland), 4. Gyoukou (Japan), 5. Titan (United States), Science Node, The 5 fastest supercomputers in the world   

    From Science Node: “The 5 fastest supercomputers in the world” 

    Science Node bloc
    Science Node

    Peak performance within supercomputing is a constantly moving target. In fact, a supercomputer is defined as being any machine “that performs at or near the currently highest operational rate.” The field is a continual battle to be the best. Those who achieve the top rank may only hang on to it for a fleeting moment.

    Competition is what makes supercomputing so exciting, continually driving engineers to reach heights that were unimaginable only a few years ago. To celebrate this amazing technology, let’s take a look at the fastest computers as defined by computer ranking project TOP500 — and at what these machines are used for.

    5. Titan (United States)

    ORNL Cray XK7 Titan Supercomputer

    Built by Cray, Oak Ridge National Laboratory’s Titan is the follow-up to the company’s 2005 Jaguar supercomputer. Unlike Jaguar, Titan is unique in its reliance on both CPUs and GPUs. According to Cray, GPUs can handle more calculations at a time than CPUs, which allows the GPUs to do “the heavy lifting.”

    Cray hopes to get Titan’s performance up to 20 petaFLOPS, but TOP500 clocked the machine at 17.59 petaFLOPS in November 2017. For reference, 17.59 petaFLOPS is equal to 17,590 trillion calculations per second. The machine also has 299,008 CPU cores and 261,632 GPU cores.

    What’s more, this machine’s power is being put to good use. The S3D project focuses on modeling the physics behind combustion, which might give researchers the ability to create biofuel surrogates for gasoline. Another project called Denovo is working to find ways to increase efficiency within nuclear reactors. And a team at Brown University is using the supercomputer to model sickle cell disease, hoping to devise better treatments for a disease that affects around 100,000 Americans.

    4. Gyoukou (Japan)

    Titan nearly finished last year as the fourth-fastest computer in the world, but Japan’s Gyoukou stole the spot in November. Created by ExaScaler and PEZY Computing, this machine is currently housed at the Japan Agency for Marine-Earth Science and Technology. The machine reportedly has 19,860,000 cores and runs at speeds of up to 19.14 petaFLOPS.

    Japan Agency for Marine-Earth Science and Technology ExaScaler Gyoukou supercomputer

    Gyoukou is an extremely new system, presented to the public for the first time at SC17 in November. This, combined with the fact that PEZY’s president was arrested for fraud on December 4, 2017, means that the machine hasn’t had much time to prove its usefulness with real-world projects. However, Gyoukou is incredibly energy efficient, with a power efficiency of 14.17 gigaFLOPS per watt.
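
    Taken together, the quoted peak speed and efficiency imply the machine’s total power draw; the arithmetic below is mine, not the article’s:

        # Implied power draw from Gyoukou's quoted peak speed and efficiency.
        peak_flops = 19.14e15            # peak performance, FLOP/s
        flops_per_watt = 14.17e9         # quoted efficiency, FLOP/s per watt
        watts = peak_flops / flops_per_watt
        print(f"~{watts / 1e6:.2f} MW")  # ~1.35 MW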

    3. Piz Daint (Switzerland)

    Cray XC30 Piz Daint supercomputer of the Swiss National Supercomputing Center (CSCS)

    Named after a mountain in the Swiss Alps, Piz Daint is the Swiss National Supercomputer Centre’s contribution to the field, running at 19.59 petaFLOPS and utilizing 361,760 cores.

    The machine has helped scientists at the University of Basel make discoveries about “memory molecules” in the brain. Other Swiss scientists have taken advantage of its ultra-high resolutions to set up a near-global climate simulation.

    2. Tianhe-2 (China)

    Tianhe-2 supercomputer China

    If supercomputing were a foot race, China would be a dot on the horizon compared to the rest of the competitors. Years of hard work and research enabled the country to grab the top two spots, with Tianhe-2 coming in second. The name translates as “MilkyWay-2,” and it’s much more powerful than Piz Daint, boasting a whopping 3,120,000 cores and running at 33.86 petaFLOPS.

    Developed by the National University of Defense Technology (NUDT) in China, the machine is intended mainly for government security applications, according to TOP500. This means that much of the work done by Tianhe-2 is kept secret, but if its processing power is anything to judge by, it must be working on some pretty important projects.

    1. Sunway TaihuLight (China)

    When it comes to supercomputing, no other machine can touch the Sunway TaihuLight. Its processing power exceeds 93.01 petaFLOPS and it relies on 10,649,000 cores, making it the strongest supercomputer in the world by a wide margin. That’s more than five times the processing power of Titan and nearly 19 times more cores.
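
    Both comparisons check out against the figures quoted earlier in this article; a quick verification in Python:

        # TaihuLight vs. Titan, using only numbers quoted in this article.
        taihulight_flops = 93.01e15
        taihulight_cores = 10_649_000
        titan_flops = 17.59e15
        titan_cores = 299_008 + 261_632          # Titan's CPU + GPU cores
        print(f"{taihulight_flops / titan_flops:.1f}x the speed")  # ~5.3x
        print(f"{taihulight_cores / titan_cores:.1f}x the cores")  # ~19.0x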

    Located at the National Supercomputing Center in Wuxi, China, TaihuLight’s creators are using the supercomputer for tasks ranging from climate science to advanced manufacturing. It has also found success in marine forecasting, helping ships avoid rough seas while also helping with offshore oil drilling.

    The race to possess the most powerful supercomputer never really ends. This friendly competition between countries has propelled a boom in processing power, and it doesn’t look like it’ll be slowing down anytime soon. With scientists using supercomputers for important projects such as curing debilitating diseases, we can only hope it will continue for years to come.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    STEM Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 8:05 am on October 19, 2017 Permalink | Reply
    Tags: For the first time researchers could calculate the quantitative contributions from constituent quarks gluons and sea quarks to nucleon spin, Nucleons — protons and neutrons — are the principal constituents of the atomic nuclei, Piz Daint super computer, Quarks contribute only 30 percent of the proton spin, Science Node, Theoretical models originally assumed that the spin of the nucleon came only from its constituent quarks, To calculate the spin of the different particles in their simulations the researchers consider the true physical mass of the quarks

    From Science Node: “The mysterious case of Piz Daint and the proton spin puzzle” 

    Science Node bloc
    Science Node

    10 Oct, 2017 [Better late than…]
    Simone Ulmer

    Nucleons — protons and neutrons — are the principal constituents of atomic nuclei. Those particles in turn are made up of yet smaller elementary particles: their constituent quarks and gluons.

    Each nucleon has its own intrinsic angular momentum, or spin. Knowing the spin of elementary particles is important for understanding physical and chemical processes. University of Cyprus researchers may have solved the proton spin puzzle – with a little help from the Piz Daint supercomputer.

    Cray Piz Daint supercomputer of the Swiss National Supercomputing Center (CSCS)

    Proton spin crisis

    Spin is responsible for a material’s fundamental properties, such as phase changes in non-conducting materials that suddenly turn them into superconductors at very low temperatures.

    Inside job. Artist’s impression of what the proton is made of. The quarks and gluons contribute to give exactly half the spin of the proton. The question of how it is done and how much each contributes has been a puzzle since 1987. Courtesy Brookhaven National Laboratory.

    Theoretical models originally assumed that the spin of the nucleon came only from its constituent quarks. But then in 1987, high-energy physics experiments conducted by the European Muon Collaboration precipitated what came to be known as the ‘proton spin crisis’: experiments performed at the European Organization for Nuclear Research (CERN), the Deutsches Elektronen-Synchrotron (DESY), and the Stanford Linear Accelerator Center (SLAC) showed that quarks contribute only 30 percent of the proton spin.
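
    Schematically, the puzzle is a budget equation. In the standard decomposition (a textbook formula, included here for context rather than taken from the article), the proton’s total spin of 1/2 splits as

        \frac{1}{2} = \frac{1}{2}\Delta\Sigma + \Delta G + L_q + L_g

    where \Delta\Sigma is the intrinsic spin carried by the quarks (the term the experiments pinned at only about 30 percent), \Delta G is the gluons’ spin, and L_q and L_g are the orbital angular momenta of the quarks and gluons. The ‘crisis’ was that the remaining terms were neither measured nor calculated.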

    LHC

    CERN/LHC Map

    CERN LHC Tunnel

    CERN LHC particles

    DESY

    DESY Belle II detector

    DESY European XFEL

    DESY Helmholtz Centres & Networks

    DESY Nanolab II

    SLAC

    SLAC Campus

    SLAC/LCLS

    SLAC/LCLS II

    Since then, it has been unclear what other effects are contributing to the spin, and to what extent. Further high-energy physics studies suggested that quark-antiquark pairs, with their short-lived intermediate states, might be in play here – in other words, purely relativistic quantum effects.

    Thirty years later, these mysterious effects have finally been accounted for in the calculations performed on CSCS supercomputer Piz Daint by a research group led by Constantia Alexandrou of the Computation-based Science and Technology Research Center of the Cyprus Institute and the Physics Department of the University of Cyprus in Nicosia. That group also included researchers from DESY-Zeuthen, Germany, and from the University of Utah and Temple University in the US.

    For the first time, researchers could calculate the quantitative contributions from constituent quarks, gluons, and sea quarks to nucleon spin. (Sea quarks are a short-lived intermediate state of quark-antiquark pairs inside the nucleon.) With their calculations, the group made a crucial step towards solving the puzzle that brought on the proton spin crisis.

    To calculate the spin of the different particles in their simulations, the researchers consider the true physical mass of the quarks.

    “This is a numerically challenging task, but of essential importance for making sure that the values of the used parameters in the simulations correspond to reality,” says Karl Jansen, lead scientist at DESY-Zeuthen and project co-author.

    The strong [interaction] acting here, which is transmitted by the gluons, is one of the four fundamental forces of physics. The strong [interaction] is indeed strong enough to prevent the removal of a quark from a proton. This property, known as confinement, results in huge binding energy that ultimately holds together the nucleon constituents.

    The researchers used the mass of the pion, a so-called meson consisting of one up quark and one down antiquark – the ‘light quarks’ – to fix the mass of the up and down quarks entering the simulations to their physical values. If the mass of the pion calculated from the simulation corresponds with the experimentally determined value, then the researchers consider that the simulation is done with the actual physical values for the quark mass.

    And that is exactly what Alexandrou and her team have achieved in their research, recently published in Physical Review Letters.

    Their simulations also took into account the valence quarks (constituent quarks), sea quarks, and gluons. The researchers used the lattice theory of quantum chromodynamics (lattice QCD) to calculate this sea of particles and their QCD interactions [ETH Zürich].

    Elaborate conversion to physical values

    The biggest challenge with the simulations was to reduce statistical errors in calculating the ‘spin contributions’ from sea quarks and gluons, says Alexandrou. “In addition, a significant part was to carry out the renormalisation of these quantities.”

    Spin cycle. Composition of the proton spin among the constituent quarks (blue and purple columns with the lines), sea quarks (blue, purple, and red solid columns), and gluons (green column). The errors are shown by the bars. Courtesy Constantia Alexandrou.

    In other words, they had to convert the dimensionless values determined by the simulations into a physical value that can be measured experimentally – such as the spin carried by the constituent and sea quarks and the gluons that the researchers were seeking.

    Alexandrou’s team is the first to have achieved this computation including gluons, whereby they had to calculate millions of the ‘propagators’ describing how quarks move between two points in space-time.

    “Making powerful supercomputers like Piz Daint open and available across Europe is extremely important for European science,” notes Jansen.

    “Simulations as elaborate as this were possible only thanks to the power of Piz Daint,” adds Alexandrou.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    STEM Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 11:23 am on October 9, 2017 Permalink | Reply
    Tags: Aurora supercomputer, Science Node

    From Science Node: “US Coalesces Plans for First Exascale Supercomputer: Aurora in 2021” 

    Science Node bloc
    Science Node

    September 27, 2017
    Tiffany Trader

    ANL ALCF Cray Aurora supercomputer

    At the Advanced Scientific Computing Advisory Committee (ASCAC) meeting, in Arlington, Va., yesterday (Sept. 26), it was revealed that the “Aurora” supercomputer is on track to be the United States’ first exascale system. Aurora, originally named as the third pillar of the CORAL “pre-exascale” project, will still be built by Intel and Cray for Argonne National Laboratory, but the delivery date has shifted from 2018 to 2021 and target capability has been expanded from 180 petaflops to 1,000 petaflops (1 exaflop).

    The fate of the Argonne Aurora “CORAL” supercomputer has been in limbo since the system failed to make it into the U.S. DOE budget request, while the same budget proposal called for an exascale machine “of novel architecture” to be deployed at Argonne in 2021.

    Until now, the only official word from the U.S. Exascale Computing Project was that Aurora was being “reviewed for changes and would go forward under a different timeline.”

    Officially, the contract has been “extended,” and not cancelled, but the fact remains that the goal of the Collaboration of Oak Ridge, Argonne, and Lawrence Livermore (CORAL) initiative to stand up two distinct pre-exascale architectures was not met.

    According to sources we spoke with, a number of people at the DOE are not pleased with the Intel/Cray (Intel is the prime contractor, Cray is the subcontractor) partnership. It’s understood that the two companies could not deliver on the 180-200 petaflops system by next year, as the original contract called for. Now Intel/Cray will push forward with an exascale system that is some 50x larger than any they have stood up.

    It’s our understanding that the cancellation of Aurora is not a DOE budgetary measure as has been speculated, and that the DOE and Argonne wanted Aurora. Although it was referred to as an “interim,” or “pre-exascale” machine, the scientific and research community was counting on that system, was eager to begin using it, and they regarded it as a valuable system in its own right. The non-delivery is regarded as disruptive to the scientific/research communities.

    Another question: since Intel/Cray failed to deliver Aurora and have moved on to a larger exascale system contract, why hasn’t their original CORAL contract been cancelled and put out again to bid?

    With increased global competitiveness, it seems that the DOE stakeholders did not want to further delay the non-IBM/Nvidia side of the exascale track. Conceivably, they could have done a rebid for the Aurora system, but that would leave them with an even bigger gap if they had to spin up a new vendor/system supplier to replace Intel and Cray.

    Starting the bidding process over again would delay progress toward exascale – and it might even have been the death knell for exascale by 2021, but Intel and Cray now have a giant performance leap to make and three years to do it. There is an open question on the processor front as the retooled Aurora will not be powered by Phi/Knights Hill as originally proposed.

    These events raise questions about the IBM-led effort, and whether IBM/Nvidia/Mellanox are looking very good by comparison. The other CORAL thrusts — Summit at Oak Ridge and Sierra at Lawrence Livermore — are on track, with Summit several weeks ahead of Sierra, although it is looking like neither will make the cut-off for entry onto the November TOP500 list as many had speculated.

    ORNL IBM Summit supercomputer depiction

    LLNL IBM Sierra supercomputer

    We reached out to representatives from Cray, Intel and the Exascale Computing Project (ECP) seeking official comment on the revised Aurora contract. Cray and Intel declined to comment and we did not hear back from ECP by press time. We will update the story as we learn more.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    STEM Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 11:48 am on October 4, 2017 Permalink | Reply
    Tags: Science Node, Women once powered the tech industry: Can they do it again?

    From Science Node: “Women once powered the tech industry: Can they do it again?” 

    Science Node bloc
    Science Node

    02 Oct, 2017
    Alisa Alering

    As women enter a field, compensation tends to decline. Is the tech meritocracy a lie?

    Marie Hicks wants us to think about how gender and sexuality influence technological progress and why diversity might matter more in tech than in other fields.

    An assistant professor of history at the University of Wisconsin-Madison, Hicks studies the history of technological progress and the global computer revolution.

    In Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing, Hicks discusses how Britain undermined its early success in computation after World War II by neglecting its trained technical workforce — at that time largely composed of women.

    We had a few questions for Hicks about what lessons Britain’s past mistakes might hold for the seemingly-unstoppable economic engine that is Silicon Valley today.

    ‘Technical’ used to be associated with low status, less-skilled work, but now tech jobs are seen as high-status. How did the term evolve?

    In the UK, the class system was such that touching a machine in any way, even if it was an office computer, was seen as lower-class. For a time, there was enormous resistance to doing anything technical by the white men who were in the apex position of society.

    The US had less of that sort of built-in bias against technical work, but there was still the assumption that if you were working with a machine, the machine was doing most of the work. You were more of a tender or a minder—you were pushing buttons.

    The change resulted from a very intentional, concerted push from people inside these nascent fields to professionalize and raise the status of their jobs. All of these professional bodies that we have today, the IEEE and so on, were created in this period. They were helped along by the fact that this is difficult work, and there was a lot of call for it, leading to persistent shortages of people who could do the work.

    We’re in an interesting moment, when these professions are at their peak, and now we’re starting to see them decline in importance and remuneration. More and more, people are hired into jobs that are broken down in ways that require less skill or less training. New college hires are brought into them and the turnover is such that people no longer have the guarantee of a career.

    Will diversity initiatives, rather than elevating women, devalue the status of the field, as happened previously in professions like teaching and librarianship?

    We can see that already happening for certain subfields. Women are pushed into areas like quality assurance rather than what would be considered higher-level, more important, infrastructural engineering positions. The jobs require, in many cases, identical skills, and yet those subfields are paid less and have a lower status.

    The discrepancies are very much linked to the fact that there are a higher proportion of women doing the work. It’s a cycle: High pay and high status professions usually become more male-dominated. If that changes and more women enter the field, pay declines. The perception of the field changes, even if the work remains the same.

    Does the tech industry have a greater problem with structural inequality, or is the conversation just more visible?

    The really significant thing about tech is that it’s so powerful. It’s becoming the secondary branch of our government at this point. That’s why it’s so critical to look at lack of diversity in Silicon Valley.

    There’s just so much at stake in terms of who has the power to decide how we live, how we die, how we’re governed, just the entire shape of our lives.

    How do you suggest we tackle the problem?

    There’s this whole myth of meritocracy that attempts to solve the problem of diversity in STEM through the pipeline model — that, essentially, if we get enough white women and people of color into the beginning end of the pipeline, they’ll come out the other end as captains of industry who are in a position to make real changes in these fields.

    But, of course, what happens is that they just leak out of the pipeline, because stuffing more and more people into a discriminatory system in an attempt to fix it doesn’t work.

    If you want more women and people of color in management, you have to hire them into those higher positions. You have to get people to make lateral moves from other industries. You have to promote people rather than saying, “Oh, you come in at the bottom level, and you somehow prove yourself.” It’s not going to be possible to get people to the top in that way.

    What we’ve seen is decades and decades where people have been kept at the bottom after they come in at the bottom. We have to have a real disruption in how we think about these jobs and how we hire for them. We can’t just do the same old thing but try to add more women and more people of color into the mix.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    STEM Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 1:46 pm on September 21, 2017 Permalink | Reply
    Tags: Barcelona Supercomputer Center, Science Node, Turbulence

    From Science Node: “Smoothing out our notion of turbulence” 

    Science Node bloc
    Science Node

    Barcelona Supercomputing Center

    Barcelona Supercomputing Center MareNostrum Intel Xeon Platinum supercomputer

    21 Sep, 2017
    Alisa Alering

    Advances in high-performance computing at the Barcelona Supercomputing Center create greater accuracy in turbulence modeling.

    Turbulence is what makes you clutch your seat and contemplate your mortality 36,000 feet above the ground as the flimsy tin can you’re flying in bounces violently.

    Turbulence is also present in ocean currents and the lava discharged from a volcano. It’s active in the smoke from a chimney, oil churning through pipelines and, yes, the air around aircraft wings.

    In fact, the flow of most fluids is turbulent, including the movement of blood in our arteries.

    Since turbulence is so prevalent, its study has many industrial and environmental applications.

    Scientists model turbulence to improve vehicle design, diagnose atherosclerosis, build safer bridges, and reduce air pollution.

    Caused by excess energy that can’t be absorbed by a fluid’s viscosity, turbulent flow is by nature irregular and therefore hard to predict. The speed of the fluid at any point constantly fluctuates in both magnitude and direction, presenting researchers with a long-standing challenge.

    But new research has validated a theory proposed in the first half of the twentieth century, one whose math was too complex to confirm until recently.

    The need to follow fluid lumps in time, space, and scale results in equations that generate too much information: Even now, only a small part of the flow will fit in a computer simulation.

    Scientists use models to make up the missing part. But if those models are wrong, then the simulation is also wrong and no longer represents the flow it’s attempting to simulate.

    Recent research by José Cardesa [Science], an aeronautical engineer in Javier Jiménez’s Fluid Dynamics Group at Universidad Politécnica de Madrid (UPM), attempts to gain new insights into the physics behind turbulent flows and reduce the gaps between simulated flows and the flows around real devices.

    “A main source of discrepancy between computer-modeled flows and the flow around a real airplane is given by the poor performance of the models,” says Cardesa.

    An underlying simplicity

    In the 1940s, mathematician Andrey Kolmogorov proposed that turbulence occurs in a cascade.

    A turbulent flow contains whirls of many different sizes. According to Kolmogorov, energy is transferred from the large whirls to smaller and more numerous whirls, rather than dispersing to farther distances.
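
    Kolmogorov’s theory makes this cascade quantitative. In the so-called inertial range of whirl sizes, the kinetic-energy spectrum depends only on the rate \varepsilon at which energy is handed down to smaller scales (a standard result, stated here for reference rather than quoted from the article):

        E(k) = C \, \varepsilon^{2/3} \, k^{-5/3}

    Here k is the wavenumber (roughly the inverse of the whirl size) and C is a universal constant of order one. The famous -5/3 slope is the sort of recurrent behavior a simulation like the one described below can test directly.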

    But, Cardesa says, the chaotic behavior of a fluid makes it hard to observe any trend with the naked eye.

    Hoping to track individual eddy structures and determine if a recurrent behavior is at work in how turbulence spreads, Cardesa and his colleagues at UPM simulated a turbulent flow using the MinoTauro cluster at the Barcelona Supercomputing Center.

    3
    MinoTauro cluster at the Barcelona Supercomputing Center

    The code was run in parallel on 32 NVIDIA Tesla M2090 cards, using a hybrid CUDA-MPI code developed by Alberto Vela-Martin. The simulation took almost three months to complete and resulted in over one hundred terabytes of compressed data.

    Progress in analyzing the stored simulation data was initially slow, until Cardesa adjusted the code so it would fit on a single node of a computer cluster with 48 GB of RAM per node. This way, he could run the process independently on twelve different nodes and was able to complete the task within just one month.
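
    The pattern described here is embarrassingly parallel: once each chunk of the stored data fits within a single node’s 48 GB of RAM, the chunks can be analyzed with no communication between nodes. A minimal sketch of that idea in Python follows; the names (NUM_NODES, analyze_snapshot, the snapshot count) are hypothetical stand-ins, not Cardesa’s actual code.

        # Hypothetical sketch: deal stored flow snapshots out to independent
        # cluster nodes, so each node's share fits in its 48 GB of RAM.
        import sys

        NUM_NODES = 12
        SNAPSHOTS = range(1024)           # stand-in for the stored time steps

        def analyze_snapshot(snap_id):
            # Placeholder: load one snapshot and track its eddy structures.
            return snap_id

        # Each node runs this script with its own index, e.g. from a job array.
        node_id = int(sys.argv[1]) if len(sys.argv) > 1 else 0
        for snap in SNAPSHOTS[node_id::NUM_NODES]:
            analyze_snapshot(snap)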

    Their results validated Kolmogorov’s theory, revealing an underlying simplicity in the apparently random motion of turbulent wind or water. The next step may be to try to understand the cause of the trend Cardesa has detected or to implement the new insights into flow simulation software.

    Cardesa’s work has benefited from advances in computational speed and storage capacity. He points out that his work would have been possible about ten years ago, but the expense would have been such that it would have required a ‘heroic’ computational effort.

    “The reduced cost of technology has made it possible for us to play with these datasets,” says Cardesa. “This is an extremely useful situation to be in when doing fundamental research and throwing all our efforts at an unsolved problem.”

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    STEM Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 7:59 am on August 25, 2017 Permalink | Reply
    Tags: Caty Pilachowski, Science Node

    From Science Node: Women in Stem -“A Hoosier’s view of the heavens” Caty Pilachowski 

    Science Node bloc
    Science Node

    24 Aug, 2017
    Tristan Fitzpatrick

    Caty Pilachowski

    Courtesy Emily Sterneman; Indiana University.

    “An eclipse violates our sense of what’s right.”

    So says Caty Pilachowski. Pilachowski, past president of the American Astronomical Society and now the Kirkwood Chair in Astronomy at Indiana University, has just returned from Hopkinsville, Kentucky, where she observed the eclipse on the path of totality and watched the phenomena associated with a solar eclipse.

    “There are all kinds of effects that we can see during an eclipse,” says Pilachowski. “For example, we’re able to see the corona, which we can never see during the daytime without special equipment.”

    The surface of the sun, Pilachowski explains, has a temperature of roughly 5,780 kelvins (10,000º Fahrenheit). The thin gas that makes up the corona far above the sun, however, has a much hotter temperature — over a million degrees K.
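
    The unit conversion itself is easy to verify:

        # 5,780 K expressed in Fahrenheit, confirming the rough figure above.
        kelvin = 5780
        fahrenheit = (kelvin - 273.15) * 9 / 5 + 32
        print(f"{fahrenheit:,.0f} F")  # ~9,944 F, i.e. roughly 10,000 F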

    “That process of transporting energy into the highest atmosphere of the sun is not well understood,” she observes. “It’s the region just above the bright lower atmosphere of the sun that we’re best able to see during the eclipse, and that’s where the energy transport occurs.”

    Smile for the camera

    But the star in our own neighborhood isn’t the only one Pilachowski is keeping her eye on.

    When they’re not watching eclipses, Pilachowski and her colleagues at the IU Department of Astronomy use the One Degree Imager (ODI) on the WIYN 3.5M Observatory at Kitt Peak outside Tucson, Arizona.

    One Degree Imager (ODI) on the WIYN 3.5M Observatory


    NOAO WIYN 3.5 meter telescope at Kitt Peak, AZ, USA

    The ODI was designed to image one square degree of sky at a time (the full moon takes up about half a square degree). Each image produced with the ODI is potentially 1–2 gigabytes in size.


    Kitt Peak outside of Tucson, Arizona hosts the 3.5 meter WIYN telescope, the primary research telescope for IU astronomers. Courtesy IU Astronomy; UITS Advanced Visualization Laboratory.

    IU astronomers collect thousands of these images, creating huge datasets that need to be examined quickly for scholarly insight.

    “Datasets from the ODI are much larger than can be handled with methods astronomers previously used, such as a CD-ROM or a portable hard drive,” says Arvind Gopu, manager of the Scalable Compute Archive team.

    This is where IU’s computationally rich resources are critically important.

    The ODI Portal, Pipeline, and Archive (ODI-PPA) leverages the Karst, Big Red II, and Carbonate supercomputers at IU to quickly process these large amounts of data for analysis.

    Karst supercomputer

    Big Red II supercomputer

    These HPC tools allow researchers to perform statistical analysis and source extraction from the original image data. With these resources, they can determine if they’ve located stars, galaxies, or other items of interest from the large slice of the universe they’ve been viewing.

    “The advantage of using ODI-PPA is that you don’t have to have a lot of supercomputing experience,” says Gopu. “The idea is for astronomers to do the astronomy, and for us at UITS Research Technologies to do the computer science for them.”

    This makes the workflow on the ODI much faster than for other optical instruments. When collecting images of the universe, some instruments run into the crowded field problem, where stars are so close to each other they blend together when imaged. Teasing them apart requires a lot of computational heft.

    Another advantage ODI-PPA offers is its user-friendly web portal that makes it easy for researchers to view out-of-this-world images on their own machines, without requiring multiple trips to Kitt Peak.

    “Without the portal, IU astronomers would be dead in the water,” Pilachowski admits. “Lots and lots of data, with no way to get the science done.”

    Out of the fire and into the frying pan

    Pilachowski is also a principal investigator on the Blanco DECam Bulge Survey (BDBS). A three-year US National Science Foundation-funded project, BDBS uses the Dark Energy Camera (DECam) attached to the Blanco Telescope in Chile to map the bulge at the heart of the Milky Way.

    Dark Energy Survey


    Dark Energy Camera [DECam], built at FNAL


    NOAO/CTIO Victor M Blanco 4m Telescope, which houses DECam, at Cerro Tololo, Chile, at an altitude of 7,200 feet

    Like the yolk of a fried egg rising above egg whites in a frying pan, billions of stars orbit together to form a bulge that rises out of the galactic center.

    With the help of the DECam, Pilachowski can analyze populations of stars in the Milky Way’s bulge to study their properties.

    Astronomers use three different variables to catalog stars: how much hydrogen a star has, how much helium it has, and how much ‘metals’ it has (that is, all the elements that aren’t hydrogen or helium).

    When the data from the survey is processed, Pilachowski can explore a large amount of information about stars in the Bulge, giving her clues about how the Milky Way’s central star system formed.

    “Most large astronomical catalogues are in the range of 500 million stars,” says Michael Young, astronomer and senior developer analyst at UITS Research Technologies. “When we’re done with this project, we should have a catalog of about a billion stars for researchers to use.”

    Journey of two eclipses

    As a child of the atomic age, Pilachowski grew up devouring books about the evolution of stars. She read as many books as she could about how they were formed, what stages they went through, and how they died.

    “That interest in stars has been a lifelong love for me,” Pilachowski says. “It’s neat to me that what I found exciting as a kid is what I get to spend my whole career studying.”

    She observed the last total solar eclipse in the continental US on February 26, 1979, an event she says further inspired her research in astronomy.

    “For me that eclipse was a combination of, ‘Wow, this is so amazing,’” Pilachowski recalls.

    “On the other hand, the observer in me saw cool things that were present, like planets that were visible right near the sun in the day time.”

    Regardless of whether scientists get closer to answering why the sun’s outer atmosphere is much hotter than its surface, Pilachowski says the eclipse has an eerie, unnerving effect on viewers.

    “We have this deep, ingrained understanding that the sun rises every morning and sets every evening,” says Pilachowski. “Things are as they’re supposed to be. An eclipse is something so rare and counter to our intuition that it just affects us deeply.”

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    STEM Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 12:20 pm on July 13, 2017 Permalink | Reply
    Tags: Chinese Sunway TaihuLight supercomputer currently #1 on the TOP500 list of supercomputers, How supercomputers are uniting the US and China, Science Node

    From Science Node: “How supercomputers are uniting the US and China” 

    Science Node bloc
    Science Node

    12 July 2017
    Tristan Fitzpatrick

    Thirty-eight years ago, US President Jimmy Carter and China Vice Premier Deng Xiaoping signed the US – China Agreement on Cooperation in Science and Technology, outlining broad opportunities to promote science and technology research.

    Since then, the two nations have worked together on a variety of projects, including energy and climate research. Now, however, both countries are working toward the same new goal: the pursuit of exascale computing.

    At the PEARC17 conference in New Orleans, Louisiana, representatives from the high-performance computing communities in the US and China participated in the first international workshop on American and Chinese collaborations in experience and best practice in supercomputing.

    Both countries face the same challenges in implementing and managing HPC resources across a large nation-state: the hardware and software technologies are rapidly evolving, the user base is ever-expanding, and the technical requirements for maintaining these large, fast machines are growing.

    It would be a major coup for either country’s scientific prowess to reach exascale computing first, as that level of performance is believed to be on the order of the processing power of the human brain at the neural level. Initiatives like the Human Brain Project consider it a hallmark for advancing computational power.

    “It’s less like an arms race between the two countries to see who gets there first and more like the Olympics,” says Dan Stanzione, executive director at the Texas Advanced Computing Center (TACC).

    TACC Maverick HP NVIDIA supercomputer

    TACC Lonestar Cray XC40 supercomputer

    Dell PowerEdge Stampede supercomputer at the Texas Advanced Computing Center, U Texas Austin, 9.6 PF

    TACC HPE Apollo 8000 Hikari supercomputer

    “We’d like to win and get the gold medal but hearing what China is doing with exascale research is going to help us get closer to this goal.”

    ___________________________________________________________________

    Exascale refers to computing systems that can perform a billion billion calculations per second — at least 50 times faster than the fastest supercomputers in the US.

    ___________________________________________________________________
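
    To make that scale concrete, here is a back-of-the-envelope Python sketch; the 20-petaflop figure for the fastest US systems of the time is an assumption used only for illustration.

    ```python
    # Rough arithmetic behind the "at least 50 times faster" claim.
    EXAFLOP = 1e18            # a billion billion (10**18) calculations per second
    US_FASTEST_2017 = 20e15   # assumed ~20 petaflops for the fastest US system

    print(f"Speedup: ~{EXAFLOP / US_FASTEST_2017:.0f}x")  # prints: Speedup: ~50x
    ```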

    Despite the bona fides that would be awarded to whoever achieves the milestone first, TACC data mining and statistics group manager Weijia Xu stresses that collaboration is a greater motivator for both the US and China than a race to see who gets there first.

    “I don’t think it’s really a competition,” Xu says. “It’s more of a common goal we all want to reach eventually. How you reach the goal is not exactly clear to everyone yet. Furthermore, there are many challenges ahead, such as how systems can be optimized for various applications.”

    The computational resources at China’s disposal could make it a great ally in the pursuit of exascale power. As of June 2017, China holds the two fastest supercomputers on the TOP500 list, while the United States has five entries in the top ten.

    Chinese Sunway TaihuLight supercomputer, currently #1 on the TOP500 list of supercomputers.

    “While China has the top supercomputer in the world, China and the US probably have about fifty percent each of those top 500 machines besides the European countries,” says Si Liu, HPC software tools researcher at TACC. “We really believe if we have some collaboration between the US and China, we could do some great projects together and benefit the whole HPC community.”
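
    Tallies like Liu’s fifty-percent figure are straightforward to reproduce once the list is in hand. A minimal sketch, assuming the June 2017 TOP500 list has been exported to a CSV with a "country" column (the file name and column name are assumptions):

    ```python
    import csv
    from collections import Counter

    # Count TOP500 entries per country from a hypothetical CSV export.
    with open("top500_june2017.csv", newline="") as f:
        counts = Counter(row["country"] for row in csv.DictReader(f))

    total = sum(counts.values())
    for country, n in counts.most_common(5):
        print(f"{country}: {n} systems ({100 * n / total:.0f}%)")
    ```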

    Besides pursuing the elusive exascale goal, Stanzione says the workshop opened up other ideas for improving the overall performance of HPC efforts in both nations. Participants spoke on topics including in situ simulations, artificial intelligence, and deep learning.

    “We also ask questions like how do we run HPC systems, what do we run on them, and how it’s going to change in the next few years,” Stanzione says. “It’s a great time to get together and talk about details of processors, speeds, and feeds.”

    See the full article here.


     
  • richardmitnick 1:21 pm on July 8, 2017 Permalink | Reply
    Tags: , , , , , , , Science Node, , UCSD Comet supercomputer   

    From Science Node: “Cracking the CRISPR clock” 

    Science Node bloc
    Science Node

    05 Jul, 2017
    Jan Zverina

    SDSC Dell Comet supercomputer

    Capturing the motion of gyrating proteins over timescales up to one thousand times longer than previous efforts, a team led by University of California, San Diego (UCSD) researchers has identified the myriad structural changes that activate and drive CRISPR-Cas9, the innovative gene-splicing technology that’s transforming the field of genetic engineering.

    By shedding light on the biophysical details governing the mechanics of CRISPR-Cas9 (clustered regularly interspaced short palindromic repeats) activity, the study provides a fundamental framework for designing a more efficient and accurate genome-splicing technology that avoids the ‘off-target’ DNA breaks currently frustrating the potential of the CRISPR-Cas9 system, particularly for clinical uses.


    Shake and bake. Gaussian accelerated molecular dynamics simulations and state-of-the-art supercomputing resources reveal the conformational change of the HNH domain (green) from its inactive to active state. Courtesy Giulia Palermo, McCammon Lab, UC San Diego.
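
    The “Gaussian accelerated” part of the method refers to adding a harmonic boost potential that smooths deep energy wells, letting simulations sample conformational changes far faster than conventional molecular dynamics. Here is a minimal Python sketch of that boost term; the numerical values are illustrative, and in real runs the threshold E and force constant k are derived from an equilibration stage.

    ```python
    def gamd_boost(V, E, k):
        """Harmonic boost potential used in Gaussian accelerated MD.

        When the system potential energy V drops below the threshold E,
        a boost of 0.5 * k * (E - V)**2 is added, flattening the energy
        landscape; above the threshold no boost is applied.
        """
        return 0.5 * k * (E - V) ** 2 if V < E else 0.0

    # Illustrative values only (e.g., kcal/mol).
    print(gamd_boost(V=-1000.0, E=-950.0, k=0.05))  # 62.5 (boosted)
    print(gamd_boost(V=-940.0, E=-950.0, k=0.05))   # 0.0 (no boost)
    ```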

    “Although the CRISPR-Cas9 system is rapidly revolutionizing life sciences toward a facile genome editing technology, structural and mechanistic details underlying its function have remained unknown,” says Giulia Palermo, a postdoctoral scholar with the UC San Diego Department of Pharmacology and lead author of the study [PNAS].

    See the full article here.


     