Tagged: Science Node

  • richardmitnick 8:49 am on July 11, 2019
    Tags: Ruby Mendenhall, Science Node

    From Science Node: Women in STEM-“The citizen scientists of hidden America” Ruby Mendenhall 

    From Science Node

    03 July, 2019
    Alisa Alering

    This health study in Chicago recruits subjects to also be the scientists.

    When you read the words ‘citizen scientist’, what do you picture? Maybe backyard astronomers helping to classify distant galaxies, or fifth graders recording soil temperatures to track climate change.

    But Ruby Mendenhall, assistant dean for diversity and democratization of health innovation at the Carle Illinois College of Medicine, has a different idea of what citizen science can do—and who can participate.

    Mendenhall used a 2017-2018 NCSA Faculty Fellowship to examine how exposure to nearby gun crimes impacts African-American mothers living in Englewood, Chicago. Home to about 30,000 people, Englewood has a reputation as one of the most violent neighborhoods in the city.

    Beyond the physical effects of stress, Mendenhall wanted to investigate the long-term consequences experienced by women living in communities like Englewood. For example, what happens to a parent when the sound of gunshots is common during the day—and especially at night?

    Here’s where the citizen science comes in. The women of Englewood aren’t just subjects in this research, they’re active participants.

    “We wanted to put more agency in their hands,” says Mendenhall. “We asked them, ‘What would you like to see solved? What’s an issue that you have? How can we study this?’”

    From subjects to scientists

    Mendenhall sees citizen science as a way to address health disparities and social inequality. Though many citizen science projects focus on topics like backyard biology, it’s an existing framework that can be applied to community-based participatory research in health and medicine.

    “These are citizen scientists who can take knowledge of their own lived experience and create new knowledge about Black women and families,” says Mendenhall. “We hope they can help us make medical advances around depression, PTSD, and how the body responds to stress.”

    Mendenhall wanted to put more agency in the hands of the women, transforming them from study subjects into participating scientists. The researchers asked what the women wanted to see solved, what issues they were concerned about, and how it might be studied.

    Mendenhall then teamed up with computer scientist Kiel Gilleade to design a mobile health study that documented the women’s experience via wearable biosensors, phone GPS, and diary-keeping.

    Given the long and well-founded history of mistrust toward the medical community, Mendenhall was concerned that the participants wouldn’t agree to let researchers take samples of their blood (for a separate study) to see how stress affected the genes that regulate the immune system.

    But, somewhat to her surprise, the women agreed. One of the reasons the women gave for their willingness to participate was that they recognized the impact stress was having on their bodies.

    “They talked about having headaches, backaches, stomachaches, many things,” says Mendenhall. “They were interested in what was going on with their bodies, what was the connection.”

    Asking the right questions

    Whose voice is not represented? Mendenhall presented her keynote address, Using Advanced Computing to Recover Black Women’s Lost History, at the PEARC18 conference in Pittsburgh in July 2018.

    Mendenhall hasn’t always engaged with computation to further her research. She started her academic career in African-American studies and sociology. But when faculty from NCSA visited her department, Mendenhall became intrigued by the possibilities of big data.

    “I didn’t change the research I was interested in, I didn’t change my focus on Black women and their agency and their lived experiences on the margins of society,” says Mendenhall. “What I did was expand my toolkit and my ability to answer questions—and even to ask different questions.”

    Some of the questions she’s asking are: Whose voice may not be represented? Whose lived experience isn’t represented? If they were, how would what we see be different? Mendenhall believes that scholars of all types can benefit from putting more time and energy into asking questions like these.

    “I think it’s important to understand that big data is not neutral, it is not objective,” says Mendenhall. “Data is situated within a historical and political context.”

    Despite biases in existing collections of data, Mendenhall believes data can also be applied to help equalize the historical record.

    “I think big data has great potential if more voices are brought in,” says Mendenhall. “If everyone’s voice can be heard and seen and studied and digitized. And if Black women can also study it themselves and develop ideas about what that data is representing.”

    The study of Black women in Englewood followed only twelve women, but the next step will be to expand the pool of citizen scientists to 600 or more.

    “Ideally, I’m thinking about 100,000 citizen scientists or all the women in Chicago. If they could all be citizen scientists—then what would we see?”

    Mendenhall is currently at work on a funding proposal to create a Communiversity Think-and-Do Tank where researchers and citizen scientists will work together to address grand challenges such as gun violence, Black infant and maternal mortality, mental health, and diverse histories in the digital archives. She hopes this will be one avenue to get her closer to her goal of 100,000 citizen scientists.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 10:13 am on June 20, 2019
    Tags: Science Node, Ted Chiang

    From Science Node: “Why does AI fascinate us?” 


    10 June, 2019
    Alisa Alering


    Ted Chiang talks about how our love for fictional AI interacts with the real-world use of artificial intelligence.

    Why do you think we are fascinated by AI?

    People have been interested in artificial beings for a very long time. Ever since we’ve had lifelike statues, we’ve imagined how they might behave if they were actually alive. More recently, our ideas of how robots might act are shaped by our perception of how good computers are at certain tasks. The earliest calculating machines did things like computing logarithm tables more accurately than people could. The fact that machines became capable of doing a task which we previously associated with very smart people made us think that the machines were, in some sense, like very smart people.

    How does our—let’s call it shared human mythology—of AI interact with the real forms of artificial intelligence we encounter in the world today?

    The fact that we use the term “artificial intelligence” creates associations in the public imagination which might not exist if the software industry used some other term. AI has, in science fiction, referred to a certain trope of androids and robots, so when the software industry uses the same term, it encourages us to personify software even more than we normally would.

    Is there a big difference between our fictional imaginary consumption of AI and what’s actually going on in current technology?

    Intelligent machines. ‘Maria’ was the first robot to be depicted on film, in Fritz Lang’s Metropolis (1927). Courtesy Jeremy Tarling. (CC BY-SA 2.0)

    I think there’s a huge difference. In our fictional imagination “artificial intelligence” refers to something that is, in many ways, like a person. It’s a very rigid person, but we still think of it as a person. But nothing that we have in the software industry right now is remotely like a person—not even close. It’s very easy for us to attribute human-like characteristics to software, but that’s more of a reflection of our cognitive biases. It doesn’t say anything about the properties that the software itself possesses.

    What’s happening now or in the near future with intelligent systems that really captures your interest?

    What I find most interesting is not typically described as AI, but with the phrase ‘artificial life.’ Some researchers are creating digital organisms with bodies and sense organs that allow them to move around and navigate their environment. Usually there’s some mechanism where they can give rise to slightly different versions of themselves, and thus evolve over time. This avenue of research is really interesting because it could eventually result in software entities which have a lot of the properties that we associate with living organisms. It’s still going to be a long ways from anything that we consider intelligent, but it’s a very interesting avenue of research.

    Over time, these entities might come to have the intelligence of an insect. Even that would be pretty impressive, because even an insect is good at a lot of things which Watson (IBM’s AI supercomputer) can’t do at all. An insect can navigate its environment and look for food and avoid danger. A lot of the things that we call common sense are outgrowths of the fact that we have bodies and live in the physical world. If a digital organism could have some of that, that would be a way of laying the groundwork for an artificial intelligence to eventually have common sense.

    How do we teach an artificial intelligence the things we consider common sense?

    Alan Turing once wrote that he didn’t know what would be the best way to create a thinking machine; it might involve teaching it abstract activities like chess, or it might involve giving it eyes and a body and teaching it the way you’d teach a child. He thought both would be good avenues to explore.

    Historically, we’ve only tried that first route, and that has led to this idea that common sense is hard to teach or that artificial intelligence lacks common sense. I think if we had gone with the second route, we’d have a different view of things.

    If you want an AI to be really good at playing chess, we have got that problem licked. But if you want something that can navigate your living room without constantly bumping into a coffee table, that’s a completely different challenge. If you want to solve that one, you’re going to need a different approach than what we’ve used for solving the grandmaster-level chess-playing problem.

    My cat’s really good in the living room but not so good at chess.

    Exactly. Because your cat grew up with eyes and a physical body.

    Since you’re someone who (presumably) spends a lot of time thinking about the social and philosophical aspects of AI, what do you think the creators of artificial beings should be concerned about?

    I think it’s important for all of us to think about the greater context in which the work we do takes place. When people say, “I was just doing my job,” we tend not to consider that a good excuse when doing that job leads to bad moral outcomes.

    When you as a technologist are being asked how to solve a problem, it’s worth thinking about, “Why am I being asked to solve this problem? In whose interest is it to solve this problem?” That’s something we all need to be thinking about no matter what sort of work we do.

    Otherwise, if everyone simply keeps their head down and just focuses narrowly on the task at hand, then nothing changes.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 6:12 am on May 23, 2019
    Tags: "5 ways technology is making the world more accessible", 1. Self-driving cars? How about self-driving wheelchairs?, 2. Helping farmers stay in the field, 3. The power of thought, 4. Reading the signs, 5. This AI will help you “see”, Science Node   

    From Science Node: “5 ways technology is making the world more accessible” 

    From Science Node

    22 May, 2019
    Laura Reed

    Accessibility goes mainstream as the world ages.

    Inventors have been finding ways to help people overcome disabilities for centuries. Ear trumpets boosted hearing in the 17th century. In the 1820s, Louis Braille devised a system that allowed the blind to read through their sense of touch.

    Innovations and legislation in the 20th century increased access to employment, entertainment, and information. But one in four US adults currently has a disability that significantly impacts their life. Can new technology provide some 21st-century solutions?

    1. Self-driving cars? How about self-driving wheelchairs?

    The world’s population is aging fast. The number of people in the US aged 65 and over is projected to increase from 48 million to 88 million by 2050. Similar demographic shifts are happening worldwide—and that means a lot of people will face challenges with mobility.


    Self-driving wheelchairs use lidar sensors to measure the distance to objects and can build a map after being manually driven around the area where they will be used. Courtesy SMART Comms.

    Autonomous wheelchairs could be the answer. That’s why Samsung, MIT, Northwestern University, and others are borrowing technology from self-driving cars to develop self-driving wheelchairs. Equipped with lidar sensors that measure the distance to an object by illuminating that object with a laser light, the wheelchair builds a map after being manually driven around the area where it will be used. After that, the user will be able to select where they want to go by clicking on the map.
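
    A lidar unit estimates range from the round-trip time of a laser pulse, and scans collected while the chair is driven around can be accumulated into a simple map of obstacles. Below is a minimal Python sketch of that idea; the scan format, grid size, and function names are assumptions made for illustration, not details of any actual wheelchair product.

        # Minimal sketch: turning lidar time-of-flight readings into a 2D obstacle grid.
        # Assumptions for illustration only: scans arrive as (angle_rad, round_trip_s)
        # pairs taken from a known chair pose; real systems also handle noise, motion,
        # and full SLAM.
        import math

        C = 299_792_458.0   # speed of light, m/s

        def range_from_time(round_trip_s):
            """Distance to the reflecting object: the pulse travels out and back."""
            return C * round_trip_s / 2.0

        def mark_obstacles(grid, pose, scan, cell_size=0.1):
            """Mark lidar returns from one scan as occupied cells in a dict-based grid."""
            x0, y0, heading = pose
            for angle, t in scan:
                r = range_from_time(t)
                x = x0 + r * math.cos(heading + angle)
                y = y0 + r * math.sin(heading + angle)
                grid[(int(x // cell_size), int(y // cell_size))] = True
            return grid

        # Example: one return from an obstacle 2 m ahead of a chair at the origin.
        grid = mark_obstacles({}, pose=(0.0, 0.0, 0.0), scan=[(0.0, 2 * 2.0 / C)])
        print(grid)   # one occupied cell roughly 2 m along the +x axis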

    One company, the Toronto-based Cyberworks, has a prototype chair that should be available for purchase in a few years. Self-driving wheelchairs could be the key to independent living for millions of people with disabilities.

    2. Helping farmers stay in the field

    Disabled farmers need help staying in the fields. AgrAbility is a US national program that maintains a database of solutions, from harvesting aids to equipment for adaptive horseback riding. Courtesy AgrAbility.

    When you’re in the grocery store, do you ever think about where your food comes from? According to a recent study, one in five farmers in the US has some type of disability. In addition, the average age of the American farmer is 57. Ailments associated with aging often impair a farmer’s ability to work.

    That’s why in 1990 the US government funded an assistive technology program, AgrAbility, to help disabled farmers. Its Assistive Technology Database is an index of over 1,400 solutions to problems faced by farmers, such as harvesting aids, calving and calf care equipment, and accessories for adaptive horseback riding. Each solution offered in the database shows the cost of the technology along with the physical limitations the tech addresses. Other resources on the AgrAbility website include online training, links to state projects, and resources on many health care issues.

    3. The power of thought

    For most of us, the entire world is just a tap or a swipe away on our smartphones. But that isn’t the case for people who have upper body impairments or paralysis. Fortunately, researchers are working on technology that will allow users to control a mobile device with their thoughts.

    In a recent clinical trial of a brain-computer interface (BCI) called BrainGate, researchers implanted microelectrode arrays into the part of the brain that controls hand movement.

    The participants thought about moving their hands, and the BCI learned to translate the brain activity into actions on an Android tablet.

    Participants were eventually able to use the tablet to check and respond to email, search the internet, read news, and stream music. The researchers believe interfaces like BrainGate may enable people with degenerative conditions like ALS to communicate with others and participate in everyday activities.
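
    In broad terms, a BCI like this learns a mapping from recorded neural activity (for example, per-electrode firing rates in each time bin) to an intended movement, such as a cursor velocity on the tablet. The sketch below shows one hypothetical version of that decoding step using ordinary least-squares regression; the array sizes and signals are invented for illustration, and this is not a description of BrainGate’s actual algorithm.

        # Hypothetical sketch of a linear neural decoder: firing rates -> 2D cursor velocity.
        # Illustrates the general idea only; not the BrainGate system's method.
        import numpy as np

        rng = np.random.default_rng(0)

        # Pretend calibration data: 500 time bins of firing rates from 96 electrodes,
        # paired with the cursor velocities the participant was asked to imagine.
        rates = rng.poisson(5.0, size=(500, 96)).astype(float)
        true_weights = rng.normal(size=(96, 2))
        velocities = rates @ true_weights + rng.normal(scale=0.5, size=(500, 2))

        # Fit the decoder by least squares: velocity ~ rates @ W.
        W, *_ = np.linalg.lstsq(rates, velocities, rcond=None)

        # At run time, each new bin of firing rates becomes a cursor command.
        new_rates = rng.poisson(5.0, size=(1, 96)).astype(float)
        print(new_rates @ W)   # e.g. [[dx, dy]] in screen units per bin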

    4. Reading the signs

    In a perfect world, a sign language interpreter would be at the ready any time a deaf or hard of hearing person needed them. But some institutions such as hospitals and courts use Video Remote Interpreting to save money. Unfortunately, the facial gestures and body movements that convey meaning in sign language may be lost when delivered in a video feed.

    That’s why research is underway to develop a tool that can convert sign language to speech in audio or text format. Two University of Washington students have developed a system that lets people fluent in American Sign Language (ASL) communicate with non-signers. SignAloud uses gloves designed to recognize ASL gestures. The gloves send data to a computer for processing. Then the word or phrase associated with the gesture is spoken through a speaker.

    With a similar glove-based product called BrightSign Glove, users record and name their own gestures to go with specific words or phrases. The product aims to sidestep the need for facial cues and body motions. Another version of BrightSign will send translations directly to the glove wearer’s smartphone. The phone can then vocalize the words and phrases.
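
    Both gloves follow the same basic pipeline: capture sensor readings, match them against stored gestures, and voice the matching word or phrase. A toy nearest-neighbor matcher along those lines is sketched below; the sensor layout, stored templates, and threshold are invented for illustration and are not taken from SignAloud or BrightSign.

        # Toy sketch of the glove pipeline: sensor vector -> nearest stored gesture -> word.
        # Sensor counts, templates, and the distance threshold are invented examples.
        import math

        # Pretend each gesture is a vector of flex-sensor readings, one per finger.
        TEMPLATES = {
            "hello":     [0.1, 0.1, 0.1, 0.1, 0.1],
            "thank you": [0.9, 0.2, 0.2, 0.2, 0.8],
        }

        def classify(reading, max_distance=0.5):
            """Return the word whose template is closest to the reading, if close enough."""
            best_word, best_dist = None, float("inf")
            for word, template in TEMPLATES.items():
                dist = math.dist(reading, template)
                if dist < best_dist:
                    best_word, best_dist = word, dist
            return best_word if best_dist <= max_distance else None

        word = classify([0.85, 0.25, 0.2, 0.15, 0.75])
        if word:
            print(word)   # a real system would hand this string to a text-to-speech engine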

    5. This AI will help you “see”

    Since the end of World War I, people who are blind or visually impaired have been using the “white cane,” or “long cane,” to detect obstacles and scan for orientation marks. Now, thanks to Google AI, there’s an app for that.

    Google Lookout is an app that runs on Pixel devices in the US. It uses image recognition technology similar to that of Google Lens to assist users in learning about a new space, reading documents, and completing activities like cooking and shopping. The app detects an object, guesses what it is and tells the user about it. Google recommends that users attach the device to a lanyard worn around the neck, or in the front pocket of a shirt. Once opened, the app requires no further input. Google says it hopes to bring the app to more countries and platforms soon.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 10:03 am on May 9, 2019
    Tags: A big universe needs big computing-Sijacki accessed HPC resources through XSEDE in the US and PRACE in Europe, Debora Sijacki, Science Node, She now uses the UK’s National Computing Service DiRAC in combination with PRACE, Sijacki wants to understand the role supermassive black holes (SMBH) play in galaxy formation.

    From Science Node: Women in STEM- “Shining a light on cosmic darkness” Debora Sijacki 

    From Science Node

    08 May, 2019
    Alisa Alering

    Debora Sijacki. Courtesy David Orr.

    Award-winning astrophysicist Debora Sijacki wants to understand how galaxies form.

    Carl Sagan once described the Earth as a “pale blue dot, a lonely speck in the great enveloping cosmic dark.”

    The need to shine a light into that cosmic darkness has long inspired astronomers to investigate the wonders that lie beyond our lonely planet. For Debora Sijacki, a reader in astrophysics and cosmology at the University of Cambridge, her curiosity takes the form of simulating galaxies in order to understand their origins.

    A supermassive black hole at the center of a young, star-rich galaxy. SMBHs distort space and light around them, as illustrated by the warped stars behind the black hole. Courtesy NASA/JPL-Caltech.

    “We human beings are a part of our Universe and we ultimately want to understand where we came from,” says Sijacki. “We want to know what is this bigger picture that we are taking part in.”

    Sijacki is the winner of the 2019 PRACE Ada Lovelace Award for HPC for outstanding contributions to and impact on high-performance computing (HPC). Initiated in 2016, the award recognizes female scientists working in Europe who have an outstanding impact on HPC research and who provide a role model for other women.

    Specifically, Sijacki wants to understand the role supermassive black holes (SMBH) play in galaxy formation. These astronomical objects are so immense that they contain mass on the order of hundreds of thousands to even billions of times the mass of the Sun. At the same time they are so compact that, if the Earth were a black hole, it would fit inside a penny.
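
    The penny comparison comes from the Schwarzschild radius, r = 2GM/c^2, the size to which a given mass must be squeezed before it becomes a black hole. A quick back-of-the-envelope check in Python (standard physical constants; nothing here is specific to Sijacki’s simulations):

        # Schwarzschild radius r = 2*G*M/c^2: how small a mass must be to form a black hole.
        G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
        c = 2.998e8          # speed of light, m/s
        M_earth = 5.972e24   # kg
        M_sun = 1.989e30     # kg

        def schwarzschild_radius(mass_kg):
            return 2 * G * mass_kg / c**2

        print(schwarzschild_radius(M_earth) * 1e3)      # ~8.9 mm: an Earth-mass black hole fits on a penny
        print(schwarzschild_radius(4e6 * M_sun) / 1e9)  # ~12 million km for a Sgr A*-scale supermassive black hole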

    The first image of a black hole, Messier 87. Credit: Event Horizon Telescope Collaboration, via NSF and ERC, 4.10.19.

    SMBHs are at the center of many massive galaxies—there’s even one at the center of our own galaxy, the Milky Way. Astronomers theorize that these SMBHs are important not just in their own right but because they affect the properties of the galaxies themselves.

    Sgr A* from ESO VLT

    Sgr A*, the supermassive black hole at the center of the Milky Way. Courtesy NASA’s Chandra X-ray Observatory.

    “What we think happens is that when gas accretes very efficiently and draws close to the SMBH it eventually falls into the SMBH,” says Sijacki. “The SMBH then grows in mass, but at the same time this accretion process is related to an enormous release of energy that can actually change the properties of galaxies themselves.”

    A big universe needs big computing

    To investigate the interplay of these astronomical phenomena, Sijacki and her team create simulations where they can zoom into details of SMBHs while at the same time viewing a large patch of the Universe. This allows them to focus on the physics of how black holes influence galaxies and even larger environments.

    Dark matter density (l) transitioning to gas density (r). Large-scale projection through the Illustris volume at z=0, centered on the most massive galaxy cluster of the Illustris cosmological simulation. Courtesy Illustris Simulation.

    But in order to study something as big as the Universe, you need a big computer. Or several. As a Hubble Fellow at Harvard University, Sijacki accessed HPC resources through XSEDE in the US and PRACE in Europe. She now uses the UK’s National Computing Service DiRAC in combination with PRACE.


    DiRAC is the UK’s integrated supercomputing facility for theoretical modelling and HPC-based research in particle physics, astronomy and cosmology.

    PRACE supercomputing resources

    Hazel Hen, GCS@HLRS, Cray XC40 supercomputer Germany

    JOLIOT CURIE of GENCI Atos BULL Sequana system X1000 supercomputer France

    JUWELS, GCS@FZJ, Atos supercomputer Germany

    MARCONI, CINECA, Lenovo NeXtScale supercomputer Italy

    MareNostrum Lenovo supercomputer of the National Supercomputing Center in Barcelona

    Cray Piz Daint Cray XC50/XC40 supercomputer of the Swiss National Supercomputing Center (CSCS)

    SuperMUC-NG, GCS@LRZ, Lenovo supercomputer Germany

    According to Sijacki, in the 70s, 80s, and 90s, astrophysicists laid the foundations of galaxy formation and developed some of the key ideas that still guide our understanding. But it was soon recognized that these theories needed to be refined—or even refuted.

    “There is only so much we can do with the pen-and-paper approach,” says Sijacki. “The equations we are working on are very complex and we have to solve them numerically. And it’s not just a single physical process, but many different mechanisms that we want to explain. Often when you put different bits of complex physics together, you can’t easily predict the outcome.”

    The other motivation for high-performance computing is the need for higher resolution models. This is because the physics in the real Universe occurs on a vast range of scales.

    “We’re talking about billions and trillions of resolution elements,” says Sijacki. “It requires massive parallel calculations on thousands of cores to evolve this really complex system with many resolution elements.”

    In recent years, high-performance computing resources have become more powerful and more widely available. New architectures and novel algorithms promise even greater efficiency and optimized parallelization.

    Jet feedback from active galactic nuclei. (A) Large-scale image of the gas density centered on a massive galaxy cluster. (B) High-velocity jet launched by the central supermassive black hole. (C) Cold disk-like structure around the SMBH from which the black hole is accreting. (D) 2D Voronoi mesh reconstruction and (E) velocity streamline map of a section of the jet, illustrating the massive increase in spatial resolution achieved by this simulation. Courtesy Bourne, Sijacki, and Puchwein.

    Given these advances, Sijacki projects a near-future where astrophysicists can, for the first time, perform simulations that can consistently track individual stars in a given galaxy and follow that galaxy within a cosmological framework.

    “Full predictive models of the evolution of our Universe is our ultimate goal,” says Sijacki. “We would like to have a theory that is completely predictive, free of ill-constrained parameters, where we can theoretically understand how the Universe was built and how the structures in the Universe came about. This is our guiding star.”

    Awards matter

    When asked about the significance of the award, Sijacki says that she is proud to have her research recognized—and to be associated with the name of Ada Lovelace.

    Perhaps more importantly, the award has already had an immediate effect on the female PhD students and post-docs at Cambridge’s Institute of Astronomy. Sijacki says the recognition motivates the younger generations of female scientists, by showing them that this is a possible career path that leads to success and recognition.

    “I have seen how my winning this award makes them more enthusiastic—and more ambitious,” says Sijacki. “I was really happy to see that.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 10:34 am on April 11, 2019
    Tags: Science Node

    From Science Node: “The end of an era” 

    From Science Node

    10 Apr, 2019
    Alisa Alering

    For the last fifty years, computer technology has been getting faster and cheaper. Now that extraordinary progress is coming to an end. What happens next?

    John Shalf, department head for Computer Science at Berkeley Lab, has a few ideas. He’s going to share them in his keynote at ISC High Performance 2019 in Frankfurt, Germany (June 16-20), but he gave Science Node a sneak preview.

    Moore’s Law is based on Gordon Moore’s 1965 prediction that the number of transistors on a microchip doubles every two years, while the cost is halved. His prediction proved true for several decades. What’s different now?

    Double trouble. From 1965 to 2004, the number of transistors on a microchip doubled every two years while cost decreased. Now that you can’t get more transistors on a chip, high-performance computing is in need of a new direction. Data courtesy Dataquest/Intel.

    The end of Dennard scaling happened in 2004, when we couldn’t crank up the clock frequencies anymore on chips, so we moved to exponentially increasing parallelism in order to continue performance scaling. It was not an ideal solution, but it enabled us to continue some semblance of performance scaling. Now we’ve gotten to the point where we can’t squeeze any more transistors onto the chip.

    If we can’t cram any more transistors onto the chip, then we can’t continue to scale the number of cores as a means to scale performance. And we’ll get no power improvement: with the end of Moore’s Law, in order to get ten times more performance we would need ten times more power in the future. Capital equipment cost won’t improve either, meaning that if I spend $100 million and get a 100 petaflop machine today, then if I spend $100 million ten years from now, I’ll get the same machine.
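
    To make that arithmetic concrete, here is a small Python sketch of the two regimes described above: the historical regime where transistor counts doubled every two years at roughly flat power and cost, and the post-scaling regime where ten times the performance means ten times the power and the same money buys the same machine a decade later. The starting numbers are round values chosen only for illustration.

        # Illustration of the scaling argument, using made-up round numbers.

        def transistors(start_count, years, doubling_period=2):
            """Transistor count after `years`, doubling every `doubling_period` years."""
            return start_count * 2 ** (years / doubling_period)

        # Exponential regime: ~2,300 transistors on a 1971-era chip grows to hundreds
        # of millions by 2004 at roughly similar power and cost per chip.
        print(f"{transistors(2_300, 2004 - 1971):.1e}")   # ~2.1e+08

        # Post-scaling regime: performance per watt no longer improves, so a machine
        # with 10x the performance needs roughly 10x the power, and the same $100M
        # buys about the same 100-petaflop machine ten years later.
        power_megawatts_today = 10          # assumed round figure for a 100 PF system
        print(10 * power_megawatts_today)   # ~100 MW for a 10x (exaflop-class) system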

    That sounds fairly dire. Is there anything we can do?

    There are three dimensions we can pursue: one is new architectures and packaging, another is CMOS transistor replacements using new materials, and the third is new models of computation that are not necessarily digital.

    Let’s break it down. Tell me about architectures.

    John Shalf, of Lawrence Berkeley National Laboratory, wants to consider all options—from new materials and specialization to industry partnerships—when it comes to imagining the future of high-performance computing. Courtesy John Shalf.

    We need to change course and learn from our colleagues in other industries. Our friends in the phone business and in mega data centers are already pointing out the solution. Architectural specialization is one of the biggest sources of improvement in the iPhone. The A8 chip, introduced in 2014, had 29 different discrete accelerators. We’re now at the A11, and it has nearly 40 different discrete hardware accelerators. Future generation chips are slowly squeezing out the CPUs and having special function accelerators for different parts of their workload.

    And for the mega-data center, Google is making its own custom chip. They weren’t seeing the kind of performance improvements they needed from Intel or Nvidia, so they’re building their own custom chips tailored to improve the performance for their workloads. So are Facebook and Amazon. The only people absent from this are HPC.

    With Moore’s Law tapering off, the only way to get a leg up in performance is to go back to customization. The embedded systems world and the ARM ecosystem are an example where, even though the chips are custom, the components—the little circuit designs on those chips—are reusable across many different disciplines. The new commodity is going to be these little IP blocks we arrange on the chip. We may need to add some IP blocks that are useful for scientific applications, but there’s a lot of IP reuse in that embedded ecosystem and we need to learn how to tap into that.

    How do new materials fit in?

    We’ve been using silicon for the past several decades because it is inexpensive and ubiquitous, and has many years of development effort behind it. We have developed an entire scalable manufacturing infrastructure around it, so it continues to be the most cost-effective route for mass-manufacture of digital devices. It’s pretty amazing, to use one material system for that long. But now we need to look at some new transistor that can continue to scale performance beyond what we’re able to wring out of silicon. Silicon is, frankly, not that great of a material when it comes to electron mobility.

    _________________________________________________________
    The Materials Project

    The current pace of innovation is extremely slow because the primary means available for characterizing new materials is to read a lot of papers. One solution might be Kristin Persson’s Materials Project, originally invented to advance the exploration of battery materials.

    By scaling materials computations over supercomputing clusters, research can be targeted to the most promising compounds, helping to remove guesswork from materials design. The hope is that reapplying this technology to also discover better electronic materials will speed the pace of discovery for new electronic devices.

    In 2016, an eight-laboratory consortium was formed to push this idea at the DOE “Big Ideas Summit,” where grass-roots ideas from the labs are presented to the highest levels of DOE leadership. Read the whitepaper and elevator pitch here.

    After the ‘Beyond Moore’s Law’ project was invited back for the 2017 Big Ideas Summit, the DOE created a Microelectronics BRN (Basic Research Needs) Workshop. The initial report from that meeting has been released, and the DOE’s FY20 budget includes a line item for Microelectronics research.
    _________________________________________________________

    The problem is, we know historically that once you demonstrate a new device concept in the laboratory, it takes about ten years to commercialize it. Prior experience has shown a fairly consistent timeline of 10 years from lab to fab. Although there are some promising directions, nobody has demonstrated something that’s clearly superior to silicon transistors in the lab yet. With no CMOS replacement imminent, that means we’re already ten years too late! We need to develop tools and processes to accelerate the pace for discovery of more efficient microelectronic devices to replace CMOS and the materials that make them possible.

    So, until we find a new material for the perfect chip, can we solve the problem with new models of computing? What about quantum computing?

    New models would include quantum and neuromorphic computing. These models expand computing into new directions, but they’re best at computing problems that are done poorly using digital computing.

    I like to use the example of ‘quantum Excel.’ Say I balance my checkbook by creating a spreadsheet with formulas, and it tells me how balanced my checkbook is. If I were to use a quantum computer for that—and it would be many, many, many years in the future where we’d have enough qubits to do it, but let’s just imagine—quantum Excel would be the superposition of all possible balanced checkbooks.

    And a neuromorphic computer would say, ‘Yes, it looks correct,’ and then you’d ask it again and it would say, ‘It looks correct within an 80% confidence interval.’ Neuromorphic is great at pattern recognition, but it wouldn’t be as good for running partial differential equations and computing exact arithmetic.

    We really need to go back to the basics. We need to go back to ‘What are the application requirements?’

    Clearly there are a lot of challenges. What’s exciting about this time right now?

    The Summit supercomputer at Oak Ridge National Laboratory operates at a top speed of 200 petaflops and is currently the world’s fastest computer. But the end of Moore’s Law means that to get 10x that performance in the future, we also would need 10x more power. Courtesy Carlos Jones/ORNL.

    Computer architecture has become very, very important again. The previous era of exponential scaling created a much narrower space for innovation because the focus was general purpose computing, the universal machine. The problems we now face open up the door again to mathematicians and computer architects to collaborate to solve big problems together. And I think that’s very exciting. Those kinds of collaborations lead to really fun, creative, and innovative solutions to important scientific problems worldwide.

    The real issue is that our economic model for acquiring supercomputing systems will be deeply disrupted. Originally, systems were designed by mathematicians to solve important mathematical problems. However, the exponential improvement rates of Moore’s law ensured that the most general purpose machines that were designed for the broadest range of problems would have a superior development budget and, over time, would ultimately deliver more cost-effective performance than specialized solutions.

    The end of Moore’s Law spells the end of general purpose computing as we know it. Continuing with this approach dooms us to modest or even non-existent performance improvements. But the cost of customization using current processes is unaffordable.

    We must reconsider our relationship with industry to re-enable specialization targeted at our relatively small HPC market. Developing a self-sustaining business model is paramount. The embedded ecosystem (including the ARM ecosystem) provides one potential path forward, but there is also the possibility of leveraging the emerging open source hardware ecosystem and even packaging technologies such as Chiplets to create cost-effective specialization.

    We must consider all options for business models and all options for partnerships across agencies or countries to ensure an affordable and sustainable path forward for the future of scientific and technical computing.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 8:13 am on March 14, 2019
    Tags: "Can computing change the world?", Advanced Computing for Social Change, Computing4Change, Science Node

    From Science Node: “Can computing change the world?” 

    From Science Node

    13 Mar, 2019
    Ellen Glover

    Last November, sixteen undergraduate students from around the world came together in Texas to combine their skills and tackle the issue of violence.


    The Computing4Change program brings together undergraduate students for a 48-hour intensive competition to apply computing to urgent social issues. The 2018 topic was “Resisting Cultural Acceptance of Violence.”

    This was part of Computing4Change, a program dedicated to empowering students of all races, genders, and backgrounds to implement change through advanced computing and research.

    The challenge was developed by Kelly Gaither and Rosalia Gomez from the Texas Advanced Computing Center (TACC), and Linda Akli of the Southeastern Universities Research Association.

    Three years ago, as chair of the 2016 XSEDE conference in Miami, Gaither wanted to ensure that she authentically represented students’ voices to other conference attendees. Akli and Gomez led the student programs at the conference, bringing together a large, diverse group of students from Miami and the surrounding area.

    So she asked the students what issues they cared about. “It was shocking that most of the issues had nothing to do with their school life and everything to do with the social conditions that they deal with every day,” Gaither says.

    After that, Gaither, Gomez, and Akli promised that they would start a larger program to give students a platform for the issues they found important. They brought in Ruby Mendenhall from the University of Illinois Urbana-Champaign and Sue Fratkin, a public policy analyst concentrating on technology and communication issues.

    48-hour challenge. The student competitors had only 48 hours to do all of their research and come up with a 30-minute presentation before a panel of judges at the SC18 conference in Dallas, TX. Courtesy Computing4Change.

    Out of that collaboration came Advanced Computing for Social Change, a program that gave students a platform to use computing to investigate hot-button topics like Black Lives Matter and immigration. The inaugural competition was held at SC16 and was supported by the conference and by the National Science Foundation-funded XSEDE project.

    “The students at the SC16 competition were so empowered by being able to work on Black Lives Matter that they actually asked if they could work overnight and do the presentations later the next day,” Gaither says. “They felt like there was more work that needed to be done. I have never before seen that kind of enthusiasm for a given problem.”

    In 2018, Gaither, Gomez, and Akli made some big changes to the program and partnered with the Special Interest Group for High Performance Computing (SIGHPC). As a result of SIGHPC’s sponsorship, the program was renamed Computing4Change. Applications were opened up to national and international undergraduate students to ensure a diverse group of participants.

    “We know that the needle is not shifting with respect to diversity. We know that the pipeline is not coming in any more diverse, and we are losing diverse candidates when they do come into the pipeline,” Gaither says.

    The application included questions about what issues the applicants found important: What topics were they most passionate about and why? How did they see technology fitting into solutions?

    Within weeks, the program received almost 300 applicants for 16 available spots. An additional four students from Chaminade University of Honolulu were brought in to participate in the competition.

    In the months leading up to the conference, Gaither, Gomez, and Akli hosted a series of webinars teaching everything from data analytics to public speaking and understanding differences in personality types.

    All expenses, including flight, hotel, meals, and conference fees were covered for each student. “For some of these kids, this is the first time they’ve ever traveled on an airplane. We had a diverse set of academic backgrounds. For example, we had a student from Yale and a community college student,” says Gaither. “Their backgrounds span the gamut, but they all come in as equals.”

    Although they interacted online, the students didn’t meet in person until they showed up to the conference. That’s when they were assigned to their group of four and the competition topic of violence was revealed. The students had to individually decide what direction to take with the research and how that would mesh with their other group members’ choices.

    “Each of those kids had to have their individual hypothesis so that no one voice was more dominant than the other,” Gaither says. “And then they had to work together to find out what the common theme might be. We worked with them to assist with scope, analytics, and messaging.”

    The teams had 48 hours to do all of their research and come up with a 30-minute presentation for a panel of judges at the SC18 conference in Dallas, TX.

    All mentors stayed with the students, making sure they approached their research from a more personal perspective and worked through any unexpected roadblocks—just like they would have to in a real-world research situation.

    For example, one student wanted to find data on why people leave Honduras and seek asylum in the United States. Little explicit data exists on that topic, but there is data on why people from all countries seek asylum. The mentors encouraged her to look there for correlations.

    “That was a process of really trying to be creative about getting to the answer,” Gaither says. “But that’s life. With real data, that’s life.”

    The Computing4Change mentors also coached the students to analyze their data and present it clearly to the judges. Gaither hopes the students leave the program not only knowing more about advanced computing, but also more aware of their power to effect change. She says it’s easy to teach someone a skill, but it’s much more impactful to help them find a personal passion within that skill.

    “If you’re passionate about something, you’ll stick with it,” Gaither says. “You can plug into very large, complex problems that are relevant to all of us.”

    The next Computing4Change event will be held in Denver, CO, co-located with the SC19 conference, Nov 16-22, 2019. Travel, housing, meals, and SC19 conference registration are covered for the 20 students selected. The application deadline is April 8, 2019. Apply here.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 11:15 am on December 28, 2018
    Tags: Science Node, The cosmos in a computer

    From Science Node: “The cosmos in a computer” 

    From Science Node

    28 Nov, 2018
    Ellen Glover

    How simulated galaxies could bring us one step closer to the origin of our universe.

    Thanks to telescopes like the Hubble and spacecraft like Kepler, we know more than ever about the Milky Way Galaxy and what lies beyond. However, these observations only tell part of the story.

    NASA/ESA Hubble Telescope

    NASA/Kepler Telescope

    How did our incomprehensibly vast universe come to be? What’s it going to look like millions of years from now? These age-old questions are now getting answers thanks to simulations created by supercomputers.

    One of these supercomputers is a Cray XC50, nicknamed ATERUI II and located at the National Astronomical Observatory of Japan (NAOJ).

    NAOJ ATERUI II Cray XC50 supercomputer, located at the National Astronomical Observatory of Japan (NAOJ).

    It is the fastest supercomputer dedicated to astronomy and is ranked #83 among the 500 most powerful supercomputers in the world.

    Named after a prominent 9th century chief, the ATERUI II is located in the same city where Aterui led his tribe in a battle against Emperor Kanmu. Despite the odds, Aterui and his people fought well. Since then, Aterui has become a symbol of intelligence, bravery, and unification.

    100 billion. ATERUI II is able to calculate the mutual gravitational interactions between each of the more than 100 billion stars that make up our galaxy, allowing for the most detailed Milky Way simulation yet. Courtesy National Astronomical Observatory of Japan.

    “We named the supercomputer after him so that our astronomers can be brave and smart. While we are not the fastest in the world, we hope the ATERUI II can be used in a smart way to help unify us so we can better understand the universe,” says Eiichiro Kokubo, project director of the Center for Computational Astrophysics at NAOJ.

    ATERUI II was officially launched last June and serves as a bigger and better version of its decommissioned predecessor, ATERUI. With more than 40,000 processing cores and 385 terabytes of memory, ATERUI II can perform as many as 3 quadrillion operations per second.

    In other words: it’s an incredibly powerful machine that is allowing us to boldly go where no one has ever gone before, from the Big Bang to the death of a star. It’s also exceedingly popular with researchers—150 astronomers are slated to use the supercomputer by the end of the year.

    ATERUI II’s unique power means it is capable of solving problems deemed too difficult for other supercomputers. For example, an attempt to simulate the Milky Way on a different machine meant researchers had to group the stars together in order to calculate their gravitational interactions.

    ATERUI II doesn’t have that problem. It’s able to calculate the mutual gravitational interactions between each of the more than 100 billion stars that make up our galaxy individually, allowing for the most detailed Milky Way Galaxy simulation yet.
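
    The expensive part of such a simulation is the gravitational N-body step: computed directly, every star interacts with every other star, so the work grows as N squared, which is why 100 billion stars is out of reach without either grouping approximations (tree or particle-mesh methods) or a machine on the scale of ATERUI II. The minimal Python sketch below shows the direct pairwise form for a handful of particles; units, softening, and array shapes are simplified for illustration and this is not NAOJ’s production code.

        # Direct-summation N-body accelerations: every particle attracts every other (O(N^2)).
        # Simplified illustration only; real galaxy codes use tree/particle-mesh methods,
        # softening, and carefully chosen units.
        import numpy as np

        G = 6.674e-11   # gravitational constant, SI units

        def accelerations(positions, masses, softening=1e-3):
            """Gravitational acceleration on each particle from all the others."""
            acc = np.zeros_like(positions)
            for i in range(len(positions)):
                diff = positions - positions[i]            # vectors to every other particle
                dist3 = (np.sum(diff**2, axis=1) + softening**2) ** 1.5
                dist3[i] = np.inf                          # no self-force
                acc[i] = G * np.sum(masses[:, None] * diff / dist3[:, None], axis=0)
            return acc

        # Three particles; with N particles this does ~N^2 work per step, which is why
        # 1e11 stars needs approximations or an ATERUI II-class machine.
        pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
        mass = np.array([1e24, 1e22, 1e22])
        print(accelerations(pos, mass))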

    The death of a star a thousand years ago left behind a superdense neutron star that expels extremely high-energy particles. By simulating events like these, ATERUI II gives astronomers insights that can’t be discovered through observation alone. Courtesy NASA/JPL-Caltech/ESA/CXC/Univ. of Ariz./Univ. of Szeged.

    While computational astronomy is a fairly young field, we need it in order to understand the universe beyond just observing celestial bodies. With its superior computational power, Kokubo says there are plans for ATERUI II to simulate everything from Saturn’s rings through binary star formation to the large-scale structure of the universe.

    “If we produce the universe in a computer, then we can use it to simulate the past and the future as well,” Kokubo says. “The universe exists in four dimensions: the first three are space and the last one is time. If we can capture the space, then we can better observe it through time.”

    ATERUI II isn’t only working on ways to better understand the stars and planets that make up the universe; it is also being used to explore the possibility of alien life. This starts with life on Earth.

    “If we can simulate and understand the origin of life on Earth and what it means to be habitable, we will be even closer to finding it elsewhere in the universe,” Kokubo says. “I’m interested in life and why we are here.”

    Kokubo isn’t alone. The mystery of how we came to be and what it all means has fascinated mankind for centuries. Our unknown origins have been explored in great pieces of art and literature throughout history and are at the core of every religion. Now, thanks to ATERUI II, we are one step closer to getting our answer.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 9:39 am on December 13, 2018
    Tags: HPC Spaceborne Computer, Science Node, Spaceborne Computer is first step in helping NASA get humanity to Mars

    From Science Node: “Launching a supercomputer into space” 

    Science Node bloc
    From Science Node

    03 Dec, 2018
    Kevin Jackson

    1
    HPC Spaceborne supercomputer replica.

    Spaceborne Computer is first step in helping NASA get humanity to Mars.

    The world needs more scientists like Dr. Mark Fernandez. His southern drawl and warm personality almost make you overlook the fact that he’s probably forgotten more about high-performance computing (HPC) than you’ll ever know.


    The Spaceborne Computer is currently flying aboard the International Space Station to prove that high-performance computing hardware can survive and operate in outer space conditions. Courtesy HPE.

    Fernandez is the Americas HPC Technology Officer for Hewlett Packard Enterprise (HPE). His current baby is the Spaceborne Computer, a supercomputer that has spent more than a year aboard the International Space Station (ISS).

    In this time, the Spaceborne Computer has run through a gamut of tests to ensure it works like it’s supposed to. Now, it’s a race to accomplish as much as possible before the machine is brought home.

    Computing for the stars

    The Spaceborne Computer’s history extends well before its launch to the ISS. In fact, Fernandez explains that the project began about three years prior.

    “NASA Ames was in a meeting with us in the summer of 2014 and they said that, for a mission to Mars or for a lunar outpost, the distance was so far that they would not be able to continue their mission of supporting the space explorers,” says Fernandez. “And so they just sort of off-handedly said, ‘take part of our current supercomputer and see what it would take to get it operating in space.’ And we took up the challenge.”

    When astronauts send and receive data to and from Earth, that information moves at the speed of light. Aboard the ISS, which orbits about 240 miles (400 kilometers) above Earth, data transmission therefore still happens almost instantaneously. The same won’t be true when humans journey deeper into the cosmos.

    “All science and engineering done here on Earth requires some type of high performance computing to make it function,” says Fernandez. “You don’t want to be 24 minutes away and trying to do your Mars dust storm predictions. You want to be able to take those scientific and engineering computations that are currently done here on Earth and bring them with you.”
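    The delays Fernandez cites are easy to sanity-check. The sketch below uses illustrative distances only (roughly 400 km to the ISS, and roughly 400 million km to Mars near its maximum separation from Earth) to show why low Earth orbit feels instantaneous while Mars is tens of minutes away one way.

```python
# Back-of-the-envelope light-travel delays for the distances mentioned above.
C = 299_792.458  # speed of light in km/s

def one_way_delay_seconds(distance_km: float) -> float:
    """One-way light travel time over a straight-line distance."""
    return distance_km / C

print(f"ISS  (~400 km):         {one_way_delay_seconds(4.0e2) * 1000:.2f} ms")
print(f"Mars (~400 million km): {one_way_delay_seconds(4.0e8) / 60:.1f} minutes")
```

    That works out to roughly 22 minutes one way at an unfavorable Earth-Mars geometry, in line with the figure Fernandez mentions.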

    To get ready for these kinds of tasks, the Spaceborne Computer has spent the past year performing standard benchmarking tests in what Fernandez calls the “acceptance phase.” Now that these experiments are done, it’s time to get interesting.

    The sky’s not the limit

    For traditional supercomputers, powering and cooling the machine often represents a huge cost. This isn’t true in space.

    “The Moderate Temperature Loop (MTL) is how the environment for the human astronauts is maintained at a certain temperature,” says Fernandez. “Our experiments are allowed to tap into that MTL, and that’s where we put our heat. Our heat is then expelled into the coldness of space for free. We have free electricity coming from the solar cells, and we have free cooling from the coldness of space and therefore, by definition, we have the most energy efficient supercomputer in existence anywhere on Earth or elsewhere.”

    The cost-neutral aspect of the Spaceborne Computer allows HPE to give researchers access to the machine for free before it must return to Earth. One of these experiments, announced at SC18, concerns Entry, Descent, and Landing (EDL) software.

    “If you’re going to build a Mars habitat, you need to land carefully,” says Fernandez. “This EDL software runs in real time, it’s connected to the thrusters on the spacecraft, and in real time determines where you are and adjusts your thrusters so that you can land within 50 meters of your target. Now, it’s never been tested in space, and the only place it will ever run is in space. So they’re very excited about getting it to run on the Spaceborne Computer.”
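    The EDL software itself is proprietary flight code, but the closed-loop idea Fernandez describes, measure where you are, compare against a target, and adjust the thrusters in real time, can be sketched in a few lines. The one-dimensional controller below is a toy illustration under assumed values (Mars-like gravity, a made-up thrust limit and descent profile), not the actual EDL algorithm.

```python
# Toy 1-D descent: a proportional controller keeps vertical speed near a
# target profile that slows as altitude decreases. Illustrative only.
def simulate_descent(altitude=1000.0, velocity=-50.0, dt=0.1,
                     g=3.71, max_thrust_acc=10.0, target_speed=-2.0):
    while altitude > 0.0:
        desired = min(target_speed, -0.1 * altitude)          # allowed descent speed
        error = desired - velocity                            # positive if falling too fast
        thrust = min(max_thrust_acc, max(0.0, 2.0 * error))   # clamp to engine limits
        velocity += (thrust - g) * dt                         # integrate the dynamics
        altitude += velocity * dt
    return velocity

print(f"Touchdown speed: {simulate_descent():.2f} m/s")
```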

    While Fernandez is delighted that his machine will be able to test important innovations like this, he seems dismayed by all the science he won’t be able to do. The Spaceborne Computer will soon be brought back home by NASA, and he’s doing what he can to cram in as many important experiments as possible.

    Fernandez’s attitude speaks volumes about the mental outlook we’ll need to traverse the cosmos. He often uses the term “space explorers” in place of “astronauts” or even “researchers.” It’s a term that cuts to the heart of what scientists like him are attempting to do.

    “We’re proud to be good space explorers,” says Fernandez. “I say, let’s all work together. We’ve got free electricity. We have free cooling. Let’s push science as far and as hard as we can.”

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 10:34 am on November 29, 2018 Permalink | Reply
    Tags: , , , Science Node,   

    From Science Node: “The race to exascale” 

    Science Node bloc
    From Science Node

    30 Jan, 2018
    Alisa Alering

    Who will get the first exascale machine – a supercomputer capable of 10^18 floating point operations per second? Will it be China, Japan, or the US?

    1
    When it comes to computing power, you can never have enough. In the last sixty years, processing power has increased more than a trillionfold.

    Researchers around the world are excited because these new, ultra-fast computers represent a 50- to 100-fold increase in speed over today’s supercomputers and promise significant breakthroughs in many areas. That exascale supercomputers are coming is clear; we can even predict the date, most likely in the mid-2020s. The open question is what kind of software will run on these machines.
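    For context, “exa” is simply the next factor of 1,000 beyond “peta.” The quick arithmetic below, using round, illustrative figures, shows where the 50- to 100-fold claim comes from.

```python
EXAFLOPS = 1e18   # floating point operations per second
PETAFLOPS = 1e15

print(f"1 exaFLOPS = {EXAFLOPS / PETAFLOPS:.0f} petaFLOPS")

# Relative to a typical 10-20 petaFLOPS leadership machine of early 2018:
for today_pflops in (10, 20):
    print(f"{1000 / today_pflops:.0f}x faster than a {today_pflops} petaFLOPS system")
```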

    Exascale computing heralds an era of ubiquitous massive parallelism, in which processors perform coordinated computations simultaneously. But the number of processors will be so high that computer scientists will have to constantly cope with failing components.

    The sheer number of processors will also likely slow programs tremendously, because communication and synchronization overheads grow with scale. The consequence is that beyond the exascale hardware, we will also need exascale brains to develop new algorithms and implement them in exascale software.
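    One standard way to cope with constantly failing components is application-level checkpoint/restart: periodically persist the computation’s state so a failed run can resume rather than start over. Below is a minimal, hypothetical sketch of the pattern; the file name and the stand-in workload are invented for illustration.

```python
import os
import pickle

CHECKPOINT = "state.pkl"  # hypothetical checkpoint file

def load_state():
    """Resume from the last checkpoint if one exists, otherwise start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "result": 0.0}

def save_state(state):
    """Write the checkpoint atomically so a crash cannot leave it half-written."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CHECKPOINT)

state = load_state()
while state["step"] < 1_000_000:
    state["result"] += state["step"] * 1e-6   # stand-in for real work
    state["step"] += 1
    if state["step"] % 100_000 == 0:
        save_state(state)                     # bound how much work a failure can destroy
print(state["result"])
```

    Production HPC systems typically layer faster in-memory or burst-buffer checkpoints on top of this basic pattern, but the principle is the same: limit how much work a single failure can cost.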

    In 2011, the German Research Foundation established the priority program “Software for Exascale Computing” (SPPEXA) to address fundamental research on various aspects of high-performance computing (HPC) software, making it the first program of its kind in Germany.

    SPPEXA connects the relevant sub-fields of computer science with the needs of computational science, engineering, and HPC. The program provides a framework for closer cooperation and a co-design-driven approach. This is a shift from the current service-driven model, in which groups working on fundamental HPC methodology (computer science and mathematics) collaborate at arm’s length with those who develop the large application codes (science and engineering).

    Despite exascale computing still being several years away, SPPEXA scientists are well ahead of the game, developing scalable and efficient algorithms that will make the best use of resources when the new machines finally arrive. SPPEXA drives research towards extreme-scale computing in six areas: computational algorithms, system software, application software, data management and exploration, programming, and software tools.

    Some major projects include research on alternative sources of clean energy; stronger, lighter-weight steel manufacturing; and unprecedented simulations of Earth’s convective processes:

    EXAHD supports Germany’s long-standing research into the use of plasma fusion as a clean, safe, and sustainable carbon-free energy source. One of the main goals of the EXAHD project is to develop scalable and efficient algorithms to run on distributed systems, with the aim of facilitating the progress of plasma fusion research.

    EXASTEEL is a massively parallel simulation environment for computational material science. Bringing together experts from mathematics, material and computer sciences, and engineering, EXASTEEL will serve as a virtual laboratory for testing new forms of steel with greater strengths and lower weight.

    TerraNeo addresses the challenges of understanding the convection of Earth’s mantle – the cause of most of our planet’s geological activity, from plate tectonics to volcanoes and earthquakes. Due to the sheer scale and complexity of the models, the advent of exascale computing offers a tremendous opportunity for greater understanding. But in order to take full advantage of the coming resources, TerraNeo is working to design new software with optimal algorithms that permit a scalable implementation.

    Exascale hardware is expected to have less consistent performance than current supercomputers due to fabrication, power, and heat issues. Their sheer size and unprecedented number of components will likely increase fault rates. Fast and Fault-Tolerant Microkernel-based Operating System for Exascale Computing (FFMK) aims to address these challenges through a coordinated approach that connects system software, computational algorithms, and application software.

    Mastering the various challenges related to the paradigm shift from moderately to massively parallel processing will be the key to any future capability computing application at exascale. It will also be crucial for dealing effectively and efficiently with smaller-scale capacity computing tasks on near-future commodity systems. No matter who puts the first machine online, exascale supercomputing is coming. SPPEXA is making sure we are prepared to take full advantage of it.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 9:38 am on November 29, 2018 Permalink | Reply
    Tags: 1. Summit (US), 2. Sierra (US), 3. Sunway TaihuLight (China), 4. Tianhe-2 (China), 5. Piz Daint (Switzerland), , , Science Node, ,   

    From Science Node: “The 5 fastest supercomputers in the world” 

    Science Node bloc
    From Science Node

    Countries around the world strive to reach the peak of computing power–but there can be only one.

    19 Nov, 2018
    11.29.18 update
    Kevin Jackson

    Peak performance within supercomputing is a constantly moving target. In fact, a supercomputer is defined as being any machine “that performs at or near the currently highest operational rate.” The field is a continual battle to be the best. Those who achieve the top rank may only hang on to it for a fleeting moment.

    Competition is what makes supercomputing so exciting, continually driving engineers to reach heights that were unimaginable only a few years ago. To celebrate this amazing technology, let’s take a look at the fastest computers as defined by computer ranking project TOP500—and at what these machines are used for.
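    TOP500 orders systems by their measured High Performance Linpack (HPL) benchmark result. The small sketch below simply reproduces the ordering of the five machines discussed here from the petaFLOPS figures quoted in this article for the November 2018 list.

```python
# Systems and HPL results (petaFLOPS) as quoted in this article.
systems = {
    "Summit (US)": 143.5,
    "Sierra (US)": 94.6,
    "Sunway TaihuLight (China)": 93.01,
    "Tianhe-2 (China)": 61.4,
    "Piz Daint (Switzerland)": 21.2,
}

ranked = sorted(systems.items(), key=lambda item: item[1], reverse=True)
for rank, (name, pflops) in enumerate(ranked, start=1):
    print(f"{rank}. {name}: {pflops} petaFLOPS")
```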

    5. Piz Daint (Switzerland)

    Cray Piz Daint supercomputer of the Swiss National Supercomputing Center (CSCS)

    Named after a mountain in the Swiss Alps, Piz Daint has been Europe’s fastest supercomputer since its debut in November 2013. But a recent 40 million Euro upgrade has boosted the Swiss National Supercomputing Centre’s machine into the global top five, now running at 21.2 petaFLOPS and utilizing 387,872 cores.

    The machine has helped scientists at the University of Basel make discoveries about “memory molecules” in the brain. Other Swiss scientists have taken advantage of its ultra-high resolutions to set up a near-global climate simulation.

    4. Tianhe-2 (China)

    China’s Tianhe-2 Kylin Linux supercomputer at National Supercomputer Center, Guangzhou, China

    Tianhe-2, whose name translates as “MilkyWay-2,” has also seen recent upgrades. But despite now boasting a whopping 4,981,760 cores and running at 61.4 petaFLOPS, the machine has slipped two spots in just one year, from #2 to #4.

    TOP500 reported that the machine, developed by the National University of Defense Technology (NUDT) in China, is intended mainly for government security applications. This means that much of the work done by Tianhe-2 is kept secret, but if its processing power is anything to judge by, it must be working on some pretty important projects.

    3. Sunway TaihuLight (China)

    Sunway NRCPC TaihuLight, China, US News

    A former number one, Sunway TaihuLight had dominated the list since its debut in June 2016. At that time, its 93.01 petaFLOPS and 10,649,000 cores made it the world’s most powerful supercomputer by a wide margin, boasting more than five times the processing power of its nearest competitor (ORNL’s Titan) and nearly 19 times more cores.

    But given the non-stop pace of technological advancement, no position is ever secure for long. TaihuLight ceded the top spot to competitors in June 2018.

    Located at the National Supercomputing Center in Wuxi, China, TaihuLight’s creators are using the supercomputer for tasks ranging from climate science to advanced manufacturing. It has also found success in marine forecasting, helping ships avoid rough seas while also helping with offshore oil drilling.

    2. Sierra (US)

    LLNL IBM NVIDIA Mellanox ATS-2 Sierra Supercomputer

    Sierra initially debuted at #3 on the June 2018 list with 71.6 petaFLOPS, but optimization has since pushed the processing speed on its 1,572,480 cores to 94.6 petaFLOPS, earning it the #2 spot in November 2018.

    Incorporating both IBM central processing units (CPUs) and NVIDIA graphics processing units (GPUs), Sierra is specifically designed for modeling and simulations essential for the US National Nuclear Security Administration.

    1. Summit (US)

    ORNL IBM AC922 SUMMIT supercomputer. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    Showing further evidence of the US Department of Energy’s renewed commitment to supercomputing power, Oak Ridge National Laboratory’s (ORNL) Summit first claimed the #1 spot in June 2018, taking the top rank from China for the first time in 6 years. Further upgrades have cemented that spot—at least until the next list comes out in June 2019.

    In the five months since its debut on the June 2018 list, Summit has widened its lead as the number one system, improving its High Performance Linpack (HPL) performance from 122.3 to 143.5 petaFLOPS.

    Scientists are already putting the world’s most powerful computer to work. A seven-member team from ORNL won the 2018 Gordon Bell Prize for their deployment of Summit to process genetic data in order to better understand how individuals develop chronic pain and respond to opioids.

    The race to possess the most powerful supercomputer never really ends. This friendly competition between countries has propelled a boom in processing power, and it doesn’t look like it’ll be slowing down anytime soon. With scientists using supercomputers for important projects such as curing debilitating diseases, we can only hope it will continue for years to come. [Whoever thinks this is a “friendly competition between countries” is way off base. This is a part of the Chinese route to world dominance]

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition


     