Tagged: Science Node Toggle Comment Threads | Keyboard Shortcuts

  • richardmitnick 1:35 pm on July 5, 2018 Permalink | Reply
    Tags: Science Node, Some of our biggies, , University at Buffalo’s Center for Computational Research, XD Metrics on Demand (XDMoD) tool from U Buffalo   

    From Science Node: “Getting the most out of your supercomputer” 

    Science Node bloc
    From Science Node

    02 Jul, 2018
    Kevin Jackson

    No image caption or credit.

    As the name implies, supercomputers are pretty special machines. Researchers from every field seek out their high-performance capabilities, but time spent using such a device is expensive. As recently as 2015, it took the same amount of energy to run Tianhe-2, the world’s second-fastest supercomputer [now #4], for a year as it did to power a town of 13,501 people in Mississippi.

    China’s Tianhe-2 Kylin Linux TH-IVB-FEP supercomputer at National Supercomputer Center, Guangzhou, China
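
    As a rough sanity check on that energy comparison, here is a back-of-the-envelope sketch in Python. The ~17.8 MW power draw for Tianhe-2 and the US per-capita electricity figure are our assumptions, not numbers taken from the article.

        # Back-of-the-envelope check of the Tianhe-2 energy comparison.
        # Assumptions (not from the article): Tianhe-2 draws roughly 17.8 MW,
        # and total US electricity use averages about 12 MWh per person per year.
        power_mw = 17.8
        hours_per_year = 8760
        supercomputer_mwh = power_mw * hours_per_year      # ~156,000 MWh per year
        town_population = 13_501
        per_capita_mwh = 12.0
        town_mwh = town_population * per_capita_mwh        # ~162,000 MWh per year
        print(f"Tianhe-2: ~{supercomputer_mwh:,.0f} MWh/yr; town: ~{town_mwh:,.0f} MWh/yr")

    The two figures land in the same ballpark, which is all the comparison claims.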

    And that’s not to mention the initial costs associated with purchase, as well as salaries for staff to help run and support the machine. Supercomputers are kept incredibly busy by their users, often oversubscribed, with thousands of jobs in the queue waiting for others to finish.

    With computing time so valuable, managers of supercomputing centers are always looking for ways to improve performance and speed throughput for users. This is where Tom Furlani and his team at the University at Buffalo’s Center for Computational Research, come in.

    Thanks to a grant from the National Science Foundation (NSF) in 2010, Furlani and his colleagues have developed the XD Metrics on Demand (XDMoD) tool, to help organizations improve production on their supercomputers and better understand how they are being used to enable science and engineering.

    “XDMoD is an incredibly useful tool that allows us not only to monitor and report on the resources we allocate, but also provides new insight into the behaviors of our researcher community,” says John Towns, PI and Project Director for the Extreme Science and Engineering Discovery Environment (XSEDE).

    Canary in the coal mine

    Modern supercomputers are complex combinations of compute servers, high speed networks, and high performance storage systems. Each of these areas is a potential point of under performance or even outright failure. Add system software and the complexity only increases.

    With so much that can go wrong, a tool that can identify problems or poor performance as well as monitor overall usage is vital. XDMoD aims to fulfill that role by performing three functions:

    1. Job accounting – XDMoD provides metrics about utilization, including who is using the system and how much, what types of jobs are running, plus length of wait times, and more.

    2. Quality of service – The complex mechanisms behind HPC often mean that managers and support personnel don’t always know if everything is working correctly—or they lack the means to ensure that it is. All too often this results in users serving as “canaries in the coal mine” who identify and alert admins only after they’ve discovered an issue.

    To solve this, XDMoD launches application kernels daily that provide baseline performances for the cluster in question. If these kernels show that something that should take 30 seconds is now taking 120, support personnel know they need to investigate. XDMoD’s monitoring of the Meltdown and Spectre patches is a perfect example—the application kernels allowed system personnel to quantify the effects of the patches put in place to mitigate the chip vulnerabilities.

    3. Job-level performance – Much like job accounting, job-level performance zeroes in on usage metrics. However, this task focuses more on how well users’ codes are performing. XDMoD can measure the performance of every single job, helping users to improve the efficiency of their job or even figure out why it failed.
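
    The quality-of-service checks described in point 2 above can be pictured with a short sketch: run each application kernel daily, compare its runtime against a stored baseline, and flag large slowdowns. This is a hypothetical illustration, not XDMoD’s actual code; the kernel names and threshold are made up.

        # Hypothetical daily application-kernel regression check (not XDMoD code).
        BASELINE_SECONDS = {"io_kernel": 30.0, "mpi_kernel": 45.0}   # made-up baselines
        SLOWDOWN_THRESHOLD = 2.0   # flag anything at least 2x slower than baseline

        def check_kernels(todays_runtimes):
            """Return (kernel, baseline, today) for every kernel that regressed."""
            flagged = []
            for name, seconds in todays_runtimes.items():
                baseline = BASELINE_SECONDS.get(name)
                if baseline and seconds / baseline >= SLOWDOWN_THRESHOLD:
                    flagged.append((name, baseline, seconds))
            return flagged

        # Example: the 30-second kernel from the article now takes 120 seconds.
        for name, base, now in check_kernels({"io_kernel": 120.0, "mpi_kernel": 44.0}):
            print(f"ALERT: {name} regressed from {base:.0f}s to {now:.0f}s")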

    Furlani also expects that XDMoD will soon include a module to help quantify the return on investment (ROI) for these expensive systems, by tying usage of the supercomputer to its users’ external research funding.

    Thanks to its open-source code, XDMoD’s reach extends to commercial, governmental, and academic supercomputing centers worldwide, including centers in England, Spain, Belgium, Germany, and many other countries.

    Future features

    In 2015, the NSF awarded the University at Buffalo a follow-on grant to continue work on XDMoD. Among other improvements, the project will include cloud computing metrics. Cloud use is growing all the time, and jobs run in the cloud call for very different metrics.

    Who’s that user? XDMoD’s customizable reports help organizations better understand how their computing resources are being used to enable science and engineering. This graph depicts the allocation of resources delivered by supporting funding agency. Courtesy University at Buffalo.

    For the average HPC job, Furlani explains that the process starts with a researcher requesting resources, such as how many processors and how much memory they need. But in the cloud, a virtual machine may stop running and then start again. What’s more, a cloud-based supercomputer can increase and decrease cores and memory. This makes tracking performance more challenging.

    “Cloud computing has a beginning, but it doesn’t necessarily have a specific end,” Furlani says. “We have to restructure XDMoD’s entire backend data warehouse to accommodate that.”
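
    One way to picture the accounting problem Furlani describes is to bill an elastic cloud job per interval, since the core count can change while the job runs. The sketch below is a minimal illustration with made-up numbers, not XDMoD’s backend.

        # Minimal core-hour accounting for an elastic cloud job (illustrative only).
        # Each interval records (hours, cores); the core count can grow or shrink.
        intervals = [
            (2.0, 16),   # 2 hours on 16 cores
            (1.5, 64),   # scaled up to 64 cores
            (3.0, 8),    # scaled back down to 8 cores
        ]
        core_hours = sum(hours * cores for hours, cores in intervals)
        print(f"Total usage: {core_hours} core-hours")   # 32 + 96 + 24 = 152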

    Regardless of where XDMoD goes next, tools like this will continue to shape and redefine what supercomputers can accomplish.

    Some of our biggies:

    ORNL IBM AC922 SUMMIT supercomputer. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    No. 1 in the world.

    LLNL SIERRA IBM supercomputer

    No. 3 in the world

    ORNL Cray XK7 Titan Supercomputer

    No. 7 in the world

    NERSC Cray Cori II supercomputer at LBNL, named after Gerty Cori, the first American woman to win a Nobel Prize in science

    No. 10 in the world

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 11:18 am on June 3, 2018 Permalink | Reply
    Tags: , , , Science Node,   

    From Science Node: “Full speed ahead” 

    Science Node bloc
    From Science Node

    23 May, 2018
    Kevin Jackson

    US Department of Energy recommits to the exascale race.


    The US was once a leader in supercomputing, having created the first high-performance computer (HPC) in 1964. But as of November 2017, TOP500 ranked Titan, the fastest American-made supercomputer, only fifth on its list of the most powerful machines in the world. In contrast, China holds the first and second spots by a whopping margin.

    ORNL Cray Titan XK7 Supercomputer

    Sunway TaihuLight, China

    Tianhe-2 supercomputer China

    But it now looks like the US Department of Energy (DoE) is ready to commit to taking back those top spots. In a CNN opinion article, Secretary of Energy Rick Perry proclaims that “the future is in supercomputers,” and we at Science Node couldn’t agree more. To get a better understanding of the DoE’s plans, we sat down for a chat with Under Secretary for Science Paul Dabbar.

    Why is it important for the federal government to support HPC rather than leaving it to the private sector?

    A significant amount of the Office of Science and the rest of the DoE has had and will continue to have supercomputing needs. The Office of Science produces tremendous amounts of data like at Argonne, and all of our national labs produce data of increasing volume. Supercomputing is also needed in our National Nuclear Security Administration (NNSA) mission, which fulfills very important modeling needs for Department of Defense (DoD) applications.

    But to Secretary Perry’s point, we’re increasingly seeing a number of private sector organizations building their own supercomputers based on what we had developed and built a few generations ago that are now used for a broad range of commercial purposes.

    At the end of the day, we know that a secondary benefit of this push is that we’re providing the impetus for innovation within supercomputing.

    We assist the broader American economy by helping to support science and technology innovation within supercomputing.

    How are supercomputers used for national security?

    The NNSA arm, which is one of the three major arms of the three Under Secretaries here at the department, is our primary area of support for the nation’s defense. And as various testing treaties came into play over time, having the computing capacity to conduct proper testing and security of our stockpiled weapons was key. And that’s why if you look at our three exascale computers that we’re in the process of executing, two of them are on behalf of the Office of Science and one of them is on behalf of the NNSA.

    One of these three supercomputers is the Aurora exascale machine currently being built at Argonne National Laboratory, which Secretary Perry believes will be finished in 2021. Where did this timeline come from, and why Argonne?

    Argonne National Laboratory ALCF

    ANL ALCF Cetus IBM supercomputer

    ANL ALCF Theta Cray supercomputer

    ANL ALCF MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    Depiction of ANL ALCF Cray Shasta Aurora supercomputer

    There was a group put together across different areas of DoE, primarily the Office of Science and NNSA. When we decided to execute on building the next wave of top global supercomputers, an internal consortium named the Collaboration of Oak Ridge, Argonne, and Livermore (CORAL) was formed.

    That consortium developed exactly how to fund the technologies, how to issue requests, and what the target capabilities for the machines should be. The 2021 timeline was based on the CORAL group, the labs, and the consortium in conjunction with the Department of Energy headquarters here, the Office of Advanced Computing, and ultimately talking with the suppliers.

    The reason Argonne was selected for the first machine was that they already have a leadership computing facility there. They have a long history of other machines of previous generations, and they were already in the process of building out an exascale machine. So they were already looking at architecture issues, talking with Intel and others on what could be accomplished, and taking a look at how they can build on what they already had in terms of their capabilities and physical plant and user facilities.

    Why now? What’s motivating the push for HPC excellence at this precise moment?

    A lot of this is driven by where the technology is and where the capabilities are for suppliers and the broader HPC market. We’re part of a constant dialogue with the Nvidias, Intels, IBMs, and Crays of the world in what we think is possible in terms of the next step in supercomputing.

    Why now? The technology is available now, and the need is there for us considering the large user facilities coming online across the whole of the national lab complex and the need for stronger computing power.

    The history of science, going back to the late 1800s and early 1900s, was about competition along strings of types of research, whether it was chemistry or physics. If you take any of the areas of science, including high-performance computing, anything that’s being done by anyone out there along any of these strings causes us all to move along. However, we at the DoE believe America must and should be in the lead of scientific advances across all different areas, and certainly in the area of computing.

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 1:07 pm on May 18, 2018 Permalink | Reply
    Tags: China's Sunway TaihuLight- the world's fastest supercomputer, Science Node,   

    From Science Node: “What puts the super in supercomputer?” 

    Science Node bloc
    From Science Node

    No image caption or credit.

    The secret behind supercomputing? More of everything.

    14 May, 2018
    Kevin Jackson

    We’ve come a long way since MITS developed the first personal computer in 1974, which was sold as a kit that required the customer to assemble the machine themselves. Jump ahead to 2018, and around 77% of Americans currently own a smartphone, and nearly half of the global population uses the internet.


    Superpowering science. Faster processing speeds, extra memory, and super-sized storage capacity are what make supercomputers the tools of choice for many researchers.

    The devices we keep at home and in our pockets are pretty advanced compared to the technology of the past, but they can’t hold a candle to the raw power of a supercomputer.

    The capabilities of the HPC machines we talk about so often here at Science Node can be hard to conceptualize. That’s why we’re going to lay it all out for you and explain how supercomputers differ from the laptop on your desk, and just what it is these machines need all that extra performance for.

    The need for speed

    Computer performance is measured in FLOPS, which stands for floating-point operations per second. The more FLOPS a computer can process, the more powerful it is.

    You’ve come a long way, baby. The first personal computer, the Altair 8800, was sold in 1974 as a mail-order kit that users had to assemble themselves.

    For example, look to the Intel Core i9 Extreme Edition processor designed for desktop computers. It has 18 cores, or processing units that take in tasks and complete them based on received instructions.

    This single chip is capable of one trillion floating point operations per second (i.e., 1 teraFLOP)—as fast as a supercomputer from 1998. You don’t need that kind of performance to check email and surf the web, but it’s great for hardcore gamers, livestreaming, and virtual reality.

    Modern supercomputers use similar chips, memory, and storage as personal computers, but instead of a few processors they have tens of thousands. What distinguishes supercomputers is scale.

    China’s Sunway TaihuLight, which is currently the fastest supercomputer in the world, boasts 10,649,600 cores with a maximum measured performance of 93,014.6 teraFLOPS.

    Sunway TaihuLight, China

    Theoretically, the Sunway TaihuLight is capable of reaching 125,436 teraFLOPS of performance—more than 125 thousand times faster than the Intel Core i9 Extreme Edition processor. And it ‘only’ cost around ¥1.8 billion ($270 million), compared to the Intel chip’s price tag of $1,999.
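
    For readers who like to see the arithmetic, the speed and price ratios above work out as follows. This is just a check of the figures quoted in the article.

        # Quick check of the performance and cost figures quoted above.
        i9_teraflops, i9_price_usd = 1.0, 1_999              # Intel Core i9 Extreme Edition
        taihulight_teraflops, taihulight_price_usd = 125_436, 270_000_000
        print(f"Speed ratio: ~{taihulight_teraflops / i9_teraflops:,.0f}x")   # ~125,436x
        print(f"Price ratio: ~{taihulight_price_usd / i9_price_usd:,.0f}x")   # ~135,068x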

    Don’t forget memory

    A computer’s memory holds information while the processor is working on it. When you’re playing Fortnite, your computer’s random-access memory (RAM) stores and updates the speed and direction in which you’re running.

    Most people will get by fine with 8 to 16 GB of RAM. Hardcore gamers generally find that 32GB of RAM is enough, but computer aficionados that run virtual machines and perform other high-end computing tasks at home or at work will sometimes build machines with 64GB or more of RAM.

    ____________________________________________________
    What is a supercomputer used for?

    Climate modeling and weather forecasts
    Computational fluid dynamics
    Genome analysis
    Artificial intelligence (AI) and predictive analytics
    Astronomy and space exploration
    ____________________________________________________

    The Sunway TaihuLight once again squashes the competition with around 1,310,600 GB of memory to work with. This means the machine can hold and process an enormous amount of data at the same time, which allows for large-scale simulations of complex events, such as the devastating 1976 earthquake in Tangshan.

    Even a smaller supercomputer, such as the San Diego Supercomputer Center’s Comet, has 247 terabytes of memory—nearly 4000 times that of a well-equipped laptop.
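
    The “nearly 4000 times” figure follows directly from the numbers quoted, assuming a well-equipped laptop carries about 64 GB of RAM (the laptop figure is our assumption):

        # Rough memory comparison; the 64 GB laptop figure is an assumption.
        comet_memory_gb = 247 * 1000       # 247 TB, counted in decimal gigabytes
        laptop_memory_gb = 64
        ratio = comet_memory_gb / laptop_memory_gb
        print(f"Comet holds roughly {ratio:,.0f}x a well-equipped laptop's RAM")   # ~3,859x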

    SDSC Dell Comet supercomputer at San Diego Supercomputer Center (SDSC)

    Major multitasking

    Another advantage of supercomputers is their ability to excel at parallel computing, which is when two or more processors run simultaneously and divide the workload of a task, reducing the time it takes to complete.

    Personal computers have limited parallel ability. But since the 1990s, most supercomputers have used massively parallel processing, in which thousands of processors attack a problem simultaneously. In theory this is great, but there can be problems.

    Someone (or something) has to decide how the task will be broken up and shared among the processors. But some complex problems don’t divide easily. One task may be processed quickly, but then must wait on a task that’s processed more slowly. The practical, rather than theoretical, speeds of supercomputers depend on this kind of task management.
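
    A toy sketch of that load-balancing point: when the work is split evenly, runtime drops by the processor count, but a single slow chunk drags the whole job back. The numbers are purely illustrative.

        # Illustrative only: a parallel job finishes when its slowest worker does.
        def parallel_runtime(chunk_times):
            """Each worker gets one chunk; the job ends when the last one finishes."""
            return max(chunk_times)

        serial_time = 40.0
        balanced   = [10.0, 10.0, 10.0, 10.0]   # evenly divided work
        unbalanced = [4.0, 4.0, 4.0, 28.0]      # same total work, one hard chunk
        print(serial_time / parallel_runtime(balanced))     # 4.0x speedup
        print(serial_time / parallel_runtime(unbalanced))   # ~1.4x speedup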

    Super powers for super projects

    You might now be looking at your computer in disappointment, but the reality is that unless you’re studying volcanoes or sequencing the human genome, you simply don’t need that kind of power.

    The truth is, many supercomputers are shared resources, processing data and solving equations for multiple teams of researchers at the same time. It’s rare for a scientist to use a supercomputer’s entire capacity just for one project.

    So while a top-of-the-line machine like the Sunway TaihuLight leaves your laptop in the dirt, take heart that personal computers are getting faster all the time. But then, so are supercomputers. With each step forward in speed and performance, HPC technology helps us unlock the mysteries of the universe around us.

    Read more:

    The 5 fastest supercomputers in the world
    The race to exascale
    3 reasons why quantum computing is closer than ever

    See the full article here .

    Please help promote STEM in your local schools.
    stem

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 12:15 pm on April 26, 2018 Permalink | Reply
    Tags: , , , Science Node,   

    From Science Node: “Autism origins in junk DNA” 

    Science Node bloc
    Science Node

    [This post is dedicated to all of my readers whose lives and children have been affected by Autism in all of its many forms.]

    25 Apr, 2018
    Scott LaFee
    Jan Zverina

    Genes inherited from both parents contribute to development of autism in children.

    Courtesy Unsplash/Brittany Simuangco.

    One percent of the world’s population lives with autism spectrum disorder (ASD), and the prevalence is increasing by around ten percent each year. Though there is no obvious straight line between autism and any single gene, genetics and inherited traits play an important role in development of the condition.

    In recent years, researchers have firmly established that gene mutations appearing for the first time, called de novo mutations, contribute to approximately one-third of cases of autism spectrum disorder (ASD).

    Early symptoms. Children with ASD may avoid eye contact, have delayed speech, and fail to demonstrate interest. Courtesy Unsplash.

    In a new study [Science], an international team led by scientists at the University of California San Diego (UCSD) School of Medicine has identified a culprit that may explain some of the remaining risk: rare inherited variants in regions of non-coding DNA.

    The newly discovered risk factors differ from known genetic causes of autism in two important ways. First, these variants do not alter the genes directly but instead disrupt the neighboring DNA control elements that turn genes on and off, called cis-regulatory elements or CREs. Second, these variants do not occur as new mutations in children with autism, but instead are inherited from their parents.

    “For ten years we’ve known that the genetic causes of autism consist partly of de novo mutations in the protein sequences of genes,” said Jonathan Sebat, a professor of psychiatry, cellular and molecular medicine and pediatrics at UCSD School of Medicine and chief of the Beyster Center for Genomics of Psychiatric Diseases. “However, gene sequences represent only 2 percent of the genome.”

    ____________________________________________________

    Autism facts

    Autism affects 1 in 68 children
    Boys are four times more likely than girls to have autism
    Symptoms usually appear before age 3
    Autism varies greatly; no two people with autism are alike
    There is currently no cure for autism
    Early intervention is key to successful treatment
    ____________________________________________________

    To investigate the other 98 percent of the genome in ASD, Sebat and his colleagues analyzed the complete genomes of 9,274 subjects from 2,600 families. One thousand genomes were sequenced in San Diego at Human Longevity Inc. (HLI) and at Illumina Inc.

    DNA sequences were analyzed with the Comet supercomputer at the San Diego Supercomputer Center (SDSC).

    SDSC Dell Comet supercomputer at San Diego Supercomputer Center (SDSC)

    These data were then combined with other large studies from the Simons Simplex Collection and the Autism Speaks MSSNG Whole Genome Sequencing Project.

    “Whole genome sequence data processing and analysis are both computationally and resource intensive,” said Madhusudan Gujral, an analyst with SDSC and co-author of the paper.

    Using SDSC’s Comet, processing and identifying specific structural variants from a single genome took about 2.5 days.

    “Since Comet has 1,984 compute nodes and several petabytes of scratch space for analysis, tens of genomes can be processed at the same time,” added SDSC scientist Wayne Pfeiffer. “Instead of months, with Comet we were able to complete the data processing in weeks.”
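
    Pfeiffer’s “months versus weeks” point is simple throughput arithmetic. The sketch below assumes the batch of 1,000 genomes mentioned above and a few hypothetical concurrency levels; the article does not give the actual scheduling details.

        # Rough throughput sketch; the concurrency levels are assumptions.
        import math

        days_per_genome = 2.5
        genomes = 1000                       # the batch sequenced in San Diego
        for concurrent in (1, 10, 40):       # genomes processed at the same time
            total_days = math.ceil(genomes / concurrent) * days_per_genome
            print(f"{concurrent:>3} at a time: ~{total_days:,.0f} days")
        # 1 at a time: ~2,500 days; 10 at a time: 250 days; 40 at a time: ~62 days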

    The researchers then analyzed structural variants (deleted or duplicated segments of DNA that disrupt regulatory elements of genes), dubbed CRE-SVs. From the complete genomes of families, the researchers found that CRE-SVs inherited from parents also contributed to ASD.


    HPC for the 99 percent. The Comet supercomputer at SDSC meets the needs of underserved researchers in domains that have not traditionally relied on supercomputers to help solve problems. Courtesy San Diego Supercomputer Center.

    “We also found that CRE-SVs were inherited predominantly from fathers, which was a surprise,” said co-first author William M. Brandler, PhD, a postdoctoral scholar in Sebat’s lab at UCSD and bioinformatics scientist at HLI.

    “Previous studies have found evidence that some protein-coding variants are inherited predominantly from mothers, a phenomenon known as a maternal origin effect. The paternal origin effect we see for non-coding variants suggests that the inherited genetic contribution from mothers and fathers may be qualitatively different.”

    Sebat said current research does not explain with certainty what mechanism determines these parent-of-origin effects, but he has proposed a plausible model.

    “There is a wide spectrum of genetic variation in the human population, with coding variants having strong effects and noncoding variants having weaker effects,” he said. “If men and women differ in their capacity to tolerate such variants, this could give rise to the parent-of-origin effects that we see.”

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 6:37 am on April 12, 2018 Permalink | Reply
    Tags: , , , Burçin Mutlu-Pakdil, Burçin’s Galaxy - PGC 1000714, Carnegie’s Las Campanas Observatory Chile over 2500 m (8200 ft) high, , , Science Node, ,   

    From Science Node: Women in STEM – “Burçin’s galaxy” Burçin Mutlu-Pakdil 

    Science Node bloc
    Science Node

    30 Mar, 2018
    Ellen Glover

    Burçin Mutlu-Pakdil

    As a little girl growing up in Turkey, Burçin Mutlu-Pakdil loved the stars.


    Burçin’s galaxy, AKA PGC 1000714, is a unique, double-ringed, Hoag-type galaxy exhibiting features never observed before. Courtesy North Carolina Museum of Natural Sciences.

    “How is it possible not to fall in love with stars?” wonders Mutlu-Pakdil. “I find it very difficult not to be curious about the Universe, about the Milky Way and how everything got together. I really want to learn more. I love my job because of that.”

    Young or old? The object’s blue outer ring suggests it may have formed more recently than the center.

    Her job is at The University of Arizona’s Steward Observatory, one of the world’s premier astronomy facilities, where she works as a postdoctoral astrophysics research associate.

    U Arizona Steward Observatory at Kitt Peak, AZ, USA, altitude 2,096 m (6,877 ft)

    Just a few years ago, while earning her Ph.D. at the University of Minnesota, Mutlu-Pakdil and her colleagues discovered PGC 1000714, a galaxy with qualities so rare they’ve never been observed anywhere else. For now, it’s known as Burçin’s Galaxy.

    The object was originally detected by Patrick Treuthardt, who was observing a different galaxy when he spotted it in the background. It caught the astronomers’ attention because of an initial resemblance to Hoag’s Object, a rare galaxy known for its yellow-orange center surrounded by a detached outer ring.

    “Our object looks very similar to Hoag’s Object. It has a very symmetric central body with a very symmetric outer ring,” explains Mutlu-Pakdil. “But my work showed that there is actually a second ring on this object. This makes it much more complex.”

    Through extensive imaging and analysis, Mutlu-Pakdil found that, unlike Hoag’s Object, this new galaxy has two rings with no visible materials attaching them, a phenomenon not seen before. It offered the first-ever observation and description of a double-ringed elliptical galaxy.

    Eye on the universe. Sophisticated instruments like the 8.2 meter optical-infrared Subaru Telescope on the summit of Mauna Kea in Hawaii allow astronomers to peer ever further into the stars–and into the origins of the universe.


    NAOJ/Subaru Telescope at Mauna Kea, Hawaii, USA, 4,207 m (13,802 ft) above sea level

    Since spotting the intriguing galaxy, Mutlu-Pakdil and her team have evaluated it in several ways. They initially observed it via the Irénée du Pont two-meter telescope at the Las Campanas Observatory in Chile. And they recently captured infrared images with the Magellan 6.5-meter telescope, also at Las Campanas.


    Carnegie Las Campanas du Pont telescope, Atacama Desert, over 2,500 m (8,200 ft) high, approximately 100 kilometres (62 mi) northeast of the city of La Serena, Chile

    Carnegie 6.5 meter Magellan Baade and Clay Telescopes located at Carnegie’s Las Campanas Observatory, Chile, over 2,500 m (8,200 ft) high

    The optical images reveal that the components of Burçin’s Galaxy have different histories. Some parts of the galaxy are significantly older than others. The blue outer ring suggests a newer formation, while the red inner ring indicates the presence of older stars.

    Mutlu-Pakdil and her colleagues suspect that this galaxy was formed as some material accumulated into one massive object through gravitational attraction, AKA an accretion event.

    However, beyond that, PGC 1000714’s unique qualities largely remain a mystery. There are about three trillion galaxies in our observable universe and more are being found all the time.

    “In such a vast universe, finding these rare objects is really important,” says Mutlu-Pakdil. “We are trying to create a complete picture of how the Universe works. These peculiar systems challenge our understanding. So far, we don’t have any theory that can explain the existence of this particular object, so we still have a lot to learn.”

    Challenging norms and changing lives

    In a way, Mutlu-Pakdil has been challenging the norms of science all her life.

    Though her parents weren’t educated beyond elementary school, they supported her desire to pursue her dreams of the stars.

    “When I was in college, I was the only female in my class, and I remember I felt so much like an outsider. I felt like I wasn’t fitting in,” she recalls of her time studying physics at Bilkent University in Ankara, Turkey.

    Bilkent University

    Astronomical ambassador. Mutlu-Pakdil believes in sharing her fascination for space and works to encourage students from all backgrounds to explore astronomy and other STEM fields.

    Throughout her education and career, Mutlu-Pakdil has experienced being a minority in an otherwise male-dominated field. It hasn’t slowed her down, but it has made her more passionate about promoting diversity in science and being a mentor to young people.

    “I realized, it is not about me, it is society that needs to change,” she says. “Now I really want to inspire people to do similar things. So kids from all backgrounds will be able to understand they can do science, too.”

    That’s why she serves as an ambassador for the American Astronomical Society and volunteers to mentor children in low-income neighborhoods to encourage them to pursue college and, hopefully, a career in STEM.

    She was also recently selected to be a 2018 TED Fellow and will present a TED talk about her discoveries and career on April 10.

    Through her work, Mutlu-Pakdil hopes to show people how important it is to learn about our universe. It behooves us all to take an interest in the night sky and the groundbreaking discoveries being made by astronomers like her around the world.

    “We are a part of this Universe, and we need to know what is going on in it. We have strong theories about how common galaxies form and evolve, but, for rare ones, we don’t have much information,” says Mutlu-Pakdil. “Those unique objects present the extreme cases, so they really give us a big picture for the Universe’s evolution — they stretch our understanding of everything.”

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 6:50 am on March 29, 2018 Permalink | Reply
    Tags: , , , , , , , Science Node,   

    From Science Node: “CERN pushes back the frontiers of physics” 

    Science Node bloc
    Science Node

    27 Mar, 2018
    Maria Girone
    CERN openlab Chief Technology Officer

    “Researchers at the European Organization for Nuclear Research (CERN) are probing the fundamental structure of the universe. They use the world’s largest and most complex scientific machines to study the basic constituents of matter — the fundamental particles.

    These particles are made to collide at close to the speed of light. This process gives physicists clues about how the particles interact, and provides insights into the laws of nature.

    CERN is home to the Large Hadron Collider (LHC), the world’s most powerful particle accelerator.

    LHC

    CERN/LHC Map

    CERN LHC Tunnel

    CERN LHC particles

    It consists of a 27km ring of superconducting magnets, combined with accelerating structures to boost the energy of the particles prior to the collisions. Special detectors — similar to large, 3D digital cameras built in cathedral-sized caverns —observe and record the results of these collisions.

    One billion collisions per second

    Up to about 1 billion particle collisions can take place every second inside the LHC experiments’ detectors. It is not possible to examine all of these events. Hardware and software filtering systems are used to select potentially interesting events for further analysis.

    Even after filtering, the CERN data center processes hundreds of petabytes (PB) of data every year. Around 150 PB are stored on disk at the site in Switzerland, with over 200 PB on tape — the equivalent of about 2,000 years of HD video.

    Physicists must sift through the 30-50 PB of data produced annually by the LHC experiments to determine if the collisions have revealed any interesting physics. The Worldwide LHC Computing Grid (WLCG), a distributed computing infrastructure arranged in tiers, gives a community of thousands of physicists near-real-time access to LHC data.
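
    As a sense of scale, the “2,000 years of HD video” comparison above can be reproduced with a rough bitrate assumption (the ~25 Mbit/s HD figure is ours, not CERN’s):

        # Rough check of "200 PB on tape is about 2,000 years of HD video".
        # The HD bitrate is an assumption (roughly Blu-ray-quality video).
        tape_bytes = 200e15                        # 200 PB
        hd_bitrate_bits_per_second = 25e6          # ~25 Mbit/s
        seconds_of_video = tape_bytes * 8 / hd_bitrate_bits_per_second
        years_of_video = seconds_of_video / (3600 * 24 * 365)
        print(f"~{years_of_video:,.0f} years of HD video")   # roughly 2,000 years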

    Power up. The planned upgrades to the Large Hadron Collider. Image courtesy CERN.

    With 170 computing centers in 42 countries, the WLCG is the most sophisticated data-taking and analysis system ever built for science. It runs more than two million jobs per day.

    The LHC has been designed to follow a carefully planned program of upgrades. The LHC typically produces particle collisions for a period of around three years (known as a ‘run’), followed by a period of about two years for upgrade and maintenance work (known as a ‘long shutdown’).

    The High-Luminosity Large Hadron Collider (HL-LHC), scheduled to come online around 2026, will crank up the performance of the LHC and increase the potential for discoveries. The higher the luminosity, the more collisions, and the more data the experiments can gather.

    An increased rate of collision events means that digital reconstruction becomes significantly more complex. At the same time, the LHC experiments plan to employ new, more flexible filtering systems that will collect a greater number of events.

    This will drive a huge increase in computing needs. Using current software, hardware, and analysis techniques, the estimated computing capacity required would be around 50-100 times higher than today. Data storage needs are expected to be on the order of exabytes by this time.

    Technology advances over the next seven to ten years will likely yield an improvement of approximately a factor of ten in both the amount of processing and storage available at the same cost, but will still leave a significant resource gap. Innovation is therefore vital; we are exploring new technologies and methodologies together with the world’s leading information and communications technology (ICT) companies.
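
    The size of that remaining gap follows directly from the estimates above; a quick calculation on the quoted figures:

        # The computing gap implied by the estimates above.
        required_factor = (50, 100)     # capacity needed relative to today
        technology_factor = 10          # expected gain at constant cost over 7-10 years
        low, high = (r / technology_factor for r in required_factor)
        print(f"Remaining shortfall: roughly {low:.0f}x to {high:.0f}x")   # 5x to 10x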

    Tackling tomorrow’s challenges today

    CERN openlab works to develop and test the new ICT techniques that help to make groundbreaking physics discoveries possible. Established in 2001, the unique public-private partnership provides a framework through which CERN collaborates with leading companies to accelerate the development of cutting-edge technologies.

    My colleagues and I have been busy working to identify the key challenges that will face the LHC research community in the coming years. Last year, we carried out an in-depth consultation process, involving workshops and discussions with representatives of the LHC experiments, the CERN IT department, our collaborators from industry, and other ‘big science’ projects.

    Based on our findings, we published the CERN openlab white paper on future ICT challenges in scientific research. We identified 16 ICT challenge areas, grouped into major R&D topics that are ripe for tackling together with industry collaborators.

    In data-center technologies, we need to ensure that data-center architectures are flexible and cost effective and that cloud computing resources can be used in a scalable, hybrid manner. New technologies for solving storage capacity issues must be thoroughly investigated, and long-term data-storage systems should be reliable and economically viable.

    We also need modernized code to ensure that maximum performance can be achieved on the new hardware platforms. Successfully translating the huge potential of machine learning into concrete solutions will play a role in monitoring the accelerator chain, optimizing the use of IT resources, and even hunting for new physics.

    Several IT challenges are common across research disciplines. With ever more research fields adopting methodologies driven by big data, it’s vital that we collaborate with research communities such as astrophysics, biomedicine, and Earth sciences.

    As well as sharing tools and learning from one another’s experience, working together to address common challenges can increase our ability to ensure that leading ICT companies are producing solutions that meet our common needs.

    These challenges must be tackled over the coming years in order to ensure that physicists across the globe can exploit CERN’s world-leading experimental infrastructure to its maximum potential. We believe that working together with industry leaders through CERN openlab can play a key role in overcoming these challenges, for the benefit of both the high-energy physics community and wider society.”

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 11:33 am on February 14, 2018 Permalink | Reply
    Tags: 1. Sunway TaihuLight (China), 2. Tianhe-2 (China), 3. Piz Daint (Switzerland), 4. Gyoukou (Japan), 5. Titan (United States), Science Node, The 5 fastest supercomputers in the world   

    From Science Node: “The 5 fastest supercomputers in the world” 

    Science Node bloc
    Science Node

    Peak performance within supercomputing is a constantly moving target. In fact, a supercomputer is defined as any machine “that performs at or near the currently highest operational rate.” The field is a continual battle to be the best. Those who achieve the top rank may only hang on to it for a fleeting moment.

    Competition is what makes supercomputing so exciting, continually driving engineers to reach heights that were unimaginable only a few years ago. To celebrate this amazing technology, let’s take a look at the fastest computers as defined by computer ranking project TOP500 — and at what these machines are used for.

    5. Titan (United States)

    ORNL Cray XK7 Titan Supercomputer

    Built by Cray, Oak Ridge National Laboratory’s Titan is the follow-up to the company’s 2005 Jaguar supercomputer. Like Jaguar, Titan is unique due to its reliance on both CPUs and GPUs. According to Cray, GPUs can handle more calculations at a time than CPUs, allowing the GPUs to do “the heavy lifting”.

    Cray hopes to get Titan’s performance up to 20 petaFLOPS, but TOP500 clocked the machine at 17.59 petaFLOPS in November 2017. For reference, 17.59 petaFLOPS is equal to 17,590 trillion calculations per second. The machine also has 299,008 CPU cores and 261,632 GPU cores.

    What’s more, this machine’s power is being put to good use. The S3D project focuses on modeling the physics behind combustion, which might give researchers the ability to create biofuel surrogates for gasoline. Another project called Denovo is working to find ways to increase efficiency within nuclear reactors. And a team at Brown University is using the supercomputer to model sickle cell disease, hoping to devise better treatments for a disease that affects around 100,000 Americans.

    4. Gyoukou (Japan)

    Titan nearly finished last year as the fourth-fastest computer in the world, but Japan’s Gyoukou stole that spot in November. Created by ExaScaler and PEZY Computing, this machine is currently housed at the Japan Agency for Marine-Earth Science and Technology. The machine reportedly has 19,860,000 cores and runs at speeds of up to 19.14 petaFLOPS.

    Japan Agency for Marine-Earth Science and Technology ExaScaler Gyoukou supercomputer

    Gyoukou is an extremely new system, presented to the public for the first time at SC17 in November. This, combined with the fact that PEZY’s president was arrested for fraud on December 4, 2017, means that the machine hasn’t had much time to prove its usefulness with real world projects. However, Gyoukou is incredibly energy efficient, with a power efficiency of 14.17 gigaFLOPS per watt.

    3. Piz Daint (Switzerland)

    Cray Cray XC30 Piz Daint supercomputer of the Swiss National Supercomputing Center (CSCS)

    Named after a mountain in the Swiss Alps, Piz Daint is the Swiss National Supercomputing Centre’s contribution to the field, running at 19.59 petaFLOPS and utilizing 361,760 cores.

    The machine has helped scientists at the University of Basel make discoveries about “memory molecules” in the brain. Other Swiss scientists have taken advantage of its ultra-high resolutions to set up a near-global climate simulation.

    2. Tianhe-2 (China)

    Tianhe-2 supercomputer China

    If supercomputing were a foot race, China would be a dot on the horizon compared to the rest of the competitors. Years of hard work and research enabled the country to grab the top two spots, with Tianhe-2 coming in second. The name translates as “MilkyWay-2,” and it’s much more powerful than Piz Daint, boasting a whopping 3,120,000 cores and running at 33.86 petaFLOPS.

    Developed by the National University of Defense Technology (NUDT) in China, TOP500 reported that the machine is intended mainly for government security applications. This means that much of the work done by Tianhe-2 is kept secret, but if its processing power is anything to judge by, it must be working on some pretty important projects.

    1. Sunway TaihuLight (China)


    When it comes to supercomputing, no other machine can touch the Sunway TaihuLight. Its processing power exceeds 93.01 petaFLOPS and it relies on 10,649,600 cores, making it the strongest supercomputer in the world by a wide margin. That’s more than five times the processing power of Titan and nearly 19 times more cores.

    Located at the National Supercomputing Center in Wuxi, China, TaihuLight’s creators are using the supercomputer for tasks ranging from climate science to advanced manufacturing. It has also found success in marine forecasting, helping ships avoid rough seas while also helping with offshore oil drilling.

    The race to possess the most powerful supercomputer never really ends. This friendly competition between countries has propelled a boom in processing power, and it doesn’t look like it’ll be slowing down anytime soon. With scientists using supercomputers for important projects such as curing debilitating diseases, we can only hope it will continue for years to come.

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 8:05 am on October 19, 2017 Permalink | Reply
    Tags: , , For the first time researchers could calculate the quantitative contributions from constituent quarks gluons and sea quarks –– to nucleon spin, Nucleons — protons and neutrons — are the principal constituents of the atomic nuclei, Piz Daint super computer, Quarks contribute only 30 percent of the proton spin, Science Node, , Theoretical models originally assumed that the spin of the nucleon came only from its constituent quarks, To calculate the spin of the different particles in their simulations the researchers consider the true physical mass of the quarks   

    From Science Node: “The mysterious case of Piz Daint and the proton spin puzzle” 

    Science Node bloc
    Science Node

    10 Oct, 2017 [Better late than…]
    Simone Ulmer

    Nucleons — protons and neutrons — are the principal constituents of atomic nuclei. Those particles in turn are made up of yet smaller elementary particles: their constituent quarks and gluons.

    Each nucleon has its own intrinsic angular momentum, or spin. Knowing the spin of elementary particles is important for understanding physical and chemical processes. University of Cyprus researchers may have solved the proton spin puzzle – with a little help from the Piz Daint supercomputer.

    Cray Piz Daint supercomputer of the Swiss National Supercomputing Center (CSCS)

    Proton spin crisis

    Spin is responsible for a material’s fundamental properties, such as phase changes in non-conducting materials that suddenly turn them into superconductors at very low temperatures.

    Inside job. Artist’s impression of what the proton is made of. The quarks and gluons contribute to give exactly half the spin of the proton. The question of how is it done and how much each contributes has been a puzzle since 1987. Courtesy Brookhaven National Laboratory.

    Theoretical models originally assumed that the spin of the nucleon came only from its constituent quarks. But then in 1987, high-energy physics experiments conducted by the European Muon Collaboration precipitated what came to be known as the ‘proton spin crisis’: experiments performed at the European Organization for Nuclear Research (CERN), the Deutsches Elektronen-Synchrotron (DESY), and the Stanford Linear Accelerator Center (SLAC) showed that quarks contribute only 30 percent of the proton spin.

    LHC

    CERN/LHC Map

    CERN LHC Tunnel

    CERN LHC particles

    DESY

    DESY Belle II detector

    DESY European XFEL

    DESY Helmholtz Centres & Networks

    DESY Nanolab II


    SLAC

    SLAC Campus

    SLAC/LCLS

    SLAC/LCLS II

    Since then, it has been unclear what other effects are contributing to the spin, and to what extent. Further high-energy physics studies suggested that quark-antiquark pairs, with their short-lived intermediate states, might be in play here – in other words, purely relativistic quantum effects.

    Thirty years later, these mysterious effects have finally been accounted for in the calculations performed on CSCS supercomputer Piz Daint by a research group led by Constantia Alexandrou of the Computation-based Science and Technology Research Center of the Cyprus Institute and the Physics Department of the University of Cyprus in Nicosia. That group also included researchers from DESY-Zeuthen, Germany, and from the University of Utah and Temple University in the US.

    For the first time, researchers could calculate the quantitative contributions from constituent quarks, gluons, and sea quarks to nucleon spin. (Sea quarks are a short-lived intermediate state of quark-antiquark pairs inside the nucleon.) With their calculations, the group made a crucial step towards solving the puzzle that brought on the proton spin crisis.
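
    For reference, the quantity being decomposed is the nucleon’s total angular momentum. In one commonly used decomposition (the article itself does not spell out the formula), it reads

        \[
          J_N \;=\; \tfrac{1}{2}
          \;=\; \sum_q \left( \tfrac{1}{2}\,\Delta\Sigma_q + L_q \right) + J_g ,
        \]

    where \( \tfrac{1}{2}\Delta\Sigma_q \) is the intrinsic spin carried by quarks and antiquarks of flavor \( q \) (valence and sea), \( L_q \) is their orbital angular momentum, and \( J_g \) is the total angular momentum carried by gluons. These are exactly the terms the simulations quantify.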

    To calculate the spin of the different particles in their simulations, the researchers consider the true physical mass of the quarks.

    “This is a numerically challenging task, but of essential importance for making sure that the values of the used parameters in the simulations correspond to reality,” says Karl Jansen, lead scientist at DESY-Zeuthen and project co-author.

    The strong [interaction] acting here, which is transmitted by the gluons, is one of the four fundamental forces of physics. The strong [interaction] is indeed strong enough to prevent the removal of a quark from a proton. This property, known as confinement, results in huge binding energy that ultimately holds together the nucleon constituents.

    The researchers used the mass of the pion, a so-called meson consisting of an up quark and a down antiquark (the ‘light quarks’), to fix the masses of the up and down quarks to the physical values entering the simulations. If the mass of the pion calculated from the simulation corresponds with the experimentally determined value, then the researchers consider that the simulation is done with the actual physical values for the quark mass.

    And that is exactly what Alexandrou and her team have achieved in their research recently published in Physical Review Letters.

    Their simulations also took into account the valence quarks (constituent quarks), sea quarks, and gluons. The researchers used the lattice theory of quantum chromodynamics (lattice QCD) to calculate this sea of particles and their QCD interactions [ETH Zürich].

    Elaborate conversion to physical values

    The biggest challenge with the simulations was to reduce statistical errors in calculating the ‘spin contributions’ from sea quarks and gluons, says Alexandrou. “In addition, a significant part was to carry out the renormalisation of these quantities.”

    Spin cycle. Composition of the proton spin among the constituent quarks (blue and purple columns with the lines), sea quarks (blue, purple, and red solid columns) and gluons (green column). The errors are shown by the bars. Courtesy Constantia Alexandrou.

    In other words, they had to convert the dimensionless values determined by the simulations into a physical value that can be measured experimentally – such as the spin carried by the constituent and sea quarks and the gluons that the researchers were seeking.

    Alexandrou’s team is the first to have achieved this computation including gluons, for which they had to calculate millions of the ‘propagators’ describing how quarks move between two points in space-time.

    “Making powerful supercomputers like Piz Daint open and available across Europe is extremely important for European science,” notes Jansen.

    “Simulations as elaborate as this were possible only thanks to the power of Piz Daint,” adds Alexandrou.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 11:23 am on October 9, 2017 Permalink | Reply
    Tags: , , , , , , Science Node,   

    From Science Node: “US Coalesces Plans for First Exascale Supercomputer: Aurora in 2021” 

    Science Node bloc
    Science Node

    September 27, 2017
    Tiffany Trader

    ANL ALCF Cray Aurora supercomputer

    At the Advanced Scientific Computing Advisory Committee (ASCAC) meeting, in Arlington, Va., yesterday (Sept. 26), it was revealed that the “Aurora” supercomputer is on track to be the United States’ first exascale system. Aurora, originally named as the third pillar of the CORAL “pre-exascale” project, will still be built by Intel and Cray for Argonne National Laboratory, but the delivery date has shifted from 2018 to 2021 and target capability has been expanded from 180 petaflops to 1,000 petaflops (1 exaflop).


    The fate of the Argonne Aurora “CORAL” supercomputer has been in limbo since the system failed to make it into the U.S. DOE budget request, while the same budget proposal called for an exascale machine “of novel architecture” to be deployed at Argonne in 2021.

    Until now, the only official word from the U.S. Exascale Computing Project was that Aurora was being “reviewed for changes and would go forward under a different timeline.”

    Officially, the contract has been “extended,” and not cancelled, but the fact remains that the goal of the Collaboration of Oak Ridge, Argonne, and Lawrence Livermore (CORAL) initiative to stand up two distinct pre-exascale architectures was not met.

    According to sources we spoke with, a number of people at the DOE are not pleased with the Intel/Cray (Intel is the prime contractor, Cray is the subcontractor) partnership. It’s understood that the two companies could not deliver on the 180-200 petaflops system by next year, as the original contract called for. Now Intel/Cray will push forward with an exascale system that is some 50x larger than any they have stood up.

    It’s our understanding that the cancellation of Aurora was not a DOE budgetary measure, as has been speculated, and that the DOE and Argonne wanted Aurora. Although it was referred to as an “interim” or “pre-exascale” machine, the scientific and research community was counting on that system, was eager to begin using it, and regarded it as a valuable system in its own right. The non-delivery is seen as disruptive to those communities.

    Another question: since Intel/Cray failed to deliver Aurora and have moved on to a larger exascale system contract, why hasn’t their original CORAL contract been cancelled and put out again for bid?

    With increased global competitiveness, it seems that the DOE stakeholders did not want to further delay the non-IBM/Nvidia side of the exascale track. Conceivably, they could have done a rebid for the Aurora system, but that would leave them with an even bigger gap if they had to spin up a new vendor/system supplier to replace Intel and Cray.

    Starting the bidding process over would have delayed progress toward exascale – it might even have been the death knell for reaching exascale by 2021. Instead, Intel and Cray now have a giant performance leap to make and three years to do it. There is also an open question on the processor front, as the retooled Aurora will not be powered by Phi/Knights Hill as originally proposed.

    These events raise the question of whether the IBM-led effort – IBM/Nvidia/Mellanox – is looking very good by comparison. The other CORAL thrusts — Summit at Oak Ridge and Sierra at Lawrence Livermore — are on track, with Summit several weeks ahead of Sierra, although it looks like neither will make the cut-off for entry onto the November Top500 list, as many had speculated they would.

    ORNL IBM Summit supercomputer depiction

    LLNL IBM Sierra supercomputer

    We reached out to representatives from Cray, Intel and the Exascale Computing Project (ECP) seeking official comment on the revised Aurora contract. Cray and Intel declined to comment and we did not hear back from ECP by press time. We will update the story as we learn more.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition


     
  • richardmitnick 11:48 am on October 4, 2017 Permalink | Reply
    Tags: , , Science Node, , Women once powered the tech industry: Can they do it again?   

    From Science Node: “Women once powered the tech industry: Can they do it again?” 

    Science Node bloc
    Science Node

    02 Oct, 2017
    Alisa Alering

    As women enter a field, compensation tends to decline. Is the tech meritocracy a lie?


    Marie Hicks wants us to think about how gender and sexuality influence technological progress and why diversity might matter more in tech than in other fields.

    An assistant professor of history at the University of Wisconsin-Madison, Hicks studies the history of technological progress and the global computer revolution.

    In Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing, Hicks discusses how Britain undermined its early success in computation after World War II by neglecting its trained technical workforce — at that time largely composed of women.

    We had a few questions for Hicks about what lessons Britain’s past mistakes might hold for the seemingly unstoppable economic engine that is Silicon Valley today.

    ‘Technical’ used to be associated with low status, less-skilled work, but now tech jobs are seen as high-status. How did the term evolve?

    In the UK, the class system was such that touching a machine in any way, even if it was an office computer, was seen as lower-class. For a time, there was enormous resistance to doing anything technical by the white men who were in the apex position of society.

    The US had less of that sort of built-in bias against technical work, but there was still the assumption that if you were working with a machine, the machine was doing most of the work. You were more of a tender or a minder—you were pushing buttons.

    The change resulted from a very intentional, concerted push from people inside these nascent fields to professionalize and raise the status of their jobs. All of these professional bodies that we have today, the IEEE and so on, were created in this period. They were helped along by the fact that this is difficult work, and there was a lot of call for it, leading to persistent shortages of people who could do the work.

    We’re in an interesting moment, when these professions are at their peak, and now we’re starting to see them decline in importance and remuneration. More and more, people are hired into jobs that are broken down in ways that require less skill or less training. New college hires are brought into them and the turnover is such that people no longer have the guarantee of a career.

    Will diversity initiatives, rather than elevating women, devalue the status of the field, as happened previously in professions like teaching and librarianship?

    We can see that already happening for certain subfields. Women are pushed into areas like quality assurance rather than what would be considered higher-level, more important, infrastructural engineering positions. The jobs require, in many cases, identical skills, and yet those subfields are paid less and have a lower status.

    The discrepancies are very much linked to the fact that there are a higher proportion of women doing the work. It’s a cycle: High pay and high status professions usually become more male-dominated. If that changes and more women enter the field, pay declines. The perception of the field changes, even if the work remains the same.

    Does the tech industry have a greater problem with structural inequality, or is the conversation just more visible?

    The really significant thing about tech is that it’s so powerful. It’s becoming the secondary branch of our government at this point. That’s why it’s so critical to look at lack of diversity in Silicon Valley.

    There’s just so much at stake in terms of who has the power to decide how we live, how we die, how we’re governed, just the entire shape of our lives.

    How do you suggest we tackle the problem?

    There’s this whole myth of meritocracy that attempts to solve the problem of diversity in STEM through the pipeline model — that, essentially, if we get enough white women and people of color into the beginning end of the pipeline, they’ll come out the other end as captains of industry who are in a position to make real changes in these fields.

    But, of course, what happens is that they just leak out of the pipeline, because stuffing more and more people into a discriminatory system in an attempt to fix it doesn’t work.

    If you want more women and people of color in management, you have to hire them into those higher positions. You have to get people to make lateral moves from other industries. You have to promote people rather than saying, “Oh, you come in at the bottom level, and you somehow prove yourself.” It’s not going to be possible to get people to the top in that way.

    What we’ve seen is decades and decades where people have been kept at the bottom after they come in at the bottom. We have to have a real disruption in how we think about these jobs and how we hire for them. We can’t just do the same old thing but try to add more women and more people of color into the mix.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition


     