Tagged: Science Node

  • richardmitnick 10:54 am on November 15, 2018 Permalink | Reply
    Tags: "Searching for ocean microbes", Bermuda Atlantic Time Series, Cyverse, DNA Databank of Japan, European Bioinformatics Institute, Hawaiian Ocean Time Series, Hurwitz Lab-University of Arizona, iMicrobe platform, National Center for Biotechnology Information, National Microbiome Collaborative, Planet Microbe, Science Node, The Hurwitz Lab corrals big data sets into a more searchable form to help scientists study microorganisms

    From Science Node: “Searching for ocean microbes” 

    From Science Node

    07 Nov, 2018
    Susan McGinley

    How one lab is consolidating ocean data to track climate change.

    Courtesy David Clode/Unsplash.

    Scientists have been making monthly observations of the physical, biological, and chemical properties of the ocean since 1988. Now, thanks to the Hurwitz Lab at the University of Arizona (UA), researchers around the world have greater access than ever before to the information collected at these remote ocean sites.


    Led by Bonnie Hurwitz, assistant professor of biosystems engineering at UA, the Hurwitz Lab corrals big data sets into a more searchable form to help scientists study microorganisms – bacteria, fungi, algae, viruses, protozoa – and how they relate to each other, their hosts and the environment.

    Sample collection. Bonnie Hurwitz next to the metal pod that serves as the main chamber for the Alvin submersible that scientists operate to collect samples from the deepest parts of the ocean not accessible to people. Courtesy Stefan Sievert, Woods Hole Oceanographic Institution.

    The lab is building a data infrastructure on top of Cyverse to integrate information from diverse data stores, in collaboration with the broader cyberinfrastructure community. The goal is to give people the ability to use data sets that span a range of storage servers, all in one place.

    “One of the exciting things my lab is funded for is Planet Microbe, a three-year project through the National Science Foundation (NSF), to bring together genomic and environmental data sets coming from ocean research cruises,” Hurwitz said.

    “Samples of water are taken using an instrument called a CTD that measures salinity, temperature, depth, and other features to create a scan of ocean conditions across the water column.”

    As the CTD descends into the ocean, bottles are triggered at different depths to collect water samples for a variety of experiments including sequencing the DNA/RNA of microbes. The moment each sample leaves the ship is often the last time these valuable and varied data appear together.

    The first phase of the project focuses on the Hawaiian Ocean Time Series and the Bermuda Atlantic Time Series. At both locations, samples are collected along an ocean transect at a variety of depths, from the surface to the deep ocean.

    A CTD device that measures water conductivity (salinity), temperature and depth is mounted underneath a set of water bottles used for collecting samples at varying depths in a column of water. Courtesy Tara Clemente, University of Hawaii.

    The readings taken at each level stream out to data banks around the world. Different labs conduct the analyses, but the Hurwitz lab reunites all of the data sets, including data from these long-term ecological sites used for monitoring climate and changes in the oceans.
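    In practice, that reuniting step amounts to joining tables on shared sample identifiers. The sketch below is a hypothetical illustration in Python (the file and column names are invented for the example and are not Planet Microbe's actual schema) of how environmental readings from a CTD cast can be rejoined with sequencing-sample metadata:

    import pandas as pd

    # Hypothetical inputs: one table of CTD readings, one table of sequenced samples.
    ctd = pd.read_csv("ctd_cast.csv")            # sample_id, depth_m, temperature_c, salinity_psu
    seqs = pd.read_csv("sequence_samples.csv")   # sample_id, run_accession, collection_date

    # Reunite the environmental context with each sequenced sample.
    combined = seqs.merge(ctd, on="sample_id", how="left")
    combined.to_csv("samples_with_context.csv", index=False)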

    “Oceanographers have different tool kits. They are collecting data on ship to observe both the ocean environment and the genetics of microbes to understand the role they play in the ocean,” Hurwitz said. “We are including these data in a very simple web-based platform where users can run their own analyses and data pipelines to use the data in new ways.”

    While still in year one of the project, the first data have just been released under the iMicrobe platform, which connects users with computational resources for analyzing and visualizing the data.

    The platform’s bioinformatics tools let researchers analyze the data in ways that may not have been possible when the data were originally collected, and compare these global ocean data sets with new data as they become available.

    “We’re plumbers, actually, creating the pipelines between the world’s oceanographic data sets. We’re trying to enable scientists to access data from the world’s oceans,” Hurwitz said.

    A larger mission

    In addition to their Planet Microbe work, Hurwitz and her team work with the three entities that store and sync all of the world’s “omics” (genomics, proteomics) data: the European Bioinformatics Institute, the National Center for Biotechnology Information, and the DNA Databank of Japan.

    “We are working with the National Microbiome Collaborative, a national effort to bring together the world’s data in the microbiome sciences, from human to ocean and everything in between,” Hurwitz said.

    “Having those data sets captured and searchable is great,” said Hurwitz. “They are so big they can’t be housed in any one place. The infrastructure allows you to search across these areas.”

    Going deep. Hurwitz and Amy Apprill, associate scientist at Woods Hole Oceanographic Institution, in front of the human-piloted Alvin submersible. Deep-water samples are collected using the pod’s robotic arm because the pressure of the water is too intense for divers. Courtesy Stefan Sievert, Woods Hole Oceanographic Institution.

    “If we want to start looking at things together in a holistic manner, we need to be able to remotely access data that are not on our servers. We are essentially indexing the world’s data and becoming a search engine for microbiome sciences.”

    By reconnecting ‘omics data with environmental data from oceanographic cruises, Hurwitz and her team are speeding up discoveries into environmental changes affecting the marine microbes that are responsible for producing half the air that we breathe.

    These data can be used in the future to predict how our oceans respond to change and to specific environmental conditions.

    “Our researchers can not only use a $30 million supercomputer at XSEDE (Extreme Science and Engineering Discovery Environment) supported by the NSF for running analyses, they also have access to modern big data architectures through a simple computer interface.”

    “We’re trying to understand where all the data are and how we can sync them,” Hurwitz said. “How data are structured and assembled together has been like the Wild West. We’re figuring it out.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 11:09 am on November 1, 2018 Permalink | Reply
    Tags: Abigail Hsu, Harshitha Menon, Laura Stephey, Margaret Lawson, More women more science, Science Node, Tess Bernard, Who says women don't like science?,   

    From Science Node: Women in STEM- “Who says women don’t like science? From renewable energy to big data, these five women are making a difference with advanced computing.”

    From Science Node

    31 Oct, 2018
    Alisa Alering


    From renewable energy to big data, these five women are making a difference with advanced computing.

    There’s a misconception out there that women don’t like science. Or computers.

    But let’s not forget that it was Ada Lovelace who kicked off the computer era in 1843 when she outlined a sequence of operations for solving mathematical problems with Charles Babbage’s Analytical Engine. Or that up through the 1960s, women actually were the computers and the primary programmers.

    Times have changed, but women’s contributions to computing haven’t. So to correct some mistaken ideas, here are five cool things women are doing with high-performance computing.

    Speeding up our understanding of the Universe

    The Dark Energy Spectroscopic Instrument (DESI) survey will make the largest, most-detailed 3D map of the Universe ever created and help scientists better understand dark energy. Every night for 5 years, DESI will take images of the night sky that will be used to construct a 3D map spanning the nearby universe to 11 billion light years.

    LBNL/DESI Dark Energy Spectroscopic Instrument for the Nicholas U. Mayall 4-meter telescope at Kitt Peak National Observatory near Tucson, Ariz, USA


    NOAO/Mayall 4 m telescope at Kitt Peak, Arizona, USA, Altitude 2,120 m (6,960 ft)

    But in order for that map to be made, images from the telescope must be processed by the Cori supercomputer.

    NERSC Cray Cori II supercomputer at NERSC at LBNL, named after Gerty Cori, the first American woman to win a Nobel Prize in science

    Laura Stephey, a postdoctoral fellow at Lawrence Berkeley National Lab (LBNL), is optimizing data processing for the DESI experiment so that results can be returned to researchers overnight, in time to plan their next night of observation.

    Developing fusion as a renewable energy source


    Model power source. University of Texas student Tess Bernard is developing computer simulations to model the physics of plasmas in order to design successful fusion experiments. Courtesy Kurzgesagt.

    Plasma is the fourth state of matter, made up of energetic, charged particles. Fusion happens when two light elements, like hydrogen, fuse together to form a heavier element, such as helium, and give off a lot of energy. This process happens naturally in stars like our sun, but scientists are working to recreate this in a lab.

    Tess Bernard, a graduate student at the University of Texas at Austin, is developing computer simulations to model the physics of plasmas in order to help design successful fusion experiments. Says Bernard, “If we can successfully harness fusion energy on earth, we can provide a clean, renewable source of energy for the world.”

    Dealing with big data

    Modern scientific computing addresses a wide variety of real-world problems, from developing efficient fuels to predicting extreme weather. But these applications produce immense volumes of data which are cumbersome to store, manage, and explore.

    Which is why Margaret Lawson, a PhD student at the University of Illinois at Urbana-Champaign and Sandia National Laboratories, is creating a system that allows scientists working with massive amounts of data to tag and search specific data. This makes it easier for scientists to make discoveries, since the most interesting data are highlighted for further analysis.

    Preparing for exascale

    Exascale computing will represent a 50- to 100-fold increase in speed over today’s supercomputers and promises significant breakthroughs in many areas. But to reach these speeds, exascale machines will be massively parallel, and applications must be able to perform on a wide variety of architectures.

    Abigail Hsu, a PhD student at Stony Brook University, is investigating how different approaches to parallel optimization impact the performance portability of unstructured mesh Fortran codes. She hopes this will encourage the development of Fortran applications for exascale architectures.

    Sanity-checking simulations

    Computers make mistakes. And sometimes those failures have serious consequences. Like during the Gulf War, when an American missile failed to intercept an incoming Iraqi Scud. The Scud struck a barracks, killing 28 soldiers and injuring roughly 100 others. A report attributed the failure to computer arithmetic error – specifically, a small error of 0.34 seconds in the system’s internal clock.
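    That 0.34-second figure follows from simple arithmetic. The system counted time in tenths of a second, and the truncated binary value it used for 0.1 was off by roughly 0.000000095 seconds per tick; over the roughly 100 hours of continuous operation cited in reports on the incident, that tiny error grew large enough to matter. A back-of-the-envelope check in Python (illustrative only, not the missile's actual code):

    error_per_tick = 0.000000095      # seconds lost on each 0.1 s tick (truncated binary 0.1)
    ticks = 100 * 60 * 60 * 10        # tenths of a second in ~100 hours of uptime
    print(f"accumulated clock error: {ticks * error_per_tick:.2f} s")   # ~0.34 s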

    Harshitha Menon, a computer scientist at Lawrence Livermore National Laboratory (LLNL), is developing a method to understand the impact of arithmetic errors in computing. Her tool identifies vulnerable regions of code to ensure that simulations give correct results.

    Says Menon, “We need to understand the impact of these errors on our computer programs because scientists and policy makers rely on their results to make accurate predictions that can have lasting impact.”

    More women, more science

    Want to find out more? All of these researchers—and many more—will be presenting their work at SC18 in the Women in HPC workshop on Sunday, November 11, 2018.

    So that covers astronomy, physics, computer science, and math. And they say women don’t like science. We say that’s a pretty unscientific conclusion.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 11:12 am on October 18, 2018 Permalink | Reply
    Tags: Power of NeuroGaming Center, Science Node, UC San Diego Qualcomm Institute

    From Science Node: “Don’t look away” 

    From Science Node

    16 Oct, 2018
    Alicia Clarke

    What if eye-tracking games could improve the lives of people living with autism?

    At some point we’ve probably all found ourselves immersed in a video game—having fun while trying to advance to the next level. But what if games could do more than entertain? What if they could improve cognitive behaviors and motor skills at the same time?

    If you look away, you crash your spaceship. Gaze-driven games harness the connection between eye movement and attention, training players that better engagement gets better results. Courtesy Alex Matthews, UC San Diego Qualcomm Institute.

    Those are some of the questions that led neuroscientist and eye tracking expert Leanne Chukoskie to create video games that do just that. Chukoskie now directs the Power of NeuroGaming Center (aptly shortened to PoNG) at the Qualcomm Institute. There she and her team create video games to help people on the autism spectrum lead fuller lives.

    Filling a gap

    Together with Jeanne Townsend, director of UC San Diego’s Research on Autism and Development Lab, Chukoskie saw an opening to explore neurogaming as a way to improve attention, gaze control and other behaviors associated with autism. The video games are gaze-driven, which means that they are played with the eyes, and not a mouse or a touchscreen.

    “We realized there was enormous growth potential in autism intervention—where you translate research into tools that can help people,” Chukoskie said. “Jeanne and I wanted to intervene, not just measure things. We wanted our work to be useful to the world sooner rather than later. And these games are the result of that goal.”


    The power of attention. UCSD researchers are developing games that train attention-orienting skills like a muscle, improving social development outcomes for children with autism. Courtesy Global Silicon Valleys.

    Chukoskie and her team, which includes adults on the autism spectrum and high school students, created four games and are busy making more. Their work was recently on display at the Qualcomm Institute during PoNG’s 2018 Internship Showcase.

    “Dr. Mole and Mr. Hide is one of our favorites. It’s basically what you think it is—all these little moles pop out of holes and you have to look at them to knock them back down. There are ninja moles you want to hit. Then the player begins to see professor moles, which we don’t want them to hit. (My joke is we don’t hit professors at UC San Diego!) This promotes fast and accurate eye movement and builds inhibitory control,” she explained.

    Beyond the lab

    Getting the games in the hands of people who can benefit from them most is another aspect that keeps Chukoskie busy. She and Townsend co-founded BrainLeap Technologies in 2017 to make that goal a reality. BrainLeap Technologies is headquartered in the Qualcomm Institute Innovation Space, just a short walk from the PoNG lab.

    Dr. Mole and Mr. Hide. Knocking down moles as they pop out of holes promotes fast and accurate eye movement and builds inhibition control. Courtesy BrainLeap Technologies.

    “We want to make the games available to families, and eventually schools, so they do the most good for the most people,” said Chukoskie. “Starting a company wasn’t what I had in mind initially, but it soon became clear that’s what we needed to do.”

    As with her lab, students and interns play a critical role at BrainLeap Technologies. They bring their creativity, energy and skill. In return, they develop professional skills they can take into the workforce and their communities.

    The power of collaboration

    Not just for autism. Neuroscientist Leanne Chukoskie is also exploring using video game simulations with sensors that monitor stress responses as a possible intervention against human trafficking. Courtesy Alex Matthews, UC San Diego Qualcomm Institute.

    Chukoskie’s enthusiasm and knack for developing products with real-world applications is creating buzz within the walls of the Qualcomm Institute. She is exploring other fields where neurogaming could have an impact. One area is human trafficking. Could video simulations with sensors that monitor stress responses help people recognize subtle signs of danger first in a simulation and then later in the real world? The opportunities for interdisciplinary collaborations are endless.

    “UC San Diego, and especially the Qualcomm Institute, opened my eyes to what can happen when we bring the power of our expertise together,” Chukoskie said. “On top of that, the institute has a strong social mission. It didn’t take long for it to become obvious that the Qualcomm Institute was the right place for our lab and our business.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 10:34 am on September 6, 2018 Permalink | Reply
    Tags: Science Node

    From Science Node: “Putting neutrinos on ice” 

    From Science Node

    29 Aug, 2018
    Ken Chiacchia
    Jan Zverina

    IceCube Collaboration/Google Earth: PGC/NASA, U.S. Geological Survey, Data SIO, NOAA, U.S. Navy, NGA, GEBCO, Landsat/Copernicus.

    Identification of a cosmic-ray source by the IceCube Neutrino Observatory depended on global collaboration.

    Four billion years ago—before the first life had developed on Earth—a massive black hole shot out a proton at nearly the speed of light.

    Fast forward—way forward—to 45.5 million years ago. At that time, the Antarctic continent had started collecting an ice sheet. Eventually Antarctica would capture 61 percent of the fresh water on Earth.

    Thanks to XSEDE resources and help from XSEDE Extended Collaborative Support Service (ECSS) experts, scientists running the IceCube Neutrino Observatory in Antarctica and their international partners have taken advantage of those events to answer a hundred-year-old scientific mystery: Where do cosmic rays come from?

    U Wisconsin IceCube neutrino observatory

    U Wisconsin ICECUBE neutrino detector at the South Pole

    IceCube employs more than 5000 detectors lowered on 86 strings into almost 100 holes in the Antarctic ice NSF B. Gudbjartsson, IceCube Collaboration

    Lunar Icecube

    IceCube DeepCore annotated

    IceCube PINGU annotated


    DM-Ice II at IceCube annotated

    Making straight the path

    First identified in 1912, cosmic rays have puzzled scientists. The higher in the atmosphere you go, the more of them you can measure. The Earth’s thin shell of air, scientists came to realize, was protecting us from potentially harmful radiation that filled space. Most cosmic ray particles consist of a single proton. That’s the smallest positively charged particle of normal matter.

    Cosmic ray particles are ridiculously powerful. Gonzalo Merino, computing facilities manager for the Wisconsin IceCube Particle Astrophysics Center at the University of Wisconsin-Madison (UW), compares the force of a proton accelerated by the LHC, the world’s largest atom-smasher, to that of a mosquito flying into a person.

    LHC

    CERN map


    CERN LHC Tunnel

    CERN LHC particles

    By comparison, the “Oh-My-God” cosmic ray particle detected by the University of Utah in 1991 hit with the force of a baseball flying at 58 miles per hour.

    Because cosmic-ray particles are electrically charged, they are pushed and pulled by every magnetic field they encounter along the way. They do not travel in straight lines, particularly if they come from some powerful object far away in the Universe, so you can’t work out where they originated just from the direction they arrive at Earth.

    Particle-physics theorists came to the rescue.

    “If cosmic rays hit any matter around them, the collision will generate secondary products,” Merino says. “A byproduct of any high-energy interaction with the protons that make up much of a cosmic ray will be neutrinos.”

    Neutrinos respond to gravity and to what’s known as the weak subatomic force, like most matter. But they aren’t affected by the electromagnetic forces that send cosmic rays on a drunkard’s walk. Scientists realized that the intense showers of protons at the source of cosmic rays had to be hitting matter nearby, producing neutrinos that can be tracked back to their source.

    The shape of water

    But if an incoming neutrino almost never interacts with the matter that makes up your instrument, how are you going to detect it? The answer lay in making the detector big.

    “The probability that a neutrino will interact with matter is extremely low, but not zero,” Merino explains. “If you want to see neutrinos, you need to build a huge detector so that they collide with matter at a reasonable rate.”

    Multimessenger astronomy combines information from different cosmic messengers—cosmic rays, neutrinos, gamma rays, and gravitational waves—to learn about the distant and extreme universe. Courtesy IceCube Collaboration.

    Enter the Antarctic ice sheet. The ice here is nearly pure water and could be used as a detector. From 2005 through 2010, a UW-led team created the IceCube Neutrino Observatory by drilling 86 deep holes in the ice and re-freezing strings of detectors into them. The new observatory consisted of 5,160 light sensors suspended in a huge cube of ice six-tenths of a mile on each side.

    The IceCube scientists weren’t quite ready to detect cosmic-ray-associated neutrinos yet. While the IceCube observatory was nearly pure water, it wasn’t completely pure. As a natural formation, its transparency might differ a bit from spot to spot, which could affect detection.

    “Progress in understanding the precise optical properties of the ice leads to increasing complexity in simulating the propagation of photons in the instrument and to a better overall performance of the detector,” says Francis Halzen, a UW professor of physics and the lead scientist for the IceCube Neutrino Observatory.

    GPUs to the rescue

    The collaborators simulated the effects of neutrinos hitting the ice using traditional supercomputers containing standard central processing units (CPUs). They realized, though, that portions of their computations would instead work faster on graphics-processing units (GPUs), invented to improve video-game animation.

    “We realized that a part of the simulation is a very good match for GPUs,” Merino says. “These computations run 100 to 300 times faster on GPUs than on CPUs.”
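    The match is so good because each simulated photon is independent of every other one, so millions of photons can be advanced together as a single large array operation, which is exactly the data-parallel pattern GPUs excel at. A toy, vectorized sketch in Python/NumPy (illustrative only, not IceCube's actual simulation code, and the optical property used is an assumed value):

    import numpy as np

    rng = np.random.default_rng(seed=42)
    n_photons = 1_000_000
    scattering_length_m = 25.0    # assumed effective scattering length of the ice

    # Every photon draws its own path length independently; the whole population is
    # processed in one array operation, with no communication between photons.
    step_lengths = rng.exponential(scattering_length_m, size=n_photons)
    print(f"mean step before scattering: {step_lengths.mean():.1f} m")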

    Madison’s own GPU cluster and the GPU systems on collaborators’ campuses helped, but they weren’t enough.


    Then Merino had a talk with XSEDE ECSS expert Sergiu Sanielevici from the Pittsburgh Supercomputing Center (PSC), lead of XSEDE’s Novel and Innovative Projects.

    Pittsburgh Supercomputing Center 3000 cores, 6 TFLOPS

    Sanielevici filled him in on the large GPU capability of XSEDE supercomputing systems. The IceCube team wound up using a number of XSEDE machines for GPU and CPU computations: Bridges at PSC, Comet at the San Diego Supercomputer Center (SDSC), XStream at Stanford University and the collection of clusters available through the Open Science Grid Consortium.

    Bridges at PSC

    SDSC Dell Comet supercomputer at San Diego Supercomputer Center (SDSC)

    Stanford U Cray Xstream supercomputer

    The IceCube scientists could not assume that their computer code would run well on the XSEDE systems. Their massive and complex flow of calculations could have slowed down considerably had it clashed with the new machines. ECSS expertise was critical to making the integration smooth.

    “XSEDE’s resources integrated seamlessly; that was very important for us,” Merino says. “XSEDE has been very collaborative, extremely open in facilitating that integration.”

    Paydirt

    Their detector built and simulated, the IceCube scientists had to wait for it to detect a cosmic neutrino. On Sept. 22, 2017, it happened. An automated system tuned to the signature of a cosmic-ray neutrino sent a message to the members of the IceCube Collaboration, an international team with more than 300 scientists in 12 countries.

    This was important. A single neutrino detection would not have been proof by itself. Scientists at observatories that detect other types of radiation expected from cosmic rays needed to look at the same spot in the sky.

    Blazars are a type of active galaxy with one of their jets pointing toward us. They emit both neutrinos and gamma rays that can be detected by the IceCube Neutrino Observatory as well as by other telescopes on Earth and in space. Courtesy IceCube/NASA.

    They found multiple types of radiation coming from the same spot in the sky as the neutrino. At this spot was a “blazar” called TXS 0506+056, about 4 billion light years from Earth. A type of active galactic nucleus (AGN), a blazar is a huge black hole sitting in the center of a distant galaxy, flaring as it eats the galaxy’s matter. Blazars are AGNs that happen to be pointed straight at us.

    The scientists think that the vast forces surrounding the black hole are likely the catapult that shot cosmic-ray particles on their way toward Earth. After a journey of 4 billion years across the vastness of space, one of the neutrinos created by those particles blazed a path through IceCube’s detector.

    The IceCube scientists went back over nine and a half years of detector data, before they’d set up their automated warning. They found several earlier detections from TXS 0506+056, greatly raising their confidence.

    The findings led to two papers in the prestigious journal Science in July 2018. Future work will focus on confirming that blazars are the source—or at least a major source—of the high-energy particles that fill the Universe.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 8:07 am on August 19, 2018 Permalink | Reply
    Tags: Dr Farah Alibay, Science Node, When flying to Mars is your day job

    From BBC via Science Node: Women in STEM- “When flying to Mars is your day job,” Dr Farah Alibay

    Science Node

    BBC

    17 August 2018
    Mary Halton

    “As a kid… I never really thought there was a job where you worked on spacecraft.” Farah Alibay

    Sending missions to Mars for a living sounds like a dream job. But not every day can be launch day – so what do Nasa’s spacecraft engineers get up to the rest of the time?

    Dr Farah Alibay is based at Nasa’s Jet Propulsion Laboratory (JPL) and works on the InSight mission – which lifted off to Mars in May 2018.

    NASA/Mars Insight Lander

    It aims to land on the planet in November and have a look inside – taking its internal temperature and listening for “Marsquakes” to learn more about how our nearest neighbour formed.

    Now halfway to the Red Planet and running to a Mars day rather than an Earth one, InSight is looked after by a dedicated team who regularly check in with the spacecraft on its long journey, including Dr Alibay.

    She shared a day at her job with the BBC.

    “My official title is Payload Systems Engineer.” Farah Alibay/JPL/NASA

    What’s a working day like for you?

    So it’s sort of weird that we’re on our way to Mars… and it’s really boring! But really that’s the way you want it to be. Everything’s going fine, so we’ll just keep going!

    Before we launched, my job was to make sure that all the instruments were integrated properly on the spacecraft, and that they were tested properly.

    Right now while we’re sort of in this limbo time where we’re waiting, my job is to help the teams prepare for operations.

    “I love that a lot of my work is collaborative, so I spend a lot of time working with other people.” Farah Alibay/JPL/NASA

    It’s kind of an engineer’s job to worry. Because it’s always the things you never imagined would happen that happen.

    We’re halfway to Mars right now, literally this week is the halfway point, and I’ve been getting Mars landing nightmares.

    Less than half the missions that have tried landing on Mars have succeeded. So it’s a little scary when you spend that much time on a spacecraft and it’s all going to come down to that one day – Monday 26 November. We’ll see what happens!

    The way that we operate the spacecraft is that we basically write commands. Each one is a piece of code that we send up to the spacecraft to tell it what to do when it’s on the ground.

    When the spacecraft is sleeping at night, we work. So we get all the data down, look at it and tell the spacecraft: “Hey InSight, tomorrow these are the tasks I want you to do!”


    And then we uplink it, right before it wakes up in the morning. Then we go to bed and the spacecraft does its work.

    Being ‘on console’ means working from mission control, home to the Deep Space Network which communicates with Nasa’s distant missions. Farah Alibay/JPL/NASA

    But because the Mars day shifts every day, we also have to shift our schedule by an hour every day. So the first day we’ll start at 6am, and then [the next] will be 7am… 8am… 9am… and then we take a day off.
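    For context on that sliding schedule: a Mars sol runs about 24 hours 39.6 minutes, so a team working on Mars time starts roughly 40 minutes later by the Earth clock each day. A quick, illustrative calculation (the 06:00 start is taken from the example above):

    SOL_MINUTES = 24 * 60 + 39.6            # length of a Mars sol in Earth minutes
    drift_per_day = SOL_MINUTES - 24 * 60   # ~39.6 minutes later each Earth day

    start = 6 * 60                          # day 1 shift starts at 06:00
    for day in range(1, 5):
        hours, minutes = divmod(round(start) % (24 * 60), 60)
        print(f"Day {day}: start ~{hours:02d}:{minutes:02d}")
        start += drift_per_day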

    About once a week we’ve been turning on a different instrument and doing a checkout. So just making sure that everything was ok from launch, that the instrument is still behaving properly.

    One of those tests is happening today. We do that from console because the spacecraft is being operated at Lockheed Martin in Denver, and the instrument teams are looking at that data from Europe, so we use a system that allows us all to talk to each other.

    What’s your favourite aspect of your job?

    No matter what I do on a given day, no one’s really done it before. And I think that’s what’s exciting. We don’t just do incremental change, we do brand new things.

    Landing sites are carefully chosen, as many of the spacecraft that have tried to land on Mars have met with an unpleasant end. Farah Alibay/JPL/NASA

    It helps put things in perspective, because my job does involve spending days looking at spreadsheets sometimes, or building PowerPoint slides, or answering emails. I definitely do a lot of that, so it’s just as boring sometimes as other jobs.

    But putting it into perspective… even on a boring day my spacecraft is still on its way to Mars!

    Team X brainstorm: “You can go to them and say I have this wild idea, and they make you make this wild idea into a mission concept.” Farah Alibay/JPL/NASA

    How did you become a Nasa engineer?

    So my path is a little strange. I actually grew up in England… I grew up in Manchester and went to university at Cambridge and then ended up at MIT. When I was at MIT I interned at JPL.

    One of the things I try to do is mentor other women interns, because I had really great mentors when I was an intern, and that’s how I got my job.

    Dr Alibay with JPL intern Taleen Sarkissian. Farah Alibay/JPL/NASA

    What’s next, after Mars?

    I will be part of the InSight team until the end of the instrument deployment, so probably until February 2019.

    My dream actually… we don’t have a mission on that yet, but my favourite moon is Saturn’s Enceladus.

    The geysers at the south pole of Enceladus are incredible, and I’ve worked on mission concepts before that we’ve proposed to Nasa to fly through those plumes. One day I want there to be a mission to do that.

    We’re focused on finding life in the Solar System right now, and I think a lot of us believe that in our lifetime… if there’s life in the Solar System we’re probably going to find it.

    So I want to be part of the team that finds it.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 1:35 pm on July 5, 2018 Permalink | Reply
    Tags: Science Node, Some of our biggies, University at Buffalo’s Center for Computational Research, XD Metrics on Demand (XDMoD) tool from U Buffalo

    From Science Node: “Getting the most out of your supercomputer”

    From Science Node

    02 Jul, 2018
    Kevin Jackson


    As the name implies, supercomputers are pretty special machines. Researchers from every field seek out their high-performance capabilities, but time spent using such a device is expensive. As recently as 2015, it took the same amount of energy to run Tianhe-2, the world’s second-fastest supercomputer [now #4], for a year as it did to power a 13,501-person town in Mississippi.

    China’s Tianhe-2 Kylin Linux TH-IVB-FEP supercomputer at National Supercomputer Center, Guangzhou, China

    And that’s not to mention the initial costs associated with purchase, as well as salaries for staff to help run and support the machine. Supercomputers are kept incredibly busy by their users, often oversubscribed, with thousands of jobs in the queue waiting for others to finish.

    With computing time so valuable, managers of supercomputing centers are always looking for ways to improve performance and speed throughput for users. This is where Tom Furlani and his team at the University at Buffalo’s Center for Computational Research come in.

    Thanks to a grant from the National Science Foundation (NSF) in 2010, Furlani and his colleagues have developed the XD Metrics on Demand (XDMoD) tool to help organizations improve production on their supercomputers and better understand how they are being used to enable science and engineering.

    “XDMoD is an incredibly useful tool that allows us not only to monitor and report on the resources we allocate, but also provides new insight into the behaviors of our researcher community,” says John Towns, PI and Project Director for the Extreme Science and Engineering Discovery Environment (XSEDE).

    Canary in the coal mine

    Modern supercomputers are complex combinations of compute servers, high-speed networks, and high-performance storage systems. Each of these areas is a potential point of underperformance or even outright failure. Add system software and the complexity only increases.

    With so much that can go wrong, a tool that can identify problems or poor performance as well as monitor overall usage is vital. XDMoD aims to fulfill that role by performing three functions:

    1. Job accounting – XDMoD provides metrics about utilization, including who is using the system and how much, what types of jobs are running, plus length of wait times, and more.

    2. Quality of service – The complex mechanisms behind HPC often mean that managers and support personnel don’t always know if everything is working correctly—or they lack the means to ensure that it is. All too often this results in users serving as “canaries in the coal mine” who identify and alert admins only after they’ve discovered an issue.

    To solve this, XDMoD launches application kernels daily that provide baseline performances for the cluster in question. If these kernels show that something that should take 30 seconds is now taking 120, support personnel know they need to investigate. XDMoD’s monitoring of the Meltdown and Spectre patches is a perfect example—the application kernels allowed system personnel to quantify the effects of the patches put in place to mitigate the chip vulnerabilities. (A minimal version of this kind of check is sketched just after this list.)

    3. Job-level performance – Much like job accounting, job-level performance zeroes in on usage metrics. However, this task focuses more on how well users’ codes are performing. XDMoD can measure the performance of every single job, helping users to improve the efficiency of their job or even figure out why it failed.
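    As a rough illustration of the quality-of-service idea in item 2, the sketch below (a minimal, hypothetical stand-in, not XDMoD's actual implementation) times a benchmark run and flags it when it strays too far from an established baseline:

    import subprocess
    import time

    BASELINE_SECONDS = 30.0    # historical runtime for this application kernel (hypothetical)
    ALERT_FACTOR = 2.0         # flag runs that take more than twice the baseline

    def timed_run(command):
        start = time.monotonic()
        subprocess.run(command, check=True)
        return time.monotonic() - start

    elapsed = timed_run(["./application_kernel"])    # placeholder benchmark executable
    if elapsed > ALERT_FACTOR * BASELINE_SECONDS:
        print(f"ALERT: kernel took {elapsed:.0f}s against a {BASELINE_SECONDS:.0f}s baseline")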

    Furlani also expects that XDMoD will soon include a module to help quantify the return on investment (ROI) for these expensive systems, by tying external funding of the supercomputer’s users to their external research funding.

    Thanks to its open-source code, XDMoD’s reach extends to commercial, governmental, and academic supercomputing centers worldwide, including in England, Spain, Belgium, Germany, and many other countries.

    Future features

    In 2015, the NSF awarded the University at Buffalo a follow-on grant to continue work on XDMoD. Among other improvements, the project will include cloud computing metrics. Cloud use is growing all the time, and jobs performed there are much different in terms of metrics.

    Who’s that user? XDMoD’s customizable reports help organizations better understand how their computing resources are being used to enable science and engineering. This graph depicts the allocation of resources delivered by supporting funding agency. Courtesy University at Buffalo.

    For the average HPC job, Furlani explains that the process starts with a researcher requesting resources, such as how many processors and how much memory they need. But in the cloud, a virtual machine may stop running and then start again. What’s more, a cloud-based supercomputer can increase and decrease cores and memory. This makes tracking performance more challenging.

    “Cloud computing has a beginning, but it doesn’t necessarily have a specific end,” Furlani says. “We have to restructure XDMoD’s entire backend data warehouse to accommodate that.”

    Regardless of where XDMoD goes next, tools like this will continue to shape and redefine what supercomputers can accomplish.

    Some of our biggies:

    ORNL IBM AC922 SUMMIT supercomputer. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    No. 1 in the world.

    LLNL SIERRA IBM supercomputer

    No. 3 in the world

    ORNL Cray XK7 Titan Supercomputer

    No. 7 in the world

    NERSC Cray Cori II supercomputer at NERSC at LBNL, named after Gerty Cori, the first American woman to win a Nobel Prize in science

    No. 10 in the world

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 11:18 am on June 3, 2018 Permalink | Reply
    Tags: Science Node

    From Science Node: “Full speed ahead” 

    From Science Node

    23 May, 2018
    Kevin Jackson

    US Department of Energy recommits to the exascale race.


    The US was once a leader in supercomputing, having created the first high-performance computer (HPC) in 1964. But as of November 2017, TOP500 ranked Titan, the fastest American-made supercomputer, only fifth on its list of the most powerful machines in the world. In contrast, China holds the first and second spots by a whopping margin.

    ORNL Cray Titan XK7 Supercomputer

    Sunway TaihuLight, China

    Tianhe-2 supercomputer China

    But it now looks like the US Department of Energy (DoE) is ready to commit to taking back those top spots. In a CNN opinion article, Secretary of Energy Rick Perry proclaims that “the future is in supercomputers,” and we at Science Node couldn’t agree more. To get a better understanding of the DoE’s plans, we sat down for a chat with Under Secretary for Science Paul Dabbar.

    Why is it important for the federal government to support HPC rather than leaving it to the private sector?

    A significant amount of the Office of Science and the rest of the DoE has had and will continue to have supercomputing needs. The Office of Science produces tremendous amounts of data like at Argonne, and all of our national labs produce data of increasing volume. Supercomputing is also needed in our National Nuclear Security Administration (NNSA) mission, which fulfills very important modeling needs for Department of Defense (DoD) applications.

    But to Secretary Perry’s point, we’re increasingly seeing a number of private sector organizations building their own supercomputers based on what we had developed and built a few generations ago that are now used for a broad range of commercial purposes.

    At the end of the day, we know that a secondary benefit of this push is that we’re providing the impetus for innovation within supercomputing.

    We assist the broader American economy by helping to support science and technology innovation within supercomputing.

    How are supercomputers used for national security?

    The NNSA arm, which is one of the three major arms of the three Under Secretaries here at the department, is our primary area of support for the nation’s defense. And as various testing treaties came into play over time, having the computing capacity to conduct proper testing and security of our stockpiled weapons was key. And that’s why if you look at our three exascale computers that we’re in the process of executing, two of them are on behalf of the Office of Science and one of them is on behalf of the NNSA.

    One of these three supercomputers is the Aurora exascale machine currently being built at Argonne National Laboratory, which Secretary Perry believes will be finished in 2021. Where did this timeline come from, and why Argonne?

    Argonne National Laboratory ALCF

    ANL ALCF Cetus IBM supercomputer

    ANL ALCF Theta Cray supercomputer

    ANL ALCF MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    Depiction of ANL ALCF Cray Shasta Aurora supercomputer

    There was a group put together across different areas of DoE, primarily the Office of Science and NNSA. When we decided to execute on building the next wave of top global supercomputers, an internal consortium named the Collaboration of Oak Ridge, Argonne, and Livermore (CORAL) was formed.

    That consortium developed exactly how to fund the technologies, how to issue requests, and what the target capabilities for the machines should be. The 2021 timeline was based on the CORAL group, the labs, and the consortium in conjunction with the Department of Energy headquarters here, the Office of Advanced Computing, and ultimately talking with the suppliers.

    The reason Argonne was selected for the first machine was that they already have a leadership computing facility there. They have a long history of other machines of previous generations, and they were already in the process of building out an exascale machine. So they were already looking at architecture issues, talking with Intel and others on what could be accomplished, and taking a look at how they can build on what they already had in terms of their capabilities and physical plant and user facilities.

    Why now? What’s motivating the push for HPC excellence at this precise moment?

    A lot of this is driven by where the technology is and where the capabilities are for suppliers and the broader HPC market. We’re part of a constant dialogue with the Nvidias, Intels, IBMs, and Crays of the world in what we think is possible in terms of the next step in supercomputing.

    Why now? The technology is available now, and the need is there for us considering the large user facilities coming online across the whole of the national lab complex and the need for stronger computing power.

    The history of science, going back to the late 1800s and early 1900s, was about competition along strings of types of research, whether it was chemistry or physics. If you take any of the areas of science, including high-performance computing, anything that’s being done by anyone out there along any of these strings causes us all to move us along. However, we at the DoE believe America must and should be in the lead of scientific advances across all different areas, and certainly in the area of computing.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 1:07 pm on May 18, 2018 Permalink | Reply
    Tags: China’s Sunway TaihuLight - the world’s fastest supercomputer, Science Node

    From Science Node: “What puts the super in supercomputer?” 

    From Science Node


    The secret behind supercomputing? More of everything.

    14 May, 2018
    Kevin Jackson

    We’ve come a long way since MITS developed the first personal computer in 1974, which was sold as a kit that required the customer to assemble the machine themselves. Jump ahead to 2018: around 77% of Americans own a smartphone, and nearly half of the global population uses the internet.


    Superpowering science. Faster processing speeds, extra memory, and super-sized storage capacity are what make supercomputers the tools of choice for many researchers.

    The devices we keep at home and in our pockets are pretty advanced compared to the technology of the past, but they can’t hold a candle to the raw power of a supercomputer.

    The capabilities of the HPC machines we talk about so often here at Science Node can be hard to conceptualize. That’s why we’re going to lay it all out for you and explain how supercomputers differ from the laptop on your desk, and just what it is these machines need all that extra performance for.

    The need for speed

    Computer performance is measured in FLOPS, which stands for floating-point operations per second. The more FLOPS a computer can process, the more powerful it is.

    You’ve come a long way, baby. The first personal computer, the Altair 8800, was sold in 1974 as a mail-order kit that users had to assemble themselves.

    For example, look to the Intel Core i9 Extreme Edition processor designed for desktop computers. It has 18 cores, or processing units that take in tasks and complete them based on received instructions.

    This single chip is capable of one trillion floating point operations per second (i.e., 1 teraFLOP)—as fast as a supercomputer from 1998. You don’t need that kind of performance to check email and surf the web, but it’s great for hardcore gamers, livestreaming, and virtual reality.

    Modern supercomputers use similar chips, memory, and storage as personal computers, but instead of a few processors they have tens of thousands. What distinguishes supercomputers is scale.

    China’s Sunway TaihuLight, which is currently the fastest supercomputer in the world, boasts 10,648,600 cores and a measured maximum performance of 93,014.6 teraFLOPS.

    Sunway TaihuLight, China

    Theoretically, the Sunway TaihuLight is capable of reaching 125,436 teraFLOPS of performance—more than 125 thousand times faster than the Intel Core i9 Extreme Edition processor. And it ‘only’ cost around ¥1.8 billion ($270 million), compared to the Intel chip’s price tag of $1,999.
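    Putting those quoted figures side by side makes the scale gap concrete. The arithmetic below simply reuses the numbers above, with the desktop chip rounded to 1 teraFLOP:

    desktop_tflops = 1.0                 # Intel Core i9 Extreme Edition, roughly
    desktop_cost_usd = 1_999
    taihulight_peak_tflops = 125_436     # Sunway TaihuLight theoretical peak
    taihulight_cost_usd = 270_000_000

    print(f"speed ratio: ~{taihulight_peak_tflops / desktop_tflops:,.0f}x")
    print(f"cost per peak teraFLOP, desktop: ${desktop_cost_usd / desktop_tflops:,.0f}")
    print(f"cost per peak teraFLOP, TaihuLight: ${taihulight_cost_usd / taihulight_peak_tflops:,.0f}")

    Per peak teraFLOP the prices land in the same ballpark; what you are really paying for in a supercomputer is having all of that capacity assembled in a single system, which is the scale point made above.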

    Don’t forget memory

    A computer’s memory holds information while the processor is working on it. When you’re playing Fortnite, your computer’s random-access memory (RAM) stores and updates the speed and direction in which you’re running.

    Most people will get by fine with 8 to 16 GB of RAM. Hardcore gamers generally find that 32GB of RAM is enough, but computer aficionados that run virtual machines and perform other high-end computing tasks at home or at work will sometimes build machines with 64GB or more of RAM.

    ____________________________________________________
    What is a supercomputer used for?

    Climate modeling and weather forecasts
    Computational fluid dynamics
    Genome analysis
    Artificial intelligence (AI) and predictive analytics
    Astronomy and space exploration
    ____________________________________________________

    The Sunway TaihuLight once again squashes the competition with around 1,310,600 GB (roughly 1.3 petabytes) of memory to work with. This means the machine can hold and process an enormous amount of data at the same time, which allows for large-scale simulations of complex events, such as the devastating 1976 earthquake in Tangshan.

    Even a smaller supercomputer, such as the San Diego Supercomputer Center’s Comet, has 247 terabytes of memory—nearly 4000 times that of a well-equipped laptop.

    SDSC Dell Comet supercomputer at San Diego Supercomputer Center (SDSC)

    Major multitasking

    Another advantage of supercomputers is that they excel at parallel computing, in which two or more processors divide the workload of a task and run simultaneously, reducing the time it takes to complete.

    Personal computers have limited parallel ability. But since the 1990s, most supercomputers have used massively parallel processing, in which thousands of processors attack a problem simultaneously. In theory this is great, but there can be problems.

    Someone (or something) has to decide how the task will be broken up and shared among the processors, and some complex problems don’t divide easily. One piece of the work may be processed quickly but then must wait on another that’s processed more slowly. The practical, rather than theoretical, speed of a supercomputer depends on this kind of task management.
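    One classic way to quantify that limit is Amdahl’s law: no matter how many processors you add, speedup is capped by the fraction of the work that has to run serially. A minimal sketch (the 5% serial fraction is a hypothetical workload, not a measurement):

```python
# Amdahl's law: speedup is limited by the serial fraction of a task,
# no matter how many processors attack the parallel part.

def amdahl_speedup(serial_fraction, processors):
    """Predicted speedup when only (1 - serial_fraction) of the work parallelizes."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# Hypothetical workload: 5% of the work must run serially.
for p in (2, 16, 1_000, 10_000_000):
    print(f"{p:>10,} processors -> {amdahl_speedup(0.05, p):6.1f}x speedup")

# Even with millions of cores, the speedup stalls near 1 / 0.05 = 20x.
```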

    Super powers for super projects

    You might now be looking at your computer in disappointment, but the reality is that unless you’re studying volcanoes or sequencing the human genome, you simply don’t need that kind of power.

    The truth is, many supercomputers are shared resources, processing data and solving equations for multiple teams of researchers at the same time. It’s rare for a scientist to use a supercomputer’s entire capacity just for one project.

    So while a top-of-the-line machine like the Sunway TaihuLight leaves your laptop in the dust, take heart that personal computers are getting faster all the time. But then, so are supercomputers. With each step forward in speed and performance, HPC technology helps us unlock the mysteries of the universe around us.

    Read more:

    The 5 fastest supercomputers in the world
    The race to exascale
    3 reasons why quantum computing is closer than ever

    See the full article here .

    Please help promote STEM in your local schools.
    stem

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 12:15 pm on April 26, 2018 Permalink | Reply
    Tags: , , , Science Node,   

    From Science Node: “Autism origins in junk DNA” 

    Science Node bloc
    Science Node

    [This post is dedicated to all of my readers whose lives and children have been affected by Autism in all of its many forms.]

    25 Apr, 2018
    Scott LaFee
    Jan Zverina

    Genes inherited from both parents contribute to development of autism in children.

    1
    Courtesy Unsplash/Brittany Simuangco.

    One percent of the world’s population lives with autism spectrum disorder (ASD), and the prevalence is increasing by around ten percent each year. Though there is no obvious straight line between autism and any single gene, genetics and inherited traits play an important role in the development of the condition.

    In recent years, researchers have firmly established that gene mutations appearing for the first time, called de novo mutations, contribute to approximately one-third of cases of autism spectrum disorder (ASD).

    2
    Early symptoms. Children with ASD may avoid eye contact, have delayed speech, and fail to demonstrate interest. Courtesy Unsplash.

    In a new study [Science], an international team led by scientists at the University of California San Diego (UCSD) School of Medicine has identified a culprit that may explain some of the remaining risk: rare inherited variants in regions of non-coding DNA.

    The newly discovered risk factors differ from known genetic causes of autism in two important ways. First, these variants do not alter the genes directly but instead disrupt the neighboring DNA control elements that turn genes on and off, called cis-regulatory elements or CREs. Second, these variants do not occur as new mutations in children with autism, but instead are inherited from their parents.

    “For ten years we’ve known that the genetic causes of autism consist partly of de novo mutations in the protein sequences of genes,” said Jonathan Sebat, a professor of psychiatry, cellular and molecular medicine and pediatrics at UCSD School of Medicine and chief of the Beyster Center for Genomics of Psychiatric Diseases. “However, gene sequences represent only 2 percent of the genome.”

    ____________________________________________________

    Autism facts

    Autism affects 1 in 68 children
    Boys are four times more likely than girls to have autism
    Symptoms usually appear before age 3
    Autism varies greatly; no two people with autism are alike
    There is currently no cure for autism
    Early intervention is key to successful treatment
    ____________________________________________________

    To investigate the other 98 percent of the genome in ASD, Sebat and his colleagues analyzed the complete genomes of 9,274 subjects from 2,600 families. One thousand genomes were sequenced in San Diego at Human Longevity Inc. (HLI) and at Illumina Inc.

    DNA sequences were analyzed with the Comet supercomputer at the San Diego Supercomputer Center (SDSC).

    SDSC Dell Comet supercomputer at San Diego Supercomputer Center (SDSC)

    These data were then combined with other large studies from the Simons Simplex Collection and the Autism Speaks MSSNG Whole Genome Sequencing Project.

    “Whole genome sequence data processing and analysis are both computationally and resource intensive,” said Madhusudan Gujral, an analyst with SDSC and co-author of the paper.

    Using SDSC’s Comet, processing and identifying specific structural variants from a single genome took about 2½ days.

    “Since Comet has 1,984 compute nodes and several petabytes of scratch space for analysis, tens of genomes can be processed at the same time,” added SDSC scientist Wayne Pfeiffer. “Instead of months, with Comet we were able to complete the data processing in weeks.”
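    A rough back-of-the-envelope calculation shows why that batching matters. The per-genome time and genome count come from the article; the number of genomes assumed to run concurrently is a hypothetical figure for illustration only:

```python
# Rough wall-clock estimate for processing many genomes, serially vs. in
# concurrent batches. The per-genome time and genome count come from the
# article; the concurrency level is a hypothetical assumption.
import math

genomes = 1_000          # genomes sequenced in San Diego
days_per_genome = 2.5    # ~2.5 days each on Comet
concurrent_jobs = 50     # hypothetical: "tens of genomes at the same time"

serial_days = genomes * days_per_genome
batched_days = math.ceil(genomes / concurrent_jobs) * days_per_genome

print(f"One at a time: {serial_days / 30:.0f} months")               # ~83 months
print(f"{concurrent_jobs} at a time: {batched_days / 7:.0f} weeks")  # ~7 weeks
```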

    The researchers then analyzed structural variants, deleted or duplicated segments of DNA that disrupt the regulatory elements of genes, dubbed CRE-SVs. From the complete genomes of the families, the researchers found that CRE-SVs inherited from parents also contributed to ASD.


    HPC for the 99 percent. The Comet supercomputer at SDSC meets the needs of underserved researchers in domains that have not traditionally relied on supercomputers to help solve problems. Courtesy San Diego Supercomputer Center.

    “We also found that CRE-SVs were inherited predominantly from fathers, which was a surprise,” said co-first author William M. Brandler, PhD, a postdoctoral scholar in Sebat’s lab at UCSD and bioinformatics scientist at HLI.

    “Previous studies have found evidence that some protein-coding variants are inherited predominantly from mothers, a phenomenon known as a maternal origin effect. The paternal origin effect we see for non-coding variants suggests that the inherited genetic contribution from mothers and fathers may be qualitatively different.”

    Sebat said current research does not explain with certainty what mechanism determines these parent-of-origin effects, but he has proposed a plausible model.

    “There is a wide spectrum of genetic variation in the human population, with coding variants having strong effects and noncoding variants having weaker effects,” he said. “If men and women differ in their capacity to tolerate such variants, this could give rise to the parent-of-origin effects that we see.”

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     