Tagged: Science Node

  • richardmitnick 1:03 pm on April 29, 2017 Permalink | Reply
    Tags: Science Node, The risks to science-based policy we aren’t talking about

    From Science Node: “The risks to science-based policy we aren’t talking about” 

    Science Node bloc
    Science Node

    19 Apr, 2017 [Where has this been?]
    Gretchen Goldman

    Courtesy Jesse Springer.

    You’d think public policy would benefit the public, but increasingly that’s not the case. Gretchen Goldman from the Union of Concerned Scientists outlines the threats to evidence-based policies.


    The evidence of how the relationship between corporations and the political system is playing out.

    “Thank you, Dr. Goldman. That was frightening,” moderator Keesha Gaskins-Nathan said to me after I spoke last week as the only scientist at the Stetson University Law Review Symposium.

    “My talk covered the ways that the role of science in federal decisionmaking is being degraded by the Trump administration, by Congress, and by corporate and ideological forces.

    Together, these alarming moves are poised to damage the crucial role that science plays in keeping us all safe and healthy — this is why I will march at the March for Science on April 22.

    If current trends proceed unabated, science-based policy as we know it could change forever. Indeed, some of its core tenets are being chipped away. And a lot is at stake if we fail to stop it.

    We are currently witnessing efforts by this administration and Congress to freeze and roll back the federal government’s work to protect public health and safety. Congress is attempting to pollute the science advice that decisionmakers depend on, and is appointing decisionmakers who are openly hostile to the very missions of the science agencies they now lead.

    Threats to science-based America

    We cannot afford to make decisions without science. But now, this very process by which we make science-based policies in this country is under threat.

    Our decisionmakers have deep conflicts of interest, disrespect for science, and aren’t being transparent.

    This is a recipe for disaster.

    How can our leaders use science effectively to inform policy decisions if they can’t even make independent decisions and don’t recognize the value of science?

    EPA Administrator Scott Pruitt, for example, said this month that carbon dioxide “is not a primary contributor to global warming.” (It is.)

    This blatant misinformation on climate science came on top of his extensive record of suing the agency over the science-based ozone rule I just described (among other rules).

    This type of disrespect for science-based policies from cabinet members is an alarming signal of the kind of scientific integrity losses we can expect under this administration.

    Congress is trying to degrade science advice.

    A cornerstone of science-based policy is the role of independent science advice feeding into policy decisions.

    But Congress wants to change who sits on science advisory committees and redefine what counts as science. The Regulatory Accountability Act, for example, would threaten how federal agencies can use science to make policy decisions.

    Past versions of the bill (which has already passed the House this year and is expected to be introduced soon in the Senate) have included concerning provisions. One mandated that government agencies could only use science if all of the underlying data and methods were publicly available — including health data, proprietary data, trade secrets, and intellectual property.

    In another case, the bill added more than 70 new regulatory procedures that would effectively shut down the government’s ability to protect us from new threats to our health, safety, and the environment. It is a dangerous precedent when politicians — not scientists — are deciding how science can inform policy decisions.

    Scientists face intimidation, muzzling, and political attacks.

    No one becomes a scientist because they want a political target on their back. But this is unfortunately what many scientists are now facing.

    While it won’t be enacted in its current form, the president’s budget shows his frightening priorities, which apparently include major cuts to science agencies like the EPA, Department of Energy, and NOAA.

    Communication gag orders, disappearing data, and review of scientific documents by political appointees in the first month of the administration have created a chilling effect for scientists within the government.

    Congress has even revived the Holman Rule, which allows it to reduce an individual federal employee’s salary to $1.

    It is easy to see how such powers could be used to target government scientists producing politically controversial science.

    Hurting science hurts real people

    Importantly, we must be clear about who will be affected most if science-based policymaking is dismantled. In many cases, these burdens will disproportionately fall on low-income communities and communities of color.

    If we cannot protect people from ozone pollution, those in urban areas, those without air conditioning, and those with lung diseases will be hurt most.

    If we cannot address climate change, frontline communities in low-lying areas will bear the brunt of it.

    If we cannot keep harmful chemicals out of children’s toys, families who buy cheaper products at dollar stores will pay the price.

    If we cannot protect people from unsafe drugs (FDA), contaminated food (USDA, FDA), occupational hazards (OSHA), chemical disasters (EPA, OSHA, DHS), dangerous vehicles (DOT) and unsafe consumer products (CPSC), then we’re all at risk.

    This is about more than science. It is about protecting people using the power of science. We have everything to lose.

    But we can take action. We can articulate the benefits of science to decisionmakers, the media, and the public.

    We can hold our leaders accountable for moves they make to dismantle the science-based policy process.

    And we can support our fellow scientists both in and outside of the federal government.

    It starts with marching — but it cannot end here.”

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 4:34 pm on April 28, 2017 Permalink | Reply
    Tags: Open Science Grid, Science Node, XSEDE-Extreme Science and Engineering Discovery Environment

    From Science Node: “A record year for the Open Science Grid” 

    Science Node bloc
    Science Node

    Courtesy Open Science Grid.

    27 Apr, 2017
    Greg Moore

    Serving researchers across a wide variety of scientific disciplines, the Open Science Grid (OSG) weaves the national fabric of distributed high throughput computing.

    Over the last 12 months, the OSG has handled over one billion CPU hours. These record numbers have transformed the face of science nationally.


    “We just had a record week recently of over 30 million hours (close to 32.8 million) and the trend is pointing to frequent 30 million-hour weeks — it will become typical,” says Scott Teige, manager of OSG’s Grid Operations Center at Indiana University (IU).

    “To reach 32.8 million, we need 195,000 cores running 24/7 for a week.”
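
    Those figures are easy to sanity-check with a quick back-of-the-envelope calculation using only the numbers quoted above:

```python
# Back-of-the-envelope check of the OSG throughput figures quoted above.
HOURS_PER_WEEK = 24 * 7                                   # 168 wall-clock hours

# A record week of ~32.8 million core-hours from 195,000 cores running 24/7:
cores = 195_000
print(f"{cores:,} cores x {HOURS_PER_WEEK} h = {cores * HOURS_PER_WEEK / 1e6:.1f} M core-hours/week")

# One billion core-hours over 12 months implies, on average:
avg_cores = 1_000_000_000 / (365 * 24)
print(f"about {avg_cores:,.0f} cores busy around the clock, all year")
```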

    Teige’s job is to keep things running smoothly. The OSG Grid Operations Center provides operational support for users, developers, and system administrators. They are also on point for real-time monitoring and problem tracking, grid service maintenance, security incident response, and information repositories.

    Big and small

    Where is all this data coming from? Teige explains that the largest amount of data is coming from the experiments associated with the Large Hadron Collider (LHC), for which the OSG was originally designed.

    But the LHC is just part of the story. There are plenty of CPU cycles to go around, so opportunistic use has become a much larger focus. When OSG resources are not busy, scientists from many disciplines use those hours to revolutionize their science.

    For example, the Structural Protein-Ligand Interactome (SPLINTER) project by the Indiana University School of Medicine predicts the interaction of thousands of small molecules with thousands of proteins using the three-dimensional structure of the bound complex between each pair of protein and compound.

    By using the OSG, SPLINTER finds a quick and efficient solution to its computing needs — and develops a systems biology approach to target discovery.

    The opportunistic resources deliver millions of CPU hours in a matter of days, greatly reducing simulation time. This allows researchers to identify small molecule candidates for individual proteins, or new protein targets for existing FDA-approved drugs and biologically active compounds.
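
    The reason opportunistic cycles work so well here is that every protein-ligand pairing is an independent task. Below is a minimal sketch of how such a screen is typically carved into grid jobs; the identifiers, counts, and batch size are hypothetical illustrations, not SPLINTER’s actual pipeline:

```python
"""Sketch of splitting an all-vs-all docking screen into independent tasks.

Illustrative only: SPLINTER's real pipeline is not shown here. The point is
that each (protein, ligand) pair is independent, which is what makes
opportunistic OSG cycles so effective for this kind of workload.
"""
from itertools import product, islice

def chunks(iterable, size):
    """Yield successive lists of `size` items from `iterable`."""
    it = iter(iterable)
    while chunk := list(islice(it, size)):
        yield chunk

proteins = [f"protein_{i:04d}" for i in range(2000)]   # hypothetical IDs
ligands  = [f"ligand_{i:05d}"  for i in range(3000)]   # hypothetical IDs

pairs = product(proteins, ligands)          # 6 million independent dockings

# Package, say, 1,000 pairs per grid job so each job runs a sensible length.
for job_id, batch in enumerate(chunks(pairs, 1000)):
    # In practice this would write a job description (for example, an
    # HTCondor submit file) listing the pairs for one opportunistic OSG slot.
    pass
```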

    “We serve virtual organizations (VOs) that may not have their own resources,” says Teige. “SPLINTER is a prime example of how we partner with the OSG to transform research — our resources alone cannot meet their needs.”

    Hoosier nexus

    Because Teige’s group is based at Indiana University, a lot of the OSG operational infrastructure is run out of the IU Data Center. And, because IU is an Extreme Science and Engineering Discovery Environment (XSEDE) resource, the university also handles submissions to the OSG.

    OSG meets LHC. A view inside the Compact Muon Solenoid (CMS) detector, a particle detector on the LHC. The OSG was designed for the massive datasets generated in the search for particles like the Higgs boson. Courtesy Tighe Flanagan. (CC BY-SA 3.0)

    That means scientists and researchers nationwide can connect both to XSEDE’s collection of integrated digital resources and services and to OSG’s opportunistic resources.

    “We operate information services to determine states of resources used in how jobs are submitted,” said Teige. “We operate the various user interfaces like the GOC homepage, support tools, and the ticket system. We also operate a global file system called Oasis where files are deposited to be available for use in a reasonably short time span. And we provide certification services for the user community.”

    From LHC big data to smaller opportunistic research computing needs, Teige’s team makes sure the OSG has the support researchers depend on, so discovery moves forward reliably and transparently.

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 10:41 am on March 14, 2017 Permalink | Reply
    Tags: dark web, Science Node, Wrangling crime in the deep

    From Science Node: “Wrangling crime in the deep, dark web” 

    Science Node bloc
    Science Node

    06 Mar, 2017
    Jorge Salazar

    Much of the internet hides like an iceberg below the surface.

    This so-called ‘deep web’ is estimated to be 500 times bigger than the ‘surface web’ seen through search engines like Google. For scientists and others, the deep web holds important computer code and licensing agreements.

    Nestled further inside the deep web, one finds the ‘dark web,’ a place where images and video are used by traders in illicit drugs, weapons, and human lives.

    “Behind forms and logins, there are bad things,” says Chris Mattmann, chief architect in the instrument and science data systems section of the NASA Jet Propulsion Laboratory (JPL) at the California Institute of Technology.

    “Behind the dynamic portions of the web, people are doing nefarious things, and on the dark web, they’re doing even more nefarious things. They traffic in guns and human organs. They’re doing these activities and then they’re tying them back to terrorism.”

    In 2014, the Defense Advanced Research Projects Agency (DARPA) started a program called Memex to make the deep web accessible. “The goal of Memex was to provide search engines the retrieval capacity to deal with those situations and to help defense and law enforcement go after the bad guys on the deep web,” Mattmann says.

    At the same time, the US National Science Foundation (NSF) invested $11.2 million in a first-of-its-kind data-intensive supercomputer – the Wrangler supercomputer, now housed at the Texas Advanced Computing Center (TACC). The NSF asked engineers and computer scientists at TACC, Indiana University, and the University of Chicago if a computer could be built to handle massive amounts of input and output.


    TACC Wrangler

    Wrangler does just that, enabling the speedy file transfers needed to fly past big data bottlenecks that can slow down even the fastest computers. It was built to work in tandem with number crunchers such as TACC’s Stampede, which in 2013 was the sixth fastest computer in the world.


    Dell Poweredge U Texas Austin Stampede Supercomputer. Texas Advanced Computer Center 9.6 PF

    “Although we have a lot of search-based queries through different search engines like Google, it’s still a challenge to query the system in a way that answers your questions directly,” says Karanjeet Singh.

    Singh is a University of Southern California graduate student who works with Chris Mattmann on Memex and other projects.

    “The objective is to get more and more domain-specific information from the internet and to associate facts from that information.”

    Once the Memex user extracts the information they need, they can apply tools such as named entity recognition, sentiment analysis, and topic summarization. This can help law enforcement agencies find links between different activities, such as illegal weapon sales and human trafficking.
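
    For a flavor of what those tools do, here is a generic illustration using the NLTK library; it is not the Memex toolchain itself, and the exact NLTK data package names can vary slightly between NLTK versions:

```python
# Illustrative only: a tiny example of named entity recognition and sentiment
# analysis using NLTK, not the actual Memex tools.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

# One-time downloads of the models this example needs (names may differ
# slightly across NLTK versions):
for pkg in ("punkt", "averaged_perceptron_tagger", "maxent_ne_chunker",
            "words", "vader_lexicon"):
    nltk.download(pkg, quiet=True)

text = "The listing was posted from Houston and offers wholesale prices."

# Named entity recognition: tokenize -> POS-tag -> chunk named entities.
tokens = nltk.word_tokenize(text)
tree = nltk.ne_chunk(nltk.pos_tag(tokens))
print([subtree for subtree in tree if hasattr(subtree, "label")])  # e.g. 'Houston'

# Sentiment analysis with VADER's rule-based scorer.
print(SentimentIntensityAnalyzer().polarity_scores(text))
```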

    The problem is that even the fastest computers like Stampede weren’t designed to handle the input and output of the millions of files needed for the Memex project.

    “Let’s say that we have one system directly in front of us, and there is some crime going on,” Singh says. “What the JPL is trying to do is automate a lot of domain-specific query processes into a system where you can just feed in the questions and receive the answers.”

    For that, he works with an open source web crawler called Apache Nutch. It retrieves and collects web page and domain information from the deep web. The MapReduce framework powers those crawls with a divide-and-conquer approach to big data, breaking it into small pieces that run simultaneously.
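
    Hadoop and Nutch handle this at cluster scale, but the MapReduce pattern itself is easy to show in miniature. Below is a single-machine Python sketch of the divide-and-conquer idea (plain Python, not the Hadoop API; the URLs are stand-ins for crawl output):

```python
# Minimal single-machine sketch of the MapReduce pattern: split the input,
# map each piece independently (the part a cluster parallelizes), then
# reduce the partial results. Plain Python, not the Hadoop API.
from collections import Counter
from functools import reduce
from multiprocessing import Pool
from urllib.parse import urlparse

def map_chunk(urls):
    """Map step: count crawled pages per domain within one chunk."""
    return Counter(urlparse(u).netloc for u in urls)

def reduce_counts(a, b):
    """Reduce step: merge two partial domain counts."""
    return a + b

if __name__ == "__main__":
    crawled = ["http://example.org/a", "http://example.org/b",
               "http://forum.example.net/thread/42"]      # stand-in crawl output
    pieces = [crawled[i::4] for i in range(4)]            # divide
    with Pool(4) as pool:
        partials = pool.map(map_chunk, pieces)            # conquer in parallel
    totals = reduce(reduce_counts, partials, Counter())
    print(totals.most_common())
```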


    Wrangler avoids data overload by virtue of its 600 terabytes of speedy flash storage. What’s more, Wrangler supports the Hadoop framework, which runs using MapReduce.

    Together, Wrangler and Memex constitute a powerful crime-fighting duo. NSF investment in advanced computation has placed powerful tools in the hands of public defense agencies, moving law enforcement beyond the limitations of commercial search engines.

    “Wrangler is a fantastic tool that we didn’t have before as a mechanism to do research,” says Mattmann. “It has been an amazing resource that has allowed us to develop techniques that are helping save people, stop crime, and stop terrorism around the world.”

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 3:52 pm on February 12, 2017 Permalink | Reply
    Tags: Gender discrimination is not just a women’s issue, Science Node

    From Science Node: Women in STEM – “Continue the conversation” Very Important 

    Science Node bloc
    Science Node

    07 Feb, 2017
    Helen Patton


    According to recent research conducted by the US National Science Foundation (NSF), although women comprise more than half of the U.S. workforce, only 28% are employed in STEM-related fields, and of those, only 11% are pursuing a career in information security.

    Similarly, when looking at workplace diversity, minorities make up 29% of the STEM-related workforce, with approximately 6% Hispanic and 8% African-American representation in the IT sector.

    Leading ladies. Participants in a gender and diversity panel at the Internet2 Technology Exchange. From left, Theresa Semmens, Helen Patton, Mary Dunker, and Kimberly Milford. Courtesy Internet2.

    When asked why women chose to leave the profession or why they might not consider a career in information security and IT, often the answers are as complex as the problem.

    Some cite stereotyping, organizational culture, and the lack of encouragement and support from management and fellow colleagues. Others cite the lack of guidance from management and uncertainty about their career trajectory.

    Moving forward, my co-panelists and I offer the following guiding principles to anyone interested in supporting gender and diversity initiatives.

    Engaging everyone in the dialogue

    Gender discrimination is not just a women’s issue – it’s a men’s issue, too. Similarly, making concerted efforts to challenge the lack of diversity in the workplace should be everyone’s concern. It’s important to include both men and women in the conversation and work collectively at solving the gender discrimination and diversity problems in the workplace.

    Building a community of allies

    Many of our male colleagues have expressed their desire to put an end to gender discrimination and make real change to improve diversity. There need to be tools and resources, such as this one about male allies, that help our colleagues become allies for women both at work and at home.

    Sharing success stories

    It’s important to move beyond simply presenting the data on gender discrimination in the workplace. In addition to making tools accessible, we must highlight possible solutions and share success stories alongside the data. A good reference for this is the National Center for Women and Information Technology (NCWIT).


    Target practices. Building on insights from behavioral economics, Iris Bohnet argues that to overcome gender bias in organizations and society, we should focus on de-biasing systems — how we evaluate performance, hire, promote, structure tests, form groups — rather than on trying to de-bias people.

    Another great resource is Iris Bohnet’s book What Works: Gender Equality by Design, which makes suggestions on ways we can recruit, hire, develop, and promote gender-diverse talent.

    Inclusive language

    We want to be conscious of how we present the profession through the use of language. We want to avoid using terms and descriptions that may come across as biased, either consciously or unconsciously. We want to ensure the terms and language we use are gender-inclusive.

    Commitment to mentorship

    A coach, mentor, or advocate will instill in the person seeking help and advice the idea that they can make a difference and are valued for their contributions. Mentorship forms a support system that enhances a positive experience of growth and development for an individual’s career.

    Research suggests that the most beneficial mentoring is based on mutual learning, active engagement, and striving to push the leadership capabilities of mentees.

    Championing diversity

    We need to ensure that everyone who has an interest and desire to break into information security has the opportunity, comfort level, and confidence to do so.

    Diversity in the workplace contributes to an institution’s creativity and adds new perspectives to professional conversations. It creates a well-rounded team and allows for more efficiencies, diverse ideas, varied technical skill sets, broader communication forums, and business management skill sets.

    Women and minorities need champions: people who advocate for, support, and recognize their efforts and contributions.

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 1:35 pm on February 11, 2017 Permalink | Reply
    Tags: Francine Berman, IoT Bill of Rights, Science Node, Toward an ethics of the Internet of Things   

    From Science Node: “Toward an ethics of the Internet of Things” 

    Science Node bloc
    Science Node

    02 Feb, 2017
    Lance Farrell

    Courtesy Esther Emmely Gons. (CC BY 2.0)

    Francine Berman

    The Internet of Things (IoT): let’s define some terms here.

    Internet: Well that’s simple enough — that electronic communications network through which we all gather our mail, news, and cat videos.

    Then what’s this ‘things’ part?

    Examples might help here: Refrigerators, phones, copy machines, baby monitors, automobiles, microwaves, street lights – really, any computerized object that can link to the internet.

    In this brave new world of networked devices, how do we maintain individual rights and manage the IoT safely and ethically?

    That’s the question we put to Francine Berman, the Edward P. Hamilton distinguished professor in computer science at Rensselaer Polytechnic Institute.

    What’s the promise and peril inherent in IoT?

    The IoT has tremendous potential to enhance society, work, and life. Smart, networked systems can make us safer in our homes and vehicles, increase our efficiency at work and at play, empower us through information, and create new opportunities. But technologies have neither social values nor ethics.

    The same systems that can be used to enhance our lives can also be used to facilitate misbehavior. Denial-of-service attacks or hackers can put smart infrastructure systems — and the people they serve — in danger. Remote surveillance systems that allow parents to check on their infants, or adult children to check on aging parents, have also been used to spy on unsuspecting individuals and scream at babies.

    The potential of the IoT will be achieved when we have a common sense of appropriate behavior, social mechanisms to enforce responsibility and accountability, and when we enable technical architectures that incorporate safety, security, and protections. For best results, we need to develop all of these in coordination and not just after technologies have matured.

    Many people assume the rights and protections we enjoy in democratic society are applicable to the IoT realm. Is this not the case?

    Whether we’re dealing with rights and protections in existing scenarios or new ones, the IoT will be a brave new world. We will need to conceptualize, extend, or re-establish a working notion of individual rights and the public good.

    Our mileage will vary: In some cases, rights and protections from other spheres of life will be extensible to the IoT, although the IoT and digital technologies will vastly impact the interpretation and potential consequences of existing policy and law.

    For example, we have seen policy makers and lawmakers struggle to extend copyright law into digital media such as YouTube and apply health information privacy laws to smartphone health apps.

    These scenarios provide vastly different environments than the ones originally covered by law and policy, and those laws and policies will need to evolve to adequately promote responsible behavior in IoT environments.

    The IoT will also necessitate new rights and protections. For example, in environments with embedded surveillance, do you have a right to opt out? It may be that in many instances you don’t. What are your rights and what is society’s responsibility toward you in these environments?

    An IoT ‘Bill of Rights’ sounds like a good idea — to what extent will or won’t it work?

    An IoT Bill of Rights provides an important framework for thinking about the impact of IoT, but will only be as good as its scope, interpretation, and enforcement.

    For example, a ‘right to privacy’ that gives individuals control of the data they generate and the metadata collected about them could ensure control over a digital persona.

    However, the technical infrastructure that implements this right may be challenging to engineer.

    _______________________________________________________________
    “The potential of the IoT will be achieved when we have a common sense of appropriate behavior, social mechanisms to enforce responsibility and accountability, and when we enable technical architectures that incorporate safety, security, and protections.” ~Francine Berman
    _______________________________________________________________

    Will individuals want to or be able to sift through all records for all on-line IoT services and devices they use (smart phone, refrigerator, car, shopping site, browser, etc.) to pick and choose which information is private and what can be shared?

    Will a strong individual right to privacy also make public data in the IoT less valuable? For example, if half of the residents in an area choose to keep the location and images of their homes private, the Google map of that area may cease to be useful.

    How we determine what information is private and under what circumstances, who can control it, who can access it, who can enforce it, and what happens when things go wrong will be critical for a ‘right to privacy’ to be meaningful.

    So how to safely and ethically deploy IoT?


    I don’t think that we should set up a governance system for the IoT without substantive discussion and experience with IoT technologies.

    So now is exactly the right time for thought leadership and exploration. We need to be developing the intellectual groundwork for IoT governance, policy, and laws to understand how to prioritize and promote the public good within the IoT.

    We need to understand intended and unintended consequences for a broad spectrum of potential IoT policy, regulation, and frameworks.

    We need to work now to understand the specifics of how the IoT will impact current social mechanisms or necessitate new ones to create an environment in which the IoT can achieve its highest potential.

    We also need to pilot and experiment with various IoT policies now to gain experience with how various approaches will work. Smart systems, cities, and workplaces can collect information on the success and challenges of various policies, system coordination and security mechanisms, and approaches to data rights, privacy, stewardship and use.

    This is already happening in opportunistic areas like transportation (Who is responsible when a self-driving car has an accident? What information should be private within a vehicle-net?) but is needed for the broader spectrum of IoT scenarios, systems, and devices.

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 5:38 pm on February 8, 2017 Permalink | Reply
    Tags: E-wave propagation index (EPI), Find a better way of predicting blood clots in the heart, Science Node

    From Science Node: “Protecting the heart with supercomputers” 

    Science Node bloc
    Science Node

    30 Jan, 2017 [Where has this been?]
    Aaron Dubrow

    Simulations on the Stampede supercomputer find a better way of predicting blood clots in the heart.

    What pump can run for 80 years without breaking down? Why, the heart, of course.

    But when it does malfunction, the results can be dire. To head these problems off at the pass, researchers harnessed some Texas supercomputers to find a better way to predict which patients are at risk from blood clots.

    Matters of the heart

    Blood clots, frequently the byproduct of a heart weakened by disease or an injury, are among the leading causes of death related to heart disease. Since the chambers of the heart are the largest reservoirs of blood in the body, they are most at risk for generating clots.

    Rajat Mittal led a team that tapped Texas supercomputers to develop a better way to predict blood clot risk. Courtesy Johns Hopkins University.

    The challenge for physicians is predicting when a patient is in danger of developing a blood clot.

    The degree to which a stream of blood penetrates the mitral valve into the left ventricle of the heart is the critical factor. If this mitral jet doesn’t travel deep enough into the ventricle, it can prevent the heart from properly flushing, leading to clots and other consequences.

    The metric that characterizes the jet penetration, the E-wave propagation index (EPI), assesses patient risk of clot formation much more accurately than current tools and procedures.

    “The beauty of the index is that it doesn’t require additional measurements. It reformulates echocardiogram data into a new metric,” says Rajat Mittal, professor of mechanical engineering at Johns Hopkins University and one of the principal investigators on the research. “The clinician doesn’t have to do any additional work.”
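
    To make that concrete, here is a small illustrative calculation. It assumes, purely for illustration, that EPI is the ratio of the E-wave propagation distance (the velocity-time integral of the early-filling E wave) to the left-ventricular length; see the published study for the precise definition and clinical cutoffs:

```python
import numpy as np

def e_wave_propagation_index(e_velocity_cm_s, dt_s, lv_length_cm):
    """Illustrative EPI calculation (assumed form, not the study's own code).

    e_velocity_cm_s : sampled early-filling (E-wave) velocity profile, cm/s
    dt_s            : sampling interval, s
    lv_length_cm    : left-ventricular length, cm
    """
    # Velocity-time integral: roughly how far the mitral jet travels (cm).
    propagation_distance_cm = np.trapz(e_velocity_cm_s, dx=dt_s)
    # Normalize by ventricle length; lower values would suggest the jet does
    # not penetrate deeply enough to flush the ventricle (assumption).
    return propagation_distance_cm / lv_length_cm

# Hypothetical numbers purely to show the call, not patient data:
velocities = 80.0 * np.sin(np.linspace(0.0, np.pi, 40))   # half-sine E wave, 80 cm/s peak
print(e_wave_propagation_index(velocities, dt_s=0.005, lv_length_cm=9.0))
```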

    Mittal’s findings were based on simulations performed at the Texas Advanced Computing Center (TACC) and validated using data from patients who both did and did not experience post-heart attack blood clots.

    TACC bloc

    “Because we understood the fluid dynamics in the heart using our computational models,” Mittal observes, “we can see that the ejection fraction (the current procedure of choice in cardiology) is not able to stratify clot risk, whereas the EPI can very accurately stratify who will get a clot and who will not.”

    Patient computing

    Mittal and his team required large computing resources to derive and test their hypothesis. Run in parallel on 256 to 512 processors, each simulation took several hundred thousand computing hours to complete.

    “This work cannot be done by simulating a single case. Having a large enough sample size to base conclusions on was essential for this research,” Mittal says. “We could never come close to being able to do what we needed to do if it weren’t for the Stampede supercomputer.”
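
    A rough conversion puts those numbers in perspective, assuming a job runs uninterrupted:

```python
# Rough wall-clock estimate for one simulation, using the figures above.
core_hours = 300_000                      # "several hundred thousand" core-hours
for cores in (256, 512):
    days = core_hours / cores / 24
    print(f"{core_hours:,} core-hours on {cores} cores is about {days:.0f} days of wall-clock time")
# About 49 days on 256 cores, or about 24 days on 512 cores, per simulation.
```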

    Dell Poweredge U Texas Austin Stampede Supercomputer. Texas Advanced Computer Center 9.6 PF

    To arrive at their hypothesis, Mittal’s team captured detailed measurements from 13 patients and used those to make patient-specific models of the heart that consider fluid flow, physical structures, and bio-chemistry.

    These models led, in turn, to new insights into the factors that correlate most closely to stagnation in the left ventricle, chief among them, mitral jet penetration.

    Working in collaboration with clinicians, including lead author Thura Harfi of Ohio State University, the team tested their hypothesis using data from 75 individuals — 25 healthy patients, 25 patients who experienced clots in their left ventricle, and 25 patients who had a compromised heart but who didn’t have any clots.

    They found that, based on the EPI measurement, one in every five patients with severe cardiomyopathy who are currently not being treated with anti-clotting drugs would be at risk of a left ventricular clot and would benefit from anticoagulation.

    Future flows

    In addition to establishing the new diagnostic tool for clinicians, Mittal’s research helps advance new, efficient computational models that will be necessary to make patient-specific diagnostics feasible.

    Mittal foresees a time where doctors will perform patient-specific heart simulations routinely to determine the best course of treatment. However, hospitals would need systems hundreds of times faster than a current desktop computer to be able to figure out a solution locally in a reasonable timeframe.

    The team plans to continue to test their hypothesis, applying the EPI metric to a larger dataset. They hope in the future to run a clinical study with a forward-looking analysis.

    “These research results are an important first step to move our basic scientific understanding of the physics of how blood flows in the heart to real-time predictions and treatments for the well-being of patients,” says Ronald Joslin, NSF Fluid Dynamics program director.

    With a better understanding of the mechanics of blood clots and ways to predict them, the researchers have turned their attention to other sources of blood clots, including bio-prosthetic heart valves and atrial fibrillation (AFib) – a quivering or irregular heartbeat that affects around 3 million Americans.

    “The potential for impact in this area is very motivating,” Mittal said, “not just for me but for my collaborators, students and post-docs as well.”

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 9:09 am on January 13, 2017 Permalink | Reply
    Tags: 2016 in review, Science Node

    From Science Node: “2016 in review: A big first year for Science Node” 

    Science Node bloc
    Science Node

    20 Dec, 2016
    Sarah Engel

    Science Node celebrated its first full year in September. As we look back on these last 12 months, we noticed a few patterns worth highlighting.

    By December, we were also celebrating over 24,000 connections across our newsletter and social media platforms.

    This growth is due in no small part to partners like XSEDE, Internet2, Open Science Grid, ESnet, and, of course, Indiana University. We’re also grateful for past support from the US National Science Foundation, CERN (the European Organization for Nuclear Research), and the European Commission (via the e-Science Talk project, as well as others).

    Our growth is about more than connections, though. It’s due in large part to the persistence of Managing Editor Lance Farrell – and behind the scenes help from Indiana University’s Greg Moore. In late 2016, we also welcomed two new writers, Alisa Alering and Tristan Fitzpatrick. You’ve seen some of their work already, and you can expect even more in the coming months.

    Science gets personal

    Citizen science and personalized medicine are two examples of how science now reaches into our daily lives – and promises to, on the one hand, hold us close to discovery and, on the other hand, improve our ability to avoid and manage disease.

    Check out Alisa’s take on how science is closer to us than ever before.

    For the history books

    2016 was also a year of amazing discoveries. Scientists confirmed Albert Einstein’s 100-year-old prediction of gravitational waves when LIGO heard the echo of a massive merger of black holes. Science Node was there to cover the computational collaboration that made the discovery possible.

    We also cheered when astrophysicists revved up galactic-sized supercomputer simulations and discovered evidence of a dark planet lurking at the distant edge of our solar system. All that remains is for Konstantin Batygin to actually locate this planet that the models say must be there!

    Find these stories and more in Tristan’s article about the big science news of the year.

    An international focus

    We’re very proud of our global science community – like the German scientist who used a Swiss supercomputer to spot a lake of lava under an island in the Sea of Japan, and the Australian scientists who adapted a firefighting technique to a supercomputing environment and found a smart way to combat invasive species.

    Explore these examples in Lance’s around the world article.

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 2:49 pm on January 3, 2017 Permalink | Reply
    Tags: A day in the life of a molecular machine, Georgia State University, NERSC - National Energy Research Scientific Computing Center, Science Node

    From Science Node: “A day in the life of a molecular machine” 

    Science Node bloc
    Science Node

    01 Dec, 2016 [Where has this been?]
    Jorge Salazar

    Courtesy Macmillan Publishers Ltd; Yuan He, et al.

    Supercomputers and cryo-electron microscopy take a perfect picture of molecular machines.

    It sounds like something out of Star Trek: Nano-sized robots self-assemble to form biological machines that do the work of living. And yet this is not science fiction – this really happens.

    Every cell in our body has identical DNA, the twisted staircase of nucleic acids uniquely coded to each organism. Molecular machines take pieces of DNA called genes and make a brain cell when needed, instead of, say, a bone cell.

    Model scientist. Ivaylo Ivanov, associate professor of chemistry at Georgia State University, conducted over four million hours of supercomputer simulations to model molecular machines.

    Scientists today are just starting to understand their structure and function using the latest microscopes and supercomputers.

    Cryo-electron microscopy (cryo-EM) combined with supercomputer simulations have created the best model yet of a vital molecular machine, the human pre-initiation complex (PIC).

    “For the first time, structures have been detailed of the complex groups of molecules that open human DNA,” says study co-author Ivaylo Ivanov, associate professor of chemistry at Georgia State University.

    Ivanov led the computational work that modeled the atoms of the different proteins that act like cogs of the PIC molecular machine.

    The experiment began with images painstakingly taken of PIC. They were made by a group led by study co-author Eva Nogales, senior faculty scientist at Lawrence Berkeley National Laboratory.

    Nogales’ group used cryo-EM to freeze human PIC bound to DNA before zapping it with electron beams. Thanks to recent advances, cryo-EM can now image at near atomic resolution large and complicated biological structures that have proven too difficult to crystalize.

    In all, over 1.4 million cryo-EM ‘freeze frames’ of PIC were processed using supercomputers at the National Energy Research Scientific Computing Center (NERSC).

    NERSC CRAY Cori supercomputer
    LBL NERSC Cray XC30 Edison supercomputer

    “Cryo-EM is going through a great expansion,” Nogales says. “It is allowing us to get higher resolution of more structures in different states so that we can describe several pictures showing how they are moving. We don’t see a continuum, but we see snapshots through the process of action.”

    Using eXtreme Science and Engineering Discovery Environment (XSEDE) resources, scientists next built an accurate model that made physical sense of the density maps of PIC.

    Ice queen. Eva Nogales, senior faculty scientist at the Lawrence Berkeley National Laboratory uses cryo-electron microscopy to produce near atomic-level resolution images of molecular structure. Courtesy Eva Nogales.

    To model complex molecular machines, including those for this study, Ivanov’s team ran over four million core hours of simulations on the Stampede supercomputer at the Texas Advanced Computing Center (TACC).
    TACC bloc
    Dell Poweredge U Texas Austin Stampede Supercomputer. Texas Advanced Computer Center 9.6 PF

    The goal of all this computational effort is to produce atomic models that tell the full story of the structure and function of the protein complex of molecules. To get there, Ivanov’s team took the twelve components of the PIC assembly and created homology models for each component that accounted for their amino acid sequences and their relation to similar known protein 3-D structures.

    XSEDE was “absolutely necessary” for this modeling, says Ivanov. “When we include water and counter ions in addition to the PIC complex in a molecular dynamics simulation box, we get the simulation system size of over a million atoms. For that we need to go to a thousand cores. In this case, we went up to two thousand and forty-eight cores – for that we needed Stampede,” Ivanov said.

    One of the insights gained in the study is a working model of how PIC opens the otherwise stable DNA double helix for transcription. Imagine a cord made of two threads twisted around each other, Nogales explains. Hold one end very tightly, then grab the other and twist it in the opposite direction of the threading to unravel the cord. That’s basically how the living machines that keep us alive do it.

    Changing stations. By aligning the three models of holo-PICs, sequential states are morphed with a special focus on the nucleic acids regions. Courtesy Macmillan Publishers Ltd; Yuan He, et al.

    Both scientists said that they are just beginning to get an atomic-level understanding of transcription, crucial to gene expression and ultimately disease.

    “Many disease states come about because there are errors in how much a certain gene is being read and how much a certain protein with a certain activity in the cell is present,” Nogales says. “Those disease states could be due to excess production of the protein, or conversely not enough. It is very important to understand the molecular process that regulates this production so that we can understand the disease state.”

    While this fundamental work does not directly produce cures, it does lay the foundation to help develop them in the future, said Ivanov. “In order to understand disease, we have to understand how these complexes function in the first place… A collaboration between computational modelers and experimental structural biologists could be very fruitful in the future. ”

    The results, “Near-atomic resolution visualization of human transcription promoter opening,” were recently published in Nature.

    The article was authored by Yuan He, Lawrence Berkeley National Laboratory and now at Northwestern University; Chunli Yan and Ivaylo Ivanov, Georgia State University; Jie Fang, Carla Inouye, Robert Tjian, Eva Nogales, UC Berkeley.

    Funding came from the National Institute of General Medical Sciences (NIH) and the National Science Foundation.

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 9:50 pm on December 23, 2016 Permalink | Reply
    Tags: Science Node

    From Science Node: “Supercomputing an earthquake-ready building” 

    Science Node bloc
    Science Node

    19 Dec, 2016
    Tristan Fitzpatrick

    Preparing for an earthquake takes more than luck, thanks to natural hazard engineers and their supercomputers.

    Courtesy Ellen Rathje.

    If someone is inside a building during an earthquake, there isn’t much they can do except duck under a table and hope for the best.

    That’s why designing safe buildings is an important priority for natural hazards researchers.

    Natural hazards engineering involves experimentation, numerical simulation, and data analysis to improve seismic design practices.

    To facilitate this research, the US National Science Foundation (NSF) has invested in the DesignSafe cyberinfrastructure so that researchers can fully harness the vast amount of data available in natural hazards engineering.

    Led by Ellen Rathje at the University of Texas and developed by the Texas Advanced Computing Center (TACC), DesignSafe includes an interactive web interface, repositories to share data sets, and a cloud-based workspace for researchers to perform simulation, computation, data analysis, and other tasks.

    TACC bloc

    For example, scientists may use a device known as a shake table to simulate earthquake motions and measure how buildings respond to them.

    “From a shaking table test we can measure the movements of a building due to a certain seismic loading,” Rathje says, “and then we can develop a numerical model of that building subjected to the same earthquake loading.”

    Researchers then compare the simulation to experimental data that’s been collected previously from observations in the field.

    “In natural hazards engineering, we take advantage of a lot of experimental data,” Rathje says, “and try to couple it with numerical simulations, as well as field data from observations, and bring it all together to make advances.”
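
    Here is a toy sketch of that comparison step, purely for illustration; the signals and the error measures below are made up, not DesignSafe data or tools:

```python
import numpy as np

# Illustrative comparison of a simulated building response against shake-table
# measurements: same seismic input, two displacement time histories.
t = np.linspace(0.0, 10.0, 1001)                        # 10 s at 100 Hz
measured  = 0.05 * np.sin(2 * np.pi * 1.2 * t)          # stand-in roof drift (m)
simulated = 0.048 * np.sin(2 * np.pi * 1.2 * t + 0.05)  # numerical model output

# Two simple goodness-of-fit numbers an engineer might report:
peak_error = abs(measured.max() - simulated.max()) / measured.max()
rms_error  = np.sqrt(np.mean((measured - simulated) ** 2)) / measured.std()

print(f"peak drift error: {peak_error:.1%}, normalized RMS error: {rms_error:.1%}")
```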

    The computational resources of Extreme Science and Engineering Discovery Environment (XSEDE) make these simulations possible. DesignSafe facilitates the use of these resources within the natural hazards engineering research community.

    Taming the tsunami? The 2011 Tohoku tsunami caused severe structural damage and the loss of many lives — almost 16,000 dead, over 6,000 injured, and 2,500 missing. Natural hazards engineers use supercomputer simulations and shake tables to minimize damage by designing safer buildings. Courtesy EPA.

    According to Rathje, the partnership between the two groups benefits both of them, as well as researchers interested in natural hazards engineering.

    Rathje previously researched disasters such as the Haiti earthquake in 2010 and earthquakes in Japan. While the collaboration between XSEDE and TACC is a step forward for natural hazards research, Rathje says it’s just another step toward making buildings safer during earthquakes.

    “There’s still a lot of work to be done in natural hazards engineering,” she admits, “but we’ve been able to bring it all under one umbrella so that natural hazards researchers can come to one place to get the data they need for their research.”

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 6:50 pm on December 11, 2016 Permalink | Reply
    Tags: Science Node, The right way to simulate the Milky Way

    From Science Node: “The right way to simulate the Milky Way” 

    Science Node bloc
    Science Node

    13 Sep, 2016 [Where oh where has this been?]
    Whitney Clavin

    Astronomers have created the most detailed computer simulation to date of our Milky Way galaxy’s formation, from its inception billions of years ago as a loose assemblage of matter to its present-day state as a massive, spiral disk of stars.

    The simulation solves a decades-old mystery surrounding the tiny galaxies that swarm around the outside of our much larger Milky Way. Previous simulations predicted that thousands of these satellite, or dwarf, galaxies should exist. However, only about 30 of the small galaxies have ever been observed. Astronomers have been tinkering with the simulations, trying to understand this ‘missing satellites’ problem to no avail.


    Access mp4 video here .
    Supercomputers and superstars. Caltech associate professor of theoretical astrophysics Phil Hopkins and Carnegie-Caltech research fellow Andrew Wetzel use XSEDE supercomputers to build the most detailed and realistic simulation of galaxy formation ever created. The results solve a decades-long mystery regarding dwarf galaxies around our Milky Way. Courtesy Caltech.

    Now, with the new simulation — which used resources from the Extreme Science and Engineering Discovery Environment (XSEDE) running in parallel for 700,000 central processing unit (CPU) hours — astronomers at the California Institute of Technology (Caltech) have created a galaxy that looks like the one we live in today, with the correct, smaller number of dwarf galaxies.

    “That was the aha moment, when I saw that the simulation can finally produce a population of dwarf galaxies like the ones we observe around the Milky Way,” says Andrew Wetzel, postdoctoral fellow at Caltech and Carnegie Observatories in Pasadena, and lead author of a paper about the new research, published August 20 in Astrophysical Journal Letters.

    One of the main updates to the new simulation relates to how supernovae, explosions of massive stars, affect their surrounding environments. In particular, the simulation incorporated detailed formulas that describe the dramatic effects that winds from these explosions can have on star-forming material and dwarf galaxies. These winds, which reach speeds up to thousands of kilometers per second, “can blow gas and stars out of a small galaxy,” says Wetzel.

    Indeed, the new simulation showed the winds can blow apart young dwarf galaxies, preventing them from reaching maturity. Previous simulations that were producing thousands of dwarf galaxies weren’t taking the full effects of supernovae into account.
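
    To see why such winds matter so much for small galaxies, a rough order-of-magnitude comparison helps; the dwarf-galaxy mass and size below are illustrative round numbers, not values from the study:

```python
import math

# Order-of-magnitude check: can supernova-driven winds escape a dwarf galaxy?
G     = 6.674e-11             # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30              # kg
KPC   = 3.086e19              # m

# Illustrative dwarf galaxy: ~1e9 solar masses within ~1 kiloparsec.
mass   = 1e9 * M_SUN
radius = 1.0 * KPC

v_escape_km_s = math.sqrt(2 * G * mass / radius) / 1e3
print(f"escape velocity ~ {v_escape_km_s:.0f} km/s")     # roughly 100 km/s

# Supernova-driven winds of thousands of km/s easily exceed this, so gas
# (the raw material for new stars) can be expelled from the young dwarf.
```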

    “We had thought before that perhaps our understanding of dark matter was incorrect in these simulations, but these new results show we don’t have to tinker with dark matter,” says Wetzel. “When we more precisely model supernovae, we get the right answer.”

    Astronomers simulate our galaxy to understand how the Milky Way, and our solar system within it, came to be. To do this, the researchers tell a computer what our universe was like in the early cosmos. They write complex codes for the basic laws of physics and describe the ingredients of the universe, including everyday matter like hydrogen gas as well as dark matter, which, while invisible, exerts gravitational tugs on other matter. The computers then go to work, playing out all the possible interactions between particles, gas, and stars over billions of years.

    “In a galaxy, you have 100 billion stars, all pulling on each other, not to mention other components we don’t see, like dark matter,” says Caltech’s Phil Hopkins, associate professor of theoretical astrophysics and principal scientist for the new research. “To simulate this, we give a supercomputer equations describing those interactions and then let it crank through those equations repeatedly and see what comes out at the end.”
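
    The production galaxy-formation codes are far more elaborate (gas dynamics, star formation, supernova feedback, dark matter), but the basic loop Hopkins describes (evaluate the interactions, step forward, repeat) looks like this in miniature. This is a toy gravitational N-body sketch, not the actual simulation code:

```python
import numpy as np

# Toy direct-summation N-body integrator: the "crank through the equations
# repeatedly" loop in miniature. Real galaxy codes add gas physics, feedback,
# and clever algorithms to avoid the O(N^2) force sum.
G, N, dt, softening = 1.0, 200, 1e-3, 0.05
rng = np.random.default_rng(0)
pos = rng.standard_normal((N, 3))        # particle positions
vel = np.zeros((N, 3))                   # particle velocities
mass = np.full(N, 1.0 / N)               # equal-mass particles

def accelerations(pos):
    d = pos[None, :, :] - pos[:, None, :]            # pairwise separations
    r2 = (d ** 2).sum(-1) + softening ** 2           # softened squared distances
    np.fill_diagonal(r2, np.inf)                     # exclude self-force
    return G * (d * (mass / r2 ** 1.5)[:, :, None]).sum(axis=1)

for _ in range(1000):                                # leapfrog: kick-drift-kick
    vel += 0.5 * dt * accelerations(pos)
    pos += dt * vel
    vel += 0.5 * dt * accelerations(pos)
```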

    The researchers are not done simulating our Milky Way. They plan to use even more computing time, up to 20 million CPU hours, in their next rounds. This should lead to predictions about the very faintest and smallest of dwarf galaxies yet to be discovered. Not a lot of these faint galaxies are expected to exist, but the more advanced simulations should be able to predict how many are left to find.

    The study was funded by Caltech, a Sloan Research Fellowship, the US National Science Foundation (NSF), NASA, an Einstein Postdoctoral Fellowship, the Space Telescope Science Institute, UC San Diego, and the Simons Foundation.

    Other coauthors on the study are: Ji-Hoon Kim of Stanford University, Claude-André Faucher-Giguére of Northwestern University, Dušan Kereš of UC San Diego, and Eliot Quataert of UC Berkeley.

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     