Tagged: Computing

  • richardmitnick 1:56 pm on July 9, 2015
    Tags: Computing, Network Computing

    From Symmetry: “More data, no problem” 


    July 09, 2015
    Katie Elyce Jones

    Scientists are ready to handle the increased data of the current run of the Large Hadron Collider.

    Photo by Reidar Hahn, Fermilab

    Physicist Alexx Perloff, a graduate student at Texas A&M University on the CMS experiment, is using data from the first run of the Large Hadron Collider for his thesis, which he plans to complete this year.

    CERN LHC Map
    CERN LHC Grand Tunnel
    CERN LHC particles

    CERN CMS Detector

    When all is said and done, it will have taken Perloff a year and a half to conduct the computing necessary to analyze all the information he needs—not unusual for a thesis.

    But had he used the computing tools LHC scientists are using now, he estimates he could have finished his particular kind of analysis in about three weeks. Although Perloff represents only one scientist working on the LHC, his experience shows the great leaps scientists have made in LHC computing by democratizing their data, becoming more responsive to popular demand and improving their analysis software.

    A deluge of data

    Scientists estimate the current run of the LHC could create up to 10 times more data than the first one. CERN already routinely stores 6 gigabytes (6 billion bytes) of data per second, up from 1 gigabyte per second during the first run.
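A quick back-of-the-envelope calculation shows what those per-second rates imply over a day. The 1 GB/s and 6 GB/s figures come from the article; the derived daily totals below are simple arithmetic, not numbers quoted by CERN:

```python
# What CERN's quoted sustained storage rates imply per day.
# Rates (1 GB/s in Run 1, 6 GB/s now) are from the article; the
# daily totals are derived arithmetic for illustration.

SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

def daily_volume_tb(rate_gb_per_s: float) -> float:
    """Convert a sustained rate in GB/s to terabytes per day (1 TB = 1000 GB)."""
    return rate_gb_per_s * SECONDS_PER_DAY / 1000

run1 = daily_volume_tb(1)   # ~86 TB per day
run2 = daily_volume_tb(6)   # ~518 TB per day
print(f"Run 1: {run1:.0f} TB/day, Run 2: {run2:.0f} TB/day")
```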

    The second run of the LHC is more data-intensive because the accelerator itself is more intense: The collision energy is 60 percent greater, resulting in more “pile-up,” or collisions per proton bunch crossing. Proton bunches are also injected into the ring closer together, resulting in more collisions per second.

    On top of that, the experiments have upgraded their triggers, which automatically choose which of the millions of particle events per second to record. The CMS trigger will now record more than twice as much data per second as it did in the previous run.

    Had CMS and ATLAS scientists relied only on adding more computers to make up for the data hike, they would likely have needed about four to six times more computing power in CPUs and storage than they used in the first run of the LHC.


    To avoid such a costly expansion, they found smarter ways to share and analyze the data.

    Flattening the hierarchy

    Over a decade ago, network connections were less reliable than they are today, so the Worldwide LHC Computing Grid was designed to have different levels, or tiers, that controlled data flow.

    All data recorded by the detectors goes through the CERN Data Centre, known as Tier-0, where it is initially processed, then to a handful of Tier-1 centers in different regions across the globe.

    CERN DATA Center
    One view of the CERN Data Centre

    During the last run, the Tier-1 centers served Tier-2 centers, which were mostly the smaller university computing centers where the bulk of physicists do their analyses.

    “The experience for a user on Run I was more restrictive,” says Oliver Gutsche, assistant head of the Scientific Computing Division for Science Workflows and Operations at Fermilab, the US Tier-1 center for CMS*. “You had to plan well ahead.”

    Now that the network has proved reliable, a new model “flattens” the hierarchy, enabling a user at any ATLAS or CMS Tier-2 center to access data from any of their centers in the world. This was initiated in Run I and is now fully in place for Run II.

    Through a separate upgrade known as data federation, users can also open a file from another computing center through the network, enabling them to view the file without going through the process of transferring it from center to center.

    Another significant upgrade affects the network stateside. Through its Energy Sciences Network, or ESnet, the US Department of Energy increased the bandwidth of the transatlantic network that connects the US CMS and ATLAS Tier-1 centers to Europe. A high-speed network, ESnet transfers data 15,000 times faster than the average home network connection.

    Dealing with the rush

    One of the thrilling things about being a scientist on the LHC is that when something exciting shows up in the detector, everyone wants to talk about it. The downside is everyone also wants to look at it.

    “When data is more interesting, it creates high demand and a bottleneck,” says David Lange, CMS software and computing co-coordinator and a scientist at Lawrence Livermore National Laboratory. “By making better use of our resources, we can make more data available to more people at any time.”

    To avoid bottlenecks, ATLAS and CMS are now making data accessible by popularity.

    “For CMS, this is an automated system that makes more copies when popularity rises and reduces copies when popularity declines,” Gutsche says.
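The article does not describe the internals of CMS’s automated system, but the idea of popularity-driven replica management can be sketched in a few lines. The thresholds, dataset names, and access counts below are invented for illustration:

```python
# Minimal sketch of popularity-driven replica management as described
# above: more accesses -> more copies, fewer accesses -> fewer copies.
# Thresholds, names, and counts are illustrative inventions, not
# CMS's actual system.

def target_replicas(accesses_last_week: int,
                    min_copies: int = 1, max_copies: int = 5) -> int:
    """One extra copy per 100 recent accesses, clamped to [min, max]."""
    extra = accesses_last_week // 100
    return max(min_copies, min(max_copies, 1 + extra))

popularity = {"higgs_2015C": 730, "minbias_2015B": 12, "jetht_2015C": 210}
plan = {ds: target_replicas(n) for ds, n in popularity.items()}
print(plan)  # {'higgs_2015C': 5, 'minbias_2015B': 1, 'jetht_2015C': 3}
```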

    Improving the algorithms

    One of the greatest recent gains in computing efficiency for the LHC relied on the physicists who dig into the data. By working closely with physicists, software engineers edited the algorithms that describe the physics playing out in the LHC, thereby significantly improving processing time for reconstruction and simulation jobs.

    “A huge amount of effort was put in, primarily by physicists, to understand how the physics could be analyzed while making the computing more efficient,” says Richard Mount, senior research scientist at SLAC National Accelerator Laboratory who was ATLAS computing coordinator during the recent LHC upgrades.

    CMS tripled the speed of event reconstruction and halved simulation time. Similarly, ATLAS quadrupled reconstruction speed.

    Algorithms that determine data acquisition on the upgraded triggers were also improved to better capture rare physics events and filter out the background noise of routine (and therefore uninteresting) events.

    “More data” has been the drumbeat of physicists since the end of the first run, and now that it’s finally here, LHC scientists and students like Perloff can pick up where they left off in the search for new physics—anytime, anywhere.

    *While not noted in the article, I believe that Brookhaven National Laboratory is the Tier-1 site for ATLAS in the United States.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.

  • richardmitnick 12:22 pm on April 24, 2015
    Tags: Computing

    From phys.org: “Silicon Valley marks 50 years of Moore’s Law” 


    April 24, 2015
    Pete Carey, San Jose Mercury News

    Plot of CPU transistor counts against dates of introduction; note the logarithmic vertical scale; the line corresponds to exponential growth with transistor count doubling every two years. Credit: Wikipedia

    Computers were the size of refrigerators when an engineer named Gordon Moore laid the foundations of Silicon Valley with a vision that became known as “Moore’s Law.”

    Moore, then the 36-year-old head of research at Fairchild Semiconductor, predicted in a trade magazine article published 50 years ago Sunday that computer chips would double in complexity every year, at little or no added cost, for the next 10 years. In 1975, based on industry developments, he updated the prediction to doubling every two years.

    And for the past five decades, chipmakers have proved him right – spawning scores of new companies and shaping Silicon Valley to this day.

    “If Silicon Valley has a heartbeat, it’s Moore’s Law. It drove the valley at what has been a historic speed, unmatched in history, and allowed it to lead the rest of the world,” said technology consultant Rob Enderle.

    Moore’s prediction quickly became a business imperative for chip companies. Those that ignored the timetable went out of business. Companies that followed it became rich and powerful, led by Intel, the company Moore co-founded.

    Thanks to Moore’s Law, people carry smartphones in their pocket or purse that are more powerful than the biggest computers made in 1965 – or 1995, for that matter. Without it, there would be no slender laptops, no computers powerful enough to chart a genome or design modern medicine’s lifesaving drugs. Streaming video, social media, search, the cloud: none of that would be possible on today’s scale.

    “It fueled the information age,” said Craig Hampel, chief scientist at Rambus, a Sunnyvale semiconductor company. “As you drive around Silicon Valley, 99 percent of the companies you see wouldn’t be here” without cheap computer processors due to Moore’s Law.

    Moore was asked in 1964 by Electronics magazine to write about the future of integrated circuits for the magazine’s April 1965 edition.

    The basic building blocks of the digital age, integrated circuits are chips of silicon that hold tiny switches called transistors. More transistors meant better performance and capabilities.

    Taking stock of how semiconductor manufacturing was shrinking transistors and regularly doubling the number that would fit on an integrated circuit, Moore got some graph paper and drew a line for the predicted annual growth in the number of transistors on a chip. It shot up like a missile, with a doubling of transistors every year for at least a decade.

    It seemed clear to him what was coming, if not to others.

    “Integrated circuits will lead to such wonders as home computers – or at least terminals connected to a central computer – automatic controls for automobiles, and personal portable communications equipment,” he wrote.

    California Institute of Technology professor Carver Mead coined the name Moore’s Law, and as companies competed to produce the most powerful chips, it became a law of survival: double the transistors every year or die.

    “In the beginning, it was just a way of chronicling the progress,” Moore, now 86, said in an interview conducted by Intel. “But gradually, it became something that the various industry participants recognized. … You had to be at least that fast or you were falling behind.”

    Moore’s Law also held prices down because advancing technology made it inexpensive to pack chips with increasing numbers of transistors. If transistors hadn’t gotten cheaper as they grew in number on a chip, integrated circuits would still be a niche product for the military and others able to afford a very high price. Intel’s first microprocessor, or computer on a chip, with 2,300 transistors, cost more than $500 in current dollars. Today, an Intel Core i5 microprocessor has more than a billion transistors and costs $276.
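The numbers above can be sanity-checked against the law itself. Going from 2,300 transistors to just over a billion takes about 19 doublings, which at Moore’s 1975 rate of one doubling every two years works out to roughly 38 years; starting from the first microprocessor in 1971, that lands in the late 2000s, broadly consistent with the chips described:

```python
import math

# Sanity check of the paragraph's figures: how many doublings take a
# chip from 2,300 transistors (Intel's first microprocessor, 1971) to
# the billion-transistor range, and what that implies at one doubling
# every two years.

start, target = 2_300, 1_000_000_000
doublings = math.ceil(math.log2(target / start))  # 19 doublings
years = doublings * 2                              # ~38 years
print(doublings, years, start * 2**doublings)      # 19 38 1205862400
```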

    “That was my real objective: to communicate that we have a technology that’s going to make electronics cheap,” Moore said.

    The reach of Moore’s Law extends beyond personal tech gadgets.

    “The really cool thing about it is it’s not just iPhones,” said G. Dan Hutcheson of VLSI Research, a technology market research company based in Santa Clara. “Every drug developed in the past 20 years or so had to have the computing power to get down and model molecules. They never would have been able to without that power. DNA analysis, genomes, wouldn’t exist; you couldn’t do the genetic testing. It all boils down to transistors.”

    Hutcheson says what Moore predicted was much more than a self-fulfilling prophecy. He had foreseen that optics, chemistry and physics would be combined to shrink transistors over time without substantial added cost.

    As transistors become vanishingly small, it’s harder to keep Moore’s Law going.

    About a decade ago, the shrinking of the physical dimensions led to overheating and stopped major performance boosts for every new generation of chips. Companies responded by introducing so-called multicore processors, which put several computing cores on a single chip.

    “What’s starting to happen is people are looking to other innovations on silicon to give them performance” as a way to extend Moore’s Law, said Spike Narayan, director of science and technology at IBM’s Almaden Research Center.

    Then, about a year and a half ago, “something even more drastic started happening,” Narayan said. The wires connecting transistors became so small that their resistance to electrical current rose. “Big problem,” he said.

    “That’s why you see all the materials research and innovation,” he said of new efforts to find alternative materials and structures for chips.

    Another issue confronting Moore’s Law is that the energy consumed by chips has begun to rise as transistors shrink. “Our biggest challenge” is energy efficiency, said Alan Gara, chief architect of the Aurora supercomputer Intel is building for Argonne National Laboratory near Chicago.

    Intel says it sees a path to continue the growth predicted by Moore’s Law through the next decade. The next generation of processors is in “full development mode,” said Mark Bohr, an Intel senior fellow who leads a group that decides how each generation of Intel chips will be made. Bohr is spending his time on the generation after that, in which transistors will shrink to 7 nanometers. The average human hair is 25,000 nanometers wide.

    At some point the doubling will slow down, says Chenming Hu, an electrical engineering and computer science professor at the University of California, Berkeley. Hu is a key figure in the development of a new transistor structure that’s helping keep Moore’s Law going.

    “It’s totally understandable that a company, in order to gain more market share and beat out all competitors, needs to double and triple if you can,” Hu said. “That’s why this scaling has been going on at such a fast pace. But no exponential growth can go on forever.”

    Hu says what’s likely is that at some point the doubling every two years will slow to every four or five years.

    “And that’s probably a better thing than to flash and fizzle out. You really want to have the same growth at a lower pace.”

    See the full article here.


    About Phys.org in 100 Words

    Phys.org™ (formerly Physorg.com) is a leading web-based science, research and technology news service which covers a full range of topics. These include physics, earth science, medicine, nanotechnology, electronics, space, biology, chemistry, computer sciences, engineering, mathematics and other sciences and technologies. Launched in 2004, Phys.org’s readership has grown steadily to include 1.75 million scientists, researchers, and engineers every month. Phys.org publishes approximately 100 quality articles every day, offering some of the most comprehensive coverage of sci-tech developments world-wide. Quantcast 2009 includes Phys.org in its list of the Global Top 2,000 Websites. Phys.org community members enjoy access to many personalized features such as social networking, a personal home page set-up, RSS/XML feeds, article comments and ranking, the ability to save favorite articles, a daily newsletter, and other options.

  • richardmitnick 2:47 pm on October 22, 2014
    Tags: Computing

    From BNL: “Brookhaven Lab Launches Computational Science Initiative” 

    Brookhaven Lab

    October 22, 2014
    Karen McNulty Walsh, (631) 344-8350 or Peter Genzer, (631) 344-3174

    Leveraging computational science expertise and investments across the Laboratory to tackle “big data” challenges

    Building on its capabilities in computational science and data management, the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory is embarking upon a major new Computational Science Initiative (CSI). This program will leverage computational science expertise and investments across multiple programs at the Laboratory—including the flagship facilities that attract thousands of scientific users each year—further establishing Brookhaven as a leader in tackling the “big data” challenges at the frontiers of scientific discovery. Key partners in this endeavor include nearby universities such as Columbia, Cornell, New York University, Stony Brook, and Yale, and IBM Research.

    Blue Gene/Q Supercomputer at Brookhaven National Laboratory

    “The CSI will bring together under one umbrella the expertise that drives [the success of Brookhaven’s scientific programs] to foster cross-disciplinary collaboration and make optimal use of existing technologies, while also leading the development of new tools and methods that will benefit science both within and beyond the Laboratory.”
    — Robert Tribble

    “Advances in computational science and management of large-scale scientific data developed at Brookhaven Lab have been a key factor in the success of the scientific programs at the Relativistic Heavy Ion Collider (RHIC), the National Synchrotron Light Source (NSLS), the Center for Functional Nanomaterials (CFN), and in biological, atmospheric, and energy systems science, as well as our collaborative participation in international research endeavors, such as the ATLAS experiment at Europe’s Large Hadron Collider,” said Robert Tribble, Brookhaven Lab’s Deputy Director for Science and Technology, who is leading the development of the new initiative. “The CSI will bring together under one umbrella the expertise that drives this success to foster cross-disciplinary collaboration and make optimal use of existing technologies, while also leading the development of new tools and methods that will benefit science both within and beyond the Laboratory.”

    BNL RHIC Campus
    RHIC at BNL

    BNL NSLS Interior
    NSLS at BNL

    A centerpiece of the initiative will be a new Center for Data-Driven Discovery (C3D) that will serve as a focal point for this activity. Within the Laboratory it will drive the integration of intellectual, programmatic, and data/computational infrastructure with the goals of accelerating and expanding discovery by developing critical mass in key disciplines, enabling nimble response to new opportunities for discovery or collaboration, and ultimately integrating the tools and capabilities across the entire Laboratory into a single scientific resource. Outside the Laboratory C3D will serve as a focal point for recruiting, collaboration, and communication.

    The people and capabilities of C3D are also integral to the success of Brookhaven’s key scientific facilities, including those named above, the new National Synchrotron Light Source II (NSLS-II), and a possible future electron ion collider (EIC) at Brookhaven. Hundreds of scientists from Brookhaven and thousands of facility users from universities, industry, and other laboratories around the country and throughout the world will benefit from the capabilities developed by C3D personnel to make sense of the enormous volumes of data produced at these state-of-the-art research facilities.

    BNL NSLS II Photo
    BNL NSLS-II Interior
    NSLS II at BNL

    The CSI in conjunction with C3D will also host a series of workshops/conferences and training sessions in high-performance computing—including annual workshops on extreme-scale data and scientific knowledge discovery, extreme-scale networking, and extreme-scale workflow for integrated science. These workshops will explore topics at the frontier of data-centric, high-performance computing, such as the combination of efficient methodologies and innovative computer systems and concepts to manage and analyze scientific data generated at high volumes and rates.

    “The missions of C3D and the overall CSI are well aligned with the broad missions and goals of many agencies and industries, especially those of DOE’s Office of Science and its Advanced Scientific Computing Research (ASCR) program,” said Robert Harrison, who holds a joint appointment as director of Brookhaven Lab’s Computational Science Center (CSC) and Stony Brook University’s Institute for Advanced Computational Science (IACS) and is leading the creation of C3D.

    The CSI at Brookhaven will specifically address the challenge of developing new tools and techniques to deliver on the promise of exascale science—the ability to compute at a rate of 10^18 floating point operations per second (exaFLOPS), to handle the copious amount of data created by computational models and simulations, and to employ exascale computation to interpret and analyze exascale data anticipated from experiments in the near future.
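To put 10^18 operations per second in perspective, here is a rough comparison. The 100-gigaflop figure for a typical laptop is an assumption made for illustration, not a number from the article:

```python
# Rough perspective on exascale computing. The ~100 GFLOPS laptop
# figure is an illustrative assumption, not from the article.

exaflops = 10**18          # 1 exaFLOPS
laptop_flops = 100 * 10**9  # assumed ~100 GFLOPS laptop

ratio = exaflops // laptop_flops       # exascale is 10,000,000x faster
laptop_days = ratio / (24 * 3600)      # days the laptop needs to match
                                       # one exascale second (~116 days)
print(f"{ratio:,}x faster; {laptop_days:.0f} days per exascale-second")
```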

    “Without these tools, scientific results would remain hidden in the data generated by these simulations,” said Brookhaven computational scientist Michael McGuigan, who will be working on data visualization and simulation at C3D. “These tools will enable researchers to extract knowledge and share key findings.”

    Through the initiative, Brookhaven will establish partnerships with leading universities, including Columbia, Cornell, Stony Brook, and Yale, to tackle “big data” challenges.

    “Many of these institutions are already focusing on data science as a key enabler to discovery,” Harrison said. “For example, Columbia University has formed the Institute for Data Sciences and Engineering with just that mission in mind.”

    Computational scientists at Brookhaven will also seek to establish partnerships with industry. “As an example, partnerships with IBM have been successful in the past with co-design of the QCDOC and BlueGene computer architectures,” McGuigan said. “We anticipate more success with data-centric computer designs in the future.”

    An area that may be of particular interest to industrial partners is how to interface big-data experimental problems (such as those that will be explored at NSLS-II, or in the fields of high-energy and nuclear physics) with high-performance computing using advanced network technologies. “The reality of ‘computing system on a chip’ technology opens the door to customizing high-performance network interface cards and application program interfaces (APIs) in amazing ways,” said Dantong Yu, a group leader and data scientist in the CSC.

    “In addition, the development of asynchronous data access and transports based on remote direct memory access (RDMA) techniques and improvements in quality of service for network traffic could significantly lower the energy footprint for data processing while enhancing processing performance. Projects in this area would be highly amenable to industrial collaboration and lead to an expansion of our contributions beyond system and application development and designing programming algorithms into the new arena of exascale technology development,” Yu said.

    “The overarching goal of this initiative will be to bring under one umbrella all the major data-centric activities of the Lab to greatly facilitate the sharing of ideas, leverage knowledge across disciplines, and attract the best data scientists to Brookhaven to help us advance data-centric, high-performance computing to support scientific discovery,” Tribble said. “This initiative will also greatly increase the visibility of the data science already being done at Brookhaven Lab and at its partner institutions.”

    See the full article here.

    BNL Campus

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

    ScienceSprings relies on technology from

    MAINGEAR computers



  • richardmitnick 9:38 am on August 29, 2014
    Tags: Computing

    From BNL Lab: “DOE ‘Knowledgebase’ Links Biologists, Computer Scientists to Solve Energy, Environmental Issues” 

    Brookhaven Lab

    August 29, 2014
    Rebecca Harrington

    With new tool, biologists don’t have to be programmers to answer big computational questions

    If biologists wanted to determine the likely way a particular gene variant might increase a plant’s yield for producing biofuels, they used to have to track down several databases and cross-reference them using complex computer code. The process would take months, especially if they weren’t familiar with the computer programming necessary to analyze the data.

    Combining information about plants, microbes, and the complex biomolecular interactions that take place inside these organisms into a single, integrated “knowledgebase” will greatly enhance scientists’ ability to access and share data, and use it to improve the production of biofuels and other useful products.

    Now they can do the same analysis in a matter of hours, using the Department of Energy’s Systems Biology Knowledgebase (KBase), a new computational platform to help the biological community analyze, store, and share data. Led by scientists at DOE’s Lawrence Berkeley, Argonne, Brookhaven, and Oak Ridge national laboratories, KBase amasses the data available on plants, microbes, microbial communities, and the interactions among them with the aim of improving the environment and energy production. The computational tools, resources, and community networking available will allow researchers to propose and test new hypotheses, predict biological behavior, design new useful functions for organisms, and perform experiments never before possible.

    “Quantitative approaches to biology were significantly developed during the last decade, and for the first time, we are now in a position to construct predictive models of biological organisms,” said computational biologist Sergei Maslov, who is principal investigator (PI) for Brookhaven’s role in the effort and Associate Chief Science Officer for the overall project, which also has partners at a number of leading universities, Cold Spring Harbor Laboratory, the Joint Genome Institute, the Environmental Molecular Sciences Laboratory, and the DOE Bioenergy Centers. “KBase allows research groups to share and analyze data generated by their project, put it into context with data generated by other groups, and ultimately come to a much better quantitative understanding of their results. Biomolecular networks, which are the focus of my own scientific research, play a central role in this generation and propagation of biological knowledge.”

    Maslov said the team is transitioning from the scientific pilot phase into the production phase and will gradually expand from the limited functionality available now. By signing up for an account, scientists can access the data and tools free of charge, opening the doors to faster research and deeper collaboration.

    Easy coding

    “We implement all the standard tools to operate on this kind of key data so a single PI doesn’t need to go through the hassle by themselves.”
    — Shinjae Yoo, assistant computational scientist working on the project at Brookhaven

    As problems in energy, biology, and the environment get bigger, the data needed to solve them becomes more complex, driving researchers to use more powerful tools to parse through and analyze this big data. Biologists across the country and around the world generate massive amounts of data — on different genes, their natural and synthetic variations, proteins they encode, and their interactions within molecular networks — yet these results often don’t leave the lab where they originated.

    “By doing small-scale experiments, scientists cannot get the system-level understanding of biological organisms relevant to the DOE mission,” said Shinjae Yoo, an assistant computational scientist working on the project at Brookhaven. “But they can use KBase for the analysis of their large-scale data. KBase will also allow them to compare and contrast their data with other key datasets generated by projects funded by the DOE and other agencies. We implement all the standard tools to operate on this kind of key data so a single PI doesn’t need to go through the hassle by themselves.”

    For non-programmers, KBase offers a “Narrative Interface,” allowing them to upload their data to KBase and construct a narrative of their analysis with a series of pre-coded programs that has a human in the middle interpreting and filtering their output.

    In one pre-coded narrative, researchers can filter through naturally occurring variations in the genes of poplar, one of DOE’s flagship bioenergy plant species. Scientists can discover genes associated with a reduced amount of lignin—a cell wall polymer that makes conversion of poplar biomass to biofuels more difficult. In this narrative, scientists can use datasets from KBase and from their own research to find candidate genes, then use networks to select the genes most likely to be related to a specific trait they’re looking for—say, genes that result in reduced lignin content, which could ease the biomass-to-biofuel conversion. And if other researchers wanted to run the same program for a different plant, they could just put different data in the same narrative.
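The filter-and-rank flow described above can be sketched in a few lines. This does not use KBase’s actual APIs; the gene names, scores, and cutoffs are invented for illustration:

```python
# Illustrative sketch of the analysis flow described in the text:
# filter candidate gene variants by association with a trait (here,
# low lignin content) and by support from a co-expression network.
# All data, names, and cutoffs are invented; this is not KBase's API.

candidates = [
    {"gene": "PtLig1", "lignin_assoc": 0.91, "network_score": 0.80},
    {"gene": "PtLig2", "lignin_assoc": 0.35, "network_score": 0.60},
    {"gene": "PtLig3", "lignin_assoc": 0.88, "network_score": 0.20},
]

def top_candidates(genes, assoc_cutoff=0.5, min_network=0.5):
    """Keep genes strongly associated with the trait and well supported
    by the network, ranked by association strength."""
    keep = [g for g in genes
            if g["lignin_assoc"] >= assoc_cutoff
            and g["network_score"] >= min_network]
    return sorted(keep, key=lambda g: g["lignin_assoc"], reverse=True)

print([g["gene"] for g in top_candidates(candidates)])  # ['PtLig1']
```

Swapping in a different plant’s data, as the article notes, would just mean feeding a different `candidates` list through the same steps.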

    “Everything is already there,” Yoo said. “You simply need to upload the data in the right format and run through several easy steps within the narrative.”

    For those who know how to code, KBase has the IRIS Interface, a web-based command line terminal where researchers can run and control the programs on their own, allowing scientists to analyze large volumes of data. If researchers want to learn how to do the coding themselves, KBase also has tutorials and resources to help interested scientists learn it.

    A social network

    But KBase’s most powerful resource is the community itself. Researchers are encouraged to upload their data and programs so that other users can benefit from them. This type of cooperative environment encourages sharing and feedback among researchers, so the programs, tools, and annotation of datasets can improve with other users’ input.

    Brookhaven is leading the plant team on the project, while the microbe and microbial community teams are based at other partner institutions. A computer scientist by training, Yoo said his favorite part of working on KBase has been how much biology he’s learned. Acting as a go-between among the biologists at Brookhaven, who are describing what they’d like to see KBase be able to do, and the computer scientists, who are coding the programs to make it happen, Yoo has had to understand both languages of science.

    “I’m learning plant biology. That’s pretty cool to me,” he said. “In the beginning, it was quite tough. Three years later I’ve caught up, but I still have a lot to learn.”

    Ultimately, KBase aims to interweave huge amounts of data with the right tools and user interface to enable bench scientists without programming backgrounds to answer the kinds of complex questions needed to solve the energy and environmental issues of our time.

    “We can gain systematic understanding of a biological process much faster, and also have a much deeper understanding,” Yoo said, “so we can engineer plant organisms or bacteria to improve productivity, biomass yield—and then use that information for biodesign.”

    KBase is funded by the DOE’s Office of Science. The Office of Science (SC) is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

    See the full article here.

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

    ScienceSprings relies on technology from

    MAINGEAR computers



  • richardmitnick 10:49 am on August 13, 2014 Permalink | Reply
    Tags: , Computing, ,   

    From Fermilab: “Fermilab hosts first C++ school for next generation of particle physicists” 

    Fermilab is an enduring source of strength for the US contribution to scientific research worldwide.

    Wednesday, Aug. 13, 2014

    Fermilab Leah Hesla
    Leah Hesla

    Colliding particle beams without the software know-how to interpret the collisions would be like conducting an archaeological dig without the tools to sift through the artifacts. Without a way to get to the data, you wouldn’t know what you were looking at.

    Eager to keep future particle physicists well-equipped and up to date on the field’s chosen data analysis tools, the Scientific Computing Division‘s Liz Sexton-Kennedy and Sudhir Malik, now a physics professor at the University of Puerto Rico Mayagüez, organized Fermilab’s first C++ software school, which was held last week.

    “C++ is the language of high-energy physics analysis and reconstruction,” Malik said. “There was no organized effort to teach it, so we started this school.”

    Although software skills are crucial for simulating and interpreting particle physics data, physics graduate school programs don’t formally venture into the more utilitarian skill sets. Thus scientists take it upon themselves to learn C++ outside the classroom, either on their own or through discussions with their peers. Usually this self-education is absorbed through examples, whether or not the examples are flawed, Sexton-Kennedy said.

    The school aimed to set its students straight.

    It also looked to increase the numbers of particle physicists fluent in C++, a skill that is useful beyond particle physics. Fields outside academia highly value such expertise — enough that particle physicists are being lured away to jobs in industry.

    “We would lose people who were good at both physics and C++,” Sexton-Kennedy said. “The few of us who stayed behind needed to teach the next generation.”

    The next generation appears to have been waiting for just such an opportunity: Within two weeks of the C++ school opening registration, 80 students signed up. It was so popular that the co-organizers had to start a wait list.

    The software school students include undergraduates, graduate students and postdocs, all of whom work on Fermilab experiments.

    “We get most of the ideas for how to use software for event reconstruction for the LBNE near-detector reference design from these sessions,” said Xinchun Tian, a University of South Carolina postdoc working on the Long-Baseline Neutrino Experiment. “C++ is very useful for our research.”

    Fermilab NUMI Tunnel project
    Fermilab NuMI tunnel

    Fermilab LBNE
    Fermilab LBNE

    University of Wisconsin physics professor Matt Herndon led the sessions. He was assisted by 13 people: University of Notre Dame physics professor Mike Hildreth and volunteers from the SCD Scientific Software Infrastructure Department.

    Malik and Sexton-Kennedy plan to make the school material available online.

    “People have to take these tools seriously, and in high-energy physics, the skills mushroom around C++ software,” Malik said. “Students are learning C++ while growing up in the field.”
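
    A flavor of what such a school teaches: the archetypal high-energy-physics analysis in C++ is a loop over recorded events, selecting those that pass a cut. The sketch below is purely illustrative; the class and function names are hypothetical, not code from any actual experiment framework.

    ```cpp
    #include <cstddef>
    #include <vector>

    // Hypothetical, simplified event record. Real experiment frameworks
    // define far richer classes; every name here is illustrative only.
    struct Particle {
        double pt;   // transverse momentum
        double eta;  // pseudorapidity
    };

    struct Event {
        std::vector<Particle> particles;
    };

    // Count the events containing at least one particle above a
    // transverse-momentum threshold: the archetypal selection loop
    // of a C++ physics analysis.
    std::size_t countPassing(const std::vector<Event>& events, double ptCut) {
        std::size_t passed = 0;
        for (const Event& ev : events) {
            for (const Particle& p : ev.particles) {
                if (p.pt > ptCut) {
                    ++passed;
                    break;  // one qualifying particle is enough for this event
                }
            }
        }
        return passed;
    }
    ```

    Learning to write this kind of loop correctly, rather than absorbing it from possibly flawed examples, is exactly the gap the school set out to close.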

    See the full article here.

    Fermilab Campus

    Fermi National Accelerator Laboratory (Fermilab), located just outside Batavia, Illinois, near Chicago, is a US Department of Energy national laboratory specializing in high-energy particle physics.




  • richardmitnick 12:48 pm on June 18, 2014 Permalink | Reply
    Tags: , Computing, , ,   

    From Princeton: “Familiar yet strange: Water’s ‘split personality’ revealed by computer model” 

    Princeton University
    Princeton University

    June 18, 2014
    Catherine Zandonella, Office of the Dean for Research

    Seemingly ordinary, water has quite puzzling behavior. Why, for example, does ice float when most liquids crystallize into dense solids that sink?

    Using a computer model to explore water as it freezes, a team at Princeton University has found that water’s weird behaviors may arise from a sort of split personality: at very cold temperatures and above a certain pressure, water may spontaneously split into two liquid forms.

    The team’s findings were reported in the journal Nature.

    “Our results suggest that at low enough temperatures water can coexist as two different liquid phases of different densities,” said Pablo Debenedetti, the Class of 1950 Professor in Engineering and Applied Science and Princeton’s dean for research, and a professor of chemical and biological engineering.

    The two forms coexist a bit like oil and vinegar in salad dressing, except that the water separates from itself rather than from a different liquid. “Some of the molecules want to go into one phase and some of them want to go into the other phase,” said Jeremy Palmer, a postdoctoral researcher in the Debenedetti lab.

    The finding that water has this dual nature, if it can be replicated in experiments, could lead to better understanding of how water behaves at the cold temperatures found in high-altitude clouds where liquid water can exist below the freezing point in a “supercooled” state before forming hail or snow, Debenedetti said. Understanding how water behaves in clouds could improve the predictive ability of current weather and climate models, he said.

    Pressure–temperature phase diagram, including an illustration of the liquid–liquid transition line proposed for several polyamorphous materials. This liquid–liquid phase transition would be a first-order, discontinuous transition between low- and high-density liquids (labeled 1 and 2). This is analogous to polymorphism of crystalline materials, where different stable crystalline states (solid 1, 2 in diagram) of the same substance can exist (e.g., diamond and graphite are two polymorphs of carbon). Like the ordinary liquid–gas transition, the liquid–liquid transition is expected to end in a critical point. At temperatures beyond these critical points there is a continuous range of fluid states, i.e., the distinction between liquids and gases is lost. If crystallization is avoided, the liquid–liquid transition can be extended into the metastable supercooled liquid regime.

    The new finding serves as evidence for the “liquid-liquid transition” hypothesis, first suggested in 1992 by Eugene Stanley and co-workers at Boston University and the subject of recent debate. The hypothesis states that the existence of two forms of water could explain many of water’s odd properties — not just floating ice but also water’s high capacity to absorb heat and the fact that water becomes more compressible as it gets colder.

    Princeton University researchers conducted computer simulations to explore what happens to water as it is cooled to temperatures below freezing and found that the supercooled liquid separated into two liquids with different densities. The finding agrees with a two-decade-old hypothesis to explain water’s peculiar behaviors, such as becoming more compressible and less dense as it is cooled. The X axis above indicates the range of crystallinity (Q6) from liquid water (less than 0.1) to ice (greater than 0.5) plotted against density (ρ) on the Y axis. The figure is a two-dimensional projection of water’s calculated “free energy surface,” a measure of the relative stability of different phases, with orange indicating high free energy and blue indicating low free energy. The two large circles in the orange region reveal a high-density liquid at 1.15 g/cm3 and low-density liquid at 0.90 g/cm3. The blue area represents cubic ice, which in this model forms at a density of about 0.88 g/cm3. (Image courtesy of Jeremy Palmer)

    At cold temperatures, the molecules in most liquids slow to a sedate pace, eventually settling into a dense and orderly solid that sinks if placed in liquid. Ice, however, floats in water due to the unusual behavior of its molecules, which as they get colder begin to push away from each other. The result is regions of lower density — that is, regions with fewer molecules crammed into a given volume — amid other regions of higher density. As the temperature falls further, the low-density regions win out, becoming so prevalent that they take over the mixture and freeze into a solid that is less dense than the original liquid.

    The work by the Princeton team suggests that these low-density and high-density regions are remnants of the two liquid phases that can coexist in a fragile, or “metastable” state, at very low temperatures and high pressures. “The existence of these two forms could provide a unifying theory for how water behaves at temperatures ranging from those we experience in everyday life all the way to the supercooled regime,” Palmer said.

    Since the proposal of the liquid-liquid transition hypothesis, researchers have argued over whether it really describes how water behaves. Experiments would settle the debate, but capturing the short-lived, two-liquid state at such cold temperatures and under pressure has proved challenging to accomplish in the lab.

    Instead, the Princeton researchers used supercomputers to simulate the behavior of water molecules — the two hydrogens and the oxygen that make up “H2O” — as the temperature dipped below the freezing point.

    The team used computer code to represent several hundred water molecules confined to a box, surrounded by an infinite number of similar boxes. As they lowered the temperature in this virtual world, the computer tracked how the molecules behaved.
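
    Surrounding one small box by infinite copies of itself is the standard periodic-boundary-condition technique of molecular simulation. A minimal sketch of the minimum-image distance it rests on is shown below; this is an illustration of the general technique, not the group’s production code.

    ```cpp
    #include <cmath>

    // Minimum-image convention: in a periodic box of side boxLength, the
    // separation between two particles is measured to the nearest periodic
    // image, wrapping the raw coordinate difference into (-L/2, L/2].
    // Production molecular-dynamics codes apply this per 3D component and
    // combine it with cutoffs and neighbor lists; this is the bare idea.
    double minimumImage(double dx, double boxLength) {
        return dx - boxLength * std::round(dx / boxLength);
    }
    ```

    With this wrapping, a particle near one wall of the box correctly “sees” particles near the opposite wall as close neighbors, so a few hundred molecules can stand in for bulk water.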

    The team found that under certain conditions — about minus 45 degrees Celsius and about 2,400-times normal atmospheric pressure — the virtual water molecules separated into two liquids that differed in density.

    The pattern of molecules in each liquid also was different, Palmer said. Although most other liquids are a jumbled mix of molecules, water has a fair amount of order to it. The molecules link to their neighbors via hydrogen bonds, which form between the oxygen of one molecule and a hydrogen of another. These molecules can link — and later unlink — in a constantly changing network. On average, each H2O links to four other molecules in what is known as a tetrahedral arrangement.

    The researchers found that the molecules in the low-density liquid also contained tetrahedral order, but that the high-density liquid was different. “In the high-density liquid, a fifth neighbor molecule was trying to squeeze into the pattern,” Palmer said.
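
    The degree of tetrahedral order described here is commonly quantified with the Errington–Debenedetti order parameter q, which equals 1 for a perfect tetrahedron and falls as neighbors squeeze out of position. A hedged sketch follows, assuming unit vectors from a central oxygen to its four nearest neighbors; it illustrates the standard formula, not the paper’s specific analysis code.

    ```cpp
    #include <array>
    #include <cmath>

    using Vec3 = std::array<double, 3>;

    double dot(const Vec3& a, const Vec3& b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }

    // Tetrahedral order parameter:
    //   q = 1 - (3/8) * sum over neighbor pairs of (cos(theta) + 1/3)^2
    // where theta is the angle subtended at the central molecule by two of
    // its four nearest neighbors. In a perfect tetrahedron every pair has
    // cos(theta) = -1/3, so every term vanishes and q = 1.
    double tetrahedralOrder(const std::array<Vec3, 4>& n) {
        double penalty = 0.0;
        for (int j = 0; j < 3; ++j) {
            for (int k = j + 1; k < 4; ++k) {
                double c = dot(n[j], n[k]);  // cos(theta) for unit vectors
                penalty += (c + 1.0 / 3.0) * (c + 1.0 / 3.0);
            }
        }
        return 1.0 - (3.0 / 8.0) * penalty;
    }
    ```

    A fifth molecule pushing into the shell distorts the four neighbor directions and drives q downward, which is one way the two liquids’ local structures can be told apart.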

    Normal ice (left) contains water molecules linked into ring-like structures via hydrogen bonds (dashed blue lines) between the oxygen atoms (red beads) and hydrogen atoms (white beads) of neighboring molecules, with six water molecules per ring. Each water molecule in ice also has four neighbors that form a tetrahedron (right), with a center molecule linked via hydrogen bonds to four neighboring molecules. The green lines indicate the edges of the tetrahedron. Water molecules in liquid water form distorted tetrahedrons and ring structures that can contain more or less than six molecules per ring. (Image courtesy of Jeremy Palmer)

    The researchers also looked at another facet of the two liquids: the tendency of the water molecules to form rings via hydrogen bonds. Ice consists of six water molecules per ring. Calculations by Fausto Martelli, a postdoctoral research associate advised by Roberto Car, the Ralph W. *31 Dornte Professor in Chemistry, found that in this computer model the average number of molecules per ring decreased from about seven in the high-density liquid to just above six in the low-density liquid, but then climbed slightly before declining again to six molecules per ring as ice, suggesting that there is more to be discovered about how water molecules behave during supercooling.

    A better understanding of water’s behavior at supercooled temperatures could lead to improvements in modeling the effect of high-altitude clouds on climate, Debenedetti said. Because water droplets reflect and scatter the sunlight coming into the atmosphere, clouds play a role in whether the sun’s energy is reflected away from the planet or is able to enter the atmosphere and contribute to warming. Additionally, because water goes through a supercooled phase before forming hail or snow, such research may aid strategies for preventing ice from forming on airplane wings.

    “The research is a tour de force of computational physics and provides a splendid academic look at a very difficult problem and a scholarly controversy,” said C. Austen Angell, professor of chemistry and biochemistry at Arizona State University, who was not involved in the research. “Using a particular computer model, the Debenedetti group has provided strong support for one of the theories that can explain the outstanding properties of real water in the supercooled region.”

    In their computer simulations, the team used an updated version of a model noted for its ability to capture many of water’s unusual behaviors. The model was first developed in 1974 by Frank Stillinger, then at Bell Laboratories in Murray Hill, N.J., and now a senior chemist at Princeton, and by Aneesur Rahman, then at the U.S. Argonne National Laboratory. The same model was used to develop the liquid-liquid transition hypothesis.

    Collectively, the work took several million computer hours, which would take several human lifetimes using a typical desktop computer, Palmer said. In addition to the initial simulations, the team verified the results using six calculation methods. The computations were performed at Princeton’s High-Performance Computing Research Center’s Terascale Infrastructure for Groundbreaking Research in Science and Engineering (TIGRESS).

    The team included Yang Liu, who earned her doctorate at Princeton in 2012, and Athanassios Panagiotopoulos, the Susan Dod Brown Professor of Chemical and Biological Engineering.

    Support for the research was provided by the National Science Foundation (CHE 1213343) and the U.S. Department of Energy (DE-SC0002128 and DE-SC0008626).

    The article, Metastable liquid-liquid transition in a molecular model of water, by Jeremy C. Palmer, Fausto Martelli, Yang Liu, Roberto Car, Athanassios Z. Panagiotopoulos and Pablo G. Debenedetti, appeared in the journal Nature.

    See the full article here.

    About Princeton: Overview

    Princeton University is a vibrant community of scholarship and learning that stands in the nation’s service and in the service of all nations. Chartered in 1746, Princeton is the fourth-oldest college in the United States. Princeton is an independent, coeducational, nondenominational institution that provides undergraduate and graduate instruction in the humanities, social sciences, natural sciences and engineering.

    As a world-renowned research university, Princeton seeks to achieve the highest levels of distinction in the discovery and transmission of knowledge and understanding. At the same time, Princeton is distinctive among research universities in its commitment to undergraduate teaching.

    Today, more than 1,100 faculty members instruct approximately 5,200 undergraduate students and 2,600 graduate students. The University’s generous financial aid program ensures that talented students from all economic backgrounds can afford a Princeton education.

    Princeton Shield


  • richardmitnick 7:31 am on May 15, 2014 Permalink | Reply
    Tags: , Computing,   

    From Sandia Lab: “The brain: key to a better computer” 

    Sandia Lab

    May 15, 2014
    Sue Holmes, sholmes@sandia.gov, (505) 844-6362

    Your brain is incredibly well-suited to handling whatever comes along, plus it’s tough and operates on little energy. Those attributes — dealing with real-world situations, resiliency and energy efficiency — are precisely what might be possible with neuro-inspired computing.

    “Today’s computers are wonderful at bookkeeping and solving scientific problems often described by partial differential equations, but they’re horrible at just using common sense, seeing new patterns, dealing with ambiguity and making smart decisions,” said John Wagner, cognitive sciences manager at Sandia National Laboratories.

    In contrast, the brain is “proof that you can have a formidable computer that never stops learning, operates on the power of a 20-watt light bulb and can last a hundred years,” he said.

    Although brain-inspired computing is in its infancy, Sandia has included it in a long-term research project whose goal is future computer systems. Neuro-inspired computing seeks to develop algorithms that would run on computers that function more like a brain than a conventional computer.

    Sandia National Laboratories researchers are drawing inspiration from neurons in the brain, such as these green fluorescent protein-labeled neurons in a mouse neocortex, with the aim of developing neuro-inspired computing systems. Although brain-inspired computing is in its infancy, Sandia has included it in a long-term research project whose goal is future computer systems. (Photo by Frances S. Chance, courtesy of Janelia Farm Research Campus)

    “We’re evaluating what the benefits would be of a system like this and considering what types of devices and architectures would be needed to enable it,” said microsystems researcher Murat Okandan.

    Sandia’s facilities and past research make the laboratories a natural for this work: its Microsystems & Engineering Science Applications (MESA) complex, a fabrication facility that can build massively interconnected computational elements; its computer architecture group and its long history of designing and building supercomputers; strong cognitive neurosciences research, with expertise in such areas as brain-inspired algorithms; and its decades of work on nationally important problems, Wagner said.

    New technology often is spurred by a particular need. Early conventional computing grew from the need for neutron diffusion simulations and weather prediction. Today, big data problems and remote autonomous and semiautonomous systems need far more computational power and better energy efficiency.

    Neuro-inspired computers would be ideal for robots, remote sensors

    Neuro-inspired computers would be ideal for operating such systems as unmanned aerial vehicles, robots and remote sensors, and solving big data problems, such as those the cyber world faces and analyzing transactions whizzing around the world, “looking at what’s going where and for what reason,” Okandan said.

    Such computers would be able to detect patterns and anomalies, sensing what fits and what doesn’t. Perhaps the computer wouldn’t find the entire answer, but could wade through enormous amounts of data to point a human analyst in the right direction, Okandan said.

    “If you do conventional computing, you are doing exact computations and exact computations only. If you’re looking at neurocomputation, you are looking at history, or memories in your sort of innate way of looking at them, then making predictions on what’s going to happen next,” he said. “That’s a very different realm.”

    Modern computers are largely calculating machines with a central processing unit and memory that stores both a program and data. They take a command from the program and data from the memory to execute the command, one step at a time, no matter how fast they run. Parallel and multicore computers can do more than one thing at a time but still use the same basic approach and remain very far removed from the way the brain routinely handles multiple problems concurrently.

    The architecture of neuro-inspired computers would be fundamentally different, uniting processing and storage in a network architecture “so the pieces that are processing the data are the same pieces that are storing the data, and the data will be processed with all nodes functioning concurrently,” Wagner said. “It won’t be a serial step-by-step process; it’ll be this network processing everything all at the same time. So it will be very efficient and very quick.”
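
    A toy illustration of “compute where you store”: in a leaky integrate-and-fire node, the same state variable serves as both the memory and the quantity being processed at each step. Every parameter value below is arbitrary and chosen only for illustration; this is a sketch of the general idea, not any Sandia design.

    ```cpp
    // Toy leaky integrate-and-fire neuron. The membrane potential is the
    // node's stored state AND the thing each input updates, loosely
    // illustrating how neuro-inspired architectures unite processing and
    // storage in the same element.
    struct Neuron {
        double potential = 0.0;

        // Returns true if the neuron "spikes" on this step.
        bool step(double input, double leak = 0.9, double threshold = 1.0) {
            potential = potential * leak + input;  // decay old state, add new input
            if (potential >= threshold) {
                potential = 0.0;  // spike, then reset
                return true;
            }
            return false;
        }
    };
    ```

    Because the potential decays rather than being overwritten, a node’s output depends on the recent history of its inputs, a crude echo of the episodic, time-aware behavior described above.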

    Unlike today’s computers, neuro-inspired computers would inherently use the critical notion of time. “The things that you represent are not just static shots, but they are preceded by something and there’s usually something that comes after them,” creating episodic memory that links what happens when. This requires massive interconnectivity and a unique way of encoding information in the activity of the system itself, Okandan said.

    More neurosciences research opens more possibilities for brain-inspired computing

    Each neuron in a neural structure can have connections coming in from about 10,000 neurons, which in turn can connect to 10,000 other neurons in a dynamic way. Conventional computer transistors, on the other hand, connect on average to four other transistors in a static pattern.

    Computer design has drawn from neuroscience before, but an explosion in neuroscience research in recent years opens more possibilities. While it’s far from a complete picture, Okandan said what’s known offers “more guidance in terms of how neural systems might be representing data and processing information” and clues about replicating those tasks in a different structure to address problems impossible to solve on today’s systems.

    Brain-inspired computing isn’t the same as artificial intelligence, although a broad definition of artificial intelligence could encompass it.

    “Where I think brain-inspired computing can start differentiating itself is where it really truly tries to take inspiration from biosystems, which have evolved over generations to be incredibly good at what they do and very robust against a component failure. They are very energy efficient and very good at dealing with real-world situations. Our current computers are very energy inefficient, they are very failure-prone due to components failing and they can’t make sense of complex data sets,” Okandan said.

    Computers today do required computations without any sense of what the data is — it’s just a representation chosen by a programmer.

    “Whereas if you think about neuro-inspired computing systems, the structure itself will have an internal representation of the datastream that it’s receiving and previous history that it’s seen, so ideally it will be able to make predictions on what the future states of that datastream should be, and have a sense for what the information represents,” Okandan said.

    He estimates a project dedicated to brain-inspired computing will develop early examples of a new architecture in the first several years, but said higher levels of complexity could take decades, even with the many efforts around the world working toward the same goal.

    “The ultimate question is, ‘What are the physical things in the biological system that let you think and act, what’s the core essence of intelligence and thought?’ That might take just a bit longer,” he said.

    For more information, visit the 2014 Neuro-Inspired Computational Elements Workshop website.

    See the full article here.

    Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy’s National Nuclear Security Administration. With main facilities in Albuquerque, N.M., and Livermore, Calif., Sandia has major R&D responsibilities in national security, energy and environmental technologies, and economic competitiveness.


  • richardmitnick 8:31 am on April 20, 2014 Permalink | Reply
    Tags: , Computing, , ,   

    From Berkeley Lab: “Discovery of New Semiconductor Holds Promise for 2D Physics and Electronics” 

    Berkeley Lab

    March 20, 2014
    Lynn Yarris (510) 486-5375 lcyarris@lbl.gov

    From super-lubricants, to solar cells, to the fledgling technology of valleytronics, there is much to be excited about with the discovery of a unique new two-dimensional semiconductor, rhenium disulfide, by researchers at Berkeley Lab’s Molecular Foundry. Rhenium disulfide, unlike molybdenum disulfide and other dichalcogenides, behaves electronically as if it were a 2D monolayer even as a 3D bulk material. This not only opens the door to 2D electronic applications with a 3D material, it also makes it possible to study 2D physics with easy-to-make 3D crystals.

    Nano-beam electron diffraction pattern of rhenium disulfide with a zoom-in insert image reveals a quasi-hexagonal reflection pattern.

    “Rhenium disulfide remains a direct-bandgap semiconductor, its photoluminescence intensity increases while its Raman spectrum remains unchanged, even with the addition of increasing numbers of layers,” says Junqiao Wu, a physicist with Berkeley Lab’s Materials Sciences Division who led this discovery. “This makes bulk crystals of rhenium disulfide an ideal platform for probing 2D excitonic and lattice physics, circumventing the challenge of preparing large-area, single-crystal monolayers.”

    Wu, who is also a professor with the University of California-Berkeley’s Department of Materials Science and Engineering, headed a large international team of collaborators who used the facilities at the Molecular Foundry, a U.S. Department of Energy (DOE) national nanoscience center, to prepare and characterize individual monolayers of rhenium disulfide. Through a variety of spectroscopy techniques, they studied these monolayers both as stacked multilayers and as bulk materials. Their study revealed that the uniqueness of rhenium disulfide stems from a disruption in its crystal lattice symmetry called a Peierls distortion.

    Sefaattin Tongay was the lead author of a Nature Communications paper announcing the discovery of rhenium disulfide. (Photo by Roy Kaltschmidt)

    “Semiconducting transition metal dichalcogenides consist of monolayers held together by weak forces,” says Sefaattin Tongay, lead author of a paper describing this research in Nature Communications for which Wu was the corresponding author. The paper was titled Monolayer behaviour in bulk ReS2 due to electronic and vibrational decoupling.

    “Typically the monolayers in semiconducting transition metal dichalcogenides, such as molybdenum disulfide, are relatively strongly coupled, but isolated monolayers show large changes in electronic structure and lattice vibration energies,” Tongay says. “The result is that in bulk these materials are indirect gap semiconductors and in the monolayer they are direct gap.”

    What Tongay, Wu and their collaborators found in their characterization studies was that rhenium disulfide contains seven valence electrons as opposed to the six valence electrons of molybdenum disulfide and other transition metal dichalcogenides. This extra valence electron prevents strong interlayer coupling between multiple monolayers of rhenium disulfide.

    “The extra electron is eventually shared between two rhenium atoms, which causes the atoms to move closer to one another, forming quasi-one-dimensional chains within each layer and creating the Peierls distortion in the lattice,” Tongay says. “Once the Peierls distortion takes place, interlayer registry is largely lost, resulting in weak interlayer coupling and monolayer behavior in the bulk.”

    Atomic structure of a monolayer of rhenium disulfide shows the dimerization of the rhenium atoms as a result of the Peierls distortion, forming a rhenium chain denoted by the red zigzag line.

    Rhenium disulfide’s weak interlayer coupling should make this material highly useful in tribology and other low-friction applications. Since rhenium disulfide also exhibits strong interactions between light and matter that are typical of monolayer semiconductors, and since the bulk rhenium disulfide behaves as if it were a monolayer, the new material should also be valuable for solar cell applications. It might also be a less expensive alternative to diamond for valleytronics.

    In valleytronics, the wave quantum number of the electron in a crystalline material is used to encode information. This number is derived from the spin and momentum of an electron moving through a crystal lattice as a wave with energy peaks and valleys. Encoding information when the electrons reside in these minimum energy valleys offers a highly promising potential new route to quantum computing and ultrafast data-processing.

    “Rhenium atoms have a relatively large atomic weight, which means electron spin-orbit interactions are significant,” Tongay says. “This could make rhenium disulfide an ideal material for valleytronics applications.”

    The collaboration is now looking at ways to tune the properties of rhenium disulfide in both monolayer and bulk crystals through engineered defects in the lattice and selective doping. They are also looking to alloy rhenium disulfide with other members of the dichalcogenide family.

    Other authors of the Nature Communications paper in addition to Wu and Tongay were Hasan Sahin, Changhyun Ko, Alex Luce, Wen Fan, Kai Liu, Jian Zhou, Ying-Sheng Huang, Ching-Hwa Ho, Jinyuan Yan, Frank Ogletree, Shaul Aloni, Jie Ji, Shushen Li, Jingbo Li, and F. M. Peeters.

    This research was primarily supported by the DOE Office of Science.

    See the full article here.

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal


  • richardmitnick 11:35 am on March 31, 2014 Permalink | Reply
    Tags: , , Computing, ,   

    From Brookhaven Lab: “Generations of Supercomputers Pin Down Primordial Plasma” 

    Brookhaven Lab

    March 31, 2014
    Justin Eure

    As one groundbreaking IBM system retires, a new Blue Gene supercomputer comes online at Brookhaven Lab to help precisely model subatomic interactions

    Brookhaven Lab physicists Peter Petreczky and Chulwoo Jung with technology architect Joseph DePace—who oversees operations and maintenance of the Lab’s supercomputers—in front of the Blue Gene/Q supercomputer.

    Supercomputers are constantly evolving to meet the increasing complexity of calculations ranging from global climate models to cosmic inflation. The bigger the puzzle, the more scientists and engineers push the limits of technology forward. Imagine, then, the advances driven by scientists seeking the code behind our cosmos.

    This mutual push and pull of basic science and technology plays out every day among physicists at the U.S. Department of Energy’s Brookhaven National Laboratory. The Lab’s Lattice Gauge Theory Group—led by physicist Frithjof Karsch—hunts for equations to describe the early universe and the forces binding matter together. Their search spans generations of supercomputers and parallels studies of the primordial plasma discovered and explored at Brookhaven’s Relativistic Heavy Ion Collider (RHIC).

    Brookhaven RHIC

    “You need more than just pen and paper to recreate the quantum-scale chemistry unfolding at the foundations of matter—you need supercomputers,” said Brookhaven Lab physicist Peter Petreczky. “The racks of IBM’s Blue Gene/L hosted here just retired after six groundbreaking years, but the cutting-edge Blue Gene/Q is now online to keep pushing nuclear physics forward.”

    Equations to Describe the Dawn of Time

    When RHIC smashes gold ions together at nearly the speed of light, the trillion-degree collisions melt the protons inside each atom. The quarks and gluons inside then break free for a fraction of a second, mirroring the ultra-hot conditions of the universe just microseconds after the Big Bang. This remarkable matter, called quark-gluon plasma, surprised physicists by exhibiting zero viscosity—it behaved like a perfect, friction-free liquid. But this raised new questions: how and why?

    Cosmic Microwave Background Planck
    Cosmic Microwave Background by ESA/Planck

    Armed with the right equations of state, scientists can begin to answer that question and model that perfect plasma at each instant. This very real quest revolves in part around the very artificial: computer simulations.

    “If our equations are accurate, the laws of physics hold up through the simulations and we gain a new and nuanced vocabulary to characterize and predict truly fundamental interactions,” Karsch said. “If we’re wrong, the simulation produces something very different from reality. We’re in the business of systematically eliminating uncertainties.”

    Building a Quantum Grid

    Quantum chromodynamics (QCD) is the theoretical framework that describes these particle interactions on the subatomic scale. But even the most sophisticated computer can’t replicate the full QCD complexity that plays out in reality.

    “To split that sea of information into discrete pieces, physicists developed a four-dimensional grid of space-time points called the lattice,” Petreczky said. “We increase the density of this lattice as technology evolves, because the closer we pack our lattice-bound particles, the closer we approximate reality.”

    Imagine a laser grid projected into a smoke-filled room, transforming that swirling air into individual squares. Each intersection in that grid represents a data point that can be used to simulate the flow of the actual smoke. In fact, scientists use this same lattice-based approximation in fields as diverse as climate science and nuclear fusion.
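The pack-the-lattice-tighter idea can be seen in a one-dimensional toy: estimating a derivative by finite differences on grids of increasing density. This is purely an illustrative sketch, not lattice QCD code; the function, the grid sizes, and the choice of stencil are assumptions made for the example.

```python
import math

def grid_error(n):
    """Central-difference estimate of d/dx sin(x) at x = 1.0 on a grid
    with spacing 1/n; returns the absolute error versus the exact
    answer cos(1.0). Denser grids (larger n) come closer to the
    continuum result."""
    a = 1.0 / n  # lattice spacing
    x = 1.0
    approx = (math.sin(x + a) - math.sin(x - a)) / (2 * a)
    return abs(approx - math.cos(x))

# Doubling the density shrinks the error roughly fourfold for this
# second-order stencil.
for n in (16, 32, 64):
    print(f"n = {n:3d}  error = {grid_error(n):.2e}")
```

In four dimensions the same refinement multiplies the number of grid points by 16 at every doubling, which is part of why lattice calculations demand supercomputers.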

    As QCD scientists incorporated more and more subatomic details into an ever-denser grid—including the full range of quark and gluon types—the mathematical demands leapt exponentially.

    QCD on a Chip

    Physicist Norman Christ, a Columbia University professor and frequent Brookhaven Lab collaborator, partnered with supercomputing powerhouse IBM to tackle the unprecedented hardware challenge for QCD simulations. The new system would need a relatively small physical footprint, good temperature control, and a combination of low power and high processor density.

    The result was the groundbreaking QCDOC, or QuantumChromoDynamics On a Chip. QCDOC came online in 2004 with a processing power of 10 teraflops, or 10 trillion floating-point operations per second, a standard measure of computing performance.
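To get a feel for what a teraflop rating means in practice, here is a back-of-the-envelope sketch converting an operation count into wall-clock time. The job size is a hypothetical number chosen only for illustration, and the sketch idealizes by assuming the quoted rate is sustained for the whole run.

```python
SECONDS_PER_DAY = 86_400

def runtime_days(total_ops, flops):
    """Days needed to execute `total_ops` floating-point operations on a
    machine sustaining `flops` operations per second (idealized: assumes
    the full rate is sustained for the entire job)."""
    return total_ops / flops / SECONDS_PER_DAY

# Hypothetical workload, chosen purely for illustration.
JOB_OPS = 1.0e21

print(f"10 TFLOPS : {runtime_days(JOB_OPS, 10e12):10.1f} days")   # QCDOC-class
print(f"600 TFLOPS: {runtime_days(JOB_OPS, 600e12):10.1f} days")  # Blue Gene/Q-class
```

The ratio of the two runtimes is simply the ratio of the sustained rates, a factor of 60 between a QCDOC-class and a Blue Gene/Q-class machine.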

    “The specific needs of Christ and his collaborators actually revolutionized and rejuvenated supercomputing in this country,” said physicist Berndt Mueller, who leads Brookhaven Lab’s Nuclear and Particle Physics directorate. “The new architecture developed for QCD simulations was driven by these fundamental physics questions. That group laid the foundation for generations of IBM supercomputers that routinely rank among the world’s most powerful.”

    Generations of Giants

    The first QCDOC simulations featured lattices with 16 points in each spatial direction—a strong starting point and testing ground for QCD hypotheses, but a far cry from definitive. Building on QCDOC, IBM launched its Blue Gene series of supercomputers. In fact, the chief architect for all three generations of these highly scalable, general-purpose machines was physicist Alan Gara, who did experimental work at Fermilab [Tevatron] and CERN’s Large Hadron Collider before being recruited by IBM.

    “We had the equation of state for quark-gluon plasma prepared for publication in 2007 based on QCDOC calculations,” Petreczky said, “but it was not as accurate as we hoped. Additional work on the newly installed Blue Gene/L gave us confidence that we were on the right track.”

    The New York Blue system—led by Stony Brook University and Brookhaven Lab with funding from New York State—added 18 racks of Blue Gene/L and two racks of Blue Gene/P in 2007. This 100-teraflop boost doubled the QCD model density to 32 lattice points and ran simulations some 10 million times more complex. Throughout this period, lattice theorists also used Blue Gene supercomputers at DOE’s Argonne and Lawrence Livermore national labs.

    The 600-teraflop Blue Gene/Q came online at Brookhaven Lab in 2013, packing the processing power of 18 racks of Blue Gene/P into just three racks. This new system signaled the end for Blue Gene/L, which went offline in January 2014. Both QCDOC and Blue Gene/Q were developed in close partnership with RIKEN, a leading Japanese research institution.

    “Exciting as it is, moving across multiple systems is also a bit of a headache,” Petreczky said. “Before we get to the scientific simulations, there’s a long transition period and a tremendous amount of code writing. Chulwoo Jung, one of our group members, takes on a lot of that crucial coding.”

    Pinning Down Fundamental Fabric

    Current simulations of QCD matter feature 64 spatial lattice points in each direction, allowing physicists an unprecedented opportunity to map the quark-gluon plasma created at RHIC and explore the strong nuclear force. The Lattice Gauge Theory collaboration continues to run simulations and plans to extend the equations of state to cover all the energy levels achieved at both RHIC and the Large Hadron Collider at CERN.
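The jump from 16 to 32 to 64 points per direction is steeper than it sounds, because the lattice is four-dimensional. A small sketch of the raw site counts follows; taking the time extent equal to the spatial extent is an assumption for illustration (real ensembles vary), and total simulation cost grows faster still than the site count, since finer lattices also demand smaller step sizes and harder linear solves.

```python
def lattice_sites(n_spatial, n_time):
    """Number of space-time points in a lattice with n_spatial sites in
    each of the three spatial directions and n_time sites in time."""
    return n_spatial ** 3 * n_time

for n in (16, 32, 64):
    # Take the time extent equal to the spatial extent for simplicity.
    print(f"{n:3d} points/direction -> {lattice_sites(n, n):>12,} space-time sites")
```

Each doubling of the density multiplies the site count by 16, so 64 points per direction means 256 times as many sites as the original 16-point QCDOC lattices.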

    The equations already ironed out by Brookhaven’s theorists apply to everything from RHIC’s friction-free superfluid to physics beyond the standard model—including the surprising spin of muons in the g-2 experiment and rare meson decays at Fermilab.

    “This is the beauty of pinning down fundamental interactions: the foundations of matter are literally universal,” Petreczky said. “And only a few groups in the world are describing this particular aspect of our universe.”

    Additional Brookhaven Lab lattice theorists include Michael Creutz, Christoph Lehner, Taku Izubuchi, Swagato Mukherjee, and Amarjit Soni.

    The Brookhaven Computational Science Center (CSC) hosts the IBM Blue Gene supercomputers and Intel clusters used by scientists across the Lab. The CSC brings together researchers in biology, chemistry, physics and medicine with applied mathematicians and computer scientists to take advantage of the new opportunities for scientific discovery made possible by modern computers. The CSC is supported by DOE’s Office of Science.

    See the full article here.

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

  • richardmitnick 11:33 am on March 11, 2014 Permalink | Reply
    Tags: , Computing,   

    From Fermilab: “Network connections: universities compute for particle physics” 

    Fermilab is an enduring source of strength for the US contribution to scientific research worldwide.

    Tuesday, March 11, 2014
    Clementine Jones, Computing Sector Communications

    The University of Notre Dame has an active high-energy physics computing program and is one of a number of schools contributing to computing for the CMS experiment. Pictured here are professors Mike Hildreth (left) and Kevin Lannon, who is also computing liaison for the US CMS collaboration board. Photo courtesy of Kevin Lannon, University of Notre Dame

    Building on Fermilab Today’s University Profiles, the Computing Sector followed up with several 2012 participants to inquire about the roles their computing departments play in particle physics research programs. We questioned randomly selected universities for two articles, and this feature focuses on the seven responses to our second set of questions.

    The first article concentrated on the value of collaboration, demonstrated by the contribution of software development or local computing resources from different universities’ computing departments to various high-energy physics research groups. Unsurprisingly, this remains a theme here: Six of the universities are currently Open Science Grid members; six are CMS or ATLAS Tier-3 or Tier-2 centers, with the seventh almost finished installing a Tier-3 center; and all have contributed software to experiments. Effective collaboration also encompasses research to improve resources. This can be incremental, enhancing precision or efficiency, or revolutionary, with novel approaches stemming from R&D efforts that create new experiment and analysis opportunities. The following are selected examples of R&D work from the universities’ responses.

    Several are investigating future processing, infrastructure and storage requirements. Professor Markus Wobisch referred to Louisiana Tech University’s interest in “multicore particle physics computing and high-availability applications.” Research scientist Shawn McKee said that the University of Michigan is researching “next-generation infrastructures, including software-defined networking, new file systems, and tools and techniques for agilely provisioning, configuring and maintaining their infrastructure and virtualization capabilities.” The University of Notre Dame’s Professor Michael Hildreth is lead principal investigator on a project looking into “data preservation issues for the future.” He is also working with others, including Professor Kevin Lannon, to “develop techniques for opportunistic computing.”

    Others focused even further on grid infrastructure. Professor Brad Abbott emphasized the University of Oklahoma’s early involvement in grid computing R&D, having had “one of the first US ATLAS grid computing test-bed setups” and being the first site to adopt an existing high-performance computing cluster as part of ATLAS. Professors Ian Shipsey and Norbert Neumeister described Purdue University’s membership in the ExTENCI project to provide an interface between the Open Science Grid and XSEDE “to bridge the efforts of these two cyberinfrastructure projects.” Professor George Alverson said that Northeastern University personnel are currently involved as grid users and testers, and that the institution hopes soon to begin grid integration work.

    Finally, Professor Sung-Won Lee and postdoctoral research fellow Chris Cowden said a group at Texas Tech University is studying further applications of their FFTJet algorithm, “which applies image processing techniques to jet finding in high-energy physics experiments,” as well as “developing an application of the Geant4 simulation toolkit to study the CMS Phase II detector upgrade designs.”

    Whether fine-tuning or paradigm-shifting, these projects represent advances in computing capabilities and applications, the benefits of which are felt across the field of high-energy physics.

    See the full article here.

    Fermilab Campus

    Fermi National Accelerator Laboratory (Fermilab), located just outside Batavia, Illinois, near Chicago, is a US Department of Energy national laboratory specializing in high-energy particle physics.
