Tagged: Computer technology

  • richardmitnick 4:46 pm on November 19, 2014 Permalink | Reply
    Tags: Computer technology

    From LLNL: “Lawrence Livermore tops Graph 500” 


    Lawrence Livermore National Laboratory

    Nov. 19, 2014

    Don Johnston
    johnston19@llnl.gov
    925-784-3980

    Lawrence Livermore National Laboratory scientists’ search for new ways to solve large complex national security problems led to the top ranking on Graph 500 and new techniques for solving large graph problems on small high performance computing (HPC) systems, all the way down to a single server.

    “To fulfill our missions in national security and basic science, we explore different ways to solve large, complex problems, most of which include the need to advance data analytics,” said Dona Crawford, associate director for Computation at Lawrence Livermore. “These Graph 500 achievements are a product of that work performed in collaboration with our industry partners. Furthermore, these innovations are likely to benefit the larger scientific computing community.”

    Photo from left: Robin Goldstone, Dona Crawford and Maya Gokhale with the Graph 500 certificate. Missing is Scott Futral.

    Lawrence Livermore’s Sequoia supercomputer, a 20-petaflop IBM Blue Gene/Q system, achieved the world’s best performance on the Graph 500 data analytics benchmark, announced Tuesday at SC14. LLNL and IBM computer scientists attained the No. 1 ranking by completing the largest problem scale ever attempted — scale 41 — with a performance of 23.751 teraTEPS (trillions of traversed edges per second). The team employed a technique developed by IBM.

    LLNL Sequoia supercomputer, a 20-petaflop IBM Blue Gene/Q system

    The Graph 500 offers performance metrics for data intensive computing or ‘big data,’ an area of growing importance to the high performance computing (HPC) community.

    In addition to achieving the top Graph 500 ranking, Lawrence Livermore computer scientists also have demonstrated scalable Graph 500 performance on small clusters and even a single node. To achieve these results, Livermore computational researchers have combined innovative research in graph algorithms and data-intensive runtime systems.

    Robin Goldstone, a member of LLNL’s HPC Advanced Technologies Office, said: “These are really exciting results that highlight our approach of leveraging HPC to solve challenging large-scale data science problems.”

    The results achieved demonstrate, at two different scales, the ability to solve very large graph problems on modest sized computing platforms by integrating flash storage into the memory hierarchy of these systems. Enabling technologies were provided through collaborations with Cray, Intel, Saratoga Speed and Mellanox.

    A scale-40 graph problem, containing 17.6 trillion edges, was solved on 300 nodes of LLNL’s Catalyst cluster. Catalyst, designed in partnership with Intel and Cray, augments a standard HPC architecture with additional capabilities targeted at data intensive computing. Each Catalyst compute node features 128 gigabytes (GB) of dynamic random access memory (DRAM) plus an additional 800 GB of high performance flash storage and uses the LLNL DI-MMAP runtime that integrates flash into the memory hierarchy. With the HavoqGT graph traversal framework, Catalyst was able to store and process the 217 TB scale-40 graph, a feat that is otherwise only achievable on the world’s largest supercomputers. The Catalyst run was No. 4 in size on the list.
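
    The key enabling mechanism here is memory mapping: the graph lives in a file on flash, but the traversal code addresses it as if it were an ordinary in-memory array, and pages are faulted in from storage only when touched. Below is a minimal sketch of that general mechanism using the standard OS mmap facility; it is not DI-MMAP itself, and the file name and edge-list layout are hypothetical.

    ```python
    import mmap
    import struct

    EDGE_FMT = "<QQ"                        # one edge = two little-endian uint64s (src, dst)
    EDGE_SIZE = struct.calcsize(EDGE_FMT)   # 16 bytes per edge

    def edge_at(buf, i):
        """Read the i-th (src, dst) pair directly from the memory-mapped file."""
        return struct.unpack_from(EDGE_FMT, buf, i * EDGE_SIZE)

    # "edges.bin" is a hypothetical flat binary edge list stored on flash.
    with open("edges.bin", "rb") as f:
        buf = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        n_edges = len(buf) // EDGE_SIZE
        # Only the pages actually touched are read in from storage,
        # so the graph can be far larger than available DRAM.
        for i in (0, n_edges // 2, n_edges - 1):
            print(i, edge_at(buf, i))
    ```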

    DI-MMAP and HavoqGT also were used to solve a smaller, but equally impressive, scale-37 graph problem on a single server with 50 TB of network-attached flash storage. The server, equipped with four Intel E7-4870 v2 processors and 2 TB of DRAM, was connected to two Altamont XP all-flash arrays from Saratoga Speed Inc. over a high-bandwidth Mellanox FDR InfiniBand interconnect. The other scale-37 entries on the Graph 500 list required clusters of 1,024 nodes or larger to process the 2.2 trillion edges.
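
    For readers unfamiliar with the benchmark, Graph 500 problem sizes follow a simple rule: a scale-N graph has 2^N vertices and, with the benchmark's default edge factor of 16, roughly 16 × 2^N edges. Here is a quick sanity check of the figures quoted above; the per-traversal time for Sequoia is an estimate derived from the reported TEPS rate, not a published number.

    ```python
    def graph500_edges(scale, edge_factor=16):
        """Approximate edge count of a Graph 500 problem at the given scale."""
        return edge_factor * 2 ** scale

    for scale in (37, 40, 41):
        print(f"scale {scale}: ~{graph500_edges(scale) / 1e12:.1f} trillion edges")
    # scale 37 -> ~2.2 trillion  (single-server run)
    # scale 40 -> ~17.6 trillion (Catalyst run)
    # scale 41 -> ~35.2 trillion (Sequoia run)

    # At 23.751 teraTEPS (trillions of traversed edges per second), one
    # breadth-first search over the scale-41 graph takes on the order of:
    print(f"~{graph500_edges(41) / 23.751e12:.1f} seconds per traversal")
    ```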

    “Our approach really lowers the barrier of entry for people trying to solve very large graph problems,” said Roger Pearce, a researcher in LLNL’s Center for Applied Scientific Computing (CASC).

    “These results collectively demonstrate LLNL’s preeminence as a full service data intensive HPC shop, from single server to data intensive cluster to world class supercomputer,” said Maya Gokhale, LLNL principal investigator for data-centric computing architectures.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    LLNL Campus

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration
    DOE Seal
    NNSA
    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell

     
  • richardmitnick 2:59 pm on October 22, 2014 Permalink | Reply
    Tags: Computer technology

    From iSGTW: “Laying the groundwork for data-driven science” 


    international science grid this week

    October 22, 2014
    Amber Harmon

    The ability to collect and analyze massive amounts of data is rapidly transforming science, industry, and everyday life — but many of the benefits of big data have yet to surface. Interoperability, tools, and hardware are still evolving to meet the needs of diverse scientific communities.

    Image courtesy istockphoto.com.

    One of the US National Science Foundation’s (NSF’s) goals is to improve the nation’s capacity in data science by investing in the development of infrastructure, building multi-institutional partnerships to increase the number of data scientists, and augmenting the usefulness and ease of using data.

    As part of that effort, the NSF announced $31 million in new funding to support 17 innovative projects under the Data Infrastructure Building Blocks (DIBBs) program. Now in its second year, the 2014 DIBBs awards support research in 22 states and touch on research topics in computer science, information technology, and nearly every field of science supported by the NSF.

    “Developed through extensive community input and vetting, NSF has an ambitious vision and strategy for advancing scientific discovery through data,” says Irene Qualters, division director for Advanced Cyberinfrastructure. “This vision requires a collaborative national data infrastructure that is aligned to research priorities and that is efficient, highly interoperable, and anticipates emerging data policies.”

    Of the 17 awards, two support early implementations of research projects that are more mature; the others support pilot demonstrations. Each is a partnership between researchers in computer science and other science domains.

    One of the two early implementation grants will support a research team led by Geoffrey Fox, a professor of computer science and informatics at Indiana University, US. Fox’s team plans to create middleware and analytics libraries that enable large-scale data science on high-performance computing systems. Fox and his team plan to test their platform with several different applications, including geospatial information systems (GIS), biomedicine, epidemiology, and remote sensing.

    “Our innovative architecture integrates key features of open source cloud computing software with supercomputing technology,” Fox said. “And our outreach involves ‘data analytics as a service’ with training and curricula set up in a Massive Open Online Course or MOOC.”

    Among others, US institutions collaborating on the project include Arizona State University in Phoenix; Emory University in Atlanta, Georgia; and Rutgers University in New Brunswick, New Jersey.

    Ken Koedinger, professor of human computer interaction and psychology at Carnegie Mellon University in Pittsburgh, Pennsylvania, US, leads the other early implementation project. Koedinger’s team concentrates on developing infrastructure that will drive innovation in education.

    The team will develop a distributed data infrastructure, LearnSphere, that will make more educational data accessible to course developers, while also motivating more researchers and companies to share their data with the greater learning sciences community.

    “We’ve seen the power that data has to improve performance in many fields, from medicine to movie recommendations,” Koedinger says. “Educational data holds the same potential to guide the development of courses that enhance learning while also generating even more data to give us a deeper understanding of the learning process.”

    The DIBBs program is part of a coordinated strategy within NSF to advance data-driven cyberinfrastructure. It complements other major efforts like the DataOne project, the Research Data Alliance, and Wrangler, a groundbreaking data analysis and management system for the national open science community.

    See the full article here.

    iSGTW is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, iSGTW is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read iSGTW via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell

     
  • richardmitnick 11:42 am on September 23, 2014 Permalink | Reply
    Tags: Computer technology

    From NSF: “Protecting our processors” 

    National Science Foundation

    September 23, 2014
    Media Contacts
    Aaron Dubrow, NSF, (703) 292-4489, adubrow@nsf.gov
    Dan Francisco, SRC, (916) 812-8814, dan@integrityglobal.biz

    The National Science Foundation (NSF) and Semiconductor Research Corporation (SRC) today announced nine research awards to 10 universities totaling nearly $4 million under a joint program focused on Secure, Trustworthy, Assured and Resilient Semiconductors and Systems (STARSS).

    Intentional fault injection into microprocessor hardware is an important threat to the embedded computers and microcontrollers that secure the nation’s information technology infrastructure. Virginia Tech’s FAME project develops a methodology to defend micro-controllers against malicious fault injection. The methodology is being validated with test chips which are subjected to an elaborate tamper-sensitivity analysis. The photo shows a test board used to support this analysis by enabling precise control of the operating conditions of the test chip. Credit: Photo courtesy of Jim Stroup, Virginia Tech

    The awards support research at the circuit, architecture and system levels on new strategies, methods and tools to decrease the likelihood of unintended behavior or access; increase resistance and resilience to tampering; and improve the ability to provide authentication throughout the supply chain and in the field.

    “The processes and tools used to design and manufacture semiconductors ensure that the resulting product does what it is supposed to do. However, a key question that must also be addressed is whether the product does anything else, such as behaving in ways that are unintended or malicious,” said Keith Marzullo, division director of NSF’s Computer and Network Systems Division, which leads the NSF/SRC partnership on STARSS. “Through this partnership with SRC, we are pleased to focus on hardware and systems security research addressing this challenge and to provide a unique opportunity to facilitate the transition of this research into practical use.”

    NSF’s involvement in STARSS is part of its Secure and Trustworthy Cyberspace (SaTC) portfolio, which in August announced nearly $75 million in cybersecurity awards.

    The STARSS program expands SRC’s Trustworthy and Secure Semiconductors and Systems (T3S) program, engaging 10 universities across the U.S. Initial T3S industry participants are Freescale, Intel Corporation and Mentor Graphics. NSF is the first federal partner.

    “The goal of SRC’s T3S initiative is to develop cost-effective strategies and tools for the design and manufacture of chips and systems that are reliable, trustworthy and secure,” said Celia Merzbacher, SRC vice president for innovative partnerships. “This includes designing for security and assurance at the outset so as to build in resistance and resilience to attack or tampering. The research enabled by the STARSS program with NSF is a cornerstone of this overall effort.”

    SRC is the world’s leading university-research consortium for semiconductors and related technologies.

    A number of trends are motivating industry and government to support research in hardware and system security. The design and manufacture of semiconductor circuits and systems requires many steps and involves the work of hundreds of engineers–typically distributed across multiple locations and organizations worldwide.

    Moreover, a typical microprocessor is likely to include dozens of design modules from various sources. Designers at each level need assurance that the components being incorporated can be trusted in order for the final system to be trustworthy.

    Today, the design and manufacture of semiconductor circuits and systems includes extensive verification and testing to ensure the final product does what it is intended to do. Similar approaches are needed to provide assurance that the product is authentic and does not allow unwanted functionality, access or control. This includes strategies, tools and methods at all stages, from architecture through manufacture and throughout the lifecycle of the product.

    The first round of awards made through the STARSS program will support nine research projects with diverse areas of focus. They are:

    Combating integrated circuit counterfeiting using secure chip odometers–Carnegie Mellon University researchers will design and implement secure chip odometers to provide integrated circuits (ICs) with both a secure gauge of use/age and an authentication of provenance to detect counterfeit ICs;
    Intellectual Property (IP) Trust: A comprehensive framework for IP integrity validation–Case Western Reserve University and University of Florida researchers will develop a comprehensive and scalable framework for IP trust analysis and verification by evaluating IPs of diverse types and forms and develop threat models, taxonomy and instances of IP trust/integrity issues;
    Design of low-cost, memory-based security primitives and techniques for high-volume products–University of Connecticut researchers will develop metrics and algorithms to make static RAM physical unclonable functions (PUFs) substantially more reliable at extreme operating conditions and aging, and extend this to dynamic RAM and Flash (a toy sketch of this stabilization idea appears after this list);
    Trojan detection and diagnosis in mixed-signal systems using on-the-fly learned, pre-computed and side-channel tests–Georgia Institute of Technology researchers will leverage knowledge of state-of-the-art mixed-signal/analog/radio-frequency design for detection of Trojans in generic mixed-signal systems;
    Metric and CAD for differential power analysis (DPA) resistance–Iowa State University researchers will investigate statistical metrics and design techniques to measure and defend against DPA attacks;
    Design of secure and anti-counterfeit integrated circuits–University of Minnesota researchers will develop hierarchical approaches for authentication and obfuscation of chips;
    Hardware authentication through high-capacity, physical unclonable function (PUF)-based secret key generation and lattice coding–University of Texas at Austin researchers will develop strong machine-learning-resistant PUFs, capable of producing high-entropy outputs, and a new lattice-based stability algorithm for high-capacity secret key generation;
    Fault-attack awareness using microprocessor enhancements–Virginia Polytechnic Institute and State University (Virginia Tech) researchers will develop a collection of hardware techniques for microprocessor architectures to detect fault injection attacks, and to mitigate fault analysis through an appropriate response in software; and
    Invariant carrying machine for hardware assurance–Northwestern University researchers will develop techniques for improving the reliability and trustworthiness of hardware systems via an Invariant-Carrying Machine approach.
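
    Several of these projects revolve around physical unclonable functions: deriving a device-unique secret from manufacturing variation, such as the power-up pattern of SRAM cells, which is inherently noisy and must be stabilized before it can serve as a key. The toy sketch below illustrates only that basic enrollment idea, with simulated SRAM bits and simple majority voting; it does not represent any of the awarded projects' actual methods.

    ```python
    import hashlib
    import random

    def sram_powerup(n_bits=256, flip_prob=0.05, device_seed=1234):
        """Simulate one SRAM power-up: a device-specific bit pattern in which
        a few bits flip at random from one power-up to the next."""
        dev = random.Random(device_seed)              # fixed per "device"
        stable = [dev.randint(0, 1) for _ in range(n_bits)]
        return [b ^ (random.random() < flip_prob) for b in stable]

    def majority(readings):
        """Stabilize several noisy readings bit-by-bit by majority vote."""
        n = len(readings)
        return bytes(1 if sum(col) * 2 > n else 0 for col in zip(*readings))

    # Enrollment: take several power-ups, stabilize them, hash into a device key.
    readings = [sram_powerup() for _ in range(9)]
    device_key = hashlib.sha256(majority(readings)).hexdigest()
    print("derived device key:", device_key)
    ```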

    A second joint NSF-SRC STARSS funding opportunity was announced on Aug. 13 as part of the latest NSF SaTC program solicitation. For more information, visit the NSF website.

    -NSF-

    See the full article here.

    The National Science Foundation (NSF) is an independent federal agency created by Congress in 1950 “to promote the progress of science; to advance the national health, prosperity, and welfare; to secure the national defense…we are the funding source for approximately 24 percent of all federally supported basic research conducted by America’s colleges and universities. In many fields such as mathematics, computer science and the social sciences, NSF is the major source of federal backing.”


    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell

     
  • richardmitnick 3:53 pm on August 1, 2014 Permalink | Reply
    Tags: Computer technology

    From Rutgers University: “Astrophysics Professor Creates Computer Models that Help Explain How Galaxies Formed and Evolved” 

    Rutgers Banner
    Rutgers University

    Rachel Somerville (Photo: Miguel Acevedo)

    July 30, 2014
    Carl Blesch

    When most people think of astronomers, they envision scientists who spend time peering at stars and galaxies through telescopes on high mountain tops. Rutgers astronomer Rachel Somerville depends on colleagues who make such observations, but her primary tools for understanding how galaxies formed billions of years ago – and how they continue to evolve today – are large computers.

    The quality and significance of her work were affirmed this week when the Simons Foundation, a private foundation that sponsors research in mathematics and the basic sciences, awarded Somerville $500,000 in research support over five years. She is one of 16 theoretical scientists at American and Canadian universities who were named Simons Investigators for 2014.

    A professor of astrophysics in the Department of Physics and Astronomy, School of Arts and Sciences, Somerville creates computer models or simulations of the physical principles that underlie galaxy formation. These models help astronomers make sense of what they see when the Hubble Space Telescope and other instruments peer into the farthest reaches of space and reveal how galaxies looked as they took shape in a young universe.

    The Simons Foundation cited her contributions to the development of “semianalytic modeling methods that combine computational and pencil-and-paper theory.” According to the group, these contributions have helped scientists understand how the growth of supermassive black holes and the energy they release is linked to a galaxy’s properties and its ability to form stars.

    Somerville explains that astronomers cannot see any single galaxy evolve through a telescope.

    “We see galaxies at different points in their lifetimes and in different wavelengths,” she said, referring to images acquired with visible light, radio waves and X-rays. Models then help astronomers predict which kinds of early galaxies evolved into disks like our Milky Way while others evolved into the round balls of stars that astronomers call elliptical galaxies.

    As a theoretical astronomer, Somerville values the opportunities she gets to interact with observational astronomers at Rutgers and elsewhere who provide her with new data that make her models more comprehensive and robust.

    “It’s hard to make models that fit all the observations,” she said. “I try to go the extra distance to connect what the models predict with things that we can actually observe.”

    Somerville is a relative newcomer to Rutgers, appointed in October 2011 to the George A. and Margaret M. Downsbrough Chair in Astrophysics.

    In 2013, she received the Dannie Heineman Prize in Astrophysics from the American Astronomical Society and the American Institute of Physics. The prize recognizes exceptional work by mid-career astronomers, citing her for providing fundamental insights into galaxy formation and evolution using modeling, simulations, and observations.

    Before joining Rutgers, Somerville held a joint appointment as associate research professor at Johns Hopkins University and associate astronomer with tenure at the Space Telescope Science Institute (STScI). STScI manages selection, planning and scheduling of scientific activities for the Hubble Space Telescope.

    Before that, she held faculty appointments at the Max Planck Institute for Astronomy in Germany and the University of Michigan, and postdoctoral appointments at the Hebrew University in Jerusalem and Cambridge University in the United Kingdom.

    Somerville’s goal at Rutgers is to build more expertise in galaxy formation theory and help the department’s astronomy group pursue new areas such as the study of extrasolar planets.

    “Rutgers is a great place for galaxy formation theorists because we have opportunities to interact with the excellent observational astronomers here,” she said, noting the university’s involvement with the powerful new Southern African Large Telescope, also referred to as SALT. “I’ve benefitted from supportive colleagues and contact with graduate and undergraduate students. I’m constantly inspired by their enthusiasm.”

    Southern African Large Telescope (SALT)

    Rutgers, The State University of New Jersey, is a leading national research university and the state’s preeminent, comprehensive public institution of higher education. Rutgers is dedicated to teaching that meets the highest standards of excellence; to conducting research that breaks new ground; and to providing services, solutions, and clinical care that help individuals and the local, national, and global communities where they live.

    Founded in 1766, Rutgers teaches across the full educational spectrum: preschool to precollege; undergraduate to graduate; postdoctoral fellowships to residencies; and continuing education for professional and personal advancement.

    Rutgers Seal


    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 2:49 pm on February 20, 2014 Permalink | Reply
    Tags: Computer technology

    From Caltech: “A New Laser for a Faster Internet” 

    Caltech Logo
    Caltech

    02/19/2014
    Jessica Stoller-Conrad

    A new laser developed by a research group at Caltech holds the potential to increase by orders of magnitude the rate of data transmission in the optical-fiber network—the backbone of the Internet.

    The study was published the week of February 10–14 in the online edition of the Proceedings of the National Academy of Sciences. The work is the result of a five-year effort by researchers in the laboratory of Amnon Yariv, Martin and Eileen Summerfield Professor of Applied Physics and professor of electrical engineering; the project was led by postdoctoral scholar Christos Santis (PhD ’13) and graduate student Scott Steger.

    Light is capable of carrying vast amounts of information—approximately 10,000 times more bandwidth than microwaves, the earlier carrier of long-distance communications. But to utilize this potential, the laser light needs to be as spectrally pure—as close to a single frequency—as possible. The purer the tone, the more information it can carry, and for decades researchers have been trying to develop a laser that comes as close as possible to emitting just one frequency.


    Today’s worldwide optical-fiber network is still powered by a laser known as the distributed-feedback semiconductor (S-DFB) laser, developed in the mid 1970s in Yariv’s research group. The S-DFB laser’s unusual longevity in optical communications stemmed from its, at the time, unparalleled spectral purity—the degree to which the light emitted matched a single frequency. The laser’s increased spectral purity directly translated into a larger information bandwidth of the laser beam and longer possible transmission distances in the optical fiber—with the result that more information could be carried farther and faster than ever before.

    At the time, this unprecedented spectral purity was a direct consequence of the incorporation of a nanoscale corrugation within the multilayered structure of the laser. The washboard-like surface acted as a sort of internal filter, discriminating against spurious “noisy” waves contaminating the ideal wave frequency. Although the old S-DFB laser had a successful 40-year run in optical communications—and was cited as the main reason for Yariv receiving the 2010 National Medal of Science—the spectral purity, or coherence, of the laser no longer satisfies the ever-increasing demand for bandwidth.

    “What became the prime motivator for our project was that the present-day laser designs—even our S-DFB laser—have an internal architecture which is unfavorable for high spectral-purity operation. This is because they allow a large and theoretically unavoidable optical noise to comingle with the coherent laser and thus degrade its spectral purity,” he says.

    The old S-DFB laser consists of continuous crystalline layers of materials called III-V semiconductors—typically gallium arsenide and indium phosphide—that convert into light the applied electrical current flowing through the structure. Once generated, the light is stored within the same material. Since III-V semiconductors are also strong light absorbers—and this absorption leads to a degradation of spectral purity—the researchers sought a different solution for the new laser.

    The high-coherence new laser still converts current to light using the III-V material, but in a fundamental departure from the S-DFB laser, it stores the light in a layer of silicon, which does not absorb light. Spatial patterning of this silicon layer—a variant of the corrugated surface of the S-DFB laser—causes the silicon to act as a light concentrator, pulling the newly generated light away from the light-absorbing III-V material and into the near absorption-free silicon.

    This newly achieved high spectral purity—a 20 times narrower range of frequencies than possible with the S-DFB laser—could be especially important for the future of fiber-optic communications. Originally, laser beams in optic fibers carried information in pulses of light; data signals were impressed on the beam by rapidly turning the laser on and off, and the resulting light pulses were carried through the optic fibers. However, to meet the increasing demand for bandwidth, communications system engineers are now adopting a new method of impressing the data on laser beams that no longer requires this “on-off” technique. This method is called coherent phase communication.

    In coherent phase communications, the data resides in small delays in the arrival time of the waves; the delays—a tiny fraction (10⁻¹⁶) of a second in duration—can then accurately relay the information even over thousands of miles. The digital electronic bits carrying video, data, or other information are converted at the laser into these small delays in the otherwise rock-steady light wave. But the number of possible delays, and thus the data-carrying capacity of the channel, is fundamentally limited by the degree of spectral purity of the laser beam. This purity can never be absolute—a limitation of the laws of physics—but with the new laser, Yariv and his team have tried to come as close to absolute purity as is possible.
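
    To make the encoding concrete, here is a minimal sketch in the style of quadrature phase-shift keying: each pair of bits selects one of four carrier phase offsets, which at optical frequencies correspond to arrival-time shifts on the order of 10⁻¹⁶ to 10⁻¹⁵ seconds. The carrier frequency and constellation are generic textbook values, not the specific modulation format used with the new laser.

    ```python
    import math

    CARRIER_HZ = 193.4e12   # assumed optical carrier near 1550 nm (~193 THz)
    # QPSK constellation: each bit pair selects one of four phase offsets.
    PHASES = {(0, 0): math.pi / 4, (0, 1): 3 * math.pi / 4,
              (1, 1): 5 * math.pi / 4, (1, 0): 7 * math.pi / 4}

    def encode(bits):
        """Map bit pairs to phase offsets and the arrival-time delays they imply."""
        symbols = []
        for i in range(0, len(bits), 2):
            phase = PHASES[tuple(bits[i:i + 2])]
            delay = phase / (2 * math.pi * CARRIER_HZ)   # phase expressed as a time shift
            symbols.append((phase, delay))
        return symbols

    for phase, delay in encode([0, 1, 1, 1, 1, 0, 0, 0]):
        print(f"phase {phase:.3f} rad -> delay {delay:.1e} s")
    ```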

    These findings were published in a paper titled “High-coherence semiconductor lasers based on integral high-Q resonators in hybrid Si/III-V platforms.” In addition to Yariv, Santis, and Steger, other Caltech coauthors include graduate student Yaakov Vilenchik and former graduate student Arseny Vasilyev (PhD ’13). The work was funded by the Army Research Office, the National Science Foundation, and the Defense Advanced Research Projects Agency. The lasers were fabricated at the Kavli Nanoscience Institute at Caltech.

    See the full article here.

    The California Institute of Technology (commonly referred to as Caltech) is a private research university located in Pasadena, California, United States. Caltech has six academic divisions with strong emphases on science and engineering. Its 124-acre (50 ha) primary campus is located approximately 11 mi (18 km) northeast of downtown Los Angeles. “The mission of the California Institute of Technology is to expand human knowledge and benefit society through research integrated with education. We investigate the most challenging, fundamental problems in science and technology in a singularly collegial, interdisciplinary atmosphere, while educating outstanding students to become creative members of society.”
    Caltech buildings


    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 3:13 pm on February 11, 2014 Permalink | Reply
    Tags: Computer technology

    From PPPL: “Solution to plasma-etching puzzle could mean more powerful microchips” 

    February 11, 2014
    John Greenwald

    Research conducted by PPPL in collaboration with the University of Alberta provides a key step toward the development of ever-more powerful computer chips. The researchers discovered the physics behind a mysterious process that gives chipmakers unprecedented control of a recent plasma-based technique for etching transistors on integrated circuits, or chips. This discovery could help to maintain Moore’s Law, which observes that the number of transistors on integrated circuits doubles nearly every two years.

    An integrated-circuit microchip with 456 million transistors
    (Photo by John Greenwald/PPPL Office of Communications)

    The recent technique utilizes electron beams to reach and harden the surface of the masks that are used for printing microchip circuits. More importantly, the beam creates a population of “suprathermal” electrons that produce the plasma chemistry necessary to protect the mask. The energy of these electrons is greater than simple thermal heating could produce — hence the name “suprathermal.” But how the beam electrons transform themselves into this suprathermal population has been a puzzle.

    The PPPL and University of Alberta researchers used a computer simulation to solve the puzzle. The simulation revealed that the electron beam generates intense plasma waves that move through the plasma like ripples in water. And these waves lead to the generation of the crucial suprathermal electrons.

    This discovery could bring still-greater control of the plasma-surface interactions and further increase the number of transistors on integrated circuits. Insights from both numerical simulations and experiments related to beam-plasma instabilities thus portend the development of new plasma sources and the increasingly advanced chips that they fabricate.

    See the full article here.

    Princeton Plasma Physics Laboratory is a U.S. Department of Energy national laboratory managed by Princeton University.


    ScienceSprings is powered by Maingear computers

     
  • richardmitnick 3:14 pm on January 15, 2014 Permalink | Reply
    Tags: Computer technology

    From Fermilab: “From the Scientific Computing Division – Intensity Frontier experiments develop insatiable appetite” 


    Fermilab is an enduring source of strength for the US contribution to scientific research worldwide.

    Rob Roser, head of the Scientific Computing Division, wrote this column.

    The neutrino and muon experiments at Fermilab are getting more demanding! They have reached a level of sophistication and precision that the computing resources presently available at Fermilab can no longer handle. The solution: The Scientific Computing Division is now introducing grid and cloud services to satisfy those experiments’ appetite for large amounts of data and computing time.

    An insatiable appetite for computing resources is not new to Fermilab. Both Tevatron experiments, as well as the CMS experiment, require computing resources that far exceed our on-site capacity to successfully perform their science. As a result, the scientific collaborations have been working closely with us over many years to leverage computing capabilities at universities and other laboratories. Now, the demand from our Intensity Frontier experiments has reached this level.

    The Scientific Computing Services quadrant under the leadership of Margaret Votava has worked very hard over the past year with various computing organizations to provide experiments with the capability to run their software at remote locations, transfer data and bring the results back to Fermilab.

    See much more in the full article here.

    Fermilab Campus

    Fermi National Accelerator Laboratory (Fermilab), located just outside Batavia, Illinois, near Chicago, is a US Department of Energy national laboratory specializing in high-energy particle physics.


    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 6:58 am on April 3, 2013 Permalink | Reply
    Tags: Computer technology

    From Symmetry: “Semiconductors” 

    Accelerator-powered ion implantation proves key to advances in integrated circuits.

    April 02, 2013
    Glenn Roberts Jr.

    Particle accelerators earned an important place on the semiconductor assembly line decades ago, and today their role in silicon wafer manufacturing processes continues to grow in complexity and scope.

    A single silicon wafer, like the one seen here, is typically bombarded with ions of several different elements. Boron, arsenic and phosphorus are among the elements most commonly used in the semiconductor industry. Photo: Reidar Hahn, Fermilab

    As a silicon wafer makes its way down the assembly line, it may pass through dozens of particle beams produced by accelerators in a process known as ion implantation. Born out of the national labs, this process embeds fast-moving particles in the wafer at specific locations, depths and concentrations, permanently changing the semiconductor’s electrical qualities by selectively creating an abundance of electrons or electron vacancies at specific locations.

    These electron-rich or electron-depleted areas, in combination with other transistor components affixed to the regions, work like rivers of charge to guide electrons around a semiconductor in precisely controlled ways.

    Advances in ion implantation have helped manufacturers to pack more transistors into an integrated circuit, revolutionizing computing speed and power and reducing room-sized machines to pocket-sized devices.

    ‘Ion implantation is an absolutely necessary technology in the way we build devices, and its use has been growing,’ says Larry Larson, an engineering professor at Texas State University at San Marcos who previously worked for National Semiconductor, a Silicon Valley-based chip manufacturing firm acquired by Texas Instruments in 2011. ‘Every time a factory is built, they need some number of ion-implantation machines in the factory, and the number of machines per factory has grown over the years.’

    Today there are an estimated 12,000 ion-implantation accelerators operating worldwide and an average of 300 new ones are purchased each year, with the lion’s share purchased by the semiconductor industry.

    To meet manufacturing demands, the implanting processes become incrementally more exacting and elaborate each year, with researchers fine-tuning the number of particle beams a single wafer encounters and the angle at which each beam hits the wafer. The speed of the implantation process is also ramping up to meet manufacturing demand; today, the quickest implanters can process about 300 wafers an hour.

    Alexander Wu Chao, a professor at SLAC National Accelerator Laboratory and editor of the journal Reviews of Accelerator Science and Technology, says that ion-implantation accelerators are essential to today’s—and tomorrow’s—advanced electronics.

    See the full article here.

    Symmetry is a joint Fermilab/SLAC publication.


     
  • richardmitnick 2:06 pm on March 11, 2013 Permalink | Reply
    Tags: Computer technology

    From Caltech: “Creating Indestructible Self-Healing Circuits” 

    Caltech Logo
    Caltech

    Caltech engineers build electronic chips that repair themselves

    03/11/2013
    Kimm Fesenmaier

    Imagine that the chips in your smart phone or computer could repair and defend themselves on the fly, recovering in microseconds from problems ranging from less-than-ideal battery power to total transistor failure. It might sound like the stuff of science fiction, but a team of engineers at the California Institute of Technology (Caltech), for the first time ever, has developed just such self-healing integrated chips.

    The team, made up of members of the High-Speed Integrated Circuits laboratory in Caltech’s Division of Engineering and Applied Science, has demonstrated this self-healing capability in tiny power amplifiers. The amplifiers are so small, in fact, that 76 of the chips—including everything they need to self-heal—could fit on a single penny. In perhaps the most dramatic of their experiments, the team destroyed various parts of their chips by zapping them multiple times with a high-power laser, and then observed as the chips automatically developed a work-around in less than a second.

    ‘It was incredible the first time the system kicked in and healed itself. It felt like we were witnessing the next step in the evolution of integrated circuits,’ says Ali Hajimiri, the Thomas G. Myers Professor of Electrical Engineering at Caltech. ‘We had literally just blasted half the amplifier and vaporized many of its components, such as transistors, and it was able to recover to nearly its ideal performance.’

    Some of the damage Caltech engineers intentionally inflicted on their self-healing power amplifier using a high-power laser. The chip was able to recover from complete transistor destruction. This image was captured with a scanning electron microscope.
    Credit: Jeff Chang and Kaushik Dasgupta

    The team’s results appear in the March issue of IEEE Transactions on Microwave Theory and Techniques.

    See the full article here.

    The California Institute of Technology (commonly referred to as Caltech) is a private research university located in Pasadena, California, United States. Caltech has six academic divisions with strong emphases on science and engineering. Its 124-acre (50 ha) primary campus is located approximately 11 mi (18 km) northeast of downtown Los Angeles. “The mission of the California Institute of Technology is to expand human knowledge and benefit society through research integrated with education. We investigate the most challenging, fundamental problems in science and technology in a singularly collegial, interdisciplinary atmosphere, while educating outstanding students to become creative members of society.”
    Caltech buildings


    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 10:14 am on February 13, 2013 Permalink | Reply
    Tags: Computer technology

    From ESA Technology: “Silicon brains to oversee satellites” 

    European Space Agency

    XMM-Newton

    Herschel

    Planck

    13 February 2013
    No Writer Credit

    A beautiful and expensive sight: upwards of €6 million worth of silicon wafers, crammed with the complex integrated circuits that sit at the heart of each and every ESA mission. Years of meticulous design work went into these tiny brains, empowering satellites with intelligence.

    Silicon wafers etched with integrated circuits for space missions. No image credit.

    The image shows a collection of six silicon wafers that contain some 14 different chip designs developed by several European companies during the last eight years with ESA’s financial and technical support.

    Each of these 20 cm-diameter wafers contains between 30 and 80 replicas of each chip, each one carrying up to about 10 million transistors or basic circuit switches.

    To save money on the high cost of fabrication, various chips designed by different companies and destined for multiple ESA projects are crammed onto the same silicon wafers, etched into place at specialised semiconductor manufacturing plants or ‘fabs’, in this case LFoundry (formerly Atmel) in France.

    Once manufactured, the chips, still on the wafer, are tested. The wafers are then chopped up. They become ready for use when placed inside protective packages – just like standard terrestrial microprocessors – and undergo final quality tests.

    Through little metal pins or balls sticking out of their packages these miniature brains are then connected to other circuit elements – such as sensors, actuators, memory or power systems – used across the satellite.

    To save the time and money needed to develop complex chips like these, ESA’s Microelectronics section maintains a catalogue of chip designs, known as Intellectual Property (IP) cores, available to European industry through ESA licence.

    See the full article here.

    The European Space Agency (ESA), established in 1975, is an intergovernmental organization dedicated to the exploration of space, currently with 19 member states. Headquartered in Paris, ESA has a staff of more than 2,000. ESA’s space flight program includes human spaceflight, mainly through the participation in the International Space Station program, the launch and operations of unmanned exploration missions to other planets and the Moon, Earth observation, science, telecommunication as well as maintaining a major spaceport, the Guiana Space Centre at Kourou, French Guiana, and designing launch vehicles. ESA science missions are based at ESTEC in Noordwijk, Netherlands, Earth Observation missions at ESRIN in Frascati, Italy, ESA Mission Control (ESOC) is in Darmstadt, Germany, the European Astronaut Centre (EAC) that trains astronauts for future missions is situated in Cologne, Germany, and the European Space Astronomy Centre is located in Villanueva de la Cañada, Spain.

    ESA Technology


    ScienceSprings is powered by MAINGEAR computers

     