Tagged: Computing

  • richardmitnick 4:00 pm on May 8, 2016
    Tags: Computing

    From INVERSE: “What Will Replace Moore’s Law as Technology Advances Beyond the Microchip?” 



    May 5, 2016
    Adam Toobin

    The mathematics of Moore’s Law has long baffled observers, even as it underlies much of the technological revolution that has transformed the world over the past 50 years. But as chips get smaller, there is now renewed speculation that the law is finally being squeezed out.

    In 1965, Intel cofounder Dr. Gordon Moore observed that the number of transistors on a single microchip was doubling every year, a pace he later revised to every two years. The trend has stuck ever since: computers the size of entire rooms now rest in the palm of your hand, at a fraction of the cost.

    But with the undergirding technology approaching the scale of individual atoms, many fear the heyday of the digital revolution is coming to a close, forcing technologists around the world to rethink their business strategies and their notions of computing altogether.

    We have faced the end of Moore’s Law before — in fact, Brian Krzanich, Intel’s chief executive, jokes he has seen the doomsday prediction made no less than four times in his life. But what makes the coming barrier different is that whether we have another five or even ten years of boosting the silicon semiconductors that constitute the core of modern computing, we are going to hit a physical wall sooner rather than later.

    Transistor counts for integrated circuits plotted against their dates of introduction. The curve shows Moore’s law – the doubling of transistor counts every two years. The y-axis is logarithmic, so the line corresponds to exponential growth.
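
    [To make the curve above concrete, here is a minimal Python sketch of the doubling rule it describes. The starting point of 2,300 transistors in 1971, the count on Intel’s first microprocessor, is used purely for illustration.]

    # Illustrative sketch of Moore's Law as a simple doubling rule (not industry data).
    def transistors(start_count, start_year, year, doubling_period=2.0):
        """Projected count if the number of transistors doubles every `doubling_period` years."""
        return start_count * 2 ** ((year - start_year) / doubling_period)

    # A notional chip with 2,300 transistors in 1971, projected forward in ten-year steps.
    for year in (1971, 1981, 1991, 2001, 2011):
        print(year, round(transistors(2300, 1971, year)))
    # The counts grow exponentially, which appears as a straight line on the log axis shown above.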

    If Moore’s Law is to survive, it will require radical innovation rather than the predictable progress that has sustained chip makers over recent decades.

    And most technology companies in the world are beginning to acknowledge the changing forecast for digital hardware. The semiconductor industry associations of the United States, Europe, Japan, South Korea, and Taiwan will issue only one more report forecasting chip technology growth. Intel’s CEO casts these gloomy predictions as premature and has refused to participate in the final report. Krzanich insists Intel has the technical capabilities to keep improving chips while keeping costs low for manufacturers, though few in the industry believe the faltering company will maintain its quixotic course for long.

    The rest of the industry is casting about for new opportunities. New technologies like graphene (an atomic-scale honeycomb-like web of carbon atoms) and quantum computing offer possible ways out of the physical limitations imposed by silicon semiconductors. Graphene has recently enthralled chipmakers with its affordable carbon base and a configuration that makes it an ideal candidate for faster, though still largely conventional, digital processing.

    The ideal crystalline structure of graphene is a hexagonal grid.

    “As you look at Intel saying the PC industry is slowing and seeing the first signs of slowing in mobile computing, people are starting to look for new places to put semiconductors,” David Kanter, a semiconductor industry analyst at Real World Technologies in San Francisco, told The New York Times.

    Quantum computing, on the other hand, would tap the ambiguity inherent in the universe to change computing forever. The prospect has long intrigued tech companies, and the recent debut of some radical early-stage designs has reignited the fervor of quantum’s advocates.

    This image appeared in an IBM promotion that read: “IBM unlocks quantum computing capabilities, lifts limits of innovation.”

    For many years, the end of Moore’s Law was viewed as a kind of apocalypse scenario for the technology industry: What would we do when there was no more room on the chip? Much of what has been forecast about the future of the digital world has been predicated on the notion that we will continue to make the incredible improvements of the past half century.

    It’s perhaps a good sign that technology companies are soberly looking to the future and getting excited about new, promising developments that may yet yield entirely new frontiers.

    Photos via Wgsimon [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0) or GFDL (http://www.gnu.org/copyleft/fdl.html)], via Wikimedia Commons; AlexanderAlUS (Own work) [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0) or GFDL (http://www.gnu.org/copyleft/fdl.html)], via Wikimedia Commons; IBM; Jamie Baxter

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

  • richardmitnick 9:00 am on March 25, 2016
    Tags: Computing

    From MIT Tech Review: “Intel Puts the Brakes on Moore’s Law” 

    MIT Technology Review

    Tom Simonite


    Chip maker Intel has signaled a slowing of Moore’s Law, a technological phenomenon that has played a role in just about every major advance in engineering and technology for decades.

    Since the 1970s, Intel has released chips that fit twice as many transistors into the same space roughly every two years, aiming to follow an exponential curve named after Gordon Moore, one of the company’s cofounders. That continual shrinking has helped make computers more powerful, compact, and energy-efficient. It has helped bring us smartphones, powerful Internet services, and breakthroughs in fields such as artificial intelligence and genetics. And Moore’s Law has become shorthand for the idea that anything involving computing gets more capable over time.

    But Intel disclosed in a regulatory filing last month that it is slowing the pace with which it launches new chip-making technology. The gap between successive generations of chips with new, smaller transistors will widen. With the transistors in Intel’s latest chips already as small as 14 nanometers, it is becoming more difficult to shrink them further in a way that’s cost-effective for production.

    Intel’s strategy shift is not a complete surprise. It already pushed back the debut of its first chips with 10-nanometer transistors from the end of this year to sometime in 2017. But it is notable that the company has now admitted that wasn’t a one-off, and that it can’t keep up the pace it used to. That means Moore’s Law will slow down, too.

    That doesn’t necessarily mean that our devices are about to stop improving, or that ideas such as driverless cars will stall from lack of processing power. Intel says it will deliver extra performance upgrades between generations of transistor technology by making improvements to the way chips are designed. And the company’s chips are essentially irrelevant to mobile devices, a market dominated by competitors that are generally a few years behind in terms of shrinking transistors and adopting new manufacturing technologies. It is also arguable that for many important new use cases for computing, such as wearable devices or medical implants, chips are already powerful enough and power consumption is more important.

    But raw computing power still matters. Putting more of it behind machine-learning algorithms has been crucial to recent breakthroughs in artificial intelligence, for example. And Intel is likely to have to deliver more bad news about the future of chips and Moore’s Law before too long.

    The company’s chief of manufacturing said in February that Intel needs to switch away from silicon transistors in about four years. “The new technology will be fundamentally different,” he said, before admitting that Intel doesn’t yet have a successor lined up. There are two leading candidates—technologies known as spintronics and tunneling transistors—but they may not offer big increases in computing power. And both are far from being ready for use in making processors in large volumes.

    [If one examines the details of many supercomputers, one sees that graphics processing units (GPUs) are becoming much more important than central processing units (CPUs), which are based upon the transistor developments ruled by Moore’s Law.]

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

  • richardmitnick 11:13 am on February 13, 2016
    Tags: Computing

    From Nature: “The chips are down for Moore’s law” 

    Nature Mag

    09 February 2016
    M. Mitchell Waldrop


    Next month, the worldwide semiconductor industry will formally acknowledge what has become increasingly obvious to everyone involved: Moore’s law, the principle that has powered the information-technology revolution since the 1960s, is nearing its end.

    A rule of thumb that has come to dominate computing, Moore’s law states that the number of transistors on a microprocessor chip will double every two years or so — which has generally meant that the chip’s performance will, too. The exponential improvement that the law describes transformed the first crude home computers of the 1970s into the sophisticated machines of the 1980s and 1990s, and from there gave rise to high-speed Internet, smartphones and the wired-up cars, refrigerators and thermostats that are becoming prevalent today.

    None of this was inevitable: chipmakers deliberately chose to stay on the Moore’s law track. At every stage, software developers came up with applications that strained the capabilities of existing chips; consumers asked more of their devices; and manufacturers rushed to meet that demand with next-generation chips. Since the 1990s, in fact, the semiconductor industry has released a research road map every two years to coordinate what its hundreds of manufacturers and suppliers are doing to stay in step with the law — a strategy sometimes called More Moore. It has been largely thanks to this road map that computers have followed the law’s exponential demands.

    Not for much longer. The doubling has already started to falter, thanks to the heat that is unavoidably generated when more and more silicon circuitry is jammed into the same small area. And some even more fundamental limits loom less than a decade away. Top-of-the-line microprocessors currently have circuit features that are around 14 nanometres across, smaller than most viruses. But by the early 2020s, says Paolo Gargini, chair of the road-mapping organization, “even with super-aggressive efforts, we’ll get to the 2–3-nanometre limit, where features are just 10 atoms across. Is that a device at all?” Probably not — if only because at that scale, electron behaviour will be governed by quantum uncertainties that will make transistors hopelessly unreliable. And despite vigorous research efforts, there is no obvious successor to today’s silicon technology.
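
    [A quick back-of-the-envelope check of that “10 atoms across” figure, assuming a silicon-silicon bond length of roughly 0.235 nanometres as the spacing between neighbouring atoms:]

    # Rough check of the "10 atoms across" claim; the bond length is an assumed value.
    si_si_bond_nm = 0.235
    for feature_nm in (2.0, 2.5, 3.0):
        print(feature_nm, "nm is about", round(feature_nm / si_si_bond_nm, 1), "atomic spacings across")
    # 2-3 nm works out to roughly 8-13 atomic spacings, consistent with the quote.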

    The industry road map released next month will for the first time lay out a research and development plan that is not centred on Moore’s law. Instead, it will follow what might be called the More than Moore strategy: rather than making the chips better and letting the applications follow, it will start with applications — from smartphones and supercomputers to data centres in the cloud — and work downwards to see what chips are needed to support them. Among those chips will be new generations of sensors, power-management circuits and other silicon devices required by a world in which computing is increasingly mobile.

    The changing landscape, in turn, could splinter the industry’s long tradition of unity in pursuit of Moore’s law. “Everybody is struggling with what the road map actually means,” says Daniel Reed, a computer scientist and vice-president for research at the University of Iowa in Iowa City. The Semiconductor Industry Association (SIA) in Washington DC, which represents all the major US firms, has already said that it will cease its participation in the road-mapping effort once the report is out, and will instead pursue its own research and development agenda.

    Everyone agrees that the twilight of Moore’s law will not mean the end of progress. “Think about what happened to airplanes,” says Reed. “A Boeing 787 doesn’t go any faster than a 707 did in the 1950s — but they are very different airplanes”, with innovations ranging from fully electronic controls to a carbon-fibre fuselage. That’s what will happen with computers, he says: “Innovation will absolutely continue — but it will be more nuanced and complicated.”

    Laying down the law

    The 1965 essay (1) that would make Gordon Moore famous started with a meditation on what could be done with the still-new technology of integrated circuits. Moore, who was then research director of Fairchild Semiconductor in San Jose, California, predicted wonders such as home computers, digital wristwatches, automatic cars and “personal portable communications equipment” — mobile phones. But the heart of the essay was Moore’s attempt to provide a timeline for this future. As a measure of a microprocessor’s computational power, he looked at transistors, the on–off switches that make computing digital. On the basis of achievements by his company and others in the previous few years, he estimated that the number of transistors and other electronic components per chip was doubling every year.

    Moore, who would later co-found Intel in Santa Clara, California, underestimated the doubling time; in 1975, he revised it to a more realistic two years (2). But his vision was spot on. The future that he predicted started to arrive in the 1970s and 1980s, with the advent of microprocessor-equipped consumer products such as Hewlett-Packard hand calculators, the Apple II computer and the IBM PC. Demand for such products was soon exploding, and manufacturers were engaging in a brisk competition to offer more and more capable chips in smaller and smaller packages (see ‘Moore’s lore’).

    This was expensive. Improving a microprocessor’s performance meant scaling down the elements of its circuit so that more of them could be packed together on the chip, and electrons could move between them more quickly. Scaling, in turn, required major refinements in photolithography, the basic technology for etching those microscopic elements onto a silicon surface. But the boom times were such that this hardly mattered: a self-reinforcing cycle set in. Chips were so versatile that manufacturers could make only a few types — processors and memory, mostly — and sell them in huge quantities. That gave them enough cash to cover the cost of upgrading their fabrication facilities, or ‘fabs’, and still drop the prices, thereby fuelling demand even further.

    Soon, however, it became clear that this market-driven cycle could not sustain the relentless cadence of Moore’s law by itself. The chip-making process was getting too complex, often involving hundreds of stages, which meant that taking the next step down in scale required a network of materials-suppliers and apparatus-makers to deliver the right upgrades at the right time. “If you need 40 kinds of equipment and only 39 are ready, then everything stops,” says Kenneth Flamm, an economist who studies the computer industry at the University of Texas at Austin.

    To provide that coordination, the industry devised its first road map. The idea, says Gargini, was “that everyone would have a rough estimate of where they were going, and they could raise an alarm if they saw roadblocks ahead”. The US semiconductor industry launched the mapping effort in 1991, with hundreds of engineers from various companies working on the first report and its subsequent iterations, and Gargini, then the director of technology strategy at Intel, as its chair. In 1998, the effort became the International Technology Roadmap for Semiconductors, with participation from industry associations in Europe, Japan, Taiwan and South Korea. (This year’s report, in keeping with its new approach, will be called the International Roadmap for Devices and Systems.)

    “The road map was an incredibly interesting experiment,” says Flamm. “So far as I know, there is no example of anything like this in any other industry, where every manufacturer and supplier gets together and figures out what they are going to do.” In effect, it converted Moore’s law from an empirical observation into a self-fulfilling prophecy: new chips followed the law because the industry made sure that they did.

    And it all worked beautifully, says Flamm — right up until it didn’t.

    Heat death

    The first stumbling block was not unexpected. Gargini and others had warned about it as far back as 1989. But it hit hard nonetheless: things got too small.

    “It used to be that whenever we would scale to smaller feature size, good things happened automatically,” says Bill Bottoms, president of Third Millennium Test Solutions, an equipment manufacturer in Santa Clara. “The chips would go faster and consume less power.”

    But in the early 2000s, when the features began to shrink below about 90 nanometres, that automatic benefit began to fail. As electrons had to move faster and faster through silicon circuits that were smaller and smaller, the chips began to get too hot.

    That was a fundamental problem. Heat is hard to get rid of, and no one wants to buy a mobile phone that burns their hand. So manufacturers seized on the only solutions they had, says Gargini. First, they stopped trying to increase ‘clock rates’ — how fast microprocessors execute instructions. This effectively put a speed limit on the chip’s electrons and limited their ability to generate heat. The maximum clock rate hasn’t budged since 2004.

    Second, to keep the chips moving along the Moore’s law performance curve despite the speed limit, they redesigned the internal circuitry so that each chip contained not one processor, or ‘core’, but two, four or more. (Four and eight are common in today’s desktop computers and smartphones.) In principle, says Gargini, “you can have the same output with four cores going at 250 megahertz as one going at 1 gigahertz”. In practice, exploiting eight processors means that a problem has to be broken down into eight pieces — which for many algorithms is difficult to impossible. “The piece that can’t be parallelized will limit your improvement,” says Gargini.
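
    [The limit Gargini describes is usually formalized as Amdahl’s law: if a fraction p of a task can be spread across N cores, the best possible speedup is 1 / ((1 - p) + p/N). A short illustrative Python sketch, with made-up fractions:]

    # Amdahl's law: the serial fraction of a task caps the benefit of adding cores.
    def speedup(parallel_fraction, cores):
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

    # A perfectly parallel task on 4 cores recovers the full 4x (e.g. 4 x 250 MHz ~ 1 GHz)...
    print(speedup(1.0, 4))                  # 4.0
    # ...but if 10% of the work cannot be parallelized, 8 cores give well under 8x,
    # and no number of cores could ever beat 10x.
    print(round(speedup(0.9, 8), 2))        # ~4.71
    print(round(speedup(0.9, 10**9), 2))    # ~10.0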

    Even so, when combined with creative redesigns to compensate for electron leakage and other effects, these two solutions have enabled chip manufacturers to continue shrinking their circuits and keeping their transistor counts on track with Moore’s law. The question now is what will happen in the early 2020s, when continued scaling is no longer possible with silicon because quantum effects have come into play. What comes next? “We’re still struggling,” says An Chen, an electrical engineer who works for the international chipmaker GlobalFoundries in Santa Clara, California, and who chairs a committee of the new road map that is looking into the question.

    That is not for a lack of ideas. One possibility is to embrace a completely new paradigm — something like quantum computing, which promises exponential speed-up for certain calculations, or neuromorphic computing, which aims to model processing elements on neurons in the brain. But none of these alternative paradigms has made it very far out of the laboratory. And many researchers think that quantum computing will offer advantages only for niche applications, rather than for the everyday tasks at which digital computing excels. “What does it mean to quantum-balance a chequebook?” wonders John Shalf, head of computer-science research at the Lawrence Berkeley National Laboratory in Berkeley, California.

    Material differences

    A different approach, which does stay in the digital realm, is the quest to find a ‘millivolt switch’: a material that could be used for devices at least as fast as their silicon counterparts, but that would generate much less heat. There are many candidates, ranging from 2D graphene-like compounds to spintronic materials that would compute by flipping electron spins rather than by moving electrons. “There is an enormous research space to be explored once you step outside the confines of the established technology,” says Thomas Theis, a physicist who directs the nanoelectronics initiative at the Semiconductor Research Corporation (SRC), a research-funding consortium in Durham, North Carolina.

    Unfortunately, no millivolt switch has made it out of the laboratory either. That leaves the architectural approach: stick with silicon, but configure it in entirely new ways. One popular option is to go 3D. Instead of etching flat circuits onto the surface of a silicon wafer, build skyscrapers: stack many thin layers of silicon with microcircuitry etched into each. In principle, this should make it possible to pack more computational power into the same space. In practice, however, this currently works only with memory chips, which do not have a heat problem: they use circuits that consume power only when a memory cell is accessed, which is not that often. One example is the Hybrid Memory Cube design, a stack of as many as eight memory layers that is being pursued by an industry consortium originally launched by Samsung and memory-maker Micron Technology in Boise, Idaho.

    Microprocessors are more challenging: stacking layer after layer of hot things simply makes them hotter. But one way to get around that problem is to do away with separate memory and microprocessing chips, as well as the prodigious amount of heat — at least 50% of the total — that is now generated in shuttling data back and forth between the two. Instead, integrate them in the same nanoscale high-rise.

    This is tricky, not least because current-generation microprocessors and memory chips are so different that they cannot be made on the same fab line; stacking them requires a complete redesign of the chip’s structure. But several research groups are hoping to pull it off. Electrical engineer Subhasish Mitra and his colleagues at Stanford University in California have developed a hybrid architecture that stacks memory units together with transistors made from carbon nanotubes, which also carry current from layer to layer (3). The group thinks that its architecture could reduce energy use to less than one-thousandth that of standard chips.

    Going mobile

    The second stumbling block for Moore’s law was more of a surprise, but unfolded at roughly the same time as the first: computing went mobile.

    Twenty-five years ago, computing was defined by the needs of desktop and laptop machines; supercomputers and data centres used essentially the same microprocessors, just packed together in much greater numbers. Not any more. Today, computing is increasingly defined by what high-end smartphones and tablets do — not to mention by smart watches and other wearables, as well as by the exploding number of smart devices in everything from bridges to the human body. And these mobile devices have priorities very different from those of their more sedentary cousins.

    Keeping abreast of Moore’s law is fairly far down on the list — if only because mobile applications and data have largely migrated to the worldwide network of server farms known as the cloud. Those server farms now dominate the market for powerful, cutting-edge microprocessors that do follow Moore’s law. “What Google and Amazon decide to buy has a huge influence on what Intel decides to do,” says Reed.

    Much more crucial for mobiles is the ability to survive for long periods on battery power while interacting with their surroundings and users. The chips in a typical smartphone must send and receive signals for voice calls, Wi-Fi, Bluetooth and the Global Positioning System, while also sensing touch, proximity, acceleration, magnetic fields — even fingerprints. On top of that, the device must host special-purpose circuits for power management, to keep all those functions from draining the battery.

    The problem for chipmakers is that this specialization is undermining the self-reinforcing economic cycle that once kept Moore’s law humming. “The old market was that you would make a few different things, but sell a whole lot of them,” says Reed. “The new market is that you have to make a lot of things, but sell a few hundred thousand apiece — so it had better be really cheap to design and fab them.”

    Both are ongoing challenges. Getting separately manufactured technologies to work together harmoniously in a single device is often a nightmare, says Bottoms, who heads the new road map’s committee on the subject. “Different components, different materials, electronics, photonics and so on, all in the same package — these are issues that will have to be solved by new architectures, new simulations, new switches and more.”

    For many of the special-purpose circuits, design is still something of a cottage industry — which means slow and costly. At the University of California, Berkeley, electrical engineer Alberto Sangiovanni-Vincentelli and his colleagues are trying to change that: instead of starting from scratch each time, they think that people should create new devices by combining large chunks of existing circuitry that have known functionality (4). “It’s like using Lego blocks,” says Sangiovanni-Vincentelli. It’s a challenge to make sure that the blocks work together, but “if you were to use older methods of design, costs would be prohibitive”.

    Costs, not surprisingly, are very much on the chipmakers’ minds these days. “The end of Moore’s law is not a technical issue, it is an economic issue,” says Bottoms. Some companies, notably Intel, are still trying to shrink components before they hit the wall imposed by quantum effects, he says. But “the more we shrink, the more it costs”.

    Every time the scale is halved, manufacturers need a whole new generation of ever more precise photolithography machines. Building a new fab line today requires an investment typically measured in many billions of dollars — something only a handful of companies can afford. And the fragmentation of the market triggered by mobile devices is making it harder to recoup that money. “As soon as the cost per transistor at the next node exceeds the existing cost,” says Bottoms, “the scaling stops.”

    Many observers think that the industry is perilously close to that point already. “My bet is that we run out of money before we run out of physics,” says Reed.

    Certainly it is true that rising costs over the past decade have forced a massive consolidation in the chip-making industry. Most of the world’s production lines now belong to a comparative handful of multinationals such as Intel, Samsung and the Taiwan Semiconductor Manufacturing Company in Hsinchu. These manufacturing giants have tight relationships with the companies that supply them with materials and fabrication equipment; they are already coordinating, and no longer find the road-map process all that useful. “The chip manufacturer’s buy-in is definitely less than before,” says Chen.

    Take the SRC, which functions as the US industry’s research agency: it was a long-time supporter of the road map, says SRC vice-president Steven Hillenius. “But about three years ago, the SRC contributions went away because the member companies didn’t see the value in it.” The SRC, along with the SIA, wants to push a more long-term, basic research agenda and secure federal funding for it — possibly through the White House’s National Strategic Computing Initiative, launched in July last year.

    That agenda, laid out in a report (5) last September, sketches out the research challenges ahead. Energy efficiency is an urgent priority — especially for the embedded smart sensors that comprise the ‘Internet of things’, which will need new technology to survive without batteries, using energy scavenged from ambient heat and vibration. Connectivity is equally key: billions of free-roaming devices trying to communicate with one another and the cloud will need huge amounts of bandwidth, which they can get if researchers can tap the once-unreachable terahertz band lying deep in the infrared spectrum. And security is crucial — the report calls for research into new ways to build in safeguards against cyberattack and data theft.

    These priorities and others will give researchers plenty to work on in coming years. At least some industry insiders, including Shekhar Borkar, head of Intel’s advanced microprocessor research, are optimists. Yes, he says, Moore’s law is coming to an end in a literal sense, because the exponential growth in transistor count cannot continue. But from the consumer perspective, “Moore’s law simply states that user value doubles every two years”. And in that form, the law will continue as long as the industry can keep stuffing its devices with new functionality.

    The ideas are out there, says Borkar. “Our job is to engineer them.”

    Nature 530, 144–147 (11 February 2016) doi:10.1038/530144a

    1. Moore, G. E. Electronics 38, 114–117 (1965).

    2. Moore, G. E. IEDM Tech. Digest 11–13 (1975).

    3. Sabry Aly, M. M. et al. Computer 48(12), 24–33 (2015).

    4. Nikolic, B. 41st Eur. Solid-State Circuits Conf. (2015); available at http://go.nature.com/wwljk7

    5. Rebooting the IT Revolution: A Call to Action (SIA/SRC, 2015); available at http://go.nature.com/urvkhw

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Nature is a weekly international journal publishing the finest peer-reviewed research in all fields of science and technology on the basis of its originality, importance, interdisciplinary interest, timeliness, accessibility, elegance and surprising conclusions. Nature also provides rapid, authoritative, insightful and arresting news and interpretation of topical and coming trends affecting science, scientists and the wider public.

  • richardmitnick 4:40 pm on December 23, 2015
    Tags: Computing

    From Berkeley: “Engineers demo first processor that uses light for ultrafast communications” 

    UC Berkeley

    December 23, 2015
    Sarah Yang

    The electronic-photonic processor chip communicates to the outside world directly using light, illustrated here. The photo shows the packaged microchip under illumination, revealing the chip’s primary features. (Image by Glenn J. Asakawa, University of Colorado, Glenn.Asakawa@colorado.edu)

    Engineers have successfully married electrons and photons within a single-chip microprocessor, a landmark development that opens the door to ultrafast, low-power data crunching.

    The researchers packed two processor cores with more than 70 million transistors and 850 photonic components onto a 3-by-6-millimeter chip. They fabricated the microprocessor in a foundry that mass-produces high-performance computer chips, proving that their design can be easily and quickly scaled up for commercial production.

    The new chip, described in a paper to be published Dec. 24 in the print issue of the journal Nature, marks the next step in the evolution of fiber optic communication technology by integrating into a microprocessor the photonic interconnects, or inputs and outputs (I/O), needed to talk to other chips.

    “This is a milestone. It’s the first processor that can use light to communicate with the external world,” said Vladimir Stojanović, an associate professor of electrical engineering and computer sciences at the University of California, Berkeley, who led the development of the chip. “No other processor has the photonic I/O in the chip.”

    Stojanović and fellow UC Berkeley professor Krste Asanović teamed up with Rajeev Ram at the Massachusetts Institute of Technology and Miloš Popović at the University of Colorado Boulder to develop the new microprocessor.

    “This is the first time we’ve put a system together at such scale, and have it actually do something useful, like run a program,” said Asanović, who helped develop the free and open architecture called RISC-V (reduced instruction set computer), used by the processor.

    Greater bandwidth with less power

    Compared with electrical wires, fiber optics support greater bandwidth, carrying more data at higher speeds over greater distances with less energy. While advances in optical communication technology have dramatically improved data transfers between computers, bringing photonics into the computer chips themselves had been difficult.

    The electronic-photonic processor chip naturally illuminated by red and green bands of light. (Image by Glenn J. Asakawa, University of Colorado, Glenn.Asakawa@colorado.edu)

    That’s because no one until now had figured out how to integrate photonic devices into the same complex and expensive fabrication processes used to produce computer chips without changing the process itself. Doing so is key since it does not further increase the cost of the manufacturing or risk failure of the fabricated transistors.

    The researchers verified the functionality of the chip with the photonic interconnects by using it to run various computer programs, requiring it to send and receive instructions and data to and from memory. They showed that the chip had a bandwidth density of 300 gigabits per second per square millimeter, about 10 to 50 times greater than packaged electrical-only microprocessors currently on the market.

    The photonic I/O on the chip is also energy-efficient, using only 1.3 picojoules per bit, equivalent to consuming 1.3 watts of power to transmit a terabit of data per second. In the experiments, the data was sent to a receiver 10 meters away and back.
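
    [Both figures can be checked with simple unit arithmetic. The aggregate-bandwidth line below just multiplies the quoted density by the quoted chip area and is illustrative only:]

    # Unit check of the quoted photonic I/O numbers.
    energy_per_bit_joules = 1.3e-12        # 1.3 picojoules per bit
    bit_rate = 1e12                        # 1 terabit per second
    print(round(energy_per_bit_joules * bit_rate, 2), "W")   # 1.3 W, as stated

    # 300 Gb/s per square millimetre over the 3 mm x 6 mm chip described above.
    print(round(300e9 * (3 * 6) / 1e12, 1), "Tb/s aggregate bandwidth")   # 5.4 Tb/s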

    “The advantage with optical is that with the same amount of power, you can go a few centimeters, a few meters or a few kilometers,” said study co-lead author Chen Sun, a recent UC Berkeley Ph.D. graduate from Stojanović’s lab at the Berkeley Wireless Research Center. “For high-speed electrical links, 1 meter is about the limit before you need repeaters to regenerate the electrical signal, and that quickly increases the amount of power needed. For an electrical signal to travel 1 kilometer, you’d need thousands of picojoules for each bit.”

    The achievement opens the door to a new era of bandwidth-hungry applications. One near-term application for this technology is to make data centers more green. According to the Natural Resources Defense Council, data centers consumed about 91 billion kilowatt-hours of electricity in 2013, about 2 percent of the total electricity consumed in the United States, and the appetite for power is growing exponentially.

    This research has already spun off two startups this year with applications in data centers in mind. SiFive is commercializing the RISC-V processors, while Ayar Labs is focusing on photonic interconnects. Earlier this year, Ayar Labs – under its previous company name of OptiBit – was awarded the MIT Clean Energy Prize. Ayar Labs is getting further traction through the CITRIS Foundry startup incubator at UC Berkeley.

    The advance is timely, coming as world leaders emerge from the COP21 United Nations climate talks with new pledges to limit global warming.

    Further down the road, this research could be used in applications such as LIDAR, the light-based radar technology used to guide self-driving vehicles and serve as the eyes of robots; brain ultrasound imaging; and new environmental biosensors.

    ‘Fiat lux’ on a chip

    The researchers came up with a number of key innovations to harness the power of light within the chip.

    The illumination and camera create a rainbow-colored pattern across the electronic-photonic processor chip. (Image by Milos Popović, University of Colorado, milos.popovic@colorado.edu)

    Each of the key photonic I/O components – such as a ring modulator, photodetector and a vertical grating coupler – serves to control and guide the light waves on the chip, but the design had to conform to the constraints of a process originally thought to be hostile to photonic components. To enable light to move through the chip with minimal loss, for instance, the researchers used the silicon body of the transistor as a waveguide for the light. They did this by using available masks in the fabrication process to manipulate doping, the process used to form different parts of transistors.

    After getting the light onto the chip, the researchers needed to find a way to control it so that it can carry bits of data. They designed a silicon ring with p-n doped junction spokes next to the silicon waveguide to enable fast and low-energy modulation of light.

    Using the silicon-germanium parts of a modern transistor – an existing part of the semiconductor manufacturing process – to build a photodetector took advantage of germanium’s ability to absorb light and convert it into electricity.

    A vertical grating coupler that leverages existing poly-silicon and silicon layers in innovative ways was used to connect the chip to the external world, directing the light in the waveguide up and off the chip. The researchers integrated electronic components tightly with these photonic devices to enable stable operation in a hostile chip environment.

    The authors emphasized that these adaptations all worked within the parameters of existing microprocessor manufacturing systems, and that it will not be difficult to optimize the components to further improve their chip’s performance.

    Other co-lead authors on this paper are Mark Wade, Ph.D. student at the University of Colorado, Boulder; Yunsup Lee, a Ph.D. candidate at UC Berkeley; and Jason Orcutt, an MIT graduate who now works at the IBM Research Center in New York.

    The Defense Advanced Research Projects Agency (DARPA) helped support this work.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Founded in the wake of the gold rush by leaders of the newly established 31st state, the University of California’s flagship campus at Berkeley has become one of the preeminent universities in the world. Its early guiding lights, charged with providing education (both “practical” and “classical”) for the state’s people, gradually established a distinguished faculty (with 22 Nobel laureates to date), a stellar research library, and more than 350 academic programs.

    UC Berkeley Seal

  • richardmitnick 8:11 pm on December 20, 2015
    Tags: Computing, Hacking

    From The Atlantic: “Pop Culture Is Finally Getting Hacking Right” 

    The Atlantic Magazine

    Dec 1, 2015
    Joe Marshall


    Movies and TV shows have long relied on lazy and unrealistic depictions of how cybersecurity works. That’s beginning to change.

    The idea of a drill-wielding hacker who runs a deep-web empire selling drugs to teens seems like a fantasy embodying the worst of digital technology. It’s also, in the spirit of CSI: Cyber, completely ridiculous. So it was no surprise when a recent episode of the CBS drama outed its villain as a video-game buff who lived at home with his mother. For a series whose principal draw is watching Patricia Arquette yell, “Find the malware!”, that sort of stereotypical characterization and lack of realism is to be expected.

    But CSI: Cyber is something of an anomaly when it comes to portraying cybersecurity on the big or small screen. Hollywood is putting more effort into creating realistic technical narratives and thoughtfully depicting programming culture, breaking new ground with shows like Mr. Robot, Halt and Catch Fire, and Silicon Valley, and films like Blackhat. It’s a smart move, in part because audiences now possess a more sophisticated understanding of such technology than they did in previous decades. Cyberattacks, such as the 2013 incident that affected tens of millions of Target customers, are a real threat, and Americans generally have little confidence that their personal records will remain private and secure. The most obvious promise of Hollywood investing in technically savvy fiction is that these works will fuel a grassroots understanding of digital culture, including topics such as adblockers and surveillance self-defense. But just as important is a film and TV industry that sees the artistic value in accurately capturing a subject that’s relevant to the entire world.

    In some ways, cyberthrillers are just a new kind of procedural—rough outlines of the technical worlds only a few inhabit. But unlike shows based on lawyers, doctors, or police officers, shows about programmers deal with especially timely material. Perry Mason, the TV detective from the ’50s and ’60s, would recognize the tactics of Detective Lennie Briscoe from Law & Order, but there’s no ’60s hacker counterpart to talk shop with Mr. Robot’s Elliot Alderson. It’s true that what you can hack has changed dramatically over the past 20 years: The amount of information is exploding, and expanding connectivity means people can program everything from refrigerators to cars. But beyond that, hacking itself looks pretty much the same, thanks to the largely unchanging appearance and utility of the command-line—a text-only interface favored by developers, hackers, and other programming types.

    Laurelai Storm / Github

    So why has it taken so long for television and film to adapt and accurately portray the most essential aspects of programming? The usual excuse from producers and set designers is that it’s ugly and translates poorly to the screen. As a result, the easiest way to portray code in a movie has long been to shoot a green screen pasted onto a computer display, then add technical nonsense in post-production. Faced with dramatizing arcane details that most viewers at the time wouldn’t understand, the overwhelming temptation for filmmakers was to amp up the visuals, even if it meant creating something utterly removed from the reality of programming. That’s what led to the trippy, Tron-like graphics in 1995’s Hackers, or Hugh Jackman bravely assembling a wire cube made out of smaller, more solid cubes in 2001’s Swordfish.

    A scene from Hackers (MGM)

    A scene from Swordfish (Warner Bros.)

    But more recent depictions of coding are much more naturalistic than previous CGI-powered exercises in geometry. Despite its many weaknesses, this year’s Blackhat does a commendable job of representing cybersecurity. A few scenes show malware reminiscent of this decompiled glimpse of Stuxnet—the cyber superweapon created as a joint effort by the U.S. and Israel. The snippets look similar because they’re both variants of C, a popular programming language commonly used in memory-intensive applications. In Blackhat, the malware’s target was the software used to manage the cooling towers of a Chinese nuclear power plant. In real life, Stuxnet was used to target the software controlling Iranian centrifuges to systematically and covertly degrade the country’s nuclear enrichment efforts.

    An image of code used in Stuxnet (Github)

    Code shown in Blackhat (Universal)

    In other words, both targeted industrial machinery and monitoring software, and both seem to be written in a language compatible with those ends. Meaning that Hollywood producers took care to research what real-life malware might look like and how it’d likely be used, even if the average audience member wouldn’t know the difference. Compared to the sky-high visuals of navigating a virtual filesystem in Hackers, where early-CGI wizardry was thought the only way to retain audience attention, Blackhat’s commitment to the terminal and actual code is refreshing.

    Though it gets the visuals right, Blackhat highlights another common Hollywood misstep when it comes to portraying computer science on screen: It uses programming for heist-related ends. For many moviegoers, hacking is how you get all green lights for your getaway car (The Italian Job) or stick surveillance cameras in a loop (Ocean’s Eleven, The Score, Speed). While most older films frequently fall into this trap, at least one action hacker flick sought to explore how such technology could affect society more broadly, even if it fumbled the details. In 1995, The Net debuted as a cybersecurity-themed Sandra Bullock vehicle that cast one of America’s sweethearts into a Kafkaesque nightmare. As part of her persecution at the hands of the evil Gatekeeper corporation, Bullock’s identity is erased from a series of civil and corporate databases, turning her into a fugitive thanks to a forged criminal record. Technical gibberish aside, The Net was ahead of its time in tapping into the feeling of being powerless to contradict an entrenched digital bureaucracy.

    It’s taken a recent renaissance in scripted television to allow the space for storytellers to focus on programming as a culture, instead of a techy way to spruce up an action movie. And newer television shows have increasingly been able to capture that nuance without sacrificing mood and veracity. While design details like screens and terminal shots matter, the biggest challenge is writing a script that understands and cares about programming. Mr. Robot, which found critical success when it debuted on USA this summer, is perhaps the most accurate television show ever to depict cybersecurity. In particular, programmers have praised the show’s use of terminology, its faithful incorporation of actual security issues into the plot, and the way its protagonist uses real applications and tools. The HBO comedy series Silicon Valley, which was renewed for a third season, had a scene where a character wrote out the math behind a new compression algorithm. It turned out to be fleshed-out enough that a fan of the show actually recreated it. And even though a show like CSI: Cyber might regularly miss the mark, it has its bright spots, such as an episode about car hacking.

    There’s a more timeless reason for producers and writers to scrutinize technical detail: because it makes for good art. “We’re constantly making sure the verisimilitude of the show is as impervious as possible,” said Jonathan Lisco, the showrunner for AMC’s Halt and Catch Fire, a drama about the so-called Silicon Prairie of 1980s Texas. The actress Mackenzie Davis elaborated on the cachet such specificity could lend a show: “We need the groundswell of nerds to be like, ‘You have to watch this!’” The rise of software development as a profession means a bigger slice of the audience can now tell when a showrunner is phoning it in, and pillory the mistakes online. But it’s also no coincidence that Halt and Catch Fire is on the same network that was once home to that other stickler for accuracy—Mad Men.

    Rising technical literacy and a Golden Age of creative showrunners have resulted in a crop of shows that infuse an easy but granular technical understanding with top-notch storytelling. Coupling an authentic narrative with technical aplomb can allow even average viewers to intuitively understand high-level concepts that hold up under scrutiny. And even if audiences aren’t compelled to research on their own, the rough shape of a lesson can still seep through—like how cars are hackable, or the importance of guarding against phishing and financial fraud. But above all, more sophisticated representations of hacking make for better art. In an age of black mirrors, the soft glow of an open terminal has never radiated more promise.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

  • richardmitnick 1:56 pm on July 9, 2015
    Tags: Computing, Network Computing

    From Symmetry: “More data, no problem” 


    July 09, 2015
    Katie Elyce Jones

    Scientists are ready to handle the increased data of the current run of the Large Hadron Collider.

    Photo by Reidar Hahn, Fermilab

    Physicist Alexx Perloff, a graduate student at Texas A&M University on the CMS experiment, is using data from the first run of the Large Hadron Collider for his thesis, which he plans to complete this year.

    CERN LHC Map
    CERN LHC Grand Tunnel
    CERN LHC particles

    CERN CMS Detector

    When all is said and done, it will have taken Perloff a year and a half to conduct the computing necessary to analyze all the information he needs—not unusual for a thesis.

    But had he used the computing tools LHC scientists are using now, he estimates he could have finished his particular kind of analysis in about three weeks. Although Perloff represents only one scientist working on the LHC, his experience shows the great leaps scientists have made in LHC computing by democratizing their data, becoming more responsive to popular demand and improving their analysis software.

    A deluge of data

    Scientists estimate the current run of the LHC could create up to 10 times more data than the first one. CERN already routinely stores 6 gigabytes (6 billion bytes) of data per second, up from 1 gigabyte per second in the first run.
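
    [For a sense of scale, a quick conversion of that sustained rate into daily and yearly volumes; this is illustrative arithmetic, not a figure from the article:]

    # What a sustained 6 GB/s of storage adds up to (illustrative conversion only).
    rate_gb_per_s = 6
    seconds_per_day = 24 * 60 * 60
    print(rate_gb_per_s * seconds_per_day / 1e3, "TB per day")        # ~518 TB/day
    print(rate_gb_per_s * seconds_per_day * 365 / 1e6, "PB per year") # ~189 PB/year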

    The second run of the LHC is more data-intensive because the accelerator itself is more intense: The collision energy is 60 percent greater, resulting in “pile-up” or more collisions per proton bunch. Proton bunches are also injected into the ring closer together, resulting in more collisions per second.

    On top of that, the experiments have upgraded their triggers, which automatically choose which of the millions of particle events per second to record. The CMS trigger will now record more than twice as much data per second as it did in the previous run.

    Had CMS and ATLAS scientists relied only on adding more computers to make up for the data hike, they would likely have needed about four to six times more computing power in CPUs and storage than they used in the first run of the LHC.


    To avoid such a costly expansion, they found smarter ways to share and analyze the data.

    Flattening the hierarchy

    Over a decade ago, network connections were less reliable than they are today, so the Worldwide LHC Computing Grid was designed to have different levels, or tiers, that controlled data flow.

    All data recorded by the detectors goes through the CERN Data Centre, known as Tier-0, where it is initially processed, then to a handful of Tier-1 centers in different regions across the globe.

    One view of the CERN Data Centre

    During the last run, the Tier-1 centers served Tier-2 centers, which were mostly the smaller university computing centers where the bulk of physicists do their analyses.

    “The experience for a user on Run I was more restrictive,” says Oliver Gutsche, assistant head of the Scientific Computing Division for Science Workflows and Operations at Fermilab, the US Tier-1 center for CMS*. “You had to plan well ahead.”

    Now that the network has proved reliable, a new model “flattens” the hierarchy, enabling a user at any ATLAS or CMS Tier-2 center to access data from any of their centers in the world. This was initiated in Run I and is now fully in place for Run II.

    Through a separate upgrade known as data federation, users can also open a file from another computing center through the network, enabling them to view the file without going through the process of transferring it from center to center.

    Another significant upgrade affects the network stateside. Through its Energy Sciences Network, or ESnet, the US Department of Energy increased the bandwidth of the transatlantic network that connects the US CMS and ATLAS Tier-1 centers to Europe. A high-speed network, ESnet transfers data 15,000 times faster than the average home network provider.

    Dealing with the rush

    One of the thrilling things about being a scientist on the LHC is that when something exciting shows up in the detector, everyone wants to talk about it. The downside is everyone also wants to look at it.

    “When data is more interesting, it creates high demand and a bottleneck,” says David Lange, CMS software and computing co-coordinator and a scientist at Lawrence Livermore National Laboratory. “By making better use of our resources, we can make more data available to more people at any time.”

    To avoid bottlenecks, ATLAS and CMS are now making data accessible by popularity.

    “For CMS, this is an automated system that makes more copies when popularity rises and reduces copies when popularity declines,” Gutsche says.
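
    [Conceptually, such a system is a feedback rule: count recent accesses to a dataset, then raise or lower its number of replicas. The toy Python sketch below illustrates the idea only; the function and thresholds are invented and are not the actual CMS policy.]

    # Toy sketch of popularity-driven replica management (illustrative only).
    def target_replicas(accesses_last_week, min_copies=1, max_copies=10, accesses_per_copy=100):
        """More recent accesses -> more copies at Tier-2 sites; cold datasets fall back to the minimum."""
        wanted = min_copies + accesses_last_week // accesses_per_copy
        return max(min_copies, min(max_copies, wanted))

    for accesses in (0, 150, 900, 5000):
        print(accesses, "accesses ->", target_replicas(accesses), "copies")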

    Improving the algorithms

    One of the greatest recent gains in computing efficiency for the LHC relied on the physicists who dig into the data. By working closely with physicists, software engineers edited the algorithms that describe the physics playing out in the LHC, thereby significantly improving processing time for reconstruction and simulation jobs.

    “A huge amount of effort was put in, primarily by physicists, to understand how the physics could be analyzed while making the computing more efficient,” says Richard Mount, a senior research scientist at SLAC National Accelerator Laboratory who was ATLAS computing coordinator during the recent LHC upgrades.

    CMS tripled the speed of event reconstruction and halved simulation time. Similarly, ATLAS quadrupled reconstruction speed.

    Algorithms that determine data acquisition on the upgraded triggers were also improved to better capture rare physics events and filter out the background noise of routine (and therefore uninteresting) events.

    “More data” has been the drumbeat of physicists since the end of the first run, and now that it’s finally here, LHC scientists and students like Perloff can pick up where they left off in the search for new physics—anytime, anywhere.

    *While not noted in the article, I believe that Brookhaven National Laboratory is the Tier-1 site for ATLAS in the United States.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.

  • richardmitnick 12:22 pm on April 24, 2015
    Tags: Computing

    From phys.org: “Silicon Valley marks 50 years of Moore’s Law” 


    April 24, 2015
    Pete Carey, San Jose Mercury News

    Plot of CPU transistor counts against dates of introduction; note the logarithmic vertical scale; the line corresponds to exponential growth with transistor count doubling every two years. Credit: Wikipedia

    Computers were the size of refrigerators when an engineer named Gordon Moore laid the foundations of Silicon Valley with a vision that became known as “Moore’s Law.”

    Moore, then the 36-year-old head of research at Fairchild Semiconductor, predicted in a trade magazine article published 50 years ago Sunday that computer chips would double in complexity every year, at little or no added cost, for the next 10 years. In 1975, based on industry developments, he updated the prediction to doubling every two years.

    And for the past five decades, chipmakers have proved him right – spawning scores of new companies and shaping Silicon Valley to this day.

    “If Silicon Valley has a heartbeat, it’s Moore’s Law. It drove the valley at what has been a historic speed, unmatched in history, and allowed it to lead the rest of the world,” said technology consultant Rob Enderle.

    Moore’s prediction quickly became a business imperative for chip companies. Those that ignored the timetable went out of business. Companies that followed it became rich and powerful, led by Intel, the company Moore co-founded.

    Thanks to Moore’s Law, people carry smartphones in their pocket or purse that are more powerful than the biggest computers made in 1965 – or 1995, for that matter. Without it, there would be no slender laptops, no computers powerful enough to chart a genome or design modern medicine’s lifesaving drugs. Streaming video, social media, search, the cloud: none of that would be possible on today’s scale.

    “It fueled the information age,” said Craig Hampel, chief scientist at Rambus, a Sunnyvale semiconductor company. “As you drive around Silicon Valley, 99 percent of the companies you see wouldn’t be here” without cheap computer processors due to Moore’s Law.

    Moore was asked in 1964 by Electronics magazine to write about the future of integrated circuits for the magazine’s April 1965 edition.

    The basic building blocks of the digital age, integrated circuits are chips of silicon that hold tiny switches called transistors. More transistors meant better performance and capabilities.

    Taking stock of how semiconductor manufacturing was shrinking transistors and regularly doubling the number that would fit on an integrated circuit, Moore got some graph paper and drew a line for the predicted annual growth in the number of transistors on a chip. It shot up like a missile, with a doubling of transistors every year for at least a decade.

    It seemed clear to him what was coming, if not to others.

    “Integrated circuits will lead to such wonders as home computers – or at least terminals connected to a central computer – automatic controls for automobiles, and personal portable communications equipment,” he wrote.

    California Institute of Technology professor Carver Mead coined the name Moore’s Law, and as companies competed to produce the most powerful chips, it became a law of survival: double the transistors every year or die.

    “In the beginning, it was just a way of chronicling the progress,” Moore, now 86, said in an interview conducted by Intel. “But gradually, it became something that the various industry participants recognized. … You had to be at least that fast or you were falling behind.”

    Moore’s Law also held prices down because advancing technology made it inexpensive to pack chips with increasing numbers of transistors. If transistors hadn’t gotten cheaper as they grew in number on a chip, integrated circuits would still be a niche product for the military and others able to afford a very high price. Intel’s first microprocessor, or computer on a chip, with 2,300 transistors, cost more than $500 in current dollars. Today, an Intel Core i5 microprocessor has more than a billion transistors and costs $276.
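
    Working the price per transistor out of those two data points makes the economics concrete; the figures below are the article’s round numbers, not precise prices.

    #include <cstdio>

    int main() {
        // Round figures quoted in the article.
        const double firstChipPrice       = 500.0;   // dollars, in current dollars
        const double firstChipTransistors = 2300.0;
        const double coreI5Price          = 276.0;   // dollars
        const double coreI5Transistors    = 1.0e9;   // "more than a billion"

        const double centsThen = 100.0 * firstChipPrice / firstChipTransistors;
        const double centsNow  = 100.0 * coreI5Price / coreI5Transistors;

        std::printf("then: ~%.0f cents per transistor\n", centsThen);
        std::printf("now:  ~%.7f cents per transistor\n", centsNow);
        std::printf("roughly a %.0fx drop in price per transistor\n", centsThen / centsNow);
    }

    That works out to a drop of more than five orders of magnitude in the price of a single transistor.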

    “That was my real objective-to communicate that we have a technology that’s going to make electronics cheap,” Moore said.

    The reach of Moore’s Law extends beyond personal tech gadgets.

    “The really cool thing about it is it’s not just iPhones,” said G. Dan Hutcheson of VLSI Research, a technology market research company based in Santa Clara. “Every drug developed in the past 20 years or so had to have the computing power to get down and model molecules. They never would have been able to without that power. DNA analysis, genomes, wouldn’t exist; you couldn’t do the genetic testing. It all boils down to transistors.”

    Hutcheson says what Moore predicted was much more than a self-fulfilling prophecy. He had foreseen that optics, chemistry and physics would be combined to shrink transistors over time without substantial added cost.

    As transistors become vanishingly small, it’s harder to keep Moore’s Law going.

    About a decade ago, the shrinking of the physical dimensions led to overheating and stopped major performance boosts for every new generation of chips. Companies responded by introducing so-called multicore chips, which put several processor cores on a single piece of silicon.
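
    The practical consequence is that software now has to spread its work across those cores to keep getting faster. A minimal sketch of that pattern in standard C++ follows; it is not tied to any particular chip, it simply splits a large sum across however many hardware threads the machine reports.

    #include <algorithm>
    #include <cstddef>
    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    int main() {
        // Sum a large array by dividing the work across the available cores.
        std::vector<double> data(10000000, 1.0);
        const unsigned cores = std::max(1u, std::thread::hardware_concurrency());

        std::vector<double> partial(cores, 0.0);
        std::vector<std::thread> workers;
        const std::size_t chunk = data.size() / cores;

        for (unsigned i = 0; i < cores; ++i) {
            const std::size_t begin = i * chunk;
            const std::size_t end   = (i + 1 == cores) ? data.size() : begin + chunk;
            workers.emplace_back([&, i, begin, end] {
                // Each thread sums its own slice and writes to its own slot.
                partial[i] = std::accumulate(data.begin() + begin,
                                             data.begin() + end, 0.0);
            });
        }
        for (auto& t : workers) t.join();

        const double total = std::accumulate(partial.begin(), partial.end(), 0.0);
        std::cout << "sum = " << total << " using " << cores << " threads\n";
    }

    A single-threaded program sees no benefit from the extra cores; code written this way does, which is why the multicore turn forced so much software to be restructured.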

    “What’s starting to happen is people are looking to other innovations on silicon to give them performance” as a way to extend Moore’s Law, said Spike Narayan, director of science and technology at IBM’s Almaden Research Center.

    Then, about a year and a half ago, “something even more drastic started happening,” Narayan said. The wires connecting transistors became so small that their resistance to electrical current rose sharply. “Big problem,” he said.

    “That’s why you see all the materials research and innovation,” he said of new efforts to find alternative materials and structures for chips.

    Another issue confronting Moore’s Law is that the energy consumed by chips has begun to rise as transistors shrink. “Our biggest challenge” is energy efficiency, said Alan Gara, chief architect of the Aurora supercomputer Intel is building for Argonne National Laboratory near Chicago.

    Intel says it sees a path to continue the growth predicted by Moore’s Law through the next decade. The next generation of processors is in “full development mode,” said Mark Bohr, an Intel senior fellow who leads a group that decides how each generation of Intel chips will be made. Bohr is spending his time on the generation after that, in which transistors will shrink to 7 nanometers. The average human hair is 25,000 nanometers wide.

    At some point the doubling will slow down, says Chenming Hu, an electrical engineering and computer science professor at the University of California, Berkeley. Hu is a key figure in the development of a new transistor structure that’s helping keep Moore’s Law going.

    “It’s totally understandable that a company, in order to gain more market share and beat out all competitors, needs to double and triple if you can,” Hu said. “That’s why this scaling has been going on at such a fast pace. But no exponential growth can go on forever.”

    Hu says what’s likely is that at some point the doubling every two years will slow to every four or five years.

    “And that’s probably a better thing than to flash and fizzle out. You really want to have the same growth at a lower pace.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    About Phys.org in 100 Words

    Phys.org™ (formerly Physorg.com) is a leading web-based science, research and technology news service which covers a full range of topics. These include physics, earth science, medicine, nanotechnology, electronics, space, biology, chemistry, computer sciences, engineering, mathematics and other sciences and technologies. Launched in 2004, Phys.org’s readership has grown steadily to include 1.75 million scientists, researchers, and engineers every month. Phys.org publishes approximately 100 quality articles every day, offering some of the most comprehensive coverage of sci-tech developments world-wide. Quantcast 2009 includes Phys.org in its list of the Global Top 2,000 Websites. Phys.org community members enjoy access to many personalized features such as social networking, a personal home page set-up, RSS/XML feeds, article comments and ranking, the ability to save favorite articles, a daily newsletter, and other options.

  • richardmitnick 2:47 pm on October 22, 2014 Permalink | Reply
    Tags: , Computing,   

    From BNL: “Brookhaven Lab Launches Computational Science Initiative” 

    Brookhaven Lab

    October 22, 2014
    Karen McNulty Walsh, (631) 344-8350 or Peter Genzer, (631) 344-3174

    Leveraging computational science expertise and investments across the Laboratory to tackle “big data” challenges

    Building on its capabilities in computational science and data management, the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory is embarking upon a major new Computational Science Initiative (CSI). This program will leverage computational science expertise and investments across multiple programs at the Laboratory—including the flagship facilities that attract thousands of scientific users each year—further establishing Brookhaven as a leader in tackling the “big data” challenges at the frontiers of scientific discovery. Key partners in this endeavor include nearby universities such as Columbia, Cornell, New York University, Stony Brook, and Yale, as well as IBM Research.

    Blue Gene/Q Supercomputer at Brookhaven National Laboratory

    “The CSI will bring together under one umbrella the expertise that drives [the success of Brookhaven’s scientific programs] to foster cross-disciplinary collaboration and make optimal use of existing technologies, while also leading the development of new tools and methods that will benefit science both within and beyond the Laboratory.”
    — Robert Tribble

    “Advances in computational science and management of large-scale scientific data developed at Brookhaven Lab have been a key factor in the success of the scientific programs at the Relativistic Heavy Ion Collider (RHIC), the National Synchrotron Light Source (NSLS), the Center for Functional Nanomaterials (CFN), and in biological, atmospheric, and energy systems science, as well as our collaborative participation in international research endeavors, such as the ATLAS experiment at Europe’s Large Hadron Collider,” said Robert Tribble, Brookhaven Lab’s Deputy Director for Science and Technology, who is leading the development of the new initiative. “The CSI will bring together under one umbrella the expertise that drives this success to foster cross-disciplinary collaboration and make optimal use of existing technologies, while also leading the development of new tools and methods that will benefit science both within and beyond the Laboratory.”

    BNL RHIC Campus
    RHIC at BNL

    BNL NSLS Interior
    NSLS at BNL

    A centerpiece of the initiative will be a new Center for Data-Driven Discovery (C3D) that will serve as a focal point for this activity. Within the Laboratory it will drive the integration of intellectual, programmatic, and data/computational infrastructure with the goals of accelerating and expanding discovery by developing critical mass in key disciplines, enabling nimble response to new opportunities for discovery or collaboration, and ultimately integrating the tools and capabilities across the entire Laboratory into a single scientific resource. Outside the Laboratory C3D will serve as a focal point for recruiting, collaboration, and communication.

    The people and capabilities of C3D are also integral to the success of Brookhaven’s key scientific facilities, including those named above, the new National Synchrotron Light Source II (NSLS-II), and a possible future electron ion collider (EIC) at Brookhaven. Hundreds of scientists from Brookhaven and thousands of facility users from universities, industry, and other laboratories around the country and throughout the world will benefit from the capabilities developed by C3D personnel to make sense of the enormous volumes of data produced at these state-of-the-art research facilities.

    BNL NSLS II Photo
    BNL NSLS-II Interior
    NSLS II at BNL

    The CSI in conjunction with C3D will also host a series of workshops/conferences and training sessions in high-performance computing—including annual workshops on extreme-scale data and scientific knowledge discovery, extreme-scale networking, and extreme-scale workflow for integrated science. These workshops will explore topics at the frontier of data-centric, high-performance computing, such as the combination of efficient methodologies and innovative computer systems and concepts to manage and analyze scientific data generated at high volumes and rates.

    “The missions of C3D and the overall CSI are well aligned with the broad missions and goals of many agencies and industries, especially those of DOE’s Office of Science and its Advanced Scientific Computing Research (ASCR) program,” said Robert Harrison, who holds a joint appointment as director of Brookhaven Lab’s Computational Science Center (CSC) and Stony Brook University’s Institute for Advanced Computational Science (IACS) and is leading the creation of C3D.

    The CSI at Brookhaven will specifically address the challenge of developing new tools and techniques to deliver on the promise of exascale science—the ability to compute at a rate of 10^18 floating point operations per second (exaFLOPS), to handle the copious amount of data created by computational models and simulations, and to employ exascale computation to interpret and analyze exascale data anticipated from experiments in the near future.
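
    A quick back-of-the-envelope calculation gives a feel for that scale; the workload sizes below are arbitrary illustrations, not projections for any Brookhaven experiment.

    #include <cstdio>

    int main() {
        const double exaflops = 1.0e18;  // one exaFLOPS: 10^18 floating point operations per second

        const double workloads[] = {1.0e18, 1.0e21, 1.0e24};  // arbitrary operation counts
        for (double ops : workloads) {
            std::printf("%.0e operations at 1 exaFLOPS: %g seconds\n", ops, ops / exaflops);
        }
    }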

    “Without these tools, scientific results would remain hidden in the data generated by these simulations,” said Brookhaven computational scientist Michael McGuigan, who will be working on data visualization and simulation at C3D. “These tools will enable researchers to extract knowledge and share key findings.”

    Through the initiative, Brookhaven will establish partnerships with leading universities, including Columbia, Cornell, Stony Brook, and Yale to tackle “big data” challenges.

    “Many of these institutions are already focusing on data science as a key enabler to discovery,” Harrison said. “For example, Columbia University has formed the Institute for Data Sciences and Engineering with just that mission in mind.”

    Computational scientists at Brookhaven will also seek to establish partnerships with industry. “As an example, partnerships with IBM have been successful in the past with co-design of the QCDOC and BlueGene computer architectures,” McGuigan said. “We anticipate more success with data-centric computer designs in the future.”

    An area that may be of particular interest to industrial partners is how to interface big-data experimental problems (such as those that will be explored at NSLS-II, or in the fields of high-energy and nuclear physics) with high-performance computing using advanced network technologies. “The reality of ‘computing system on a chip’ technology opens the door to customizing high-performance network interface cards and application program interfaces (APIs) in amazing ways,” said Dantong Yu, a group leader and data scientist in the CSC.

    “In addition, the development of asynchronous data access and transports based on remote direct memory access (RDMA) techniques and improvements in quality of service for network traffic could significantly lower the energy footprint for data processing while enhancing processing performance. Projects in this area would be highly amenable to industrial collaboration and lead to an expansion of our contributions beyond system and application development and designing programming algorithms into the new arena of exascale technology development,” Yu said.

    “The overarching goal of this initiative will be to bring under one umbrella all the major data-centric activities of the Lab to greatly facilitate the sharing of ideas, leverage knowledge across disciplines, and attract the best data scientists to Brookhaven to help us advance data-centric, high-performance computing to support scientific discovery,” Tribble said. “This initiative will also greatly increase the visibility of the data science already being done at Brookhaven Lab and at its partner institutions.”

    See the full article here.

    BNL Campus

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

    ScienceSprings relies on technology from

    MAINGEAR computers



  • richardmitnick 9:38 am on August 29, 2014 Permalink | Reply
    Tags: , Computing,   

    From BNL Lab: “DOE ‘Knowledgebase’ Links Biologists, Computer Scientists to Solve Energy, Environmental Issues” 

    Brookhaven Lab

    August 29, 2014
    Rebecca Harrington

    With new tool, biologists don’t have to be programmers to answer big computational questions

    If biologists wanted to determine the likely way a particular gene variant might increase a plant’s yield for producing biofuels, they used to have to track down several databases and cross-reference them using complex computer code. The process would take months, especially if they weren’t familiar with the computer programming necessary to analyze the data.

    Combining information about plants, microbes, and the complex biomolecular interactions that take place inside these organisms into a single, integrated “knowledgebase” will greatly enhance scientists’ ability to access and share data, and use it to improve the production of biofuels and other useful products.

    Now they can do the same analysis in a matter of hours, using the Department of Energy’s Systems Biology Knowledgebase (KBase), a new computational platform to help the biological community analyze, store, and share data. Led by scientists at DOE’s Lawrence Berkeley, Argonne, Brookhaven, and Oak Ridge national laboratories, KBase amasses the data available on plants, microbes, microbial communities, and the interactions among them with the aim of improving the environment and energy production. The computational tools, resources, and community networking available will allow researchers to propose and test new hypotheses, predict biological behavior, design new useful functions for organisms, and perform experiments never before possible.

    “Quantitative approaches to biology were significantly developed during the last decade, and for the first time, we are now in a position to construct predictive models of biological organisms,” said computational biologist Sergei Maslov, who is principal investigator (PI) for Brookhaven’s role in the effort and Associate Chief Science Officer for the overall project, which also has partners at a number of leading universities, Cold Spring Harbor Laboratory, the Joint Genome Institute, the Environmental Molecular Sciences Laboratory, and the DOE Bioenergy Centers. “KBase allows research groups to share and analyze data generated by their project, put it into context with data generated by other groups, and ultimately come to a much better quantitative understanding of their results. Biomolecular networks, which are the focus of my own scientific research, play a central role in this generation and propagation of biological knowledge.”

    Maslov said the team is transitioning from the scientific pilot phase into the production phase and will gradually expand from the limited functionality available now. By signing up for an account, scientists can access the data and tools free of charge, opening the doors to faster research and deeper collaboration.

    Easy coding

    “We implement all the standard tools to operate on this kind of key data so a single PI doesn’t need to go through the hassle by themselves.”
    — Shinjae Yoo, assistant computational scientist working on the project at Brookhaven

    As problems in energy, biology, and the environment get bigger, the data needed to solve them becomes more complex, driving researchers to use more powerful tools to parse through and analyze this big data. Biologists across the country and around the world generate massive amounts of data — on different genes, their natural and synthetic variations, proteins they encode, and their interactions within molecular networks — yet these results often don’t leave the lab where they originated.

    “By doing small-scale experiments, scientists cannot get the system-level understanding of biological organisms relevant to the DOE mission,” said Shinjae Yoo, an assistant computational scientist working on the project at Brookhaven. “But they can use KBase for the analysis of their large-scale data. KBase will also allow them to compare and contrast their data with other key datasets generated by projects funded by the DOE and other agencies. We implement all the standard tools to operate on this kind of key data so a single PI doesn’t need to go through the hassle by themselves.”

    For non-programmers, KBase offers a “Narrative Interface,” allowing them to upload their data to KBase and construct a narrative of their analysis from a series of pre-coded programs, with a human in the middle interpreting and filtering the output.

    In one pre-coded narrative, researchers can filter through naturally occurring variations of genes in Poplar, one of the DOE flagship bioenergy plant species. Scientists can discover genes associated with a reduced amount of lignin—a rigid polymer in plant cell walls that makes conversion of Poplar biomass to biofuels more difficult. In this narrative, scientists can use datasets from KBase and from their own research to find candidate genes, and use networks to select the genes most likely to be related to a specific trait they’re looking for—say, genes that result in reduced lignin content, which could ease the biomass-to-biofuel conversion. And if other researchers wanted to run the same program for a different plant, they could just put different data into the same narrative.
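
    Conceptually, the step that narrows thousands of variants down to a handful of candidates is a filter over scored records. The sketch below only illustrates that idea; the record type, field names, scores, and cutoff are all invented, and KBase’s real narratives, data models, and APIs look nothing like this hand-rolled C++.

    #include <algorithm>
    #include <iostream>
    #include <iterator>
    #include <string>
    #include <vector>

    // Invented record type standing in for a scored Poplar gene variant.
    struct GeneVariant {
        std::string geneId;
        double ligninAssociation;  // fabricated score: strongly negative = less lignin
    };

    int main() {
        // Made-up example data.
        std::vector<GeneVariant> variants = {
            {"PtGeneA", -0.72}, {"PtGeneB", 0.10},
            {"PtGeneC", -0.55}, {"PtGeneD", 0.31},
        };

        // Keep variants strongly associated with reduced lignin content,
        // i.e. candidate genes for easier biomass-to-biofuel conversion.
        const double cutoff = -0.5;
        std::vector<GeneVariant> candidates;
        std::copy_if(variants.begin(), variants.end(), std::back_inserter(candidates),
                     [cutoff](const GeneVariant& v) { return v.ligninAssociation < cutoff; });

        for (const auto& v : candidates) {
            std::cout << v.geneId << " (" << v.ligninAssociation << ")\n";
        }
    }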

    “Everything is already there,” Yoo said. “You simply need to upload the data in the right format and run through several easy steps within the narrative.”

    For those who know how to code, KBase has the IRIS Interface, a web-based command line terminal where researchers can run and control the programs on their own, allowing scientists to analyze large volumes of data. If researchers want to learn how to do the coding themselves, KBase also has tutorials and resources to help interested scientists learn it.

    A social network

    But KBase’s most powerful resource is the community itself. Researchers are encouraged to upload their data and programs so that other users can benefit from them. This type of cooperative environment encourages sharing and feedback among researchers, so the programs, tools, and annotation of datasets can improve with other users’ input.

    Brookhaven is leading the plant team on the project, while the microbe and microbial community teams are based at other partner institutions. A computer scientist by training, Yoo said his favorite part of working on KBase has been how much biology he’s learned. Acting as a go-between among the biologists at Brookhaven, who are describing what they’d like to see KBase be able to do, and the computer scientists, who are coding the programs to make it happen, Yoo has had to understand both languages of science.

    “I’m learning plant biology. That’s pretty cool to me,” he said. “In the beginning, it was quite tough. Three years later I’ve caught up, but I still have a lot to learn.”

    Ultimately, KBase aims to interweave huge amounts of data with the right tools and user interface to enable bench scientists without programming backgrounds to answer the kinds of complex questions needed to solve the energy and environmental issues of our time.

    “We can gain systematic understanding of a biological process much faster, and also have a much deeper understanding,” Yoo said, “so we can engineer plant organisms or bacteria to improve productivity, biomass yield—and then use that information for biodesign.”

    KBase is funded by the DOE’s Office of Science. The Office of Science (SC) is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

    See the full article here.

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

    ScienceSprings relies on technology from

    MAINGEAR computers



  • richardmitnick 10:49 am on August 13, 2014 Permalink | Reply
    Tags: , Computing, ,   

    From Fermilab: “Fermilab hosts first C++ school for next generation of particle physicists” 

    Fermilab is an enduring source of strength for the US contribution to scientific research worldwide.

    Wednesday, Aug. 13, 2014

    Leah Hesla

    Colliding particle beams without the software know-how to interpret the collisions would be like conducting an archaeological dig without the tools to sift through the artifacts. Without a way to get to the data, you wouldn’t know what you were looking at.

    Eager to keep future particle physicists well-equipped and up to date on the field’s chosen data analysis tools, Scientific Computing Division’s Liz Sexton-Kennedy and Sudhir Malik, now a physics professor at the University of Puerto Rico, Mayagüez, organized Fermilab’s first C++ software school, which was held last week.

    “C++ is the language of high-energy physics analysis and reconstruction,” Malik said. “There was no organized effort to teach it, so we started this school.”

    Although software skills are crucial for simulating and interpreting particle physics data, physics graduate school programs don’t formally venture into the more utilitarian skill sets. Thus scientists take it upon themselves to learn C++ outside the classroom, either on their own or through discussions with their peers. Usually this self-education is absorbed through examples, whether or not the examples are flawed, Sexton-Kennedy said.
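
    At its simplest, that kind of analysis code is a loop over events that combines measured particle four-momenta into physically meaningful quantities. The toy program below computes a dimuon invariant mass from two made-up muon candidates; real analyses lean on frameworks such as ROOT rather than hand-written classes, so treat this only as a flavor of the C++ such work involves.

    #include <algorithm>
    #include <cmath>
    #include <iostream>

    // Toy four-vector in GeV; real analyses use library classes instead.
    struct FourVector {
        double px, py, pz, e;
    };

    // Invariant mass of a particle pair: m^2 = E^2 - |p|^2 for the summed four-vector.
    double invariantMass(const FourVector& a, const FourVector& b) {
        const double e  = a.e + b.e;
        const double px = a.px + b.px;
        const double py = a.py + b.py;
        const double pz = a.pz + b.pz;
        return std::sqrt(std::max(0.0, e * e - px * px - py * py - pz * pz));
    }

    int main() {
        // Two invented muon candidates from a single event.
        const FourVector mu1{30.0, 10.0, 5.0, 32.1};
        const FourVector mu2{-25.0, -8.0, 12.0, 29.0};
        std::cout << "dimuon invariant mass: " << invariantMass(mu1, mu2) << " GeV\n";
    }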

    The school aimed to set its students straight.

    It also looked to increase the numbers of particle physicists fluent in C++, a skill that is useful beyond particle physics. Fields outside academia highly value such expertise — enough that particle physicists are being lured away to jobs in industry.

    “We would lose people who were good at both physics and C++,” Sexton-Kennedy said. “The few of us who stayed behind needed to teach the next generation.”

    The next generation appears to have been waiting for just such an opportunity: Within two weeks of the C++ school opening registration, 80 students signed up. It was so popular that the co-organizers had to start a wait list.

    The software school students include undergraduates, graduate students and postdocs, all of whom work on Fermilab experiments.

    “We get most of the ideas for how to use software for event reconstruction for the LBNE near-detector reference design from these sessions,” said Xinchun Tian, a University of South Carolina postdoc working on the Long-Baseline Neutrino Experiment. “C++ is very useful for our research.”

    Fermilab NUMI Tunnel project
    Fermilab NuMI tunnel

    Fermilab LBNE

    University of Wisconsin physics professor Matt Herndon led the sessions. He was assisted by 13 people: University of Notre Dame physics professor Mike Hildreth and volunteers from the SCD Scientific Software Infrastructure Department.

    Malik and Sexton-Kennedy plan to make the school material available online.

    “People have to take these tools seriously, and in high-energy physics, the skills mushroom around C++ software,” Malik said. “Students are learning C++ while growing up in the field.”

    See the full article here.

    Fermilab Campus

    Fermi National Accelerator Laboratory (Fermilab), located just outside Batavia, Illinois, near Chicago, is a US Department of Energy national laboratory specializing in high-energy particle physics.

    ScienceSprings relies on technology from

    MAINGEAR computers


