Tagged: Computing Toggle Comment Threads | Keyboard Shortcuts

  • richardmitnick 7:06 am on February 6, 2018 Permalink | Reply
    Tags: Computing

    From CSIROscope: “Cybersecurity: we can hack it” 

    CSIRO bloc


    6 February 2018
    Chris Chelvan

    No image caption or credit.

    It is estimated that 3,885,567,619 people across the world have access to the internet, roughly 51.7% of the world population. More often than not, the internet is used to benefit society — from connecting opposite sides of the world to making knowledge more accessible. But sometimes, the anonymity provided by the internet creates risks of cyberbullying as well as threats to cyber security.

    Every month, at least 50,000 new cyber threats arise that expose internet users to risk. The National Vulnerability Database (NVD) operated by the National Institute of Standards and Technology suggests that between 500 and 1,000 new vulnerabilities emerge every month, of which at least 25 per cent are critical and pose a risk for significant damage.

    Some of the largest cybersecurity threats emerged just last year. The WannaCry ransomware attack in May 2017 affected more than 300,000 computers across 150 countries causing billions of dollars in damage. Spectre and Meltdown, too, exposed critical cyber vulnerabilities in computers and mobile phones around the world, exposing millions of people to hackers — in fact, Data61 researcher Dr Yuval Yarom from the Trustworthy Systems Group was one of the contributors whose research uncovered the Spectre issue.

    Not only are cyber threats increasing, they’re also evolving. First the focus was on attacking technology: hacking, malware, and remote access. Then the focus shifted to attacking humans with phishing, social engineering and ransomware, like WannaCry. Now cyber attacks are more sophisticated than ever and even harder to detect.

    And yet, given all these threats, Australia has next to no cyber security specialists. The Australian Cyber Security Growth Network has said the demand for skills in the sector far outstrips supply. A recent Government report estimated Australia would need another 11,000 cyber security specialists over the next decade.

    It’s against this diverse backdrop of new and constantly changing threats that we celebrate Safer Internet Day and call on our future generation of science, technology, engineering and mathematics (STEM) leaders to fill the glaring shortage of cybersecurity professionals in Australia.

    Not only are we short of information security professionals now, but data show that by 2022 we’ll be short up to 1.8 million positions. This is particularly urgent in Australia, where women make up just one in three students studying STEM — a proportion that needs to rise to meet the country’s growing cyber security needs.

    Introducing STEM Professionals in Schools, our education program that shows young women how they can make an impact in Australia and across the world. STEM Professionals in Schools is Australia’s leading STEM education volunteering program, bringing real-world STEM into the classroom to inspire students and teachers.

    Our Data61 CEO, Adrian Turner, visited Melbourne Girls College to talk about safer internet usage and the importance of STEM.

    “These students are our future innovators, scientists and engineers,” Mr Turner said.

    “It’s essential to equip them with the skills they need in school, and to capture their interest in cybersecurity and why it matters now and in the future so they can see how much of a crucial role it is and will continue to play in Australia’s data-driven future and digital economy.”

    A rewarding career in STEM can take on many forms, too. Data61’s STEM graduates have worked in various roles and research projects, spanning everything from machine learning and robotics to analytics and of course — cybersecurity.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    SKA/ASKAP radio telescope at the Murchison Radio-astronomy Observatory (MRO) in Mid West region of Western Australia

    So what can we expect these new radio projects to discover? We have no idea, but history tells us that they are almost certain to deliver some major surprises.

    Making these new discoveries may not be so simple. Gone are the days when astronomers could just notice something odd as they browse their tables and graphs.

    Nowadays, astronomers are more likely to be distilling their answers from carefully-posed queries to databases containing petabytes of data. Human brains are just not up to the job of making unexpected discoveries in these circumstances, and instead we will need to develop “learning machines” to help us discover the unexpected.

    With the right tools and careful insight, who knows what we might find.

    CSIRO campus

    CSIRO, the Commonwealth Scientific and Industrial Research Organisation, is Australia’s national science agency and one of the largest and most diverse research agencies in the world.

  • richardmitnick 12:10 pm on November 29, 2017 Permalink | Reply
    Tags: Bridging gaps in high-performance computing languages, Computing, Generative programming, Programming languages, Tiark Rompf

    From ASCRDiscovery: “Language barrier” 

    Advancing Science Through Computing

    November 2017
    No writer credit

    A Purdue University professor is using a DOE early-career award to bridge gaps in high-performance computing languages.

    Detail from an artwork made through generative programming. Purdue University’s Tiark Rompf is investigating programs that create new ones to bring legacy software up to speed in the era of exascale computing. Painting courtesy of Freddbomba via Wikimedia Commons.

    A Purdue University assistant professor of computer science leads a group effort to find new and better ways to generate high-performance computing codes that run efficiently on as many different kinds of supercomputer architectures as possible.

    That’s the challenging goal Tiark Rompf has set for himself with his recent Department of Energy Early Career Research Program award – to develop what he calls “program generators” for exascale architectures and beyond.

    “Programming supercomputers is hard,” Rompf says. Coders typically write software in so-called general-purpose languages. The languages are low-level, meaning “specialized to a given machine architecture. So when a machine is upgraded or replaced, one has to rewrite most of the software.”

    As an alternative to this rewriting, which involves tediously translating low-level code from one supercomputer platform into another, programmers would prefer to use high-level languages “written in a way that feels natural” to them, Rompf says, and “closer to the way a programmer thinks about the computation.”

    But high-level and low-level languages are far apart, with a steel wall of differences between the ways the two types of languages are written, interpreted and executed. In particular, high-level languages rarely perform as well as desired. Executing them requires special so-called smart compilers that must use highly specialized analysis to figure out what the program “really means and how to match it to machine instructions.”

    Tiark Rompf. Photo courtesy of Purdue University.

    Rompf and his group propose avoiding that with something called generative programming, which he has worked on since before he received his 2012 Ph.D. from Ecole Polytechnique Federale de Lausanne (EPFL) in Switzerland. The idea is to create special programs structured so they’re able to make additional programs where needed.

    In a 2015 paper, Rompf and research colleagues at EPFL, Stanford University and ETH Zurich also called for a radical reassessment of high-level languages. “We really need to think about how to design programming languages and (software) libraries that embrace this generative programming idea,” he adds.

    Program generators “are attractive because they can automate the process of producing very efficient code,” he says. But building them “has also been very hard, and therefore only a few exist today. We’re planning to build the necessary infrastructure to make it an order of magnitude easier.”
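
    The “program generator” idea can be illustrated in miniature outside of Rompf’s actual tooling. The sketch below is a hypothetical Python analogue (the function `make_dot` and its string-based unrolling are illustrative assumptions, not LMS itself): a generator runs first and emits specialized source code for a fixed problem size, then compiles it into a callable.

```python
# A toy program generator: given a fixed vector length, emit source
# code with the dot-product loop fully unrolled, then compile it into
# a callable. This mimics, in miniature, how a generator trades work
# at generation time for efficient specialized code at run time.
def make_dot(n):
    terms = " + ".join(f"a[{i}] * b[{i}]" for i in range(n))
    src = f"def dot(a, b):\n    return {terms}\n"
    namespace = {}
    exec(src, namespace)  # run the generator now; the generated code runs later
    return namespace["dot"]

dot3 = make_dot(3)
print(dot3([1, 2, 3], [4, 5, 6]))  # 1*4 + 2*5 + 3*6 = 32
```

    Real systems such as LMS do this with type-directed staging rather than string manipulation, but the division of labor is the same: one program runs early to produce another program that runs fast.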

    As he noted in his early-career award proposal, progress building program generators is extremely difficult for more reasons than just programmer-computer disharmony. Other obstacles include compiler limitations, differing capabilities of supercomputer processors, the changing ways data are stored and the ways software libraries are accessed. Rompf plans to use his five-year, $750,000 award to evaluate generative programming as a way around some of those roadblocks.

    One idea, for instance, is to identify and create an extensible stack of intermediate languages that could serve as transitional steps when high-level codes must be translated into machine code. These also are described as “domain-specific languages” or DSLs, as they encode more knowledge about the application subject than general-purpose languages.
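
    The role of such an intermediate-language stack can be sketched with a deliberately tiny example (hypothetical, not one of Rompf’s actual DSLs): a high-level expression tree is lowered into a flat list of three-address instructions, one small step closer to machine code.

```python
# Lower a nested ('op', lhs, rhs) expression tree into three-address
# instructions, the kind of intermediate form a compiler stack passes
# through on the way from a high-level language down to machine code.
def lower(expr):
    instrs = []

    def go(e):
        if isinstance(e, str):  # a bare variable is already a value
            return e
        op, lhs, rhs = e
        a, b = go(lhs), go(rhs)
        tmp = f"t{len(instrs) + 1}"  # fresh temporary for this result
        instrs.append((tmp, op, a, b))
        return tmp

    return go(expr), instrs

result, code = lower(("add", ("mul", "x", "y"), "z"))
# code: [('t1', 'mul', 'x', 'y'), ('t2', 'add', 't1', 'z')]
```

    In a real stack, each intermediate layer would also apply domain knowledge, such as vectorization or data-layout choices, before handing the program down.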

    Eventually, programmers hope to entirely phase out legacy languages such as C and Fortran, substituting only high-level languages and DSLs. Rompf points out that legacy codes can be decades older than the processors they run on, and some have been heavily adapted to run on new generations of machines, an investment that can make legacy codes difficult to jettison.


    Generative programming was the basis for Rompf’s doctoral research. It was described as an approach called Lightweight Modular Staging, or LMS, in a 2010 paper he wrote with his EPFL Ph.D. advisor, Martin Odersky. That’s “a software platform that provides capabilities for other programmers to develop software in a generative style,” Rompf says.

    LMS also underpins Delite, a software framework Rompf later developed in collaboration with a Stanford University group to build DSLs targeting parallel processing in supercomputer architectures – “very important for the work I’m planning to do,” he says.

    While working at Oracle Labs between 2012 and 2014, Rompf started Project Lancet to integrate generative approaches into a virtual machine for high-level languages. Virtual machines are code that can induce real computers to run selected programs. In the case of Lancet, software executes high-level languages and then performs selective compilations in machine code.

    Born and raised in Germany, Rompf joined Purdue in the fall of 2014. It’s “a great environment for doing this kind of research,” he says. “We have lots of good students in compilers, high-performance and databases. We’ve been hiring many new assistant professors. There are lots of young people who all want to accomplish things.”

    He calls his DOE Early Career award a great honor. “I think there are many opportunities for future work in getting more of the DOE community in the interaction.” Although he is the project’s only principal investigator, he is collaborating with other groups at Purdue, ETH Zurich and Stanford and has received recent and related National Science Foundation research grants.

    As a busy assistant professor, he has six graduate students on track to get their doctorates, plus a varying number of undergraduate assistants. Rompf also is a member of the Purdue Research on Programming Languages group (PurPL), with 10 faculty members and their students.

    “It’s a very vibrant group, which like the Purdue computer science department has been growing a lot in recent years,” he says.

    Now in its eighth year, the DOE Office of Science’s Early Career Research Program for researchers in universities and DOE national laboratories supports the development of individual research programs of outstanding scientists early in their careers and stimulates research careers in the disciplines supported by the Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

    See the full article here.


    ASCRDiscovery is a publication of the U.S. Department of Energy

  • richardmitnick 2:04 pm on October 13, 2017 Permalink | Reply
    Tags: Computing

    From BNL: “Scientists Use Machine Learning to Translate ‘Hidden’ Information that Reveals Chemistry in Action” 

    Brookhaven Lab

    October 10, 2017
    Karen McNulty Walsh
    (631) 344-8350

    Peter Genzer
    (631) 344-3174

    New method allows on-the-fly analysis of how catalysts change during reactions, providing crucial information for improving performance.

    A sketch of the new method that enables fast, “on-the-fly” determination of three-dimensional structure of nanocatalysts. The neural network converts the x-ray absorption spectra into geometric information (such as nanoparticle sizes and shapes) and the structural models are obtained for each spectrum. No image credit.

    Chemistry is a complex dance of atoms. Subtle shifts in position and shuffles of electrons break and remake chemical bonds as participants change partners. Catalysts are like molecular matchmakers that make it easier for sometimes-reluctant partners to interact.

    Now scientists have a way to capture the details of chemistry choreography as it happens. The method—which relies on computers that have learned to recognize hidden signs of the steps—should help them improve the performance of catalysts to drive reactions toward desired products faster.

    The method—developed by an interdisciplinary team of chemists, computational scientists, and physicists at the U.S. Department of Energy’s Brookhaven National Laboratory and Stony Brook University—is described in a new paper published in the Journal of Physical Chemistry Letters. The paper demonstrates how the team used neural networks and machine learning to teach computers to decode previously inaccessible information from x-ray data, and then used that data to decipher 3D nanoscale structures.

    Decoding nanoscale structures

    “The main challenge in developing catalysts is knowing how they work—so we can design better ones rationally, not by trial-and-error,” said Anatoly Frenkel, leader of the research team who has a joint appointment with Brookhaven Lab’s Chemistry Division and Stony Brook University’s Materials Science Department. “The explanation for how catalysts work is at the level of atoms and very precise measurements of distances between them, which can change as they react. Therefore it is not so important to know the catalysts’ architecture when they are made but more important to follow that as they react.”

    Anatoly Frenkel (standing) with co-authors (l to r) Deyu Lu, Yuewei Lin, and Janis Timoshenko. No image credit.

    Trouble is, important reactions—those that create important industrial chemicals such as fertilizers—often take place at high temperatures and under pressure, which complicates measurement techniques. For example, x-rays can reveal some atomic-level structures by causing atoms that absorb their energy to emit electronic waves. As those waves interact with nearby atoms, they reveal their positions in a way that’s similar to how distortions in ripples on the surface of a pond can reveal the presence of rocks. But the ripple pattern gets more complicated and smeared when high heat and pressure introduce disorder into the structure, thus blurring the information the waves can reveal.

    So instead of relying on the “ripple pattern” of the x-ray absorption spectrum, Frenkel’s group figured out a way to look into a different part of the spectrum associated with low-energy waves that are less affected by heat and disorder.

    “We realized that this part of the x-ray absorption signal contains all the needed information about the environment around the absorbing atoms,” said Janis Timoshenko, a postdoctoral fellow working with Frenkel at Stony Brook and lead author on the paper. “But this information is hidden ‘below the surface’ in the sense that we don’t have an equation to describe it, so it is much harder to interpret. We needed to decode that spectrum but we didn’t have a key.”

    Fortunately Yuewei Lin and Shinjae Yoo of Brookhaven’s Computational Science Initiative and Deyu Lu of the Center for Functional Nanomaterials (CFN) had significant experience with so-called machine learning methods. They helped the team develop a key by teaching computers to find the connections between hidden features of the absorption spectrum and structural details of the catalysts.

    “Janis took these ideas and really ran with them,” Frenkel said.

    The team used theoretical modeling to produce simulated spectra of several hundred thousand model structures, and used those to train the computer to recognize the features of the spectrum and how they correlated with the structure.

    “Then we built a neural network that was able to convert the spectrum into structures,” Frenkel said.
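
    The overall training setup can be sketched with a stand-in model. Everything below is a simplified illustration (the synthetic “spectrum” function, network size, and parameters are assumptions, not the team’s published code): simulated spectra are generated from known structure parameters, and a neural network learns the inverse mapping from spectrum to structure.

```python
# A simplified stand-in for the workflow described above: simulate
# spectra from known structure parameters, then train a neural
# network to invert the mapping (spectrum -> structure).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
energies = np.linspace(0.0, 1.0, 50)

def simulate_spectrum(size):
    # Stand-in for theoretical modeling: a synthetic "spectrum" whose
    # shape depends on a single structure parameter (particle size).
    return np.exp(-energies * size) + 0.01 * rng.normal(size=energies.shape)

sizes = rng.uniform(1.0, 5.0, 500)                 # known structures
spectra = np.array([simulate_spectrum(s) for s in sizes])

model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
model.fit(spectra, sizes)                          # train on simulated pairs

pred = model.predict(simulate_spectrum(3.0).reshape(1, -1))[0]
# once trained, pred should land near the true parameter, 3.0
```

    As the article notes, the expensive part is generating training data and fitting the network; once that is done, inverting a new measured spectrum is nearly instantaneous.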

    When they tested to see if the method would work to decipher the shapes and sizes of well-defined platinum nanoparticles (using x-ray absorption spectra previously published by Frenkel and his collaborators) it did.

    “This method can now be used on the fly,” Frenkel said. “Once the network is constructed it takes almost no time for the structure to be obtained in any real experiment.”

    That means scientists studying catalysts at Brookhaven’s National Synchrotron Light Source II (NSLS-II), for example, could obtain real-time structural information to decipher why a particular reaction slows down, or starts producing an unwanted product—and then tweak the reaction conditions or catalyst chemistry to achieve desired results. This would be a big improvement over waiting to analyze results after completing the experiments and then figuring out what went wrong.

    In addition, this technique can process and analyze spectral signals from very low-concentration samples, and will be particularly useful at new high flux and high-energy-resolution beamlines incorporating special optics and high-throughput analysis techniques at NSLS-II.

    “This will offer completely new methods of using synchrotrons for operando research,” Frenkel said.

    This work was funded by the DOE Office of Science (BES) and by Brookhaven’s Laboratory Directed Research and Development program. Previously published spectra for the model nanoparticles used to validate the neural network were collected at the Advanced Photon Source (APS) at DOE’s Argonne National Laboratory and the original National Synchrotron Light Source (NSLS) at Brookhaven Lab, now replaced by NSLS-II. CFN, NSLS-II, and APS are DOE Office of Science User Facilities. In addition to Frenkel and Timoshenko, Lu and Lin are co-authors on the paper.

    See the full article here.

    BNL Campus

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

  • richardmitnick 9:32 am on July 7, 2017 Permalink | Reply
    Tags: Computing, St. Jude Children’s Research Hospital

    From ORNL: “ORNL researchers apply imaging, computational expertise to St. Jude research” 


    Oak Ridge National Laboratory

    July 6, 2017
    Stephanie G. Seay

    Left to right: ORNL’s Derek Rose, Matthew Eicholtz, Philip Bingham, Ryan Kerekes, and Shaun Gleason.

    Measuring migrating neurons in a developing mouse brain.

    Identifying and analyzing neurons in a mouse auditory cortex.
    No image credits for above images

    In the quest to better understand and cure childhood diseases, scientists at St. Jude Children’s Research Hospital accumulate enormous amounts of data from powerful video microscopes. To help St. Jude scientists mine that trove of data, researchers at Oak Ridge National Laboratory have created custom algorithms that can provide a deeper understanding of the images and quicken the pace of research.

    The work resides in St. Jude’s Department of Developmental Neurobiology in Memphis, Tennessee, where scientists use advanced microscopy to capture the details of phenomena such as nerve cell growth and migration in the brains of mice. ORNL researchers take those videos and leverage their expertise in image processing, computational science, and machine learning to analyze the footage and create statistics.

    A recent Science article details St. Jude research on brain plasticity, or the ability of the brain to change and form new connections between neurons. In this work, ORNL helped track mice brain cell electrical activity in the auditory cortex when the animals were exposed to certain tones.

    ORNL researchers created an algorithm to measure electrical activations, or signals, across groups of neurons, collecting statistics and making correlations between cell activity in the auditory cortex and tones heard by the mice. Because the video was captured while the mice were awake and moving, the team first had to stabilize it to ensure a proper analysis, said Derek Rose, who now leads the work at ORNL.
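
    The stabilization step can be sketched with standard image-registration machinery. The example below is a simplified illustration, not ORNL’s actual pipeline: it estimates each frame’s translational offset against a reference frame via FFT cross-correlation, then rolls the frame back into alignment.

```python
# Simplified frame stabilization: estimate a frame's translational
# offset against a reference via FFT cross-correlation, then roll
# the frame back into alignment before any cell-level analysis.
import numpy as np

def estimate_shift(ref, frame):
    # The peak of the circular cross-correlation gives the (dy, dx) offset.
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2:
        dy -= h  # map wrap-around indices to signed shifts
    if dx > w // 2:
        dx -= w
    return dy, dx

def stabilize(ref, frame):
    dy, dx = estimate_shift(ref, frame)
    return np.roll(frame, (dy, dx), axis=(0, 1))

rng = np.random.default_rng(1)
ref = rng.random((64, 64))
moved = np.roll(ref, (-3, 5), axis=(0, 1))  # simulate motion between frames
assert estimate_shift(ref, moved) == (3, -5)
```

    Production pipelines handle rotation, deformation, and subpixel motion as well, but pure translation already conveys the idea: align first, measure second.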

    See the full article here.

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


  • richardmitnick 3:39 pm on May 14, 2017 Permalink | Reply
    Tags: 22-Year-Old Researcher Accidentally Stops Global Cyberattack, Computing, Massive cyberattack thwarted

    From Inverse: “22-Year-Old Researcher Accidentally Stops Global Cyberattack” 



    May 13, 2017
    Grace Lisa Scott

    And then he blogged about how he did it.

    On Friday, a massive cyberattack spread across 74 countries, infiltrating global companies like FedEx and Nissan, telecommunication networks, and most notably the UK’s National Health Service. It left the NHS temporarily crippled, with test results and patient records becoming unavailable and phones not working.

    The ransomware attack employed a malware called WannaCrypt that encrypts a user’s data and then demands a payment — in this instance $300-worth of bitcoins — to retrieve and unlock said data. The malware is spread through email and exploits a vulnerability in Windows. Microsoft did release a patch that fixes the vulnerability back in March, but any computer without the update would have remained vulnerable.

    The attack was suddenly halted early Friday afternoon (Eastern Standard Time) thanks to a 22-year-old cybersecurity researcher from southwest England. Going by the pseudonym MalwareTech on Twitter, the researcher claimed he accidentally activated the software’s “kill switch” by registering a complicated domain name hidden in the malware.
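
    The reported kill-switch mechanism amounts to a simple conditional. The sketch below is schematic only (the domain shown is a placeholder, not the real hard-coded string): the malware attempted to resolve an unregistered domain and proceeded only if the lookup failed, so registering that domain halted new infections worldwide.

```python
# Schematic of the reported WannaCrypt kill-switch check. The domain
# below is a placeholder; the real one was a long pseudorandom name.
import socket

KILL_SWITCH_DOMAIN = "some-long-unregistered-name.invalid"  # placeholder

def should_run_payload():
    try:
        socket.gethostbyname(KILL_SWITCH_DOMAIN)
    except socket.gaierror:
        return True   # lookup failed: domain unregistered, malware proceeds
    return False      # domain resolves: kill switch engaged, stand down
```

    Once MalwareTech registered the real domain, every new copy of the malware performing this check found the lookup succeeding and stood down.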

    After getting home from lunch with a friend and realizing the true severity of the cyberattack, the cybersecurity expert started looking for a weakness within the malware with the help of a few fellow researchers. On Saturday, he detailed how he managed to stop the malware spread in a blog post endearingly titled “How to Accidentally Stop a Global Cyber Attacks”.

    “You’ve probably read about the WannaCrypt fiasco on several news sites, but I figured I’d tell my story,” he says.

    MalwareTech had registered the domain as a way to track the spread. “My job is to look for ways we can track and potentially stop botnets (and other kinds of malware), so I’m always on the lookout to pick up unregistered malware control server (C2) domains. In fact I registered several thousand of such domains in the past year,” he says.

    By registering the domain and setting up a sinkhole server, he planned to track the spread of WannaCrypt.

    Fortunately, the sinkhole didn’t turn out to be necessary, because just by registering the domain, MalwareTech had engaged what was possibly an obscure but intentional kill switch for the ransomware. A peer linked MalwareTech to a tweet by fellow researcher Darien Huss, who had just announced the discovery.

    The move gave companies and institutions time to patch their systems to avoid infection before the attackers could change the code and get the ransomware going again.

    In an interview with The Guardian Saturday, MalwareTech warned that the attack was probably not over. “The attackers will realize how we stopped it, they’ll change the code and then they’ll start again. Enable Windows Update, update and then reboot.”

    As for MalwareTech himself, he says he prefers to remain anonymous. “…It just doesn’t make sense to give out my personal information, obviously we’re working against bad guys and they’re not going to be happy about this,” he told the Guardian.

    To get into the nitty gritty of just why MalwareTech’s sinkhole managed to stop the international ransomware you can read his full blog post here.

    See the full article here.

  • richardmitnick 1:31 pm on April 4, 2017 Permalink | Reply
    Tags: Computing, Tim Berners-Lee wins $1 million Turing Award

    From MIT: “Tim Berners-Lee wins $1 million Turing Award” 

    MIT News


    April 4, 2017
    Adam Conner-Simons

    Tim Berners-Lee was honored with the Turing Award for his work inventing the World Wide Web, the first web browser, and “the fundamental protocols and algorithms [that allowed] the web to scale.” Photo: Henry Thomas

    CSAIL researcher honored for inventing the web and developing the protocols that spurred its global use.

    MIT Professor Tim Berners-Lee, the researcher who invented the World Wide Web and is one of the world’s most influential voices for online privacy and government transparency, has won the most prestigious honor in computer science, the Association for Computing Machinery (ACM) A.M. Turing Award. Often referred to as “the Nobel Prize of computing,” the award comes with a $1 million prize provided by Google.

    In its announcement today, ACM cited Berners-Lee for “inventing the World Wide Web, the first web browser, and the fundamental protocols and algorithms allowing the web to scale.” This year marks the 50th anniversary of the award.

    A principal investigator at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) with a joint appointment in the Department of Electrical Engineering and Computer Science, Berners-Lee conceived of the web in 1989 at the European Organization for Nuclear Research (CERN) as a way to allow scientists around the world to share information with each other on the internet. He introduced a naming scheme (URIs), a communications protocol (HTTP), and a language for creating webpages (HTML). His open-source approach to coding the first browser and server is often credited with helping catalyze the web’s rapid growth.
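
    Those three building blocks still fit together the same way today. A minimal sketch, using `example.com` as a stand-in resource:

```python
from urllib.parse import urlparse

# 1. A URI names a resource:
uri = "http://example.com/index.html"
parts = urlparse(uri)  # scheme='http', netloc='example.com', path='/index.html'

# 2. HTTP is the plain-text protocol a browser uses to request it:
request = f"GET {parts.path} HTTP/1.1\r\nHost: {parts.netloc}\r\n\r\n"

# 3. HTML is the language of the page the server returns:
response_body = "<html><body><h1>Hello, Web</h1></body></html>"
```

    The scheme names how to fetch, the host and path name what to fetch, and the markup describes what comes back: the whole architecture in three lines.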

    “I’m humbled to receive the namesake award of a computing pioneer who showed that what a programmer could do with a computer is limited only by the programmer themselves,” says Berners-Lee, the 3COM Founders Professor of Engineering at MIT. “It is an honor to receive an award like the Turing that has been bestowed to some of the most brilliant minds in the world.”

    Berners-Lee is founder and director of the World Wide Web Consortium (W3C), which sets technical standards for web development, as well as the World Wide Web Foundation, which aims to establish the open web as a public good and a basic right. He also holds a professorship at Oxford University.

    As director of CSAIL’s Decentralized Information Group, Berners-Lee has developed data systems and privacy-minded protocols such as “HTTP with Accountability” (HTTPA), which monitors the transmission of private data and enables people to examine how their information is being used. He also leads Solid (“social linked data”), a project to re-decentralize the web that allows people to control their own data and make it available only to desired applications.

    “Tim Berners-Lee’s career — as brilliant and bold as they come — exemplifies MIT’s passion for using technology to make a better world,” says MIT President L. Rafael Reif. “Today we celebrate the transcendent impact Tim has had on all of our lives, and congratulate him on this wonderful and richly deserved award.”

    While Berners-Lee was initially drawn to programming through his interest in math, there was also a familial connection: His parents met while working on the Ferranti Mark 1, the world’s first commercial general-purpose computer. Years later, he wrote a program called Enquire to track connections between different ideas and projects, indirectly inspiring what later became the web.

    “Tim’s innovative and visionary work has transformed virtually every aspect of our lives, from communications and entertainment to shopping and business,” says CSAIL Director Daniela Rus. “His work has had a profound impact on people across the world, and all of us at CSAIL are so very proud of him for being recognized with the highest honor in computer science.”

    Berners-Lee has received multiple accolades for his technical contributions, from being knighted by Queen Elizabeth to being named one of TIME magazine’s “100 Most Important People of the 20th Century.” He will formally receive the Turing Award during the ACM’s annual banquet June 24 in San Francisco.

    Past Turing Award recipients who have taught at MIT include Michael Stonebraker (2014), Shafi Goldwasser and Silvio Micali (2013), Barbara Liskov (2008), Ronald Rivest (2002), Butler Lampson (1992), Fernando Corbato (1990), John McCarthy (1971) and Marvin Minsky (1969).

    See the full article here.

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

  • richardmitnick 2:48 pm on January 30, 2017 Permalink | Reply
    Tags: Computing, Dr. Miriam Eisenstein, GSK-3, Modeling of molecules on the computer

    From Weizmann: Women in STEM – “Staff Scientist: Dr. Miriam Eisenstein” 

    Weizmann Institute of Science logo

    Weizmann Institute of Science

    No writer credit found

    Name: Dr. Miriam Eisenstein
    Department: Chemical Research Support

    “The modeling of molecules on the computer,” says Dr. Miriam Eisenstein, Head of the Macromolecular Modeling Unit of the Weizmann Institute of Science’s Chemical Research Support Department, “is sometimes the only way to understand exactly how such complex molecules as proteins interact.”

    Eisenstein was one of the first to develop molecular docking methods while working with Prof. Ephraim Katzir – over two decades ago – and she has worked in collaboration with many groups at the Weizmann Institute.

    But even with all her experience, protein interactions can still surprise her. This was the case in a recent collaboration with the lab group of Prof. Hagit Eldar-Finkelman of Tel Aviv University, in research that was hailed as a promising new direction for finding treatments for Alzheimer’s disease. Eldar-Finkelman and her group were investigating an enzyme known as GSK-3, which affects the activity of various proteins by clipping a particular type of chemical tag, known as a phosphate group, onto them. GSK-3 thus performs quite a few crucial functions in the body, but it can also become overactive, and this extra activity has been implicated in a number of diseases, including diabetes and Alzheimer’s.

    The Tel Aviv group, explains Eisenstein, was exploring a new way of blocking, or at least damping down, the activity of this enzyme. GSK-3 uses ATP — a small, phosphate-containing molecule — in the chemical tagging process, transferring one of the ATP phosphate groups to a substrate. The ATP binding site on the enzyme is often targeted with ATP-like drug compounds that bind there themselves and prevent the ATP from binding, thus blocking the enzyme’s activity. But such compounds are not discriminating enough, often blocking related enzymes in the process, which is an undesired side effect. This is why Eldar-Finkelman and her team looked for molecules that would compete with the substrate and occupy its binding cavity, so that the enzyme’s normal substrates cannot attach to GSK-3 and receive their phosphate tags.

    After identifying one molecule – a short piece of protein, or peptide – that substituted for GSK-3’s substrates in experiments, Eldar-Finkelman turned to Eisenstein to design peptides that would be better at competing with the substrate. At first Eisenstein computed model structures of the enzyme with an attached protein substrate and the enzyme with an attached peptide; she then characterized the way in which the enzyme binds either the substrate or the competing peptide. The model structures pinpointed the contacts, and these were verified experimentally by Eldar-Finkelman.

    This led to the next phase, a collaborative effort to introduce alterations to the peptide so as to improve its binding capabilities. One of the new peptides was predicted by Eisenstein to be a good substrate, and Eldar-Finkelman’s experiments showed that it indeed was. Once chemically tagged, the new peptide proved to be excellent at binding to GSK-3 – many times better than the original – and this was the surprise, because normally, once they are tagged, such substrates are repelled from the substrate-binding cavity and end up dissociating from the enzyme. Molecular modeling explained what was happening. After initially binding as a substrate and attaining a phosphate group, the peptide slid within the substrate-binding cavity, changing its conformation in the process, and attached tightly to a position normally occupied by the protein substrate.

    Experiments in Eldar-Finkelman’s group showed that this peptide is also active in vivo and, moreover, was able to reduce the symptoms of an Alzheimer-like condition in mice. The results of this research appeared in Science Signaling.

    “This experiment is a great example of the synergy between biologists and computer modelers,” says Eisenstein. “Hagit understands the function of this enzyme in the body, and she had this great insight on a possible way to control its actions. I am interested in the way that two proteins fit together and influence one another at the molecular and atomic levels, so I can provide the complementary insight.”

    “Molecular modeling is such a useful tool, it has enabled me to work with a great many groups and take part in a lot of interesting, exciting work, over the years,” she adds. “Computers have become much stronger in that time, but the basic, chemical principles of attraction and binding between complex molecules remain the same, and our work is as relevant as ever.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Weizmann Institute Campus

    The Weizmann Institute of Science is one of the world’s leading multidisciplinary research institutions. Hundreds of scientists, laboratory technicians and research students working on its lushly landscaped campus embark daily on fascinating journeys into the unknown, seeking to improve our understanding of nature and our place within it.

    Guiding these scientists is the spirit of inquiry so characteristic of the human race. It is this spirit that propelled humans upward along the evolutionary ladder, helping them reach their utmost heights. It prompted humankind to pursue agriculture, learn to build lodgings, invent writing, harness electricity to power emerging technologies, observe distant galaxies, design drugs to combat various diseases, develop new materials and decipher the genetic code embedded in all the plants and animals on Earth.

    The quest to maintain this increasing momentum compels Weizmann Institute scientists to seek out places that have not yet been reached by the human mind. What awaits us in these places? No one has the answer to this question. But one thing is certain – the journey fired by curiosity will lead onward to a better future.

  • richardmitnick 4:00 pm on May 8, 2016 Permalink | Reply
    Tags: , Computing, ,   

    From INVERSE: “What Will Replace Moore’s Law as Technology Advances Beyond the Microchip?” 



    May 5, 2016
    Adam Toobin

    The mathematics of Moore’s Law has long baffled observers, even as it underlies much of the technological revolution that has transformed the world over the past 50 years. But as chips get smaller, there is renewed speculation that the law is being squeezed out.

    In 1965, Intel cofounder Dr. Gordon Moore observed that the number of transistors on a single microchip doubled roughly every two years. The trend has stuck ever since: computers the size of entire rooms now rest in the palm of your hand, at a fraction of the cost.
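    The compounding implied by that two-year cadence can be made concrete with a short sketch. The starting count and timescale below are illustrative, not figures from the article:

```python
# Moore's observation as arithmetic: transistor counts double every two years.
def transistors(start_count, years, doubling_period=2):
    """Project a transistor count forward under a fixed doubling period."""
    return start_count * 2 ** (years / doubling_period)

# Fifty years of two-year doublings is 2**25 -- a factor of over 33 million,
# which is how room-sized machines shrank to pocket-sized ones.
print(f"Growth over 50 years: {transistors(1, 50):,.0f}x")
```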

    But with the under-girding technology approaching the size of a single atom, many fear the heyday of the digital revolution is coming to a close, forcing technologists around the world to rethink their business strategies and their notions of computing altogether.

    We have faced the end of Moore’s Law before — in fact, Brian Krzanich, Intel’s chief executive, jokes he has seen the doomsday prediction made no less than four times in his life. But what makes the coming barrier different is that whether we have another five or even ten years of boosting the silicon semiconductors that constitute the core of modern computing, we are going to hit a physical wall sooner rather than later.


    Transistor counts for integrated circuits plotted against their dates of introduction. The curve shows Moore’s law – the doubling of transistor counts every two years. The y-axis is logarithmic, so the line corresponds to exponential growth.

    If Moore’s Law is to survive, it would require a radical innovation, rather than the predictable progress that has sustained chip makers over recent decades.

    And most technology companies in the world are beginning to acknowledge the changing forecast for digital hardware. The semiconductor industry associations of the United States, Europe, Japan, South Korea, and Taiwan will issue only one more report forecasting chip technology growth. Intel’s CEO casts these gloomy predictions as premature and has refused to participate in the final report. Krzanich insists Intel has the technical capabilities to keep improving chips while keeping costs low for manufacturers, though few in the industry believe the faltering company will maintain its quixotic course for long.

    Access mp4 video here.

    The rest of the industry is casting forth to new opportunities. New technologies like graphene (an atomic-scale honeycomb-like web of carbon atoms) and quantum computing offer a unique way out of the physical limitations imposed by silicon semiconductors. Graphene has recently enthralled chipmakers with its affordable carbon base and a configuration that makes it an ideal candidate for faster, though still largely conventional, digital processing.

    The ideal crystalline structure of graphene is a hexagonal grid.

    “As you look at Intel saying the PC industry is slowing and seeing the first signs of slowing in mobile computing, people are starting to look for new places to put semiconductors,” David Kanter, a semiconductor industry analyst at Real World Technologies in San Francisco, told The New York Times.

    Quantum computing, on the other hand, would tap the ambiguity inherent in the universe to change computing forever. The prospect has long intrigued tech companies, and the recent debut of some radical early-stage designs has reignited the fervor of quantum’s advocates.

    This image appeared in an IBM promotion that read: “IBM unlocks quantum computing capabilities, lifts limits of innovation.”

    For many years, the end of Moore’s Law was viewed as a kind of apocalypse scenario for the technology industry: What would we do when there was no more room on the chip? Much of what has been forecast about the future of the digital world has been predicated on the notion that we will continue to make the incredible improvements of the past half century.

    It’s perhaps a good sign that technology companies are soberly looking to the future and getting excited about new, promising developments that may yet yield entirely new frontiers.

    Photos via Wgsimon [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0) or GFDL (http://www.gnu.org/copyleft/fdl.html)], via Wikimedia Commons, AlexanderAlUS (Own work) [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0) or GFDL (http://www.gnu.org/copyleft/fdl.html)], via Wikimedia Commons, IBM, Jamie Baxter

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

  • richardmitnick 9:00 am on March 25, 2016 Permalink | Reply
    Tags: , Computing, ,   

    From MIT Tech Review: “Intel Puts the Brakes on Moore’s Law” 

    MIT Technology Review

    Tom Simonite


    Chip maker Intel has signaled a slowing of Moore’s Law, a technological phenomenon that has played a role in just about every major advance in engineering and technology for decades.


    Since the 1970s, Intel has released chips that fit twice as many transistors into the same space roughly every two years, aiming to follow an exponential curve named after Gordon Moore, one of the company’s cofounders. That continual shrinking has helped make computers more powerful, compact, and energy-efficient. It has helped bring us smartphones, powerful Internet services, and breakthroughs in fields such as artificial intelligence and genetics. And Moore’s Law has become shorthand for the idea that anything involving computing gets more capable over time.

    But Intel disclosed in a regulatory filing last month that it is slowing the pace with which it launches new chip-making technology. The gap between successive generations of chips with new, smaller transistors will widen. With the transistors in Intel’s latest chips already as small as 14 nanometers, it is becoming more difficult to shrink them further in a way that’s cost-effective for production.

    Intel’s strategy shift is not a complete surprise. It already pushed back the debut of its first chips with 10-nanometer transistors from the end of this year to sometime in 2017. But it is notable that the company has now admitted that wasn’t a one-off, and that it can’t keep up the pace it used to. That means Moore’s Law will slow down, too.

    That doesn’t necessarily mean that our devices are about to stop improving, or that ideas such as driverless cars will stall from lack of processing power. Intel says it will deliver extra performance upgrades between generations of transistor technology by making improvements to the way chips are designed. And the company’s chips are essentially irrelevant to mobile devices, a market dominated by competitors that are generally a few years behind in terms of shrinking transistors and adopting new manufacturing technologies. It is also arguable that for many important new use cases for computing, such as wearable devices or medical implants, chips are already powerful enough and power consumption is more important.

    But raw computing power still matters. Putting more of it behind machine-learning algorithms has been crucial to recent breakthroughs in artificial intelligence, for example. And Intel is likely to have to deliver more bad news about the future of chips and Moore’s Law before too long.

    The company’s chief of manufacturing said in February that Intel needs to switch away from silicon transistors in about four years. “The new technology will be fundamentally different,” he said, before admitting that Intel doesn’t yet have a successor lined up. There are two leading candidates—technologies known as spintronics and tunneling transistors—but they may not offer big increases in computing power. And both are far from being ready for use in making processors in large volumes.

    [If one examines the details of many supercomputers, one sees that graphics processing units (GPUs) are becoming much more important than central processing units (CPUs), which are based upon the transistor developments ruled by Moore’s Law.]

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

  • richardmitnick 11:13 am on February 13, 2016 Permalink | Reply
    Tags: , , Computing,   

    From Nature: “The chips are down for Moore’s law” 

    Nature Mag

    09 February 2016
    M. Mitchell Waldrop


    Next month, the worldwide semiconductor industry will formally acknowledge what has become increasingly obvious to everyone involved: Moore’s law, the principle that has powered the information-technology revolution since the 1960s, is nearing its end.

    A rule of thumb that has come to dominate computing, Moore’s law states that the number of transistors on a microprocessor chip will double every two years or so — which has generally meant that the chip’s performance will, too. The exponential improvement that the law describes transformed the first crude home computers of the 1970s into the sophisticated machines of the 1980s and 1990s, and from there gave rise to high-speed Internet, smartphones and the wired-up cars, refrigerators and thermostats that are becoming prevalent today.

    None of this was inevitable: chipmakers deliberately chose to stay on the Moore’s law track. At every stage, software developers came up with applications that strained the capabilities of existing chips; consumers asked more of their devices; and manufacturers rushed to meet that demand with next-generation chips. Since the 1990s, in fact, the semiconductor industry has released a research road map every two years to coordinate what its hundreds of manufacturers and suppliers are doing to stay in step with the law — a strategy sometimes called More Moore. It has been largely thanks to this road map that computers have followed the law’s exponential demands.

    Not for much longer. The doubling has already started to falter, thanks to the heat that is unavoidably generated when more and more silicon circuitry is jammed into the same small area. And some even more fundamental limits loom less than a decade away. Top-of-the-line microprocessors currently have circuit features that are around 14 nanometres across, smaller than most viruses. But by the early 2020s, says Paolo Gargini, chair of the road-mapping organization, “even with super-aggressive efforts, we’ll get to the 2–3-nanometre limit, where features are just 10 atoms across. Is that a device at all?” Probably not — if only because at that scale, electron behaviour will be governed by quantum uncertainties that will make transistors hopelessly unreliable. And despite vigorous research efforts, there is no obvious successor to today’s silicon technology.
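    Gargini’s “10 atoms” figure can be sanity-checked with a back-of-envelope calculation. The silicon–silicon bond length of roughly 0.235 nm used below is an assumption for illustration, not a number from the article:

```python
# Rough check of the "10 atoms across" claim for 2-3 nm features.
# Assumption: adjacent silicon atoms sit about 0.235 nm apart (the Si-Si
# bond length); real device geometry is more complicated than this.
si_atom_spacing_nm = 0.235

for feature_nm in (2, 3):
    atoms_across = feature_nm / si_atom_spacing_nm
    print(f"A {feature_nm} nm feature spans roughly {atoms_across:.0f} atoms")
```

The two values, about 9 and 13 atoms, bracket Gargini’s round figure of ten.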

    The industry road map released next month will for the first time lay out a research and development plan that is not centred on Moore’s law. Instead, it will follow what might be called the More than Moore strategy: rather than making the chips better and letting the applications follow, it will start with applications — from smartphones and supercomputers to data centres in the cloud — and work downwards to see what chips are needed to support them. Among those chips will be new generations of sensors, power-management circuits and other silicon devices required by a world in which computing is increasingly mobile.

    The changing landscape, in turn, could splinter the industry’s long tradition of unity in pursuit of Moore’s law. “Everybody is struggling with what the road map actually means,” says Daniel Reed, a computer scientist and vice-president for research at the University of Iowa in Iowa City. The Semiconductor Industry Association (SIA) in Washington DC, which represents all the major US firms, has already said that it will cease its participation in the road-mapping effort once the report is out, and will instead pursue its own research and development agenda.

    Everyone agrees that the twilight of Moore’s law will not mean the end of progress. “Think about what happened to airplanes,” says Reed. “A Boeing 787 doesn’t go any faster than a 707 did in the 1950s — but they are very different airplanes”, with innovations ranging from fully electronic controls to a carbon-fibre fuselage. That’s what will happen with computers, he says: “Innovation will absolutely continue — but it will be more nuanced and complicated.”

    Laying down the law

    The 1965 essay (1) that would make Gordon Moore famous started with a meditation on what could be done with the still-new technology of integrated circuits. Moore, who was then research director of Fairchild Semiconductor in San Jose, California, predicted wonders such as home computers, digital wristwatches, automatic cars and “personal portable communications equipment” — mobile phones. But the heart of the essay was Moore’s attempt to provide a timeline for this future. As a measure of a microprocessor’s computational power, he looked at transistors, the on–off switches that make computing digital. On the basis of achievements by his company and others in the previous few years, he estimated that the number of transistors and other electronic components per chip was doubling every year.

    Moore, who would later co-found Intel in Santa Clara, California, underestimated the doubling time; in 1975, he revised it to a more realistic two years (2). But his vision was spot on. The future that he predicted started to arrive in the 1970s and 1980s, with the advent of microprocessor-equipped consumer products such as the Hewlett-Packard hand calculators, the Apple II computer and the IBM PC. Demand for such products was soon exploding, and manufacturers were engaging in a brisk competition to offer more and more capable chips in smaller and smaller packages (see ‘Moore’s lore’).

    This was expensive. Improving a microprocessor’s performance meant scaling down the elements of its circuit so that more of them could be packed together on the chip, and electrons could move between them more quickly. Scaling, in turn, required major refinements in photolithography, the basic technology for etching those microscopic elements onto a silicon surface. But the boom times were such that this hardly mattered: a self-reinforcing cycle set in. Chips were so versatile that manufacturers could make only a few types — processors and memory, mostly — and sell them in huge quantities. That gave them enough cash to cover the cost of upgrading their fabrication facilities, or ‘fabs’, and still drop the prices, thereby fuelling demand even further.

    Soon, however, it became clear that this market-driven cycle could not sustain the relentless cadence of Moore’s law by itself. The chip-making process was getting too complex, often involving hundreds of stages, which meant that taking the next step down in scale required a network of materials-suppliers and apparatus-makers to deliver the right upgrades at the right time. “If you need 40 kinds of equipment and only 39 are ready, then everything stops,” says Kenneth Flamm, an economist who studies the computer industry at the University of Texas at Austin.

    To provide that coordination, the industry devised its first road map. The idea, says Gargini, was “that everyone would have a rough estimate of where they were going, and they could raise an alarm if they saw roadblocks ahead”. The US semiconductor industry launched the mapping effort in 1991, with hundreds of engineers from various companies working on the first report and its subsequent iterations, and Gargini, then the director of technology strategy at Intel, as its chair. In 1998, the effort became the International Technology Roadmap for Semiconductors, with participation from industry associations in Europe, Japan, Taiwan and South Korea. (This year’s report, in keeping with its new approach, will be called the International Roadmap for Devices and Systems.)

    “The road map was an incredibly interesting experiment,” says Flamm. “So far as I know, there is no example of anything like this in any other industry, where every manufacturer and supplier gets together and figures out what they are going to do.” In effect, it converted Moore’s law from an empirical observation into a self-fulfilling prophecy: new chips followed the law because the industry made sure that they did.

    And it all worked beautifully, says Flamm — right up until it didn’t.

    Heat death

    The first stumbling block was not unexpected. Gargini and others had warned about it as far back as 1989. But it hit hard nonetheless: things got too small.

    “It used to be that whenever we would scale to smaller feature size, good things happened automatically,” says Bill Bottoms, president of Third Millennium Test Solutions, an equipment manufacturer in Santa Clara. “The chips would go faster and consume less power.”

    But in the early 2000s, when the features began to shrink below about 90 nanometres, that automatic benefit began to fail. As electrons had to move faster and faster through silicon circuits that were smaller and smaller, the chips began to get too hot.

    That was a fundamental problem. Heat is hard to get rid of, and no one wants to buy a mobile phone that burns their hand. So manufacturers seized on the only solutions they had, says Gargini. First, they stopped trying to increase ‘clock rates’ — how fast microprocessors execute instructions. This effectively put a speed limit on the chip’s electrons and limited their ability to generate heat. The maximum clock rate hasn’t budged since 2004.

    Second, to keep the chips moving along the Moore’s law performance curve despite the speed limit, they redesigned the internal circuitry so that each chip contained not one processor, or ‘core’, but two, four or more. (Four and eight are common in today’s desktop computers and smartphones.) In principle, says Gargini, “you can have the same output with four cores going at 250 megahertz as one going at 1 gigahertz”. In practice, exploiting eight processors means that a problem has to be broken down into eight pieces — which for many algorithms is difficult to impossible. “The piece that can’t be parallelized will limit your improvement,” says Gargini.
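    Gargini’s caveat about the piece that cannot be parallelized is Amdahl’s law. A minimal sketch, with illustrative workload fractions rather than figures from the article:

```python
# Amdahl's law: overall speedup from n cores when a fraction p of the
# work can run in parallel. The serial remainder (1 - p) caps the gain.
def amdahl_speedup(p, n_cores):
    return 1.0 / ((1.0 - p) + p / n_cores)

# A perfectly parallel workload scales with the core count...
print(amdahl_speedup(1.0, 8))     # 8x
# ...but even 5% serial work drags 8 cores well below 8x,
print(amdahl_speedup(0.95, 8))    # ~5.9x
# and no number of cores can beat the 1/(1 - p) ceiling of 20x.
print(amdahl_speedup(0.95, 10**6))
```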

    Even so, when combined with creative redesigns to compensate for electron leakage and other effects, these two solutions have enabled chip manufacturers to continue shrinking their circuits and keeping their transistor counts on track with Moore’s law. The question now is what will happen in the early 2020s, when continued scaling is no longer possible with silicon because quantum effects have come into play. What comes next? “We’re still struggling,” says An Chen, an electrical engineer who works for the international chipmaker GlobalFoundries in Santa Clara, California, and who chairs a committee of the new road map that is looking into the question.

    That is not for a lack of ideas. One possibility is to embrace a completely new paradigm — something like quantum computing, which promises exponential speed-up for certain calculations, or neuromorphic computing, which aims to model processing elements on neurons in the brain. But none of these alternative paradigms has made it very far out of the laboratory. And many researchers think that quantum computing will offer advantages only for niche applications, rather than for the everyday tasks at which digital computing excels. “What does it mean to quantum-balance a chequebook?” wonders John Shalf, head of computer-science research at the Lawrence Berkeley National Laboratory in Berkeley, California.

    Material differences

    A different approach, which does stay in the digital realm, is the quest to find a ‘millivolt switch’: a material that could be used for devices at least as fast as their silicon counterparts, but that would generate much less heat. There are many candidates, ranging from 2D graphene-like compounds to spintronic materials that would compute by flipping electron spins rather than by moving electrons. “There is an enormous research space to be explored once you step outside the confines of the established technology,” says Thomas Theis, a physicist who directs the nanoelectronics initiative at the Semiconductor Research Corporation (SRC), a research-funding consortium in Durham, North Carolina.

    Unfortunately, no millivolt switch has made it out of the laboratory either. That leaves the architectural approach: stick with silicon, but configure it in entirely new ways. One popular option is to go 3D. Instead of etching flat circuits onto the surface of a silicon wafer, build skyscrapers: stack many thin layers of silicon with microcircuitry etched into each. In principle, this should make it possible to pack more computational power into the same space. In practice, however, this currently works only with memory chips, which do not have a heat problem: they use circuits that consume power only when a memory cell is accessed, which is not that often. One example is the Hybrid Memory Cube design, a stack of as many as eight memory layers that is being pursued by an industry consortium originally launched by Samsung and memory-maker Micron Technology in Boise, Idaho.

    Microprocessors are more challenging: stacking layer after layer of hot things simply makes them hotter. But one way to get around that problem is to do away with separate memory and microprocessing chips, as well as the prodigious amount of heat — at least 50% of the total — that is now generated in shuttling data back and forth between the two. Instead, integrate them in the same nanoscale high-rise.

    This is tricky, not least because current-generation microprocessors and memory chips are so different that they cannot be made on the same fab line; stacking them requires a complete redesign of the chip’s structure. But several research groups are hoping to pull it off. Electrical engineer Subhasish Mitra and his colleagues at Stanford University in California have developed a hybrid architecture that stacks memory units together with transistors made from carbon nanotubes, which also carry current from layer to layer (3). The group thinks that its architecture could reduce energy use to less than one-thousandth that of standard chips.

    Going mobile

    The second stumbling block for Moore’s law was more of a surprise, but unfolded at roughly the same time as the first: computing went mobile.

    Twenty-five years ago, computing was defined by the needs of desktop and laptop machines; supercomputers and data centres used essentially the same microprocessors, just packed together in much greater numbers. Not any more. Today, computing is increasingly defined by what high-end smartphones and tablets do — not to mention by smart watches and other wearables, as well as by the exploding number of smart devices in everything from bridges to the human body. And these mobile devices have priorities very different from those of their more sedentary cousins.

    Keeping abreast of Moore’s law is fairly far down on the list — if only because mobile applications and data have largely migrated to the worldwide network of server farms known as the cloud. Those server farms now dominate the market for powerful, cutting-edge microprocessors that do follow Moore’s law. “What Google and Amazon decide to buy has a huge influence on what Intel decides to do,” says Reed.

    Much more crucial for mobiles is the ability to survive for long periods on battery power while interacting with their surroundings and users. The chips in a typical smartphone must send and receive signals for voice calls, Wi-Fi, Bluetooth and the Global Positioning System, while also sensing touch, proximity, acceleration, magnetic fields — even fingerprints. On top of that, the device must host special-purpose circuits for power management, to keep all those functions from draining the battery.

    The problem for chipmakers is that this specialization is undermining the self-reinforcing economic cycle that once kept Moore’s law humming. “The old market was that you would make a few different things, but sell a whole lot of them,” says Reed. “The new market is that you have to make a lot of things, but sell a few hundred thousand apiece — so it had better be really cheap to design and fab them.”

    Both are ongoing challenges. Getting separately manufactured technologies to work together harmoniously in a single device is often a nightmare, says Bottoms, who heads the new road map’s committee on the subject. “Different components, different materials, electronics, photonics and so on, all in the same package — these are issues that will have to be solved by new architectures, new simulations, new switches and more.”

    For many of the special-purpose circuits, design is still something of a cottage industry — which means slow and costly. At the University of California, Berkeley, electrical engineer Alberto Sangiovanni-Vincentelli and his colleagues are trying to change that: instead of starting from scratch each time, they think that people should create new devices by combining large chunks of existing circuitry that have known functionality (4). “It’s like using Lego blocks,” says Sangiovanni-Vincentelli. It’s a challenge to make sure that the blocks work together, but “if you were to use older methods of design, costs would be prohibitive”.

    Costs, not surprisingly, are very much on the chipmakers’ minds these days. “The end of Moore’s law is not a technical issue, it is an economic issue,” says Bottoms. Some companies, notably Intel, are still trying to shrink components before they hit the wall imposed by quantum effects, he says. But “the more we shrink, the more it costs”.

    Every time the scale is halved, manufacturers need a whole new generation of ever more precise photolithography machines. Building a new fab line today requires an investment typically measured in many billions of dollars — something only a handful of companies can afford. And the fragmentation of the market triggered by mobile devices is making it harder to recoup that money. “As soon as the cost per transistor at the next node exceeds the existing cost,” says Bottoms, “the scaling stops.”
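Bottoms's stopping condition is easy to make concrete. The sketch below uses purely illustrative numbers (the starting wafer cost, transistor count and growth rates are assumptions, not industry data): each node shrink doubles transistor density, but if wafer-processing cost grows faster than density, the cost per transistor rises and, by his argument, scaling stops paying.

```python
def cost_per_transistor(wafer_cost, transistors_per_wafer):
    return wafer_cost / transistors_per_wafer

transistors = 1e9    # transistors per wafer at the starting node (made-up)
wafer_cost = 5000.0  # dollars per processed wafer (made-up)

costs = []
for node in range(5):
    cpt = cost_per_transistor(wafer_cost, transistors)
    costs.append(cpt)
    print(f"node {node}: ${cpt:.2e} per transistor")
    transistors *= 2.0   # a full shrink doubles density...
    wafer_cost *= 2.3    # ...but suppose fab cost grows faster than that

# Because cost growth (2.3x) outpaces the density gain (2x), the cost per
# transistor rises 15% per node: the next node is never cheaper, so the
# economic incentive to shrink disappears.
```

Reversing the two growth rates (density gains outpacing cost growth) reproduces the decades in which Moore's law held: each node made transistors cheaper, funding the next round of shrinking.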

    Many observers think that the industry is perilously close to that point already. “My bet is that we run out of money before we run out of physics,” says Reed.

    Certainly it is true that rising costs over the past decade have forced a massive consolidation in the chip-making industry. Most of the world’s production lines now belong to a comparative handful of multinationals such as Intel, Samsung and the Taiwan Semiconductor Manufacturing Company in Hsinchu. These manufacturing giants have tight relationships with the companies that supply them with materials and fabrication equipment; they are already coordinating, and no longer find the road-map process all that useful. “The chip manufacturer’s buy-in is definitely less than before,” says Chen.

    Take the SRC, which functions as the US industry’s research agency: it was a long-time supporter of the road map, says SRC vice-president Steven Hillenius. “But about three years ago, the SRC contributions went away because the member companies didn’t see the value in it.” The SRC, along with the SIA, wants to push a more long-term, basic research agenda and secure federal funding for it — possibly through the White House’s National Strategic Computing Initiative, launched in July last year.

    That agenda, laid out in a report (5) last September, sketches out the research challenges ahead. Energy efficiency is an urgent priority — especially for the embedded smart sensors that comprise the ‘Internet of things’, which will need new technology to survive without batteries, using energy scavenged from ambient heat and vibration. Connectivity is equally key: billions of free-roaming devices trying to communicate with one another and the cloud will need huge amounts of bandwidth, which they can get if researchers can tap the once-unreachable terahertz band lying deep in the infrared spectrum. And security is crucial — the report calls for research into new ways to build in safeguards against cyberattack and data theft.

    These priorities and others will give researchers plenty to work on in coming years. At least some industry insiders, including Shekhar Borkar, head of Intel’s advanced microprocessor research, are optimists. Yes, he says, Moore’s law is coming to an end in a literal sense, because the exponential growth in transistor count cannot continue. But from the consumer perspective, “Moore’s law simply states that user value doubles every two years”. And in that form, the law will continue as long as the industry can keep stuffing its devices with new functionality.

    The ideas are out there, says Borkar. “Our job is to engineer them.”

    Nature 530, 144–147 (11 February 2016) doi:10.1038/530144a

    1. Moore, G. E. Electronics 38, 114–117 (1965).

    2. Moore, G. E. IEDM Tech. Digest 11–13 (1975).

    3. Sabry Aly, M. M. et al. Computer 48(12), 24–33 (2015).

    4. Nikolic, B. 41st Eur. Solid-State Circuits Conf. (2015); available at http://go.nature.com/wwljk7

    5. Rebooting the IT Revolution: A Call to Action (SIA/SRC, 2015); available at http://go.nature.com/urvkhw

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    Nature is a weekly international journal publishing the finest peer-reviewed research in all fields of science and technology on the basis of its originality, importance, interdisciplinary interest, timeliness, accessibility, elegance and surprising conclusions. Nature also provides rapid, authoritative, insightful and arresting news and interpretation of topical and coming trends affecting science, scientists and the wider public.
