Tagged: Computing

  • richardmitnick 9:32 am on July 7, 2017
    Tags: Computing, St. Jude Children’s Research Hospital

    From ORNL: “ORNL researchers apply imaging, computational expertise to St. Jude research” 


    Oak Ridge National Laboratory

    July 6, 2017
    Stephanie G. Seay
    seaysg@ornl.gov
    865.576.9894

    Left to right: ORNL’s Derek Rose, Matthew Eicholtz, Philip Bingham, Ryan Kerekes, and Shaun Gleason.

    Measuring migrating neurons in a developing mouse brain.

    Identifying and analyzing neurons in a mouse auditory cortex.
    No image credits for above images

    In the quest to better understand and cure childhood diseases, scientists at St. Jude Children’s Research Hospital accumulate enormous amounts of data from powerful video microscopes. To help St. Jude scientists mine that trove of data, researchers at Oak Ridge National Laboratory have created custom algorithms that can provide a deeper understanding of the images and quicken the pace of research.

    The work resides in St. Jude’s Department of Developmental Neurobiology in Memphis, Tennessee, where scientists use advanced microscopy to capture the details of phenomena such as nerve cell growth and migration in the brains of mice. ORNL researchers take those videos and leverage their expertise in image processing, computational science, and machine learning to analyze the footage and create statistics.

    A recent Science article details St. Jude research on brain plasticity, or the ability of the brain to change and form new connections between neurons. In this work, ORNL helped track mice brain cell electrical activity in the auditory cortex when the animals were exposed to certain tones.

    ORNL researchers created an algorithm to measure electrical activations, or signals, across groups of neurons, collecting statistics and making correlations between cell activity in the auditory cortex and the tones heard by the mice. Because the footage was captured while the mice were awake and moving, the team first had to stabilize the video to ensure a proper analysis, said Derek Rose, who now leads the work at ORNL.
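
    For readers curious what such an analysis might look like, here is a minimal, hypothetical Python sketch (not ORNL’s actual code): given per-neuron activity traces extracted from the stabilized video and the frames at which tones were played, it measures each neuron’s average response in a short window after each tone. All names, frame rates, and numbers below are illustrative stand-ins.

    ```python
    # A minimal, hypothetical sketch (not ORNL's code): given per-neuron activity
    # traces from the stabilized video and the frames at which tones were played,
    # compute each neuron's average response in a short window after each tone.
    import numpy as np

    rng = np.random.default_rng(0)
    fps = 30                                     # assumed imaging frame rate
    n_neurons, n_frames = 50, 30 * fps           # 50 neurons, 30 s of footage
    activity = rng.normal(0.0, 0.1, size=(n_neurons, n_frames))   # stand-in traces
    tone_onsets = np.array([5, 12, 19, 26]) * fps                 # frames where tones occur

    def tone_evoked_response(traces, onsets, window=15):
        """Mean activity per neuron over `window` frames following each tone onset."""
        per_tone = np.stack([traces[:, t:t + window].mean(axis=1) for t in onsets])
        return per_tone.mean(axis=0)             # average across tone presentations

    print(tone_evoked_response(activity, tone_onsets).shape)      # -> (50,)
    ```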

    See the full article here .

    Please help promote STEM in your local schools.


    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


     
  • richardmitnick 3:39 pm on May 14, 2017
    Tags: 22-Year-Old Researcher Accidentally Stops Global Cyberattack, Computing, Massive cyberattack thwarted

    From Inverse: “22-Year-Old Researcher Accidentally Stops Global Cyberattack” 

    INVERSE

    May 13, 2017
    Grace Lisa Scott

    And then he blogged about how he did it.

    On Friday, a massive cyberattack spread across 74 countries, infiltrating global companies like FedEx and Nissan, telecommunication networks, and most notably the UK’s National Health Service. It left the NHS temporarily crippled, with test results and patient records becoming unavailable and phones not working.

    The ransomware attack employed malware called WannaCrypt that encrypts a user’s data and then demands a payment — in this instance $300 worth of bitcoin — to retrieve and unlock said data. The malware is spread through email and exploits a vulnerability in Windows. Microsoft did release a patch that fixes the vulnerability back in March, but any computer without the update would have remained vulnerable.

    The attack was suddenly halted early Friday afternoon (Eastern Standard Time) thanks to a 22-year-old cybersecurity researcher from southwest England. Going by the pseudonym MalwareTech on Twitter, the researcher claimed he accidentally activated the software’s “kill switch” by registering a complicated domain name hidden in the malware.

    After getting home from lunch with a friend and realizing the true severity of the cyberattack, the cybersecurity expert started looking for a weakness within the malware with the help of a few fellow researchers. On Saturday, he detailed how he managed to stop the malware’s spread in a blog post endearingly titled “How to Accidentally Stop a Global Cyber Attacks”.

    “You’ve probably read about the WannaCrypt fiasco on several news sites, but I figured I’d tell my story,” he says.

    MalwareTech had registered the domain as a way to track the spread. “My job is to look for ways we can track and potentially stop botnets (and other kinds of malware), so I’m always on the lookout to pick up unregistered malware control server (C2) domains. In fact I registered several thousand of such domains in the past year,” he says.

    By registering the domain and setting up a sinkhole server, he planned to track the spread of WannaCrypt.
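
    Conceptually, a sinkhole is just a server that answers requests sent to a domain the malware expects to be unregistered, and logs who is asking. The Python snippet below is a minimal, hypothetical sketch of that idea (not MalwareTech’s actual infrastructure; the port and responses are arbitrary choices).

    ```python
    # A minimal, hypothetical sketch of a sinkhole (not MalwareTech's actual setup):
    # answer HTTP requests aimed at the registered kill-switch domain and log each
    # source IP, so every infected machine that phones home reveals itself.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from datetime import datetime, timezone

    class SinkholeHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # One GET to the kill-switch domain is one beacon from one infection.
            stamp = datetime.now(timezone.utc).isoformat()
            print(f"{stamp}  beacon from {self.client_address[0]}  {self.path}")
            self.send_response(200)   # a successful response is what tripped the kill switch
            self.end_headers()
            self.wfile.write(b"sinkholed")

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), SinkholeHandler).serve_forever()
    ```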

    Fortunately, tracking turned out to be unnecessary: just by registering the domain, MalwareTech had engaged what was possibly an obscure but intentional kill switch for the ransomware. A peer pointed MalwareTech to a tweet by fellow researcher Darien Huss, who had just announced the discovery.

    The move gave companies and institutions time to patch their systems to avoid infection before the attackers could change the code and get the ransomware going again.

    In an interview with The Guardian Saturday, MalwareTech warned that the attack was probably not over. “The attackers will realize how we stopped it, they’ll change the code and then they’ll start again. Enable windows update, update and then reboot.”

    As for MalwareTech himself, he says he prefers to remain anonymous. “…It just doesn’t make sense to give out my personal information, obviously we’re working against bad guys and they’re not going to be happy about this,” he told the Guardian.

    To get into the nitty-gritty of just why MalwareTech’s sinkhole managed to stop the international ransomware, you can read his full blog post here.

    See the full article here .

    Please help promote STEM in your local schools.


    Stem Education Coalition

     
  • richardmitnick 1:31 pm on April 4, 2017
    Tags: Computing, Tim Berners-Lee wins $1 million Turing Award

    From MIT: “Tim Berners-Lee wins $1 million Turing Award” 

    MIT News

    April 4, 2017
    Adam Conner-Simons

    Tim Berners-Lee was honored with the Turing Award for his work inventing the World Wide Web, the first web browser, and “the fundamental protocols and algorithms [that allowed] the web to scale.” Photo: Henry Thomas

    CSAIL researcher honored for inventing the web and developing the protocols that spurred its global use.

    MIT Professor Tim Berners-Lee, the researcher who invented the World Wide Web and is one of the world’s most influential voices for online privacy and government transparency, has won the most prestigious honor in computer science, the Association for Computing Machinery (ACM) A.M. Turing Award. Often referred to as “the Nobel Prize of computing,” the award comes with a $1 million prize provided by Google.

    In its announcement today, ACM cited Berners-Lee for “inventing the World Wide Web, the first web browser, and the fundamental protocols and algorithms allowing the web to scale.” This year marks the 50th anniversary of the award.

    A principal investigator at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) with a joint appointment in the Department of Electrical Engineering and Computer Science, Berners-Lee conceived of the web in 1989 at the European Organization for Nuclear Research (CERN) as a way to allow scientists around the world to share information with each other on the internet. He introduced a naming scheme (URIs), a communications protocol (HTTP), and a language for creating webpages (HTML). His open-source approach to coding the first browser and server is often credited with helping catalyze the web’s rapid growth.
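
    Those three inventions still map onto a few lines of code today. The snippet below is an illustration only (the URL is just an example): the URI names a resource, HTTP transfers it, and HTML is the document that comes back.

    ```python
    # Illustration only: the URI names a resource, HTTP transfers it, and HTML is
    # the document that comes back. (The URL here is just an example.)
    from urllib.request import urlopen

    uri = "https://www.w3.org/"                    # URI: a global name for the resource
    with urlopen(uri) as response:                 # HTTP: the protocol that fetches it
        html = response.read().decode("utf-8")     # HTML: the markup of the page itself
    print(html[:80])                               # e.g. '<!DOCTYPE html>...'
    ```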

    “I’m humbled to receive the namesake award of a computing pioneer who showed that what a programmer could do with a computer is limited only by the programmer themselves,” says Berners-Lee, the 3COM Founders Professor of Engineering at MIT. “It is an honor to receive an award like the Turing that has been bestowed to some of the most brilliant minds in the world.”

    Berners-Lee is founder and director of the World Wide Web Consortium (W3C), which sets technical standards for web development, as well as the World Wide Web Foundation, which aims to establish the open web as a public good and a basic right. He also holds a professorship at Oxford University.

    As director of CSAIL’s Decentralized Information Group, Berners-Lee has developed data systems and privacy-minded protocols such as “HTTP with Accountability” (HTTPA), which monitors the transmission of private data and enables people to examine how their information is being used. He also leads Solid (“social linked data”), a project to re-decentralize the web that allows people to control their own data and make it available only to desired applications.

    “Tim Berners-Lee’s career — as brilliant and bold as they come — exemplifies MIT’s passion for using technology to make a better world,” says MIT President L. Rafael Reif. “Today we celebrate the transcendent impact Tim has had on all of our lives, and congratulate him on this wonderful and richly deserved award.”

    While Berners-Lee was initially drawn to programming through his interest in math, there was also a familial connection: His parents met while working on the Ferranti Mark 1, the world’s first commercial general-purpose computer. Years later, he wrote a program called Enquire to track connections between different ideas and projects, indirectly inspiring what later became the web.

    “Tim’s innovative and visionary work has transformed virtually every aspect of our lives, from communications and entertainment to shopping and business,” says CSAIL Director Daniela Rus. “His work has had a profound impact on people across the world, and all of us at CSAIL are so very proud of him for being recognized with the highest honor in computer science.”

    Berners-Lee has received multiple accolades for his technical contributions, from being knighted by Queen Elizabeth to being named one of TIME magazine’s “100 Most Important People of the 20th Century.” He will formally receive the Turing Award during the ACM’s annual banquet June 24 in San Francisco.

    Past Turing Award recipients who have taught at MIT include Michael Stonebraker (2014), Shafi Goldwasser and Silvio Micali (2013), Barbara Liskov (2008), Ronald Rivest (2002), Butler Lampson (1992), Fernando Corbato (1990), John McCarthy (1971) and Marvin Minsky (1969).

    See the full article here .

    Please help promote STEM in your local schools.


    Stem Education Coalition


    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.


     
  • richardmitnick 2:48 pm on January 30, 2017
    Tags: Computing, Dr. Miriam Eisenstein, GSK-3, Modeling of molecules on the computer

    From Weizmann: Women in STEM – “Staff Scientist: Dr. Miriam Eisenstein” 

    Weizmann Institute of Science

    30.01.2017
    No writer credit found

    Name: Dr. Miriam Eisenstein
    Department: Chemical Research Support

    “The modeling of molecules on the computer,” says Dr. Miriam Eisenstein, Head of the Macromolecular Modeling Unit of the Weizmann Institute of Science’s Chemical Research Support Department, “is sometimes the only way to understand exactly how such complex molecules as proteins interact.”

    Eisenstein was one of the first to develop molecular docking methods while working with Prof. Ephraim Katzir – over two decades ago – and she has worked in collaboration with many groups at the Weizmann Institute.

    But even with all her experience, protein interactions can still surprise her. This was the case in a recent collaboration with the lab group of Prof. Hagit Eldar-Finkelman of Tel Aviv University, in research that was hailed as a promising new direction for finding treatments for Alzheimer’s disease. Eldar-Finkelman and her group were investigating an enzyme known as GSK-3, which affects the activity of various proteins by clipping a particular type of chemical tag, known as a phosphate group, onto them. GSK-3 thus performs quite a few crucial functions in the body, but it can also become overactive, and this extra activity has been implicated in a number of diseases, including diabetes and Alzheimer’s.

    The Tel Aviv group, explains Eisenstein, was exploring a new way of blocking, or at least damping down, the activity of this enzyme. GSK-3 uses ATP — a small, phosphate-containing molecule — in the chemical tagging process, transferring one of the ATP phosphate groups to a substrate. The ATP binding site on the enzyme is often targeted with ATP-like drug compounds that, by binding there themselves, prevent the ATP from binding and thus block the enzyme’s activity. But such compounds are not discriminating enough, often blocking related enzymes in the process, which is an undesired side effect. This is why Eldar-Finkelman and her team looked for molecules that would compete with the substrate and occupy its binding cavity, so that the enzyme’s normal substrates cannot attach to GSK-3 and receive their phosphate tags.

    After identifying one molecule – a short piece of protein, or peptide – that substituted for GSK-3’s substrates in experiments, Eldar-Finkelman turned to Eisenstein to design peptides that would be better at competing with the substrate. At first Eisenstein computed model structures of the enzyme with an attached protein substrate and the enzyme with an attached peptide; she then characterized the way in which the enzyme binds either the substrate or the competing peptide. The model structures pinpointed the contacts, and these were verified experimentally by Eldar-Finkelman.
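
    To make the idea of “pinpointing contacts” concrete, here is a toy Python sketch (not the group’s actual modeling software; the coordinates are random stand-ins) that lists atom pairs from two molecules lying within a typical contact cutoff:

    ```python
    # A toy illustration (not the group's modeling software): given 3D coordinates
    # for atoms of the enzyme and of the peptide, list the atom pairs that fall
    # within a typical contact cutoff. Coordinates here are random stand-ins.
    import numpy as np

    rng = np.random.default_rng(1)
    enzyme_xyz = rng.uniform(0, 50, size=(200, 3))   # stand-in coordinates, in angstroms
    peptide_xyz = rng.uniform(0, 50, size=(12, 3))

    def contacts(a, b, cutoff=4.0):
        """Index pairs (i, j) where atom i of `a` lies within `cutoff` of atom j of `b`."""
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)   # all pairwise distances
        return np.argwhere(d < cutoff)

    print(len(contacts(enzyme_xyz, peptide_xyz)), "enzyme-peptide contacts under 4 angstroms")
    ```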

    This led to the next phase, a collaborative effort to introduce alterations to the peptide so as to improve its binding capabilities. One of the new peptides was predicted by Eisenstein to be a good substrate, and Eldar-Finkelman’s experiments showed that it indeed was. Once chemically tagged, the new peptide proved to be excellent at binding to GSK-3 – many times better than the original – and this was the surprise, because normally, once they are tagged, such substrates are repelled from the substrate-binding cavity and end up dissociating from the enzyme. Molecular modeling explained what was happening. After initially binding as a substrate and attaining a phosphate group, the peptide slid within the substrate-binding cavity, changing its conformation in the process, and attached tightly to a position normally occupied by the protein substrate.

    Experiments in Eldar-Finkelman’s group showed that this peptide is also active in vivo and, moreover, was able to reduce the symptoms of an Alzheimer-like condition in mice. The results of this research appeared in Science Signaling.

    “This experiment is a great example of the synergy between biologists and computer modelers,” says Eisenstein. “Hagit understands the function of this enzyme in the body, and she had this great insight on a possible way to control its actions. I am interested in the way that two proteins fit together and influence one another at the molecular and atomic levels, so I can provide the complementary insight.”

    “Molecular modeling is such a useful tool, it has enabled me to work with a great many groups and take part in a lot of interesting, exciting work, over the years,” she adds. “Computers have become much stronger in that time, but the basic, chemical principles of attraction and binding between complex molecules remain the same, and our work is as relevant as ever.”

    See the full article here .

    Please help promote STEM in your local schools.


    Stem Education Coalition


    The Weizmann Institute of Science is one of the world’s leading multidisciplinary research institutions. Hundreds of scientists, laboratory technicians and research students working on its lushly landscaped campus embark daily on fascinating journeys into the unknown, seeking to improve our understanding of nature and our place within it.

    Guiding these scientists is the spirit of inquiry so characteristic of the human race. It is this spirit that propelled humans upward along the evolutionary ladder, helping them reach their utmost heights. It prompted humankind to pursue agriculture, learn to build lodgings, invent writing, harness electricity to power emerging technologies, observe distant galaxies, design drugs to combat various diseases, develop new materials and decipher the genetic code embedded in all the plants and animals on Earth.

    The quest to maintain this increasing momentum compels Weizmann Institute scientists to seek out places that have not yet been reached by the human mind. What awaits us in these places? No one has the answer to this question. But one thing is certain – the journey fired by curiosity will lead onward to a better future.

     
  • richardmitnick 4:00 pm on May 8, 2016
    Tags: Computing

    From INVERSE: “What Will Replace Moore’s Law as Technology Advances Beyond the Microchip?” 

    INVERSE

    May 5, 2016
    Adam Toobin

    The mathematics of Moore’s Law has long baffled observers, even as it underlies much of the technological revolution that has transformed the world over the past 50 years. But as chips get smaller, there is renewed speculation that the law is being squeezed out.

    In 1965, Intel cofounder Dr. Gordon Moore observed that the number of transistors on a single microchip doubled every two years. The trend has stuck ever since: computers the size of entire rooms now rest in the palm of your hand, at a fraction of the cost.
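
    A quick back-of-the-envelope calculation shows why that observation is so dramatic; the numbers below are illustrative, assuming a clean doubling every two years.

    ```python
    # Back-of-the-envelope: doubling every two years for 50 years is 25 doublings.
    doublings = 50 / 2
    growth = 2 ** doublings
    print(f"Roughly {growth:,.0f}x more transistors per chip after 50 years")   # ~33,554,432x
    ```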

    But with the undergirding technology approaching the size of a single atom, many fear the heyday of the digital revolution is coming to a close, forcing technologists around the world to rethink their business strategies and their notions of computing altogether.

    We have faced the end of Moore’s Law before — in fact, Brian Krzanich, Intel’s chief executive, jokes that he has seen the doomsday prediction made no fewer than four times in his life. But what makes the coming barrier different is that, whether we have another five or even ten years of boosting the silicon semiconductors that constitute the core of modern computing, we are going to hit a physical wall sooner rather than later.


    Transistor counts for integrated circuits plotted against their dates of introduction. The curve shows Moore’s law – the doubling of transistor counts every two years. The y-axis is logarithmic, so the line corresponds to exponential growth.

    If Moore’s Law is to survive, it would require a radical innovation, rather than the predictable progress that has sustained chip makers over recent decades.

    And most technology companies in the world are beginning to acknowledge the changing forecast for digital hardware. The semiconductor industry associations of the United States, Europe, Japan, South Korea, and Taiwan will issue only one more report forecasting chip technology growth. Intel’s CEO casts these gloomy predictions as premature and has refused to participate in the final report. Krzanich insists Intel has the technical capabilities to keep improving chips while keeping costs low for manufacturers, though few in the industry believe the faltering company will maintain its quixotic course for long.


    Access mp4 video here .

    The rest of the industry is looking ahead to new opportunities. New technologies like graphene (an atomic-scale, honeycomb-like web of carbon atoms) and quantum computing offer a way out of the physical limitations imposed by silicon semiconductors. Graphene has recently enthralled chipmakers with its affordable carbon base and a structure that makes it an ideal candidate for faster, though still largely conventional, digital processing.

    The ideal crystalline structure of graphene is a hexagonal grid.

    “As you look at Intel saying the PC industry is slowing and seeing the first signs of slowing in mobile computing, people are starting to look for new places to put semiconductors,” David Kanter, a semiconductor industry analyst at Real World Technologies in San Francisco, told The New York Times.

    Quantum computing, on the other hand, would tap the ambiguity inherent in the universe to change computing forever. The prospect has long intrigued tech companies, and the recent debut of some radical early-stage designs has reignited the fervor of quantum’s advocates.

    This image appeared in an IBM promotion that read: “IBM unlocks quantum computing capabilities, lifts limits of innovation.”

    For many years, the end of Moore’s Law was viewed as a kind of apocalypse scenario for the technology industry: What would we do when there was no more room on the chip? Much of what has been forecast about the future of the digital world has been predicated on the notion that we will continue to make the incredible improvements of the past half century.

    It’s perhaps a good sign that technology companies are soberly looking to the future and getting excited about new, promising developments that may yet yield entirely new frontiers.

    Photos via Wgsimon [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0) or GFDL (http://www.gnu.org/copyleft/fdl.html)], via Wikimedia Commons; AlexanderAlUS (Own work) [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0) or GFDL (http://www.gnu.org/copyleft/fdl.html)], via Wikimedia Commons; IBM; Jamie Baxter

    See the full article here .

    Please help promote STEM in your local schools.


    Stem Education Coalition

     
  • richardmitnick 9:00 am on March 25, 2016
    Tags: Computing

    From MIT Tech Review: “Intel Puts the Brakes on Moore’s Law” 

    MIT Technology Review

    3.25.16
    Tom Simonite


    Chip maker Intel has signaled a slowing of Moore’s Law, a technological phenomenon that has played a role in just about every major advance in engineering and technology for decades.


    Since the 1970s, Intel has released chips that fit twice as many transistors into the same space roughly every two years, aiming to follow an exponential curve named after Gordon Moore, one of the company’s cofounders. That continual shrinking has helped make computers more powerful, compact, and energy-efficient. It has helped bring us smartphones, powerful Internet services, and breakthroughs in fields such as artificial intelligence and genetics. And Moore’s Law has become shorthand for the idea that anything involving computing gets more capable over time.

    But Intel disclosed in a regulatory filing last month that it is slowing the pace with which it launches new chip-making technology. The gap between successive generations of chips with new, smaller transistors will widen. With the transistors in Intel’s latest chips already as small as 14 nanometers, it is becoming more difficult to shrink them further in a way that’s cost-effective for production.

    Intel’s strategy shift is not a complete surprise. It already pushed back the debut of its first chips with 10-nanometer transistors from the end of this year to sometime in 2017. But it is notable that the company has now admitted that wasn’t a one-off, and that it can’t keep up the pace it used to. That means Moore’s Law will slow down, too.

    That doesn’t necessarily mean that our devices are about to stop improving, or that ideas such as driverless cars will stall from lack of processing power. Intel says it will deliver extra performance upgrades between generations of transistor technology by making improvements to the way chips are designed. And the company’s chips are essentially irrelevant to mobile devices, a market dominated by competitors that are generally a few years behind in terms of shrinking transistors and adopting new manufacturing technologies. It is also arguable that for many important new use cases for computing, such as wearable devices or medical implants, chips are already powerful enough and power consumption is more important.

    But raw computing power still matters. Putting more of it behind machine-learning algorithms has been crucial to recent breakthroughs in artificial intelligence, for example. And Intel is likely to have to deliver more bad news about the future of chips and Moore’s Law before too long.

    The company’s chief of manufacturing said in February that Intel needs to switch away from silicon transistors in about four years. “The new technology will be fundamentally different,” he said, before admitting that Intel doesn’t yet have a successor lined up. There are two leading candidates—technologies known as spintronics and tunneling transistors—but they may not offer big increases in computing power. And both are far from being ready for use in making processors in large volumes.

    [If one examines the details of many supercomputers, one sees that graphics processing units (GPUs) are becoming much more important than central processing units (CPUs), which are based upon the transistor developments ruled by Moore’s Law.]

    See the full article here .

    Please help promote STEM in your local schools.


    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 11:13 am on February 13, 2016
    Tags: Computing

    From Nature: “The chips are down for Moore’s law” 

    Nature

    09 February 2016
    M. Mitchell Waldrop


    Next month, the worldwide semiconductor industry will formally acknowledge what has become increasingly obvious to everyone involved: Moore’s law, the principle that has powered the information-technology revolution since the 1960s, is nearing its end.

    A rule of thumb that has come to dominate computing, Moore’s law states that the number of transistors on a microprocessor chip will double every two years or so — which has generally meant that the chip’s performance will, too. The exponential improvement that the law describes transformed the first crude home computers of the 1970s into the sophisticated machines of the 1980s and 1990s, and from there gave rise to high-speed Internet, smartphones and the wired-up cars, refrigerators and thermostats that are becoming prevalent today.

    None of this was inevitable: chipmakers deliberately chose to stay on the Moore’s law track. At every stage, software developers came up with applications that strained the capabilities of existing chips; consumers asked more of their devices; and manufacturers rushed to meet that demand with next-generation chips. Since the 1990s, in fact, the semiconductor industry has released a research road map every two years to coordinate what its hundreds of manufacturers and suppliers are doing to stay in step with the law — a strategy sometimes called More Moore. It has been largely thanks to this road map that computers have followed the law’s exponential demands.

    Not for much longer. The doubling has already started to falter, thanks to the heat that is unavoidably generated when more and more silicon circuitry is jammed into the same small area. And some even more fundamental limits loom less than a decade away. Top-of-the-line microprocessors currently have circuit features that are around 14 nanometres across, smaller than most viruses. But by the early 2020s, says Paolo Gargini, chair of the road-mapping organization, “even with super-aggressive efforts, we’ll get to the 2–3-nanometre limit, where features are just 10 atoms across. Is that a device at all?” Probably not — if only because at that scale, electron behaviour will be governed by quantum uncertainties that will make transistors hopelessly unreliable. And despite vigorous research efforts, there is no obvious successor to today’s silicon technology.
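
    The “10 atoms across” figure is easy to sanity-check against the silicon bond length of roughly 0.235 nanometres; the short calculation below is illustrative only.

    ```python
    # Sanity check of "2-3 nm is about 10 atoms across": neighbouring silicon atoms
    # sit roughly 0.235 nm apart (the Si-Si bond length).
    si_si_bond_nm = 0.235
    for feature_nm in (2, 3):
        print(f"{feature_nm} nm is about {feature_nm / si_si_bond_nm:.0f} atoms across")
    # -> 2 nm is about 9 atoms across, 3 nm is about 13 atoms across
    ```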

    The industry road map released next month will for the first time lay out a research and development plan that is not centred on Moore’s law. Instead, it will follow what might be called the More than Moore strategy: rather than making the chips better and letting the applications follow, it will start with applications — from smartphones and supercomputers to data centres in the cloud — and work downwards to see what chips are needed to support them. Among those chips will be new generations of sensors, power-management circuits and other silicon devices required by a world in which computing is increasingly mobile.

    The changing landscape, in turn, could splinter the industry’s long tradition of unity in pursuit of Moore’s law. “Everybody is struggling with what the road map actually means,” says Daniel Reed, a computer scientist and vice-president for research at the University of Iowa in Iowa City. The Semiconductor Industry Association (SIA) in Washington DC, which represents all the major US firms, has already said that it will cease its participation in the road-mapping effort once the report is out, and will instead pursue its own research and development agenda.

    Everyone agrees that the twilight of Moore’s law will not mean the end of progress. “Think about what happened to airplanes,” says Reed. “A Boeing 787 doesn’t go any faster than a 707 did in the 1950s — but they are very different airplanes”, with innovations ranging from fully electronic controls to a carbon-fibre fuselage. That’s what will happen with computers, he says: “Innovation will absolutely continue — but it will be more nuanced and complicated.”

    Laying down the law

    The 1965 essay (1) that would make Gordon Moore famous started with a meditation on what could be done with the still-new technology of integrated circuits. Moore, who was then research director of Fairchild Semiconductor in San Jose, California, predicted wonders such as home computers, digital wristwatches, automatic cars and “personal portable communications equipment” — mobile phones. But the heart of the essay was Moore’s attempt to provide a timeline for this future. As a measure of a microprocessor’s computational power, he looked at transistors, the on–off switches that make computing digital. On the basis of achievements by his company and others in the previous few years, he estimated that the number of transistors and other electronic components per chip was doubling every year.

    Moore, who would later co-found Intel in Santa Clara, California, underestimated the doubling time; in 1975, he revised it to a more realistic two years (2). But his vision was spot on. The future that he predicted started to arrive in the 1970s and 1980s, with the advent of microprocessor-equipped consumer products such as Hewlett-Packard hand calculators, the Apple II computer and the IBM PC. Demand for such products was soon exploding, and manufacturers were engaging in a brisk competition to offer more and more capable chips in smaller and smaller packages (see ‘Moore’s lore’).

    This was expensive. Improving a microprocessor’s performance meant scaling down the elements of its circuit so that more of them could be packed together on the chip, and electrons could move between them more quickly. Scaling, in turn, required major refinements in photolithography, the basic technology for etching those microscopic elements onto a silicon surface. But the boom times were such that this hardly mattered: a self-reinforcing cycle set in. Chips were so versatile that manufacturers could make only a few types — processors and memory, mostly — and sell them in huge quantities. That gave them enough cash to cover the cost of upgrading their fabrication facilities, or ‘fabs’, and still drop the prices, thereby fuelling demand even further.

    Soon, however, it became clear that this market-driven cycle could not sustain the relentless cadence of Moore’s law by itself. The chip-making process was getting too complex, often involving hundreds of stages, which meant that taking the next step down in scale required a network of materials-suppliers and apparatus-makers to deliver the right upgrades at the right time. “If you need 40 kinds of equipment and only 39 are ready, then everything stops,” says Kenneth Flamm, an economist who studies the computer industry at the University of Texas at Austin.

    To provide that coordination, the industry devised its first road map. The idea, says Gargini, was “that everyone would have a rough estimate of where they were going, and they could raise an alarm if they saw roadblocks ahead”. The US semiconductor industry launched the mapping effort in 1991, with hundreds of engineers from various companies working on the first report and its subsequent iterations, and Gargini, then the director of technology strategy at Intel, as its chair. In 1998, the effort became the International Technology Roadmap for Semiconductors, with participation from industry associations in Europe, Japan, Taiwan and South Korea. (This year’s report, in keeping with its new approach, will be called the International Roadmap for Devices and Systems.)

    “The road map was an incredibly interesting experiment,” says Flamm. “So far as I know, there is no example of anything like this in any other industry, where every manufacturer and supplier gets together and figures out what they are going to do.” In effect, it converted Moore’s law from an empirical observation into a self-fulfilling prophecy: new chips followed the law because the industry made sure that they did.

    And it all worked beautifully, says Flamm — right up until it didn’t.

    Heat death

    The first stumbling block was not unexpected. Gargini and others had warned about it as far back as 1989. But it hit hard nonetheless: things got too small.

    “It used to be that whenever we would scale to smaller feature size, good things happened automatically,” says Bill Bottoms, president of Third Millennium Test Solutions, an equipment manufacturer in Santa Clara. “The chips would go faster and consume less power.”

    But in the early 2000s, when the features began to shrink below about 90 nanometres, that automatic benefit began to fail. As electrons had to move faster and faster through silicon circuits that were smaller and smaller, the chips began to get too hot.

    That was a fundamental problem. Heat is hard to get rid of, and no one wants to buy a mobile phone that burns their hand. So manufacturers seized on the only solutions they had, says Gargini. First, they stopped trying to increase ‘clock rates’ — how fast microprocessors execute instructions. This effectively put a speed limit on the chip’s electrons and limited their ability to generate heat. The maximum clock rate hasn’t budged since 2004.

    Second, to keep the chips moving along the Moore’s law performance curve despite the speed limit, they redesigned the internal circuitry so that each chip contained not one processor, or ‘core’, but two, four or more. (Four and eight are common in today’s desktop computers and smartphones.) In principle, says Gargini, “you can have the same output with four cores going at 250 megahertz as one going at 1 gigahertz”. In practice, exploiting eight processors means that a problem has to be broken down into eight pieces — which for many algorithms is difficult to impossible. “The piece that can’t be parallelized will limit your improvement,” says Gargini.
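
    Gargini’s caveat is a restatement of Amdahl’s law: the fraction of a program that cannot be parallelized caps the speed-up extra cores can deliver. A small illustrative calculation (not from the article) makes the point.

    ```python
    # Amdahl's law: the serial fraction of a task caps the speed-up from more cores.
    def amdahl_speedup(parallel_fraction, n_cores):
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

    for p in (0.5, 0.9, 0.99):
        print(f"{p:.0%} parallelizable: 8 cores give {amdahl_speedup(p, 8):.1f}x")
    # -> 50%: 1.8x, 90%: 4.7x, 99%: 7.5x
    ```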

    Even so, when combined with creative redesigns to compensate for electron leakage and other effects, these two solutions have enabled chip manufacturers to continue shrinking their circuits and keeping their transistor counts on track with Moore’s law. The question now is what will happen in the early 2020s, when continued scaling is no longer possible with silicon because quantum effects have come into play. What comes next? “We’re still struggling,” says An Chen, an electrical engineer who works for the international chipmaker GlobalFoundries in Santa Clara, California, and who chairs a committee of the new road map that is looking into the question.

    That is not for a lack of ideas. One possibility is to embrace a completely new paradigm — something like quantum computing, which promises exponential speed-up for certain calculations, or neuromorphic computing, which aims to model processing elements on neurons in the brain. But none of these alternative paradigms has made it very far out of the laboratory. And many researchers think that quantum computing will offer advantages only for niche applications, rather than for the everyday tasks at which digital computing excels. “What does it mean to quantum-balance a chequebook?” wonders John Shalf, head of computer-science research at the Lawrence Berkeley National Laboratory in Berkeley, California.

    Material differences

    A different approach, which does stay in the digital realm, is the quest to find a ‘millivolt switch’: a material that could be used for devices at least as fast as their silicon counterparts, but that would generate much less heat. There are many candidates, ranging from 2D graphene-like compounds to spintronic materials that would compute by flipping electron spins rather than by moving electrons. “There is an enormous research space to be explored once you step outside the confines of the established technology,” says Thomas Theis, a physicist who directs the nanoelectronics initiative at the Semiconductor Research Corporation (SRC), a research-funding consortium in Durham, North Carolina.

    Unfortunately, no millivolt switch has made it out of the laboratory either. That leaves the architectural approach: stick with silicon, but configure it in entirely new ways. One popular option is to go 3D. Instead of etching flat circuits onto the surface of a silicon wafer, build skyscrapers: stack many thin layers of silicon with microcircuitry etched into each. In principle, this should make it possible to pack more computational power into the same space. In practice, however, this currently works only with memory chips, which do not have a heat problem: they use circuits that consume power only when a memory cell is accessed, which is not that often. One example is the Hybrid Memory Cube design, a stack of as many as eight memory layers that is being pursued by an industry consortium originally launched by Samsung and memory-maker Micron Technology in Boise, Idaho.

    Microprocessors are more challenging: stacking layer after layer of hot things simply makes them hotter. But one way to get around that problem is to do away with separate memory and microprocessing chips, as well as the prodigious amount of heat — at least 50% of the total — that is now generated in shuttling data back and forth between the two. Instead, integrate them in the same nanoscale high-rise.

    This is tricky, not least because current-generation microprocessors and memory chips are so different that they cannot be made on the same fab line; stacking them requires a complete redesign of the chip’s structure. But several research groups are hoping to pull it off. Electrical engineer Subhasish Mitra and his colleagues at Stanford University in California have developed a hybrid architecture that stacks memory units together with transistors made from carbon nanotubes, which also carry current from layer to layer (3). The group thinks that its architecture could reduce energy use to less than one-thousandth that of standard chips.

    Going mobile

    The second stumbling block for Moore’s law was more of a surprise, but unfolded at roughly the same time as the first: computing went mobile.

    Twenty-five years ago, computing was defined by the needs of desktop and laptop machines; supercomputers and data centres used essentially the same microprocessors, just packed together in much greater numbers. Not any more. Today, computing is increasingly defined by what high-end smartphones and tablets do — not to mention by smart watches and other wearables, as well as by the exploding number of smart devices in everything from bridges to the human body. And these mobile devices have priorities very different from those of their more sedentary cousins.

    Keeping abreast of Moore’s law is fairly far down on the list — if only because mobile applications and data have largely migrated to the worldwide network of server farms known as the cloud. Those server farms now dominate the market for powerful, cutting-edge microprocessors that do follow Moore’s law. “What Google and Amazon decide to buy has a huge influence on what Intel decides to do,” says Reed.

    Much more crucial for mobiles is the ability to survive for long periods on battery power while interacting with their surroundings and users. The chips in a typical smartphone must send and receive signals for voice calls, Wi-Fi, Bluetooth and the Global Positioning System, while also sensing touch, proximity, acceleration, magnetic fields — even fingerprints. On top of that, the device must host special-purpose circuits for power management, to keep all those functions from draining the battery.

    The problem for chipmakers is that this specialization is undermining the self-reinforcing economic cycle that once kept Moore’s law humming. “The old market was that you would make a few different things, but sell a whole lot of them,” says Reed. “The new market is that you have to make a lot of things, but sell a few hundred thousand apiece — so it had better be really cheap to design and fab them.”

    Both are ongoing challenges. Getting separately manufactured technologies to work together harmoniously in a single device is often a nightmare, says Bottoms, who heads the new road map’s committee on the subject. “Different components, different materials, electronics, photonics and so on, all in the same package — these are issues that will have to be solved by new architectures, new simulations, new switches and more.”

    For many of the special-purpose circuits, design is still something of a cottage industry — which means slow and costly. At the University of California, Berkeley, electrical engineer Alberto Sangiovanni-Vincentelli and his colleagues are trying to change that: instead of starting from scratch each time, they think that people should create new devices by combining large chunks of existing circuitry that have known functionality (4). “It’s like using Lego blocks,” says Sangiovanni-Vincentelli. It’s a challenge to make sure that the blocks work together, but “if you were to use older methods of design, costs would be prohibitive”.

    Costs, not surprisingly, are very much on the chipmakers’ minds these days. “The end of Moore’s law is not a technical issue, it is an economic issue,” says Bottoms. Some companies, notably Intel, are still trying to shrink components before they hit the wall imposed by quantum effects, he says. But “the more we shrink, the more it costs”.

    Every time the scale is halved, manufacturers need a whole new generation of ever more precise photolithography machines. Building a new fab line today requires an investment typically measured in many billions of dollars — something only a handful of companies can afford. And the fragmentation of the market triggered by mobile devices is making it harder to recoup that money. “As soon as the cost per transistor at the next node exceeds the existing cost,” says Bottoms, “the scaling stops.”

    Many observers think that the industry is perilously close to that point already. “My bet is that we run out of money before we run out of physics,” says Reed.

    Certainly it is true that rising costs over the past decade have forced a massive consolidation in the chip-making industry. Most of the world’s production lines now belong to a comparative handful of multinationals such as Intel, Samsung and the Taiwan Semiconductor Manufacturing Company in Hsinchu. These manufacturing giants have tight relationships with the companies that supply them with materials and fabrication equipment; they are already coordinating, and no longer find the road-map process all that useful. “The chip manufacturer’s buy-in is definitely less than before,” says Chen.

    Take the SRC, which functions as the US industry’s research agency: it was a long-time supporter of the road map, says SRC vice-president Steven Hillenius. “But about three years ago, the SRC contributions went away because the member companies didn’t see the value in it.” The SRC, along with the SIA, wants to push a more long-term, basic research agenda and secure federal funding for it — possibly through the White House’s National Strategic Computing Initiative, launched in July last year.

    That agenda, laid out in a report (5) last September, sketches out the research challenges ahead. Energy efficiency is an urgent priority — especially for the embedded smart sensors that comprise the ‘Internet of things’, which will need new technology to survive without batteries, using energy scavenged from ambient heat and vibration. Connectivity is equally key: billions of free-roaming devices trying to communicate with one another and the cloud will need huge amounts of bandwidth, which they can get if researchers can tap the once-unreachable terahertz band lying deep in the infrared spectrum. And security is crucial — the report calls for research into new ways to build in safeguards against cyberattack and data theft.

    These priorities and others will give researchers plenty to work on in coming years. At least some industry insiders, including Shekhar Borkar, head of Intel’s advanced microprocessor research, are optimists. Yes, he says, Moore’s law is coming to an end in a literal sense, because the exponential growth in transistor count cannot continue. But from the consumer perspective, “Moore’s law simply states that user value doubles every two years”. And in that form, the law will continue as long as the industry can keep stuffing its devices with new functionality.

    The ideas are out there, says Borkar. “Our job is to engineer them.”

    Nature 530, 144–147 (11 February 2016) doi:10.1038/530144a

    1. Moore, G. E. Electronics 38, 114–117 (1965).
    2. Moore, G. E. IEDM Tech. Digest 11–13 (1975).
    3. Sabry Aly, M. M. et al. Computer 48(12), 24–33 (2015).
    4. Nikolic, B. 41st Eur. Solid-State Circuits Conf. (2015); available at http://go.nature.com/wwljk7
    5. Rebooting the IT Revolution: A Call to Action (SIA/SRC, 2015); available at http://go.nature.com/urvkhw

    See the full article here .

    Please help promote STEM in your local schools.


    Stem Education Coalition

    Nature is a weekly international journal publishing the finest peer-reviewed research in all fields of science and technology on the basis of its originality, importance, interdisciplinary interest, timeliness, accessibility, elegance and surprising conclusions. Nature also provides rapid, authoritative, insightful and arresting news and interpretation of topical and coming trends affecting science, scientists and the wider public.

     
  • richardmitnick 4:40 pm on December 23, 2015
    Tags: Computing

    From Berkeley: “Engineers demo first processor that uses light for ultrafast communications” 

    UC Berkeley

    December 23, 2015
    Sarah Yang

    The electronic-photonic processor chip communicates to the outside world directly using light, illustrated here. The photo shows the packaged microchip under illumination, revealing the chip’s primary features. (Image by Glenn J. Asakawa, University of Colorado, Glenn.Asakawa@colorado.edu)

    Engineers have successfully married electrons and photons within a single-chip microprocessor, a landmark development that opens the door to ultrafast, low-power data crunching.

    The researchers packed two processor cores with more than 70 million transistors and 850 photonic components onto a 3-by-6-millimeter chip. They fabricated the microprocessor in a foundry that mass-produces high-performance computer chips, proving that their design can be easily and quickly scaled up for commercial production.

    The new chip, described in a paper to be published Dec. 24 in the print issue of the journal Nature, marks the next step in the evolution of fiber optic communication technology by integrating into a microprocessor the photonic interconnects, or inputs and outputs (I/O), needed to talk to other chips.

    “This is a milestone. It’s the first processor that can use light to communicate with the external world,” said Vladimir Stojanović, an associate professor of electrical engineering and computer sciences at the University of California, Berkeley, who led the development of the chip. “No other processor has the photonic I/O in the chip.”

    Stojanović and fellow UC Berkeley professor Krste Asanović teamed up with Rajeev Ram at the Massachusetts Institute of Technology and Miloš Popović at the University of Colorado Boulder to develop the new microprocessor.

    “This is the first time we’ve put a system together at such scale, and have it actually do something useful, like run a program,” said Asanović, who helped develop the free and open architecture called RISC-V (reduced instruction set computer), used by the processor.

    Greater bandwidth with less power

    Compared with electrical wires, fiber optics support greater bandwidth, carrying more data at higher speeds over greater distances with less energy. While advances in optical communication technology have dramatically improved data transfers between computers, bringing photonics into the computer chips themselves had been difficult.

    The electronic-photonic processor chip naturally illuminated by red and green bands of light. (Image by Glenn J. Asakawa, University of Colorado, Glenn.Asakawa@colorado.edu)

    That’s because no one until now had figured out how to integrate photonic devices into the same complex and expensive fabrication processes used to produce computer chips without changing the process itself. Doing so is key since it does not further increase the cost of the manufacturing or risk failure of the fabricated transistors.

    The researchers verified the functionality of the chip with the photonic interconnects by using it to run various computer programs, requiring it to send and receive instructions and data to and from memory. They showed that the chip had a bandwidth density of 300 gigabits per second per square millimeter, about 10 to 50 times greater than packaged electrical-only microprocessors currently on the market.

    The photonic I/O on the chip is also energy-efficient, using only 1.3 picojoules per bit, equivalent to consuming 1.3 watts of power to transmit a terabit of data per second. In the experiments, the data was sent to a receiver 10 meters away and back.
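
    That energy figure is straightforward to verify; the trivial check below is my own arithmetic, not from the paper.

    ```python
    # Trivial check of the quoted figure: 1.3 pJ/bit at 1 Tb/s is 1.3 W.
    energy_per_bit_joules = 1.3e-12    # 1.3 picojoules
    bits_per_second = 1e12             # one terabit per second
    print(f"{energy_per_bit_joules * bits_per_second:.2f} watts")   # -> 1.30
    ```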

    “The advantage with optical is that with the same amount of power, you can go a few centimeters, a few meters or a few kilometers,” said study co-lead author Chen Sun, a recent UC Berkeley Ph.D. graduate from Stojanović’s lab at the Berkeley Wireless Research Center. “For high-speed electrical links, 1 meter is about the limit before you need repeaters to regenerate the electrical signal, and that quickly increases the amount of power needed. For an electrical signal to travel 1 kilometer, you’d need thousands of picojoules for each bit.”

    The achievement opens the door to a new era of bandwidth-hungry applications. One near-term application for this technology is to make data centers more green. According to the Natural Resources Defense Council, data centers consumed about 91 billion kilowatt-hours of electricity in 2013, about 2 percent of the total electricity consumed in the United States, and the appetite for power is growing exponentially.

    This research has already spun off two startups this year with applications in data centers in mind. SiFive is commercializing the RISC-V processors, while Ayar Labs is focusing on photonic interconnects. Earlier this year, Ayar Labs – under its previous company name of OptiBit – was awarded the MIT Clean Energy Prize. Ayar Labs is getting further traction through the CITRIS Foundry startup incubator at UC Berkeley.

    The advance is timely, coming as world leaders emerge from the COP21 United Nations climate talks with new pledges to limit global warming.

    Further down the road, this research could be used in applications such as LIDAR (light detection and ranging), the laser-based sensing technology used to guide self-driving vehicles and serve as the eyes of a robot; brain ultrasound imaging; and new environmental biosensors.

    ‘Fiat lux’ on a chip

    The researchers came up with a number of key innovations to harness the power of light within the chip.

    The illumination and camera create a rainbow-colored pattern across the electronic-photonic processor chip. (Image by Milos Popović, University of Colorado, milos.popovic@colorado.edu)

    Each of the key photonic I/O components – such as a ring modulator, photodetector and a vertical grating coupler – serves to control and guide the light waves on the chip, but the design had to conform to the constraints of a process originally thought to be hostile to photonic components. To enable light to move through the chip with minimal loss, for instance, the researchers used the silicon body of the transistor as a waveguide for the light. They did this by using available masks in the fabrication process to manipulate doping, the process used to form different parts of transistors.

    After getting the light onto the chip, the researchers needed to find a way to control it so that it can carry bits of data. They designed a silicon ring with p-n doped junction spokes next to the silicon waveguide to enable fast and low-energy modulation of light.

    Using the silicon-germanium parts of a modern transistor – an existing part of the semiconductor manufacturing process – to build a photodetector took advantage of germanium’s ability to absorb light and convert it into electricity.

    A vertical grating coupler that leverages existing poly-silicon and silicon layers in innovative ways was used to connect the chip to the external world, directing the light in the waveguide up and off the chip. The researchers integrated electronic components tightly with these photonic devices to enable stable operation in a hostile chip environment.

    The authors emphasized that these adaptations all worked within the parameters of existing microprocessor manufacturing systems, and that it will not be difficult to optimize the components to further improve their chip’s performance.

    Other co-lead authors on this paper are Mark Wade, Ph.D. student at the University of Colorado, Boulder; Yunsup Lee, a Ph.D. candidate at UC Berkeley; and Jason Orcutt, an MIT graduate who now works at the IBM Research Center in New York.

    The Defense Advanced Research Projects Agency (DARPA) helped support this work.

    See the full article here .

    Please help promote STEM in your local schools.


    Stem Education Coalition

    Founded in the wake of the gold rush by leaders of the newly established 31st state, the University of California’s flagship campus at Berkeley has become one of the preeminent universities in the world. Its early guiding lights, charged with providing education (both “practical” and “classical”) for the state’s people, gradually established a distinguished faculty (with 22 Nobel laureates to date), a stellar research library, and more than 350 academic programs.

    UC Berkeley Seal

     
  • richardmitnick 8:11 pm on December 20, 2015 Permalink | Reply
    Tags: , Computing, Hacking,   

    From The Atlantic: “Pop Culture Is Finally Getting Hacking Right” 

    Atlantic Magazine

    The Atlantic Magazine

    Dec 1, 2015
    Joe Marshall

    1
    USA

    Movies and TV shows have long relied on lazy and unrealistic depictions of how cybersecurity works. That’s beginning to change.

    The idea of a drill-wielding hacker who runs a deep-web empire selling drugs to teens seems like a fantasy embodying the worst of digital technology. It’s also, in the spirit of CSI: Cyber, completely ridiculous. So it was no surprise when a recent episode of the CBS drama outed its villain as a video-game buff who lived at home with his mother. For a series whose principal draw is watching Patricia Arquette yell, “Find the malware!”, that sort of stereotypical characterization and lack of realism is to be expected.

    But CSI: Cyber is something of an anomaly when it comes to portraying cybersecurity on the big or small screen. Hollywood is putting more effort into creating realistic technical narratives and thoughtfully depicting programming culture, breaking new ground with shows like Mr. Robot, Halt and Catch Fire, and Silicon Valley, and films like Blackhat. It’s a smart move, in part because audiences now possess a more sophisticated understanding of such technology than they did in previous decades. Cyberattacks, such as the 2013 incident that affected tens of millions of Target customers, are a real threat, and Americans generally have little confidence that their personal records will remain private and secure. The most obvious promise of Hollywood investing in technically savvy fiction is that these works will fuel a grassroots understanding of digital culture, including topics such as adblockers and surveillance self-defense. But just as important is a film and TV industry that sees the artistic value in accurately capturing a subject that’s relevant to the entire world.

    In some ways, cyberthrillers are just a new kind of procedural—rough outlines of the technical worlds only a few inhabit. But unlike shows based on lawyers, doctors, or police officers, shows about programmers deal with especially timely material. Perry Mason, the TV detective from the ’50s and ’60s, would recognize the tactics of Detective Lennie Briscoe from Law & Order, but there’s no ’60s hacker counterpart to talk shop with Mr. Robot’s Elliot Alderson. It’s true that what you can hack has changed dramatically over the past 20 years: The amount of information is exploding, and expanding connectivity means people can program everything from refrigerators to cars. But beyond that, hacking itself looks pretty much the same, thanks to the largely unchanging appearance and utility of the command line—a text-only interface favored by developers, hackers, and other programming types.

    2
    Laurelai Storm / Github

    So why has it taken so long for television and film to adapt and accurately portray the most essential aspects of programming? The usual excuse from producers and set designers is that it’s ugly and translates poorly to the screen. As a result, the easiest way to portray code in a movie has long been to shoot a green screen pasted onto a computer display, then add technical nonsense in post-production. Faced with dramatizing arcane details that most viewers at the time wouldn’t understand, the overwhelming temptation for filmmakers was to amp up the visuals, even if it meant creating something utterly removed from the reality of programming. That’s what led to the trippy, Tron-like graphics in 1995’s Hackers, or Hugh Jackman bravely assembling a wire cube made out of smaller, more solid cubes in 2001’s Swordfish.

    3
    A scene from Hackers (MGM)

    4
    A scene from Swordfish (Warner Bros.)

    But more recent depictions of coding are much more naturalistic than previous CGI-powered exercises in geometry. Despite its many weaknesses, this year’s Blackhat does a commendable job of representing cybersecurity. A few scenes show malware reminiscent of this decompiled glimpse of Stuxnet—the cyber superweapon created as a joint effort by the U.S. and Israel. The snippets look similar because they’re both variants of C, a popular programming language commonly used in memory-intensive applications. In Blackhat, the malware’s target was the software used to manage the cooling towers of a Chinese nuclear power plant. In real life, Stuxnet was used to target the software controlling Iranian centrifuges to systematically and covertly degrade the country’s nuclear enrichment efforts.

    5
    An image of code used in Stuxnet (Github)

    6
    Code shown in Blackhat (Universal)

    In other words, both targeted industrial machinery and monitoring software, and both seem to be written in a language compatible with those ends. Meaning that Hollywood producers took care to research what real-life malware might look like and how it’d likely be used, even if the average audience member wouldn’t know the difference. Compared to the sky-high visuals of navigating a virtual filesystem in Hackers, where early-CGI wizardry was thought the only way to retain audience attention, Blackhat’s commitment to the terminal and actual code is refreshing.

    Though it gets the visuals right, Blackhat highlights another common Hollywood misstep when it comes to portraying computer science on screen: It uses programming for heist-related ends. For many moviegoers, hacking is how you get all green lights for your getaway car (The Italian Job) or stick surveillance cameras in a loop (Ocean’s Eleven, The Score, Speed). While older films frequently fall into this trap, at least one action hacker flick sought to explore how such technology could affect society more broadly, even if it fumbled the details. In 1995, The Net debuted as a cybersecurity-themed Sandra Bullock vehicle that cast one of America’s sweethearts into a Kafkaesque nightmare. As part of her persecution at the hands of the evil Gatekeeper corporation, Bullock’s identity is erased from a series of civil and corporate databases, turning her into a fugitive thanks to a forged criminal record. Technical gibberish aside, The Net was ahead of its time in tapping into the feeling of being powerless to contradict an entrenched digital bureaucracy.

    It’s taken a recent renaissance in scripted television to allow the space for storytellers to focus on programming as a culture, instead of a techy way to spruce up an action movie. And newer television shows have increasingly been able to capture that nuance without sacrificing mood and veracity. While design details like screens and terminal shots matter, the biggest challenge is writing a script that understands and cares about programming. Mr. Robot, which found critical success when it debuted on USA this summer, is perhaps the most accurate television show ever to depict cybersecurity. In particular, programmers have praised the show’s use of terminology, its faithful incorporation of actual security issues into the plot, and the way its protagonist uses real applications and tools. The HBO comedy series Silicon Valley, which was renewed for a third season, had a scene where a character wrote out the math behind a new compression algorithm. It turned out to be fleshed-out enough that a fan of the show actually recreated it. And even though a show like CSI: Cyber might regularly miss the mark, it has its bright spots, such as an episode about car hacking.

    There’s a more timeless reason for producers and writers to scrutinize technical detail: It makes for good art. “We’re constantly making sure the verisimilitude of the show is as impervious as possible,” said Jonathan Lisco, the showrunner for AMC’s Halt and Catch Fire, a drama about the so-called Silicon Prairie of 1980s Texas. The actress Mackenzie Davis elaborated on the cachet such specificity could lend a show: “We need the groundswell of nerds to be like, ‘You have to watch this!’” The rise of software development as a profession means a bigger slice of the audience can now tell when a showrunner is phoning it in, and pillory the mistakes online. But it’s also no coincidence that Halt and Catch Fire is on the same network that was once home to that other stickler for accuracy—Mad Men.

    Rising technical literacy and a Golden Age of creative showrunners have resulted in a crop of shows that infuse an easy but granular technical understanding with top-notch storytelling. Coupling an authentic narrative with technical aplomb can allow even average viewers to intuitively understand high-level concepts that hold up under scrutiny. And even if audiences aren’t compelled to research on their own, the rough shape of a lesson can still seep through—like how cars are hackable, or the importance of guarding against phishing and financial fraud. But above all, more sophisticated representations of hacking make for better art. In an age of black mirrors, the soft glow of an open terminal has never radiated more promise.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 1:56 pm on July 9, 2015 Permalink | Reply
    Tags: , , Computing, Network Computing, ,   

    From Symmetry: “More data, no problem” 

    Symmetry

    July 09, 2015
    Katie Elyce Jones

    Scientists are ready to handle the increased data of the current run of the Large Hadron Collider.

    1
    Photo by Reidar Hahn, Fermilab

    Physicist Alexx Perloff, a graduate student at Texas A&M University on the CMS experiment, is using data from the first run of the Large Hadron Collider for his thesis, which he plans to complete this year.

    CERN LHC Map
    CERN LHC Grand Tunnel
    CERN LHC particles
    LHC

    CERN CMS Detector
    CMS

    When all is said and done, it will have taken Perloff a year and a half to conduct the computing necessary to analyze all the information he needs—not unusual for a thesis.

    But had he used the computing tools LHC scientists are using now, he estimates he could have finished his particular kind of analysis in about three weeks. Though Perloff is just one of the many scientists working on the LHC, his experience shows the great leaps scientists have made in LHC computing by democratizing their data, becoming more responsive to popular demand and improving their analysis software.

    A deluge of data

    Scientists estimate the current run of the LHC could create up to 10 times more data than the first one. CERN already routinely stores 6 gigabytes (6 billion bytes) of data per second, up from 1 gigabyte per second in the first run.
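
    For a sense of scale, a quick back-of-the-envelope calculation (assuming, unrealistically, that the rate were sustained around the clock):

```python
# Quick arithmetic on the quoted storage rate, assuming (unrealistically) that it
# were sustained around the clock for a full year.
RATE_GB_PER_S = 6
SECONDS_PER_DAY = 24 * 60 * 60

per_day_tb = RATE_GB_PER_S * SECONDS_PER_DAY / 1_000   # gigabytes -> terabytes
per_year_pb = per_day_tb * 365 / 1_000                 # terabytes -> petabytes
print(f"~{per_day_tb:.0f} TB per day, ~{per_year_pb:.0f} PB per year if sustained")
# ~518 TB/day and ~189 PB/year; the real yearly total is much smaller because the
# LHC does not collide beams continuously.
```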

    The second run of the LHC is more data-intensive because the accelerator itself is more intense: The collision energy is 60 percent greater, resulting in “pile-up,” or more overlapping collisions per proton bunch crossing. Proton bunches are also injected into the ring closer together, resulting in more collisions per second.

    On top of that, the experiments have upgraded their triggers, which automatically choose which of the millions of particle events per second to record. The CMS trigger will now record more than twice as much data per second as it did in the previous run.

    Had CMS and ATLAS scientists relied only on adding more computers to make up for the data hike, they would likely have needed about four to six times more computing power in CPUs and storage than they used in the first run of the LHC.

    CERN ATLAS New
    ATLAS

    To avoid such a costly expansion, they found smarter ways to share and analyze the data.

    Flattening the hierarchy

    Over a decade ago, network connections were less reliable than they are today, so the Worldwide LHC Computing Grid was designed to have different levels, or tiers, that controlled data flow.

    All data recorded by the detectors goes through the CERN Data Centre, known as Tier-0, where it is initially processed, then to a handful of Tier-1 centers in different regions across the globe.

    CERN DATA Center
    One view of the CERN Data Centre

    During the last run, the Tier-1 centers served Tier-2 centers, which were mostly the smaller university computing centers where the bulk of physicists do their analyses.

    “The experience for a user on Run I was more restrictive,” says Oliver Gutsche, assistant head of the Scientific Computing Division for Science Workflows and Operations at Fermilab, the US Tier-1 center for CMS*. “You had to plan well ahead.”

    Now that the network has proved reliable, a new model “flattens” the hierarchy, enabling a user at any ATLAS or CMS Tier-2 center to access data from any of their centers in the world. This was initiated in Run I and is now fully in place for Run II.

    Through a separate upgrade known as data federation, users can also open a file stored at another computing center over the network, enabling them to view it without first transferring it from center to center.
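
    Conceptually, data federation means "read it where it lives" rather than "copy it here first." The sketch below only illustrates that fallback logic; the path, endpoint, and remote-read helper are hypothetical placeholders, not the experiments' actual tools.

```python
# Conceptual sketch only; the experiments' real federations (based on XRootD) are
# far more involved.  The idea: if a dataset is not on local disk, stream it over
# the network from whichever center holds a copy instead of copying it here first.
import os

LOCAL_STORE = "/data/store"   # hypothetical local storage path

def open_event_file(logical_name):
    """Return a readable handle for a logical file name, preferring local disk."""
    local_path = os.path.join(LOCAL_STORE, logical_name)
    if os.path.exists(local_path):
        return open(local_path, "rb")   # fast path: a local replica exists
    # Fall back to remote read through the federation (hypothetical endpoint).
    remote_url = f"root://federation.example.org//store/{logical_name}"
    return open_remote(remote_url)

def open_remote(url):
    # Placeholder: plug in the experiment's remote-read client here.
    raise NotImplementedError(f"remote read not wired up for {url}")
```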

    Another significant upgrade affects the network stateside. Through its Energy Sciences Network, or ESnet, the US Department of Energy increased the bandwidth of the transatlantic network that connects the US CMS and ATLAS Tier-1 centers to Europe. A high-speed network, ESnet transfers data 15,000 times faster than the average home network provider.

    Dealing with the rush

    One of the thrilling things about being a scientist on the LHC is that when something exciting shows up in the detector, everyone wants to talk about it. The downside is everyone also wants to look at it.

    “When data is more interesting, it creates high demand and a bottleneck,” says David Lange, CMS software and computing co-coordinator and a scientist at Lawrence Livermore National Laboratory. “By making better use of our resources, we can make more data available to more people at any time.”

    To avoid bottlenecks, ATLAS and CMS are now making data accessible by popularity.

    “For CMS, this is an automated system that makes more copies when popularity rises and reduces copies when popularity declines,” Gutsche says.
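
    A toy version of such a policy might look like the sketch below; the thresholds and replica counts are invented for illustration and are not the actual CMS settings.

```python
# Toy sketch of the popularity-driven placement idea described above; the real CMS
# system and its thresholds are more sophisticated.  All numbers here are invented.
MIN_COPIES, MAX_COPIES = 1, 6

def target_copies(accesses_last_month):
    """More replicas for hot datasets, fewer for cold ones."""
    if accesses_last_month >= 1000:
        wanted = 6
    elif accesses_last_month >= 100:
        wanted = 4
    elif accesses_last_month >= 10:
        wanted = 2
    else:
        wanted = 1   # always keep at least one custodial copy
    return max(MIN_COPIES, min(MAX_COPIES, wanted))

def rebalance(current_copies, accesses_last_month):
    """Return how many replicas to add (positive) or retire (negative)."""
    return target_copies(accesses_last_month) - current_copies

print(rebalance(current_copies=2, accesses_last_month=1500))   # +4: replicate widely
print(rebalance(current_copies=3, accesses_last_month=5))      # -2: clean up copies
```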

    Improving the algorithms

    One of the greatest recent gains in computing efficiency for the LHC relied on the physicists who dig into the data. By working closely with physicists, software engineers refined the algorithms that describe the physics playing out in the LHC, significantly reducing the processing time for reconstruction and simulation jobs.

    “A huge amount of effort was put in, primarily by physicists, to understand how the physics could be analyzed while making the computing more efficient,” says Richard Mount, senior research scientist at SLAC National Accelerator Laboratory who was ATLAS computing coordinator during the recent LHC upgrades.

    CMS tripled the speed of event reconstruction and halved simulation time. Similarly, ATLAS quadrupled reconstruction speed.

    Algorithms that determine data acquisition on the upgraded triggers were also improved to better capture rare physics events and filter out the background noise of routine (and therefore uninteresting) events.

    “More data” has been the drumbeat of physicists since the end of the first run, and now that it’s finally here, LHC scientists and students like Perloff can pick up where they left off in the search for new physics—anytime, anywhere.

    *While not noted in the article, I believe that Brookhaven National Laboratory is the Tier-1 site for ATLAS in the United States.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.


     