Tagged: Computer technology

  • richardmitnick 5:07 pm on September 28, 2015
    Tags: Computer technology

    From AAAS: “Light-based memory chip is first to permanently store data” 



    25 September 2015
    Robert F. Service

    Intense light pulses (pink) write data in a patch of GST, which can be read out as digital 1s and 0s with lower-intensity light (red). C. Rios et al., Nature Photonics, Advance Online Publication (2015)

    Today’s electronic computer chips work at blazing speeds. But an alternate version that stores, manipulates, and moves data with photons of light instead of electrons would make today’s chips look like proverbial horses and buggies. Now, one team of researchers reports that it has created the first permanent optical memory on a chip, a critical step in that direction.

    “I am very positive about the work,” says Valerio Pruneri, a laser physicist at the Institute of Photonic Sciences in Barcelona, Spain, who was not involved in the research. “It’s a great demonstration of a new concept.”

    Interest in so-called photonic chips goes back decades, and it’s easy to see why. When electrons move through the basic parts of a computer chip—logic circuits that manipulate data, memory circuits that store it, and metal wires that ferry it along—they bump into one another, slowing down and generating heat that must be siphoned away. That’s not the case with photons, which travel together with no resistance, and do so at, well, light speed. Researchers have already made photon-friendly chips, with optical lines that replace metal wires and optical memory circuits. But the parts have some serious drawbacks. The memory circuits, for example, can store data only if they have a steady supply of power. When the power is turned off, the data disappear, too.

    Now, researchers led by Harish Bhaskaran, a nanoengineering expert at the University of Oxford in the United Kingdom, and electrical engineer Wolfram Pernice at the Karlsruhe Institute of Technology in Germany, have hit on a solution to the disappearing memory problem using a material at the heart of rewritable CDs and DVDs. That material—abbreviated GST—consists of a thin layer of an alloy of germanium, antimony, and tellurium. When zapped with an intense pulse of laser light, GST film changes its atomic structure from an ordered crystalline lattice to an “amorphous” jumble. These two structures reflect light in different ways, and CDs and DVDs use this difference to store data. To read out the data—stored as patterns of tiny spots with a crystalline or amorphous order—a CD or DVD drive shines low-intensity laser light on a disk and tracks the way the light bounces off.

    In their work with GST, the researchers noticed that the material’s state affected not only how light reflected off the film, but also how much of it was absorbed. When a transparent material lay underneath the GST film, spots with a crystalline order absorbed more light than did spots with an amorphous structure.

    Next, the researchers wanted to see whether they could use this property to permanently store data on a chip and later read it out. To do so, they used standard chipmaking technology to outfit a chip with a silicon nitride device, known as a waveguide, which contains and channels pulses of light. They then placed a nanoscale patch of GST atop this waveguide. To write data in this layer, the scientists piped an intense pulse of light into the waveguide. The high intensity of the light’s electromagnetic field melted the GST, turning its crystalline atomic structure amorphous. A second, slightly less intense pulse could then cause the material to revert back to its original crystalline structure.
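    The write scheme described above behaves like a two-threshold state machine: an intense pulse melts the GST into the amorphous state, and a slightly weaker pulse recrystallizes it. A minimal sketch, with made-up intensity thresholds (the paper's actual pulse energies differ):

```python
# Illustrative model of optical writing in GST, following the two-pulse
# scheme described above. The threshold values are hypothetical round
# numbers, not figures from the paper.

MELT_THRESHOLD = 1.0         # relative intensity that melts GST -> amorphous
CRYSTALLIZE_THRESHOLD = 0.6  # weaker pulse that lets atoms re-order -> crystalline

def write_pulse(state, intensity):
    """Return the new GST state after a light pulse of the given intensity."""
    if intensity >= MELT_THRESHOLD:
        return "amorphous"       # intense pulse melts the lattice
    if intensity >= CRYSTALLIZE_THRESHOLD:
        return "crystalline"     # less intense pulse re-anneals it
    return state                 # sub-threshold pulses leave the bit unchanged

state = "crystalline"
state = write_pulse(state, 1.2)   # intense write pulse
assert state == "amorphous"
state = write_pulse(state, 0.8)   # slightly less intense pulse reverts it
assert state == "crystalline"
```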

    When the researchers wanted to read the data, they beamed in less intense pulses of light and measured how much light was transmitted through the waveguide. If little light was absorbed, they knew their data spot on the GST had an amorphous order; if more was absorbed, that meant it was crystalline.

    Bhaskaran, Pernice, and their colleagues also took steps to dramatically increase the amount of data they could store and read. For starters, they sent multiple wavelengths of light through the waveguide at the same time, allowing them to write and read multiple bits of data simultaneously, something you can’t do with electrical data storage devices. And, as they report this week in Nature Photonics, by varying the intensity of their data-writing pulses, they were also able to control how much of each GST patch turned crystalline or amorphous at any one time. With this method, they could make one patch 90% amorphous but just 10% crystalline, and another 80% amorphous and 20% crystalline. That made it possible to store data in eight different such combinations, not just the usual binary 1s and 0s that would be used for 100% amorphous or crystalline spots. This dramatically boosts the amount of data each spot can store, Bhaskaran says.
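    The multilevel trick can be made concrete: with eight distinguishable crystalline fractions, each spot carries log2(8) = 3 bits instead of 1. A sketch, with illustrative level values:

```python
import math

# Sketch of multilevel storage: if each GST patch can be set to one of N
# distinguishable crystalline fractions, one spot stores log2(N) bits. The
# article's example uses 8 levels; the fraction values here are illustrative.

levels = [i / 7 for i in range(8)]       # 8 crystalline fractions, 0% .. 100%
bits_per_spot = math.log2(len(levels))   # 3.0 bits per spot instead of 1

def read_level(measured_fraction):
    """Quantize a measured crystalline fraction to the nearest stored level index."""
    return min(range(len(levels)), key=lambda i: abs(levels[i] - measured_fraction))

print(bits_per_spot)     # 3.0
print(read_level(0.12))  # 1 (nearest level is 1/7, about 0.14)
```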

    Photonic memories still have a long way to go if they ever hope to catch up to their electronic counterparts. At a minimum, their storage density will have to climb orders of magnitude to be competitive. Ultimately, Bhaskaran says, if a more advanced photonic memory can be integrated with photonic logic and interconnections, the resulting chips have the potential to run at 50 to 100 times the speed of today’s computer processors.

    See the full article here.

    The American Association for the Advancement of Science is an international non-profit organization dedicated to advancing science for the benefit of all people.

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition

  • richardmitnick 9:22 am on August 13, 2015
    Tags: Computer technology, Discrimination

    From The Conversation: “Big data algorithms can discriminate, and it’s not clear what to do about it” 

    The Conversation

    August 13, 2015
    Jeremy Kun

    “This program had absolutely nothing to do with race…but multi-variable equations.”

    That’s what Brett Goldstein, a former policeman for the Chicago Police Department (CPD) and current Urban Science Fellow at the University of Chicago’s School for Public Policy, said about a predictive policing algorithm he deployed at the CPD in 2010. His algorithm tells police where to look for criminals based on where people have been arrested previously. It’s a “heat map” of Chicago, and the CPD claims it helps them allocate resources more effectively.

    Chicago police also recently collaborated with Miles Wernick, a professor of electrical engineering at Illinois Institute of Technology, to algorithmically generate a “heat list” of 400 individuals it claims have the highest chance of committing a violent crime. In response to criticism, Wernick said the algorithm does not use “any racial, neighborhood, or other such information” and that the approach is “unbiased” and “quantitative.” By deferring decisions to poorly understood algorithms, industry professionals effectively shed accountability for any negative effects of their code.

    But do these algorithms discriminate, treating low-income and black neighborhoods and their inhabitants unfairly? It’s the kind of question many researchers are starting to ask as more and more industries use algorithms to make decisions. It’s true that an algorithm itself is quantitative – it boils down to a sequence of arithmetic steps for solving a problem. The danger is that these algorithms, which are trained on data produced by people, may reflect the biases in that data, perpetuating structural racism and negative biases about minority groups.

    There are a lot of challenges to figuring out whether an algorithm embodies bias. First and foremost, many practitioners and “computer experts” still don’t publicly admit that algorithms can easily discriminate. More and more evidence shows that not only is this possible, but it’s already happening. The law is unclear on the legality of biased algorithms, and even algorithms researchers don’t precisely understand what it means for an algorithm to discriminate.

    Is bias baked in? Justin Ruckman, CC BY

    Being quantitative doesn’t protect against bias

    Both Goldstein and Wernick claim their algorithms are fair by appealing to two things. First, the algorithms aren’t explicitly fed protected characteristics such as race or neighborhood as an attribute. Second, they say the algorithms aren’t biased because they’re “quantitative.” Their argument is an appeal to abstraction. Math isn’t human, and so the use of math can’t be immoral.

    Sadly, Goldstein and Wernick are repeating a common misconception about data mining, and mathematics in general, when it’s applied to social problems. The entire purpose of data mining is to discover hidden correlations. So if race is disproportionately (but not explicitly) represented in the data fed to a data-mining algorithm, the algorithm can infer race and use race indirectly to make an ultimate decision.
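    A toy example makes the inference concrete. Below, a hypothetical zip-code attribute is strongly correlated with group membership, and a rule trained only on zip codes and arrest history reproduces the group disparity without ever seeing the protected attribute (all data synthetic):

```python
from collections import Counter

# Toy illustration of proxy discrimination: "group" is never given to the
# rule, but a correlated attribute (a made-up zip code) lets the rule
# reproduce the same disparity. All records are synthetic.

# (group, zip_code, past_arrest) -- group correlates strongly with zip code
records = [("A", "60601", 0)] * 45 + [("A", "60601", 1)] * 5 \
        + [("B", "60620", 0)] * 25 + [("B", "60620", 1)] * 25

# "Train": flag any zip code whose historical arrest rate exceeds 20%.
arrests, totals = Counter(), Counter()
for _, zipc, arrested in records:
    totals[zipc] += 1
    arrests[zipc] += arrested
flagged = {z for z in totals if arrests[z] / totals[z] > 0.20}

# The rule never saw "group", yet it flags everyone in B and no one in A.
flag_rate = {g: sum(1 for grp, z, _ in records if grp == g and z in flagged) /
                sum(1 for grp, _, _ in records if grp == g)
             for g in ("A", "B")}
print(flag_rate)   # {'A': 0.0, 'B': 1.0}
```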

    Here’s a simple example of the way algorithms can result in a biased outcome based on what it learns from the people who use it. Look at how Google search suggests finishing a query that starts with the phrase “transgenders are”:

    Taken from Google.com on 2015-08-10.

    Autocomplete features are generally a tally: count up all the searches you’ve seen and display the most common completions of a given partial query. While most algorithms might be neutral on their face, they’re designed to find trends in the data they’re fed. Carelessly trusting an algorithm allows dominant trends to cause harmful discrimination, or at least to produce distasteful results.
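    That tally can be sketched in a few lines; the query log below is invented:

```python
from collections import Counter

# Minimal tally-style autocomplete: count every completed query ever seen,
# then suggest the most frequent completions of a prefix. The log is made
# up; the point is that suggestions mirror the crowd, biases included.

query_log = (["best pizza near me"] * 3 + ["best pizza oven"] * 2 +
             ["best pizza dough"] + ["best phone 2015"])
tally = Counter(query_log)

def suggest(prefix, k=3):
    """Return up to k of the most frequent logged queries starting with prefix."""
    return [q for q, _ in tally.most_common() if q.startswith(prefix)][:k]

print(suggest("best pizza"))
# ['best pizza near me', 'best pizza oven', 'best pizza dough']
```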

    Beyond biased data, such as Google autocompletes, there are other pitfalls, too. Moritz Hardt, a researcher at Google, describes what he calls the sample size disparity. The idea is as follows. If you want to predict, say, whether an individual will click on an ad, most algorithms optimize to reduce error based on the previous activity of users.

    But if a small fraction of users consists of a racial minority that tends to behave in a different way from the majority, the algorithm may decide it’s better to be wrong for all the minority users and lump them in the “error” category in order to be more accurate on the majority. So an algorithm with 85% accuracy on US participants could err on the entire black sub-population and still seem very good.
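    The accuracy arithmetic behind that claim is worth making explicit; the 85/15 population split below is illustrative:

```python
# Sample size disparity in numbers: a classifier that is right on every
# majority user and wrong on every minority user still scores 85% overall
# when the minority is 15% of the population. The split is illustrative.

majority, minority = 850, 150
correct = majority * 1.0 + minority * 0.0   # perfect on majority, useless on minority
accuracy = correct / (majority + minority)
print(accuracy)   # 0.85
```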

    Hardt continues to say it’s hard to determine why data points are erroneously classified. Algorithms rarely come equipped with an explanation for why they behave the way they do, and the easy (and dangerous) course of action is not to ask questions.

    Those smiles might not be so broad if they realized they’d be treated differently by the algorithm. Men image via http://www.shutterstock.com

    Extent of the problem

    While researchers clearly understand the theoretical dangers of algorithmic discrimination, it’s difficult to cleanly measure the scope of the issue in practice. No company or public institution is willing to publicize its data and algorithms for fear of being labeled racist or sexist, or maybe worse, having a great algorithm stolen by a competitor.

    Even when the Chicago Police Department was hit with a Freedom of Information Act request, they did not release their algorithms or heat list, claiming a credible threat to police officers and the people on the list. This makes it difficult for researchers to identify problems and potentially provide solutions.

    Legal hurdles

    Existing discrimination law in the United States isn’t helping. At best, it’s unclear on how it applies to algorithms; at worst, it’s a mess. Solon Barocas, a postdoc at Princeton, and Andrew Selbst, a law clerk for the Third Circuit US Court of Appeals, argued together that US hiring law fails to address claims about discriminatory algorithms in hiring.

    The crux of the argument is called the “business necessity” defense, in which the employer argues that a practice that has a discriminatory effect is justified by being directly related to job performance. According to Barocas and Selbst, if a company algorithmically decides whom to hire, and that algorithm is blatantly racist but even mildly successful at predicting job performance, this would count as business necessity – and not as illegal discrimination. In other words, the law seems to support using biased algorithms.

    What is fairness?

    Maybe an even deeper problem is that nobody has agreed on what it means for an algorithm to be fair in the first place. Algorithms are mathematical objects, and mathematics is far more precise than law. We can’t hope to design fair algorithms without the ability to precisely demonstrate fairness mathematically. A good mathematical definition of fairness will model biased decision-making in any setting and for any subgroup, not just hiring bias or gender bias.

    And fairness seems to have two conflicting aspects when applied to a population versus an individual. For example, say there’s a pool of applicants to fill 10 jobs, and an algorithm decides to hire candidates completely at random. From a population-wide perspective, this is as fair as possible: all races, genders and orientations are equally likely to be selected.

    But from an individual level, it’s as unfair as possible, because an extremely talented individual is unlikely to be chosen despite their qualifications. On the other hand, hiring based only on qualifications reinforces hiring gaps. Nobody knows if these two concepts are inherently at odds, or whether there is a way to define fairness that reasonably captures both. Cynthia Dwork, a Distinguished Scientist at Microsoft Research, and her colleagues have been studying the relationship between the two, but even Dwork admits they have just scratched the surface.
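    The two notions can be contrasted with deterministic toy numbers (scores and group sizes invented):

```python
# Group fairness vs. individual fairness on toy data: group X scores 51-70,
# group Y scores 41-60, and there are 10 jobs for 40 applicants.

pool = [("X", s) for s in range(51, 71)] + [("Y", s) for s in range(41, 61)]
jobs = 10

# A uniform lottery: every individual, in every group, has the same chance.
lottery_chance = jobs / len(pool)                      # 0.25 for everyone

# Pure merit: take the top scores; here that selects group X exclusively.
merit_hires = sorted(pool, key=lambda p: -p[1])[:jobs]
merit_rates = {g: sum(1 for grp, _ in merit_hires if grp == g) / 20
               for g in ("X", "Y")}

print(lottery_chance)   # 0.25
print(merit_rates)      # {'X': 0.5, 'Y': 0.0}
```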


    See the full article here.


    The Conversation US launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

  • richardmitnick 10:05 am on February 8, 2015
    Tags: Computer technology

    From NOVA: “Powerful and Efficient ‘Neuromorphic’ Chip Works Like a Brain” 



    08 Aug 2014
    Allison Eck

    Compared with biological computers—also known as brains—today’s computer chips are simplistic energy hogs. Which is why some computer scientists have been exploring neuromorphic computing, where they try to emulate neurons with silicon. Yesterday, researchers at IBM announced a new neuromorphic processor, dubbed TrueNorth, in an article published in the journal Science.

    At one million “neurons,” TrueNorth is about as complex as a bee’s brain. Experts are saying this little device (about the size of a postage stamp) is the newest and most promising development in “neuromorphic” computing. Despite its 5.4 billion transistors, the entire system consumes only 70 milliwatts of power, a strikingly low amount. The clock speed on the chip is slow, measured in megahertz—today’s computer chips zip along at the gigahertz level—but its vast parallel circuitry allows it to perform 46 billion operations a second per watt of energy.
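    Those two figures pin down the chip's total throughput; a quick check:

```python
# Quick check of the figures above: 70 milliwatts at 46 billion operations
# per second per watt implies the chip's total throughput.

power_w = 0.070              # 70 mW
ops_per_sec_per_watt = 46e9  # 46 billion operations/second/watt

total_ops_per_sec = power_w * ops_per_sec_per_watt
print(total_ops_per_sec)     # ~3.2e9, i.e. roughly 3.2 billion operations per second
```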

    At one million “neurons,” a computer chip dubbed TrueNorth mimics the organization of the brain and is the next step in “neuromorphic” computer programming.

    Here’s John Markoff, writing for The New York Times:

    The chip’s electronic “neurons” are able to signal others when a type of data — light, for example — passes a certain threshold. Working in parallel, the neurons begin to organize the data into patterns suggesting the light is growing brighter, or changing color or shape.

    The processor may thus be able to recognize that a woman in a video is picking up a purse, or control a robot that is reaching into a pocket and pulling out a quarter. Humans are able to recognize these acts without conscious thought, yet today’s computers and robots struggle to interpret them.
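    The thresholding behavior in that quote resembles a textbook leaky integrate-and-fire neuron. The sketch below is that generic model, not TrueNorth's actual neuron circuit:

```python
# Generic leaky integrate-and-fire unit: it accumulates input (e.g. light
# intensity), leaks a little each step, and emits a spike once a threshold
# is crossed. Threshold and leak values are illustrative.

def integrate_and_fire(inputs, threshold=1.0, leak=0.1):
    """Return a spike train: 1 when accumulated input crosses threshold, else 0."""
    potential = 0.0
    spikes = []
    for x in inputs:
        potential = max(0.0, potential - leak) + x   # leak, then integrate
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0                          # reset after firing
        else:
            spikes.append(0)
    return spikes

print(integrate_and_fire([0.4, 0.4, 0.4, 0.0, 1.2]))   # [0, 0, 1, 0, 1]
```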

    Despite the promise, some scientists are skeptical about TrueNorth’s potential, claiming that it’s not that much more impressive than what a cell phone camera can already do. Still others see it as overhyped or just one of many possible neuromorphic strategies.

    Jonathan Webb, writing for BBC News:

    Prof Steve Furber is a computer engineer at the University of Manchester who works on a similarly ambitious brain simulation project called SpiNNaker. That initiative uses a more flexible strategy, where the connections between neurons are not hard-wired.

    He told BBC News that “time will tell” which strategy succeeds in different applications.

    Proponents argue that the chip is endlessly scalable, meaning additional units can be assembled into bigger, more powerful machines. And if its processing potential improves, as traditional silicon chips did in the past, then TrueNorth’s neuromorphic successors could lead to cell phones powered by extremely powerful, energy-efficient processors, the sort that could make today’s smartphone CPUs look like those in early PCs.

    See the full article here.


    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 4:46 pm on November 19, 2014
    Tags: Computer technology

    From LLNL: “Lawrence Livermore tops Graph 500”

    Lawrence Livermore National Laboratory

    Nov. 19, 2014

    Don Johnston

    Lawrence Livermore National Laboratory scientists’ search for new ways to solve large complex national security problems led to the top ranking on Graph 500 and new techniques for solving large graph problems on small high performance computing (HPC) systems, all the way down to a single server.

    “To fulfill our missions in national security and basic science, we explore different ways to solve large, complex problems, most of which include the need to advance data analytics,” said Dona Crawford, associate director for Computation at Lawrence Livermore. “These Graph 500 achievements are a product of that work performed in collaboration with our industry partners. Furthermore, these innovations are likely to benefit the larger scientific computing community.”

    Photo from left: Robin Goldstone, Dona Crawford and Maya Gokhale with the Graph 500 certificate. Missing is Scott Futral.

    Lawrence Livermore’s Sequoia supercomputer, a 20-petaflop IBM Blue Gene/Q system, achieved the world’s best performance on the Graph 500 data analytics benchmark, announced Tuesday at SC14. LLNL and IBM computer scientists attained the No. 1 ranking by completing the largest problem scale ever attempted — scale 41 — with a performance of 23.751 teraTEPS (trillions of traversed edges per second). The team employed a technique developed by IBM.

    LLNL Sequoia supercomputer, a 20-petaflop IBM Blue Gene/Q system

    The Graph 500 offers performance metrics for data intensive computing or ‘big data,’ an area of growing importance to the high performance computing (HPC) community.

    In addition to achieving the top Graph 500 ranking, Lawrence Livermore computer scientists also have demonstrated scalable Graph 500 performance on small clusters and even a single node. To achieve these results, Livermore computational researchers have combined innovative research in graph algorithms and data-intensive runtime systems.

    Robin Goldstone, a member of LLNL’s HPC Advanced Technologies Office, said: “These are really exciting results that highlight our approach of leveraging HPC to solve challenging large-scale data science problems.”

    The results achieved demonstrate, at two different scales, the ability to solve very large graph problems on modest sized computing platforms by integrating flash storage into the memory hierarchy of these systems. Enabling technologies were provided through collaborations with Cray, Intel, Saratoga Speed and Mellanox.

    A scale-40 graph problem, containing 17.6 trillion edges, was solved on 300 nodes of LLNL’s Catalyst cluster. Catalyst, designed in partnership with Intel and Cray, augments a standard HPC architecture with additional capabilities targeted at data intensive computing. Each Catalyst compute node features 128 gigabytes (GB) of dynamic random access memory (DRAM) plus an additional 800 GB of high performance flash storage and uses the LLNL DI-MMAP runtime that integrates flash into the memory hierarchy. With the HavoqGT graph traversal framework, Catalyst was able to store and process the 217 TB scale-40 graph, a feat that is otherwise only achievable on the world’s largest supercomputers. The Catalyst run was No. 4 in size on the list.

    DI-MMAP and HavoqGT also were used to solve a smaller, but equally impressive, scale-37 graph problem on a single server with 50 TB of network-attached flash storage. The server, equipped with four Intel E7-4870 v2 processors and 2 TB of DRAM, was connected to two Altamont XP all-flash arrays from Saratoga Speed Inc. over a high-bandwidth Mellanox FDR InfiniBand interconnect. The other scale-37 entries on the Graph 500 list required clusters of 1,024 nodes or larger to process the 2.2 trillion edges.
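    Both edge counts follow from the Graph 500 convention that a scale-S graph has 2**S vertices and, with the benchmark's default edge factor of 16, 16 * 2**S edges; dividing edges by a reported TEPS rate also gives a rough time for one traversal:

```python
# Graph 500 problem sizes from the scale parameter: 2**S vertices and
# (edge factor 16) 16 * 2**S edges. This reproduces the edge counts
# quoted above.

EDGE_FACTOR = 16

def edges(scale):
    return EDGE_FACTOR * 2**scale

print(f"{edges(40):.3e}")   # 1.759e+13 -> the "17.6 trillion" edges at scale 40
print(f"{edges(37):.3e}")   # 2.199e+12 -> the "2.2 trillion" edges at scale 37

# Sequoia's scale-41 run at 23.751 teraTEPS traverses its edges in roughly:
seconds = edges(41) / 23.751e12
print(round(seconds, 2))    # 1.48
```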

    “Our approach really lowers the barrier of entry for people trying to solve very large graph problems,” said Roger Pearce, a researcher in LLNL’s Center for Applied Scientific Computing (CASC).

    “These results collectively demonstrate LLNL’s preeminence as a full service data intensive HPC shop, from single server to data intensive cluster to world class supercomputer,” said Maya Gokhale, LLNL principal investigator for data-centric computing architectures.

    See the full article here.


    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration.
    ScienceSprings relies on technology from

    MAINGEAR computers



  • richardmitnick 2:59 pm on October 22, 2014
    Tags: Computer technology

    From iSGTW: “Laying the groundwork for data-driven science”

    international science grid this week

    October 22, 2014
    Amber Harmon

    The ability to collect and analyze massive amounts of data is rapidly transforming science, industry, and everyday life — but many of the benefits of big data have yet to surface. Interoperability, tools, and hardware are still evolving to meet the needs of diverse scientific communities.

    Image courtesy istockphoto.com.

    One of the US National Science Foundation’s (NSF’s) goals is to improve the nation’s capacity in data science by investing in the development of infrastructure, building multi-institutional partnerships to increase the number of data scientists, and augmenting the usefulness and ease of using data.

    As part of that effort, the NSF announced $31 million in new funding to support 17 innovative projects under the Data Infrastructure Building Blocks (DIBBs) program. Now in its second year, the 2014 DIBBs awards support research in 22 states and touch on research topics in computer science, information technology, and nearly every field of science supported by the NSF.

    “Developed through extensive community input and vetting, NSF has an ambitious vision and strategy for advancing scientific discovery through data,” says Irene Qualters, division director for Advanced Cyberinfrastructure. “This vision requires a collaborative national data infrastructure that is aligned to research priorities and that is efficient, highly interoperable, and anticipates emerging data policies.”

    Of the 17 awards, two support early implementations of research projects that are more mature; the others support pilot demonstrations. Each is a partnership between researchers in computer science and other science domains.

    One of the two early implementation grants will support a research team led by Geoffrey Fox, a professor of computer science and informatics at Indiana University, US. Fox’s team plans to create middleware and analytics libraries that enable large-scale data science on high-performance computing systems. Fox and his team plan to test their platform with several different applications, including geospatial information systems (GIS), biomedicine, epidemiology, and remote sensing.

    “Our innovative architecture integrates key features of open source cloud computing software with supercomputing technology,” Fox said. “And our outreach involves ‘data analytics as a service’ with training and curricula set up in a Massive Open Online Course or MOOC.”

    Among others, US institutions collaborating on the project include Arizona State University in Phoenix; Emory University in Atlanta, Georgia; and Rutgers University in New Brunswick, New Jersey.

    Ken Koedinger, professor of human computer interaction and psychology at Carnegie Mellon University in Pittsburgh, Pennsylvania, US, leads the other early implementation project. Koedinger’s team concentrates on developing infrastructure that will drive innovation in education.

    The team will develop a distributed data infrastructure, LearnSphere, that will make more educational data accessible to course developers, while also motivating more researchers and companies to share their data with the greater learning sciences community.

    “We’ve seen the power that data has to improve performance in many fields, from medicine to movie recommendations,” Koedinger says. “Educational data holds the same potential to guide the development of courses that enhance learning while also generating even more data to give us a deeper understanding of the learning process.”

    The DIBBs program is part of a coordinated strategy within NSF to advance data-driven cyberinfrastructure. It complements other major efforts like the DataOne project, the Research Data Alliance, and Wrangler, a groundbreaking data analysis and management system for the national open science community.

    See the full article here.

    iSGTW is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, iSGTW is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read iSGTW via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”




  • richardmitnick 11:42 am on September 23, 2014
    Tags: Computer technology

    From NSF: “Protecting our processors” 

    National Science Foundation

    September 23, 2014
    Media Contacts
    Aaron Dubrow, NSF, (703) 292-4489, adubrow@nsf.gov
    Dan Francisco, SRC, (916) 812-8814, dan@integrityglobal.biz

    The National Science Foundation (NSF) and Semiconductor Research Corporation (SRC) today announced nine research awards to 10 universities totaling nearly $4 million under a joint program focused on Secure, Trustworthy, Assured and Resilient Semiconductors and Systems (STARSS).

    Intentional fault injection into microprocessor hardware is an important threat to the embedded computers and microcontrollers that secure the nation’s information technology infrastructure. Virginia Tech’s FAME project develops a methodology to defend micro-controllers against malicious fault injection. The methodology is being validated with test chips which are subjected to an elaborate tamper-sensitivity analysis. The photo shows a test board used to support this analysis by enabling precise control of the operating conditions of the test chip. Credit: Photo courtesy of Jim Stroup, Virginia Tech

    The awards support research at the circuit, architecture and system levels on new strategies, methods and tools to decrease the likelihood of unintended behavior or access; increase resistance and resilience to tampering; and improve the ability to provide authentication throughout the supply chain and in the field.

    “The processes and tools used to design and manufacture semiconductors ensure that the resulting product does what it is supposed to do. However, a key question that must also be addressed is whether the product does anything else, such as behaving in ways that are unintended or malicious,” said Keith Marzullo, division director of NSF’s Computer and Network Systems Division, which leads the NSF/SRC partnership on STARSS. “Through this partnership with SRC, we are pleased to focus on hardware and systems security research addressing this challenge and to provide a unique opportunity to facilitate the transition of this research into practical use.”

    NSF’s involvement in STARSS is part of its Secure and Trustworthy Cyberspace (SaTC) portfolio, which in August announced nearly $75 million in cybersecurity awards.

    The STARSS program expands SRC’s Trustworthy and Secure Semiconductors and Systems (T3S) program, engaging 10 universities across the U.S. Initial T3S industry participants are Freescale, Intel Corporation and Mentor Graphics. NSF is the first federal partner.

    “The goal of SRC’s T3S initiative is to develop cost-effective strategies and tools for the design and manufacture of chips and systems that are reliable, trustworthy and secure,” said Celia Merzbacher, SRC vice president for innovative partnerships. “This includes designing for security and assurance at the outset so as to build in resistance and resilience to attack or tampering. The research enabled by the STARSS program with NSF is a cornerstone of this overall effort.”

    SRC is the world’s leading university-research consortium for semiconductors and related technologies.

    A number of trends are motivating industry and government to support research in hardware and system security. The design and manufacture of semiconductor circuits and systems requires many steps and involves the work of hundreds of engineers–typically distributed across multiple locations and organizations worldwide.

    Moreover, a typical microprocessor is likely to include dozens of design modules from various sources. Designers at each level need assurance that the components being incorporated can be trusted in order for the final system to be trustworthy.

    Today, the design and manufacture of semiconductor circuits and systems includes extensive verification and testing to ensure the final product does what it is intended to do. Similar approaches are needed to provide assurance that the product is authentic and does not allow unwanted functionality, access or control. This includes strategies, tools and methods at all stages, from architecture through manufacture and throughout the lifecycle of the product.

    The first round of awards made through the STARSS program will support nine research projects with diverse areas of focus. They are:

    Combating integrated circuit counterfeiting using secure chip odometers – Carnegie Mellon University researchers will design and implement secure chip odometers to provide integrated circuits (ICs) with both a secure gauge of use/age and an authentication of provenance to detect counterfeit ICs;
    Intellectual Property (IP) Trust – a comprehensive framework for IP integrity validation – Case Western Reserve University and University of Florida researchers will develop a comprehensive and scalable framework for IP trust analysis and verification by evaluating IPs of diverse types and forms, and will develop threat models, a taxonomy and instances of IP trust/integrity issues;
    Design of low-cost, memory-based security primitives and techniques for high-volume products – University of Connecticut researchers will develop metrics and algorithms to make static RAM physical “unclonable” functions substantially more reliable under extreme operating conditions and aging, and will extend this work to dynamic RAM and Flash;
    Trojan detection and diagnosis in mixed-signal systems using on-the-fly learned, pre-computed and side-channel tests – Georgia Institute of Technology researchers will leverage state-of-the-art mixed-signal/analog/radio-frequency techniques to detect Trojans in generic mixed-signal systems;
    Metric and CAD for differential power analysis (DPA) resistance – Iowa State University researchers will investigate statistical metrics and design techniques to measure and defend against DPA attacks;
    Design of secure and anti-counterfeit integrated circuits – University of Minnesota researchers will develop hierarchical approaches for authentication and obfuscation of chips;
    Hardware authentication through high-capacity, physical unclonable function (PUF)-based secret key generation and lattice coding – University of Texas at Austin researchers will develop strong, machine-learning-resistant PUFs capable of producing high-entropy outputs, and a new lattice-based stability algorithm for high-capacity secret key generation;
    Fault-attack awareness using microprocessor enhancements – Virginia Polytechnic Institute and State University researchers will develop a collection of hardware techniques for microprocessor architectures to detect fault-injection attacks and to mitigate fault analysis through an appropriate response in software; and
    Invariant carrying machine for hardware assurance – Northwestern University researchers will develop techniques for improving the reliability and trustworthiness of hardware systems via an Invariant-Carrying Machine approach.
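    Several of the projects above rely on physical unclonable functions, which turn uncontrollable manufacturing variation into a per-chip fingerprint. The toy Python sketch below is not from any of the funded projects and every parameter is invented; it only illustrates the basic idea behind the reliability problem the Connecticut team is tackling: each SRAM cell tends to power up to a value set by its manufacturing skew, noisy cells flip occasionally, and majority voting across repeated power-ups recovers a stable fingerprint.

    ```python
    import random

    def power_up_read(biases, noise=0.1, rng=random):
        # Each SRAM cell settles to its manufacturing-skewed value, but any
        # given power-up flips it with probability `noise` (thermal effects).
        return [b ^ (rng.random() < noise) for b in biases]

    def stable_fingerprint(biases, reads=15, noise=0.1, rng=random):
        # Majority-vote across repeated power-ups to suppress noisy cells.
        votes = [0] * len(biases)
        for _ in range(reads):
            for i, bit in enumerate(power_up_read(biases, noise, rng)):
                votes[i] += bit
        return [int(v > reads // 2) for v in votes]

    rng = random.Random(42)
    biases = [rng.randint(0, 1) for _ in range(64)]  # per-cell manufacturing skew
    fp = stable_fingerprint(biases, rng=rng)
    print(sum(a == b for a, b in zip(fp, biases)), "of", len(biases), "bits stable")
    ```

    With a 10 percent per-read flip probability, fifteen-read majority voting makes a wrong majority on any single cell very unlikely; real designs layer error-correcting helper data on top of voting to reach the reliability needed for key generation.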

    A second joint NSF-SRC STARSS funding opportunity was announced on Aug. 13 as part of the latest NSF SaTC program solicitation. For more information, visit the NSF website.


    See the full article here.

    The National Science Foundation (NSF) is an independent federal agency created by Congress in 1950 “to promote the progress of science; to advance the national health, prosperity, and welfare; to secure the national defense…” NSF is the funding source for approximately 24 percent of all federally supported basic research conducted by America’s colleges and universities. In many fields such as mathematics, computer science and the social sciences, NSF is the major source of federal backing.


    ScienceSprings relies on technology from

    MAINGEAR computers



  • richardmitnick 3:53 pm on August 1, 2014 Permalink | Reply
    Tags: Computer technology

    From Rutgers University: “Astrophysics Professor Creates Computer Models that Help Explain How Galaxies Formed and Evolved” 


    Rachel Somerville (Photo: Miguel Acevedo)

    July 30, 2014
    Carl Blesch

    When most people think of astronomers, they envision scientists who spend time peering at stars and galaxies through telescopes on high mountain tops. Rutgers astronomer Rachel Somerville depends on colleagues who make such observations, but her primary tools for understanding how galaxies formed billions of years ago – and how they continue to evolve today – are large computers.

    The quality and significance of her work were affirmed this week when the Simons Foundation, a private foundation that sponsors research in mathematics and the basic sciences, awarded Somerville $500,000 in research support over five years. She is one of 16 theoretical scientists at American and Canadian universities who were named Simons Investigators for 2014.

    A professor of astrophysics in the Department of Physics and Astronomy, School of Arts and Sciences, Somerville creates computer models or simulations of the physical principles that underlie galaxy formation. These models help astronomers make sense of what they see when the Hubble Space Telescope and other instruments peer into the farthest reaches of space and reveal how galaxies looked as they took shape in a young universe.

    The Simons Foundation cited her contributions to the development of “semianalytic modeling methods that combine computational and pencil-and-paper theory.” According to the group, these contributions have helped scientists understand how the growth of supermassive black holes and the energy they release is linked to a galaxy’s properties and its ability to form stars.
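    To give a rough sense of what “semianalytic” means here (this is a deliberately toy recipe with invented parameters, not Somerville’s actual model): instead of simulating every gas particle, one writes down analytic rates for gas inflow, star formation, and feedback, then integrates them forward in time.

    ```python
    def evolve_galaxy(steps=1000, dt=0.01, inflow=1.0, sf_eff=0.1, feedback=0.5):
        """Integrate toy gas and stellar masses forward in time (arbitrary units)."""
        gas, stars = 0.0, 0.0
        for _ in range(steps):
            sfr = sf_eff * gas                           # star formation tracks gas supply
            gas += (inflow - (1 + feedback) * sfr) * dt  # feedback ejects gas in proportion to SFR
            stars += sfr * dt
        return gas, stars

    # Stronger feedback (e.g. from supernovae or an active black hole)
    # suppresses the final stellar mass for the same inflow history.
    _, stars_weak = evolve_galaxy(feedback=0.5)
    _, stars_strong = evolve_galaxy(feedback=5.0)
    ```

    Real semianalytic models apply rate equations like these on top of halo merger trees drawn from cosmological simulations, which is what lets them cover large galaxy populations far more cheaply than full hydrodynamic runs.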

    Somerville explains that astronomers cannot see any single galaxy evolve through a telescope.

    “We see galaxies at different points in their lifetimes and in different wavelengths,” she said, referring to images acquired with visible light, radio waves and X-rays. Models then help astronomers predict which kinds of early galaxies evolved into disks like our Milky Way while others evolved into the round balls of stars that astronomers call elliptical galaxies.

    As a theoretical astronomer, Somerville values the opportunities she gets to interact with observational astronomers at Rutgers and elsewhere who provide her with new data that make her models more comprehensive and robust.

    “It’s hard to make models that fit all the observations,” she said. “I try to go the extra distance to connect what the models predict with things that we can actually observe.”

    Somerville is a relative newcomer to Rutgers, appointed in October 2011 to the George A. and Margaret M. Downsbrough Chair in Astrophysics.

    In 2013, she received the Dannie Heineman Prize in Astrophysics from the American Astronomical Society and the American Institute of Physics. The prize recognizes exceptional work by mid-career astronomers, citing her for providing fundamental insights into galaxy formation and evolution using modeling, simulations, and observations.

    Before joining Rutgers, Somerville held a joint appointment as associate research professor at Johns Hopkins University and associate astronomer with tenure at the Space Telescope Science Institute (STScI). STScI manages selection, planning and scheduling of scientific activities for the Hubble Space Telescope.

    Before that, she held faculty appointments at the Max Planck Institute for Astronomy in Germany and the University of Michigan, and postdoctoral appointments at the Hebrew University in Jerusalem and Cambridge University in the United Kingdom.

    Somerville’s goal at Rutgers is to build more expertise in galaxy formation theory and help the department’s astronomy group pursue new areas such as the study of extrasolar planets.

    “Rutgers is a great place for galaxy formation theorists because we have opportunities to interact with the excellent observational astronomers here,” she said, noting the university’s involvement with the powerful new Southern African Large Telescope, also referred to as SALT. “I’ve benefitted from supportive colleagues and contact with graduate and undergraduate students. I’m constantly inspired by their enthusiasm.”

    Southern African Large Telescope (SALT)

    Rutgers, The State University of New Jersey, is a leading national research university and the state’s preeminent, comprehensive public institution of higher education. Rutgers is dedicated to teaching that meets the highest standards of excellence; to conducting research that breaks new ground; and to providing services, solutions, and clinical care that help individuals and the local, national, and global communities where they live.

    Founded in 1766, Rutgers teaches across the full educational spectrum: preschool to precollege; undergraduate to graduate; postdoctoral fellowships to residencies; and continuing education for professional and personal advancement.



  • richardmitnick 2:49 pm on February 20, 2014 Permalink | Reply
    Tags: Computer technology

    From Caltech: “A New Laser for a Faster Internet” 


    Jessica Stoller-Conrad

    A new laser developed by a research group at Caltech holds the potential to increase by orders of magnitude the rate of data transmission in the optical-fiber network—the backbone of the Internet.

    The study was published the week of February 10–14 in the online edition of the Proceedings of the National Academy of Sciences. The work is the result of a five-year effort by researchers in the laboratory of Amnon Yariv, Martin and Eileen Summerfield Professor of Applied Physics and professor of electrical engineering; the project was led by postdoctoral scholar Christos Santis (PhD ’13) and graduate student Scott Steger.

    Light is capable of carrying vast amounts of information—approximately 10,000 times more bandwidth than microwaves, the earlier carrier of long-distance communications. But to utilize this potential, the laser light needs to be as spectrally pure—as close to a single frequency—as possible. The purer the tone, the more information it can carry, and for decades researchers have been trying to develop a laser that comes as close as possible to emitting just one frequency.


    Today’s worldwide optical-fiber network is still powered by a laser known as the distributed-feedback semiconductor (S-DFB) laser, developed in the mid-1970s in Yariv’s research group. The S-DFB laser’s unusual longevity in optical communications stemmed from its then-unparalleled spectral purity—the degree to which the light emitted matched a single frequency. The laser’s increased spectral purity directly translated into a larger information bandwidth of the laser beam and longer possible transmission distances in the optical fiber—with the result that more information could be carried farther and faster than ever before.

    At the time, this unprecedented spectral purity was a direct consequence of the incorporation of a nanoscale corrugation within the multilayered structure of the laser. The washboard-like surface acted as a sort of internal filter, discriminating against spurious “noisy” waves contaminating the ideal wave frequency. Although the old S-DFB laser had a successful 40-year run in optical communications—and was cited as the main reason for Yariv receiving the 2010 National Medal of Science—the spectral purity, or coherence, of the laser no longer satisfies the ever-increasing demand for bandwidth.

    “What became the prime motivator for our project was that the present-day laser designs—even our S-DFB laser—have an internal architecture which is unfavorable for high spectral-purity operation. This is because they allow a large and theoretically unavoidable optical noise to comingle with the coherent laser and thus degrade its spectral purity,” says Yariv.

    The old S-DFB laser consists of continuous crystalline layers of materials called III-V semiconductors—typically gallium arsenide and indium phosphide—that convert into light the applied electrical current flowing through the structure. Once generated, the light is stored within the same material. Since III-V semiconductors are also strong light absorbers—and this absorption leads to a degradation of spectral purity—the researchers sought a different solution for the new laser.

    The high-coherence new laser still converts current to light using the III-V material, but in a fundamental departure from the S-DFB laser, it stores the light in a layer of silicon, which does not absorb light. Spatial patterning of this silicon layer—a variant of the corrugated surface of the S-DFB laser—causes the silicon to act as a light concentrator, pulling the newly generated light away from the light-absorbing III-V material and into the near absorption-free silicon.

    This newly achieved high spectral purity—a 20 times narrower range of frequencies than possible with the S-DFB laser—could be especially important for the future of fiber-optic communications. Originally, laser beams in optic fibers carried information in pulses of light; data signals were impressed on the beam by rapidly turning the laser on and off, and the resulting light pulses were carried through the optic fibers. However, to meet the increasing demand for bandwidth, communications system engineers are now adopting a new method of impressing the data on laser beams that no longer requires this “on-off” technique. This method is called coherent phase communication.

    In coherent phase communications, the data resides in small delays in the arrival time of the waves; the delays—a tiny fraction (10⁻¹⁶) of a second in duration—can then accurately relay the information even over thousands of miles. The digital electronic bits carrying video, data, or other information are converted at the laser into these small delays in the otherwise rock-steady light wave. But the number of possible delays, and thus the data-carrying capacity of the channel, is fundamentally limited by the degree of spectral purity of the laser beam. This purity can never be absolute—a limitation of the laws of physics—but with the new laser, Yariv and his team have tried to come as close to absolute purity as is possible.
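    The dependence on spectral purity can be seen in a toy simulation (my illustration with invented numbers, not the paper’s analysis): model the laser’s imperfect coherence as a random walk in phase (Wiener phase noise) and count how often it pushes QPSK symbols, whose data lives entirely in phase, across a decision boundary.

    ```python
    import math
    import random

    def qpsk_symbol_errors(phase_step, nsym=2000, seed=1):
        # QPSK encodes data purely in phase, at 45, 135, 225 and 315 degrees.
        # Laser phase noise is modeled as a random walk whose per-symbol
        # step grows with the laser linewidth.
        rng = random.Random(seed)
        drift, errors = 0.0, 0
        for _ in range(nsym):
            k = rng.randrange(4)                      # data symbol 0..3
            tx_phase = math.pi / 4 + k * math.pi / 2
            drift += rng.gauss(0.0, phase_step)       # accumulated phase diffusion
            rx_phase = (tx_phase + drift) % (2 * math.pi)
            decided = int(rx_phase // (math.pi / 2))  # quadrant decision
            errors += (decided != k)
        return errors

    # A spectrally purer laser corresponds to a smaller phase-diffusion step.
    pure_errors = qpsk_symbol_errors(0.001)
    noisy_errors = qpsk_symbol_errors(0.05)
    ```

    Real coherent receivers track slow phase drift digitally, but the narrower the laser linewidth, the less residual phase noise they must absorb, and the more distinct phase levels (and hence bits per symbol) a channel can support.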

    These findings were published in a paper titled “High-coherence semiconductor lasers based on integral high-Q resonators in hybrid Si/III-V platforms.” In addition to Yariv, Santis, and Steger, other Caltech coauthors include graduate student Yaakov Vilenchik and former graduate student Arseny Vasilyev (PhD ’13). The work was funded by the Army Research Office, the National Science Foundation, and the Defense Advanced Research Projects Agency. The lasers were fabricated at the Kavli Nanoscience Institute at Caltech.

    See the full article here.

    The California Institute of Technology (commonly referred to as Caltech) is a private research university located in Pasadena, California, United States. Caltech has six academic divisions with strong emphases on science and engineering. Its 124-acre (50 ha) primary campus is located approximately 11 mi (18 km) northeast of downtown Los Angeles. “The mission of the California Institute of Technology is to expand human knowledge and benefit society through research integrated with education. We investigate the most challenging, fundamental problems in science and technology in a singularly collegial, interdisciplinary atmosphere, while educating outstanding students to become creative members of society.”


  • richardmitnick 3:13 pm on February 11, 2014 Permalink | Reply
    Tags: Computer technology

    From PPPL: “Solution to plasma-etching puzzle could mean more powerful microchips” 

    February 11, 2014
    John Greenwald

    Research conducted by PPPL in collaboration with the University of Alberta provides a key step toward the development of ever-more powerful computer chips. The researchers discovered the physics behind a mysterious process that gives chipmakers unprecedented control of a recent plasma-based technique for etching transistors on integrated circuits, or chips. This discovery could help to maintain Moore’s Law, which observes that the number of transistors on integrated circuits doubles nearly every two years.

    An integrated-circuit microchip with 456 million transistors
    (Photo by John Greenwald/PPPL Office of Communications)

    The recent technique utilizes electron beams to reach and harden the surface of the masks that are used for printing microchip circuits. More importantly, the beam creates a population of “suprathermal” electrons that produce the plasma chemistry necessary to protect the mask. The energy of these electrons is greater than simple thermal heating could produce — hence the name “suprathermal.” But how the beam electrons transform themselves into this suprathermal population has been a puzzle.

    The PPPL and University of Alberta researchers used a computer simulation to solve the puzzle. The simulation revealed that the electron beam generates intense plasma waves that move through the plasma like ripples in water. And these waves lead to the generation of the crucial suprathermal electrons.
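    The scale of such a beam-plasma (two-stream) instability can be estimated from a standard textbook result rather than from the article itself: for a weak cold beam, the fastest-growing wave grows at a rate set by the electron plasma frequency and the beam-to-plasma density ratio. The densities below are purely illustrative, not values from the PPPL study.

    ```python
    import math

    EPS0 = 8.854e-12  # vacuum permittivity [F/m]
    QE   = 1.602e-19  # electron charge [C]
    ME   = 9.109e-31  # electron mass [kg]

    def plasma_frequency(n_e):
        # Electron plasma frequency omega_pe = sqrt(n e^2 / (eps0 m)) [rad/s]
        return math.sqrt(n_e * QE**2 / (EPS0 * ME))

    def beam_growth_rate(n_beam, n_plasma):
        # Textbook maximum growth rate of the weak cold-beam two-stream
        # instability: gamma ~ (sqrt(3)/2) * (n_b / 2 n_p)^(1/3) * omega_pe
        return (math.sqrt(3) / 2) * (n_beam / (2 * n_plasma))**(1 / 3) \
            * plasma_frequency(n_plasma)

    # Illustrative processing-plasma numbers: 1e16 m^-3 background plasma,
    # beam density one percent of that.
    w_pe = plasma_frequency(1e16)
    gamma = beam_growth_rate(1e14, 1e16)
    ```

    Even a one-percent beam drives waves that e-fold in a few plasma periods, which is why the simulated beam so efficiently pumps energy into the waves that produce the suprathermal electrons.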

    This discovery could bring still-greater control of the plasma-surface interactions and further increase the number of transistors on integrated circuits. Insights from both numerical simulations and experiments related to beam-plasma instabilities thus portend the development of new plasma sources and the increasingly advanced chips that they fabricate.

    See the full article here.

    Princeton Plasma Physics Laboratory is a U.S. Department of Energy national laboratory managed by Princeton University.


  • richardmitnick 3:14 pm on January 15, 2014 Permalink | Reply
    Tags: Computer technology

    From Fermilab: “From the Scientific Computing Division – Intensity Frontier experiments develop insatiable appetite” 

    Fermilab is an enduring source of strength for the US contribution to scientific research worldwide.

    Rob Roser, head of the Scientific Computing Division, wrote this column.

    The neutrino and muon experiments at Fermilab are getting more demanding! They have reached a level of sophistication and precision at which the computing resources presently available at Fermilab are no longer sufficient. The solution: The Scientific Computing Division is now introducing grid and cloud services to satisfy those experiments’ appetite for large amounts of data and computing time.

    An insatiable appetite for computing resources is not new to Fermilab. Both Tevatron experiments, as well as the CMS experiment, require computing resources that far exceed our on-site capacity to successfully perform their science. As a result, the scientific collaborations have been working closely with us over many years to leverage computing capabilities at universities and other laboratories. Now the demand from our Intensity Frontier experiments has reached this level.

    The Scientific Computing Services quadrant under the leadership of Margaret Votava has worked very hard over the past year with various computing organizations to provide experiments with the capability to run their software at remote locations, transfer data and bring the results back to Fermilab.

    See much more in the full article here.


    Fermi National Accelerator Laboratory (Fermilab), located just outside Batavia, Illinois, near Chicago, is a US Department of Energy national laboratory specializing in high-energy particle physics.

