Tagged: Computing

  • richardmitnick 3:48 pm on August 28, 2019
    Tags: "MIT engineers build advanced microprocessor out of carbon nanotubes", Computing

    From MIT News: “MIT engineers build advanced microprocessor out of carbon nanotubes” 


    August 28, 2019
    Rob Matheson

    A close up of a modern microprocessor built from carbon nanotube field-effect transistors. Image: Felice Frankel

    MIT engineers have built a modern microprocessor from carbon nanotube field-effect transistors (pictured), which are seen as faster and greener than silicon transistors. The new approach uses the same fabrication processes used for silicon chips. Image courtesy of the researchers

    New approach harnesses the same fabrication processes used for silicon chips, offers key advance toward next-generation computers.

    After years of tackling numerous design and manufacturing challenges, MIT researchers have built a modern microprocessor from carbon nanotube transistors, which are widely seen as a faster, greener alternative to their traditional silicon counterparts.

    The microprocessor, described today in the journal Nature, can be built using traditional silicon-chip fabrication processes, representing a major step toward making carbon nanotube microprocessors more practical.

    Silicon transistors — critical microprocessor components that switch between 1 and 0 bits to carry out computations — have carried the computer industry for decades. As predicted by Moore’s Law, industry has been able to shrink down and cram more transistors onto chips every couple of years to help carry out increasingly complex computations. But experts now foresee a time when silicon transistors will stop shrinking, and become increasingly inefficient.

    Making carbon nanotube field-effect transistors (CNFETs) has become a major goal for building next-generation computers. Research indicates that CNFETs promise around 10 times the energy efficiency and far greater speeds compared to silicon. But when fabricated at scale, the transistors often come with many defects that degrade performance, so they have remained impractical.

    The MIT researchers have invented new techniques to dramatically limit defects and enable full functional control in fabricating CNFETs, using processes in traditional silicon chip foundries. They demonstrated a 16-bit microprocessor with more than 14,000 CNFETs that performs the same tasks as commercial microprocessors. The paper describes the microprocessor design and includes more than 70 pages detailing the manufacturing methodology.

    The microprocessor is based on the RISC-V open-source chip architecture that has a set of instructions that a microprocessor can execute. The researchers’ microprocessor was able to execute the full set of instructions accurately. It also executed a modified version of the classic “Hello, World!” program, printing out, “Hello, World! I am RV16XNano, made from CNTs.”

    “This is by far the most advanced chip made from any emerging nanotechnology that is promising for high-performance and energy-efficient computing,” says co-author Max M. Shulaker, the Emanuel E Landsman Career Development Assistant Professor of Electrical Engineering and Computer Science (EECS) and a member of the Microsystems Technology Laboratories. “There are limits to silicon. If we want to continue to have gains in computing, carbon nanotubes represent one of the most promising ways to overcome those limits. [The paper] completely re-invents how we build chips with carbon nanotubes.”

    Joining Shulaker on the paper are: first author and postdoc Gage Hills, graduate students Christian Lau, Andrew Wright, Mindy D. Bishop, Tathagata Srimani, Pritpal Kanhaiya, Rebecca Ho, and Aya Amer, all of EECS; Arvind, the Johnson Professor of Computer Science and Engineering and a researcher in the Computer Science and Artificial Intelligence Laboratory; Anantha Chandrakasan, the dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science; and Samuel Fuller, Yosi Stein, and Denis Murphy, all of Analog Devices.

    Fighting the “bane” of CNFETs

    The microprocessor builds on a previous iteration designed by Shulaker and other researchers six years ago that had only 178 CNFETs and ran on a single bit of data. Since then, Shulaker and his MIT colleagues have tackled three specific challenges in producing the devices: material defects, manufacturing defects, and functional issues. Hills did the bulk of the microprocessor design, while Lau handled most of the manufacturing.

    For years, the defects intrinsic to carbon nanotubes have been a “bane of the field,” Shulaker says. Ideally, CNFETs need semiconducting properties to switch their conductivity on and off, corresponding to the bits 1 and 0. But unavoidably, a small portion of carbon nanotubes will be metallic, and will slow or stop the transistor from switching. To be robust to those failures, advanced circuits would need carbon nanotubes at around 99.999999 percent purity, which is virtually impossible to produce today.

    The researchers came up with a technique called DREAM (an acronym for “designing resiliency against metallic CNTs”), which positions metallic CNFETs in a way that they won’t disrupt computing. In doing so, they relaxed that stringent purity requirement by around four orders of magnitude — or 10,000 times — meaning they only need carbon nanotubes at about 99.99 percent purity, which is currently possible.
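The quoted relaxation is easy to sanity-check: the purity percentages translate into tolerable metallic-CNT fractions, and the ratio between them is the "four orders of magnitude" figure. A quick check (the percentages come from the article; the arithmetic is just unit conversion):

```python
# Purity figures from the article, expressed as fractions.
strict_purity = 0.99999999    # 99.999999% -- needed without DREAM
relaxed_purity = 0.9999       # 99.99%     -- sufficient with DREAM

# Tolerable fraction of metallic (defective) nanotubes in each case.
strict_metallic = 1 - strict_purity    # ~1e-8
relaxed_metallic = 1 - relaxed_purity  # ~1e-4

# DREAM relaxes the purity requirement by about four orders of magnitude.
relaxation = relaxed_metallic / strict_metallic
print(round(relaxation))  # 10000
```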

    Designing circuits basically requires a library of different logic gates attached to transistors that can be combined to, say, create adders and multipliers — like combining letters in the alphabet to create words. The researchers realized that the metallic carbon nanotubes impacted different pairings of these gates differently. A single metallic carbon nanotube in gate A, for instance, may break the connection between A and B. But several metallic carbon nanotubes in gate B may not impact any of its connections.

    In chip design, there are many ways to implement code onto a circuit. The researchers ran simulations to find which gate combinations would, and which would not, be robust to metallic carbon nanotubes. They then customized a chip-design program to automatically learn the combinations least likely to be affected by metallic carbon nanotubes. When designing a new chip, the program uses only the robust combinations and ignores the vulnerable ones.
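The selection idea can be sketched as a small simulation. Everything below is illustrative: the gate names, per-pairing tolerances, and failure model are hypothetical stand-ins for the circuit-level analysis in the paper, which is far richer.

```python
import random

# Hypothetical tolerances: how many metallic nanotubes the connection
# between a pair of logic gates can absorb before it breaks.
PAIRING_TOLERANCE = {("NAND", "NOR"): 0, ("NOR", "INV"): 2, ("INV", "NAND"): 2}

METALLIC_FRACTION = 1e-4   # ~99.99 percent semiconducting purity
TUBES_PER_GATE = 20        # made-up count for illustration

def pairing_fails(tolerance, rng):
    """One random fabrication: does this gate pairing end up broken?"""
    metallic = sum(rng.random() < METALLIC_FRACTION
                   for _ in range(TUBES_PER_GATE))
    return metallic > tolerance

def failure_rate(tolerance, trials=20_000, seed=0):
    rng = random.Random(seed)
    failures = sum(pairing_fails(tolerance, rng) for _ in range(trials))
    return failures / trials

# Keep only the pairings that essentially never fail; a design tool would
# then restrict itself to this robust subset when laying out a chip.
robust = [pair for pair, tol in PAIRING_TOLERANCE.items()
          if failure_rate(tol) < 1e-3]
```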

    “The ‘DREAM’ pun is very much intended, because it’s the dream solution,” Shulaker says. “This allows us to buy carbon nanotubes off the shelf, drop them onto a wafer, and just build our circuit like normal, without doing anything else special.”

    Exfoliating and tuning

    CNFET fabrication starts with depositing carbon nanotubes in a solution onto a wafer with predesigned transistor architectures. However, some carbon nanotubes inevitably stick randomly together to form big bundles — like strands of spaghetti formed into little balls — that form big particle contamination on the chip.

    To cleanse that contamination, the researchers created RINSE (for “removal of incubated nanotubes through selective exfoliation”). The wafer gets pretreated with an agent that promotes carbon nanotube adhesion. Then, the wafer is coated with a certain polymer and dipped in a special solvent. That washes away the polymer, which only carries away the big bundles, while the single carbon nanotubes remain stuck to the wafer. The technique leads to about a 250-times reduction in particle density on the chip compared to similar methods.

    Lastly, the researchers tackled common functional issues with CNFETs. Binary computing requires two types of transistors: “N” types, which turn on with a 1 bit and off with a 0 bit, and “P” types, which do the opposite. Traditionally, making the two types out of carbon nanotubes has been challenging, often yielding transistors that vary in performance. To solve this, the researchers developed a technique called MIXED (for “metal interface engineering crossed with electrostatic doping”), which precisely tunes each transistor’s function and performance.

    In this technique, they attach certain metals to each transistor — platinum or titanium — which allows them to fix that transistor as P or N. Then, they coat the CNFETs in an oxide compound through atomic-layer deposition, which allows them to tune the transistors’ characteristics for specific applications. Servers, for instance, often require transistors that switch very fast but consume more energy and power. Wearables and medical implants, on the other hand, may use slower, low-power transistors.
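The complementary N/P behavior described above is what makes binary logic possible: wired as a pair, the two types form an inverter, the simplest logic gate. A logic-level sketch (no device physics, purely illustrative):

```python
def n_type_conducts(gate_bit):
    """'N' type: turns on when the gate sees a 1 bit."""
    return gate_bit == 1

def p_type_conducts(gate_bit):
    """'P' type: turns on when the gate sees a 0 bit."""
    return gate_bit == 0

def inverter(input_bit):
    # The P device pulls the output up to 1, the N device pulls it down
    # to 0; exactly one of the two conducts for any input bit.
    return 1 if p_type_conducts(input_bit) else 0

print([inverter(b) for b in (0, 1)])  # [1, 0]
```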

    The main goal is to get the chips out into the real world. To that end, the researchers have now started implementing their manufacturing techniques in a silicon chip foundry through a program run by the Defense Advanced Research Projects Agency (DARPA), which supported the research. Although no one can say when chips made entirely from carbon nanotubes will hit the shelves, Shulaker says it could be fewer than five years. “We think it’s no longer a question of if, but when,” he says.

    The work was also supported by Analog Devices, the National Science Foundation, and the Air Force Research Laboratory.

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition


    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.


     
  • richardmitnick 9:39 am on August 8, 2019
    Tags: "Stanford researchers design a light-trapping, color-converting crystal", Computing, Photonic crystal cavities

    From Stanford University: “Stanford researchers design a light-trapping, color-converting crystal” 


    August 7, 2019

    Taylor Kubota
    Stanford News Service
    (650) 724-7707
    tkubota@stanford.edu

    Researchers propose a microscopic structure that changes laser light from infrared to green and traps both wavelengths of light to improve efficiency of that transformation. This type of structure could help advance telecommunication and computing technologies. (Image credit: Getty Images)

    Five years ago, Stanford postdoctoral scholar Momchil Minkov encountered a puzzle that he was impatient to solve. At the heart of his field of nonlinear optics are devices that change light from one color to another – a process important for many technologies within telecommunications, computing and laser-based equipment and science. But Minkov wanted a device that also traps both colors of light, a complex feat that could vastly improve the efficiency of this light-changing process – and he wanted it to be microscopic.

    “I was first exposed to this problem by Dario Gerace from the University of Pavia in Italy, while I was doing my PhD in Switzerland. I tried to work on it then but it’s very hard,” Minkov said. “It has been in the back of my mind ever since. Occasionally, I would mention it to someone in my field and they would say it was near-impossible.”

    In order to prove the near-impossible was still possible, Minkov and Shanhui Fan, professor of electrical engineering at Stanford, developed guidelines for creating a crystal structure with an unconventional two-part form. The details of their solution were published Aug. 6 in Optica, with Gerace as co-author. Now, the team is beginning to build its theorized structure for experimental testing.

    An illustration of the researchers’ design. The holes in this microscopic slab structure are arranged and resized in order to control and hold two wavelengths of light. The scale bar on this image is 2 micrometers, or two millionths of a meter. (Image credit: Momchil Minkov)

    A recipe for confining light

    Anyone who’s encountered a green laser pointer has seen nonlinear optics in action. Inside that laser pointer, a crystal structure converts laser light from infrared to green. (Green laser light is easier for people to see but components to make green-only lasers are less common.) This research aims to enact a similar wavelength-halving conversion but in a much smaller space, which could lead to a large improvement in energy efficiency due to complex interactions between the light beams.
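The "wavelength-halving conversion" in the green-laser example is second-harmonic generation: two photons at the pump frequency combine into one photon at twice the frequency, i.e. half the wavelength. A quick check, assuming the common 1064 nm infrared line used inside green laser pointers (that specific wavelength is an assumption, not stated in the article):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def second_harmonic_nm(pump_nm):
    """Frequency doubling f -> 2f implies wavelength halving."""
    pump_freq_hz = C / (pump_nm * 1e-9)   # pump frequency
    return C / (2.0 * pump_freq_hz) * 1e9  # doubled frequency -> nm

print(second_harmonic_nm(1064.0))  # ~532 nm: green light
```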

    The team’s goal was to force the coexistence of the two laser beams using a photonic crystal cavity, which can focus light in a microscopic volume. However, existing photonic crystal cavities usually only confine one wavelength of light and their structures are highly customized to accommodate that one wavelength.

    So instead of making one uniform structure to do it all, these researchers devised a structure that combines two different ways to confine light, one to hold onto the infrared light and another to hold the green, all still contained within one tiny crystal.

    “Having different methods for containing each light turned out to be easier than using one mechanism for both frequencies and, in some sense, it’s completely different from what people thought they needed to do in order to accomplish this feat,” Fan said.

    After ironing out the details of their two-part structure, the researchers produced a list of four conditions, which should guide colleagues in building a photonic crystal cavity capable of holding two very different wavelengths of light. Their result reads more like a recipe than a schematic because light-manipulating structures are useful for so many tasks and technologies that designs for them have to be flexible.

    “We have a general recipe that says, ‘Tell me what your material is and I’ll tell you the rules you need to follow to get a photonic crystal cavity that’s pretty small and confines light at both frequencies,’” Minkov said.

    Computers and curiosity

    If telecommunications channels were a highway, flipping between different wavelengths of light would equal a quick lane change to avoid a slowdown – and one structure that holds multiple channels means a faster flip. Nonlinear optics is also important for quantum computers because calculations in these computers rely on the creation of entangled particles, which can be formed through the opposite process that occurs in the Fan lab crystal – creating twinned red particles of light from one green particle of light.

    Envisioning possible applications of their work helps these researchers choose what they’ll study. But they are also motivated by their desire for a good challenge and the intricate strangeness of their science.

    “Basically, we work with a slab structure with holes and by arranging these holes, we can control and hold light,” Fan said. “We move and resize these little holes by billionths of a meter and that marks the difference between success and failure. It’s very strange and endlessly fascinating.”

    These researchers will soon be facing off with these intricacies in the lab, as they are beginning to build their photonic crystal cavity for experimental testing.

    See the full article here.


    Stanford University

    Leland and Jane Stanford founded the University to “promote the public welfare by exercising an influence on behalf of humanity and civilization.” Stanford opened its doors in 1891, and more than a century later, it remains dedicated to finding solutions to the great challenges of the day and to preparing our students for leadership in today’s complex world. Stanford is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto. Since 1952, more than 54 Stanford faculty, staff, and alumni have won the Nobel Prize, including 19 current faculty members.


     
  • richardmitnick 6:52 am on July 22, 2019
    Tags: Computing, QMCPACK

    From insideHPC: “Supercomputing Complex Materials with QMCPACK” 


    July 21, 2019

    In this special guest feature, Scott Gibson from the Exascale Computing Project writes that computer simulations based on quantum mechanics are getting a boost through QMCPACK.


    The theory of quantum mechanics underlies explorations of the behavior of matter and energy in the atomic and subatomic realms. Computer simulations based on quantum mechanics are consequently essential in designing, optimizing, and understanding the properties of materials that have, for example, unusual magnetic or electrical properties. Such materials would have potential for use in highly energy-efficient electrical systems and faster, more capable electronic devices that could vastly improve our quality of life.

    Quantum mechanics-based simulation methods render robust data by describing materials in a truly first-principles manner. This means they calculate electronic structure in the most basic terms and thus allow speculative study of materials systems without reference to experiment, unless researchers choose to add parameters. The quantum Monte Carlo (QMC) family of these approaches can deliver highly accurate calculations of complex materials without biasing the results toward any particular property of interest.
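To give a flavor of how QMC methods work, here is a toy sketch of the simplest family member, variational Monte Carlo: sample configurations from a trial wavefunction with the Metropolis algorithm and average a "local energy". The example is a 1D harmonic oscillator (hbar = m = omega = 1) with trial function psi(x) = exp(-a*x^2/2); it is purely illustrative and bears no relation to QMCPACK's production algorithms.

```python
import math
import random

def local_energy(x, a):
    # E_L(x) = -psi''/(2 psi) + x^2/2 for psi(x) = exp(-a x^2 / 2).
    return a / 2 + x * x * (1 - a * a) / 2

def vmc_energy(a, steps=100_000, step_size=1.0, seed=1):
    """Metropolis sampling of |psi|^2, averaging the local energy."""
    rng = random.Random(seed)
    x, total = 0.0, 0.0
    for _ in range(steps):
        trial = x + rng.uniform(-step_size, step_size)
        # Accept with probability |psi(trial)|^2 / |psi(x)|^2.
        if rng.random() < math.exp(-a * (trial * trial - x * x)):
            x = trial
        total += local_energy(x, a)
    return total / steps

# With a = 1 the trial wavefunction is exact: the local energy is 0.5
# everywhere, so the estimate has zero variance -- the hallmark of a
# perfect trial wavefunction. Any other a gives a higher energy.
print(vmc_energy(1.0))  # 0.5
```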

    An effort within the US Department of Energy’s Exascale Computing Project (ECP) is developing QMC software named QMCPACK to find, predict, and control materials and their properties at the quantum level. The ultimate aim is to achieve unprecedented and systematically improvable accuracy by leveraging the memory and power capabilities of the forthcoming exascale computing systems.

    Greater Accuracy, Versatility, and Performance

    One of the primary objectives of the QMCPACK project is to reduce errors in calculations so that predictions concerning complex materials can be made with greater assurance.

    “We would like to be able to tell our colleagues in experimentation that we have confidence that a certain short list of materials is going to have all the properties that we think they will,” said Paul Kent of Oak Ridge National Laboratory and principal investigator of QMCPACK. “Many ways of cross-checking calculations with experimental data exist today, but we’d like to go further and make predictions where there aren’t experiments yet, such as a new material or where taking a measurement is difficult—for example, in conditions of high pressure or under an intense magnetic field.”

    The methods the QMCPACK team is developing are fully atomistic and material specific. This refers to having the capability to address all of the atoms in the material—whether it be silver, carbon, cerium, or oxygen, for example—compared with more simplified lattice model calculations where the full details of the atoms are not included.

    The team’s current activities are restricted to simpler, bulk-like materials; but exascale computing is expected to greatly widen the range of possibilities.

    “At exascale not only the increase in compute power but also important changes in the memory on the machines will enable us to explore material defects and interfaces, more-complex materials, and many different elements,” Kent said.

    With the software engineering, design, and computational aspects of delivering the science as the main focus, the project plans to improve QMCPACK’s performance by at least 50x. Based on experimentation using a mini-app version of the software, and incorporating new algorithms, the team achieved a 37x improvement on the pre-exascale Summit supercomputer versus the Titan system.

    ORNL IBM AC922 SUMMIT supercomputer, No.1 on the TOP500. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    ORNL Cray XK7 Titan Supercomputer, once the fastest in the world, to be decommissioned

    One Robust Code

    “We’re taking the lessons we’ve learned from developing the mini app and this proof of concept, the 37x, to update the design of the main application to support this high efficiency, high performance for a range of problem sizes,” Kent said. “What’s crucial for us is that we can move to a single version of the code with no internal forks, to have one source supporting all architectures. We will use all the lessons we’ve learned with experimentation to create one version where everything will work everywhere—then it’s just a matter of how fast. Moreover, in the future we will be able to optimize. But at least we won’t have a gap in the feature matrix, and the student who is running QMCPACK will always have all features work.”

    As an open-source and openly developed product, QMCPACK is improving via the help of many contributors. The QMCPACK team recently published the master citation paper for the software’s code; the publication has 48 authors with a variety of affiliations.

    “Developing these large science codes is an enormous effort,” Kent said. “QMCPACK has contributors from ECP researchers, but it also has many past developers. For example, a great deal of development was done for the Knights Landing processor on the Theta supercomputer with Intel. This doubled the performance on all CPU-like architectures.”

    ANL ALCF Theta Cray XC40 supercomputer

    A Synergistic Team

    The QMCPACK project’s collaborative team draws talent from Argonne, Lawrence Livermore, Oak Ridge, and Sandia National Laboratories. It also benefits from collaborations with Intel and NVIDIA.


    The composition of the staff is nearly equally divided between scientific domain specialists and people centered on the software engineering and computer science aspects.

    “Bringing all of this expertise together through ECP is what has allowed us to perform the design study, reach the 37x, and improve the architecture,” Kent said. “All the materials we work with have to be doped, which means incorporating additional elements in them. We can’t run those simulations on Titan but are beginning to do so on Summit with improvements we have made as part of our ECP project. We are really looking forward to the opportunities that will open up when the exascale systems are available.”

    See the full article here.

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     
  • richardmitnick 9:33 am on June 28, 2019
    Tags: "Jony Ive Is Leaving Apple", Computing

    From WIRED: “Jony Ive Is Leaving Apple” 


    Jony Ive. iMore

    The man who designed the iMac, the iPod, the iPhone—and even the Apple Store—is leaving Apple. Jony Ive announced in an interview with the Financial Times on Thursday that he was departing the company after more than two decades to start LoveFrom, a creative agency that will count Apple as its first client. The transition will start later this year, and LoveFrom will formally launch in 2020.

    Ive has been an indispensable leader at Apple and the chief guide of the company’s aesthetic vision. His role took on even greater importance after Apple cofounder Steve Jobs died of pancreatic cancer in 2011. Apple will not immediately appoint a new chief design officer. Instead, Alan Dye, who leads Apple’s user interface team, and Evans Hankey, head of industrial design, will report directly to Apple’s chief operating officer, Jeff Williams, according to the Financial Times.

    “This just seems like a natural and gentle time to make this change,” Ive said in the interview, somewhat perplexingly. Apple’s business is currently weathering many changes: slumping iPhone sales, an increasingly tense trade war between President Trump’s administration and China, the April departure of retail chief Angela Ahrendts. The company is also in the midst of a pivot away from hardware devices to software services.

    It’s not clear exactly what LoveFrom will work on, and Ive was relatively vague about the nature of the firm, though he said he will continue to work on technology and health care. Another Apple design employee, Marc Newson, is also leaving to join the new venture. This isn’t the first time the pair have worked on a non-Apple project together. In 2013, they designed a custom Leica camera that was sold at auction to benefit the Global Fund to Fight AIDS, Tuberculosis and Malaria.

    During an interview with Anna Wintour last November at the WIRED25 summit, Ive discussed the creative process and how he sees his responsibility as a mentor at Apple. “I still think it’s so remarkable that ideas that can become so powerful and so literally world-changing,” he said. “But those same ideas at the beginning are shockingly fragile. I think the creative process doesn’t naturally or easily sit in a large group of people.”

    Ive left the London design studio Tangerine and moved to California to join Apple in 1992. He became senior vice president of industrial design in 1997, after Jobs returned to the company. The next year, the iMac G3 was released, which would prove to be Ive’s first major hit, helping to turn around Apple’s then struggling business. He later helped oversee the design of Apple’s new headquarters, Apple Park.

    “It’s frustrating to talk about this building in terms of absurd, large numbers,” Ive told WIRED’s Steven Levy when the campus opened in 2017. “It makes for an impressive statistic, but you don’t live in an impressive statistic. While it is a technical marvel to make glass at this scale, that’s not the achievement. The achievement is to make a building where so many people can connect and collaborate and walk and talk.”

    See the full article here.

     
  • richardmitnick 10:40 am on May 11, 2019
    Tags: Computing

    From Stanford University: “Stanford researchers’ artificial synapse is fast, efficient and durable” 


    April 25, 2019
    Taylor Kubota

    An array of artificial synapses designed by researchers at Stanford and Sandia National Laboratories can mimic how the brain processes and stores information. (Image credit: Armantas Melianas and Scott Keene)

    The brain’s capacity for simultaneously learning and memorizing large amounts of information while requiring little energy has inspired an entire field to pursue brain-like – or neuromorphic – computers. Researchers at Stanford University and Sandia National Laboratories previously developed one portion of such a computer [Nature Materials]: a device that acts as an artificial synapse, mimicking the way neurons communicate in the brain.

    In a paper published online by the journal Science on April 25, the team reports that a prototype array of nine of these devices performed even better than expected in processing speed, energy efficiency, reproducibility and durability.

    Looking forward, the team members want to combine their artificial synapse with traditional electronics, which they hope could be a step toward supporting artificially intelligent learning on small devices.

    “If you have a memory system that can learn with the energy efficiency and speed that we’ve presented, then you can put that in a smartphone or laptop,” said Scott Keene, co-author of the paper and a graduate student in the lab of Alberto Salleo, professor of materials science and engineering at Stanford who is co-senior author. “That would open up access to the ability to train our own networks and solve problems locally on our own devices without relying on data transfer to do so.”

    A bad battery, a good synapse

    The team’s artificial synapse is similar to a battery, modified so that the researchers can dial up or down the flow of electricity between the two terminals. That flow of electricity emulates how learning is wired in the brain. This is an especially efficient design because data processing and memory storage happen in one action, rather than a more traditional computer system where the data is processed first and then later moved to storage.

    Seeing how these devices perform in an array is a crucial step because it allows the researchers to program several artificial synapses simultaneously. This is far less time consuming than having to program each synapse one-by-one and is comparable to how the brain actually works.

    In previous tests of an earlier version of this device, the researchers found their processing and memory action requires about one-tenth as much energy as a state-of-the-art computing system needs in order to carry out specific tasks. Still, the researchers worried that the sum of all these devices working together in larger arrays could risk drawing too much power. So, they retooled each device to conduct less electrical current – making them much worse batteries but making the array even more energy efficient.

    The 3-by-3 array relied on a second type of device – developed by Joshua Yang at the University of Massachusetts, Amherst, who is co-author of the paper – that acts as a switch for programming synapses within the array.

    “Wiring everything up took a lot of troubleshooting and a lot of wires. We had to ensure all of the array components were working in concert,” said Armantas Melianas, a postdoctoral scholar in the Salleo lab. “But when we saw everything light up, it was like a Christmas tree. That was the most exciting moment.”

    During testing, the array outperformed the researchers’ expectations. It performed with such speed that the team predicts the next version of these devices will need to be tested with special high-speed electronics. After measuring high energy efficiency in the 3-by-3 array, the researchers ran computer simulations of a larger 1024-by-1024 synapse array and estimated that it could be powered by the same batteries currently used in smartphones or small drones. The researchers were also able to switch the devices over a billion times – another testament to their speed – without seeing any degradation in their behavior.
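The in-memory character of such arrays (processing and storage in one action, as described earlier) is commonly modeled as an analog vector-matrix multiply: each synapse stores a conductance, input voltages drive the rows, and Ohm’s and Kirchhoff’s laws sum the column currents. A schematic sketch at the 1024-by-1024 scale simulated above; the conductance and voltage ranges are made-up illustrative numbers, not the paper’s device values:

```python
import numpy as np

rng = np.random.default_rng(0)

rows, cols = 1024, 1024
# Each synapse stores an analog conductance (its "weight"), in siemens.
G = rng.uniform(1e-6, 1e-5, size=(rows, cols))
# Input voltages applied along the rows, in volts.
V = rng.uniform(0.0, 0.5, size=rows)

# Ohm's law per synapse plus Kirchhoff's current law per column yields
# every output current in one analog step: I[j] = sum_i V[i] * G[i, j].
I = V @ G

print(I.shape)  # one current per column: (1024,)
```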

    “It turns out that polymer devices, if you treat them well, can be as resilient as traditional counterparts made of silicon. That was maybe the most surprising aspect from my point of view,” Salleo said. “For me, it changes how I think about these polymer devices in terms of reliability and how we might be able to use them.”

    Room for creativity

The researchers haven’t yet submitted their array to tests that determine how well it learns, but that is something they plan to study. The team also wants to see how their device weathers different conditions – such as high temperatures – and to work on integrating it with electronics. There are also many fundamental questions left to answer that could help the researchers understand exactly why their device performs so well.

    “We hope that more people will start working on this type of device because there are not many groups focusing on this particular architecture, but we think it’s very promising,” Melianas said. “There’s still a lot of room for improvement and creativity. We only barely touched the surface.”

    To read all stories about Stanford science, subscribe to the biweekly Stanford Science Digest.

    This work was funded by Sandia National Laboratories, the U.S. Department of Energy, the National Science Foundation, the Semiconductor Research Corporation, the Stanford Graduate Fellowship fund, and the Knut and Alice Wallenberg Foundation for Postdoctoral Research at Stanford University.

See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Stanford University campus. No image credit

    Stanford University

Leland and Jane Stanford founded the University to “promote the public welfare by exercising an influence on behalf of humanity and civilization.” Stanford opened its doors in 1891, and more than a century later, it remains dedicated to finding solutions to the great challenges of the day and to preparing our students for leadership in today’s complex world. Stanford is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto. Since 1952, more than 54 Stanford faculty, staff, and alumni have won the Nobel Prize, including 19 current faculty members.

    Stanford University Seal

     
  • richardmitnick 7:00 pm on March 21, 2019 Permalink | Reply
    Tags: "A Swiss cheese-like material’ that can solve equations", , Computing, , ,   

From University of Pennsylvania: “A ‘Swiss cheese-like material’ that can solve equations” 


    From University of Pennsylvania

    March 21, 2019

    Credits

    Evan Lerner, Gwyneth K. Shaw Media Contacts
    Eric Sucar Photographer

    Engineering professor Nader Engheta and his team have demonstrated a metamaterial device that can function as an analog computer, validating an earlier theory about ‘photonic calculus.’

    Nader Engheta (center), the H. Nedwill Ramsey Professor in the Department of Electrical and Systems Engineering, and lab members Brian Edwards and Nasim Mohammadi Estakhri conducted the pathbreaking work in Engheta’s lab.

    The field of metamaterials involves designing complicated, composite structures, some of which can manipulate electromagnetic waves in ways that are impossible in naturally occurring materials.

    For Nader Engheta of the School of Engineering and Applied Science, one of the loftier goals in this field has been to design metamaterials that can solve equations. This “photonic calculus” would work by encoding parameters into the properties of an incoming electromagnetic wave and sending it through a metamaterial device; once inside, the device’s unique structure would manipulate the wave in such a way that it would exit encoded with the solution to a pre-set integral equation for that arbitrary input.

    In a paper published in Science, Engheta and his team demonstrated such a device for the first time.

    Their proof-of-concept experiment was conducted with microwaves, as the long wavelengths allowed for an easier-to-construct macro-scale device. The principles behind their findings, however, can be scaled down to light waves, eventually fitting onto a microchip.

    Such metamaterial devices would function as analog computers that operate with light, rather than electricity. They could solve integral equations—ubiquitous problems in every branch of science and engineering—orders of magnitude faster than their digital counterparts, while using less power.

The demonstration device is two feet square, made of a milled type of polystyrene plastic.

    Engheta, the H. Nedwill Ramsey Professor in the Department of Electrical and Systems Engineering, conducted the study along with lab members Nasim Mohammadi Estakhri and Brian Edwards.

    This approach has its roots in analog computing. The first analog computers solved mathematical problems using physical elements, such as slide-rules and sets of gears, that were manipulated in precise ways to arrive at a solution. In the mid-20th century, electronic analog computers replaced the mechanical ones, with series of resistors, capacitors, inductors, and amplifiers replacing their predecessors’ clockworks.

    Such computers were state-of-the-art, as they could solve large tables of information all at once, but were limited to the class of problems they were pre-designed to handle. The advent of reconfigurable, programmable digital computers, starting with ENIAC, constructed at Penn in 1945, made them obsolete.

As the field of metamaterials developed, Engheta and his team devised a way of bringing the concepts behind analog computing into the 21st century. Publishing a theoretical outline for “photonic calculus” in Science in 2014, they showed how a carefully designed metamaterial could perform mathematical operations on the profile of a wave passing through it, such as finding its first or second derivative.
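That 2014 proposal, taking the derivative of a wave profile, has a direct numerical analogue: in Fourier space, differentiation is multiplication by i·k. A minimal digital stand-in for the optical operation (not a model of the device itself):

```python
import numpy as np

# A metamaterial that differentiates a wave profile is, mathematically, a
# linear operator acting on the field. This sketch applies the same operator
# numerically via the Fourier transform: d/dx <-> multiplication by i*k.

def spectral_derivative(profile: np.ndarray, dx: float) -> np.ndarray:
    """First derivative of a periodic wave profile, computed spectrally."""
    n = profile.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)   # angular spatial frequencies
    return np.real(np.fft.ifft(1j * k * np.fft.fft(profile)))

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
wave = np.sin(x)
d_wave = spectral_derivative(wave, x[1] - x[0])   # approximates cos(x)
```

The same pattern extends to second derivatives (multiply by -k²) or any linear operation encoded in the metamaterial's response.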

    Now, Engheta and his team have performed physical experiments validating this theory and expanding it to solve equations.

    “Our device contains a block of dielectric material that has a very specific distribution of air holes,” Engheta says. “Our team likes to call it ‘Swiss cheese.’”

    The Swiss cheese material is a kind of polystyrene plastic; its intricate shape is carved by a CNC milling machine.

    “Controlling the interactions of electromagnetic waves with this Swiss cheese metastructure is the key to solving the equation,” Estakhri says. “Once the system is properly assembled, what you get out of the system is the solution to an integral equation.”

    “This structure,” Edwards adds, “was calculated through a computational process known as ‘inverse design,’ which can be used to find shapes that no human would think of trying.”


    The pattern of hollow regions in the Swiss cheese is predetermined to solve an integral equation with a given “kernel,” the part of the equation that describes the relationship between two variables. This general class of such integral equations, known as “Fredholm integral equations of the second kind,” is a common way of describing different physical phenomena in a variety of scientific fields. The pre-set equation can be solved for any arbitrary inputs, which are represented by the phases and magnitudes of the waves that are introduced into the device.

    “For example,” Engheta says, “if you were trying to plan the acoustics of a concert hall, you could write an integral equation where the inputs represent the sources of the sound, such as the position of speakers or instruments, as well as how loudly they play. Other parts of the equation would represent the geometry of the room and the material its walls are made of. Solving that equation would give you the volume at different points in the concert hall.”

    In the integral equation that describes the relationship between sound sources, room shape and the volume at specific locations, the features of the room — the shape and material properties of its walls — can be represented by the equation’s kernel. This is the part the Penn Engineering researchers are able to represent in a physical way, through the precise arrangement of air holes in their metamaterial Swiss cheese.

    “Our system allows you to change the inputs that represent the locations of the sound sources by changing the properties of the wave you send into the system,” Engheta says, “but if you want to change the shape of the room, for example, you will have to make a new kernel.”

    The researchers conducted their experiment with microwaves; as such, their device was roughly two square feet, or about eight wavelengths wide and four wavelengths long.

    “Even at this proof-of-concept stage, our device is extremely fast compared to electronics,” Engheta says. “With microwaves, our analysis has shown that a solution can be obtained in hundreds of nanoseconds, and once we take it to optics the speed would be in picoseconds.”

    Scaling down the concept to the scale where it could operate on light waves and be placed on a microchip would not only make them more practical for computing, it would open the doors to other technologies that would enable them to be more like the multipurpose digital computers that first made analog computing obsolete decades ago.

    “We could use the technology behind rewritable CDs to make new Swiss cheese patterns as they’re needed,” Engheta says. “Some day you may be able to print your own reconfigurable analog computer at home!”

    Nader Engheta is the H. Nedwill Ramsey Professor in the Department of Electrical and Systems Engineering at the University of Pennsylvania’s School of Engineering and Applied Science.

    The research was supported by the Basic Research Office of the Assistant Secretary of Defense for Research and Engineering through its Vannevar Bush Faculty Fellowship program and by the Office of Naval Research through Grant N00014-16-1-2029.

See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    U Penn campus

    Academic life at Penn is unparalleled, with 100 countries and every U.S. state represented in one of the Ivy League’s most diverse student bodies. Consistently ranked among the top 10 universities in the country, Penn enrolls 10,000 undergraduate students and welcomes an additional 10,000 students to our world-renowned graduate and professional schools.

    Penn’s award-winning educators and scholars encourage students to pursue inquiry and discovery, follow their passions, and address the world’s most challenging problems through an interdisciplinary approach.

     
  • richardmitnick 11:11 pm on January 27, 2019 Permalink | Reply
    Tags: , ARM, Computing, , ,   

    From insideHPC: “Choice Comes to HPC: A Year in Processor Development” 

    From insideHPC

    January 27, 2019

    In this special guest feature, Robert Roe from Scientific Computing World writes that a whole new set of processor choices could shake up high performance computing.


    With new and old companies releasing processors for the HPC market, there are now several options for high-performance server-based CPUs. This is being compounded by setbacks and delays at Intel opening up competition for the HPC CPU market.

    AMD has begun to find success on its EPYC brand of server CPUs. While market penetration will take some time, the company is starting to deliver competitive performance figures.

    IBM supplied the CPUs for the Summit system, which currently holds the top spot on the latest list of the Top500, a biannual list of the most powerful supercomputers.

    ORNL IBM AC922 SUMMIT supercomputer. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

While a single deployment is not a particularly strong measure of success, the Summit system has generated a lot of interest; five of the six Gordon Bell Prize finalists are running their applications on this system, which highlights the potential for this CPU – particularly when it is coupled with Nvidia GPUs.

Arm is also gathering pace, as its technology partners ramp up production of Arm-based CPU systems for use in HPC deployments. Cavium (now Marvell) was an early leader in this market, delivering the ThunderX processor in 2015; its follow-up, the ThunderX2, was released for general availability in 2018.

    There are a number of smaller test systems using the Cavium chips, but the largest is the Astra supercomputer being developed at Sandia National Laboratories by HPE.

    HPE Vanguard Astra supercomputer with ARM technology at Sandia Labs

    This system is expected to deliver 2.3 Pflops of peak performance from 5,184 Thunder X2 CPUs.

HPE, Bull and Penguin Computing have added the ThunderX2 CPU to their line-ups of products available to HPC users. Coupled with the use of Allinea software tools, this is helping to give the impression of a viable ecosystem for HPC users.

With many chip companies failing or struggling to gain a foothold in the HPC market over the last 10 to 20 years, it is important to provide a sustainable technology with a viable ecosystem for both hardware and software development. Once this has been achieved, Arm can begin to grow its market share.

    Fujitsu is another high-profile name committed to the development of Arm HPC technology. The company has been developing its own Arm-based processor for the Japanese Post K computer, in partnership with Riken, one of the largest Japanese research institutions.

    The A64FX CPU, developed by Fujitsu, will be the first processor to feature the Scalable Vector Extension (SVE), an extension of Armv8-A instruction set designed specifically for supercomputing architectures.

    It offers a number of features, including broad utility supporting a wide range of applications, massive parallelization through the Tofu interconnect, low power consumption, and mainframe-class reliability.

    Fujitsu reported in August that the processor would be capable of delivering a peak double precision (64 bit) floating point performance of over 2.7 Tflops, with a computational throughput twice that for single precision (32 bit), and four times that amount for half precision (16 bit).
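Those figures follow a simple doubling pattern, which can be written out directly (a restatement of the reported numbers, not new data):

```python
# Peak A64FX throughput by precision, per Fujitsu's reported scaling:
# halving the operand width doubles the operations per vector instruction.
fp64_tflops = 2.7                # double precision (64-bit)
fp32_tflops = 2 * fp64_tflops    # single precision (32-bit): 5.4 Tflops
fp16_tflops = 4 * fp64_tflops    # half precision (16-bit): 10.8 Tflops
```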

    Trouble at the top

    Intel has been seen to struggle somewhat in recent months, as it has been reported that the next generation of its processors has been delayed due to supply issues and difficulty in the 10nm fabrication processes.

The topic was addressed in August by Intel’s interim CEO Bob Swan, who reported healthy growth figures from the previous six months but also mentioned supply struggles and record investment in processor development.

    “The surprising return to PC TAM growth has put pressure on our factory network. We’re prioritizing the production of Intel Xeon and Intel Core processors so that collectively we can serve the high-performance segments of the market. That said, supply is undoubtedly tight, particularly at the entry-level of the PC market. We continue to believe we will have at least the supply to meet the full-year revenue outlook we announced in July, which was $4.5 billion higher than our January expectations,” said Swan.

Swan stated that the 10nm fabrication process was moving along with increased yields, and that volume production was planned for 2019: “We are investing a record $15 billion in capital expenditures in 2018, up approximately $1 billion from the beginning of the year. We’re putting that $1 billion into our 14nm manufacturing sites in Oregon, Arizona, Ireland and Israel. This capital, along with other efficiencies, is increasing our supply to respond to your increased demand.”

    “While Intel is undoubtedly the king of the hill when it comes to HPC processors – with more than 90 per cent of the Top500 using Intel-based technologies – the advances made by other companies, such as AMD, the re-introduction of IBM and the maturing Arm ecosystem are all factors that mean that Intel faces stiffer competition than it has for a decade.”

    The Rise of AMD

The company made headlines at the end of 2017 when its new range of server products was released, but, as Greg Gibby, senior product manager of data centre products at AMD, notes, he expects the company to begin building momentum now that several ‘significant wins’ have already been completed.

Microsoft has announced several cloud services that make use of AMD CPUs, and the two-socket products are also being deployed by Chinese companies such as Tencent for cloud-based services, while Baidu has adopted both CPUs and GPUs from AMD to drive its machine learning and cloud workloads.

    AMD is generating huge revenue from its console partnerships with Sony and Microsoft.

While these custom CPUs do not directly impact HPC technology, the revenue provided valuable time for AMD to get its server products ready. In 2018 the server line-up has been successful, and AMD is rumored to be announcing 7nm products next year. If this comes to fruition, AMD could further bolster its potential to compete in the HPC market.

    Gibby also noted that as performance is a key factor for many HPC users, it is important to get these products in front of the HPC user community.

    He said: “I believe that as we get customers testing the EPYC platform on their workloads, they see the significant performance advantages that EPYC brings to the market. I think that will provide a natural follow-through of us gaining share in that space.”

    One thing that could drive adoption of AMD products could be the memory bandwidth improvements which were a key feature of AMD when developing the EPYC CPUs. Memory bandwidth has long been a potential bottleneck for HPC applications, but this has become much more acute in recent years.

    In a recent interview with Scientific Computing World, Jack Wells, director of Science at Oak Ridge National Laboratory noted it as the number one user requirement when surveying the Oak Ridge HPC users.

    This was the first time that memory bandwidth had replaced peak node flops in the user requirements for this centre.

    While AMD was designing the next generation of its server-based CPU line, it took clear steps to design a processor that could meet the demands of modern workloads.

Gibby noted that the CPU was not just designed to increase floating-point performance; there were key bottlenecks the company identified, such as memory bandwidth, that needed to be addressed.

    “Memory bandwidth was one of the key topics we looked at, so we put in eight memory channels on each socket,” said Gibby. “So in a dual socket system, you have 16 channels of memory, which gives really good memory bandwidth to keep the data moving in and out of the core.”

    “The other thing is on the I/O side. When you look at HPC specifically, you are looking at clusters with a lot of dependency on interconnects, whether it be InfiniBand or some other fabric.”

    “A lot of the time you have GPU acceleration in there as well, so we wanted to make sure that we had the I/O bandwidth to support this.”
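The bandwidth argument Gibby makes is straightforward arithmetic: peak memory bandwidth is channel count times transfer rate times bus width. The DDR4-2666 speed below is an assumption chosen for illustration; actual EPYC configurations vary.

```python
# Rough theoretical peak memory bandwidth:
#   bandwidth = channels x transfers/s x bytes per transfer
# Assumes a hypothetical DDR4-2666 configuration with a 64-bit (8-byte) bus.

def peak_bandwidth_gb_s(channels: int, mt_per_s: float, bus_bytes: int = 8) -> float:
    """Theoretical peak bandwidth in GB/s (1 GB = 1e9 bytes)."""
    return channels * mt_per_s * 1e6 * bus_bytes / 1e9

one_socket = peak_bandwidth_gb_s(channels=8, mt_per_s=2666)   # ~170.6 GB/s
two_socket = peak_bandwidth_gb_s(channels=16, mt_per_s=2666)  # ~341.2 GB/s
```

Doubling the channel count doubles the theoretical ceiling, which is why the jump to eight channels per socket matters for bandwidth-bound HPC codes.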

See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     
  • richardmitnick 7:51 am on August 13, 2018 Permalink | Reply
    Tags: , , , Computers can’t have needs cravings or desires, Computing   

    From aeon: “Robot says: Whatever” 


    From aeon

    8.13.18
    Margaret Boden

    Chief priest Bungen Oi holds a robot AIBO dog prior to its funeral ceremony at the Kofukuji temple in Isumi, Japan, on 26 April 2018. Photo by Nicolas Datiche /AFP/Getty

    What stands in the way of all-powerful AI isn’t a lack of smarts: it’s that computers can’t have needs, cravings or desires.

    In Henry James’s intriguing novella The Beast in the Jungle (1903), a young man called John Marcher believes that he is marked out from everyone else in some prodigious way. The problem is that he can’t pinpoint the nature of this difference. Marcher doesn’t even know whether it is good or bad. Halfway through the story, his companion May Bartram – a wealthy, New-England WASP, naturally – realises the answer. But by now she is middle-aged and terminally ill, and doesn’t tell it to him. On the penultimate page, Marcher (and the reader) learns what it is. For all his years of helpfulness and dutiful consideration towards May, detailed at length in the foregoing pages, not even she had ever really mattered to him.

    That no one really mattered to Marcher does indeed mark him out from his fellow humans – but not from artificial intelligence (AI) systems, for which nothing matters. Yes, they can prioritise: one goal can be marked as more important or more urgent than another. In the 1990s, the computer scientists Aaron Sloman and Ian Wright even came up with a computer model of a nursemaid in charge of several unpredictable and demanding babies, in order to illustrate aspects of Sloman’s theory about anxiety in humans who must juggle multiple goals. But this wasn’t real anxiety: the computer couldn’t care less.
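The kind of prioritisation described here, ranking goals by importance and urgency without anything actually mattering to the machine, reduces to a few lines of code. This is a toy sketch, not Sloman and Wright's actual nursemaid model:

```python
import heapq

# Goals are ordered by (importance, urgency) and the most pressing one is
# handled first; a min-heap on negated values yields a max-priority queue.

def schedule(goals):
    """goals: list of (importance, urgency, name). Returns names, most pressing first."""
    heap = [(-importance, -urgency, name) for importance, urgency, name in goals]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

order = schedule([(1, 5, "tidy nursery"),
                  (9, 9, "baby near stairs"),
                  (5, 2, "feed baby")])
# "baby near stairs" comes first: the program prioritises flawlessly,
# yet nothing on the list matters to it.
```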

See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Since 2012, Aeon has established itself as a unique digital magazine, publishing some of the most profound and provocative thinking on the web. We ask the big questions and find the freshest, most original answers, provided by leading thinkers on science, philosophy, society and the arts.

    Aeon has three channels, and all are completely free to enjoy:

    Essays – Longform explorations of deep issues written by serious and creative thinkers

    Ideas – Short provocations, maintaining Aeon’s high editorial standards but in a more nimble and immediate form. Our Ideas are published under a Creative Commons licence, making them available for republication.

    Video – A mixture of curated short documentaries and original Aeon productions

    Through our Partnership program, we publish pieces from university research groups, university presses and other selected cultural organisations.

    Aeon was founded in London by Paul and Brigid Hains. It now has offices in London, Melbourne and New York. We are a not-for-profit, registered charity operated by Aeon Media Group Ltd. Aeon is endorsed as a Deductible Gift Recipient (DGR) organisation in Australia and, through its affiliate Aeon America, registered as a 501(c)(3) charity in the US.

    We are committed to big ideas, serious enquiry and a humane worldview. That’s it.

     
  • richardmitnick 9:12 am on June 18, 2018 Permalink | Reply
    Tags: Computing, Deep Neural Network Training with Analog Memory Devices, ,   

    From HPC Wire: “IBM Demonstrates Deep Neural Network Training with Analog Memory Devices” 

    From HPC Wire

    June 18, 2018
    Oliver Peckham

    Crossbar arrays of non-volatile memories can accelerate the training of fully connected neural networks by performing computation at the location of the data. (Source: IBM)

    From smarter, more personalized apps to seemingly-ubiquitous Google Assistant and Alexa devices, AI adoption is showing no signs of slowing down – and yet, the hardware used for AI is far from perfect. Currently, GPUs and other digital accelerators are used to speed the processing of deep neural network (DNN) tasks – but all of those systems are effectively wasting time and energy shuttling that data back and forth between memory and processing. As the scale of AI applications continues to increase, those cumulative losses are becoming massive.

    In a paper published this month in Nature, by Stefano Ambrogio, Pritish Narayanan, Hsinyu Tsai, Robert M. Shelby, Irem Boybat, Carmelo di Nolfo, Severin Sidler, Massimo Giordano, Martina Bodini, Nathan C. P. Farinha, Benjamin Killeen, Christina Cheng, Yassine Jaoudi, and Geoffrey W. Burr, IBM researchers demonstrate DNN training on analog memory devices that they report achieves equivalent accuracy to a GPU-accelerated system. IBM’s solution performs DNN calculations right where the data are located, storing and adjusting weights in memory, with the effect of conserving energy and improving speed.

    Analog computing, which uses variable signals rather than binary signals, is rarely employed in modern computing due to inherent limits on precision. IBM’s researchers, building on a growing understanding that DNN models operate effectively at lower precision, decided to attempt an accurate approach to analog DNNs.

    The research team says it was able to accelerate key training algorithms, notably the backpropagation algorithm, using analog non-volatile memories (NVM). Writing for the IBM blog, lead author Stefano Ambrogio explains:

    “These memories allow the “multiply-accumulate” operations used throughout these algorithms to be parallelized in the analog domain, at the location of weight data, using underlying physics. Instead of large circuits to multiply and add digital numbers together, we simply pass a small current through a resistor into a wire, and then connect many such wires together to let the currents build up. This lets us perform many calculations at the same time, rather than one after the other. And instead of shipping digital data on long journeys between digital memory chips and processing chips, we can perform all the computation inside the analog memory chip.”
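The physics Ambrogio describes maps directly onto linear algebra: conductances act as weights, voltages as inputs, and Kirchhoff's current law performs the summation on each wire. A digital simulation of that analog multiply-accumulate, with illustrative values only:

```python
import numpy as np

# Each weight is a conductance G, each input a voltage V. Ohm's law gives the
# per-device current I = G * V, and summing currents on each output wire
# (Kirchhoff's current law) makes the whole crossbar one matrix-vector product.

def crossbar_macc(conductances: np.ndarray, voltages: np.ndarray) -> np.ndarray:
    """Column currents of a crossbar: I_j = sum_i G_ij * V_i."""
    return conductances.T @ voltages

G = np.array([[1.0, 0.5],
              [0.2, 0.8],
              [0.3, 0.1]])    # 3 inputs x 2 outputs, arbitrary conductance units
V = np.array([0.1, 0.2, 0.3])
I = crossbar_macc(G, V)       # approximately [0.23, 0.24]
```

In hardware, every column current is produced simultaneously, which is the source of the parallelism; the simulation above does serially what the physics does in one step.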

The authors note that their mixed hardware-software approach is able to achieve classification accuracies equivalent to pure software-based training using TensorFlow, despite the imperfections of existing analog memory devices. Writes Ambrogio:

    “By combining long-term storage in phase-change memory (PCM) devices, near-linear update of conventional Complementary Metal-Oxide Semiconductor (CMOS) capacitors and novel techniques for cancelling out device-to-device variability, we finessed these imperfections and achieved software-equivalent DNN accuracies on a variety of different networks. These experiments used a mixed hardware-software approach, combining software simulations of system elements that are easy to model accurately (such as CMOS devices) together with full hardware implementation of the PCM devices. It was essential to use real analog memory devices for every weight in our neural networks, because modeling approaches for such novel devices frequently fail to capture the full range of device-to-device variability they can exhibit.”

Ambrogio and his team believe that their early design efforts indicate that a full implementation of the analog approach “should indeed offer equivalent accuracy, and thus do the same job as a digital accelerator – but faster and at lower power.” The team is exploring the design of prototype NVM-based accelerator chips as part of an IBM Research Frontiers Institute project.

The team estimates that it will be able to deliver chips with a computational energy efficiency of 28,065 GOP/sec/W and a throughput-per-area of 3.6 TOP/sec/mm². This would be a two-orders-of-magnitude improvement over today’s GPUs, according to the researchers.

    The researchers will now turn their attention to demonstrating their approach on larger networks that call for large, fully-connected layers, such as recurrently-connected Long Short Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks with emerging utility for machine translation, captioning and text analytics. As new and better forms of analog memory are developed, they expect continued improvements in areal density and energy efficiency.

See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

HPCwire is the #1 news and information resource covering the fastest computers in the world and the people who run them. With a legacy dating back to 1987, HPCwire has enjoyed a history of world-class editorial and top-notch journalism, making it the portal of choice for science, technology and business professionals interested in high performance and data-intensive computing. For topics ranging from late-breaking news and emerging technologies in HPC, to new trends, expert analysis, and exclusive features, HPCwire delivers it all and remains the HPC community’s most reliable and trusted resource. Don’t miss a thing – subscribe now to HPCwire’s weekly newsletter recapping the previous week’s HPC news, analysis and information at: http://www.hpcwire.com.

     
  • richardmitnick 5:24 pm on April 1, 2018 Permalink | Reply
    Tags: , , Computer searches telescope data for evidence of distant planets, Computing,   

    From MIT: “Computer searches telescope data for evidence of distant planets” 

    MIT News


    MIT News

    March 29, 2018
    Larry Hardesty

    A young sun-like star encircled by its planet-forming disk of gas and dust.
    Image: NASA/JPL-Caltech

    Machine-learning system uses physics principles to augment data from NASA crowdsourcing project.

    As part of an effort to identify distant planets hospitable to life, NASA has established a crowdsourcing project in which volunteers search telescopic images for evidence of debris disks around stars, which are good indicators of exoplanets.

    Using the results of that project, researchers at MIT have now trained a machine-learning system to search for debris disks itself. The scale of the search demands automation: There are nearly 750 million possible light sources in the data accumulated through NASA’s Wide-Field Infrared Survey Explorer (WISE) mission alone.

    NASA/WISE Telescope

    In tests, the machine-learning system agreed with human identifications of debris disks 97 percent of the time. The researchers also trained their system to rate debris disks according to their likelihood of containing detectable exoplanets. In a paper describing the new work in the journal Astronomy and Computing, the MIT researchers report that their system identified 367 previously unexamined celestial objects as particularly promising candidates for further study.

    The work represents an unusual approach to machine learning, which has been championed by one of the paper’s coauthors, Victor Pankratius, a principal research scientist at MIT’s Haystack Observatory. Typically, a machine-learning system will comb through a wealth of training data, looking for consistent correlations between features of the data and some label applied by a human analyst — in this case, stars circled by debris disks.

    But Pankratius argues that in the sciences, machine-learning systems would be more useful if they explicitly incorporated a little bit of scientific understanding, to help guide their searches for correlations or identify deviations from the norm that could be of scientific interest.

    “The main vision is to go beyond what A.I. is focusing on today,” Pankratius says. “Today, we’re collecting data, and we’re trying to find features in the data. You end up with billions and billions of features. So what are you doing with them? What you want to know as a scientist is not that the computer tells you that certain pixels are certain features. You want to know ‘Oh, this is a physically relevant thing, and here are the physics parameters of the thing.’”

    Classroom conception

    The new paper grew out of an MIT seminar that Pankratius co-taught with Sara Seager, the Class of 1941 Professor of Earth, Atmospheric, and Planetary Sciences, who is well-known for her exoplanet research. The seminar, Astroinformatics for Exoplanets, introduced students to data science techniques that could be useful for interpreting the flood of data generated by new astronomical instruments. After mastering the techniques, the students were asked to apply them to outstanding astronomical questions.

    For her final project, Tam Nguyen, a graduate student in aeronautics and astronautics, chose the problem of training a machine-learning system to identify debris disks, and the new paper is an outgrowth of that work. Nguyen is first author on the paper, and she’s joined by Seager, Pankratius, and Laura Eckman, an undergraduate majoring in electrical engineering and computer science.

    From the NASA crowdsourcing project, the researchers had the celestial coordinates of the light sources that human volunteers had identified as featuring debris disks. The disks are recognizable as ellipses of light with slightly brighter ellipses at their centers. The researchers also used the raw astronomical data generated by the WISE mission.

    To prepare the data for the machine-learning system, Nguyen carved it up into small chunks, then used standard signal-processing techniques to filter out artifacts caused by the imaging instruments or by ambient light. Next, she identified those chunks with light sources at their centers, and used existing image-segmentation algorithms to remove any additional sources of light. These types of procedures are typical in any computer-vision machine-learning project.
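    The preparation steps above can be sketched in a few lines. This is a minimal illustration of the general approach, not the paper's actual pipeline: the chunk size, median filter, and detection threshold here are hypothetical stand-ins for whatever the authors used.

    ```python
    # Illustrative preprocessing sketch: filter a small image chunk,
    # keep it only if a light source sits at its center, and segment
    # away any other light sources. All parameters are assumptions.
    import numpy as np
    from scipy import ndimage

    def preprocess_chunk(chunk, background_sigma=3.0):
        """Return a cleaned chunk with only the central source, or None."""
        # Suppress instrument artifacts and ambient-light speckle.
        smoothed = ndimage.median_filter(chunk, size=3)
        # Crude detection: pixels well above the chunk's background level.
        threshold = smoothed.mean() + background_sigma * smoothed.std()
        sources, n_sources = ndimage.label(smoothed > threshold)
        if n_sources == 0:
            return None
        # Keep the chunk only if a labeled source covers the central pixel.
        center = tuple(s // 2 for s in chunk.shape)
        central_label = sources[center]
        if central_label == 0:
            return None
        # Zero out every other detected source (simple segmentation).
        keep = (sources == 0) | (sources == central_label)
        return np.where(keep, smoothed, 0.0)
    ```

    A real pipeline would work on calibrated WISE cutouts and use more careful background estimation, but the control flow — filter, detect, test the center, segment — mirrors the steps described in the article.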

    Coded intuitions

    But Nguyen used basic principles of physics to prune the data further. For one thing, she looked at the variation in the intensity of the light emitted by the light sources across four different frequency bands. She also used standard metrics to evaluate the position, symmetry, and scale of the light sources, establishing thresholds for inclusion in her data set.
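    The kind of physics-guided pruning described above might look like the following. The band combination, symmetry measure, and cut values here are hypothetical examples, not the metrics or thresholds from the paper.

    ```python
    # Sketch of physics-motivated features and threshold cuts.
    # A warm debris disk re-emits starlight at longer infrared
    # wavelengths, so the flux ratio across the four WISE bands is
    # a crude physical discriminant; a roughly face-on disk should
    # also look symmetric under a 180-degree rotation.
    import numpy as np

    def disk_features(band_fluxes, image):
        """Return (infrared excess, rotational symmetry) for one source."""
        w1, w2, w3, w4 = band_fluxes  # fluxes in the four WISE bands
        infrared_excess = (w3 + w4) / (w1 + w2)
        rotated = np.rot90(image, 2)
        denom = np.abs(image).sum()
        symmetry = 1.0 - np.abs(image - rotated).sum() / denom if denom else 0.0
        return infrared_excess, symmetry

    def passes_cuts(features, excess_min=0.1, symmetry_min=0.8):
        """Apply inclusion thresholds for the training set (values assumed)."""
        infrared_excess, symmetry = features
        return infrared_excess >= excess_min and symmetry >= symmetry_min
    ```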

    In addition to the tagged debris disks from NASA’s crowdsourcing project, the researchers also had a short list of stars that astronomers had identified as probably hosting exoplanets. From that information, their system also inferred characteristics of debris disks that were correlated with the presence of exoplanets, to select the 367 candidates for further study.
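    The two-stage idea — classify debris disks from crowdsourced labels, then rank them by exoplanet likelihood — can be sketched as follows. This uses scikit-learn on synthetic feature vectors purely for illustration; the actual model choice and features in the paper may differ.

    ```python
    # Hedged sketch: disk classifier plus exoplanet-likelihood ranker,
    # trained on toy data standing in for WISE-derived features.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Stage 1: disk vs. non-disk, labels playing the role of the
    # crowdsourced Disk Detective identifications.
    X_train = rng.normal(size=(200, 4))  # e.g. band colors, symmetry, scale
    y_disk = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)  # toy labels
    disk_clf = RandomForestClassifier(n_estimators=100, random_state=0)
    disk_clf.fit(X_train, y_disk)

    # Stage 2: among known disks, which host known exoplanets?
    X_disks = X_train[y_disk == 1]
    y_planet = (X_disks[:, 2] > 0).astype(int)  # toy "hosts exoplanet" labels
    planet_clf = RandomForestClassifier(n_estimators=100, random_state=0)
    planet_clf.fit(X_disks, y_planet)

    # Score unseen sources by P(disk) * P(exoplanet | disk) and rank them,
    # most promising candidates first.
    X_new = rng.normal(size=(10, 4))
    score = (disk_clf.predict_proba(X_new)[:, 1]
             * planet_clf.predict_proba(X_new)[:, 1])
    ranked = np.argsort(score)[::-1]
    ```

    Sorting by the product of the two probabilities is one simple way to turn the paired classifiers into a shortlist like the 367 candidates reported in the paper.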

    “Given the scalability challenges with big data, leveraging crowdsourcing and citizen science to develop training data sets for machine-learning classifiers for astronomical observations and associated objects is an innovative way to address challenges not only in astronomy but also several different data-intensive science areas,” says Dan Crichton, who leads the Center for Data Science and Technology at NASA’s Jet Propulsion Laboratory. “The use of the computer-aided discovery pipeline described to automate the extraction, classification, and validation process is going to be helpful for systematizing how these capabilities can be brought together. The paper does a nice job of discussing the effectiveness of this approach as applied to debris disk candidates. The lessons learned are going to be important for generalizing the techniques to other astronomy and different discipline applications.”

    “The Disk Detective science team has been working on its own machine-learning project, and now that this paper is out, we’re going to have to get together and compare notes,” says Marc Kuchner, a senior astrophysicist at NASA’s Goddard Space Flight Center and leader of the crowdsourcing disk-detection project known as Disk Detective. “I’m really glad that Nguyen is looking into this because I really think that this kind of machine-human cooperation is going to be crucial for analyzing the big data sets of the future.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.
