Tagged: Machine learning

  • richardmitnick 10:26 am on June 17, 2021 Permalink | Reply
    Tags: "An Ally for Alloys", , , “XMAT”—eXtreme environment MATerials—consortium, , Machine learning, , Stronger materials are key to producing energy efficiently resulting in economic and decarbonization benefits.   

    From DOE’s Pacific Northwest National Laboratory (US) : “An Ally for Alloys” 


    June 16, 2021
    Tim Ledbetter


    Machine learning techniques have contributed to progress in science and technology fields ranging from health care to high-energy physics. Now, machine learning is poised to help accelerate the development of stronger alloys, particularly stainless steels, for America’s thermal power generation fleet. Stronger materials are key to producing energy efficiently, resulting in economic and decarbonization benefits.

    “The use of ultra-high-strength steels in power plants dates back to the 1950s and has benefited from gradual improvements in the materials over time,” says Osman Mamun, a postdoctoral research associate at Pacific Northwest National Laboratory (PNNL). “If we can find ways to speed up improvements or create new materials, we could see enhanced efficiency in plants that also reduces the amount of carbon emitted into the atmosphere.”

    Mamun is the lead author on two recent, related journal articles that reveal new strategies for machine learning’s application in the design of advanced alloys. The articles chronicle the research outcomes of a joint effort between PNNL and the DOE National Energy Technology Lab (US). In addition to Mamun, the research team included PNNL’s Arun Sathanur and Ram Devanathan and NETL’s Madison Wenzlick and Jeff Hawk.

    The work was funded under the Department of Energy’s (US) Office of Fossil Energy via the “XMAT”—eXtreme environment MATerials—consortium, which includes research contributions from seven DOE national laboratories. The consortium seeks to accelerate the development of improved heat-resistant alloys for various power plant components and to predict the alloys’ long-term performance.

    The inside story of power plants

    A thermal power plant’s internal environment is unforgiving. Operating temperatures of more than 650 degrees Celsius and stresses exceeding 50 megapascals put a plant’s steel components to the test.

    “But also, that high temperature and pressure, along with reliable components, are critical in driving better thermodynamic efficiency that leads to reduced carbon emissions and increased cost-effectiveness,” Mamun explains.

    The PNNL–NETL collaboration focused on two material types. Austenitic stainless steel is widely used in plants because it offers strength and excellent corrosion resistance, but its service life at high temperatures is limited. Ferritic-martensitic steel that contains chromium in the 9 to 12 percent range also offers strength benefits but can be prone to oxidation and corrosion. Plant operators want materials that resist rupturing and last for decades.

    Over time, “trial and error” experimental approaches have incrementally improved steel, but are inefficient, time-consuming, and costly. It is crucial to accelerate the development of novel materials with superior properties.

    Models for predicting rupture strength and life

    Recent advances in computational modeling and machine learning, Mamun says, have become important new tools in the quest for achieving better materials more quickly.

    Machine learning, a form of artificial intelligence, applies an algorithm to datasets to develop faster solutions for science problems. This capability is making a big difference in research worldwide, in some cases shaving considerable time off scientific discovery and technology developments.

    The PNNL–NETL research team’s application of machine learning was described in their first journal article, published March 9 in Scientific Reports.

    PNNL’s distinctive capabilities in joining steel to aluminum alloys enable lightweight vehicle technologies for sustainable transportation. Photo by Andrea Starr | Pacific Northwest National Laboratory.

    The paper recounts the team’s effort to enhance and analyze stainless steel datasets, contributed by NETL team members, with three different algorithms. The ultimate goal was to construct an accurate predictive model for the rupture strength of the two types of alloys. The team concluded that an algorithm known as the Gradient Boosted Decision Tree best met the needs for building machine learning models for accurate prediction of rupture strength.
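
    As an illustration of the modeling approach named here (a minimal sketch, not the PNNL–NETL pipeline; the CSV file, column names, and hyperparameters are hypothetical placeholders), a gradient boosted decision tree can be fit to predict rupture strength from alloy composition and test conditions with scikit-learn:

```python
# Minimal sketch of a gradient boosted decision tree for rupture strength.
# The dataset path, feature names, and target column are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("steel_creep_data.csv")            # hypothetical dataset
features = ["Cr", "Ni", "Mo", "C", "temperature_C", "stress_MPa"]
X, y = df[features], df["rupture_strength_MPa"]     # hypothetical target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05,
                                  max_depth=3, random_state=0)
model.fit(X_train, y_train)
print("held-out R^2:", r2_score(y_test, model.predict(X_test)))
```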

    Further, the researchers maintain that integrating the resulting models into existing alloy design strategies could speed the identification of promising stainless steels that possess superior properties for dealing with stress and strain.

    “This research project not only took a step toward better approaches for extending the operating envelope of steel in power plants, but also demonstrated machine learning models grounded in physics to enable interpretation by domain scientists,” says research team member Ram Devanathan, a PNNL computational materials scientist. Devanathan leads the XMAT consortium’s data science thrust and serves on the organization’s steering committee.

    The project team’s second article was published in npj Materials Degradation’s April 16 edition.

    The team concluded in the paper that a machine-learning-based predictive model can reliably estimate the rupture life of the two alloys. The researchers also described a methodology to generate synthetic alloys that could be used to augment existing sparse stainless steel datasets, and identified the limitations of such an approach. Using these “hypothetical alloys” in machine learning models makes it possible to assess the performance of candidate materials without first synthesizing them in a laboratory.
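
    The augmentation idea can be sketched in a few lines (an illustration of the general concept only, not the published methodology; every composition bound and rupture-life value below is fabricated): sample "hypothetical alloys" within the composition ranges of an existing dataset, then screen them with a trained surrogate model.

```python
# Self-contained toy sketch of screening "hypothetical alloys" with a surrogate
# model. All data here are fabricated purely for illustration.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
cols = ["Cr", "Ni", "Mo", "C"]                       # hypothetical descriptors

# Toy "measured" data: 50 random compositions with a fabricated rupture life.
measured = pd.DataFrame(rng.uniform([8, 8, 0.5, 0.02], [20, 14, 3, 0.12],
                                    size=(50, 4)), columns=cols)
measured["rupture_life_h"] = (1000 + 50 * measured["Cr"]
                              + 200 * measured["Mo"] + rng.normal(0, 50, 50))

model = GradientBoostingRegressor(random_state=0)
model.fit(measured[cols], measured["rupture_life_h"])

# Sample synthetic candidates uniformly within the observed composition bounds.
lo, hi = measured[cols].min().values, measured[cols].max().values
candidates = pd.DataFrame(rng.uniform(lo, hi, size=(1000, 4)), columns=cols)
candidates["predicted_life_h"] = model.predict(candidates[cols])

print(candidates.nlargest(5, "predicted_life_h"))   # most promising candidates
```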

    “The findings build on the earlier paper’s conclusions and represent another step toward establishing interpretable models of alloy performance in extreme environments, while also providing insights into data set development,” Devanathan says. “Both papers demonstrate XMAT’s thought leadership in this rapidly growing field.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    DOE’s Pacific Northwest National Laboratory (PNNL) (US) is one of the United States Department of Energy National Laboratories, managed by the Department of Energy’s Office of Science. The main campus of the laboratory is in Richland, Washington.

    PNNL scientists conduct basic and applied research and development to strengthen U.S. scientific foundations for fundamental research and innovation; prevent and counter acts of terrorism through applied research in information analysis, cyber security, and the nonproliferation of weapons of mass destruction; increase the U.S. energy capacity and reduce dependence on imported oil; and reduce the effects of human activity on the environment. PNNL has been operated by Battelle Memorial Institute since 1965.

     
  • richardmitnick 3:08 pm on January 19, 2021 Permalink | Reply
    Tags: "Rethinking Spin Chemistry from a Quantum Perspective", “Superposition” lets algorithms represent two variables at once which then allows scientists to focus on the relationship between these variables without any need to determine their individual sta, Bayesian inference, Machine learning, , ,   

    From Osaka City University (大阪市立大学: Ōsaka shiritsu daigaku) (JP): “Rethinking Spin Chemistry from a Quantum Perspective” 


    Jan 18, 2021
    James Gracey
    Global Exchange Office
    kokusai@ado.osaka-cu.ac.jp

    Researchers at Osaka City University use quantum superposition states and Bayesian inference to create a quantum algorithm, easily executable on quantum computers, that accurately and directly calculates energy differences between the electronic ground and excited spin states of molecular systems in polynomial time.

    A quantum circuit that allows for the maximum probability of P(0) in the measurement of the parameter J.

    Understanding how the natural world works enables us to mimic it for the benefit of humankind. Think of how much we rely on batteries. At the core is understanding molecular structures and the behavior of electrons within them. Calculating the energy differences between a molecule’s electronic ground and excited spin states helps us understand how to better use that molecule in a variety of chemical, biomedical and industrial applications. We have made much progress in molecules with closed-shell systems, in which electrons are paired up and stable. Open-shell systems, on the other hand, are less stable and their underlying electronic behavior is complex, and thus more difficult to understand. They have unpaired electrons in their ground state, which cause their energy to vary due to the intrinsic nature of electron spins, making measurements difficult, especially as the molecules increase in size and complexity. Although such molecules are abundant in nature, there is a lack of algorithms that can handle this complexity. One hurdle has been dealing with what is called the exponential explosion of computational time. Using a conventional computer to calculate how the unpaired spins influence the energy of an open-shell molecule would take hundreds of millions of years, time humans do not have.

    Quantum computers are in development to help reduce this to what is called “polynomial time”. However, the process scientists have been using to calculate the energy differences of open-shell molecules has essentially been the same for both conventional and quantum computers. This hampers the practical use of quantum computing in chemical and industrial applications.

    “Approaches that invoke true quantum algorithms help us treat open-shell systems much more efficiently than by utilizing classical computers”, state Kenji Sugisaki and Takeji Takui from Osaka City University. With their colleagues, they developed a quantum algorithm executable on quantum computers, which can, for the first time, accurately calculate energy differences between the electronic ground and excited spin states of open-shell molecular systems. Their findings were published in the journal Chemical Science on 24 Dec 2020.

    The energy difference between molecular spin states is characterized by the value of the exchange interaction parameter J. Conventional quantum algorithms have been able to accurately calculate energies for closed-shell molecules “but they have not been able to handle systems with a strong multi-configurational character”, states the group. Until now, scientists have assumed that to obtain the parameter J one must first calculate the total energy of each spin state. In open-shell molecules this is difficult because the total energy of each spin state varies greatly as the molecule changes in activity and size. However, “the energy difference itself is not greatly dependent on the system size”, notes the research team. This led them to create an algorithm with calculations that focused on the spin difference, not the individual spin states. Creating such an algorithm required that they let go of assumptions developed from years of using conventional computers and focus on the unique characteristics of quantum computing – namely “quantum superposition states”.

    “Superposition” lets algorithms represent two variables at once, which then allows scientists to focus on the relationship between these variables without any need to determine their individual states first. The research team used something called a broken-symmetry wave function as a superposition of wave functions with different spin states and rewrote it into the Hamiltonian equation for the parameter J. By running this new quantum circuit, the team was able to focus on deviations from their target, and by applying Bayesian inference, a machine learning technique, they used these deviations to determine the exchange interaction parameter J. “Numerical simulations based on this method were performed for the covalent dissociation of molecular hydrogen (H2), the triple bond dissociation of molecular nitrogen (N2), and the ground states of C, O, Si atoms and NH, OH+, CH2, NF and O2 molecules with an error of less than 1 kcal/mol”, adds the research team.
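
    The Bayesian step can be illustrated with a toy calculation (a sketch of the general idea only, not the authors’ BxB algorithm; the likelihood P(0) = (1 + cos(J·t))/2 and the evolution times are illustrative assumptions): repeated circuit measurements progressively sharpen a posterior distribution over candidate values of J.

```python
# Toy Bayesian estimation of a parameter J from binary measurement outcomes.
# This is an illustration of the idea, not the published BxB algorithm.
import numpy as np

rng = np.random.default_rng(1)
J_true = 0.35                        # "unknown" parameter we try to recover
J_grid = np.linspace(0.0, 1.0, 501)  # candidate values (arbitrary units)
posterior = np.ones_like(J_grid) / J_grid.size    # flat prior

for t in [1.0, 2.0, 4.0, 8.0, 16.0]:              # assumed evolution times
    p0_true = 0.5 * (1 + np.cos(J_true * t))      # assumed circuit likelihood
    for _ in range(100):                          # simulated measurement shots
        outcome0 = rng.random() < p0_true
        p0_grid = 0.5 * (1 + np.cos(J_grid * t))
        likelihood = p0_grid if outcome0 else 1 - p0_grid
        posterior *= likelihood                   # Bayesian update
        posterior /= posterior.sum()              # renormalize

print("posterior mean J:", (J_grid * posterior).sum())
```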

    “We plan on installing our Bayesian eXchange coupling parameter calculator with Broken-symmetry wave functions (BxB) software on near-term quantum computers equipped with noisy (no quantum error correction) intermediate-scale (several hundreds of qubits) quantum devices (NISQ devices), testing the usefulness for quantum chemical calculations of actual sizable molecular systems.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Osaka City University (OCU) (大阪市立大学: Ōsaka shiritsu daigaku) (JP), is a public university in Japan. It is located in Sumiyoshi-ku, Osaka.

    OCU’s predecessor was founded in 1880, as Osaka Commercial Training Institute (大阪商業講習所) with donations by local merchants. It became Osaka Commercial School in 1885, then was municipalized in 1889. Osaka City was defeated in a bid to draw the Second National Commercial College (the winner was Kobe City), so the city authorities decided to establish a municipal commercial college without any aid from the national budget.

    In 1901, the school was reorganized to become Osaka City Commercial College (市立大阪高等商業学校), later authorized under Specialized School Order in 1904. The college had grand brick buildings around the Taishō period.

    In 1928, the college became Osaka University of Commerce (大阪商科大学), the first municipal university in Japan. The city mayor, Hajime Seki (関 一, Seki Hajime, 1873–1935) declared the spirit of the municipal university, that it should not simply copy the national universities and that it should become a place for research with a background of urban activities in Osaka. But, contrary to his words, the university was removed to the most rural part of the city by 1935. The first president of the university was a liberalist, so the campus gradually became what was thought to be “a den of the Reds (Marxists)”. During World War II, the Marxists and the socialists in the university were arrested (about 50 to 80 members) soon after the liberal president died. The campus was evacuated and used by the Japanese Navy.

    After the war, the campus was occupied by the U.S. Army (named “Camp Sakai”), and a number of students became anti-American fighters and “worshipers” of the Soviet Union. The campus was returned to the university, partly in 1952, and fully in 1955. In 1949, during the allied occupation, the university was merged (with two other municipal colleges) into Osaka City University, under Japan’s new educational system.

     
  • richardmitnick 12:25 pm on January 6, 2021 Permalink | Reply
    Tags: "Advanced materials in a snap", , , Machine learning,   

    From DOE’s Sandia National Laboratories: “Advanced materials in a snap” 


    January 5, 2021
    Troy Rummler
    trummle@sandia.gov
    505-249-3632

    Sandia National Laboratories has developed a machine learning algorithm capable of performing simulations for materials scientists nearly 40,000 times faster than normal. Credit: Image by Eric Lundin.

    If everything moved 40,000 times faster, you could eat a fresh tomato three minutes after planting a seed. You could fly from New York to L.A. in half a second. And you’d have waited in line at airport security for that flight for 30 milliseconds.

    A research team at Sandia National Laboratories has successfully used machine learning — computer algorithms that improve themselves by learning patterns in data — to complete cumbersome materials science calculations more than 40,000 times faster than normal.

    Their results, published Jan. 4 in npj Computational Materials, could herald a dramatic acceleration in the creation of new technologies for optics, aerospace, energy storage and potentially medicine while simultaneously saving laboratories money on computing costs.

    “We’re shortening the design cycle,” said David Montes de Oca Zapiain, a computational materials scientist at Sandia who helped lead the research. “The design of components grossly outpaces the design of the materials you need to build them. We want to change that. Once you design a component, we’d like to be able to design a compatible material for that component without needing to wait for years, as it happens with the current process.”

    The research, funded by the U.S. Department of Energy’s Basic Energy Sciences program, was conducted at the Center for Integrated Nanotechnologies, a DOE user research facility jointly operated by Sandia and Los Alamos National Laboratory.

    Machine learning speeds up computationally expensive simulations.

    Sandia researchers used machine learning to accelerate a computer simulation that predicts how changing a design or fabrication process, such as tweaking the amounts of metals in an alloy, will affect a material. A project might require thousands of simulations, which can take weeks, months or even years to run.

    The team clocked a single, unaided simulation on a high-performance computing cluster with 128 processing cores (a typical home computer has two to six processing cores) at 12 minutes. With machine learning, the same simulation took 60 milliseconds using only 36 cores, equivalent to 42,000 times faster when compared on equal hardware. This means researchers can now learn in under 15 minutes what would normally take a year.
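
    For readers wondering how 12 minutes versus 60 milliseconds becomes roughly 42,000 times faster, the figure is normalized by core count; a back-of-the-envelope check (assuming, purely for illustration, that performance scales linearly with the number of cores) reproduces it:

```python
# Back-of-the-envelope check of the quoted speedup, using numbers from the
# article and assuming linear scaling with core count (an assumption made
# here only to reproduce the normalization, not stated in the article).
baseline_s, baseline_cores = 12 * 60, 128     # 12 minutes on 128 cores
surrogate_s, surrogate_cores = 0.060, 36      # 60 milliseconds on 36 cores

raw_speedup = baseline_s / surrogate_s                        # ~12,000x
core_normalized = raw_speedup * (baseline_cores / surrogate_cores)
print(round(raw_speedup), round(core_normalized))             # ~12000, ~42667
```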

    Sandia’s new algorithm arrived at an answer that was 5% different from the standard simulation’s result, a very accurate prediction for the team’s purposes. Machine learning trades some accuracy for speed because it makes approximations to shortcut calculations.

    “Our machine-learning framework achieves essentially the same accuracy as the high-fidelity model but at a fraction of the computational cost,” said Sandia materials scientist Rémi Dingreville, who also worked on the project.

    Benefits could extend beyond materials

    Dingreville and Montes de Oca Zapiain are going to use their algorithm first to research ultrathin optical technologies for next-generation monitors and screens. Their research, though, could prove widely useful because the simulation they accelerated describes a common event — the change, or evolution, of a material’s microscopic building blocks over time.

    Machine learning previously has been used to shortcut simulations that calculate how interactions between atoms and molecules change over time. The published results, however, demonstrate the first use of machine learning to accelerate simulations of materials at relatively large, microscopic scales, which the Sandia team expects will be of greater practical value to scientists and engineers.

    For instance, scientists can now quickly simulate how minuscule droplets of melted metal will glob together when they cool and solidify, or conversely, how a mixture will separate into layers of its constituent parts when it melts. Many other natural phenomena, including the formation of proteins, follow similar patterns. And while the Sandia team has not tested the machine-learning algorithm on simulations of proteins, they are interested in exploring the possibility in the future.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Sandia Campus.


    Sandia National Laboratory

    Sandia National Laboratories is a multiprogram laboratory operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration. With main facilities in Albuquerque, N.M., and Livermore, Calif., Sandia has major R&D responsibilities in national security, energy and environmental technologies, and economic competitiveness.



     
  • richardmitnick 10:57 am on December 31, 2020 Permalink | Reply
    Tags: "An Existential Crisis in Neuroscience", , , DNNs are mathematical models that string together chains of simple functions that approximate real neurons., , , It’s clear now that while science deals with facts a crucial part of this noble endeavor is making sense of the facts., Machine learning, ,   

    From Nautilus: “An Existential Crisis in Neuroscience” 


    December 30, 2020 [Re-issued “Maps” issue January 23, 2020.]
    Grigori Guitchounts

    A rendering of dendrites (red)—a neuron’s branching processes—and protruding spines that receive synaptic information, along with a saturated reconstruction (multicolored cylinder) from a mouse cortex. Credit: Lichtman Lab at Harvard University.

    We’re mapping the brain in amazing detail—but our brain can’t understand the picture.

    On a chilly evening last fall, I stared into nothingness out of the floor-to-ceiling windows in my office on the outskirts of Harvard’s campus. As a purplish-red sun set, I sat brooding over my dataset on rat brains. I thought of the cold windowless rooms in downtown Boston, home to Harvard’s high-performance computing center, where computer servers were holding on to a precious 48 terabytes of my data. I have recorded the 13 trillion numbers in this dataset as part of my Ph.D. experiments, asking how the visual parts of the rat brain respond to movement.

    Printed on paper, the dataset would fill 116 billion pages, double-spaced. When I recently finished writing the story of my data, the magnum opus fit on fewer than two dozen printed pages. Performing the experiments turned out to be the easy part. I had spent the last year agonizing over the data, observing and asking questions. The answers left out large chunks that did not pertain to the questions, like a map leaves out irrelevant details of a territory.

    But, as massive as my dataset sounds, it represents just a tiny chunk of a dataset taken from the whole brain. And the questions it asks—Do neurons in the visual cortex do anything when an animal can’t see? What happens when inputs to the visual cortex from other brain regions are shut off?—are small compared to the ultimate question in neuroscience: How does the brain work?

    LIVING COLOR: This electron microscopy image of a slice of mouse cortex, which shows different neurons labeled by color, is just the beginning. “We’re working on a cortical slab of a human brain, where every synapse and every connection of every nerve cell is identifiable,” says Harvard’s Jeff Lichtman. “It’s amazing.” Credit: Lichtman Lab at Harvard University.

    The nature of the scientific process is such that researchers have to pick small, pointed questions. Scientists are like diners at a restaurant: We’d love to try everything on the menu, but choices have to be made. And so we pick our field, and subfield, read up on the hundreds of previous experiments done on the subject, design and perform our own experiments, and hope the answers advance our understanding. But if we have to ask small questions, then how do we begin to understand the whole?

    Neuroscientists have made considerable progress toward understanding brain architecture and aspects of brain function. We can identify brain regions that respond to the environment, activate our senses, generate movements and emotions. But we don’t know how different parts of the brain interact with and depend on each other. We don’t understand how their interactions contribute to behavior, perception, or memory. Technology has made it easy for us to gather behemoth datasets, but I’m not sure understanding the brain has kept pace with the size of the datasets.

    Some serious efforts, however, are now underway to map brains in full. One approach, called connectomics, strives to chart the entirety of the connections among neurons in a brain. In principle, a complete connectome would contain all the information necessary to provide a solid base on which to build a holistic understanding of the brain. We could see what each brain part is, how it supports the whole, and how it ought to interact with the other parts and the environment. We’d be able to place our brain in any hypothetical situation and have a good sense of how it would react.

    The question of how we might begin to grasp the entirety of the organ that generates our minds has been pressing me for a while. Like most neuroscientists, I’ve had to cultivate two clashing ideas: striving to understand the brain and knowing that’s likely an impossible task. I was curious how others tolerate this doublethink, so I sought out Jeff Lichtman, a leader in the field of connectomics and a professor of molecular and cellular biology at Harvard.

    Lichtman’s lab happens to be down the hall from mine, so on a recent afternoon, I meandered over to his office to ask him about the nascent field of connectomics and whether he thinks we’ll ever have a holistic understanding of the brain. His answer—“No”—was not reassuring, but our conversation was a revelation, and shed light on the questions that had been haunting me. How do I make sense of gargantuan volumes of data? Where does science end and personal interpretation begin? Were humans even capable of weaving today’s reams of information into a holistic picture? I was now on a dark path, questioning the limits of human understanding, unsettled by a future filled with big data and small comprehension.

    Lichtman likes to shoot first, ask questions later. The 68-year-old neuroscientist’s weapon of choice is a 61-beam electron microscope, which Lichtman’s team uses to visualize the tiniest of details in brain tissue. The way neurons are packed in a brain would make canned sardines look like they have a highly evolved sense of personal space. To make any sense of these images, and in turn, what the brain is doing, the parts of neurons have to be annotated in three dimensions, the result of which is a wiring diagram. Done at the scale of an entire brain, the effort constitutes a complete wiring diagram, or the connectome.

    To capture that diagram, Lichtman employs a machine that can only be described as a fancy deli slicer. The machine cuts pieces of brain tissue into 30-nanometer-thick sections, which it then pastes onto a tape conveyor belt. The tape goes on silicon wafers, and into Lichtman’s electron microscope, where billions of electrons blast the brain slices, generating images that reveal nanometer-scale features of neurons, their axons, dendrites, and the synapses through which they exchange information. The Technicolor images are a beautiful sight that evokes a fantastic thought: The mysteries of how brains create memories, thoughts, perceptions, feelings—consciousness itself—must be hidden in this labyrinth of neural connections.

    THE MAPMAKER: Jeff Lichtman, a leader in brain mapping, says the word “understanding” has to undergo a revolution in reference to the human brain. “There’s no point when you can suddenly say, ‘I now understand the brain,’ just as you wouldn’t say, ‘I now get New York City.’” Credit: Lichtman Lab at Harvard University.

    A complete human connectome will be a monumental technical achievement. A complete wiring diagram for a mouse brain alone would take up two exabytes. That’s 2 billion gigabytes; by comparison, estimates of the data footprint of all books ever written come out to less than 100 terabytes, or 0.005 percent of a mouse brain. But Lichtman is not daunted. He is determined to map whole brains, exorbitant exabyte-scale storage be damned.

    Lichtman’s office is a spacious place with floor-to-ceiling windows overlooking a tree-lined walkway and an old circular building that, in the days before neuroscience even existed as a field, used to house a cyclotron. He was wearing a deeply black sweater, which contrasted with his silver hair and olive skin. When I asked if a completed connectome would give us a full understanding of the brain, he didn’t pause in his answer. I got the feeling he had thought a great deal about this question on his own.

    “I think the word ‘understanding’ has to undergo an evolution,” Lichtman said, as we sat around his desk. “Most of us know what we mean when we say ‘I understand something.’ It makes sense to us. We can hold the idea in our heads. We can explain it with language. But if I asked, ‘Do you understand New York City?’ you would probably respond, ‘What do you mean?’ There’s all this complexity. If you can’t understand New York City, it’s not because you can’t get access to the data. It’s just there’s so much going on at the same time. That’s what a human brain is. It’s millions of things happening simultaneously among different types of cells, neuromodulators, genetic components, things from the outside. There’s no point when you can suddenly say, ‘I now understand the brain,’ just as you wouldn’t say, ‘I now get New York City.’ ”

    “But we understand specific aspects of the brain,” I said. “Couldn’t we put those aspects together and get a more holistic understanding?”

    “I guess I would retreat to another beachhead, which is, ‘Can we describe the brain?’ ” Lichtman said. “There are all sorts of fundamental questions about the physical nature of the brain we don’t know. But we can learn to describe them. A lot of people think ‘description’ is a pejorative in science. But that’s what the Hubble telescope does. That’s what genomics does. They describe what’s actually there. Then from that you can generate your hypotheses.”

    “Why is description an unsexy concept for neuroscientists?”

    “Biologists are often seduced by ideas that resonate with them,” Lichtman said. That is, they try to bend the world to their idea rather than the other way around. “It’s much better—easier, actually—to start with what the world is, and then make your idea conform to it,” he said. Instead of a hypothesis-testing approach, we might be better served by following a descriptive, or hypothesis-generating methodology. Otherwise we end up chasing our own tails. “In this age, the wealth of information is an enemy to the simple idea of understanding,” Lichtman said.

    “How so?” I asked.

    “Let me put it this way,” Lichtman said. “Language itself is a fundamentally linear process, where one idea leads to the next. But if the thing you’re trying to describe has a million things happening simultaneously, language is not the right tool. It’s like understanding the stock market. The best way to make money on the stock market is probably not by understanding the fundamental concepts of economy. It’s by understanding how to utilize this data to know what to buy and when to buy it. That may have nothing to do with economics but with data and how data is used.”

    “Maybe human brains aren’t equipped to understand themselves,” I offered.

    “And maybe there’s something fundamental about that idea: that no machine can have an output more sophisticated than itself,” Lichtman said. “What a car does is trivial compared to its engineering. What a human brain does is trivial compared to its engineering. Which is the great irony here. We have this false belief there’s nothing in the universe that humans can’t understand because we have infinite intelligence. But if I asked you if your dog can understand something you’d say, ‘Well, my dog’s brain is small.’ Well, your brain is only a little bigger,” he continued, chuckling. “Why, suddenly, are you able to understand everything?”

    Was Lichtman daunted by what a connectome might achieve? Did he see his efforts as Sisyphean?

    “It’s just the opposite,” he said. “I thought at this point we would be less far along. Right now, we’re working on a cortical slab of a human brain, where every synapse is identified automatically, every connection of every nerve cell is identifiable. It’s amazing. To say I understand it would be ridiculous. But it’s an extraordinary piece of data. And it’s beautiful. From a technical standpoint, you really can see how the cells are connected together. I didn’t think that was possible.”

    Lichtman stressed his work was about more than a comprehensive picture of the brain. “If you want to know the relationship between neurons and behavior, you gotta have the wiring diagram,” he said. “The same is true for pathology. There are many incurable diseases, such as schizophrenia, that don’t have a biomarker related to the brain. They’re probably related to brain wiring but we don’t know what’s wrong. We don’t have a medical model of them. We have no pathology. So in addition to fundamental questions about how the brain works and consciousness, we can answer questions like, Where did mental disorders come from? What’s wrong with these people? Why are their brains working so differently? Those are perhaps the most important questions to human beings.”

    Late one night, after a long day of trying to make sense of my data, I came across a short story by Jorge Luis Borges that seemed to capture the essence of the brain mapping problem. In the story, On Exactitude in Science, a man named Suarez Miranda wrote of an ancient empire that, through the use of science, had perfected the art of map-making. While early maps were nothing but crude caricatures of the territories they aimed to represent, new maps grew larger and larger, filling in ever more details with each edition. Over time, Borges wrote, “the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province.” Still, the people craved more detail. “In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it.”

    The Borges story reminded me of Lichtman’s view that the brain may be too complex to be understood by humans in the colloquial sense, and that describing it may be a better goal. Still, the idea made me uncomfortable. Much like storytelling, or even information processing in the brain, descriptions must leave some details out. For a description to convey relevant information, the describer has to know which details are important and which are not. Knowing which details are irrelevant requires having some understanding about the thing you’re describing. Will my brain, as intricate as it may be, ever be able to make sense of the two exabytes in a mouse brain?

    Humans have a critical weapon in this fight. Machine learning has been a boon to brain mapping, and the self-reinforcing relationship promises to transform the whole endeavor. Deep learning algorithms (also known as deep neural networks, or DNNs) have in the past decade allowed machines to perform cognitive tasks once thought impossible for computers—not only object recognition, but text transcription and translation, or playing games like Go or chess. DNNs are mathematical models that string together chains of simple functions that approximate real neurons. These algorithms were inspired directly by the physiology and anatomy of the mammalian cortex, but are crude approximations of real brains, based on data gathered in the 1960s. Yet they have surpassed expectations of what machines can do.
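
    The “chains of simple functions” idea can be made concrete in a few lines (a minimal sketch; the layer sizes and random weights are arbitrary and not taken from any model discussed here): each layer is just a linear map followed by a rectified linear unit.

```python
# Minimal sketch of a deep neural network as a chain of simple functions:
# each step is a linear map followed by a rectified linear unit (ReLU).
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)       # the "rectified linear unit" model neuron

layer_sizes = [64, 32, 16, 4]       # input -> two hidden layers -> output
weights = [rng.normal(0, 0.1, (m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Pass an input vector through the chained layers."""
    for W in weights[:-1]:
        x = relu(x @ W)             # simple function: linear map + ReLU
    return x @ weights[-1]          # final linear readout

print(forward(rng.normal(size=64)).shape)   # -> (4,)
```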

    The secret to Lichtman’s progress with mapping the human brain is machine intelligence. Lichtman’s team, in collaboration with Google, is using deep networks to annotate the millions of images from brain slices their microscopes collect. Each scan from an electron microscope is just a set of pixels. Human eyes easily recognize the boundaries of each blob in the image (a neuron’s soma, axon, or dendrite, in addition to everything else in the brain), and with some effort can tell where a particular bit from one slice appears on the next slice. This kind of labeling and reconstruction is necessary to make sense of the vast datasets in connectomics, and has traditionally required armies of undergraduate students or citizen scientists to manually annotate all chunks. DNNs trained on image recognition are now doing the heavy lifting automatically, turning a job that took months or years into one that’s complete in a matter of hours or days. Recently, Google identified each neuron, axon, dendrite, and dendritic spine—and every synapse—in slices of the human cerebral cortex. “It’s unbelievable,” Lichtman said.

    Scientists still need to understand the relationship between those minute anatomical features and dynamical activity profiles of neurons—the patterns of electrical activity they generate—something the connectome data lacks. This is a point on which connectomics has received considerable criticism, mainly by way of example from the worm: Neuroscientists have had the complete wiring diagram of the worm C. elegans for a few decades now, but arguably do not understand the 300-neuron creature in its entirety; how its brain connections relate to its behaviors is still an active area of research.

    Still, structure and function go hand-in-hand in biology, so it’s reasonable to expect one day neuroscientists will know how specific neuronal morphologies contribute to activity profiles. It wouldn’t be a stretch to imagine a mapped brain could be kickstarted into action on a massive server somewhere, creating a simulation of something resembling a human mind. The next leap constitutes the dystopias in which we achieve immortality by preserving our minds digitally, or machines use our brain wiring to make super-intelligent machines that wipe humanity out. Lichtman didn’t entertain the far-out ideas in science fiction, but acknowledged that a network that would have the same wiring diagram as a human brain would be scary. “We wouldn’t understand how it was working any more than we understand how deep learning works,” he said. “Now, suddenly, we have machines that don’t need us anymore.”

    Yet a masterly deep neural network still doesn’t grant us a holistic understanding of the human brain. That point was driven home to me last year at a Computational and Systems Neuroscience conference, a meeting of the who’s-who in neuroscience, which took place outside Lisbon, Portugal. In a hotel ballroom, I listened to a talk by Arash Afraz, a 40-something neuroscientist at the National Institute of Mental Health in Bethesda, Maryland. The model neurons in DNNs are to real neurons what stick figures are to people, and the way they’re connected is equally as sketchy, he suggested.

    Afraz is short, with a dark horseshoe mustache and balding dome covered partially by a thin ponytail, reminiscent of Matthew McConaughey in True Detective. As sturdy Atlantic waves crashed into the docks below, Afraz asked the audience if we remembered René Magritte’s Ceci n’est pas une pipe painting, which depicts a pipe with the title written out below it. Afraz pointed out that the model neurons in DNNs are not real neurons, and the connections among them are not real either. He displayed a classic diagram of interconnections among brain areas found through experimental work in monkeys—a jumble of boxes with names like V1, V2, LIP, MT, HC, each a different color, and black lines connecting the boxes seemingly at random and in more combinations than seems possible. In contrast to the dizzying heap of connections in real brains, DNNs typically connect different brain areas in a simple chain, from one “layer” to the next. Try explaining that to a rigorous anatomist, Afraz said, as he flashed a meme of a shocked baby orangutan cum anatomist. “I’ve tried, believe me,” he said.

    I, too, have been curious why DNNs are so simple compared to real brains. Couldn’t we improve their performance simply by making them more faithful to the architecture of a real brain? To get a better sense for this, I called Andrew Saxe, a computational neuroscientist at Oxford University. Saxe agreed that it might be informative to make our models truer to reality. “This is always the challenge in the brain sciences: We just don’t know what the important level of detail is,” he told me over Skype.

    How do we make these decisions? “These judgments are often based on intuition, and our intuitions can vary wildly,” Saxe said. “A strong intuition among many neuroscientists is that individual neurons are exquisitely complicated: They have all of these back-propagating action potentials, they have dendritic compartments that are independent, they have all these different channels there. And so a single neuron might even itself be a network. To caricature that as a rectified linear unit”—the simple mathematical model of a neuron in DNNs—“is clearly missing out on so much.”

    As 2020 has arrived, I have thought a lot about what I have learned from Lichtman, Afraz, and Saxe and the holy grail of neuroscience: understanding the brain. I have found myself revisiting my undergrad days, when I held science up as the only method of knowing that was truly objective (I also used to think scientists would be hyper-rational, fair beings paramountly interested in the truth—so perhaps this just shows how naive I was).

    It’s clear to me now that while science deals with facts, a crucial part of this noble endeavor is making sense of the facts. The truth is screened through an interpretive lens even before experiments start. Humans, with all our quirks and biases, choose what experiment to conduct in the first place, and how to do it. And the interpretation continues after data are collected, when scientists have to figure out what the data mean. So, yes, science gathers facts about the world, but it is humans who describe it and try to understand it. All these processes require filtering the raw data through a personal sieve, sculpted by the language and culture of our times.

    It seems likely that Lichtman’s two exabytes of brain slices, and even my 48 terabytes of rat brain data, will not fit through any individual human mind. Or at least no human mind is going to orchestrate all this data into a panoramic picture of how the human brain works. As I sat at my office desk, watching the setting sun tint the cloudless sky a light crimson, my mind reached a chromatic, if mechanical, future. The machines we have built—the ones architected after cortical anatomy—fall short of capturing the nature of the human brain. But they have no trouble finding patterns in large datasets. Maybe one day, as they grow stronger building on more cortical anatomy, they will be able to explain those patterns back to us, solving the puzzle of the brain’s interconnections, creating a picture we understand. Out my window, the sparrows were chirping excitedly, not ready to call it a day.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

     
  • richardmitnick 11:52 am on December 22, 2020 Permalink | Reply
    Tags: "Crossing the artificial intelligence thin red line?", , , Machine learning, Stuart Russell: There is huge upside potential in AI but we are already seeing the risks from the poor design of AI systems including the impacts of online misinformation; impersonation; and deception   

    From École Polytechnique Fédérale de Lausanne (CH): “Crossing the artificial intelligence thin red line?” 



    22.12.20
    Tanya Petersen


    EPFL computer science professor tells conference that AI has no legitimate role in defining, implementing, or enforcing public policy.

    Artificial intelligence shapes our modern lives. It will be one of the defining technologies of the future, with its influence and application expected to accelerate as we go through the 2020s. Yet, the stakes are high; with the countless benefits that AI brings, there is also growing academic and public concern around a lack of transparency, and its misuse, in many areas of life.

    It’s in this environment that the European Commission has become one of the first political institutions in the world to release a white paper that could be a game-changer towards a regulatory framework for AI. In addition, this year the European Parliament adopted proposals on how the EU can best regulate artificial intelligence to boost innovation, ethical standards and trust in technology.

    Recently, an all-virtual conference on the ‘Governance Of and By Digital Technology’ hosted by EPFL’s International Risk Governance Center (IRGC) and the European Union’s Horizon 2020 TRIGGER Project explored the principles needed to govern existing and emerging digital technologies, as well as the potential danger of decision-making algorithms and how to prevent these from causing harm.

    Stuart Russell, Professor of Computer Science at the University of California, Berkeley and author of the popular textbook, Artificial Intelligence: A Modern Approach, proposed that there is huge upside potential in AI, but we are already seeing the risks from the poor design of AI systems, including the impacts of online misinformation, impersonation and deception.

    “I believe that if we don’t move quickly, human beings will just be losing their rights, their powers, their individuality and becoming more and more the subject of digital technology rather than the owners of it. For example, there is already AI from 50 different corporate representatives sitting in your pocket stealing your information, and your money, as fast as it can, and there’s nobody in your phone who actually works for you. Could we rearrange that so that the software in your phone actually works for you and negotiates with these other entities to keep all of your data private?” he asked.

    Reinforcement learning algorithms, which select the content people see on their phones or other devices, are a major problem, he continued: “They currently have more power than Hitler or Stalin ever had in their wildest dreams over what billions of people see and read for most of their waking lives. We might argue that running these kinds of experiments without informed consent is a bad idea and, just as we have with pharmaceutical products, we need to have stage 1, 2, and 3 trials on human subjects and look at what effect these algorithms have on people’s minds and behavior.”

    Beyond regulating artificial intelligence aimed at individual use, one of the conference debates focused on how governments might use AI in developing and implementing public policy in areas such as healthcare, urban development or education. Bryan Ford, an Associate Professor at EPFL and head of the Decentralized and Distributed Systems Laboratory (DEDIS) in the School of Computer and Communication Sciences, argued that while the cautious use of powerful AI technologies can play many useful roles in low-level mechanisms used in many application domains, it has no legitimate role to play in defining, implementing, or enforcing public policy.

    “Matters of policy in governing humans must remain a domain reserved strictly for humans. For example, AI may have many justifiable uses in electric sensors to detect the presence of a car – how fast it is going or whether it stopped at an intersection, but I would claim AI does not belong anywhere near the policy decision of whether a car’s driver warrants suspicion and should be stopped by Highway Patrol.”

    “Because machine learning algorithms learn from data sets that represent historical experience, AI driven policy is fundamentally constrained by the assumption that our past represents the right, best, or only viable basis on which to make decisions about the future. Yet we know that all past and present societies are highly imperfect so to have any hope of genuinely improving our societies, governance must be visionary and forward looking,” Professor Ford continued.

    Artificial intelligence is heterogeneous and complex. When we talk about the governance of, and by, AI are we talking about machine learning, neural networks or autonomous agents, or the different applications of any of these in different areas? Likely, all the above in many different applications. We are only at the beginning of the journey when it comes to regulating artificial intelligence, one that most participants agreed has geopolitical implications.

    “These issues may lead directly to a set of trade and geostrategic conflicts that will make them all the more difficult to resolve and all the more crucial. The question is not only to avoid them but to avoid the decoupling of the US from Europe, and Europe and the US from China, and that is going to be a significant challenge economically and geo-strategically,” suggested John Zysman, Professor of Political Science at the University of California, Berkeley and co-Director of the Berkeley Roundtable on the International Economy.

    “Ultimately, there is a thin red line that AI should not cross and some regulation, that balances the benefits and risks from AI applications, is needed. The IRGC is looking at some of the most challenging problems facing society today, and it’s great to have them as part of IC,” said James Larus, Dean of the IC School and IRGC Academic Director.

    Concluding the conference, Marie-Valentine Florin, Executive Director of the IRGC, reminded participants that artificial intelligence is a means to an end, not the end, “as societies we need a goal. Maybe that could be something like the Green Deal around sustainability to perhaps give a sense to today’s digital transformation. Digital transformation is the tool, and I don’t think society has collectively decided a real objective for it yet. That’s what we need to figure out.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition


    EPFL campus

    EPFL (CH) is Europe’s most cosmopolitan technical university. It receives students, professors and staff from over 120 nationalities. With both a Swiss and international calling, it is therefore guided by a constant wish to open up; its missions of teaching, research and partnership impact various circles: universities and engineering schools, developing and emerging countries, secondary schools and gymnasiums, industry and economy, political circles and the general public.

     
  • richardmitnick 4:42 pm on December 21, 2020 Permalink | Reply
    Tags: "Artificial Intelligence Finds Surprising Patterns in Earth's Biological Mass Extinctions", "Radiations" may in fact cause major changes to existing ecosystems- an idea the authors call "destructive creation.", , Machine learning, Mass extinction events, , Phanerozoic Eon-the period for which fossils are available., Species evolution or "radiations" and extinctions are rarely connected., The Phanerozoic represents the most recent ~ 550-million-year period of Earth's total ~4.5 billion-year history and is significant to palaeontologists.,   

    From Tokyo Institute of Technology (JP): “Artificial Intelligence Finds Surprising Patterns in Earth’s Biological Mass Extinctions” 


    December 21, 2020

    Further Information

    Jennifer Hoyal Cuthill
    Affiliate Researcher
    Earth-Life Science Institute (ELSI),
    Tokyo Institute of Technology

    Email j.hoyal-cuthill@essex.ac.uk
    Tel +44-7834352081

    Contact
    Thilina Heenatigala
    Director of Communications
    Earth-Life Science Institute (ELSI),
    Tokyo Institute of Technology
    thilinah@elsi.jp
    Tel +81-3-5734-3163
    Fax +81-3-5734-3416


    Charles Darwin’s landmark opus, On the Origin of Species, ends with a beautiful summary of his theory of evolution: “There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.” In fact, scientists now know that most species that have ever existed are extinct. This extinction of species has on the whole been roughly balanced by the origination of new ones over Earth’s history, with a few major temporary imbalances scientists call mass extinction events. Scientists have long believed that mass extinctions create productive periods of species evolution, or “radiations,” a model called “creative destruction.” A new study led by scientists affiliated with the Earth-Life Science Institute (ELSI) at Tokyo Institute of Technology used machine learning to examine the co-occurrence of fossil species and found that radiations and extinctions are rarely connected, and thus mass extinctions likely rarely cause radiations of a comparable scale.

    Twists of fate.

    A new study [Nature] applies machine learning to the fossil record to visualise life’s history, showing the impacts of major evolutionary events. This shows the long-term evolutionary and ecological impacts of major events of extinction and speciation. Colours represent the geological periods from the Tonian, starting 1 billion years ago, in yellow, to the current Quaternary Period, shown in green. The red to blue colour transition marks the end-Permian mass extinction, one of the most disruptive events in the fossil record. Credit: J. Hoyal Cuthill and N. Guttenberg.

    Creative destruction is central to classic concepts of evolution. It seems clear that there are periods in which many species suddenly disappear, and many new species suddenly appear. However, radiations of a comparable scale to the mass extinctions, which this study, therefore, calls the mass radiations, have received far less analysis than extinction events. This study compared the impacts of both extinction and radiation across the period for which fossils are available, the so-called Phanerozoic Eon. The Phanerozoic (from the Greek meaning “apparent life”) represents the most recent ~550-million-year period of Earth’s total ~4.5 billion-year history, and is significant to palaeontologists: before this period most of the organisms that existed were microbes that didn’t easily form fossils, so the prior evolutionary record is hard to observe. The new study suggests creative destruction isn’t a good description of how species originated or went extinct during the Phanerozoic, and suggests that many of the most remarkable times of evolutionary radiation occurred when life entered new evolutionary and ecological arenas, such as during the Cambrian explosion of animal diversity and the Carboniferous expansion of forest biomes. Whether this is true for the previous ~3 billion years dominated by microbes is not known, as the scarcity of recorded information on such ancient diversity did not allow a similar analysis.

    Palaeontologists have identified a handful of the most severe, mass extinction events in the Phanerozoic fossil record. These principally include the big five mass extinctions, such as the end-Permian mass extinction in which more than 70% of species are estimated to have gone extinct. Biologists have now suggested that we may now be entering a “Sixth Mass Extinction,” which they think is mainly caused by human activity including hunting and land-use changes caused by the expansion of agriculture. A commonly noted example of the previous “Big Five” mass extinctions is the Cretaceous-Tertiary one (usually abbreviated as “K-T,” using the German spelling of Cretaceous) which appears to have been caused when a meteor hit Earth ~65 million years ago, wiping out the non-avian dinosaurs. Observing the fossil record, scientists came to believe that mass extinction events create especially productive radiations. For example, in the K-T dinosaur-exterminating event, it has conventionally been supposed that a wasteland was created, which allowed organisms like mammals to recolonise and “radiate,” allowing for the evolution of all manner of new mammal species, ultimately laying the foundation for the emergence of humans. In other words, if the K-T event of “creative destruction” had not occurred, perhaps we would not be here to discuss this question.

    The new study started with a casual discussion in ELSI’s “Agora,” a large common room where ELSI scientists and visitors often eat lunch and strike up new conversations. Two of the paper’s authors, evolutionary biologist Jennifer Hoyal Cuthill (now a research fellow at Essex University in the UK) and physicist/machine learning expert Nicholas Guttenberg (now a research scientist at Cross Labs working in collaboration with GoodAI in the Czech Republic), who were both post-doctoral scholars at ELSI when the work began, were kicking around the question of whether machine learning could be used to visualise and understand the fossil record. During a visit to ELSI, just before the COVID-19 pandemic began to restrict international travel, they worked feverishly to extend their analysis to examine the correlation between extinction and radiation events. These discussions allowed them to relate their new data to the breadth of existing ideas on mass extinctions and radiations. They quickly found that the evolutionary patterns identified with the help of machine learning differed in key ways from traditional interpretations.

The team used a novel application of machine learning to examine the temporal co-occurrence of species in the Phanerozoic fossil record, examining over a million entries in a massive, curated public database including almost two hundred thousand species.
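The article does not spell out the pipeline, but the basic idea of representing species by their temporal co-occurrence can be sketched as follows. Everything here is a simplified, hypothetical stand-in: the occurrence file and its columns are invented, and a plain linear embedding is used in place of the machine-learning approach of the paper.

```python
# Illustrative sketch only: build a species-by-time-interval occurrence
# matrix from fossil records and embed species by temporal co-occurrence.
import pandas as pd
from sklearn.decomposition import TruncatedSVD

# Hypothetical table of fossil occurrences: one row per (species, geologic stage)
records = pd.read_csv("fossil_occurrences.csv")  # columns: species, stage

# Binary occurrence matrix: rows = species, columns = time intervals (stages)
occ = pd.crosstab(records["species"], records["stage"]).clip(upper=1)

# Species that occur in the same intervals get similar rows; a low-dimensional
# embedding of this matrix groups species by when they co-occur in time.
embedding = TruncatedSVD(n_components=2, random_state=0).fit_transform(occ.values)

for name, (x, y) in list(zip(occ.index, embedding))[:5]:
    print(f"{name}: ({x:.3f}, {y:.3f})")
```

In the study itself a far richer, machine-learned embedding plays this role; the sketch is only meant to show what "examining temporal co-occurrence" means in data terms.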

    Lead author Dr Hoyal Cuthill said, “Some of the most challenging aspects of understanding the history of life are the enormous timescales and numbers of species involved. New applications of machine learning can help by allowing us to visualise this information in a human-readable form. This means we can, so to speak, hold half a billion years of evolution in the palms of our hands, and gain new insights from what we see.”

    Using their objective methods, they found that the “big five” mass extinction events previously identified by palaeontologists were picked up by the machine learning methods as being among the top 5% of significant disruptions in which extinction outpaced radiation or vice versa, as were seven additional mass extinctions, two combined mass extinction-radiation events and fifteen mass radiations. Surprisingly, in contrast to previous narratives emphasising the importance of post-extinction radiations, this work found that the most comparable mass radiations and extinctions were only rarely coupled in time, refuting the idea of a causal relationship between them.

    Co-author Dr Nicholas Guttenberg said, “the ecosystem is dynamic, you don’t necessarily have to chip an existing piece off to allow something new to appear.”

The team further found that radiations may in fact cause major changes to existing ecosystems, an idea the authors call “destructive creation.” They found that, during the Phanerozoic Eon, the species that made up an ecosystem at any one time were, on average, almost all gone 19 million years later. But when mass extinctions or radiations occur, this rate of turnover is much higher.

This gives a new perspective on how the modern “Sixth Extinction” is occurring. The Quaternary Period, which began 2.5 million years ago, has witnessed repeated climate upheavals, including dramatic alternations of glaciation, times when high-latitude locations on Earth were ice-covered. This means that the present “Sixth Extinction” is eroding biodiversity that was already disrupted, and the authors suggest it will take at least 8 million years for turnover to revert to its long-term average of 19 million years. Dr Hoyal Cuthill comments that “each extinction that happens on our watch erases a species, which may have existed for millions of years up to now, making it harder for the normal process of ‘new species origination’ to replace what is being lost.”

See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition


    Tokyo Tech (JP) is the top national university for science and technology in Japan with a history spanning more than 130 years. Of the approximately 10,000 students at the Ookayama, Suzukakedai, and Tamachi Campuses, half are in their bachelor’s degree program while the other half are in master’s and doctoral degree programs. International students number 1,200. There are 1,200 faculty and 600 administrative and technical staff members.

    In the 21st century, the role of science and technology universities has become increasingly important. Tokyo Tech continues to develop global leaders in the fields of science and technology, and contributes to the betterment of society through its research, focusing on solutions to global issues. The Institute’s long-term goal is to become the world’s leading science and technology university.

     
  • richardmitnick 9:08 am on July 29, 2020 Permalink | Reply
    Tags: "A method to predict the properties of complex quantum systems", , Machine learning, Machines are currently unable to support quantum systems with over tens of qubits., , , Quantum state tomography, Unitary t-design   

    From Caltech via phys.org: “A method to predict the properties of complex quantum systems” 


    From Caltech

    via


    phys.org

    July 29, 2020
    Ingrid Fadelli

    Credit: Huang, Kueng & Preskill.

    Predicting the properties of complex quantum systems is a crucial step in the development of advanced quantum technologies. While research teams worldwide have already devised a number of techniques to study the characteristics of quantum systems, most of these have only proved to be effective in some cases.

    Three researchers at California Institute of Technology recently introduced a new method that can be used to predict multiple properties of complex quantum systems from a limited number of measurements. Their method, outlined in a paper published in Nature Physics, has been found to be highly efficient and could open up new possibilities for studying the ways in which machines process quantum information.

    “During my undergraduate, my research centered on statistical machine learning and deep learning,” Hsin-Yuan Huang, one of the researchers who carried out the study, told Phys.org. “A central basis for the current machine-learning era is the ability to use highly parallelized hardware, such as graphical processing units (GPU) or tensor processing units (TPU). It is natural to wonder how an even more powerful learning machine capable of harnessing quantum-mechanical processes could emerge in the far future. This was my aspiration when I started my Ph.D. at Caltech.”

    The first step toward the development of more advanced machines based on quantum-mechanical processes is to gain a better understanding of how current technologies process and manipulate quantum systems and quantum information. The standard method for doing this, known as quantum state tomography, works by learning the entire description of a quantum system. However, this requires an exponential number of measurements, as well as considerable computational memory and time.

    As a result, when using quantum state tomography, machines are currently unable to support quantum systems with over tens of qubits. In recent years, researchers have proposed a number of techniques based on artificial neural networks that could significantly enhance the quantum information processing of machines. Unfortunately, however, these techniques do not generalize well across all cases, and the specific requirements that allow them to work are still unclear.

    “To build a rigorous foundation for how machines can perceive quantum systems, we combined my previous knowledge about statistical learning theory with Richard Kueng and John Preskill’s expertise on a beautiful mathematical theory known as unitary t-design,” Huang said. “Statistical learning theory is the theory that underlies how the machine could learn an approximate model about how the world behaves, while unitary t-design is a mathematical theory that underlies how quantum information scrambles, which is central to understand quantum many-body chaos, in particular, quantum black holes.”

    By combining statistical learning and unitary t-design theory, the researchers were able to devise a rigorous and efficient procedure that allows classical machines to produce approximate classical descriptions of quantum many-body systems. These descriptions can be used to predict several properties of the quantum systems that are being studied by performing a minimal number of quantum measurements.

    “To construct an approximate classical description of the quantum state, we perform a randomized measurement procedure given as follows,” Huang said. “We sample a few random quantum evolutions that would be applied to the unknown quantum many-body system. These random quantum evolutions are typically chaotic and would scramble the quantum information stored in the quantum system.”

    The random quantum evolutions sampled by the researchers ultimately enable the use of the mathematical theory of unitary t-design to study such chaotic quantum systems as quantum black holes. In addition, Huang and his colleagues examined a number of randomly scrambled quantum systems using a measurement tool that elicits a wave function collapse, a process that turns a quantum system into a classical system. Finally, they combined the random quantum evolutions with the classical system representations derived from their measurements, producing an approximate classical description of the quantum system of interest.

    “Intuitively, one could think of this procedure as follows,” Huang explained. “We have an exponentially high-dimensional object, the quantum many-body system, that is very hard to grasp by a classical machine. We perform several random projections of this extremely high-dimension object to a much lower dimensional space through the use of random/chaotic quantum evolution. The set of random projections provides a rough picture of how this exponentially high dimensional object looks, and the classical representation allows us to predict various properties of the quantum many-body system.”

Huang and his colleagues proved that by combining statistical learning constructs and the theory of quantum information scrambling, they could accurately predict M properties of a quantum system from a number of measurements that scales only as log(M). In other words, their method can predict an exponential number of properties simply by repeatedly measuring specific aspects of a quantum system a limited number of times.

    “The traditional understanding is that when we want to measure M properties, we have to measure the quantum system M times,” Huang said. “This is because after we measure one property of the quantum system, the quantum system would collapse and become classical. After the quantum system has turned classical, we cannot measure other properties with the resulting classical system. Our approach avoids this by performing randomly generated measurements and infer the desired property by combining these measurement data.”
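The full protocol in the Nature Physics paper combines random unitary evolutions with a median-of-means estimator; the sketch below is only a stripped-down, single-qubit illustration of the underlying idea, where randomly chosen Pauli-basis measurements produce inverted "snapshots" that average to the unknown state. The test state, basis ensemble, and snapshot count are arbitrary choices, not the authors' implementation.

```python
# Minimal single-qubit "classical shadow" sketch with random Pauli measurements.
import numpy as np

rng = np.random.default_rng(0)

I2 = np.eye(2, dtype=complex)
PAULIS = {
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

# Unknown single-qubit state; |+> is used here as a stand-in.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())

def snapshot(rho):
    """One randomized measurement: choose a random Pauli basis, simulate the
    measurement, and invert the measurement channel to get a snapshot."""
    basis = PAULIS[rng.choice(list(PAULIS))]
    _, evecs = np.linalg.eigh(basis)
    probs = np.clip(np.real([v.conj() @ rho @ v for v in evecs.T]), 0, None)
    k = rng.choice(2, p=probs / probs.sum())
    v = evecs[:, k]
    # For single-qubit random Pauli measurements the inverse channel gives
    # 3|v><v| - I, so snapshots average to the true state.
    return 3 * np.outer(v, v.conj()) - I2

shadows = [snapshot(rho) for _ in range(2000)]
rho_hat = np.mean(shadows, axis=0)

# The same set of snapshots can now be reused to estimate many observables.
for name, O in PAULIS.items():
    print(name, round(float(np.real(np.trace(O @ rho_hat))), 2))
```

The key point the sketch captures is that one fixed set of randomized measurements serves to estimate several different observables afterwards, rather than requiring a fresh measurement campaign per property.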

The study partly explains the excellent performance achieved by recently developed machine learning (ML) techniques in predicting properties of quantum systems. In addition, its design makes the new method significantly faster than existing ML techniques, while also allowing it to predict properties of quantum many-body systems with greater accuracy.

    “Our study rigorously shows that there is much more information hidden in the data obtained from quantum measurements than we originally expected,” Huang said. “By suitably combining these data, we can infer this hidden information and gain significantly more knowledge about the quantum system. This implies the importance of data science techniques for the development of quantum technology.”

    The results of tests the team conducted suggest that to leverage the power of machine learning, it is first necessary to attain a good understanding of intrinsic quantum physics mechanisms. Huang and his colleagues showed that although directly applying standard machine-learning techniques can lead to satisfactory results, organically combining the mathematics behind machine learning and quantum physics results in far better quantum information processing performance.

    “Given a rigorous ground for perceiving quantum systems with classical machines, my personal plan is to now take the next step toward creating a learning machine capable of manipulating and harnessing quantum-mechanical processes,” Huang said. “In particular, we want to provide a solid understanding of how machines could learn to solve quantum many-body problems, such as classifying quantum phases of matter or finding quantum many-body ground states.”

    This new method for constructing classical representations of quantum systems could open up new possibilities for the use of machine learning to solve challenging problems involving quantum many-body systems. To tackle these problems more efficiently, however, machines would also need to be able to simulate a number of complex computations, which would require a further synthesis between the mathematics underlying machine learning and quantum physics. In their next studies, Huang and his colleagues plan to explore new techniques that could enable this synthesis.

    “At the same time, we are also working on refining and developing new tools for inferring hidden information from the data collected by quantum experimentalists,” Huang said. “The physical limitation in the actual systems provides interesting challenges for developing more advanced techniques. This would further allow experimentalists to see what they originally could not and help advance the current state of quantum technology.”

See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    The California Institute of Technology (commonly referred to as Caltech) is a private research university located in Pasadena, California, United States. Caltech has six academic divisions with strong emphases on science and engineering. Its 124-acre (50 ha) primary campus is located approximately 11 mi (18 km) northeast of downtown Los Angeles. “The mission of the California Institute of Technology is to expand human knowledge and benefit society through research integrated with education. We investigate the most challenging, fundamental problems in science and technology in a singularly collegial, interdisciplinary atmosphere, while educating outstanding students to become creative members of society.”

    Caltech campus

     
  • richardmitnick 12:47 pm on July 17, 2020 Permalink | Reply
    Tags: "Separating Gamma-Ray Bursts: Students Make Critical Breakthrough", , , , , Machine learning, , Scientists at the Niels Bohr Institute have developed a method to classify all GRBs without needing to find an afterglow.   

From Niels Bohr Institute: “Separating Gamma-Ray Bursts: Students Make Critical Breakthrough” 

    University of Copenhagen


    From Niels Bohr Institute

    17 July 2020
    Charles Louis Steinhardt, Associate professor
    The Cosmic Dawn Center
    Email: Steinhardt@nbi.ku.dk
    Phone: +45 35 33 50 10

    Gamma-Ray Bursts: By applying a machine-learning algorithm, scientists at the Niels Bohr Institute, University of Copenhagen, have developed a method to classify all gamma-ray bursts (GRBs), rapid highly energetic explosions in distant galaxies, without needing to find an afterglow – by which GRBs are presently categorized. This breakthrough, initiated by first-year B.Sc. students, may prove key in finally discovering the origins of these mysterious bursts. The result is now published in The Astrophysical Journal Letters.

The figure indicates how similar different GRBs are to each other. Points which are closer together are more similar, and points which are further away are more different. What we find is that there are two distinct groups, one orange and the other blue. The orange dots appear to correspond to “short” GRBs, which have been hypothesized to be produced by mergers of neutron stars, and the blue dots appear to correspond to “long” GRBs, which might instead be produced by the collapse of dying, massive stars.

Ever since gamma-ray bursts (GRBs) were accidentally picked up by Cold War satellites in the 1970s, the origin of these rapid bursts has been a significant puzzle. Although many astronomers agree that GRBs can be divided into shorter (typically less than 1 second) and longer (up to a few minutes) bursts, the two groups overlap. It has been thought that longer bursts might be associated with the collapse of massive stars, while shorter bursts might instead be caused by the merger of neutron stars. However, without the ability to separate the two groups and pinpoint their properties, it has been impossible to test these ideas.

So far, it has only been possible to determine the type of a GRB about 1% of the time, when a telescope was able to point at the burst location quickly enough to pick up residual light, called an afterglow. This has been such a crucial step that astronomers have developed worldwide networks capable of interrupting other work and repointing large telescopes within minutes of the discovery of a new burst. One short GRB was even accompanied by gravitational waves from a neutron-star merger detected by the LIGO observatory; LIGO’s detection of gravitational waves was recognized with the 2017 Nobel Prize in Physics.

_________________________________________________

Caltech/MIT Advanced aLIGO detector installation, Hanford, WA, USA

Caltech/MIT Advanced aLIGO detector installation, Livingston, LA, USA

Cornell SXS, the Simulating eXtreme Spacetimes (SXS) project

Gravitational waves. Credit: MPI for Gravitational Physics/W. Benger-Zib

ESA/eLISA, the future of gravitational wave research

_________________________________________________

    Breakthrough achieved using machine-learning algorithm

Now, scientists at the Niels Bohr Institute have developed a method to classify all GRBs without needing to find an afterglow. The group, led by first-year B.Sc. Physics students Johann Bock Severin, Christian Kragh Jespersen and Jonas Vinther, applied a machine-learning algorithm to classify GRBs. They identified a clean separation between long and short GRBs. Their work, carried out under the supervision of Charles Steinhardt, will bring astronomers a step closer to understanding GRBs.

    This breakthrough may prove the key to finally discovering the origins of these mysterious bursts. As Charles Steinhardt, Associate Professor at the Cosmic Dawn Center of the Niels Bohr Institute explains, “Now that we have two complete sets available, we can start exploring the differences between them. So far, there had not been a tool to do that.”

    Artist’s impression of a gamma-ray burst. Credit: ESA, illustration by ESA/ECF

    From algorithm to visual map

Instead of using a limited set of summary statistics, as was typically done until then, the students decided to encode all available information on GRBs using the machine learning algorithm t-SNE. The t-distributed stochastic neighbor embedding (t-SNE) algorithm takes complex high-dimensional data and produces a simplified and visually accessible map. It does so without interfering with the structure of the dataset. “The unique thing about this approach,” explains Christian Kragh Jespersen, “is that t-SNE doesn’t force there to be two groups. You let the data speak for itself and tell you how it should be classified.”
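As a rough sketch of that workflow (not the authors’ actual Swift data preparation), one could run scikit-learn’s t-SNE on a table of per-burst features and look for clusters in the resulting 2D map. The input file and feature set below are hypothetical placeholders.

```python
# Illustrative sketch: embed a per-burst feature matrix with t-SNE and plot it.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.manifold import TSNE

# Hypothetical table: one row per GRB, columns are light-curve-derived features
X = pd.read_csv("grb_features.csv", index_col=0)

X_scaled = StandardScaler().fit_transform(X.values)

# t-SNE maps the high-dimensional feature space to 2D without assuming
# a fixed number of groups; any clustering is left for the data to reveal.
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X_scaled)

plt.scatter(emb[:, 0], emb[:, 1], s=5)
plt.xlabel("t-SNE 1")
plt.ylabel("t-SNE 2")
plt.show()
```

If two well-separated clouds appear in such a map, that is the kind of unforced separation between burst populations the students describe.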

    Shining light on the data

    The preparation of the feature space – the input you give the algorithm – was the most challenging part of the project, says Johann Bock Severin. Essentially, the students had to prepare the dataset in such a way that its most important features would stand out. “I like to compare it to hanging your data points from the ceiling in a dark room,” explains Christian Kragh Jespersen. “Our main problem was to figure out from what direction we should shine light on the data to make the separations visible.”

Step 0 in understanding GRBs

The students explored the t-SNE machine-learning algorithm as part of their First Year Project, a first-year course in the Bachelor of Physics programme. “By the time we got to the end of the course, it was clear we had quite a significant result,” their supervisor Charles Steinhardt says. The students’ t-SNE mapping cleanly divides all GRBs from the Swift observatory into two groups. Importantly, it classifies GRBs that previously were difficult to classify. “This essentially is step 0 in understanding GRBs,” explains Steinhardt. “For the first time, we can confirm that shorter and longer GRBs are indeed completely separate things.”

Without any prior theoretical background in astronomy, the students have discovered a key piece of the puzzle surrounding GRBs. From here, astronomers can start to develop models to identify the characteristics of these two separate classes.

See the full article here.




    Stem Education Coalition

    Niels Bohr Institute Campus

    Niels Bohr Institute (Danish: Niels Bohr Institutet) is a research institute of the University of Copenhagen. The research of the institute spans astronomy, geophysics, nanotechnology, particle physics, quantum mechanics and biophysics.

    The Institute was founded in 1921, as the Institute for Theoretical Physics of the University of Copenhagen, by the Danish theoretical physicist Niels Bohr, who had been on the staff of the University of Copenhagen since 1914, and who had been lobbying for its creation since his appointment as professor in 1916. On the 80th anniversary of Niels Bohr’s birth – October 7, 1965 – the Institute officially became The Niels Bohr Institute.[1] Much of its original funding came from the charitable foundation of the Carlsberg brewery, and later from the Rockefeller Foundation.[2]

    During the 1920s, and 1930s, the Institute was the center of the developing disciplines of atomic physics and quantum physics. Physicists from across Europe (and sometimes further abroad) often visited the Institute to confer with Bohr on new theories and discoveries. The Copenhagen interpretation of quantum mechanics is named after work done at the Institute during this time.

    On January 1, 1993 the institute was fused with the Astronomic Observatory, the Ørsted Laboratory and the Geophysical Institute. The new resulting institute retained the name Niels Bohr Institute.

    The University of Copenhagen (UCPH) (Danish: Københavns Universitet) is the oldest university and research institution in Denmark. Founded in 1479 as a studium generale, it is the second oldest institution for higher education in Scandinavia after Uppsala University (1477). The university has 23,473 undergraduate students, 17,398 postgraduate students, 2,968 doctoral students and over 9,000 employees. The university has four campuses located in and around Copenhagen, with the headquarters located in central Copenhagen. Most courses are taught in Danish; however, many courses are also offered in English and a few in German. The university has several thousands of foreign students, about half of whom come from Nordic countries.

The university is a member of the International Alliance of Research Universities (IARU), along with University of Cambridge, Yale University, The Australian National University, and UC Berkeley, amongst others. The 2016 Academic Ranking of World Universities ranks the University of Copenhagen as the best university in Scandinavia and 30th in the world, the 2016-2017 Times Higher Education World University Rankings as 120th in the world, and the 2016-2017 QS World University Rankings as 68th in the world. The university has had 9 alumni become Nobel laureates and has produced one Turing Award recipient.

     
  • richardmitnick 7:25 am on July 15, 2020 Permalink | Reply
    Tags: , , , , Machine learning, , SPOCK — Stability of Planetary Orbital Configurations Klassifier   

    From Princeton University: “Artificial intelligence predicts which planetary systems will survive” 

    From Princeton University

    Jul 13, 2020
    Liz Fuller-Wright


    Why don’t planets collide more often? How do planetary systems — like our solar system or multi-planet systems around other stars — organize themselves? Of all of the possible ways planets could orbit, how many configurations will remain stable over the billions of years of a star’s life cycle?

    Rejecting the large range of unstable possibilities — all the configurations that would lead to collisions — would leave behind a sharper view of planetary systems around other stars, but it’s not as easy as it sounds.

    “Separating the stable from the unstable configurations turns out to be a fascinating and brutally hard problem,” said Daniel Tamayo, a NASA Hubble Fellowship Program Sagan Fellow in astrophysical sciences at Princeton. To make sure a planetary system is stable, astronomers need to calculate the motions of multiple interacting planets over billions of years and check each possible configuration for stability — a computationally prohibitive undertaking.

    Astronomers since Isaac Newton have wrestled with the problem of orbital stability, but while the struggle contributed to many mathematical revolutions, including calculus and chaos theory, no one has found a way to predict stable configurations theoretically. Modern astronomers still have to “brute-force” the calculations, albeit with supercomputers instead of abaci or slide rules.

    Tamayo realized that he could accelerate the process by combining simplified models of planets’ dynamical interactions with machine learning methods. This allows the elimination of huge swaths of unstable orbital configurations quickly — calculations that would have taken tens of thousands of hours can now be done in minutes. He is the lead author on a paper [PNAS] detailing the approach in the Proceedings of the National Academy of Sciences. Co-authors include graduate student Miles Cranmer and David Spergel, Princeton’s Charles A. Young Professor of Astronomy on the Class of 1897 Foundation, Emeritus.

    For most multi-planet systems, there are many orbital configurations that are possible given current observational data, of which not all will be stable. Many configurations that are theoretically possible would “quickly” — that is, in not too many millions of years — destabilize into a tangle of crossing orbits. The goal was to rule out those so-called “fast instabilities.”

    “We can’t categorically say ‘This system will be OK, but that one will blow up soon,’” Tamayo said. “The goal instead is, for a given system, to rule out all the unstable possibilities that would have already collided and couldn’t exist at the present day.”

Instead of simulating a given configuration for a billion orbits (the traditional brute-force approach, which would take about 10 hours), Tamayo’s model simulates it for 10,000 orbits, which takes only a fraction of a second. From this short snippet, they calculate 10 summary metrics that capture the system’s resonant dynamics. Finally, they train a machine learning algorithm to predict from these 10 features whether the configuration would remain stable if it were allowed to run out to one billion orbits.
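A minimal sketch of that final classification step might look like the following, with a generic gradient-boosted classifier standing in for the published model and a hypothetical file of precomputed summary metrics (`short_integration_features.csv`, one row per simulated system with a 0/1 `stable` label obtained from a long integration) as the training data.

```python
# Illustrative sketch: train a classifier on short-integration summary features
# to predict long-term stability. Not the published SPOCK pipeline itself.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Hypothetical dataset: 10 summary metrics from a 10,000-orbit integration,
# plus a label indicating whether the system survived a billion-orbit run.
data = pd.read_csv("short_integration_features.csv")
features = [c for c in data.columns if c != "stable"]

X_train, X_test, y_train, y_test = train_test_split(
    data[features], data["stable"], test_size=0.2, random_state=0)

clf = GradientBoostingClassifier().fit(X_train, y_train)

# Evaluate how well the cheap features predict the expensive stability label.
print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```

The payoff of this design is that the expensive billion-orbit integrations are only needed once, to build the training labels; afterwards each new configuration costs a fraction of a second to score.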

    “We called the model SPOCK — Stability of Planetary Orbital Configurations Klassifier — partly because the model determines whether systems will ‘live long and prosper,’” Tamayo said.

SPOCK determines the long-term stability of planetary configurations about 100,000 times faster than the previous approach, breaking the computational bottleneck. Tamayo cautioned that while he and his colleagues haven’t “solved” the general problem of planetary stability, SPOCK does reliably identify fast instabilities in compact systems, which they argue are the most important in trying to do stability-constrained characterization.

    “This new method will provide a clearer window into the orbital architectures of planetary systems beyond our own,” Tamayo said.

    But how many planetary systems are there? Isn’t our solar system the only one?

    In the past 25 years, astronomers have found more than 4,000 planets orbiting other stars, of which almost half are in multi-planet systems. But since small exoplanets are extremely challenging to detect, we still have an incomplete picture of their orbital configurations.

    “More than 700 stars are now known to have two or more planets orbiting around them,” said Professor Michael Strauss, chair of Princeton’s Department of Astrophysical Sciences. “Dan and his colleagues have found a fundamentally new way to explore the dynamics of these multi-planet systems, speeding up the computer time needed to make models by factors of 100,000. With this, we can hope to understand in detail the full range of solar system architectures that nature allows.”

    SPOCK is especially helpful for making sense of some of the faint, far-distant planetary systems recently spotted by the Kepler telescope, said Jessie Christiansen, an astrophysicist with the NASA Exoplanet Archive who was not involved in this research. “It’s hard to constrain their properties with our current instruments,” she said. “Are they rocky planets, ice giants, or gas giants? Or something new? This new tool will allow us to rule out potential planet compositions and configurations that would be dynamically unstable — and it lets us do it more precisely and on a substantially larger scale than was previously available.”

    “Predicting the long-term stability of compact multi-planet systems” by Daniel Tamayo, Miles Cranmer, Samuel Hadden, Hanno Rein, Peter Battaglia, Alysa Obertas, Philip J. Armitage, Shirley Ho, David Spergel, Christian Gilbertson, Naireen Hussain, Ari Silburt, Daniel Jontof-Hutter and Kristen Menou, appears in the current issue of the PNAS. Tamayo’s research was supported by the NASA Hubble Fellowship (grant HST-HF2-51423.001-A) awarded by the Space Telescope Science Institute.

See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    About Princeton: Overview

    Princeton University is a vibrant community of scholarship and learning that stands in the nation’s service and in the service of all nations. Chartered in 1746, Princeton is the fourth-oldest college in the United States. Princeton is an independent, coeducational, nondenominational institution that provides undergraduate and graduate instruction in the humanities, social sciences, natural sciences and engineering.

    As a world-renowned research university, Princeton seeks to achieve the highest levels of distinction in the discovery and transmission of knowledge and understanding. At the same time, Princeton is distinctive among research universities in its commitment to undergraduate teaching.

    Today, more than 1,100 faculty members instruct approximately 5,200 undergraduate students and 2,600 graduate students. The University’s generous financial aid program ensures that talented students from all economic backgrounds can afford a Princeton education.

    Princeton Shield

     
  • richardmitnick 9:49 am on October 20, 2019 Permalink | Reply
    Tags: A lot in common with facial recognition at Facebook and other social media., , , , , , , , Improving on standard methods for estimating the dark matter content of the universe through artificial intelligence., Machine learning, The scientists used their fully trained neural network to analyse actual dark matter maps from the KiDS-​450 dataset., Using cutting-​edge machine learning algorithms for cosmological data analysis.,   

    From ETH Zürich: “Artificial intelligence probes dark matter in the universe” 


    From ETH Zürich

    18.09.2019
    Oliver Morsch

    A team of physicists and computer scientists at ETH Zürich has developed a new approach to the problem of dark matter and dark energy in the universe. Using machine learning tools, they programmed computers to teach themselves how to extract the relevant information from maps of the universe.

Excerpt from a typical computer-generated dark matter map used by the researchers to train the neural network. (Source: ETH Zürich)

Understanding how our universe came to be what it is today, and what its final destiny will be, is one of the biggest challenges in science. The awe-inspiring display of countless stars on a clear night gives us some idea of the magnitude of the problem, and yet that is only part of the story. The deeper riddle lies in what we cannot see, at least not directly: dark matter and dark energy. With dark matter pulling the universe together and dark energy causing it to expand faster, cosmologists need to know exactly how much of those two is out there in order to refine their models.

At ETH Zürich, scientists from the Department of Physics and the Department of Computer Science have now joined forces to improve on standard methods for estimating the dark matter content of the universe through artificial intelligence. They used cutting-edge machine learning algorithms for cosmological data analysis that have a lot in common with those used for facial recognition by Facebook and other social media. Their results have recently been published in the scientific journal Physical Review D.

    Facial recognition for cosmology

While there are no faces to be recognized in pictures taken of the night sky, cosmologists still look for something rather similar, as Tomasz Kacprzak, a researcher in the group of Alexandre Refregier at the Institute of Particle Physics and Astrophysics, explains: “Facebook uses its algorithms to find eyes, mouths or ears in images; we use ours to look for the tell-tale signs of dark matter and dark energy.” As dark matter cannot be seen directly in telescope images, physicists rely on the fact that all matter – including the dark variety – slightly bends the path of light rays arriving at the Earth from distant galaxies. This effect, known as “weak gravitational lensing”, distorts the images of those galaxies very subtly, much like far-away objects appear blurred on a hot day as light passes through layers of air at different temperatures.

    Weak gravitational lensing NASA/ESA Hubble

Cosmologists can use that distortion to work backwards and create mass maps of the sky showing where dark matter is located. Next, they compare those dark matter maps to theoretical predictions in order to find which cosmological model most closely matches the data. Traditionally, this is done using human-designed statistics such as so-called correlation functions that describe how different parts of the maps are related to each other. Such statistics, however, are limited as to how well they can find complex patterns in the matter maps.

    Neural networks teach themselves

    “In our recent work, we have used a completely new methodology”, says Alexandre Refregier. “Instead of inventing the appropriate statistical analysis ourselves, we let computers do the job.” This is where Aurelien Lucchi and his colleagues from the Data Analytics Lab at the Department of Computer Science come in. Together with Janis Fluri, a PhD student in Refregier’s group and lead author of the study, they used machine learning algorithms called deep artificial neural networks and taught them to extract the largest possible amount of information from the dark matter maps.

    2
    Once the neural network has been trained, it can be used to extract cosmological parameters from actual images of the night sky. (Visualisations: ETH Zürich)

In a first step, the scientists trained the neural networks by feeding them computer-generated data that simulates the universe. That way, they knew what the correct answer for a given cosmological parameter – for instance, the ratio between the total amount of dark matter and dark energy – should be for each simulated dark matter map. By repeatedly analysing the dark matter maps, the neural network taught itself to look for the right kind of features in them and to extract more and more of the desired information. In the Facebook analogy, it got better at distinguishing random oval shapes from eyes or mouths.
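As a toy illustration of that training setup (not the architecture, map resolution, or data actually used in the study), a small convolutional network can be trained to regress a cosmological parameter from simulated maps. The file names, map size, and network below are hypothetical placeholders.

```python
# Illustrative sketch: regress a cosmological parameter from simulated maps.
import numpy as np
import tensorflow as tf

# Placeholder training data: simulated dark matter maps and the parameter
# value used to generate each one (e.g. a matter-density-like quantity).
maps = np.load("simulated_maps.npy")      # shape: (n_maps, 128, 128, 1)
params = np.load("true_parameters.npy")   # shape: (n_maps,)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(128, 128, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),  # predicted parameter value for each map
])
model.compile(optimizer="adam", loss="mse")
model.fit(maps, params, epochs=10, validation_split=0.1)
```

Because the simulations come with known parameter values, the network can learn which map features carry cosmological information, which is exactly the step that replaces hand-designed correlation functions in the approach described above.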

More accurate than human-made analysis

The results of that training were encouraging: the neural networks came up with values that were 30% more accurate than those obtained by traditional methods based on human-made statistical analysis. For cosmologists, that is a huge improvement as reaching the same accuracy by increasing the number of telescope images would require twice as much observation time – which is expensive.

    Finally, the scientists used their fully trained neural network to analyse actual dark matter maps from the KiDS-​450 dataset. “This is the first time such machine learning tools have been used in this context,” says Fluri, “and we found that the deep artificial neural network enables us to extract more information from the data than previous approaches. We believe that this usage of machine learning in cosmology will have many future applications.”

    As a next step, he and his colleagues are planning to apply their method to bigger image sets such as the Dark Energy Survey.

    Also, more cosmological parameters and refinements such as details about the nature of dark energy will be fed to the neural networks.

See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    ETH Zurich campus
    ETH Zürich is one of the leading international universities for technology and the natural sciences. It is well known for its excellent education, ground-breaking fundamental research and for implementing its results directly into practice.

    Founded in 1855, ETH Zürich today has more than 18,500 students from over 110 countries, including 4,000 doctoral students. To researchers, it offers an inspiring working environment, to students, a comprehensive education.

    Twenty-one Nobel Laureates have studied, taught or conducted research at ETH Zürich, underlining the excellent reputation of the university.

     