Tagged: Machine learning

  • richardmitnick 11:01 am on July 19, 2021 Permalink | Reply
    Tags: Machine learning

    From Netherlands Institute for Radio Astronomy (ASTRON) (NL) : “ASTRON reveals life cycle of supermassive black hole” 


    From Netherlands Institute for Radio Astronomy (ASTRON) (NL)

    12 January 2021 [Just now in social media.]

    Astronomers have used, for the first time, the combination of LOFAR [below] and WSRT-Apertif, the phased-array upgrade of the Westerbork Synthesis Radio Telescope [below], to measure the life cycle of supermassive black holes emitting radio waves. This study, part of the LOFAR deep fields surveys, opens the possibility of timing this cycle for many objects in the sky and exploring the impact it has on galaxy evolution.

    Supermassive black holes are an important component of galaxies. When in their active phase, they eject huge amounts of energy, which eventually can expel gas and matter from galaxies and impact the entire formation of new stars.

    These ejections represent only a phase in the life cycle of a supermassive black hole. They are believed to last from tens of millions to a few hundred million years, only a short moment in the life of a galaxy. After this, the supermassive black hole enters a quiet phase. However, astronomers think that this cycle can repeat multiple times, with the black hole starting a new phase of ejections each time. But timing this cycle is hard because the timescales involved are far too long to be probed directly: other ways to measure them easily in a large number of objects are needed.

    Radio wave ejections

    Some of the energy – also called ‘flux’ – is ejected by the supermassive black hole in the form of radio waves. Radio waves at both low and high frequencies are emitted and can be detected by sensitive radio telescopes like LOFAR (low-frequency radio waves) and WSRT-Apertif (high-frequency radio waves). “High-frequency radio waves quickly lose their energy – and, as a consequence, their flux – while those at lower frequencies do so much more slowly,” says Prof. Dr. Raffaella Morganti, first author of the paper in Astronomy and Astrophysics.

    By observing these supermassive black holes with both LOFAR and WSRT-Apertif, scientists have been able to say which supermassive black holes are, at present, ‘switched off’ and how long ago that happened. They have also identified a case where the ejection phase of the supermassive black hole has ‘recently’ restarted.

    Dying supermassive black holes

    In a previous study, LOFAR was used to find possible supermassive black holes in the dying or restarting phase by taking advantage of their properties at low frequencies. In this study, these same sources were also surveyed with WSRT-Apertif, thus measuring radio waves at higher frequencies. The relative strength of the emission at these two frequencies is used to derive, to first order, how old a radio source is and whether it is already in a dying phase (see Figure 1).

    Figure 1: LOFAR and WSRT-Apertif detection of supermassive black hole radio waves. The difference in flux at which LOFAR and WSRT-Apertif detect a supermassive black hole determines if it is in its ejection phase (a) or not (b). The lower the flux of b, the longer it has been since the supermassive black hole was in its ejection phase. © Studio Eigen Merk/ASTRON.
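
    In code terms, the comparison boils down to a spectral index: how steeply the flux falls between the low and the high frequency. The sketch below is a minimal illustration of that classification step, not code from the study; the frequencies, flux values, and the steepness cutoff are illustrative assumptions.

```python
# A minimal sketch, not code from the study: classify a radio source as
# "active" or "dying/remnant" from its flux density at a LOFAR-like low
# frequency and an Apertif-like high frequency. The cutoff is an
# illustrative assumption.
import numpy as np

def spectral_index(flux_low, flux_high, freq_low_mhz=150.0, freq_high_mhz=1400.0):
    """Spectral index alpha, defined by S(nu) ~ nu**alpha."""
    return np.log(flux_high / flux_low) / np.log(freq_high_mhz / freq_low_mhz)

def classify(flux_low, flux_high, steep_cutoff=-1.2):
    alpha = spectral_index(flux_low, flux_high)
    # A very steep spectrum means the high-frequency emission has faded:
    # the ejection phase has likely switched off.
    return "dying or remnant" if alpha < steep_cutoff else "active"

# Example: flux densities in mJy at 150 MHz (LOFAR) and 1400 MHz (Apertif)
print(classify(flux_low=120.0, flux_high=4.0))   # very steep -> dying or remnant
print(classify(flux_low=120.0, flux_high=30.0))  # flatter -> active
```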

    Morganti: “Because of our earlier studies using LOFAR, we knew the expected relative difference in flux between the lower and higher frequencies if the supermassive black holes are in the active, ejection phase. Comparing them with the now-available Apertif data, we were able to tell, for each of them, whether the ongoing activity was confirmed or whether the ejection phase had stopped.

    “Interestingly, the relative number of radio galaxies found in the ‘out’ phase also tells us how long a supermassive black hole has been ‘switched off’. These objects are rare, so large surveys are necessary to collect enough data about them to build a sample large enough for statistical analysis.”

    Great combination

    With this proof-of-concept study, Morganti and colleagues have demonstrated that a combined LOFAR and WSRT-Apertif survey can indeed determine the phase a supermassive black hole is currently in. Morganti: “LOFAR is unique in sensitivity and spatial resolution at the low frequencies. And while there are other radio telescopes that can observe the higher frequencies, Apertif is now covering large areas of the northern sky in depth, instead of focusing on a single source.” That is key, because Morganti and colleagues plan to chart all detectable supermassive black holes with radio emission, in order to learn more about the birth and life cycles of galaxies.

    A next step will be to create an automated way to detect these sources over much larger areas, using the large surveys that LOFAR and Apertif are carrying out. This is too big a job for a small group to do manually, so approaches like Radio Galaxy Zoo and machine learning will be the way forward.

    Figure 2: Part of the radio sky observed by this project where many galaxies with supermassive black holes emitting radio waves can be seen. The colours give an indication of the phase in the active life of the supermassive black hole. The red colours represent emission from black holes, in the later phase, at the end of their active life. Greener colours represent black holes in their “youth”. © ASTRON.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    ASTRON is the Netherlands Institute for Radio Astronomy [Nederlands Instituut voor Radioastronomie] (NL). Its main office is in Dwingeloo in the Dwingelderveld National Park in the province of Drenthe. ASTRON is part of the Netherlands Organisation for Scientific Research (NWO).

    ASTRON’s main mission is to make discoveries in radio astronomy happen, via the development of new and innovative technologies, the operation of world-class radio astronomy facilities, and the pursuit of fundamental astronomical research. Engineers and astronomers at ASTRON have an outstanding international reputation for novel technology development, and fundamental research in galactic and extra-galactic astronomy. Its main funding comes from NWO.

    ASTRON’s programme has three principal elements:

    The operation of front line observing facilities, including especially the Westerbork Synthesis Radio Telescope and LOFAR,
    The pursuit of fundamental astronomical research using ASTRON facilities, together with a broad range of other telescopes around the world and space-borne instruments (e.g., Spitzer, HST, etc.)
    A strong technology development programme, encompassing both innovative instrumentation for existing telescopes and the new technologies needed for future facilities.

    In addition, ASTRON is active in the international science policy arena and is one of the leaders in the international SKA project. The Square Kilometre Array will be the world’s largest and most sensitive radio telescope, with a total collecting area of approximately one square kilometre. The SKA will be built in Southern Africa and in Australia. It is a global enterprise bringing together 11 countries from five continents.

    Radio telescopes

    ASTRON operates the Westerbork Synthesis Radio Telescope (WSRT), one of the largest radio telescopes in the world. The WSRT and the International LOFAR Telescope (ILT) are dedicated to exploring the universe at radio frequencies ranging from 10 MHz to 8 GHz.

    In addition to its use as a stand-alone radio telescope, the Westerbork array participates in the European Very Long Baseline Interferometry Network (EVN) of radio telescopes.

    ASTRON is the host institute for the Joint Institute for VLBI in Europe (JIVE).

    Its primary task is to operate the EVN MkIV VLBI Data Processor (correlator). JIVE also provides a high level of support to astronomers and the Telescope Network. ASTRON also hosts the NOVA Optical/Infrared instrumentation group.

    LOFAR is a radio telescope composed of an international network of antenna stations and is designed to observe the universe at frequencies between 10 and 250 MHz. Operated by ASTRON (NL), the network includes stations in the Netherlands, Germany, Sweden, the U.K., France, Poland and Ireland.

     
  • richardmitnick 10:13 am on July 16, 2021 Permalink | Reply
    Tags: "Realizing Machine Learning’s Promise in Geoscience Remote Sensing", , , Imaging spectroscopy geoscience, In recent years machine learning and pattern recognition methods have become common in Earth and space sciences., Machine learning, The writers conclude that the recent boom in machine learning and signal processing research has not yet made a commensurate impact on the use of imaging spectroscopy in applied sciences.   

    From Eos: “Realizing Machine Learning’s Promise in Geoscience Remote Sensing” 

    From AGU

    From Eos

    8 July 2021
    David Thompson
    Philip G. Brodrick

    Machine learning and signal processing methods offer significant benefits to the geosciences, but realizing this potential will require closer engagement among different research communities.

    Remote imaging spectrometers acquire a cube of data with two spatial dimensions and one spectral dimension. These rich data products are used in a wide range of geoscience applications. Their high dimensionality and volumes seem well suited to data-driven analysis with machine learning tools, but after a decade of research, machine learning’s influence on imaging spectroscopy geoscience has been limited.

    In recent years machine learning and pattern recognition methods have become common in Earth and space sciences. This is especially true for remote sensing applications, which often rely on massive archives of noisy data and so are well suited to such artificial intelligence (AI) techniques.

    As the data science revolution matures, we can assess its impact on specific research disciplines. We focus here on imaging spectroscopy, also known as hyperspectral imaging, as a data-centric remote sensing discipline expected to benefit from machine learning. Imaging spectroscopy involves collecting spectral data from airborne and satellite sensors at hundreds of electromagnetic wavelengths for each pixel in the sensors’ viewing area.
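
    To make the data concrete: each acquisition is a three-dimensional array with two spatial axes and one spectral axis, from which one can pull a full spectrum for any pixel or an image at any single wavelength. The toy sketch below uses random numbers in place of real reflectance; the dimensions are illustrative assumptions (the 224-band count echoes AVIRIS-class instruments).

```python
# A minimal sketch of handling an imaging-spectroscopy "cube": two spatial
# dimensions and one spectral dimension. Real data would come from a mission
# archive; the values here are random placeholders.
import numpy as np

rows, cols, bands = 100, 120, 224          # band count echoes AVIRIS-class sensors
cube = np.random.rand(rows, cols, bands)   # stand-in for reflectance in [0, 1]

pixel_spectrum = cube[42, 57, :]   # one spectrum: reflectance vs. wavelength
band_image = cube[:, :, 100]       # one wavelength: a spatial map

print(pixel_spectrum.shape, band_image.shape)  # (224,) (100, 120)
```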

    Since the introduction of imaging spectrometers in the early 1980s, their numbers and sophistication have grown dramatically, and their application has expanded across diverse topics in Earth, space, and laboratory sciences. They have, for example, surveyed greenhouse gas emitters across California [Duren et al., 2019 (All cited references are below with links)], found water on the moon [Pieters et al., 2009], and mapped the tree chemistry of the Peruvian Amazon [Asner et al., 2017]. The data sets involved are large and complex. And a new generation of orbital instruments, slated for launch in coming years, will provide global coverage with far larger archives. Missions featuring these instruments include NASA’s Earth Surface Mineral Dust Source Investigation (EMIT) [Green et al., 2020] and Surface Biology and Geology investigation [National Academies of Sciences, Engineering, and Medicine, 2019].

    Researchers have introduced modern signal processing and machine learning concepts to imaging spectroscopy analysis, with potential benefits for numerous areas of geoscience research. But to what extent has this potential been realized? To help answer this question, we assessed whether the growth in signal processing and pattern recognition research, indicated by an increasing number of peer-reviewed technical articles, has produced a commensurate impact on science investigations using imaging spectroscopy.

    Mining for Data

    Following an established method, we surveyed all articles cataloged in the Web of Science [Harzing and Alakangas, 2016] since 1976 with titles or abstracts containing the term “imaging spectroscopy” or “hyperspectral.” Then, using a modular clustering approach [Waltman et al., 2010], we identified clustered bibliographic communities among the 13,850 connected articles within the citation network.

    We found that these articles fall into several independent and self-citing groups (Figure 1): optics and medicine, food and agriculture, machine learning, signal processing, terrestrial Earth science, aquatic Earth science, astrophysics, heliophysics, and planetary science. The articles in two of these nine groups (signal processing and machine learning) make up a distinct cluster of methodological research investigating how signal processing and machine learning can be used with imaging spectroscopy, and those in the other seven involve research using imaging spectroscopy to address questions in applied sciences. The volume of research has increased recently in all of these groups, especially those in the methods cluster (Figure 2). Nevertheless, these methods articles have seldom been cited by the applied sciences papers, drawing more than 96% of their citations internally but no more than 2% from any applied science group.
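
    The clustering step can be reproduced in spirit with standard network tools. The sketch below builds a tiny, made-up citation graph, detects communities by modularity, and reports how internally each community cites; it assumes the citation data are available as an edge list and is not the survey’s actual pipeline (which used the Waltman et al. method and VOSviewer).

```python
# A minimal sketch, assuming a citation network given as (citing, cited) pairs.
# Hypothetical article IDs; not the survey's data or pipeline.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

edges = [("a1", "a2"), ("a2", "a3"), ("a3", "a1"),   # methods-like cluster
         ("b1", "b2"), ("b2", "b3"), ("b3", "b1"),   # applied-science-like cluster
         ("a1", "b1")]                               # one cross-cluster citation

G = nx.Graph(edges)  # treat citations as undirected links for clustering
communities = list(greedy_modularity_communities(G))

for i, community in enumerate(communities):
    internal = sum(1 for u, v in G.edges if u in community and v in community)
    total = sum(1 for u, v in G.edges if u in community or v in community)
    print(f"cluster {i}: {sorted(community)}, internal citations {internal}/{total}")
```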

    Fig. 1. Research communities tend to sort themselves into self-citing clusters. Circles in this figure represent scientific journal publications, with the size proportional to the number of citations. Map distance indicates similarity in the citation network. Seven of nine total clusters are shown; the other two (astrophysics and heliophysics) were predominantly isolated from the others. Annotations indicate keywords from representative publications. Image produced using VOSviewer.

    The siloing is even stronger among published research in high-ranked scholarly journals, defined as having h-indices among the 20 highest in the 2020 public Google Scholar ranking. Fewer than 40% of the articles in our survey came from the clinical, Earth, and space science fields noted above, yet these fields produced all of the publications in top-ranked journals. We did not find a single instance in which one of those papers in a high-impact journal cited a paper from the methods cluster.

    Fig. 2. The number of publications per year in each of the nine research communities considered is shown here.

    A Dramatic Disconnect

    From our analysis, we conclude that the recent boom in machine learning and signal processing research has not yet made a commensurate impact on the use of imaging spectroscopy in applied sciences.

    A lack of citations does not necessarily imply a lack of influence. For instance, an Earth science paper that borrows techniques published in a machine learning paper may cite that manuscript once, whereas later studies applying the techniques may cite the science paper rather than the progenitor. Nonetheless, it is clear that despite constituting a large fraction of the research volume having to do with imaging spectroscopy for more than half a decade, research focused on machine learning and signal processing methods is nearly absent from high-impact science discoveries. This absence suggests a dramatic disconnect between science investigations and pure methodological research.

    Research communities focused on improving the use of signal processing and machine learning with imaging spectroscopy have produced thousands of manuscripts through person-centuries of effort. How can we improve the science impact of these efforts?

    Lowering Barriers to Entry

    We have two main recommendations. The first is technical. The methodology-science disconnect is symptomatic of high barriers to entry for data science researchers to engage applied science questions.

    Imaging spectroscopy data are still expensive to acquire, challenging to use, and regional in scale. Most top-ranked journal publications are written by career experts who plan and conduct specific acquisition campaigns and then perform each stage of the collection and analysis. This effort requires a chain of specialized steps involving instrument calibration, removal of atmospheric interference, and interpretation of reflectance spectra, all of which are challenging for nonexperts. These analyses often require expensive and complex software, raising obstacles for nonexpert researchers to engage cutting-edge geoscience problems.

    In contrast, a large fraction of methodological research related to hyperspectral imaging focuses on packaged, publicly available benchmark scenes such as the Indian Pines [Baumgardner et al., 2015] or the University of Pavia [Università degli Studi di Pavia] (IT) [Dell’Acqua et al., 2004]. These benchmark scenes reduce multifaceted real-world measurement challenges to simplified classification tasks, creating well-defined problems with debatable relevance to pressing science questions.

    Not all remote sensing disciplines have this disconnect. Hyperspectral imaging, involving hundreds of spectral channels, contrasts with multiband remote sensing, which generally involves only 3 to 10 channels and is far more commonly used. Multiband remote sensing instruments have regular global coverage, producing familiar image-like reflectance data. Although multiband instruments cannot measure the same wide range of phenomena as hyperspectral imagers, the maturity and extent of their data products democratize their use to address novel science questions.

    We support efforts to similarly democratize imaging spectrometer data by improving and disseminating core data products, making pertinent science data more accessible to machine learning researchers. Open spectral libraries like SPECCHIO and EcoSIS exemplify this trend, as do the commitments by missions such as PRISMA, EnMAP, and EMIT to distribute reflectance data for each acquisition.

    In the longer term, global imaging spectroscopy missions can increase data usage by providing data in a format that is user-friendly and ready to analyze. We also support open-source visualization and high-quality corrections for atmospheric effects to make existing hyperspectral data sets more accessible to nonexperts, thereby strengthening connections among methodological and application-based research communities. Recent efforts in this area include open source packages like the EnMAP-Box, HyTools, ISOFIT, and ImgSPEC.

    Expanding the Envelope

    Our second recommendation is cultural. Many of today’s most compelling science questions live at the limits of detectability—for example, in the first data acquisition over a new target, in a signal close to the noise, or in a relationship struggling for statistical significance. The papers in the planetary science cluster from our survey are exemplary in this respect, with many focusing on first observations of novel environments and achieving the best high-impact publication rate of any group. In contrast, a lot of methodological work makes use of standardized, well-understood benchmark data sets. Although benchmarks can help to coordinate research around key challenge areas, they should be connected to pertinent science questions.

    Journal editors should encourage submission of manuscripts reporting research about specific, new, and compelling science problems of interest while also being more skeptical of incremental improvements in generic classification, regression, or unmixing algorithms. Science investigators in turn should partner with data scientists to pursue challenging (bio)geophysical investigations, thus broadening their technical tool kits and pushing the limits of what can be measured remotely.

    Machine learning will play a central role in the next decade of imaging spectroscopy research, but its potential in the geosciences will be realized only through engagement with specific and pressing investigations. There is reason for optimism: The next generation of orbiting imaging spectrometer missions promises global coverage commensurate with existing imagers. We foresee a future in which, with judicious help from data science, imaging spectroscopy becomes as pervasive as multiband remote sensing is today.
    Acknowledgments

    The research was carried out at the Jet Propulsion Laboratory, California Institute of Technology (US), under a contract with the National Aeronautics and Space Administration (US) (80NM0018D0004). Copyright 2021. California Institute of Technology. Government sponsorship acknowledged.

    References:

    Asner, G. P., et al. (2017), Airborne laser-guided imaging spectroscopy to map forest trait diversity and guide conservation, Science, 355(6323), 385–389, https://doi.org/10.1126/science.aaj1987.

    Baumgardner, M. F., L. L. Biehl, and D. A. Landgrebe (2015), 220 band AVIRIS hyperspectral image data set: June 12, 1992 Indian Pine Test Site 3, Purdue Univ. Res. Repository, https://doi.org/10.4231/R7RX991C.

    Dell’Acqua, F., et al. (2004), Exploiting spectral and spatial information in hyperspectral urban data with high resolution, IEEE Geosci. Remote Sens. Lett., 1(4), 322–326, https://doi.org/10.1109/LGRS.2004.837009.

    Duren, R. M., et al. (2019), California’s methane super-emitters, Nature, 575, 180–184, https://doi.org/10.1038/s41586-019-1720-3.

    Harzing, A.-W., and S. Alakangas (2016), Google Scholar, Scopus and the Web of Science: A longitudinal and cross-disciplinary comparison, Scientometrics, 106, 787–804, https://doi.org/10.1007/s11192-015-1798-9.

    Green, R. O., et al. (2020), The Earth Surface Mineral Dust Source Investigation: An Earth science imaging spectroscopy mission, in 2020 IEEE Aerospace Conference, pp. 1–15, IEEE, Piscataway, N.J., https://doi.org/10.1109/AERO47225.2020.9172731.

    National Academies of Sciences, Engineering, and Medicine (2019), Thriving on Our Changing Planet: A Decadal Strategy for Earth Observation from Space, Natl. Acad. Press, Washington, D.C.

    Pieters, C. M., et al. (2009), Character and spatial distribution of OH/H2O on the surface of the Moon seen by M3 on Chandrayaan-1, Science, 326(5952), 568–572, https://doi.org/10.1126/science.1178658.

    Waltman, L., N. J. van Eck, and E. C. Noyons (2010), A unified approach to mapping and clustering of bibliometric networks, J. Informetrics, 4, 629–635, https://doi.org/10.1016/j.joi.2010.07.002.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Eos is the leading source for trustworthy news and perspectives about the Earth and space sciences and their impact. Its namesake is Eos, the Greek goddess of the dawn, who represents the light shed on understanding our planet and its environment in space by the Earth and space sciences.

     
  • richardmitnick 11:09 am on July 1, 2021 Permalink | Reply
    Tags: "The power of two", , , , , , , , Ellen Zhong, Machine learning, , , Software called cryoDRGN   

    From Massachusetts Institute of Technology (US) : “The power of two” 

    MIT News

    From Massachusetts Institute of Technology (US)

    June 30, 2021
    Saima Sidik | Department of Biology

    Graduate student Ellen Zhong helped biologists and mathematicians reach across departmental lines to address a longstanding problem in electron microscopy.

    Ellen Zhong, a graduate student from the Computational and Systems Biology Program, is using a computational pattern-recognition tool called a neural network to study the shapes of molecular machines.
    Credit: Matthew Brown.

    MIT’s Hockfield Court is bordered on the west by the ultramodern Stata Center, with its reflective, silver alcoves that jut off at odd angles, and on the east by Building 68, which is a simple, window-lined, cement rectangle. At first glance, Bonnie Berger’s mathematics lab in the Stata Center and Joey Davis’s biology lab in Building 68 are as different as the buildings that house them. And yet, a recent collaboration between these two labs shows how their disciplines complement each other. The partnership started when Ellen Zhong, a graduate student from the Computational and Systems Biology (CSB) Program, decided to use a computational pattern-recognition tool called a neural network to study the shapes of molecular machines. Three years later, Zhong’s project is letting scientists see patterns that run beneath the surface of their data, and deepening their understanding of the molecules that shape life.

    Zhong’s work builds on a technique from the 1970s called cryo-electron microscopy (cryo-EM), which lets researchers take high-resolution images of frozen protein complexes. Over the past decade, better microscopes and cameras have led to a “resolution revolution” in cryo-EM that’s allowed scientists to see individual atoms within proteins. But, as good as these images are, they’re still only static snapshots. In reality, many of these molecular machines are constantly changing shape and composition as cells carry out their normal functions and adjust to new situations.

    Along with former Berger lab member Tristan Bepler, Zhong devised software called cryoDRGN. The tool uses neural nets to combine hundreds of thousands of cryo-EM images, and shows scientists the full range of three-dimensional conformations that protein complexes can take, letting them reconstruct the proteins’ motion as they carry out cellular functions. Understanding the range of shapes that protein complexes can take helps scientists develop drugs that block viruses from entering cells, study how pests kill crops, and even design custom proteins that can cure disease. Covid-19 vaccines, for example, work partly because they include a mutated version of the virus’s spike protein that’s stuck in its active conformation, so vaccinated people produce antibodies that block the virus from entering human cells. Scientists needed to understand the variety of shapes that spike proteins can take in order to figure out how to force spike into its active conformation.
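
    Conceptually, the approach is a variational autoencoder: an encoder network compresses each two-dimensional particle image into a few latent “conformation” coordinates, and a decoder reconstructs the particle from those coordinates. The sketch below is a generic, minimal VAE in PyTorch that illustrates this encode-sample-decode loop; the layer sizes, latent dimension, and flattened image input are illustrative assumptions, and it omits everything that makes cryoDRGN specific to cryo-EM, such as pose handling and Fourier-space volume decoding.

```python
# A minimal sketch (not the actual cryoDRGN code) of a variational autoencoder
# that maps 2D particle images to a low-dimensional "conformation" latent space.
import torch
import torch.nn as nn

class ConformationVAE(nn.Module):
    def __init__(self, n_pixels=64 * 64, z_dim=8):
        super().__init__()
        # Encoder: image -> mean and log-variance of the latent conformation
        self.encoder = nn.Sequential(
            nn.Linear(n_pixels, 256), nn.ReLU(),
            nn.Linear(256, 2 * z_dim),
        )
        # Decoder: latent conformation -> reconstructed image
        self.decoder = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, n_pixels),
        )

    def forward(self, images):
        stats = self.encoder(images.flatten(1))
        mu, logvar = stats.chunk(2, dim=1)
        # Reparameterization trick: sample a latent conformation
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar

# Toy usage: a batch of 16 random "particle images"
model = ConformationVAE()
fake_images = torch.rand(16, 64, 64)
recon, mu, logvar = model(fake_images)
print(recon.shape, mu.shape)  # torch.Size([16, 4096]) torch.Size([16, 8])
```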

    Getting off the computer and into the lab

    Zhong’s interest in computational biology goes back to 2011 when, as a chemical engineering undergrad at the University of Virginia (US), she worked with Professor Michael Shirts to simulate how proteins fold and unfold. After college, Zhong took her skills to a company called D. E. Shaw Research, where, as a scientific programmer, she took a computational approach to studying how proteins interact with small-molecule drugs.

    “The research was very exciting,” Zhong says, “but all based on computer simulations. To really understand biological systems, you need to do experiments.”

    This goal of combining computation with experimentation motivated Zhong to join MIT’s CSB PhD program, where students often work with multiple supervisors to blend computational work with bench work. Zhong “rotated” in both the Davis and Berger labs, then decided to combine the Davis lab’s goal of understanding how protein complexes form with the Berger lab’s expertise in machine learning and algorithms. Davis was interested in building up the computational side of his lab, so he welcomed the opportunity to co-supervise a student with Berger, who has a long history of collaborating with biologists.

    Davis himself holds a dual bachelor’s degree in computer science and biological engineering, so he’s long believed in the power of combining complementary disciplines. “There are a lot of things you can learn about biology by looking in a microscope,” he says. “But as we start to ask more complicated questions about entire systems, we’re going to require computation to manage the high-dimensional data that come back.”


    Reconstructing Molecules in Motion.

    Before rotating in the Davis lab, Zhong had never performed bench work before — or even touched a pipette. She was fascinated to find how streamlined some very powerful molecular biology techniques can be. Still, Zhong realized that physical limitations mean that biology is much slower when it’s done at the bench instead of on a computer. “With computational research, you can automate experiments and run them super quickly, whereas in the wet lab, you only have two hands, so you can only do one experiment at a time,” she says.

    Zhong says that synergizing the two different cultures of the Davis and Berger labs is helping her become a well-rounded, adaptable scientist. Working around experimentalists in the Davis lab has shown her how much labor goes into experimental results, and also helped her to understand the hurdles that scientists face at the bench. In the Berger lab, she enjoys having coworkers who understand the challenges of computer programming.

    “The key challenge in collaborating across disciplines is understanding each other’s ‘languages,’” Berger says. “Students like Ellen are fortunate to be learning both biology and computing dialects simultaneously.”

    Bringing in the community

    Last spring revealed another reason for biologists to learn computational skills: these tools can be used anywhere there’s a computer and an internet connection. When the Covid-19 pandemic hit, Zhong’s colleagues in the Davis lab had to wind down their bench work for a few months, and many of them filled their time at home by using cryo-EM data that’s freely available online to help Zhong test her cryoDRGN software. The difficulty of understanding another discipline’s language quickly became apparent, and Zhong spent a lot of time teaching her colleagues to be programmers. Seeing the problems that nonprogrammers ran into when they used cryoDRGN was very informative, Zhong says, and helped her create a more user-friendly interface.

    Although the paper announcing cryoDRGN was just published in February, the tool created a stir as soon as Zhong posted her code online, many months prior. The cryoDRGN team thinks this is because leveraging knowledge from two disciplines let them visualize the full range of structures that protein complexes can have, and that’s something researchers have wanted to do for a long time. For example, the cryoDRGN team recently collaborated with researchers from Harvard and Washington universities to study locomotion of the single-celled organism Chlamydomonas reinhardtii. The mechanisms they uncovered could shed light on human health conditions, like male infertility, that arise when cells lose the ability to move. The team is also using cryoDRGN to study the structure of the SARS-CoV-2 spike protein, which could help scientists design treatments and vaccines to fight coronaviruses.

    Zhong, Berger, and Davis say they’re excited to continue using neural nets to improve cryo-EM analysis, and to extend their computational work to other aspects of biology. Davis cited mass spectrometry as “a ripe area to apply computation.” This technique can complement cryo-EM by showing researchers the identities of proteins, how many of them are bound together, and how cells have modified them.

    “Collaborations between disciplines are the future,” Berger says. “Researchers focused on a single discipline can take it only so far with existing techniques. Shining a different lens on the problem is how advances can be made.”

    Zhong says it’s not a bad way to spend a PhD, either. Asked what she’d say to incoming graduate students considering interdisciplinary projects, she says: “Definitely do it.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    MIT Seal

    USPS “Forever” postage stamps celebrating Innovation at MIT.

    MIT Campus

    Massachusetts Institute of Technology (US) is a private land-grant research university in Cambridge, Massachusetts. The institute has an urban campus that extends more than a mile (1.6 km) alongside the Charles River. The institute also encompasses a number of major off-campus facilities such as the MIT Lincoln Laboratory, the Bates Center, and the Haystack Observatory, as well as affiliated laboratories such as the Broad and Whitehead Institutes.

    Founded in 1861 in response to the increasing industrialization of the United States, Massachusetts Institute of Technology (US) adopted a European polytechnic university model and stressed laboratory instruction in applied science and engineering. It has since played a key role in the development of many aspects of modern science, engineering, mathematics, and technology, and is widely known for its innovation and academic strength. It is frequently regarded as one of the most prestigious universities in the world.

    As of December 2020, 97 Nobel laureates, 26 Turing Award winners, and 8 Fields Medalists have been affiliated with MIT as alumni, faculty members, or researchers. In addition, 58 National Medal of Science recipients, 29 National Medals of Technology and Innovation recipients, 50 MacArthur Fellows, 80 Marshall Scholars, 3 Mitchell Scholars, 22 Schwarzman Scholars, 41 astronauts, and 16 Chief Scientists of the U.S. Air Force have been affiliated with Massachusetts Institute of Technology (US). The university also has a strong entrepreneurial culture and MIT alumni have founded or co-founded many notable companies. Massachusetts Institute of Technology (US) is a member of the Association of American Universities (AAU).

    Foundation and vision

    In 1859, a proposal was submitted to the Massachusetts General Court to use newly filled lands in Back Bay, Boston for a “Conservatory of Art and Science”, but the proposal failed. A charter for the incorporation of the Massachusetts Institute of Technology, proposed by William Barton Rogers, was signed by John Albion Andrew, the governor of Massachusetts, on April 10, 1861.

    Rogers, a professor from the University of Virginia (US), wanted to establish an institution to address rapid scientific and technological advances. He did not wish to found a professional school, but a combination with elements of both professional and liberal education, proposing that:

    “The true and only practicable object of a polytechnic school is, as I conceive, the teaching, not of the minute details and manipulations of the arts, which can be done only in the workshop, but the inculcation of those scientific principles which form the basis and explanation of them, and along with this, a full and methodical review of all their leading processes and operations in connection with physical laws.”

    The Rogers Plan reflected the German research university model, emphasizing an independent faculty engaged in research, as well as instruction oriented around seminars and laboratories.

    Early developments

    Two days after Massachusetts Institute of Technology (US) was chartered, the first battle of the Civil War broke out. After a long delay through the war years, MIT’s first classes were held in the Mercantile Building in Boston in 1865. The new institute was founded as part of the Morrill Land-Grant Colleges Act to fund institutions “to promote the liberal and practical education of the industrial classes” and was a land-grant school. In 1863, under the same act, the Commonwealth of Massachusetts founded the Massachusetts Agricultural College, which developed into the University of Massachusetts Amherst (US). In 1866, the proceeds from land sales went toward new buildings in the Back Bay.

    Massachusetts Institute of Technology (US) was informally called “Boston Tech”. The institute adopted the European polytechnic university model and emphasized laboratory instruction from an early date. Despite chronic financial problems, the institute saw growth in the last two decades of the 19th century under President Francis Amasa Walker. Programs in electrical, chemical, marine, and sanitary engineering were introduced, new buildings were built, and the size of the student body increased to more than one thousand.

    The curriculum drifted to a vocational emphasis, with less focus on theoretical science. The fledgling school still suffered from chronic financial shortages which diverted the attention of the MIT leadership. During these “Boston Tech” years, Massachusetts Institute of Technology (US) faculty and alumni rebuffed Harvard University (US) president (and former MIT faculty) Charles W. Eliot’s repeated attempts to merge MIT with Harvard College’s Lawrence Scientific School. There would be at least six attempts to absorb MIT into Harvard. In its cramped Back Bay location, MIT could not afford to expand its overcrowded facilities, driving a desperate search for a new campus and funding. Eventually, the MIT Corporation approved a formal agreement to merge with Harvard, over the vehement objections of MIT faculty, students, and alumni. However, a 1917 decision by the Massachusetts Supreme Judicial Court effectively put an end to the merger scheme.

    In 1916, the Massachusetts Institute of Technology (US) administration and the MIT charter crossed the Charles River on the ceremonial barge Bucentaur built for the occasion, to signify MIT’s move to a spacious new campus largely consisting of filled land on a one-mile-long (1.6 km) tract along the Cambridge side of the Charles River. The neoclassical “New Technology” campus was designed by William W. Bosworth and had been funded largely by anonymous donations from a mysterious “Mr. Smith”, starting in 1912. In January 1920, the donor was revealed to be the industrialist George Eastman of Rochester, New York, who had invented methods of film production and processing, and founded Eastman Kodak. Between 1912 and 1920, Eastman donated $20 million ($236.6 million in 2015 dollars) in cash and Kodak stock to MIT.

    Curricular reforms

    In the 1930s, President Karl Taylor Compton and Vice-President (effectively Provost) Vannevar Bush emphasized the importance of pure sciences like physics and chemistry and reduced the vocational practice required in shops and drafting studios. The Compton reforms “renewed confidence in the ability of the Institute to develop leadership in science as well as in engineering”. Unlike Ivy League schools, Massachusetts Institute of Technology (US) catered more to middle-class families, and depended more on tuition than on endowments or grants for its funding. The school was elected to the Association of American Universities (US) in 1934.

    Still, as late as 1949, the Lewis Committee lamented in its report on the state of education at Massachusetts Institute of Technology (US) that “the Institute is widely conceived as basically a vocational school”, a “partly unjustified” perception the committee sought to change. The report comprehensively reviewed the undergraduate curriculum, recommended offering a broader education, and warned against letting engineering and government-sponsored research detract from the sciences and humanities. The School of Humanities, Arts, and Social Sciences and the MIT Sloan School of Management were formed in 1950 to compete with the powerful Schools of Science and Engineering. Previously marginalized faculties in the areas of economics, management, political science, and linguistics emerged into cohesive and assertive departments by attracting respected professors and launching competitive graduate programs. The School of Humanities, Arts, and Social Sciences continued to develop under the successive terms of the more humanistically oriented presidents Howard W. Johnson and Jerome Wiesner between 1966 and 1980.

    Massachusetts Institute of Technology (US)‘s involvement in military science surged during World War II. In 1941, Vannevar Bush was appointed head of the federal Office of Scientific Research and Development and directed funding to only a select group of universities, including MIT. Engineers and scientists from across the country gathered at Massachusetts Institute of Technology (US)’s Radiation Laboratory, established in 1940 to assist the British military in developing microwave radar. The work done there significantly affected both the war and subsequent research in the area. Other defense projects included gyroscope-based and other complex control systems for gunsight, bombsight, and inertial navigation under Charles Stark Draper’s Instrumentation Laboratory; the development of a digital computer for flight simulations under Project Whirlwind; and high-speed and high-altitude photography under Harold Edgerton. By the end of the war, Massachusetts Institute of Technology (US) became the nation’s largest wartime R&D contractor (attracting some criticism of Bush), employing nearly 4000 in the Radiation Laboratory alone and receiving in excess of $100 million ($1.2 billion in 2015 dollars) before 1946. Work on defense projects continued even after then. Post-war government-sponsored research at MIT included SAGE and guidance systems for ballistic missiles and Project Apollo.

    These activities affected Massachusetts Institute of Technology (US) profoundly. A 1949 report noted the lack of “any great slackening in the pace of life at the Institute” to match the return to peacetime, remembering the “academic tranquility of the prewar years”, though acknowledging the significant contributions of military research to the increased emphasis on graduate education and rapid growth of personnel and facilities. The faculty doubled and the graduate student body quintupled during the terms of Karl Taylor Compton, president of Massachusetts Institute of Technology (US) between 1930 and 1948; James Rhyne Killian, president from 1948 to 1957; and Julius Adams Stratton, chancellor from 1952 to 1957, whose institution-building strategies shaped the expanding university. By the 1950s, Massachusetts Institute of Technology (US) no longer simply benefited the industries with which it had worked for three decades, and it had developed closer working relationships with new patrons, philanthropic foundations and the federal government.

    In the late 1960s and early 1970s, student and faculty activists protested against the Vietnam War and Massachusetts Institute of Technology (US)’s defense research. In this period Massachusetts Institute of Technology (US)’s various departments were researching helicopters, smart bombs and counterinsurgency techniques for the war in Vietnam as well as guidance systems for nuclear missiles. The Union of Concerned Scientists was founded on March 4, 1969 during a meeting of faculty members and students seeking to shift the emphasis on military research toward environmental and social problems. Massachusetts Institute of Technology (US) ultimately divested itself from the Instrumentation Laboratory and moved all classified research off-campus to the MIT (US) Lincoln Laboratory facility in 1973 in response to the protests. The student body, faculty, and administration remained comparatively unpolarized during what was a tumultuous time for many other universities. Johnson was seen to be highly successful in leading his institution to “greater strength and unity” after these times of turmoil. However, six Massachusetts Institute of Technology (US) students were sentenced to prison terms at this time and some former student leaders, such as Michael Albert and George Katsiaficas, are still indignant about MIT’s role in military research and its suppression of these protests. (Richard Leacock’s film, November Actions, records some of these tumultuous events.)

    In the 1980s, there was more controversy at Massachusetts Institute of Technology (US) over its involvement in SDI (space weaponry) and CBW (chemical and biological warfare) research. More recently, Massachusetts Institute of Technology (US)’s research for the military has included work on robots, drones and ‘battle suits’.

    Recent history

    Massachusetts Institute of Technology (US) has kept pace with and helped to advance the digital age. In addition to developing the predecessors to modern computing and networking technologies, students, staff, and faculty members at Project MAC, the Artificial Intelligence Laboratory, and the Tech Model Railroad Club wrote some of the earliest interactive computer video games like Spacewar! and created much of modern hacker slang and culture. Several major computer-related organizations have originated at MIT since the 1980s: Richard Stallman’s GNU Project and the subsequent Free Software Foundation were founded in the mid-1980s at the AI Lab; the MIT Media Lab was founded in 1985 by Nicholas Negroponte and Jerome Wiesner to promote research into novel uses of computer technology; the World Wide Web Consortium standards organization was founded at the Laboratory for Computer Science in 1994 by Tim Berners-Lee; the MIT OpenCourseWare project has made course materials for over 2,000 Massachusetts Institute of Technology (US) classes available online free of charge since 2002; and the One Laptop per Child initiative to expand computer education and connectivity to children worldwide was launched in 2005.

    Massachusetts Institute of Technology (US) was named a sea-grant college in 1976 to support its programs in oceanography and marine sciences and was named a space-grant college in 1989 to support its aeronautics and astronautics programs. Despite diminishing government financial support over the past quarter century, MIT launched several successful development campaigns to significantly expand the campus: new dormitories and athletics buildings on west campus; the Tang Center for Management Education; several buildings in the northeast corner of campus supporting research into biology, brain and cognitive sciences, genomics, biotechnology, and cancer research; and a number of new “backlot” buildings on Vassar Street including the Stata Center. Construction on campus in the 2000s included expansions of the Media Lab, the Sloan School’s eastern campus, and graduate residences in the northwest. In 2006, President Hockfield launched the MIT Energy Research Council to investigate the interdisciplinary challenges posed by increasing global energy consumption.

    In 2001, inspired by the open source and open access movements, Massachusetts Institute of Technology (US) launched OpenCourseWare to make the lecture notes, problem sets, syllabi, exams, and lectures from the great majority of its courses available online for no charge, though without any formal accreditation for coursework completed. While the cost of supporting and hosting the project is high, OCW expanded in 2005 to include other universities as a part of the OpenCourseWare Consortium, which currently includes more than 250 academic institutions with content available in at least six languages. In 2011, Massachusetts Institute of Technology (US) announced it would offer formal certification (but not credits or degrees) to online participants completing coursework in its “MITx” program, for a modest fee. The “edX” online platform supporting MITx was initially developed in partnership with Harvard and its analogous “Harvardx” initiative. The courseware platform is open source, and other universities have already joined and added their own course content. In March 2009 the Massachusetts Institute of Technology (US) faculty adopted an open-access policy to make its scholarship publicly accessible online.

    Massachusetts Institute of Technology (US) has its own police force. Three days after the Boston Marathon bombing of April 2013, MIT Police patrol officer Sean Collier was fatally shot by the suspects Dzhokhar and Tamerlan Tsarnaev, setting off a violent manhunt that shut down the campus and much of the Boston metropolitan area for a day. One week later, Collier’s memorial service was attended by more than 10,000 people, in a ceremony hosted by the Massachusetts Institute of Technology (US) community with thousands of police officers from the New England region and Canada. On November 25, 2013, Massachusetts Institute of Technology (US) announced the creation of the Collier Medal, to be awarded annually to “an individual or group that embodies the character and qualities that Officer Collier exhibited as a member of the Massachusetts Institute of Technology (US) community and in all aspects of his life”. The announcement further stated that “Future recipients of the award will include those whose contributions exceed the boundaries of their profession, those who have contributed to building bridges across the community, and those who consistently and selflessly perform acts of kindness”.

    In September 2017, the school announced the creation of an artificial intelligence research lab called the MIT-IBM Watson AI Lab. IBM will spend $240 million over the next decade, and the lab will be staffed by MIT and IBM scientists. In October 2018 MIT announced that it would open a new Schwarzman College of Computing dedicated to the study of artificial intelligence, named after lead donor and The Blackstone Group CEO Stephen Schwarzman. The focus of the new college is to study not just AI, but interdisciplinary AI education, and how AI can be used in fields as diverse as history and biology. The cost of buildings and new faculty for the new college is expected to be $1 billion upon completion.

    The Caltech/MIT Advanced aLIGO (US) was designed and constructed by a team of scientists from California Institute of Technology (US), Massachusetts Institute of Technology (US), and industrial contractors, and funded by the National Science Foundation (US).

    MIT/Caltech Advanced aLIGO.

    It was designed to open the field of gravitational-wave astronomy through the detection of gravitational waves predicted by general relativity. Gravitational waves were detected for the first time by the LIGO detector in 2015. For contributions to the LIGO detector and the observation of gravitational waves, two Caltech physicists, Kip Thorne and Barry Barish, and Massachusetts Institute of Technology (US) physicist Rainer Weiss won the Nobel Prize in Physics in 2017. Weiss, who is also a Massachusetts Institute of Technology (US) graduate, designed the laser interferometric technique, which served as the essential blueprint for LIGO.

    The mission of Massachusetts Institute of Technology (US) is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the Massachusetts Institute of Technology (US) community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

     
  • richardmitnick 10:26 am on June 17, 2021 Permalink | Reply
    Tags: "An Ally for Alloys", , , “XMAT”—eXtreme environment MATerials—consortium, , Machine learning, , Stronger materials are key to producing energy efficiently resulting in economic and decarbonization benefits.   

    From DOE’s Pacific Northwest National Laboratory (US) : “An Ally for Alloys” 

    From DOE’s Pacific Northwest National Laboratory (US)

    June 16, 2021
    Tim Ledbetter


    Machine learning techniques have contributed to progress in science and technology fields ranging from health care to high-energy physics. Now, machine learning is poised to help accelerate the development of stronger alloys, particularly stainless steels, for America’s thermal power generation fleet. Stronger materials are key to producing energy efficiently, resulting in economic and decarbonization benefits.

    “The use of ultra-high-strength steels in power plants dates back to the 1950s and has benefited from gradual improvements in the materials over time,” says Osman Mamun, a postdoctoral research associate at Pacific Northwest National Laboratory (PNNL). “If we can find ways to speed up improvements or create new materials, we could see enhanced efficiency in plants that also reduces the amount of carbon emitted into the atmosphere.”

    Mamun is the lead author on two recent, related journal articles that reveal new strategies for machine learning’s application in the design of advanced alloys. The articles chronicle the research outcomes of a joint effort between PNNL and the DOE National Energy Technology Lab (US). In addition to Mamun, the research team included PNNL’s Arun Sathanur and Ram Devanathan and NETL’s Madison Wenzlick and Jeff Hawk.

    The work was funded under the Department of Energy’s (US) Office of Fossil Energy via the “XMAT”—eXtreme environment MATerials—consortium, which includes research contributions from seven DOE national laboratories. The consortium seeks to accelerate the development of improved heat-resistant alloys for various power plant components and to predict the alloys’ long-term performance.

    The inside story of power plants

    A thermal power plant’s internal environment is unforgiving. Operating temperatures of more than 650 degrees Celsius and stresses exceeding 50 megapascals put a plant’s steel components to the test.

    “But also, that high temperature and pressure, along with reliable components, are critical in driving better thermodynamic efficiency that leads to reduced carbon emissions and increased cost-effectiveness,” Mamun explains.

    The PNNL–NETL collaboration focused on two material types. Austenitic stainless steel is widely used in plants because it offers strength and excellent corrosion resistance, but its service life at high temperatures is limited. Ferritic-martensitic steel that contains chromium in the 9 to 12 percent range also offers strength benefits but can be prone to oxidation and corrosion. Plant operators want materials that resist rupturing and last for decades.

    Over time, “trial and error” experimental approaches have incrementally improved steel, but are inefficient, time-consuming, and costly. It is crucial to accelerate the development of novel materials with superior properties.

    Models for predicting rupture strength and life

    Recent advances in computational modeling and machine learning, Mamun says, have become important new tools in the quest for achieving better materials more quickly.

    Machine learning, a form of artificial intelligence, applies an algorithm to datasets to develop faster solutions for science problems. This capability is making a big difference in research worldwide, in some cases shaving considerable time off scientific discovery and technology developments.

    The PNNL–NETL research team’s application of machine learning was described in their first journal article, published March 9 in Scientific Reports.

    PNNL’s distinctive capabilities in joining steel to aluminum alloys enable lightweight vehicle technologies for sustainable transportation. Photo by Andrea Starr | Pacific Northwest National Laboratory.

    The paper recounts the team’s effort to enhance and analyze stainless steel datasets, contributed by NETL team members, with three different algorithms. The ultimate goal was to construct an accurate predictive model for the rupture strength of the two types of alloys. The team concluded that an algorithm known as the Gradient Boosted Decision Tree best met the needs for building machine learning models for accurate prediction of rupture strength.
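
    As a rough illustration of what such a model looks like in code, the sketch below fits scikit-learn’s gradient boosted decision tree regressor to a small synthetic “alloy” dataset; the features, data, and hyperparameters are illustrative assumptions, not the team’s.

```python
# A minimal sketch, not the team's actual model: predict creep rupture strength
# from alloy composition and test conditions with gradient boosted trees.
# The feature set and synthetic data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.uniform(9, 18, n),      # hypothetical chromium wt%
    rng.uniform(8, 20, n),      # hypothetical nickel wt%
    rng.uniform(550, 700, n),   # test temperature (deg C)
    rng.uniform(1e3, 1e5, n),   # exposure time (hours)
])
# Synthetic target just for the sketch: strength falls with temperature and time
y = 500 - 0.5 * X[:, 2] + 2.0 * X[:, 0] - 10 * np.log10(X[:, 3]) + rng.normal(0, 5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
model.fit(X_train, y_train)
print("R^2 on held-out alloys:", round(model.score(X_test, y_test), 3))
```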

    Further, the researchers maintain that integrating the resulting models into existing alloy design strategies could speed the identification of promising stainless steels that possess superior properties for dealing with stress and strain.

    “This research project not only took a step toward better approaches for extending the operating envelope of steel in power plants, but also demonstrated machine learning models grounded in physics to enable interpretation by domain scientists,” says research team member Ram Devanathan, a PNNL computational materials scientist. Devanathan leads the XMAT consortium’s data science thrust and serves on the organization’s steering committee.

    The project team’s second article was published in npj Materials Degradation’s April 16 edition.

    The team concluded in the paper that a machine-learning-based predictive model can reliably estimate the rupture life of the two alloys. The researchers also described a methodology to generate synthetic alloys that could be used to augment existing sparse stainless steel datasets, and identified the limitations of such an approach. Using these “hypothetical alloys” in machine learning models makes it possible to assess the performance of candidate materials without first synthesizing them in a laboratory.
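
    One simple way to picture the “hypothetical alloys” idea is to perturb measured feature vectors and score the candidates with an already-trained model, as in the sketch below. This is only an illustration of data augmentation under assumed perturbation sizes, not the methodology described in the paper.

```python
# A minimal sketch, not the paper's method: sample hypothetical alloys by
# jittering measured feature vectors, then rank them with a trained model
# (e.g., the gradient boosted regressor sketched earlier). Perturbation
# sizes are illustrative assumptions.
import numpy as np

def synthesize_alloys(X_measured, n_new=500, noise_scale=0.02, rng=None):
    """Return hypothetical alloy feature vectors near the measured ones."""
    rng = rng or np.random.default_rng(0)
    base = X_measured[rng.integers(0, len(X_measured), n_new)]
    jitter = rng.normal(0.0, noise_scale, base.shape) * np.abs(base)
    return base + jitter

# Usage, continuing from the previous sketch's X and model:
# candidates = synthesize_alloys(X)
# predicted_strength = model.predict(candidates)
# best_candidate = candidates[np.argmax(predicted_strength)]
```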

    “The findings build on the earlier paper’s conclusions and represent another step toward establishing interpretable models of alloy performance in extreme environments, while also providing insights into data set development,” Devanathan says. “Both papers demonstrate XMAT’s thought leadership in this rapidly growing field.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    DOE’s Pacific Northwest National Laboratory (PNNL) (US) is one of the United States Department of Energy National Laboratories, managed by the Department of Energy’s Office of Science. The main campus of the laboratory is in Richland, Washington.

    PNNL scientists conduct basic and applied research and development to strengthen U.S. scientific foundations for fundamental research and innovation; prevent and counter acts of terrorism through applied research in information analysis, cyber security, and the nonproliferation of weapons of mass destruction; increase the U.S. energy capacity and reduce dependence on imported oil; and reduce the effects of human activity on the environment. PNNL has been operated by Battelle Memorial Institute since 1965.

     
  • richardmitnick 3:08 pm on January 19, 2021 Permalink | Reply
    Tags: "Rethinking Spin Chemistry from a Quantum Perspective", “Superposition” lets algorithms represent two variables at once which then allows scientists to focus on the relationship between these variables without any need to determine their individual states first, Bayesian inference, Machine learning, , ,

    From Osaka City University (大阪市立大学: Ōsaka shiritsu daigaku) (JP): “Rethinking Spin Chemistry from a Quantum Perspective” 

    From Osaka City University (大阪市立大学: Ōsaka shiritsu daigaku) (JP)

    Jan 18, 2021
    James Gracey
    Global Exchange Office
    kokusai@ado.osaka-cu.ac.jp

    Researchers at Osaka City University use quantum superposition states and Bayesian inference to create a quantum algorithm, easily executable on quantum computers, that accurately and directly calculates energy differences between the electronic ground and excited spin states of molecular systems in polynomial time.

    1
    A quantum circuit that maximizes the probability P(0) in the measurement of the parameter J.

    Understanding how the natural world works enables us to mimic it for the benefit of humankind. Think of how much we rely on batteries. At the core is understanding molecular structures and the behavior of electrons within them. Calculating the energy differences between a molecule’s electronic ground and excited spin states helps us understand how to better use that molecule in a variety of chemical, biomedical and industrial applications. We have made much progress in molecules with closed-shell systems, in which electrons are paired up and stable. Open-shell systems, on the other hand, are less stable and their underlying electronic behavior is complex, and thus more difficult to understand. They have unpaired electrons in their ground state, which cause their energy to vary due to the intrinsic nature of electron spins, and makes measurements difficult, especially as the molecules increase in size and complexity. Although such molecules are abundant in nature, there is a lack of algorithms that can handle this complexity. One hurdle has been dealing with what is called the exponential explosion of computational time. Using a conventional computer to calculate how the unpaired spins influence the energy of an open-shell molecule would take hundreds of millions of years, time humans do not have.

    Quantum computers are in development to help reduce this to what is called “polynomial time”. However, the process scientists have been using to calculate the energy differences of open-shell molecules has essentially been the same for both conventional and quantum computers. This hampers the practical use of quantum computing in chemical and industrial applications.

    “Approaches that invoke true quantum algorithms help us treat open-shell systems much more efficiently than by utilizing classical computers”, state Kenji Sugisaki and Takeji Takui from Osaka City University. With their colleagues, they developed a quantum algorithm executable on quantum computers, which can, for the first time, accurately calculate energy differences between the electronic ground and excited spin states of open-shell molecular systems. Their findings were published in the journal Chemical Science on 24 Dec 2020.

    The energy difference between molecular spin states is characterized by the value of the exchange interaction parameter J. Conventional quantum algorithms have been able to accurately calculate energies for closed-shell molecules “but they have not been able to handle systems with a strong multi-configurational character”, states the group. Until now, scientists have assumed that to obtain the parameter J one must first calculate the total energy of each spin state. In open-shell molecules this is difficult because the total energy of each spin state varies greatly as the molecule changes in activity and size. However, “the energy difference itself is not greatly dependent on the system size”, notes the research team. This led them to create an algorithm with calculations that focused on the spin difference, not the individual spin states. Creating such an algorithm required that they let go of assumptions developed from years of using conventional computers and focus on the unique characteristics of quantum computing – namely “quantum superposition states”.

    “Superposition” lets algorithms represent two variables at once, which then allows scientists to focus on the relationship between these variables without any need to determine their individual states first. The research team used something called a broken-symmetry wave function as a superposition of wave functions with different spin states and rewrote it into the Hamiltonian equation for the parameter J. By running this new quantum circuit, the team was able to focus on deviations from their target, and by applying Bayesian inference, a machine learning technique, they narrowed these deviations down to determine the exchange interaction parameter J. “Numerical simulations based on this method were performed for the covalent dissociation of molecular hydrogen (H2), the triple bond dissociation of molecular nitrogen (N2), and the ground states of C, O, Si atoms and NH, OH+, CH2, NF and O2 molecules with an error of less than 1 kcal/mol”, adds the research team.
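    As a rough illustration of the Bayesian step, the sketch below runs a grid-based Bayesian update of a parameter J from simulated measurement outcomes. The likelihood P(0 | J) = (1 + cos(J t)) / 2 is an assumed toy model in the spirit of phase-estimation circuits; it is not the actual measurement statistics of the BxB circuit described in the paper.

```python
# Toy Bayesian estimation of an exchange parameter J from binary outcomes.
# ASSUMED likelihood: P(0 | J) = (1 + cos(J * t)) / 2 (illustration only,
# not the measurement model of the published BxB circuit).
import numpy as np

rng = np.random.default_rng(0)
J_true = 0.8                                          # hidden "true" value (arbitrary units)
J_grid = np.linspace(0.0, 2.0, 2001)
posterior = np.full(J_grid.size, 1.0 / J_grid.size)   # flat prior over the grid

def p0(J, t):
    return 0.5 * (1.0 + np.cos(J * t))

for _ in range(200):
    t = rng.uniform(0.5, 5.0)                         # randomized evolution time
    outcome_is_0 = rng.random() < p0(J_true, t)       # simulated single-shot measurement
    likelihood = p0(J_grid, t) if outcome_is_0 else 1.0 - p0(J_grid, t)
    posterior *= likelihood
    posterior /= posterior.sum()                      # renormalize after each update

print(f"true J = {J_true}, MAP estimate = {J_grid[np.argmax(posterior)]:.3f}")
```

    The estimate sharpens as shots accumulate, which is the sense in which the deviations are "narrowed down" without ever computing the total energy of each spin state.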

    “We plan on installing our Bayesian eXchange coupling parameter calculator with Broken-symmetry wave functions (BxB) software on near-term quantum computers equipped with noisy (no quantum error correction) intermediate-scale (several hundreds of qubits) quantum devices (NISQ devices), testing the usefulness for quantum chemical calculations of actual sizable molecular systems.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Osaka City University (OCU) (大阪市立大学: Ōsaka shiritsu daigaku) (JP), is a public university in Japan. It is located in Sumiyoshi-ku, Osaka.

    OCU’s predecessor was founded in 1880, as Osaka Commercial Training Institute (大阪商業講習所) with donations by local merchants. It became Osaka Commercial School in 1885, then was municipalized in 1889. Osaka City was defeated in a bid to draw the Second National Commercial College (the winner was Kobe City), so the city authorities decided to establish a municipal commercial college without any aid from the national budget.

    In 1901, the school was reorganized to become Osaka City Commercial College (市立大阪高等商業学校), later authorized under Specialized School Order in 1904. The college had grand brick buildings around the Taishō period.

    In 1928, the college became Osaka University of Commerce (大阪商科大学), the first municipal university in Japan. The city mayor, Hajime Seki (関 一, Seki Hajime, 1873–1935), declared the spirit of the municipal university, that it should not simply copy the national universities and that it should become a place for research with a background of urban activities in Osaka. But, contrary to his words, the university was relocated to the most rural part of the city by 1935. The first president of the university was a liberal, so the campus gradually became what was thought to be “a den of the Reds (Marxists)”. During World War II, the Marxists and the socialists in the university were arrested (about 50 to 80 members) soon after the liberal president died. The campus was evacuated and used by the Japanese Navy.

    After the war, the campus was occupied by the U.S. Army (named “Camp Sakai”), and a number of students became anti-American fighters and “worshipers” of the Soviet Union. The campus was returned to the university, partly in 1952, and fully in 1955. In 1949, during the Allied occupation, the university was merged (with two other municipal colleges) into Osaka City University, under Japan’s new educational system.

     
  • richardmitnick 12:25 pm on January 6, 2021 Permalink | Reply
    Tags: "Advanced materials in a snap", , , Machine learning,   

    From DOE’s Sandia National Laboratories: “Advanced materials in a snap” 

    From DOE’s Sandia National Laboratories

    January 5, 2021
    Troy Rummler
    trummle@sandia.gov
    505-249-3632

    1
    Sandia National Laboratories has developed a machine learning algorithm capable of performing simulations for materials scientists nearly 40,000 times faster than normal. Credit: Image by Eric Lundin.

    If everything moved 40,000 times faster, you could eat a fresh tomato three minutes after planting a seed. You could fly from New York to L.A. in half a second. And you’d have waited in line at airport security for that flight for 30 milliseconds.

    A research team at Sandia National Laboratories has successfully used machine learning — computer algorithms that improve themselves by learning patterns in data — to complete cumbersome materials science calculations more than 40,000 times faster than normal.

    Their results, published Jan. 4 in npj Computational Materials, could herald a dramatic acceleration in the creation of new technologies for optics, aerospace, energy storage and potentially medicine while simultaneously saving laboratories money on computing costs.

    “We’re shortening the design cycle,” said David Montes de Oca Zapiain, a computational materials scientist at Sandia who helped lead the research. “The design of components grossly outpaces the design of the materials you need to build them. We want to change that. Once you design a component, we’d like to be able to design a compatible material for that component without needing to wait for years, as it happens with the current process.”

    The research, funded by the U.S. Department of Energy’s Basic Energy Sciences program, was conducted at the Center for Integrated Nanotechnologies, a DOE user research facility jointly operated by Sandia and Los Alamos National Laboratory.

    Machine learning speeds up computationally expensive simulations.

    Sandia researchers used machine learning to accelerate a computer simulation that predicts how changing a design or fabrication process, such as tweaking the amounts of metals in an alloy, will affect a material. A project might require thousands of simulations, which can take weeks, months or even years to run.

    The team clocked a single, unaided simulation on a high-performance computing cluster with 128 processing cores (a typical home computer has two to six processing cores) at 12 minutes. With machine learning, the same simulation took 60 milliseconds using only 36 cores, equivalent to roughly 42,000 times faster once the difference in core counts is taken into account. This means researchers can now learn in under 15 minutes what would normally take a year.
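    The quoted factor follows from the reported numbers, assuming the comparison simply scales wall-clock time by the ratio of cores used:

```python
# Reproducing the quoted speedup from the numbers reported above,
# assuming a simple linear normalization by core count ("on equal computers").
baseline_seconds = 12 * 60      # unaided simulation: 12 minutes on 128 cores
ml_seconds = 0.060              # machine-learning run: 60 ms on 36 cores
raw_speedup = baseline_seconds / ml_seconds          # 12,000x in wall-clock time
core_normalized = raw_speedup * (128 / 36)           # about 42,700x "on equal computers"
print(round(raw_speedup), round(core_normalized))
```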

    Sandia’s new algorithm arrived at an answer that was 5% different from the standard simulation’s result, a very accurate prediction for the team’s purposes. Machine learning trades some accuracy for speed because it makes approximations to shortcut calculations.

    “Our machine-learning framework achieves essentially the same accuracy as the high-fidelity model but at a fraction of the computational cost,” said Sandia materials scientist Rémi Dingreville, who also worked on the project.
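    The general pattern behind such results, training a fast learned surrogate on the outputs of a slow simulation and then querying the surrogate instead, can be sketched as follows. The toy "simulation" and the small neural network below are stand-ins chosen for illustration; they are not Sandia's microstructure-evolution code or the team's published model.

```python
# Generic surrogate-modeling sketch: learn a fast approximation of a slow
# "simulation". The toy simulation below is a placeholder, not Sandia's model.
import time
import numpy as np
from sklearn.neural_network import MLPRegressor

def slow_simulation(params):
    """Deliberately slow toy model: fine-grained explicit time stepping."""
    a, b = params
    state = 1.0
    for _ in range(100_000):
        state += 2e-5 * (a * np.sin(state) - b * state)
    return state

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(150, 2))
y = np.array([slow_simulation(p) for p in X])   # expensive step: generate training data

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), solver="lbfgs",
                         max_iter=5000, random_state=0)
surrogate.fit(X, y)

test_point = rng.uniform(0.0, 1.0, size=(1, 2))
t0 = time.perf_counter()
exact = slow_simulation(test_point[0])
t1 = time.perf_counter()
approx = surrogate.predict(test_point)[0]
t2 = time.perf_counter()
print(f"exact {exact:.4f} in {t1 - t0:.3f}s | surrogate {approx:.4f} in {t2 - t1:.6f}s")
```

    The surrogate pays its cost up front, when the training data are generated; every prediction afterward is nearly free, which is where the accuracy-for-speed trade described above comes from.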

    Benefits could extend beyond materials

    Dingreville and Montes de Oca Zapiain are going to use their algorithm first to research ultrathin optical technologies for next-generation monitors and screens. Their research, though, could prove widely useful because the simulation they accelerated describes a common event — the change, or evolution, of a material’s microscopic building blocks over time.

    Machine learning previously has been used to shortcut simulations that calculate how interactions between atoms and molecules change over time. The published results, however, demonstrate the first use of machine learning to accelerate simulations of materials at relatively large, microscopic scales, which the Sandia team expects will be of greater practical value to scientists and engineers.

    For instance, scientists can now quickly simulate how minuscule droplets of melted metal will glob together when they cool and solidify, or conversely, how a mixture will separate into layers of its constituent parts when it melts. Many other natural phenomena, including the formation of proteins, follow similar patterns. And while the Sandia team has not tested the machine-learning algorithm on simulations of proteins, they are interested in exploring the possibility in the future.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Sandia Campus.


    Sandia National Laboratory

    Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration. With main facilities in Albuquerque, N.M., and Livermore, Calif., Sandia has major R&D responsibilities in national security, energy and environmental technologies, and economic competitiveness.



     
  • richardmitnick 10:57 am on December 31, 2020 Permalink | Reply
    Tags: "An Existential Crisis in Neuroscience", , , DNNs are mathematical models that string together chains of simple functions that approximate real neurons., , , It’s clear now that while science deals with facts a crucial part of this noble endeavor is making sense of the facts., Machine learning, ,   

    From Nautilus: “An Existential Crisis in Neuroscience” 

    From Nautilus

    December 30, 2020 [Re-issued “Maps” issue January 23, 2020.]
    Grigori Guitchounts

    1
    A rendering of dendrites (red)—a neuron’s branching processes—and protruding spines that receive synaptic information, along with a saturated reconstruction (multicolored cylinder) from a mouse cortex. Credit: Lichtman Lab at Harvard University.

    We’re mapping the brain in amazing detail—but our brain can’t understand the picture.

    On a chilly evening last fall, I stared into nothingness out of the floor-to-ceiling windows in my office on the outskirts of Harvard’s campus. As a purplish-red sun set, I sat brooding over my dataset on rat brains. I thought of the cold windowless rooms in downtown Boston, home to Harvard’s high-performance computing center, where computer servers were holding on to a precious 48 terabytes of my data. I have recorded the 13 trillion numbers in this dataset as part of my Ph.D. experiments, asking how the visual parts of the rat brain respond to movement.

    Printed on paper, the dataset would fill 116 billion pages, double-spaced. When I recently finished writing the story of my data, the magnum opus fit on fewer than two dozen printed pages. Performing the experiments turned out to be the easy part. I had spent the last year agonizing over the data, observing and asking questions. The answers left out large chunks that did not pertain to the questions, like a map leaves out irrelevant details of a territory.

    But, as massive as my dataset sounds, it represents just a tiny chunk of a dataset taken from the whole brain. And the questions it asks—Do neurons in the visual cortex do anything when an animal can’t see? What happens when inputs to the visual cortex from other brain regions are shut off?—are small compared to the ultimate question in neuroscience: How does the brain work?

    2
    LIVING COLOR: This electron microscopy image of a slice of mouse cortex, which shows different neurons labeled by color, is just the beginning. “We’re working on a cortical slab of a human brain, where every synapse and every connection of every nerve cell is identifiable,” says Harvard’s Jeff Lichtman. “It’s amazing.” Credit: Lichtman Lab at Harvard University.

    The nature of the scientific process is such that researchers have to pick small, pointed questions. Scientists are like diners at a restaurant: We’d love to try everything on the menu, but choices have to be made. And so we pick our field, and subfield, read up on the hundreds of previous experiments done on the subject, design and perform our own experiments, and hope the answers advance our understanding. But if we have to ask small questions, then how do we begin to understand the whole?

    Neuroscientists have made considerable progress toward understanding brain architecture and aspects of brain function. We can identify brain regions that respond to the environment, activate our senses, generate movements and emotions. But we don’t know how different parts of the brain interact with and depend on each other. We don’t understand how their interactions contribute to behavior, perception, or memory. Technology has made it easy for us to gather behemoth datasets, but I’m not sure understanding the brain has kept pace with the size of the datasets.

    Some serious efforts, however, are now underway to map brains in full. One approach, called connectomics, strives to chart the entirety of the connections among neurons in a brain. In principle, a complete connectome would contain all the information necessary to provide a solid base on which to build a holistic understanding of the brain. We could see what each brain part is, how it supports the whole, and how it ought to interact with the other parts and the environment. We’d be able to place our brain in any hypothetical situation and have a good sense of how it would react.

    The question of how we might begin to grasp the entirety of the organ that generates our minds has been pressing me for a while. Like most neuroscientists, I’ve had to cultivate two clashing ideas: striving to understand the brain and knowing that’s likely an impossible task. I was curious how others tolerate this doublethink, so I sought out Jeff Lichtman, a leader in the field of connectomics and a professor of molecular and cellular biology at Harvard.

    Lichtman’s lab happens to be down the hall from mine, so on a recent afternoon, I meandered over to his office to ask him about the nascent field of connectomics and whether he thinks we’ll ever have a holistic understanding of the brain. His answer—“No”—was not reassuring, but our conversation was a revelation, and shed light on the questions that had been haunting me. How do I make sense of gargantuan volumes of data? Where does science end and personal interpretation begin? Were humans even capable of weaving today’s reams of information into a holistic picture? I was now on a dark path, questioning the limits of human understanding, unsettled by a future filled with big data and small comprehension.

    Lichtman likes to shoot first, ask questions later. The 68-year-old neuroscientist’s weapon of choice is a 61-beam electron microscope, which Lichtman’s team uses to visualize the tiniest of details in brain tissue. The way neurons are packed in a brain would make canned sardines look like they have a highly evolved sense of personal space. To make any sense of these images, and in turn, what the brain is doing, the parts of neurons have to be annotated in three dimensions, the result of which is a wiring diagram. Done at the scale of an entire brain, the effort constitutes a complete wiring diagram, or the connectome.

    To capture that diagram, Lichtman employs a machine that can only be described as a fancy deli slicer. The machine cuts pieces of brain tissue into 30-nanometer-thick sections, which it then pastes onto a tape conveyor belt. The tape goes on silicon wafers, and into Lichtman’s electron microscope, where billions of electrons blast the brain slices, generating images that reveal nanometer-scale features of neurons, their axons, dendrites, and the synapses through which they exchange information. The Technicolor images are a beautiful sight that evokes a fantastic thought: The mysteries of how brains create memories, thoughts, perceptions, feelings—consciousness itself—must be hidden in this labyrinth of neural connections.

    2
    THE MAPMAKER: Jeff Lichtman, a leader in brain mapping, says the word “understanding” has to undergo a revolution in reference to the human brain. “There’s no point when you can suddenly say, ‘I now understand the brain,’ just as you wouldn’t say, ‘I now get New York City.’” Credit: Lichtman Lab at Harvard University.

    A complete human connectome will be a monumental technical achievement. A complete wiring diagram for a mouse brain alone would take up two exabytes. That’s 2 billion gigabytes; by comparison, estimates of the data footprint of all books ever written come out to less than 100 terabytes, or 0.005 percent of a mouse brain. But Lichtman is not daunted. He is determined to map whole brains, exorbitant exabyte-scale storage be damned.

    Lichtman’s office is a spacious place with floor-to-ceiling windows overlooking a tree-lined walkway and an old circular building that, in the days before neuroscience even existed as a field, used to house a cyclotron. He was wearing a deeply black sweater, which contrasted with his silver hair and olive skin. When I asked if a completed connectome would give us a full understanding of the brain, he didn’t pause in his answer. I got the feeling he had thought a great deal about this question on his own.

    “I think the word ‘understanding’ has to undergo an evolution,” Lichtman said, as we sat around his desk. “Most of us know what we mean when we say ‘I understand something.’ It makes sense to us. We can hold the idea in our heads. We can explain it with language. But if I asked, ‘Do you understand New York City?’ you would probably respond, ‘What do you mean?’ There’s all this complexity. If you can’t understand New York City, it’s not because you can’t get access to the data. It’s just there’s so much going on at the same time. That’s what a human brain is. It’s millions of things happening simultaneously among different types of cells, neuromodulators, genetic components, things from the outside. There’s no point when you can suddenly say, ‘I now understand the brain,’ just as you wouldn’t say, ‘I now get New York City.’ ”

    “But we understand specific aspects of the brain,” I said. “Couldn’t we put those aspects together and get a more holistic understanding?”

    “I guess I would retreat to another beachhead, which is, ‘Can we describe the brain?’ ” Lichtman said. “There are all sorts of fundamental questions about the physical nature of the brain we don’t know. But we can learn to describe them. A lot of people think ‘description’ is a pejorative in science. But that’s what the Hubble telescope does. That’s what genomics does. They describe what’s actually there. Then from that you can generate your hypotheses.”

    “Why is description an unsexy concept for neuroscientists?”

    “Biologists are often seduced by ideas that resonate with them,” Lichtman said. That is, they try to bend the world to their idea rather than the other way around. “It’s much better—easier, actually—to start with what the world is, and then make your idea conform to it,” he said. Instead of a hypothesis-testing approach, we might be better served by following a descriptive, or hypothesis-generating methodology. Otherwise we end up chasing our own tails. “In this age, the wealth of information is an enemy to the simple idea of understanding,” Lichtman said.

    “How so?” I asked.

    “Let me put it this way,” Lichtman said. “Language itself is a fundamentally linear process, where one idea leads to the next. But if the thing you’re trying to describe has a million things happening simultaneously, language is not the right tool. It’s like understanding the stock market. The best way to make money on the stock market is probably not by understanding the fundamental concepts of economy. It’s by understanding how to utilize this data to know what to buy and when to buy it. That may have nothing to do with economics but with data and how data is used.”

    “Maybe human brains aren’t equipped to understand themselves,” I offered.

    “And maybe there’s something fundamental about that idea: that no machine can have an output more sophisticated than itself,” Lichtman said. “What a car does is trivial compared to its engineering. What a human brain does is trivial compared to its engineering. Which is the great irony here. We have this false belief there’s nothing in the universe that humans can’t understand because we have infinite intelligence. But if I asked you if your dog can understand something you’d say, ‘Well, my dog’s brain is small.’ Well, your brain is only a little bigger,” he continued, chuckling. “Why, suddenly, are you able to understand everything?”

    Was Lichtman daunted by what a connectome might achieve? Did he see his efforts as Sisyphean?

    “It’s just the opposite,” he said. “I thought at this point we would be less far along. Right now, we’re working on a cortical slab of a human brain, where every synapse is identified automatically, every connection of every nerve cell is identifiable. It’s amazing. To say I understand it would be ridiculous. But it’s an extraordinary piece of data. And it’s beautiful. From a technical standpoint, you really can see how the cells are connected together. I didn’t think that was possible.”

    Lichtman stressed his work was about more than a comprehensive picture of the brain. “If you want to know the relationship between neurons and behavior, you gotta have the wiring diagram,” he said. “The same is true for pathology. There are many incurable diseases, such as schizophrenia, that don’t have a biomarker related to the brain. They’re probably related to brain wiring but we don’t know what’s wrong. We don’t have a medical model of them. We have no pathology. So in addition to fundamental questions about how the brain works and consciousness, we can answer questions like, Where did mental disorders come from? What’s wrong with these people? Why are their brains working so differently? Those are perhaps the most important questions to human beings.”

    Late one night, after a long day of trying to make sense of my data, I came across a short story by Jorge Luis Borges that seemed to capture the essence of the brain mapping problem. In the story, On Exactitude in Science, a man named Suarez Miranda wrote of an ancient empire that, through the use of science, had perfected the art of map-making. While early maps were nothing but crude caricatures of the territories they aimed to represent, new maps grew larger and larger, filling in ever more details with each edition. Over time, Borges wrote, “the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province.” Still, the people craved more detail. “In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it.”

    The Borges story reminded me of Lichtman’s view that the brain may be too complex to be understood by humans in the colloquial sense, and that describing it may be a better goal. Still, the idea made me uncomfortable. Much like storytelling, or even information processing in the brain, descriptions must leave some details out. For a description to convey relevant information, the describer has to know which details are important and which are not. Knowing which details are irrelevant requires having some understanding about the thing you’re describing. Will my brain, as intricate as it may be, ever be able to make sense of the two exabytes in a mouse brain?

    Humans have a critical weapon in this fight. Machine learning has been a boon to brain mapping, and the self-reinforcing relationship promises to transform the whole endeavor. Deep learning algorithms (also known as deep neural networks, or DNNs) have in the past decade allowed machines to perform cognitive tasks once thought impossible for computers—not only object recognition, but text transcription and translation, or playing games like Go or chess. DNNs are mathematical models that string together chains of simple functions that approximate real neurons. These algorithms were inspired directly by the physiology and anatomy of the mammalian cortex, but are crude approximations of real brains, based on data gathered in the 1960s. Yet they have surpassed expectations of what machines can do.
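    To make the "chains of simple functions" description concrete, the snippet below runs a forward pass through a tiny network in which each layer is an affine map followed by a rectified linear unit, the crude caricature of a neuron that these models use. The layer sizes and random weights are arbitrary.

```python
# A deep network as a chain of simple functions: affine map + ReLU per layer.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [8, 16, 16, 3]            # input -> two hidden layers -> output
weights = [rng.normal(0, 0.5, (m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def relu(z):
    return np.maximum(z, 0.0)           # each model "neuron" just sums inputs and rectifies

def forward(x):
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ W + b)
    return x @ weights[-1] + biases[-1]  # linear readout

print(forward(rng.normal(size=8)))
```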

    The secret to Lichtman’s progress with mapping the human brain is machine intelligence. Lichtman’s team, in collaboration with Google, is using deep networks to annotate the millions of images from brain slices their microscopes collect. Each scan from an electron microscope is just a set of pixels. Human eyes easily recognize the boundaries of each blob in the image (a neuron’s soma, axon, or dendrite, in addition to everything else in the brain), and with some effort can tell where a particular bit from one slice appears on the next slice. This kind of labeling and reconstruction is necessary to make sense of the vast datasets in connectomics, and has traditionally required armies of undergraduate students or citizen scientists to manually annotate all chunks. DNNs trained on image recognition are now doing the heavy lifting automatically, turning a job that took months or years into one that’s complete in a matter of hours or days. Recently, Google identified each neuron, axon, dendrite, and dendritic spine—and every synapse—in slices of the human cerebral cortex. “It’s unbelievable,” Lichtman said.

    Scientists still need to understand the relationship between those minute anatomical features and dynamical activity profiles of neurons—the patterns of electrical activity they generate—something the connectome data lacks. This is a point on which connectomics has received considerable criticism, mainly by way of example from the worm: Neuroscientists have had the complete wiring diagram of the worm C. elegans for a few decades now, but arguably do not understand the 300-neuron creature in its entirety; how its brain connections relate to its behaviors is still an active area of research.

    Still, structure and function go hand-in-hand in biology, so it’s reasonable to expect one day neuroscientists will know how specific neuronal morphologies contribute to activity profiles. It wouldn’t be a stretch to imagine a mapped brain could be kickstarted into action on a massive server somewhere, creating a simulation of something resembling a human mind. The next leap constitutes the dystopias in which we achieve immortality by preserving our minds digitally, or machines use our brain wiring to make super-intelligent machines that wipe humanity out. Lichtman didn’t entertain the far-out ideas in science fiction, but acknowledged that a network that would have the same wiring diagram as a human brain would be scary. “We wouldn’t understand how it was working any more than we understand how deep learning works,” he said. “Now, suddenly, we have machines that don’t need us anymore.”

    Yet a masterly deep neural network still doesn’t grant us a holistic understanding of the human brain. That point was driven home to me last year at a Computational and Systems Neuroscience conference, a meeting of the who’s-who in neuroscience, which took place outside Lisbon, Portugal. In a hotel ballroom, I listened to a talk by Arash Afraz, a 40-something neuroscientist at the National Institute of Mental Health in Bethesda, Maryland. The model neurons in DNNs are to real neurons what stick figures are to people, and the way they’re connected is equally sketchy, he suggested.

    Afraz is short, with a dark horseshoe mustache and balding dome covered partially by a thin ponytail, reminiscent of Matthew McConaughey in True Detective. As sturdy Atlantic waves crashed into the docks below, Afraz asked the audience if we remembered René Magritte’s Ceci n’est pas une pipe painting, which depicts a pipe with the title written out below it. Afraz pointed out that the model neurons in DNNs are not real neurons, and the connections among them are not real either. He displayed a classic diagram of interconnections among brain areas found through experimental work in monkeys—a jumble of boxes with names like V1, V2, LIP, MT, HC, each a different color, and black lines connecting the boxes seemingly at random and in more combinations than seems possible. In contrast to the dizzying heap of connections in real brains, DNNs typically connect different brain areas in a simple chain, from one “layer” to the next. Try explaining that to a rigorous anatomist, Afraz said, as he flashed a meme of a shocked baby orangutan cum anatomist. “I’ve tried, believe me,” he said.

    I, too, have been curious why DNNs are so simple compared to real brains. Couldn’t we improve their performance simply by making them more faithful to the architecture of a real brain? To get a better sense for this, I called Andrew Saxe, a computational neuroscientist at Oxford University. Saxe agreed that it might be informative to make our models truer to reality. “This is always the challenge in the brain sciences: We just don’t know what the important level of detail is,” he told me over Skype.

    How do we make these decisions? “These judgments are often based on intuition, and our intuitions can vary wildly,” Saxe said. “A strong intuition among many neuroscientists is that individual neurons are exquisitely complicated: They have all of these back-propagating action potentials, they have dendritic compartments that are independent, they have all these different channels there. And so a single neuron might even itself be a network. To caricature that as a rectified linear unit”—the simple mathematical model of a neuron in DNNs—“is clearly missing out on so much.”

    As 2020 has arrived, I have thought a lot about what I have learned from Lichtman, Afraz, and Saxe and the holy grail of neuroscience: understanding the brain. I have found myself revisiting my undergrad days, when I held science up as the only method of knowing that was truly objective (I also used to think scientists would be hyper-rational, fair beings paramountly interested in the truth—so perhaps this just shows how naive I was).

    It’s clear to me now that while science deals with facts, a crucial part of this noble endeavor is making sense of the facts. The truth is screened through an interpretive lens even before experiments start. Humans, with all our quirks and biases, choose what experiment to conduct in the first place, and how to do it. And the interpretation continues after data are collected, when scientists have to figure out what the data mean. So, yes, science gathers facts about the world, but it is humans who describe it and try to understand it. All these processes require filtering the raw data through a personal sieve, sculpted by the language and culture of our times.

    It seems likely that Lichtman’s two exabytes of brain slices, and even my 48 terabytes of rat brain data, will not fit through any individual human mind. Or at least no human mind is going to orchestrate all this data into a panoramic picture of how the human brain works. As I sat at my office desk, watching the setting sun tint the cloudless sky a light crimson, my mind reached a chromatic, if mechanical, future. The machines we have built—the ones architected after cortical anatomy—fall short of capturing the nature of the human brain. But they have no trouble finding patterns in large datasets. Maybe one day, as they grow stronger building on more cortical anatomy, they will be able to explain those patterns back to us, solving the puzzle of the brain’s interconnections, creating a picture we understand. Out my window, the sparrows were chirping excitedly, not ready to call it a day.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

     
  • richardmitnick 11:52 am on December 22, 2020 Permalink | Reply
    Tags: "Crossing the artificial intelligence thin red line?", , , Machine learning, Stuart Russell: There is huge upside potential in AI but we are already seeing the risks from the poor design of AI systems including the impacts of online misinformation; impersonation; and deception   

    From École Polytechnique Fédérale de Lausanne (CH): “Crossing the artificial intelligence thin red line?” 


    From École Polytechnique Fédérale de Lausanne (CH)

    22.12.20
    Tanya Petersen

    1

    EPFL computer science professor tells conference that AI has no legitimate role in defining, implementing, or enforcing public policy.

    Artificial intelligence shapes our modern lives. It will be one of the defining technologies of the future, with its influence and application expected to accelerate as we go through the 2020s. Yet, the stakes are high; with the countless benefits that AI brings, there is also growing academic and public concern around a lack of transparency, and its misuse, in many areas of life.

    It’s in this environment that the European Commission has become one of the first political institutions in the world to release a white paper that could be a game-changer towards a regulatory framework for AI. In addition, this year the European Parliament adopted proposals on how the EU can best regulate artificial intelligence to boost innovation, ethical standards and trust in technology.

    Recently, an all-virtual conference on the ‘Governance Of and By Digital Technology’ hosted by EPFL’s International Risk Governance Center (IRGC) and the European Union’s Horizon 2020 TRIGGER Project explored the principles needed to govern existing and emerging digital technologies, as well as the potential danger of decision-making algorithms and how to prevent these from causing harm.

    Stuart Russell, Professor of Computer Science at the University of California, Berkeley and author of the popular textbook, Artificial Intelligence: A Modern Approach, proposed that there is huge upside potential in AI, but we are already seeing the risks from the poor design of AI systems, including the impacts of online misinformation, impersonation and deception.

    “I believe that if we don’t move quickly, human beings will just be losing their rights, their powers, their individuality and becoming more and more the subject of digital technology rather than the owners of it. For example, there is already AI from 50 different corporate representatives sitting in your pocket stealing your information, and your money, as fast as it can, and there’s nobody in your phone who actually works for you. Could we rearrange that so that the software in your phone actually works for you and negotiates with these other entities to keep all of your data private?” he asked.

    Reinforcement learning algorithms, which select the content people see on their phones or other devices, are a major problem, he continued: “they currently have more power than Hitler or Stalin ever had in their wildest dreams over what billions of people see and read for most of their waking lives. We might argue that running these kinds of experiments without informed consent is a bad idea and, just as we have with pharmaceutical products, we need to have stage 1, 2, and 3 trials on human subjects and look at what effect these algorithms have on people’s minds and behavior.”

    Beyond regulating artificial intelligence aimed at individual use, one of the conference debates focused on how governments might use AI in developing and implementing public policy in areas such as healthcare, urban development or education. Bryan Ford, an Associate Professor at EPFL and head of the Decentralized and Distributed Systems Laboratory (DEDIS) in the School of Communication and Computer Sciences, argued that while the cautious use of powerful AI technologies can play many useful roles in low-level mechanisms used in many application domains, it has no legitimate role to play in defining, implementing, or enforcing public policy.

    “Matters of policy in governing humans must remain a domain reserved strictly for humans. For example, AI may have many justifiable uses in electric sensors to detect the presence of a car – how fast it is going or whether it stopped at an intersection, but I would claim AI does not belong anywhere near the policy decision of whether a car’s driver warrants suspicion and should be stopped by Highway Patrol.”

    “Because machine learning algorithms learn from data sets that represent historical experience, AI-driven policy is fundamentally constrained by the assumption that our past represents the right, best, or only viable basis on which to make decisions about the future. Yet we know that all past and present societies are highly imperfect, so to have any hope of genuinely improving our societies, governance must be visionary and forward-looking,” Professor Ford continued.

    Artificial intelligence is heterogeneous and complex. When we talk about the governance of, and by, AI are we talking about machine learning, neural networks or autonomous agents, or the different applications of any of these in different areas? Likely, all the above in many different applications. We are only at the beginning of the journey when it comes to regulating artificial intelligence, one that most participants agreed has geopolitical implications.

    “These issues may lead directly to a set of trade and geostrategic conflicts that will make them all the more difficult to resolve and all the more crucial. The question is not only to avoid them but to avoid the decoupling of the US from Europe, and Europe and the US from China, and that is going to be a significant challenge economically and geo-strategically,” suggested John Zysman, Professor of Political Science at the University of California, Berkeley and co-Director of the Berkeley Roundtable on the International Economy.

    “Ultimately, there is a thin red line that AI should not cross and some regulation, that balances the benefits and risks from AI applications, is needed. The IRGC is looking at some of the most challenging problems facing society today, and it’s great to have them as part of IC,” said James Larus, Dean of the IC School and IRGC Academic Director.

    Concluding the conference, Marie-Valentine Florin, Executive Director of the IRGC, reminded participants that artificial intelligence is a means to an end, not the end, “as societies we need a goal. Maybe that could be something like the Green Deal around sustainability to perhaps give a sense to today’s digital transformation. Digital transformation is the tool, and I don’t think society has collectively decided a real objective for it yet. That’s what we need to figure out.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    EPFL bloc

    EPFL campus

    EPFL (CH) is Europe’s most cosmopolitan technical university. It receives students, professors and staff from over 120 nationalities. With both a Swiss and international calling, it is therefore guided by a constant wish to open up; its missions of teaching, research and partnership impact various circles: universities and engineering schools, developing and emerging countries, secondary schools and gymnasiums, industry and economy, political circles and the general public.

     
  • richardmitnick 4:42 pm on December 21, 2020 Permalink | Reply
    Tags: "Artificial Intelligence Finds Surprising Patterns in Earth's Biological Mass Extinctions", "Radiations" may in fact cause major changes to existing ecosystems- an idea the authors call "destructive creation.", , Machine learning, Mass extinction events, , Phanerozoic Eon-the period for which fossils are available., Species evolution or "radiations" and extinctions are rarely connected., The Phanerozoic represents the most recent ~ 550-million-year period of Earth's total ~4.5 billion-year history and is significant to palaeontologists.,   

    From Tokyo Institute of Technology (JP): “Artificial Intelligence Finds Surprising Patterns in Earth’s Biological Mass Extinctions” 

    tokyo-tech-bloc

    From Tokyo Institute of Technology (JP)

    December 21, 2020

    Further Information

    Jennifer Hoyal Cuthill
    Affiliate Researcher
    Earth-Life Science Institute (ELSI),
    Tokyo Institute of Technology

    Email j.hoyal-cuthill@essex.ac.uk
    Tel +44-7834352081

    Contact
    Thilina Heenatigala
    Director of Communications
    Earth-Life Science Institute (ELSI),
    Tokyo Institute of Technology
    thilinah@elsi.jp
    Tel +81-3-5734-3163
    Fax +81-3-5734-3416


    Charles Darwin’s landmark opus, On the Origin of Species, ends with a beautiful summary of his theory of evolution: “There is a grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.” In fact, scientists now know that most species that have ever existed are extinct. This extinction of species has on the whole been roughly balanced by the origination of new ones over Earth’s history, with a few major temporary imbalances scientists call mass extinction events. Scientists have long believed that mass extinctions create productive periods of species evolution, or “radiations,” a model called “creative destruction.”

    A new study led by scientists affiliated with the Earth-Life Science Institute (ELSI) at Tokyo Institute of Technology used machine learning to examine the co-occurrence of fossil species and found that radiations and extinctions are rarely connected, and thus mass extinctions likely rarely cause radiations of a comparable scale.

    1
    Twists of fate.

    A new study [Nature] applies machine learning to the fossil record to visualise life’s history, showing the impacts of major evolutionary events. This shows the long-term evolutionary and ecological impacts of major events of extinction and speciation. Colours represent the geological periods from the Tonian, starting 1 billion years ago, in yellow, to the current Quaternary Period, shown in green. The red to blue colour transition marks the end-Permian mass extinction, one of the most disruptive events in the fossil record. Credit: J. Hoyal Cuthill and N. Guttenberg.

    Creative destruction is central to classic concepts of evolution. It seems clear that there are periods in which many species suddenly disappear, and many new species suddenly appear. However, radiations of a scale comparable to the mass extinctions, which this study therefore calls mass radiations, have received far less analysis than extinction events. This study compared the impacts of both extinction and radiation across the period for which fossils are available, the so-called Phanerozoic Eon. The Phanerozoic (from the Greek meaning “apparent life”) represents the most recent ~550-million-year period of Earth’s total ~4.5-billion-year history, and is significant to palaeontologists: before this period most of the organisms that existed were microbes that didn’t easily form fossils, so the prior evolutionary record is hard to observe.

    The new study suggests creative destruction isn’t a good description of how species originated or went extinct during the Phanerozoic, and suggests that many of the most remarkable times of evolutionary radiation occurred when life entered new evolutionary and ecological arenas, such as during the Cambrian explosion of animal diversity and the Carboniferous expansion of forest biomes. Whether this is true for the previous ~3 billion years dominated by microbes is not known, as the scarcity of recorded information on such ancient diversity did not allow a similar analysis.

    Palaeontologists have identified a handful of the most severe mass extinction events in the Phanerozoic fossil record. These principally include the big five mass extinctions, such as the end-Permian mass extinction in which more than 70% of species are estimated to have gone extinct. Biologists have suggested that we may now be entering a “Sixth Mass Extinction,” which they think is mainly caused by human activity including hunting and land-use changes caused by the expansion of agriculture. A commonly noted example of the previous “Big Five” mass extinctions is the Cretaceous-Tertiary one (usually abbreviated as “K-T,” using the German spelling of Cretaceous) which appears to have been caused when a meteor hit Earth ~65 million years ago, wiping out the non-avian dinosaurs. Observing the fossil record, scientists came to believe that mass extinction events create especially productive radiations. For example, in the K-T dinosaur-exterminating event, it has conventionally been supposed that a wasteland was created, which allowed organisms like mammals to recolonise and “radiate,” allowing for the evolution of all manner of new mammal species, ultimately laying the foundation for the emergence of humans. In other words, if the K-T event of “creative destruction” had not occurred, perhaps we would not be here to discuss this question.

    The new study started with a casual discussion in ELSI’s “Agora,” a large common room where ELSI scientists and visitors often eat lunch and strike up new conversations. Two of the paper’s authors, evolutionary biologist Jennifer Hoyal Cuthill (now a research fellow at Essex University in the UK) and physicist/machine learning expert Nicholas Guttenberg (now a research scientist at Cross Labs working in collaboration with GoodAI in the Czech Republic), who were both post-doctoral scholars at ELSI when the work began, were kicking around the question of whether machine learning could be used to visualise and understand the fossil record. During a visit to ELSI, just before the COVID-19 pandemic began to restrict international travel, they worked feverishly to extend their analysis to examine the correlation between extinction and radiation events. These discussions allowed them to relate their new data to the breadth of existing ideas on mass extinctions and radiations. They quickly found that the evolutionary patterns identified with the help of machine learning differed in key ways from traditional interpretations.

    The team used a novel application of machine learning to examine the temporal co-occurrence of species in the Phanerozoic fossil record, examining over a million entries in a massive curated, public database including almost two hundred thousand species.
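    As a very rough sketch of what a co-occurrence analysis can look like, the snippet below builds a species-by-interval occurrence matrix from random stand-in data, measures faunal turnover between adjacent intervals, and projects the intervals into a two-dimensional "map". It is not the authors' trained machine-learning pipeline, and the data are invented for illustration.

```python
# Simplified co-occurrence sketch on invented fossil-style data (not the
# paper's pipeline): occurrence matrix -> turnover + low-dimensional map.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_species, n_intervals = 2000, 100
occ = np.zeros((n_species, n_intervals), dtype=bool)
starts = rng.integers(0, n_intervals - 5, n_species)   # first appearance
spans = rng.integers(1, 20, n_species)                 # stratigraphic range (intervals)
for s in range(n_species):
    occ[s, starts[s]:starts[s] + spans[s]] = True

# Turnover: fraction of an interval's species that are absent from the next one.
shared = np.logical_and(occ[:, :-1], occ[:, 1:]).sum(axis=0)
richness = np.maximum(occ[:, :-1].sum(axis=0), 1)
turnover = 1.0 - shared / richness

# A two-dimensional "map" of intervals based on which species they contain.
embedding = PCA(n_components=2).fit_transform(occ.T.astype(float))
print("mean turnover between adjacent intervals:", round(float(turnover.mean()), 3))
print("interval embedding shape:", embedding.shape)
```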

    Lead author Dr Hoyal Cuthill said, “Some of the most challenging aspects of understanding the history of life are the enormous timescales and numbers of species involved. New applications of machine learning can help by allowing us to visualise this information in a human-readable form. This means we can, so to speak, hold half a billion years of evolution in the palms of our hands, and gain new insights from what we see.”

    Using their objective methods, they found that the “big five” mass extinction events previously identified by palaeontologists were picked up by the machine learning methods as being among the top 5% of significant disruptions in which extinction outpaced radiation or vice versa, as were seven additional mass extinctions, two combined mass extinction-radiation events and fifteen mass radiations. Surprisingly, in contrast to previous narratives emphasising the importance of post-extinction radiations, this work found that the most comparable mass radiations and extinctions were only rarely coupled in time, refuting the idea of a causal relationship between them.

    Co-author Dr Nicholas Guttenberg said, “the ecosystem is dynamic, you don’t necessarily have to chip an existing piece off to allow something new to appear.”

    The team further found that radiations may in fact cause major changes to existing ecosystems, an idea the authors call “destructive creation.” They found that, during the Phanerozoic Eon, on average, the species that made up an ecosystem at any one time are almost all gone by 19 million years later. But when mass extinctions or radiations occur, this rate of turnover is much higher.

    This gives a new perspective on how the modern “Sixth Extinction” is occurring. The Quaternary period, which began 2.5 million years ago, has witnessed repeated climate upheavals, including dramatic alternations of glaciation, times when high-latitude locations on Earth were ice-covered. This means that the present “Sixth Extinction” is eroding biodiversity that was already disrupted, and the authors suggest it will take at least 8 million years for it to revert to the long-term average of 19 million years. Dr Hoyal Cuthill comments that “each extinction that happens on our watch erases a species, which may have existed for millions of years up to now, making it harder for the normal process of ‘new species origination’ to replace what is being lost.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    tokyo-tech-campus

    Tokyo Tech (JP) is the top national university for science and technology in Japan with a history spanning more than 130 years. Of the approximately 10,000 students at the Ookayama, Suzukakedai, and Tamachi Campuses, half are in their bachelor’s degree program while the other half are in master’s and doctoral degree programs. International students number 1,200. There are 1,200 faculty and 600 administrative and technical staff members.

    In the 21st century, the role of science and technology universities has become increasingly important. Tokyo Tech continues to develop global leaders in the fields of science and technology, and contributes to the betterment of society through its research, focusing on solutions to global issues. The Institute’s long-term goal is to become the world’s leading science and technology university.

     
  • richardmitnick 9:08 am on July 29, 2020 Permalink | Reply
    Tags: "A method to predict the properties of complex quantum systems", , Machine learning, Machines are currently unable to support quantum systems with over tens of qubits., , , , Unitary t-design   

    From Caltech via phys.org: “A method to predict the properties of complex quantum systems” 

    Caltech Logo

    From Caltech

    via


    phys.org

    July 29, 2020
    Ingrid Fadelli

    1
    Credit: Huang, Kueng & Preskill.

    Predicting the properties of complex quantum systems is a crucial step in the development of advanced quantum technologies. While research teams worldwide have already devised a number of techniques to study the characteristics of quantum systems, most of these have only proved to be effective in some cases.

    Three researchers at California Institute of Technology recently introduced a new method that can be used to predict multiple properties of complex quantum systems from a limited number of measurements. Their method, outlined in a paper published in Nature Physics, has been found to be highly efficient and could open up new possibilities for studying the ways in which machines process quantum information.

    “During my undergraduate, my research centered on statistical machine learning and deep learning,” Hsin-Yuan Huang, one of the researchers who carried out the study, told Phys.org. “A central basis for the current machine-learning era is the ability to use highly parallelized hardware, such as graphical processing units (GPU) or tensor processing units (TPU). It is natural to wonder how an even more powerful learning machine capable of harnessing quantum-mechanical processes could emerge in the far future. This was my aspiration when I started my Ph.D. at Caltech.”

    The first step toward the development of more advanced machines based on quantum-mechanical processes is to gain a better understanding of how current technologies process and manipulate quantum systems and quantum information. The standard method for doing this, known as quantum state tomography, works by learning the entire description of a quantum system. However, this requires an exponential number of measurements, as well as considerable computational memory and time.
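    A quick back-of-the-envelope calculation makes that scaling concrete (the numbers below only count the storage needed for a dense density matrix; they are not taken from the paper):

```python
# Why full state tomography stops scaling: an n-qubit density matrix has
# 4**n - 1 real parameters, and storing it densely as complex doubles
# takes (2**n)**2 * 16 bytes.
for n in (10, 20, 30, 40):
    params = 4**n - 1
    gigabytes = (2**n) ** 2 * 16 / 1e9
    print(f"{n:2d} qubits: ~{params:.1e} parameters, ~{gigabytes:.1e} GB to store")
```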

    As a result, when using quantum state tomography, machines are currently unable to support quantum systems with more than a few tens of qubits. In recent years, researchers have proposed a number of techniques based on artificial neural networks that could significantly enhance the quantum information processing of machines. However, these techniques do not generalize well across all cases, and the specific requirements that allow them to work are still unclear.

    “To build a rigorous foundation for how machines can perceive quantum systems, we combined my previous knowledge about statistical learning theory with Richard Kueng and John Preskill’s expertise on a beautiful mathematical theory known as unitary t-design,” Huang said. “Statistical learning theory is the theory that underlies how the machine could learn an approximate model about how the world behaves, while unitary t-design is a mathematical theory that underlies how quantum information scrambles, which is central to understand quantum many-body chaos, in particular, quantum black holes.”

    By combining statistical learning and unitary t-design theory, the researchers were able to devise a rigorous and efficient procedure that allows classical machines to produce approximate classical descriptions of quantum many-body systems. These descriptions, built from a minimal number of quantum measurements, can then be used to predict many properties of the quantum systems being studied.

    “To construct an approximate classical description of the quantum state, we perform a randomized measurement procedure given as follows,” Huang said. “We sample a few random quantum evolutions that would be applied to the unknown quantum many-body system. These random quantum evolutions are typically chaotic and would scramble the quantum information stored in the quantum system.”

    The random quantum evolutions sampled by the researchers ultimately enable the use of the mathematical theory of unitary t-design to study such chaotic quantum systems as quantum black holes. In addition, Huang and his colleagues examined a number of randomly scrambled quantum systems using a measurement tool that elicits a wave function collapse, a process that turns a quantum system into a classical system. Finally, they combined the random quantum evolutions with the classical system representations derived from their measurements, producing an approximate classical description of the quantum system of interest.
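    One widely used special case of this kind of randomized-measurement procedure rotates each qubit into a randomly chosen Pauli basis, measures it, and inverts the outcome into a "snapshot" matrix whose average reproduces the state. The simulation below is a small illustration of that special case under those assumptions, with a toy three-qubit GHZ state and observable; it is not a reproduction of the paper's exact protocol.

```python
# Illustrative simulation of a randomized-measurement ("classical shadow")
# scheme, specialised to random single-qubit Pauli-basis measurements.
# Toy state and observable; not the paper's exact protocol.
import numpy as np

rng = np.random.default_rng(42)

I2 = np.eye(2, dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)      # rotate to measure X
YR = np.array([[1, -1j], [1, 1j]], dtype=complex) / np.sqrt(2)   # rotate to measure Y
ROT = {"X": H, "Y": YR, "Z": I2}

def snapshot(state):
    """One snapshot of an n-qubit pure state from one randomized measurement."""
    n = int(np.log2(state.size))
    bases = rng.choice(["X", "Y", "Z"], size=n)
    U = ROT[bases[0]]
    for b in bases[1:]:                    # rotate every qubit into its chosen basis
        U = np.kron(U, ROT[b])
    probs = np.abs(U @ state) ** 2
    outcome = rng.choice(state.size, p=probs / probs.sum())
    # Invert the measurement channel qubit by qubit: rho_hat = kron of (3 Udag|b><b|U - I).
    rho_hat = np.array([[1.0]], dtype=complex)
    for q, b in enumerate(bases):
        bit = (outcome >> (n - 1 - q)) & 1
        ket = np.zeros((2, 1), dtype=complex)
        ket[bit, 0] = 1.0
        local = 3 * (ROT[b].conj().T @ ket @ ket.conj().T @ ROT[b]) - I2
        rho_hat = np.kron(rho_hat, local)
    return rho_hat

# Toy check: estimate <Z x Z x I> on a 3-qubit GHZ state from 2000 snapshots.
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)
Z = np.diag([1.0, -1.0]).astype(complex)
obs = np.kron(np.kron(Z, Z), I2)

shadow_mean = sum(snapshot(ghz) for _ in range(2000)) / 2000
print("shadow estimate:", np.trace(obs @ shadow_mean).real)   # close to 1.0
print("exact value:    ", (ghz.conj() @ obs @ ghz).real)      # exactly 1.0
```

    Averaging more snapshots tightens the estimate, and the same collection of snapshots can be reused to estimate many different observables, which is the point of the approach described in the article.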

    “Intuitively, one could think of this procedure as follows,” Huang explained. “We have an exponentially high-dimensional object, the quantum many-body system, that is very hard to grasp by a classical machine. We perform several random projections of this extremely high-dimension object to a much lower dimensional space through the use of random/chaotic quantum evolution. The set of random projections provides a rough picture of how this exponentially high dimensional object looks, and the classical representation allows us to predict various properties of the quantum many-body system.”
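    A loose classical analogy to that "random projections" intuition (only an analogy, not part of the paper's method) is the standard random-projection trick from classical data analysis: a modest number of random projections of a very high-dimensional vector preserves enough geometry to estimate quantities such as distances.

```python
# Loose classical analogy only: random projections of high-dimensional vectors
# approximately preserve their geometry (Johnson-Lindenstrauss style).
import numpy as np

rng = np.random.default_rng(7)

d, k = 20_000, 200                            # ambient vs. projected dimension
x, y = rng.normal(size=d), rng.normal(size=d)
P = rng.normal(size=(k, d)) / np.sqrt(k)      # random projection matrix

print(f"true distance      : {np.linalg.norm(x - y):.1f}")
print(f"projected estimate : {np.linalg.norm(P @ x - P @ y):.1f}")
```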

    Huang and his colleagues proved that by combining statistical learning constructs with the theory of quantum information scrambling, they could accurately predict M properties of a quantum system from only of order log(M) measurements. In other words, the number of measurements needed grows only logarithmically with the number of properties being predicted, so even an exponentially large collection of properties can be estimated from a modest number of measurement repetitions.

    “The traditional understanding is that when we want to measure M properties, we have to measure the quantum system M times,” Huang said. “This is because after we measure one property of the quantum system, the quantum system would collapse and become classical. After the quantum system has turned classical, we cannot measure other properties with the resulting classical system. Our approach avoids this by performing randomly generated measurements and inferring the desired properties by combining these measurement data.”
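    Purely to illustrate the contrast in scaling described above, the snippet below compares a naive one-repetition-per-property measurement budget with the logarithmic dependence on M quoted in the article (constants and accuracy factors are omitted):

```python
# Scaling illustration only: constants and accuracy factors omitted.
import math

for M in (10, 1_000, 1_000_000):
    print(f"M = {M:>9,d} properties: naive ~{M:>9,d} repetitions, "
          f"randomized approach ~O(log M) = O({math.ceil(math.log(M))})")
```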

    The study partly explains the excellent performance achieved by recently developed machine learning (ML) techniques in predicting the properties of quantum systems. In addition, the method's design makes it significantly faster than existing ML techniques, while also allowing it to predict properties of quantum many-body systems with greater accuracy.

    “Our study rigorously shows that there is much more information hidden in the data obtained from quantum measurements than we originally expected,” Huang said. “By suitably combining these data, we can infer this hidden information and gain significantly more knowledge about the quantum system. This implies the importance of data science techniques for the development of quantum technology.”

    The results of tests the team conducted suggest that to leverage the power of machine learning, it is first necessary to attain a good understanding of intrinsic quantum physics mechanisms. Huang and his colleagues showed that although directly applying standard machine-learning techniques can lead to satisfactory results, organically combining the mathematics behind machine learning and quantum physics results in far better quantum information processing performance.

    “Given a rigorous ground for perceiving quantum systems with classical machines, my personal plan is to now take the next step toward creating a learning machine capable of manipulating and harnessing quantum-mechanical processes,” Huang said. “In particular, we want to provide a solid understanding of how machines could learn to solve quantum many-body problems, such as classifying quantum phases of matter or finding quantum many-body ground states.”

    This new method for constructing classical representations of quantum systems could open up new possibilities for the use of machine learning to solve challenging problems involving quantum many-body systems. To tackle these problems more efficiently, however, machines would also need to be able to simulate a number of complex computations, which would require a further synthesis between the mathematics underlying machine learning and quantum physics. In their next studies, Huang and his colleagues plan to explore new techniques that could enable this synthesis.

    “At the same time, we are also working on refining and developing new tools for inferring hidden information from the data collected by quantum experimentalists,” Huang said. “The physical limitation in the actual systems provides interesting challenges for developing more advanced techniques. This would further allow experimentalists to see what they originally could not and help advance the current state of quantum technology.”

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    The California Institute of Technology (commonly referred to as Caltech) is a private research university located in Pasadena, California, United States. Caltech has six academic divisions with strong emphases on science and engineering. Its 124-acre (50 ha) primary campus is located approximately 11 mi (18 km) northeast of downtown Los Angeles. “The mission of the California Institute of Technology is to expand human knowledge and benefit society through research integrated with education. We investigate the most challenging, fundamental problems in science and technology in a singularly collegial, interdisciplinary atmosphere, while educating outstanding students to become creative members of society.”

    Caltech campus

     