Tagged: AI

  • richardmitnick 8:59 am on September 2, 2021 Permalink | Reply
    Tags: "When Deep Learning Meets Geophysics", AI, ,   

    From Eos: “When Deep Learning Meets Geophysics” 

    From AGU

    From Eos

    9.1.21
    Jianwei Ma
    jwm@pku.edu.cn
    Siwei Yu

    Traditional physical models are no longer the only foundational tools for processing geophysical data; “big data” help to reveal the laws of geophysics from new angles with exciting results so far.

    Understanding deep learning (DL) from different perspectives. Credit: Yu and Ma [2021].

    As artificial intelligence (AI) continues to develop, geoscientists are interested in how new AI developments could contribute to geophysical discoveries. A new article in Reviews of Geophysics examines one popular AI technique, deep learning (DL). We asked the authors some questions about the connection between deep learning and the geosciences.

    How would you describe “deep learning” in simple terms to a non-expert?

    Deep learning (DL) optimizes the parameters in a system, a so-called “neural network,” by feeding it a large amount of training data. “Deep” means the system consists of a structure with multiple layers.

    DL can be understood from different angles. In terms of biology, DL is a bionic approach imitating the neurons in the human brain; a computer can learn knowledge as well as draw inferences like a human. In terms of mathematics, DL is a high-dimensional nonlinear optimization problem; DL constructs a mapping from the input samples to the output labels. In terms of information science, DL extracts useful information from a large set of redundant data.
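
    To make "multiple layers" and the "mapping from inputs to labels" concrete, here is a minimal sketch in Python/PyTorch of a small deep network fitted to synthetic data. The layer sizes, data, and training settings are illustrative assumptions, not details from the review.

        # Minimal sketch: a "deep" network is a stack of layers whose parameters
        # are optimized so that inputs map to the desired outputs.
        import torch
        import torch.nn as nn

        # Synthetic training data: 256 samples, 16 input features, 1 target value.
        x = torch.randn(256, 16)
        y = torch.sin(x.sum(dim=1, keepdim=True))   # an arbitrary nonlinear target

        # "Deep" = several layers stacked between input and output.
        model = nn.Sequential(
            nn.Linear(16, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()

        # Training = high-dimensional nonlinear optimization of the layer parameters.
        for step in range(1000):
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()

        print(f"final training loss: {loss.item():.4f}")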

    How can deep learning be used by the geophysical community?

    Deep learning-based geophysical applications. Credit: Yu and Ma [2021].

    DL has the potential to be applied to most areas of geophysics. By providing a large database, you can train a DL architecture to perform geophysical inferring. Take earthquake science as an example. The historical records of seismic stations contain useful information such as the waveforms of an earthquake and corresponding locations. Therefore, the waveforms and locations serve as the input and output of a neural network. The parameters in the neural network are optimized to minimize the mismatch between the output of the neural network and the true locations. Then the trained neural network can predict locations of new coming seismic events. DL can be used in other fields in a similar manner.
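
    As a rough illustration of the earthquake example, the sketch below (Python/PyTorch) treats recorded waveforms as the network input and source locations as the labels, and minimizes the mismatch between predicted and true locations. The array shapes, architecture, and synthetic data are invented for illustration; this is not the authors' actual workflow.

        # Illustrative sketch: learn a mapping from seismic waveforms to event locations.
        import torch
        import torch.nn as nn

        n_events, n_samples = 512, 1000                    # invented: 512 events, 1000-sample traces
        waveforms = torch.randn(n_events, 1, n_samples)    # stand-in for recorded waveforms
        locations = torch.rand(n_events, 3)                # stand-in for normalized (lat, lon, depth)

        model = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=15, stride=4), nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=15, stride=4), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(64), nn.ReLU(),
            nn.Linear(64, 3),                              # predicted location
        )

        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()                             # mismatch between predicted and true locations

        for epoch in range(20):
            optimizer.zero_grad()
            loss = loss_fn(model(waveforms), locations)
            loss.backward()
            optimizer.step()

        # A model trained this way could then be applied to waveforms of newly recorded events.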

    What advantages does deep learning have over traditional methods in geophysical data processing and analysis?

    Traditional methods suffer from inaccurate modeling and computational bottlenecks with large-scale and complex geophysical systems; DL could help to solve this. First, DL handles big data naturally, whereas big data create a computational burden for traditional methods. Second, DL can make use of historical data and experience, which traditional methods usually do not consider. Third, DL does not require an accurate description of the physical model, which is useful when the physical model is only partially known. Fourth, once training is complete, DL offers high computational efficiency, enabling the characterization of Earth at high resolution. Fifth, DL can be used to discover physical concepts, such as the heliocentric nature of the solar system, and may even yield discoveries that are not yet known.

    In your opinion, what are some of the most exciting opportunities for deep learning applications in geophysics?

    DL has already provided some surprising results in geophysics. For instance, on the Stanford earthquake data set, the earthquake detection accuracy improved to 100 percent compared to 91 percent accuracy with the traditional method.

    In our review article, we suggest a roadmap for applying DL to different geophysical tasks, divided into three levels:

    1. Traditional methods are time-consuming and require intensive human labor and expert knowledge, such as first-arrival picking and velocity selection in exploration geophysics.
    2. Traditional methods face difficulties and bottlenecks. For example, geophysical inversion requires good initial values and high-accuracy modeling, and it suffers from local minima.
    3. Traditional methods cannot handle some cases at all, such as multimodal data fusion and inversion.

    What are some difficulties in applying deep learning in the geophysical community?

    Despite the success of DL in some geophysical applications, such as earthquake detectors or pickers, its use as a tool for most practical geophysics is still in its infancy.

    The main difficulties include a shortage of training samples, low signal-to-noise ratios, and strong nonlinearity. The lack of training samples in geophysical applications compared to those in other industries is the most critical of these challenges. Though the volume of geophysical data is large, available labels are scarce. Also, in certain geophysical fields, such as exploration geophysics, the data are not shared among companies. Further, geophysical tasks are usually much more difficult than those in computer vision.

    What are potential future directions for research involving deep learning in geophysics?

    3
    Future trends for applying deep learning in geophysics. Credit: Yu and Ma [2021].

    In terms of DL approaches, several advanced DL methods may overcome the difficulties of applying DL in geophysics, such as semi-supervised and unsupervised learning, transfer learning, multimodal DL, federated learning, and active learning. For example, in practical geophysical applications, obtaining labels for a large data set is time-consuming and can even be infeasible. Therefore, semi-supervised or unsupervised learning is required to relieve the dependence on labels.
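
    One common way to relieve the dependence on labels is to pretrain a network on unlabeled data and then fine-tune a small head on the few labeled examples. The sketch below (Python/PyTorch, with invented shapes and a simple autoencoder) is a generic illustration of that idea, not a specific method from the review.

        # Sketch: unsupervised pretraining (autoencoder) followed by fine-tuning on scarce labels.
        import torch
        import torch.nn as nn

        unlabeled = torch.randn(2000, 128)    # plentiful unlabeled data (invented shapes)
        labeled_x = torch.randn(50, 128)      # only a few labeled samples
        labeled_y = torch.randint(0, 2, (50,))

        encoder = nn.Sequential(nn.Linear(128, 32), nn.ReLU())
        decoder = nn.Sequential(nn.Linear(32, 128))

        # 1) Unsupervised stage: learn a compact representation by reconstructing the inputs.
        opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
        for step in range(500):
            opt.zero_grad()
            loss = nn.functional.mse_loss(decoder(encoder(unlabeled)), unlabeled)
            loss.backward()
            opt.step()

        # 2) Supervised stage: train only a small classifier head on the scarce labels.
        head = nn.Linear(32, 2)
        opt2 = torch.optim.Adam(head.parameters(), lr=1e-3)
        for step in range(200):
            opt2.zero_grad()
            loss = nn.functional.cross_entropy(head(encoder(labeled_x).detach()), labeled_y)
            loss.backward()
            opt2.step()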

    We would like to see research of DL in geophysics focus on the cases that traditional methods cannot handle, such as simulating the atmosphere or imaging the Earth’s interior on a large spatial and temporal scale with high resolution.

    Jianwei Ma (jwm@pku.edu.cn), Peking University [北京大学](CN)
    Siwei Yu, Harbin Institute of Technology [哈尔滨工业大学] (CN)

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Eos is the leading source for trustworthy news and perspectives about the Earth and space sciences and their impact. Its namesake is Eos, the Greek goddess of the dawn, who represents the light shed on understanding our planet and its environment in space by the Earth and space sciences.

     
  • richardmitnick 11:09 am on July 1, 2021 Permalink | Reply
    Tags: "The power of two", AI, , , , , , , Ellen Zhong, , , , Software called cryoDRGN   

    From Massachusetts Institute of Technology (US): "The power of two"

    MIT News

    From Massachusetts Institute of Technology (US)

    June 30, 2021
    Saima Sidik | Department of Biology

    Graduate student Ellen Zhong helped biologists and mathematicians reach across departmental lines to address a longstanding problem in electron microscopy.

    Ellen Zhong, a graduate student from the Computational and Systems Biology Program, is using a computational pattern-recognition tool called a neural network to study the shapes of molecular machines. Credit: Matthew Brown.

    MIT’s Hockfield Court is bordered on the west by the ultramodern Stata Center, with its reflective, silver alcoves that jut off at odd angles, and on the east by Building 68, which is a simple, window-lined, cement rectangle. At first glance, Bonnie Berger’s mathematics lab in the Stata Center and Joey Davis’s biology lab in Building 68 are as different as the buildings that house them. And yet, a recent collaboration between these two labs shows how their disciplines complement each other. The partnership started when Ellen Zhong, a graduate student from the Computational and Systems Biology (CSB) Program, decided to use a computational pattern-recognition tool called a neural network to study the shapes of molecular machines. Three years later, Zhong’s project is letting scientists see patterns that run beneath the surface of their data, and deepening their understanding of the molecules that shape life.

    Zhong’s work builds on a technique from the 1970s called cryo-electron microscopy (cryo-EM), which lets researchers take high-resolution images of frozen protein complexes. Over the past decade, better microscopes and cameras have led to a “resolution revolution” in cryo-EM that’s allowed scientists to see individual atoms within proteins. But, as good as these images are, they’re still only static snapshots. In reality, many of these molecular machines are constantly changing shape and composition as cells carry out their normal functions and adjust to new situations.

    Along with former Berger lab member Tristan Bepler, Zhong devised software called cryoDRGN. The tool uses neural nets to combine hundreds of thousands of cryo-EM images, and shows scientists the full range of three-dimensional conformations that protein complexes can take, letting them reconstruct the proteins’ motion as they carry out cellular functions. Understanding the range of shapes that protein complexes can take helps scientists develop drugs that block viruses from entering cells, study how pests kill crops, and even design custom proteins that can cure disease. Covid-19 vaccines, for example, work partly because they include a mutated version of the virus’s spike protein that’s stuck in its active conformation, so vaccinated people produce antibodies that block the virus from entering human cells. Scientists needed to understand the variety of shapes that spike proteins can take in order to figure out how to force spike into its active conformation.

    Getting off the computer and into the lab

    Zhong’s interest in computational biology goes back to 2011 when, as a chemical engineering undergrad at the University of Virginia (US), she worked with Professor Michael Shirts to simulate how proteins fold and unfold. After college, Zhong took her skills to a company called D. E. Shaw Research, where, as a scientific programmer, she took a computational approach to studying how proteins interact with small-molecule drugs.

    “The research was very exciting,” Zhong says, “but all based on computer simulations. To really understand biological systems, you need to do experiments.”

    This goal of combining computation with experimentation motivated Zhong to join MIT’s CSB PhD program, where students often work with multiple supervisors to blend computational work with bench work. Zhong “rotated” in both the Davis and Berger labs, then decided to combine the Davis lab’s goal of understanding how protein complexes form with the Berger lab’s expertise in machine learning and algorithms. Davis was interested in building up the computational side of his lab, so he welcomed the opportunity to co-supervise a student with Berger, who has a long history of collaborating with biologists.

    Davis himself holds a dual bachelor’s degree in computer science and biological engineering, so he’s long believed in the power of combining complementary disciplines. “There are a lot of things you can learn about biology by looking in a microscope,” he says. “But as we start to ask more complicated questions about entire systems, we’re going to require computation to manage the high-dimensional data that come back.”


    Reconstructing Molecules in Motion.

    Before rotating in the Davis lab, Zhong had never performed bench work before — or even touched a pipette. She was fascinated to find how streamlined some very powerful molecular biology techniques can be. Still, Zhong realized that physical limitations mean that biology is much slower when it’s done at the bench instead of on a computer. “With computational research, you can automate experiments and run them super quickly, whereas in the wet lab, you only have two hands, so you can only do one experiment at a time,” she says.

    Zhong says that synergizing the two different cultures of the Davis and Berger labs is helping her become a well-rounded, adaptable scientist. Working around experimentalists in the Davis lab has shown her how much labor goes into experimental results, and also helped her to understand the hurdles that scientists face at the bench. In the Berger lab, she enjoys having coworkers who understand the challenges of computer programming.

    “The key challenge in collaborating across disciplines is understanding each other’s ‘languages,’” Berger says. “Students like Ellen are fortunate to be learning both biology and computing dialects simultaneously.”

    Bringing in the community

    Last spring revealed another reason for biologists to learn computational skills: these tools can be used anywhere there’s a computer and an internet connection. When the Covid-19 pandemic hit, Zhong’s colleagues in the Davis lab had to wind down their bench work for a few months, and many of them filled their time at home by using cryo-EM data that’s freely available online to help Zhong test her cryoDRGN software. The difficulty of understanding another discipline’s language quickly became apparent, and Zhong spent a lot of time teaching her colleagues to be programmers. Seeing the problems that nonprogrammers ran into when they used cryoDRGN was very informative, Zhong says, and helped her create a more user-friendly interface.

    Although the paper announcing cryoDRGN was just published in February, the tool created a stir as soon as Zhong posted her code online, many months prior. The cryoDRGN team thinks this is because leveraging knowledge from two disciplines let them visualize the full range of structures that protein complexes can have, and that’s something researchers have wanted to do for a long time. For example, the cryoDRGN team recently collaborated with researchers from Harvard and Washington universities to study locomotion of the single-celled organism Chlamydomonas reinhardtii. The mechanisms they uncovered could shed light on human health conditions, like male infertility, that arise when cells lose the ability to move. The team is also using cryoDRGN to study the structure of the SARS-CoV-2 spike protein, which could help scientists design treatments and vaccines to fight coronaviruses.

    Zhong, Berger, and Davis say they’re excited to continue using neural nets to improve cryo-EM analysis, and to extend their computational work to other aspects of biology. Davis cited mass spectrometry as “a ripe area to apply computation.” This technique can complement cryo-EM by showing researchers the identities of proteins, how many of them are bound together, and how cells have modified them.

    “Collaborations between disciplines are the future,” Berger says. “Researchers focused on a single discipline can take it only so far with existing techniques. Shining a different lens on the problem is how advances can be made.”

    Zhong says it’s not a bad way to spend a PhD, either. Asked what she’d say to incoming graduate students considering interdisciplinary projects, she says: “Definitely do it.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    MIT Seal

    USPS “Forever” postage stamps celebrating Innovation at MIT.

    MIT Campus

    Massachusetts Institute of Technology (US) is a private land-grant research university in Cambridge, Massachusetts. The institute has an urban campus that extends more than a mile (1.6 km) alongside the Charles River. The institute also encompasses a number of major off-campus facilities such as the MIT Lincoln Laboratory, the Bates Center, and the Haystack Observatory, as well as affiliated laboratories such as the Broad and Whitehead Institutes.

    Founded in 1861 in response to the increasing industrialization of the United States, Massachusetts Institute of Technology (US) adopted a European polytechnic university model and stressed laboratory instruction in applied science and engineering. It has since played a key role in the development of many aspects of modern science, engineering, mathematics, and technology, and is widely known for its innovation and academic strength. It is frequently regarded as one of the most prestigious universities in the world.

    As of December 2020, 97 Nobel laureates, 26 Turing Award winners, and 8 Fields Medalists have been affiliated with MIT as alumni, faculty members, or researchers. In addition, 58 National Medal of Science recipients, 29 National Medals of Technology and Innovation recipients, 50 MacArthur Fellows, 80 Marshall Scholars, 3 Mitchell Scholars, 22 Schwarzman Scholars, 41 astronauts, and 16 Chief Scientists of the U.S. Air Force have been affiliated with Massachusetts Institute of Technology (US). The university also has a strong entrepreneurial culture and MIT alumni have founded or co-founded many notable companies. Massachusetts Institute of Technology (US) is a member of the Association of American Universities (AAU).

    Foundation and vision

    In 1859, a proposal was submitted to the Massachusetts General Court to use newly filled lands in Back Bay, Boston for a “Conservatory of Art and Science”, but the proposal failed. A charter for the incorporation of the Massachusetts Institute of Technology, proposed by William Barton Rogers, was signed by John Albion Andrew, the governor of Massachusetts, on April 10, 1861.

    Rogers, a professor from the University of Virginia (US), wanted to establish an institution to address rapid scientific and technological advances. He did not wish to found a professional school, but a combination with elements of both professional and liberal education, proposing that:

    “The true and only practicable object of a polytechnic school is, as I conceive, the teaching, not of the minute details and manipulations of the arts, which can be done only in the workshop, but the inculcation of those scientific principles which form the basis and explanation of them, and along with this, a full and methodical review of all their leading processes and operations in connection with physical laws.”

    The Rogers Plan reflected the German research university model, emphasizing an independent faculty engaged in research, as well as instruction oriented around seminars and laboratories.

    Early developments

    Two days after Massachusetts Institute of Technology (US) was chartered, the first battle of the Civil War broke out. After a long delay through the war years, MIT’s first classes were held in the Mercantile Building in Boston in 1865. The new institute was founded as part of the Morrill Land-Grant Colleges Act to fund institutions “to promote the liberal and practical education of the industrial classes” and was a land-grant school. In 1863 under the same act, the Commonwealth of Massachusetts founded the Massachusetts Agricultural College, which developed as the University of Massachusetts Amherst (US). In 1866, the proceeds from land sales went toward new buildings in the Back Bay.

    Massachusetts Institute of Technology (US) was informally called “Boston Tech”. The institute adopted the European polytechnic university model and emphasized laboratory instruction from an early date. Despite chronic financial problems, the institute saw growth in the last two decades of the 19th century under President Francis Amasa Walker. Programs in electrical, chemical, marine, and sanitary engineering were introduced, new buildings were built, and the size of the student body increased to more than one thousand.

    The curriculum drifted to a vocational emphasis, with less focus on theoretical science. The fledgling school still suffered from chronic financial shortages which diverted the attention of the MIT leadership. During these “Boston Tech” years, Massachusetts Institute of Technology (US) faculty and alumni rebuffed Harvard University (US) president (and former MIT faculty) Charles W. Eliot’s repeated attempts to merge MIT with Harvard College’s Lawrence Scientific School. There would be at least six attempts to absorb MIT into Harvard. In its cramped Back Bay location, MIT could not afford to expand its overcrowded facilities, driving a desperate search for a new campus and funding. Eventually, the MIT Corporation approved a formal agreement to merge with Harvard, over the vehement objections of MIT faculty, students, and alumni. However, a 1917 decision by the Massachusetts Supreme Judicial Court effectively put an end to the merger scheme.

    In 1916, the Massachusetts Institute of Technology (US) administration and the MIT charter crossed the Charles River on the ceremonial barge Bucentaur built for the occasion, to signify MIT’s move to a spacious new campus largely consisting of filled land on a one-mile-long (1.6 km) tract along the Cambridge side of the Charles River. The neoclassical “New Technology” campus was designed by William W. Bosworth and had been funded largely by anonymous donations from a mysterious “Mr. Smith”, starting in 1912. In January 1920, the donor was revealed to be the industrialist George Eastman of Rochester, New York, who had invented methods of film production and processing, and founded Eastman Kodak. Between 1912 and 1920, Eastman donated $20 million ($236.6 million in 2015 dollars) in cash and Kodak stock to MIT.

    Curricular reforms

    In the 1930s, President Karl Taylor Compton and Vice-President (effectively Provost) Vannevar Bush emphasized the importance of pure sciences like physics and chemistry and reduced the vocational practice required in shops and drafting studios. The Compton reforms “renewed confidence in the ability of the Institute to develop leadership in science as well as in engineering”. Unlike Ivy League schools, Massachusetts Institute of Technology (US) catered more to middle-class families, and depended more on tuition than on endowments or grants for its funding. The school was elected to the Association of American Universities (US) in 1934.

    Still, as late as 1949, the Lewis Committee lamented in its report on the state of education at Massachusetts Institute of Technology (US) that “the Institute is widely conceived as basically a vocational school”, a “partly unjustified” perception the committee sought to change. The report comprehensively reviewed the undergraduate curriculum, recommended offering a broader education, and warned against letting engineering and government-sponsored research detract from the sciences and humanities. The School of Humanities, Arts, and Social Sciences and the MIT Sloan School of Management were formed in 1950 to compete with the powerful Schools of Science and Engineering. Previously marginalized faculties in the areas of economics, management, political science, and linguistics emerged into cohesive and assertive departments by attracting respected professors and launching competitive graduate programs. The School of Humanities, Arts, and Social Sciences continued to develop under the successive terms of the more humanistically oriented presidents Howard W. Johnson and Jerome Wiesner between 1966 and 1980.

    Massachusetts Institute of Technology (US)‘s involvement in military science surged during World War II. In 1941, Vannevar Bush was appointed head of the federal Office of Scientific Research and Development and directed funding to only a select group of universities, including MIT. Engineers and scientists from across the country gathered at Massachusetts Institute of Technology (US)’s Radiation Laboratory, established in 1940 to assist the British military in developing microwave radar. The work done there significantly affected both the war and subsequent research in the area. Other defense projects included gyroscope-based and other complex control systems for gunsight, bombsight, and inertial navigation under Charles Stark Draper’s Instrumentation Laboratory; the development of a digital computer for flight simulations under Project Whirlwind; and high-speed and high-altitude photography under Harold Edgerton. By the end of the war, Massachusetts Institute of Technology (US) became the nation’s largest wartime R&D contractor (attracting some criticism of Bush), employing nearly 4000 in the Radiation Laboratory alone and receiving in excess of $100 million ($1.2 billion in 2015 dollars) before 1946. Work on defense projects continued even after then. Post-war government-sponsored research at MIT included SAGE and guidance systems for ballistic missiles and Project Apollo.

    These activities affected Massachusetts Institute of Technology (US) profoundly. A 1949 report noted the lack of “any great slackening in the pace of life at the Institute” to match the return to peacetime, remembering the “academic tranquility of the prewar years”, though acknowledging the significant contributions of military research to the increased emphasis on graduate education and rapid growth of personnel and facilities. The faculty doubled and the graduate student body quintupled during the terms of Karl Taylor Compton, president of Massachusetts Institute of Technology (US) between 1930 and 1948; James Rhyne Killian, president from 1948 to 1957; and Julius Adams Stratton, chancellor from 1952 to 1957, whose institution-building strategies shaped the expanding university. By the 1950s, Massachusetts Institute of Technology (US) no longer simply benefited the industries with which it had worked for three decades, and it had developed closer working relationships with new patrons, philanthropic foundations and the federal government.

    In the late 1960s and early 1970s, student and faculty activists protested against the Vietnam War and Massachusetts Institute of Technology (US)’s defense research. In this period Massachusetts Institute of Technology (US)’s various departments were researching helicopters, smart bombs and counterinsurgency techniques for the war in Vietnam as well as guidance systems for nuclear missiles. The Union of Concerned Scientists was founded on March 4, 1969 during a meeting of faculty members and students seeking to shift the emphasis from military research toward environmental and social problems. Massachusetts Institute of Technology (US) ultimately divested itself from the Instrumentation Laboratory and moved all classified research off-campus to the MIT (US) Lincoln Laboratory facility in 1973 in response to the protests. The student body, faculty, and administration remained comparatively unpolarized during what was a tumultuous time for many other universities. Johnson was seen to be highly successful in leading his institution to “greater strength and unity” after these times of turmoil. However, six Massachusetts Institute of Technology (US) students were sentenced to prison terms at this time and some former student leaders, such as Michael Albert and George Katsiaficas, are still indignant about MIT’s role in military research and its suppression of these protests. (Richard Leacock’s film, November Actions, records some of these tumultuous events.)

    In the 1980s, there was more controversy at Massachusetts Institute of Technology (US) over its involvement in SDI (space weaponry) and CBW (chemical and biological warfare) research. More recently, Massachusetts Institute of Technology (US)’s research for the military has included work on robots, drones and ‘battle suits’.

    Recent history

    Massachusetts Institute of Technology (US) has kept pace with and helped to advance the digital age. In addition to developing the predecessors to modern computing and networking technologies, students, staff, and faculty members at Project MAC, the Artificial Intelligence Laboratory, and the Tech Model Railroad Club wrote some of the earliest interactive computer video games like Spacewar! and created much of modern hacker slang and culture. Several major computer-related organizations have originated at MIT since the 1980s: Richard Stallman’s GNU Project and the subsequent Free Software Foundation were founded in the mid-1980s at the AI Lab; the MIT Media Lab was founded in 1985 by Nicholas Negroponte and Jerome Wiesner to promote research into novel uses of computer technology; the World Wide Web Consortium standards organization was founded at the Laboratory for Computer Science in 1994 by Tim Berners-Lee; the MIT OpenCourseWare project has made course materials for over 2,000 Massachusetts Institute of Technology (US) classes available online free of charge since 2002; and the One Laptop per Child initiative to expand computer education and connectivity to children worldwide was launched in 2005.

    Massachusetts Institute of Technology (US) was named a sea-grant college in 1976 to support its programs in oceanography and marine sciences and was named a space-grant college in 1989 to support its aeronautics and astronautics programs. Despite diminishing government financial support over the past quarter century, MIT launched several successful development campaigns to significantly expand the campus: new dormitories and athletics buildings on west campus; the Tang Center for Management Education; several buildings in the northeast corner of campus supporting research into biology, brain and cognitive sciences, genomics, biotechnology, and cancer research; and a number of new “backlot” buildings on Vassar Street including the Stata Center. Construction on campus in the 2000s included expansions of the Media Lab, the Sloan School’s eastern campus, and graduate residences in the northwest. In 2006, President Hockfield launched the MIT Energy Research Council to investigate the interdisciplinary challenges posed by increasing global energy consumption.

    In 2001, inspired by the open source and open access movements, Massachusetts Institute of Technology (US) launched OpenCourseWare to make the lecture notes, problem sets, syllabi, exams, and lectures from the great majority of its courses available online for no charge, though without any formal accreditation for coursework completed. While the cost of supporting and hosting the project is high, OCW expanded in 2005 to include other universities as a part of the OpenCourseWare Consortium, which currently includes more than 250 academic institutions with content available in at least six languages. In 2011, Massachusetts Institute of Technology (US) announced it would offer formal certification (but not credits or degrees) to online participants completing coursework in its “MITx” program, for a modest fee. The “edX” online platform supporting MITx was initially developed in partnership with Harvard and its analogous “Harvardx” initiative. The courseware platform is open source, and other universities have already joined and added their own course content. In March 2009 the Massachusetts Institute of Technology (US) faculty adopted an open-access policy to make its scholarship publicly accessible online.

    Massachusetts Institute of Technology (US) has its own police force. Three days after the Boston Marathon bombing of April 2013, MIT Police patrol officer Sean Collier was fatally shot by the suspects Dzhokhar and Tamerlan Tsarnaev, setting off a violent manhunt that shut down the campus and much of the Boston metropolitan area for a day. One week later, Collier’s memorial service was attended by more than 10,000 people, in a ceremony hosted by the Massachusetts Institute of Technology (US) community with thousands of police officers from the New England region and Canada. On November 25, 2013, Massachusetts Institute of Technology (US) announced the creation of the Collier Medal, to be awarded annually to “an individual or group that embodies the character and qualities that Officer Collier exhibited as a member of the Massachusetts Institute of Technology (US) community and in all aspects of his life”. The announcement further stated that “Future recipients of the award will include those whose contributions exceed the boundaries of their profession, those who have contributed to building bridges across the community, and those who consistently and selflessly perform acts of kindness”.

    In September 2017, the school announced the creation of an artificial intelligence research lab called the MIT-IBM Watson AI Lab. IBM will spend $240 million over the next decade, and the lab will be staffed by MIT and IBM scientists. In October 2018 MIT announced that it would open a new Schwarzman College of Computing dedicated to the study of artificial intelligence, named after lead donor and The Blackstone Group CEO Stephen Schwarzman. The focus of the new college is to study not just AI, but interdisciplinary AI education, and how AI can be used in fields as diverse as history and biology. The cost of buildings and new faculty for the new college is expected to be $1 billion upon completion.

    The Caltech/MIT Advanced aLIGO (US) was designed and constructed by a team of scientists from California Institute of Technology (US), Massachusetts Institute of Technology (US), and industrial contractors, and funded by the National Science Foundation (US) .


    It was designed to open the field of gravitational-wave astronomy through the detection of gravitational waves predicted by general relativity. Gravitational waves were detected for the first time by the LIGO detector in 2015. For contributions to the LIGO detector and the observation of gravitational waves, two Caltech physicists, Kip Thorne and Barry Barish, and Massachusetts Institute of Technology (US) physicist Rainer Weiss won the Nobel Prize in physics in 2017. Weiss, who is also a Massachusetts Institute of Technology (US) graduate, designed the laser interferometric technique, which served as the essential blueprint for LIGO.

    The mission of Massachusetts Institute of Technology (US) is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the Massachusetts Institute of Technology (US) community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

     
  • richardmitnick 10:21 am on May 28, 2021 Permalink | Reply
    Tags: A new method that improves how artificial intelligence learns to see., Adding noise (pixelation) along multiple layers of a network provides a more robust representation of an image that’s recognized by the AI and creates more robust explanations for AI decisions., AI, It’s about injecting noise into every layer. The network is now forced to learn a more robust representation of the input in all of its internal layers., Through deep learning a computer is trained to perform behaviors such as recognizing speech and identifying images or making predictions., University of Texas-San Antonio (US)

    From University of Texas-San Antonio (US): “UTSA researchers among collaborative improving computer vision for AI” 

    From University of Texas-San Antonio (US)

    MAY 26, 2021

    Researchers from UTSA, the University of Central Florida (US), the Air Force Research Laboratory (US) and SRI International (US) have developed a new method that improves how artificial intelligence learns to see.


    Led by Sumit Jha, professor in the Department of Computer Science at UTSA, the team has changed the conventional approach employed in explaining machine learning decisions that relies on a single injection of noise into the input layer of a neural network.

    The team shows that adding noise—also known as pixelation—along multiple layers of a network provides a more robust representation of an image that’s recognized by the AI and creates more robust explanations for AI decisions. This work aids in the development of what’s been called “explainable AI,” which seeks to enable high-assurance applications of AI such as medical imaging and autonomous driving.

    “It’s about injecting noise into every layer,” Jha said. “The network is now forced to learn a more robust representation of the input in all of its internal layers. If every layer experiences more perturbations in every training, then the image representation will be more robust and you won’t see the AI fail just because you change a few pixels of the input image.”
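
    A minimal way to picture "noise in every layer" is the sketch below (Python/PyTorch): Gaussian perturbations are added after each hidden layer during training, so the network must learn internal representations that survive them. The architecture and noise level are assumptions made for illustration; this is not the team's model.

        # Sketch: inject noise after every hidden layer during training.
        import torch
        import torch.nn as nn

        class NoisyMLP(nn.Module):
            def __init__(self, noise_std=0.1):
                super().__init__()
                self.layers = nn.ModuleList([
                    nn.Linear(784, 256), nn.Linear(256, 256), nn.Linear(256, 10),
                ])
                self.noise_std = noise_std

            def forward(self, x):
                for i, layer in enumerate(self.layers):
                    x = layer(x)
                    if i < len(self.layers) - 1:
                        x = torch.relu(x)
                        if self.training:
                            # Perturb every internal layer, not just the input.
                            x = x + self.noise_std * torch.randn_like(x)
                return x

        model = NoisyMLP()
        logits = model(torch.randn(8, 784))   # e.g. a batch of flattened images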

    Computer vision—the ability to recognize images—has many business applications. Computer vision can better identify areas of concern in the livers and brains of cancer patients. This type of machine learning can also be employed in many other industries. Manufacturers can use it to detect defect rates, drones can use it to help detect pipeline leaks, and agriculturists have begun using it to spot early signs of crop disease to improve their yields.

    Through deep learning a computer is trained to perform behaviors such as recognizing speech and identifying images or making predictions. Instead of organizing data to run through set equations, deep learning works within basic parameters about a data set and trains the computer to learn on its own by recognizing patterns using many layers of processing.

    The team’s work, led by Jha, is a major advance on previous work he’s conducted in this field. In a 2019 paper presented at the AI Safety workshop co-located with that year’s International Joint Conference on Artificial Intelligence (IJCAI), Jha, his students and colleagues from the DOE’s Oak Ridge National Laboratory (US) demonstrated how poor conditions in nature can lead to dangerous neural network performance. A computer vision system was asked to recognize a minivan on a road, and did so correctly. His team then added a small amount of fog and posed the same query again to the network: the AI identified the minivan as a fountain. That paper was a best paper candidate.

    In most models that rely on neural ordinary differential equations (ODEs), a machine is trained with one input through one network, and the signal then spreads through the hidden layers to create one response in the output layer. This team of UTSA, UCF, AFRL and SRI researchers uses a more dynamic approach known as stochastic differential equations (SDEs). Exploiting the connection between dynamical systems and neural networks, they show that neural SDEs lead to less noisy, visually sharper, and quantitatively more robust attributions than those computed using neural ODEs.

    The SDE approach learns not just from one image but from a set of nearby images due to the injection of the noise in multiple layers of the neural network. As more noise is injected, the machine will learn evolving approaches and find better ways to make explanations or attributions simply because the model created at the onset is based on evolving characteristics and/or the conditions of the image. It’s an improvement on several other attribution approaches including saliency maps and integrated gradients.
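
    The paper's neural-SDE attribution method is beyond a short sketch, but the underlying intuition described above (explaining a decision from a set of nearby, noise-perturbed images rather than a single image) can be illustrated with a simpler, SmoothGrad-style average of input gradients. The sketch below is that simplified stand-in, not the authors' algorithm; "model" is assumed to be any trained image classifier.

        # Simplified stand-in: average input gradients over several noise-perturbed
        # copies of an image to obtain a smoother attribution map.
        import torch

        def smoothed_attribution(model, image, target_class, n_samples=20, noise_std=0.1):
            grads = torch.zeros_like(image)
            for _ in range(n_samples):
                noisy = (image + noise_std * torch.randn_like(image)).requires_grad_(True)
                score = model(noisy.unsqueeze(0))[0, target_class]
                score.backward()
                grads += noisy.grad
            return grads / n_samples   # attribution averaged over nearby images

        # Usage (assuming "model" is a trained classifier and "img" a 3x224x224 tensor):
        # attribution = smoothed_attribution(model, img, target_class=0)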

    Jha’s new research is described in the paper “On Smoother Attributions using Neural Stochastic Differential Equations.” Fellow contributors to this novel approach include UCF’s Richard Ewetz, AFRL’s Alvaro Velazquez and SRI’s Susmit Jha. The lab is funded by the Defense Advanced Research Projects Agency (US), the Office of Naval Research (US) and the National Science Foundation (US). Their research will be presented at IJCAI 2021, a conference with an acceptance rate of about 14 percent. Past presenters at this highly selective conference have included Facebook and Google.

    “I am delighted to share the fantastic news that our paper on explainable AI has just been accepted at IJCAI,” Jha added. “This is a big opportunity for UTSA to be part of the global conversation on how a machine sees.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The University of Texas-San Antonio (US) is a public research university in San Antonio, Texas. With over 34,000 students, it is the largest university in San Antonio and the eighth-largest in the state of Texas. It includes four campuses across the San Antonio metropolitan area that span 725 acres of land. University of Texas-San Antonio offers 67 bachelor’s, 69 master’s, and 24 doctoral degree programs. It is classified among “R2: Doctoral Universities – High research activity”.

    Established in 1969, University of Texas-San Antonio has become the third largest institution within the UT System by enrollment. The university has a local economic impact of $1.2 billion and the University of Texas-San Antonio Institute for Economic Development generates $2.9 billion in direct economic impact nationwide. The university’s restricted research expenditures have grown to $64.3 million while total research expenditures grew to $134 million in FY20.

    Student-athletes compete as the University of Texas-San Antonio Roadrunners and are a member of Conference USA. The football team has competed in Conference USA since 2013, previously playing a stint in the WAC and as an FCS independent.

    Research

    The University of Texas-San Antonio is classified among “R2: Doctoral Universities – High research activity”. The university reached a new record of $80.6 million for research expenditures in fiscal year 2019. University of Texas-San Antonio students and faculty conduct advanced research in many cross-disciplinary fields of study. Identified areas of research excellence include Advanced Materials, Cloud Computing, Cyber Security and Data Analytics, Integrative Biomedicine, Social and Educational Transformation, and Sustainable Communities and Critical Infrastructure.

    University of Texas-San Antonio operated the Center for Archaeological Research, which in 1984 did a study of the former Hot Wells hotel, spa and bathhouse on the San Antonio River in the southside of San Antonio. The survey determined all which remained of the resort were remnants of the 1902 hotel building, bathhouse ruins, and stones of a small nearby building. In 2015, work was authorized by the Bexar County Commissioners Court to begin restoring Hot Wells.

    A 2007 study released by Academic Analytics showed University of Texas-San Antonio was ranked fifth among other large research universities in the state of Texas for faculty scholarly productivity. The Office of the Vice President for Research publishes Discovery, an annual magazine dedicated to highlighting the research, academic and creative achievements of the University of Texas-San Antonio community. First printed in 2007, the publication is a member of the University Research Magazine Association, an organization that promotes excellence among the scholarly publications of universities.

    A three-year partnership between University of Texas-San Antonio and Microsoft was announced in April 2014. The purpose of the arrangement is the research and development of sustainable technologies to increase the energy efficiency and economic viability of data centers.

    The University of Texas-San Antonio Center for Advanced Measurements in Extreme Environments (CAMEE) collaborates with the National Aeronautics and Space Administration (US) to push the boundaries of current measurement and modeling technology by conducting research in harsh and extreme environments. CAMEE also studies the challenging conditions produced when traveling at hypersonic speeds.

    The U.S. Department of Energy selected UTSA to lead the Cybersecurity Manufacturing Innovation Institute (CyManII). This federal research institute focuses on achieving energy efficiency, job creation, technical innovation and security of supply chain networks and automation for goods such as electric vehicles, solar panels and wind turbines. The National Security Collaboration Center (NSCC) at UTSA is the home base for CyManII.

     
  • richardmitnick 9:39 pm on April 27, 2021 Permalink | Reply
    Tags: "Plasma acceleration- It’s all in the mix", AI, , , German Electron Synchrotron [Deütsches Elektronen-Synchrotron] DESY (DE), , LUX team is celebrating not just one but two milestones in the development of innovative plasma accelerators., , , Plasma acceleration is an innovative technology that is giving rise to a new generation of particle accelerators which are not only remarkably compact but also extremely versatile., The aim is to make the accelerated electrons available for applications in various different fields of industry; science; and medicine.   

    From German Electron Synchrotron [Deutsches Elektronen-Synchrotron] DESY (DE): “Plasma acceleration: It’s all in the mix”

    From German Electron Synchrotron [Deutsches Elektronen-Synchrotron] DESY (DE)

    2021/04/27

    A pinch of nitrogen and artificial intelligence are moving laser plasma acceleration a big step closer to practical applications

    The LUX team at DESY is celebrating not just one but two milestones in the development of innovative plasma accelerators. The scientists from the University of Hamburg [Universität Hamburg] (DE) and DESY used their accelerator to test a technique that allows the energy distribution of the electron beams produced to be kept particularly narrow. They also used artificial intelligence to allow the accelerator to optimise its own operation. The scientists are reporting their experiments in two papers published shortly after one another in the journal Physical Review Letters.

    Physical Review Letters

    “It’s fantastic to see the speed with which the new technology of plasma acceleration is reaching a level of maturity where it can be used in a wide range of applications,” congratulates Wim Leemans, Director of the Accelerator Division at DESY.

    In laser plasma acceleration, an intense laser pulse (red) in an ionised gas drives a bubble-shaped plasma wave consisting of electrons (white). An electron bunch (centre) riding this wave like a surfer is accelerated to high energies over very short distances. The rendering is based on real simulation data from the LUX experiment (picture: DESY/SciComLab).

    Plasma acceleration is an innovative technology that is giving rise to a new generation of particle accelerators which are not only remarkably compact but also extremely versatile. The aim is to make the accelerated electrons available for applications in various fields of industry, science, and medicine.

    The acceleration takes place in a tiny channel, just a few millimetres long, filled with an ionised gas called a plasma. An intense laser pulse generates a wave within the channel, which can capture and accelerate electrons from the plasma. “Like a surfer, the electrons are carried along by the plasma wave, which accelerates them to high energies,” explains Manuel Kirchen, lead author of one of the papers. “Using this technique, plasma accelerators are able to achieve accelerations that are up to a thousand times higher than those of the most powerful machines in use today,” adds Sören Jalas, author of the second paper.

    However, this compactness is both a curse and a blessing: since the acceleration process is concentrated in a tiny space that is up to 1000 times smaller than conventional, large-scale machines, the acceleration takes place under truly extreme conditions. Therefore, a number of challenges still have to be overcome before the new technology is ready to go into series production.

    The research team led by Andreas Maier, an accelerator physicist at DESY, has now reached two critical milestones at the LUX test facility, jointly operated by DESY and the University of Hamburg: they have found a way of significantly reducing the energy distribution of the accelerated electron bunches, one of the most essential properties for many potential applications, and they have programmed a self-learning autopilot for the accelerator, which automatically optimises LUX for maximum performance.

    The group conducted its experiments using a new type of plasma cell, specially developed for the purpose, whose plasma channel is divided into two regions. The plasma is generated from a mixture of hydrogen and nitrogen in the front part of the cell, which is about 10 millimetres long, while the region behind it is filled with pure hydrogen. As a result, the researchers were able to obtain the electrons for their particle bunch from the front part of the plasma cell, which were then accelerated over the entire rear section of the cell. “Being more tightly bound, the electrons in the nitrogen are released a little later, and that makes them ideal for being accelerated by the plasma wave,” explains Manuel Kirchen. The electron bunch also absorbs energy from the plasma wave, changing the shape of the wave. “We were able to take advantage of this effect and adjust the shape of the wave so that the electrons reach the same energy regardless of their position along the wave,” adds Kirchen.

    Based on this recipe for achieving high electron beam quality, the team then scored a second research success: Sören Jalas and his colleagues were able to use artificial intelligence (AI) to modify an algorithm that controls and optimises the complex system of the plasma accelerator. To do so, the scientists provided the algorithm with a functional model of the plasma accelerator and a set of adjustable parameters, which the algorithm then optimised on its own. Essentially, the system modified five main parameters, including the concentration and density of the gases and the energy and focus of the laser, and used the resulting measurements to search for an operating point at which the electron beam has the optimum quality. “In the course of its balancing act in 5-dimensional space, the algorithm was constantly learning and very quickly refined the model of the accelerator further and further,” says Jalas. “The AI takes about an hour to find a stable optimum operating point for the accelerator; by comparison, we estimate that human beings would need over a week.”
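
    The article does not spell out the algorithm, but the behaviour it describes (proposing settings for five parameters, measuring the resulting beam quality, and steadily refining a model of the accelerator) is characteristic of model-based optimisation such as Bayesian optimisation. The sketch below uses scikit-optimize on a simulated objective with invented parameter names and ranges, purely to illustrate that kind of self-optimising loop; it is not the LUX control code.

        # Illustrative Bayesian optimisation over five accelerator-like parameters.
        # Requires scikit-optimize: pip install scikit-optimize
        import numpy as np
        from skopt import gp_minimize

        def beam_quality_loss(params):
            """Simulated stand-in for a beam-quality measurement (lower is better)."""
            # params: gas concentration, gas density, laser energy, laser focus, focus position
            x = np.array(params)
            optimum = np.array([0.3, 0.5, 0.7, 0.2, 0.6])   # invented "ideal" settings
            return float(np.sum((x - optimum) ** 2) + 0.01 * np.random.randn())

        # Five adjustable parameters, each normalised to [0, 1] (invented ranges).
        space = [(0.0, 1.0)] * 5

        # The optimiser keeps refining an internal surrogate model of the "accelerator"
        # while searching for the operating point with the best simulated beam quality.
        result = gp_minimize(beam_quality_loss, space, n_calls=40, random_state=0)
        print("best simulated settings:", np.round(result.x, 3))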

    A further advantage is that all the parameters and measured variables continue to train the accelerator’s AI model, making the optimisation process faster, more systematic and more targeted. “The latest progress at LUX means we are well on the way to trying out initial applications for test purposes,” explains Andreas Maier, who is in charge of developing lasers for plasma accelerators at DESY. “Ultimately, we also want to use plasma-accelerated electron bunches to operate a free-electron laser.”

    The experiments were conducted by researchers from the CFEL Center for Free-Electron Laser Science (DE), a collaboration between DESY, the University of Hamburg and the Max Planck Society [Max-Planck-Gesellschaft zur Förderung der Wissenschaften e. V.], as well as a colleague from the DOE’s Lawrence Berkeley National Laboratory (US).

    See the full article here.


    five-ways-keep-your-child-safe-school-shootings

    Please help promote STEM in your local schools.

    Stem Education Coalition


    German Electron Synchrotron [Deutsches Elektronen-Synchrotron] DESY (DE) is one of the world’s leading accelerator centres. Researchers use the large-scale facilities at DESY to explore the microcosm in all its variety – from the interactions of tiny elementary particles and the behaviour of new types of nanomaterials to biomolecular processes that are essential to life. The accelerators and detectors that DESY develops and builds are unique research tools. The facilities generate the world’s most intense X-ray light, accelerate particles to record energies and open completely new windows onto the universe.
That makes DESY not only a magnet for more than 3000 guest researchers from over 40 countries every year, but also a coveted partner for national and international cooperations. Committed young researchers find an exciting interdisciplinary setting at DESY. The research centre offers specialized training for a large number of professions. DESY cooperates with industry and business to promote new technologies that will benefit society and encourage innovations. This also benefits the metropolitan regions of the two DESY locations, Hamburg and Zeuthen near Berlin.

     
  • richardmitnick 10:04 pm on February 4, 2021 Permalink | Reply
    Tags: "Machine-learning model helps determine protein structures", AI, , , , , ,   

    From MIT: “Machine-learning model helps determine protein structures” 

    MIT News

    From MIT News

    February 4, 2021
    Anne Trafton

    A cryoDRGN reconstruction of the SARS-CoV-2 spike protein. Credit: Courtesy of the researchers, using cryo-EM images provided by the authors of Walls et al. 2020.


    Reconstructing Molecules in Motion
    Graduate student Ellen Zhong shows how her team combines cryo-electron microscopy and machine learning to visualize molecules in 3D.

    Cryo-electron microscopy (cryo-EM) allows scientists to produce high-resolution, three-dimensional images of tiny molecules such as proteins. This technique works best for imaging proteins that exist in only one conformation, but MIT researchers have now developed a machine-learning algorithm that helps them identify multiple possible structures that a protein can take.

    Unlike AI techniques that aim to predict protein structure from sequence data alone, cryo-EM determines structure experimentally: it produces hundreds of thousands, or even millions, of two-dimensional images of protein samples frozen in a thin layer of ice. Computer algorithms then piece together these images, taken from different angles, into a three-dimensional representation of the protein in a process termed reconstruction.

    In a Nature Methods paper, the MIT researchers report a new AI-based software for reconstructing multiple structures and motions of the imaged protein — a major goal in the protein science community. Instead of using the traditional representation of protein structure as electron-scattering intensities on a 3D lattice, which is impractical for modeling multiple structures, the researchers introduced a new neural network architecture that can efficiently generate the full ensemble of structures in a single model.
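
    cryoDRGN's actual architecture is described in the paper; as a loose illustration of the idea of replacing a fixed 3D density grid with a neural network, the sketch below (Python/PyTorch) defines a small coordinate-based decoder that maps a 3D position plus a latent "conformation" vector to a density value, so that different latent vectors yield different structures from a single model. All names and sizes here are invented for illustration.

        # Loose illustration: a coordinate-based decoder for an ensemble of 3D structures.
        import torch
        import torch.nn as nn

        class ConformationDecoder(nn.Module):
            def __init__(self, latent_dim=8, hidden=128):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
                    nn.Linear(hidden, hidden), nn.ReLU(),
                    nn.Linear(hidden, 1),            # density at the queried point
                )

            def forward(self, xyz, z):
                # xyz: (N, 3) query coordinates; z: (latent_dim,) conformation code
                z_rep = z.expand(xyz.shape[0], -1)
                return self.net(torch.cat([xyz, z_rep], dim=-1))

        decoder = ConformationDecoder()
        grid = torch.rand(1000, 3)                   # sample points in the reconstruction volume
        volume_a = decoder(grid, torch.zeros(8))     # one conformation
        volume_b = decoder(grid, torch.ones(8))      # a different conformation from the same model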

    “With the broad representation power of neural networks, we can extract structural information from noisy images and visualize detailed movements of macromolecular machines,” says Ellen Zhong, an MIT graduate student and the lead author of the paper.

    With their software, they discovered protein motions from imaging datasets where only a single static 3D structure was originally identified. They also visualized large-scale flexible motions of the spliceosome — a protein complex that coordinates the splicing of the protein coding sequences of transcribed RNA.

    “Our idea was to try to use machine-learning techniques to better capture the underlying structural heterogeneity, and to allow us to inspect the variety of structural states that are present in a sample,” says Joseph Davis, the Whitehead Career Development Assistant Professor in MIT’s Department of Biology.

    Davis and Bonnie Berger, the Simons Professor of Mathematics at MIT and head of the Computation and Biology group at the Computer Science and Artificial Intelligence Laboratory, are the senior authors of the study, which appears today in Nature Methods. MIT postdoc Tristan Bepler is also an author of the paper.

    Visualizing a multistep process

    The researchers demonstrated the utility of their new approach by analyzing structures that form during the process of assembling ribosomes — the cell organelles responsible for reading messenger RNA and translating it into proteins. Davis began studying the structure of ribosomes while a postdoc at the Scripps Research Institute. Ribosomes have two major subunits, each of which contains many individual proteins that are assembled in a multistep process.

    To study the steps of ribosome assembly in detail, Davis stalled the process at different points and then took electron microscope images of the resulting structures. At some points, blocking assembly resulted in accumulation of just a single structure, suggesting that there is only one way for that step to occur. However, blocking other points resulted in many different structures, suggesting that the assembly could occur in a variety of ways.

    Because some of these experiments generated so many different protein structures, traditional cryo-EM reconstruction tools did not work well to determine what those structures were.

    “In general, it’s an extremely challenging problem to try to figure out how many states you have when you have a mixture of particles,” Davis says.

    After starting his lab at MIT in 2017, he teamed up with Berger to use machine learning to develop a model that can use the two-dimensional images produced by cryo-EM to generate all of the three-dimensional structures found in the original sample.

    In the new Nature Methods study, the researchers demonstrated the power of the technique by using it to identify a new ribosomal state that hadn’t been seen before. Previous studies had suggested that as a ribosome is assembled, large structural elements, which are akin to the foundation for a building, form first. Only after this foundation is formed are the “active sites” of the ribosome, which read messenger RNA and synthesize proteins, added to the structure.

    In the new study, however, the researchers found that in a very small subset of ribosomes, about 1 percent, a structure that is normally added at the end actually appears before assembly of the foundation. To account for that, Davis hypothesizes that it might be too energetically expensive for cells to ensure that every single ribosome is assembled in the correct order.

    “The cells are likely evolved to find a balance between what they can tolerate, which is maybe a small percentage of these types of potentially deleterious structures, and what it would cost to completely remove them from the assembly pathway,” he says.

    Viral proteins

    The researchers are now using this technique to study the coronavirus spike protein, the viral protein that binds to receptors on human cells and allows the virus to enter them. The spike protein has three subunits, each containing a receptor binding domain (RBD) that can point either up or down.

    “For me, watching the pandemic unfold over the past year has emphasized how important front-line antiviral drugs will be in battling similar viruses, which are likely to emerge in the future. As we start to think about how one might develop small molecule compounds to force all of the RBDs into the ‘down’ state so that they can’t interact with human cells, understanding exactly what the ‘up’ state looks like and how much conformational flexibility there is will be informative for drug design. We hope our new technique can reveal these sorts of structural details,” Davis says.

    The research was funded by the National Science Foundation Graduate Research Fellowship Program, the National Institutes of Health, and the MIT Jameel Clinic for Machine Learning and Health. This work was supported by the MIT Satori computation cluster hosted at the Massachusetts Green High Performance Computing Center (MGHPCC).


    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    USPS “Forever” postage stamps celebrating Innovation at MIT.

    MIT Campus

     
  • richardmitnick 4:42 pm on February 4, 2021 Permalink | Reply
    Tags: "Chemistry and computer science join forces to apply artificial intelligence to chemical reactions", (BO) allows faster and more efficient syntheses of chemicals., AI, , Bayesian Optimization (BO)-a widely used strategy in the sciences., , , , Reaction optimization is ubiquitous in chemical synthesis both in academia and across the chemical industry., The real strength of Bayesian Optimization is that it allows us to model high-dimensional problems and capture trends that we may not see in the data so it can process the data a lot better.   

    From Princeton University: “Chemistry and computer science join forces to apply artificial intelligence to chemical reactions” 

    Princeton University
    From Princeton University

    Feb. 3, 2021
    Wendy Plump, Department of Chemistry

    In the past few years, researchers have turned increasingly to data science techniques to aid problem-solving in organic synthesis.

    Researchers in the lab of Abigail Doyle, Princeton’s A. Barton Hepburn Professor of Chemistry, have developed open-source software that provides them with a state-of-the-art optimization algorithm to use in everyday work, folding what’s been learned in the machine learning field into synthetic chemistry.

    1
    Princeton chemists Benjamin Shields and Abigail Doyle worked with computer scientist Ryan Adams (not pictured) to create machine learning software that can optimize reactions — using artificial intelligence to speed through thousands of reactions that chemists used to have to labor through one by one. Credit: C. Todd Reichart, Department of Chemistry.

    The software adapts key principles of Bayesian Optimization (BO) to allow faster and more efficient syntheses of chemicals.

    Based on Bayes’ theorem, a mathematical formula for determining conditional probability, BO is a widely used strategy in the sciences. Broadly defined, it allows people and computers to use prior knowledge to inform and optimize future decisions.

    The chemists in Doyle’s lab, in collaboration with Ryan Adams, a professor of computer science, and colleagues at Bristol-Myers Squibb, compared human decision-making capabilities with the software package. They found that the optimization tool is both more efficient than the human participants and less biased on a test reaction. Their work appears in the current issue of the journal Nature.

    “Reaction optimization is ubiquitous in chemical synthesis, both in academia and across the chemical industry,” said Doyle. “Since chemical space is so large, it is impossible for chemists to evaluate the entirety of a reaction space experimentally. We wanted to develop and assess BO as a tool for synthetic chemistry given its success for related optimization problems in the sciences.”
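
    To get a feel for the scale involved, here is a tiny back-of-the-envelope calculation (the counts are made up for illustration): even a handful of discrete choices per reaction parameter multiplies into more candidate experiments than any lab could run exhaustively.

```python
# A quick illustration, with made-up counts, of why exhaustive screening is
# impractical: a few modest lists of discrete choices multiply into far more
# reactions than a lab can actually run.
catalysts, ligands, bases, solvents, temperatures, concentrations = 10, 12, 8, 6, 5, 4
total_conditions = catalysts * ligands * bases * solvents * temperatures * concentrations
print(total_conditions)  # 115200 candidate experiments from six small lists
```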

    Benjamin Shields, a former postdoctoral fellow in the Doyle lab and the paper’s lead author, created the Python package.

    “I come from a synthetic chemistry background, so I definitely appreciate that synthetic chemists are pretty good at tackling these problems on their own,” said Shields. “Where I think the real strength of Bayesian Optimization comes in is that it allows us to model these high-dimensional problems and capture trends that we may not see in the data ourselves, so it can process the data a lot better.

    “And two, within a space, it will not be held back by the biases of a human chemist,” he added.

    How it works

    The software started as an out-of-field project to fulfill Shields’ doctoral requirements. Doyle and Shields then formed a team under the Center for Computer Assisted Synthesis (C-CAS), a National Science Foundation initiative launched at five universities to transform how the synthesis of complex organic molecules is planned and executed. Doyle has been a principal investigator with C-CAS since 2019.

    “Reaction optimization can be an expensive and time-consuming process,” said Adams, who is also the director of the Program in Statistics and Machine Learning. “This approach not only accelerates it using state-of-the-art techniques, but also finds better solutions than humans would typically identify. I think this is just the beginning of what’s possible with Bayesian Optimization in this space.”

    Users start by defining a search space — plausible experiments to consider — such as a list of catalysts, reagents, ligands, solvents, temperatures, and concentrations. Once that space is prepared and the user defines how many experiments to run, the software chooses initial experimental conditions to be evaluated. Then it suggests new experiments to run, iterating through a smaller and smaller cast of choices until the reaction is optimized.
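
    As a rough sketch of the loop described above (not the Doyle lab’s actual package), the following snippet optimizes a made-up reaction yield over a small discrete search space using a Gaussian-process surrogate and an upper-confidence-bound rule for choosing the next experiment; the search space, the run_experiment stand-in and all numbers are illustrative assumptions.

```python
# A minimal Bayesian-optimization sketch over a discrete search space, assuming a
# generic "run_experiment" function that returns a measured yield. It illustrates
# the loop such software automates: model the yield with a surrogate, pick the
# most promising untested conditions, measure, and repeat.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

# Hypothetical search space: every combination of temperature and concentration.
temps = np.array([25, 40, 60, 80, 100])
concs = np.array([0.05, 0.1, 0.2, 0.5])
candidates = np.array([[t, c] for t in temps for c in concs])

def run_experiment(x):
    # Placeholder for a real measurement; here a made-up yield surface with noise.
    t, c = x
    return 100 * np.exp(-((t - 60) / 30) ** 2) * np.exp(-((c - 0.2) / 0.2) ** 2) + rng.normal(0, 2)

# Start with a few randomly chosen experiments.
tested = list(rng.choice(len(candidates), size=4, replace=False))
yields = [run_experiment(candidates[i]) for i in tested]

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(10):                           # experimental budget
    gp.fit(candidates[tested], yields)
    mean, std = gp.predict(candidates, return_std=True)
    ucb = mean + 1.96 * std                   # upper-confidence-bound acquisition
    ucb[tested] = -np.inf                     # never repeat an experiment
    nxt = int(np.argmax(ucb))
    tested.append(nxt)
    yields.append(run_experiment(candidates[nxt]))

best = tested[int(np.argmax(yields))]
print("Best conditions found:", candidates[best], "yield ~", max(yields))
```

    In practice the search space would also encode categorical choices such as catalysts and solvents, but the shape of the loop stays the same: fit, suggest, measure, repeat.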

    “In designing the software, I tried to include ways for people to kind of inject what they know about a reaction,” said Shields. “No matter how you use this or machine learning in general, there’s always going to be a case where human expertise is valuable.”

    The software and examples for its use can be accessed at this repository. GitHub links are available for the following: software that represents the chemicals under evaluation in a machine-readable format via density-functional theory; software for reaction optimization; and the game that collects chemists’ decision-making on optimization of the test reaction.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    About Princeton: Overview

    Princeton University is a vibrant community of scholarship and learning that stands in the nation’s service and in the service of all nations. Chartered in 1746, Princeton is the fourth-oldest college in the United States. Princeton is an independent, coeducational, nondenominational institution that provides undergraduate and graduate instruction in the humanities, social sciences, natural sciences and engineering.

    As a world-renowned research university, Princeton seeks to achieve the highest levels of distinction in the discovery and transmission of knowledge and understanding. At the same time, Princeton is distinctive among research universities in its commitment to undergraduate teaching.

    Today, more than 1,100 faculty members instruct approximately 5,200 undergraduate students and 2,600 graduate students. The University’s generous financial aid program ensures that talented students from all economic backgrounds can afford a Princeton education.

    Princeton Shield

     
  • richardmitnick 4:08 pm on January 22, 2021 Permalink | Reply
    Tags: "Raising a global centre for deep learning", AI, Competing with the world through hardware, Deep30, DEEPCORE, DeepX, Japan Deep Learning Association (JDLA), Japan has historically been strong in hardware manufacturing and Japanese corporations hold top international shares for industrial robots., Japan’s aspiring young AI entrepreneurs view Matsuo’s lab as a gateway to success., KERNEL HONGO, Matsuo Lab at The University of Tokyo., , , The new hub dubbed Hongo Valley   

    From Nature Research: “Raising a global centre for deep learning” 

    From Nature Research

    1.21.21

    Hongo, a neighbourhood in the centre of Tokyo and home to the University of Tokyo, is rapidly transforming into a global technology hub with strengths in artificial intelligence (AI) and deep learning. “This is Japan’s answer to Silicon Valley and Shenzhen,” says Yutaka Matsuo, a professor at the School of Engineering at the University of Tokyo, who heads the laboratory spearheading this initiative.

    Japan’s aspiring young AI entrepreneurs view Matsuo’s lab as a gateway to success. This is due in part to its remarkable track record in incubating startups. It has fostered ten successful AI startups, two of which are listed on the Tokyo Stock Exchange; together, these startups have a combined market value exceeding US$2 billion. The lab also advises more than 30 companies.

    1
    Matsuo and his lab members.© Matsuo Lab, The University of Tokyo.

    Competing with the world through hardware

    Matsuo — a leading figure in Japanese AI research — is clear that the new hub, dubbed Hongo Valley, is not aiming to outsmart Silicon Valley tech giants on web-based initiatives. “It’s unrealistic to compete head-to-head with the rule makers of the web on their playing field,” says Matsuo. Instead, he points to collaboration with large manufacturers as the way forward, especially in robotics. “If Japan has any chance of competing, it’s in combining deep learning with the hardware produced by manufacturing giants like Toyota and Panasonic,” explains Matsuo.

    3
    Yutaka Matsuo is excited about combining Japan’s strength in manufacturing with AI. The University of Tokyo. © Matsuo Lab.

    4
    Robotics is an area where AI can play a critical role.© Matsuo Lab, The University of Tokyo.

    Japan has historically been strong in hardware manufacturing, and Japanese corporations hold top international shares for industrial robots. “It will be a game changer if startups in Hongo Valley can provide AI that reimagines the hardware that these manufacturers produce,” he says.

    “Deep learning has great chemistry with hardware,” Matsuo emphasizes. He envisions using AI to automate the craftsmanship that Japanese professionals display in industries like agriculture, medicine and construction. One example is DeepX, a startup that one of Matsuo’s current PhD students founded in 2016. In August this year, the company raised US$15 million to expand its team of engineers. In one project, DeepX is fully automating excavators on construction sites. Controlling the machinery requires extensive experience. DeepX engineers are using images from an operator’s eye view to model the movements that excavators should make under various conditions.

    To foster more such startups, Matsuo supported the launch of KERNEL HONGO, a co-working space for aspiring AI entrepreneurs who pass a strict selection process. KERNEL HONGO is organized by DEEPCORE, a business incubator and venture capital firm for AI and deep learning.

    5
    DeepX is using AI to automate excavators. © DeepX, Inc.

    Giving back to basic research

    While the lab is often credited for nurturing startups, “the lab’s success ultimately stems from our strength in basic research,” explains Matsuo. By raising a venture capital fund, Deep30, with the lab’s alumni, Matsuo created a feedback loop in which part of the investment is returned to the basic research being undertaken in the lab. Historically, the lab has focused on topics in social media and web network analysis (for example, how Twitter users report earthquakes, a study cited more than 4,500 times) — research that has a large social impact. This goal of conducting socially relevant research continues to guide the lab.

    6
    AI is used to control the movement of robots.© Matsuo Lab, The University of Tokyo.

    A key area of focus is world models, an emerging sub-discipline of machine learning. “World models are about predicting events that happen as a result of an action; for instance, foreseeing how water in a cup will behave when the cup is moved in a certain way,” explains Yusuke Iwasawa, the basic-research leader in Matsuo’s lab. “When coupled with robots, world models can make a robot’s movement less awkward — they construct a model of how the world works and act based on it. That allows them to solve tasks that they have never learned to solve before.”

    While robots have become adept at pursuing single tasks under well-defined criteria, such as placing folded laundry in a designated space, they have a hard time performing general commands like “tidy up”. “This is because there are so many factors associated with ‘tidy up’ that robots have to take into account,” he says. “With world models, however, we can teach robots things we consider common sense, for instance, that shelves are for storing things.”
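
    As a minimal sketch, assuming arbitrary state and action sizes rather than anything from the lab’s own systems, the snippet below shows the two ingredients of this approach: a learned dynamics network that predicts the next state from the current state and an action, and a simple planner that imagines rollouts with that network to choose an action that moves toward a goal.

```python
# A hedged "world model" sketch: a network that predicts how the world changes in
# response to an action, plus a planner that searches imagined action sequences
# with that network instead of acting blindly in the real world. All sizes are
# illustrative placeholders.
import torch
import torch.nn as nn

class WorldModel(nn.Module):
    def __init__(self, state_dim=4, action_dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))  # predicted next state

def plan(model, state, goal, horizon=5, n_candidates=256):
    """Pick the first action of the imagined sequence that ends closest to the goal."""
    best_cost, best_action = float("inf"), None
    for _ in range(n_candidates):
        actions = torch.randn(horizon, 2)     # random candidate action sequence
        s = state
        with torch.no_grad():
            for a in actions:
                s = model(s, a)               # roll the sequence forward in imagination
        cost = torch.norm(s - goal).item()
        if cost < best_cost:
            best_cost, best_action = cost, actions[0]
    return best_action

model = WorldModel()
first_action = plan(model, state=torch.zeros(4), goal=torch.ones(4))
```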

    7
    Faculty of Engineering Bldg.2, where Matsuo’s lab is located.© Matsuo Lab, The University of Tokyo.

    Leading the way in education

    Matsuo has a suite of initiatives underway to educate the next generation of academics and businesspeople with foundational skills in AI. In 2015, Matsuo’s lab began offering non-credit courses at the University of Tokyo on consumer marketing, data science and deep learning; more than 5,000 people have taken these courses. Outside the university environment, Matsuo established the Japan Deep Learning Association (JDLA) to advocate and promote the use of deep learning in Japanese society and industry. Under his guidance, the JDLA established a certification for deep learning in 2017 to facilitate structured learning. More than 40,000 people have already taken the exam, including businesspeople, researchers and students, and there are plans to offer an English version of the exam in 2021.

    In 2020, the JDLA launched a business competition called the KOSEN Deep Learning Contest (KOSEN-DCON), in which contestants from KOSEN, an educational institution that trains technicians and engineers, present business plans that integrate deep learning with hardware. “The students are well trained on hardware, so the aim is to give them hands-on experience with deep learning,” says Matsuo. The contestants were evaluated by venture capitalists and investors, who gave monetary valuations of the business plans; the best plan was valued at 500 million yen, and three start-ups have already been formed. “We’ve started small, but an ecosystem will emerge once the groundwork has been laid,” says Matsuo. “Now it’s time for Hongo Valley to grow into a giant hub 100 times its current size.”

    8
    KOSEN-DCON is a contest where teams apply deep learning to hardware.© Japan Deep Learning Association.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 7:31 am on January 22, 2021 Permalink | Reply
    Tags: "Stanford researchers combine processors and memory on multiple hybrid chips to run AI on battery-powered smart devices", AI, , “Memory wall”, , RRAM, , , The "Illusion System"   

    From Stanford University: “Stanford researchers combine processors and memory on multiple hybrid chips to run AI on battery-powered smart devices” 

    Stanford University Name
    From Stanford University

    Stanford University Engineering

    January 11, 2021
    Tom Abate
    Stanford Engineering
    tabate@stanford.edu

    In traditional electronics, separate chips process and store data, wasting energy as they toss data back and forth over what engineers call a “memory wall.” New algorithms combine several energy-efficient hybrid chips to create the illusion of one mega–AI chip.

    1
    Hardware and software innovations give eight chips the illusion that they’re one mega-chip working together to run AI. Credit: Stocksy / Drea Sullivan.

    Smartwatches and other battery-powered electronics would be even smarter if they could run AI algorithms. But efforts to build AI-capable chips for mobile devices have so far hit a wall – the so-called “memory wall” that separates data processing and memory chips that must work together to meet the massive and continually growing computational demands imposed by AI.

    “Transactions between processors and memory can consume 95 percent of the energy needed to do machine learning and AI, and that severely limits battery life,” said computer scientist Subhasish Mitra, senior author of a new study published in Nature Electronics.

    Now, a team that includes Stanford computer scientist Mary Wootters and electrical engineer H.-S. Philip Wong has designed a system that can run AI tasks faster, and with less energy, by harnessing eight hybrid chips, each with its own data processor built right next to its own memory storage.

    This paper builds on the team’s prior development of a new memory technology, called RRAM, that stores data even when power is switched off – like flash memory – only faster and more energy efficiently. Their RRAM advance enabled the Stanford researchers to develop an earlier generation of hybrid chips that worked alone. Their latest design incorporates a critical new element: algorithms that meld the eight separate hybrid chips into one energy-efficient AI-processing engine.

    “If we could have built one massive, conventional chip with all the processing and memory needed, we’d have done so, but the amount of data it takes to solve AI problems makes that a dream,” Mitra said. “Instead, we trick the hybrids into thinking they’re one chip, which is why we call this the Illusion System.”
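
    The figures below are illustrative assumptions, not measurements from the paper, but a toy energy model makes the memory-wall argument concrete: when every weight must be fetched across a chip boundary, data movement rather than arithmetic dominates the energy bill, and keeping each layer’s weights next to its own processor removes most of that cost.

```python
# A back-of-the-envelope sketch (all numbers are illustrative assumptions) of why
# the "memory wall" matters: moving a weight across a chip boundary costs far more
# energy than using it where it is stored.
PJ_OFFCHIP_ACCESS = 100.0   # assumed energy (picojoules) to fetch one weight off-chip
PJ_ONCHIP_ACCESS  = 1.0     # assumed energy to read one weight from local memory
PJ_MAC            = 0.5     # assumed energy for one multiply-accumulate

def inference_energy(layer_weights, weights_local):
    """Rough energy estimate for one forward pass over layers of given weight counts."""
    access = PJ_ONCHIP_ACCESS if weights_local else PJ_OFFCHIP_ACCESS
    return sum(n * (access + PJ_MAC) for n in layer_weights)

layers = [500_000, 2_000_000, 1_000_000, 100_000]   # weights per layer (made up)

conventional = inference_energy(layers, weights_local=False)  # weights behind the memory wall
near_memory = inference_energy(layers, weights_local=True)    # each layer's weights on its own hybrid chip

print(f"conventional: {conventional/1e6:.1f} uJ, compute-near-memory: {near_memory/1e6:.1f} uJ")
print(f"~{conventional/near_memory:.0f}x less energy when weights stay local")
```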

    The researchers developed Illusion as part of the Electronics Resurgence Initiative (ERI), a $1.5 billion program sponsored by the Defense Advanced Research Projects Agency. DARPA, which helped spawn the internet more than 50 years ago, is supporting research investigating workarounds to Moore’s Law, which has driven electronic advances by shrinking transistors. But transistors can’t keep shrinking forever.

    “To surpass the limits of conventional electronics, we’ll need new hardware technologies and new ideas about how to use them,” Wootters said.

    The Stanford-led team built and tested its prototype with help from collaborators at the French research institute CEA-Leti and at Nanyang Technological University in Singapore. The team’s eight-chip system is just the beginning. In simulations, the researchers showed how systems with 64 hybrid chips could run AI applications seven times faster than current processors, using one-seventh as much energy.

    Such capabilities could one day enable Illusion Systems to become the brains of augmented and virtual reality glasses that would use deep neural networks to learn by spotting objects and people in the environment, and provide wearers with contextual information – imagine an AR/VR system to help birdwatchers identify unknown specimens.

    Stanford graduate student Robert Radway, who is first author of the Nature Electronics study, said the team also developed new algorithms to recompile existing AI programs, written for today’s processors, to run on the new multi-chip systems. Collaborators from Facebook helped the team test AI programs that validated their efforts. Next steps include increasing the processing and memory capabilities of individual hybrid chips and demonstrating how to mass produce them cheaply.

    “The fact that our fabricated prototype is working as we expected suggests we’re on the right track,” said Wong, who believes Illusion Systems could be ready for marketability within three to five years.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Stanford University campus. No image credit

    Stanford University

    Leland and Jane Stanford founded the University to “promote the public welfare by exercising an influence on behalf of humanity and civilization.” Stanford opened its doors in 1891, and more than a century later, it remains dedicated to finding solutions to the great challenges of the day and to preparing our students for leadership in today’s complex world. Stanford is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto. Since 1952, more than 54 Stanford faculty, staff, and alumni have won the Nobel Prize, including 19 current faculty members.

    Stanford University Seal

     
  • richardmitnick 11:02 am on December 19, 2020 Permalink | Reply
    Tags: "Machine Learning Boosts the Search for ‘Superhard’ Materials", AI, , ,   

    From University of Houston: “Machine Learning Boosts the Search for ‘Superhard’ Materials” 

    From University of Houston

    December 17, 2020
    Jeannie Kever
    jekever@uh.edu
    713-743-0778

    1
    Researchers have developed a machine learning model that can accurately predict the hardness of new materials, allowing scientists to more readily find compounds suitable for use in a variety of applications.

    Superhard materials are in high demand in industry, from energy production to aerospace, but finding suitable new materials has largely been a matter of trial and error based on classical materials such as diamonds. Until now.

    Researchers from the University of Houston and Manhattan College have reported a machine learning model that can accurately predict the hardness of new materials, allowing scientists to more readily find compounds suitable for use in a variety of applications. The work was reported in Advanced Materials.

    Materials that are superhard – defined as those with a hardness value exceeding 40 gigapascals on the Vickers scale, meaning it would take more than 40 gigapascals of pressure to leave an indentation on the material’s surface – are rare.

    “That makes identifying new materials challenging,” said Jakoah Brgoch, associate professor of chemistry at UH and corresponding author for the paper. “That is why materials like synthetic diamond are still used even though they are challenging and expensive to make.”

    One of the complicating factors is that the hardness of a material may vary depending on the amount of pressure exerted, known as load dependence. That makes testing a material experimentally complex and using computational modeling today almost impossible.

    The model reported by the researchers overcomes that by predicting the load-dependent Vickers hardness based solely on the chemical composition of the material. The researchers report finding more than 10 new and promising stable borocarbide phases; work is now underway to design and produce the materials so they can be tested in the lab.

    Based on the model’s reported accuracy of 97 percent, the odds are good.

    First author Ziyan Zhang, a doctoral student at UH, said the database built to train the algorithm is based on data involving 560 different compounds, each yielding several data points. Compiling it required poring over hundreds of published academic papers to find the data needed to build a representative dataset.
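
    As a hedged sketch of this kind of model (not the authors’ code; the feature names, placeholder data and target below are synthetic stand-ins for the curated dataset), a tree-ensemble regressor can be trained on composition-derived descriptors plus the applied load, so that the load dependence of hardness is learned directly rather than ignored:

```python
# A minimal, illustrative hardness-prediction sketch: regress Vickers hardness on
# composition-derived descriptors plus the applied load. The random data below is
# a placeholder that only exists so the example runs end to end.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Placeholder stand-in for the curated dataset: each row = composition-derived
# descriptors (e.g. averaged elemental properties) plus the test load.
n_samples, n_composition_features = 1200, 20
X_composition = rng.normal(size=(n_samples, n_composition_features))
load = rng.uniform(0.5, 10.0, size=(n_samples, 1))
X = np.hstack([X_composition, load])
# Synthetic target; real labels would be measured Vickers hardness values (GPa).
y = 20 + 5 * X_composition[:, 0] - 3 * np.log(load[:, 0]) + rng.normal(0, 1, n_samples)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, max_depth=3)
model.fit(X_train, y_train)
print("held-out R^2:", round(model.score(X_test, y_test), 3))
```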

    “All good machine learning projects start with a good dataset,” said Brgoch, who is also a principal investigator with the Texas Center for Superconductivity at UH. “The true success is largely the development of this dataset.”

    In addition to Brgoch and Zhang, additional researchers on the project include Aria Mansouri Tehrani and Blake Day, both with UH, and Anton O. Oliynyk from Manhattan College.

    Researchers traditionally have used machine learning to predict a single variable of hardness, Brgoch said, but that doesn’t account for complexities of the property, such as load dependence, which he said still aren’t well understood. That makes machine learning a good tool, despite earlier limitations.

    “A machine learning system doesn’t need to understand the physics,” he said. “It just analyzes the training data and makes new predictions based on statistics.”

    Machine learning does have limitations, though.

    “The idea of using machine learning isn’t to say, ‘Here is the next greatest material,’ but to help guide our experimental search,” Brgoch said. “It tells you where you should look.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The University of Houston (UH) is a public research university in Houston, Texas, and the main institution of the University of Houston System. Founded in 1927, UH is the third-largest university in Texas with over 46,000 students. Its campus spans 667 acres (2.70 km2) in southeast Houston, and was known as University of Houston–University Park from 1983 to 1991. The university is classified among “R1: Doctoral Universities – Very high research activity”.

    The university offers more than 282 degree programs through its 14 academic colleges on campus—including programs leading to professional degrees in architecture, law, optometry, and pharmacy. The institution conducts $150 million annually in research, and operates more than 40 research centers and institutes on campus. Interdisciplinary research includes superconductivity, space commercialization and exploration, biomedical sciences and engineering, energy and natural resources, and artificial intelligence. Awarding more than 9,000 degrees annually, UH’s alumni base exceeds 260,000. The economic impact of the university contributes over $3 billion annually to the Texas economy, while generating about 24,000 jobs.

    The University of Houston hosts a variety of theatrical performances, concerts, lectures, and events. It has more than 400 student organizations and 17 intercollegiate sports teams. Annual UH events and traditions include The Cat’s Back, Homecoming, and Frontier Fiesta. The university’s varsity athletic teams, known as the Houston Cougars, are members of the American Athletic Conference and compete in the NCAA Division I in all sports. The football team regularly makes bowl game appearances, and the men’s basketball team has made 21 appearances in the NCAA Division I Tournament—including five Final Four appearances. The men’s golf team has won 16 national championships—the most in NCAA history.

     
  • richardmitnick 3:45 pm on November 19, 2020 Permalink | Reply
    Tags: "Machine learning yields a breakthrough in the study of stellar nurseries", AI, Artificial intelligence can make it possible to see astrophysical phenomena that were previously beyond reach., , , , , ,   

    From Centre National de la Recherche Scientifique [CNRS ] (FR) via phys.org: “Machine learning yields a breakthrough in the study of stellar nurseries” 

    CNRS bloc

    From Centre National de la Recherche Scientifique [CNRS ](FR)

    1
    Emission of carbon monoxide in the Orion B molecular cloud. Credit: J. Pety/ORION-B Collaboration/IRAM(FR).

    Artificial intelligence can make it possible to see astrophysical phenomena that were previously beyond reach. This has now been demonstrated by scientists from the CNRS, IRAM (FR), Observatoire de Paris-PSL (FR), Ecole Centrale Marseille (FR) and Ecole Centrale Lille (FR), working together in the ORION-B program. In a series of three papers published in Astronomy & Astrophysics on 19 November 2020, they present the most comprehensive observations yet carried out of one of the star-forming regions closest to the Earth.

    Quantitative inference of the H2 column densities from 3mm molecular emission: Case study towards Orion B

    Tracers of the ionization fraction in dense and translucent gas. I. Automated exploitation of massive astrochemical model grids

    C18O, 13CO, and 12CO abundances and excitation temperatures in the Orion B molecular cloud: An analysis of the precision achievable when modeling spectral line within the Local Thermodynamic Equilibrium approximation

    The gas clouds in which stars are born and evolve are vast regions that are extremely rich in matter, and hence in physical processes. All these processes are intertwined at different size and time scales, making it almost impossible to fully understand such stellar nurseries. However, the scientists in the ORION-B program have now shown that statistics and artificial intelligence can help to break down the barriers still standing in the way of astrophysicists.

    With the aim of providing the most detailed analysis yet of the Orion molecular cloud, one of the star-forming regions nearest the Earth, the ORION-B team included in its ranks scientists specializing in massive data processing. This enabled them to develop novel methods based on statistical learning and machine learning to study observations of the cloud made at 240 000 frequencies of light.

    Based on artificial intelligence algorithms, these tools make it possible to retrieve new information from a large mass of data such as that used in the ORION-B project. This enabled the scientists to uncover a certain number of characteristics governing the Orion molecular cloud.

    For instance, they were able to discover the relationships between the light emitted by certain molecules and information that was previously inaccessible, namely, the quantity of hydrogen and of free electrons in the cloud, which they were able to estimate from their calculations without observing them directly. By analyzing all the data available to them, the research team was also able to determine ways of further improving their observations by eliminating a certain amount of unwanted information.
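
    A minimal sketch of this kind of inference, assuming synthetic stand-in data rather than the ORION-B observations themselves, is to train a supervised regressor that maps the intensities of a few tracer lines to the hidden quantity of interest, here a (log) H2 column density, and then apply it across the mapped region:

```python
# A hedged sketch of the general idea (not the ORION-B pipeline): learn a mapping
# from molecular line intensities to a quantity that is hard to observe directly,
# using a regressor trained on reference values (for example from model grids).
# All variable names and the synthetic data below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Pretend training set: intensities of three tracer lines per sky position,
# paired with a reference H2 column density for those positions.
n_positions = 5000
line_intensities = rng.lognormal(mean=0.0, sigma=1.0, size=(n_positions, 3))  # e.g. 12CO, 13CO, C18O
log_column_density = 21 + 0.5 * np.log10(line_intensities[:, 1] + 1e-3) + rng.normal(0, 0.1, n_positions)

model = RandomForestRegressor(n_estimators=200)
model.fit(line_intensities, log_column_density)

# Once trained, the regressor estimates the hidden quantity wherever the lines
# were mapped, turning a cube of emission spectra into a map of column density.
new_pixels = rng.lognormal(size=(10, 3))
print(model.predict(new_pixels))
```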

    The ORION-B teams now wish to put this theoretical work to the test, by applying the estimates and recommendations obtained and verifying them under real conditions. Another major theoretical challenge will be to extract information about the speed of molecules, and hence visualize the motion of matter in order to see how it moves within the cloud.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    CNRS (FR) campus via Glassdoor

    CNRS (FR) encourages collaboration between specialists from different disciplines in particular with the university thus opening up new fields of enquiry to meet social and economic needs. CNRS has developed interdisciplinary programs which bring together various CNRS departments as well as other research institutions and industry.

    Interdisciplinary research is undertaken in the following domains:

    Life and its social implications
    Information, communication and knowledge
    Environment, energy and sustainable development
    Nanosciences, nanotechnologies, materials
    Astroparticles: from particles to the Universe

     