Tagged: Brain Studies

  • richardmitnick 8:52 am on December 1, 2022
    Tags: "eMAP": epitope-preserving Magnified Analysis of the Proteome, "Silent synapses are abundant in the adult brain", , “Silent synapses”: immature connections between neurons that remain inactive until they’re recruited to help form new memories., , Brain Studies, It was believed that silent synapses were present only during early development., NMDA receptors normally require cooperation with AMPA receptors to pass signals because NMDA receptors are blocked by magnesium ions at the normal resting potential of neurons., Scientists found that filopodia had neurotransmitter receptors called NMDA receptors but no AMPA receptors., Some neuroscientists have proposed that silent synapses may persist into adulthood and help with the formation of new memories., Synapses that have only NMDA receptors cannot pass along an electric current and are referred to as “silent.”, The brain creates new memories without overwriting the important memories stored in mature synapses., , The researchers also showed that they could “unsilence” these synapses by combining glutamate release with an electrical current coming from the body of the neuron., Tiny structures called "filopodia", When important new information is presented connections between the relevant neurons are strengthened.   

    From The Massachusetts Institute of Technology: “Silent synapses are abundant in the adult brain” 

    From The Massachusetts Institute of Technology

    11.30.22
    Anne Trafton

    MIT researchers have discovered that the adult mouse brain contains millions of silent synapses, located on tiny structures called “filopodia”. Image: Dimitra Vardalaki and Mark Harnett.

    MIT neuroscientists have discovered that the adult brain contains millions of “silent synapses”: immature connections between neurons that remain inactive until they’re recruited to help form new memories.

    Until now, it was believed that silent synapses were present only during early development, when they help the brain learn the new information that it’s exposed to early in life. However, the new MIT study [Nature (below)] revealed that in adult mice, about 30 percent of all synapses in the brain’s cortex are silent.

    The existence of these silent synapses may help to explain how the adult brain is able to continually form new memories and learn new things without having to modify existing conventional synapses, the researchers say.

    “These silent synapses are looking for new connections, and when important new information is presented, connections between the relevant neurons are strengthened. This lets the brain create new memories without overwriting the important memories stored in mature synapses, which are harder to change,” says Dimitra Vardalaki, an MIT graduate student and the lead author of the new study.

    Mark Harnett, an associate professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research, is the senior author of the paper, which appears today in Nature [below].

    A surprising discovery

    When scientists first discovered silent synapses decades ago, they were seen primarily in the brains of young mice and other animals. During early development, these synapses are believed to help the brain acquire the massive amounts of information that babies need to learn about their environment and how to interact with it. In mice, these synapses were believed to disappear by about 12 days of age (equivalent to the first months of human life).

    However, some neuroscientists have proposed that silent synapses may persist into adulthood and help with the formation of new memories. Evidence for this has been seen in animal models of addiction, which is thought to be largely a disorder of aberrant learning.

    Theoretical work in the field from Stefano Fusi and Larry Abbott of Columbia University has also proposed that neurons must display a wide range of different plasticity mechanisms to explain how brains can both efficiently learn new things and retain them in long-term memory. In this scenario, some synapses must be established or modified easily, to form the new memories, while others must remain much more stable, to preserve long-term memories.

    In the new study, the MIT team did not set out specifically to look for silent synapses. Instead, they were following up on an intriguing finding from a previous study in Harnett’s lab. In that paper [Neuron (below)], the researchers showed that within a single neuron, dendrites — antenna-like extensions that protrude from neurons — can process synaptic input in different ways, depending on their location.

    As part of that study, the researchers tried to measure neurotransmitter receptors in different dendritic branches, to see if that would help to account for the differences in their behavior. To do that, they used a technique called eMAP (epitope-preserving Magnified Analysis of the Proteome), developed in the lab of MIT professor Kwanghun Chung. Using this technique, researchers can physically expand a tissue sample and then label specific proteins in the sample, making it possible to obtain super-high-resolution images.

    While they were doing that imaging, they made a surprising discovery. “The first thing we saw, which was super bizarre and we didn’t expect, was that there were filopodia everywhere,” Harnett says.

    Filopodia, thin membrane protrusions that extend from dendrites, have been seen before, but neuroscientists didn’t know exactly what they do. That’s partly because filopodia are so tiny that they are difficult to see using traditional imaging techniques. 

    After making this observation, the MIT team set out to try to find filopodia in other parts of the adult brain, using the eMAP technique. To their surprise, they found filopodia in the mouse visual cortex and other parts of the brain, at a level 10 times higher than previously seen. They also found that filopodia had neurotransmitter receptors called NMDA receptors, but no AMPA receptors.

    A typical active synapse has both of these types of receptors, which bind the neurotransmitter glutamate. NMDA receptors normally require cooperation with AMPA receptors to pass signals because NMDA receptors are blocked by magnesium ions at the normal resting potential of neurons. Thus, when AMPA receptors are not present, synapses that have only NMDA receptors cannot pass along an electric current and are referred to as “silent.”
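
    To make the voltage dependence concrete, here is a minimal Python sketch of the magnesium block described above, using the widely cited Jahr–Stevens (1990) fit for NMDA conductance; the parameter values are textbook illustrations, not measurements from this study.

    ```python
    import numpy as np

    def nmda_unblocked_fraction(v_mV, mg_mM=1.0):
        """Fraction of NMDA-receptor conductance free of Mg2+ block at a
        given membrane potential (Jahr & Stevens 1990 fit; illustrative)."""
        return 1.0 / (1.0 + (mg_mM / 3.57) * np.exp(-0.062 * v_mV))

    # Near the resting potential (~ -70 mV) the block is almost complete, so a
    # synapse with NMDA receptors but no AMPA receptors passes essentially no
    # current -- it is "silent":
    print(nmda_unblocked_fraction(-70.0))  # ~0.04
    # Depolarization (e.g., current arriving from the cell body) relieves the
    # block and lets NMDA receptors conduct:
    print(nmda_unblocked_fraction(0.0))    # ~0.78
    ```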

    Unsilencing synapses

    To investigate whether these filopodia might be silent synapses, the researchers used a modified version of an experimental technique known as patch clamping. This allowed them to monitor the electrical activity generated at individual filopodia as they tried to stimulate them by mimicking the release of the neurotransmitter glutamate from a neighboring neuron.

    Using this technique, the researchers found that glutamate would not generate any electrical signal in the filopodium receiving the input, unless the NMDA receptors were experimentally unblocked. This offers strong support for the theory that filopodia represent silent synapses within the brain, the researchers say.

    The researchers also showed that they could “unsilence” these synapses by combining glutamate release with an electrical current coming from the body of the neuron. This combined stimulation leads to accumulation of AMPA receptors in the silent synapse, allowing it to form a strong connection with the nearby axon that is releasing glutamate.
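
    As a conceptual illustration of that pairing protocol, the toy rule below adds AMPA-receptor weight only when presynaptic glutamate coincides with postsynaptic depolarization; the increment and cap values are arbitrary assumptions, not the lab's actual induction parameters.

    ```python
    def pair_and_unsilence(ampa_weight, glutamate, depolarized,
                           increment=0.2, max_weight=1.0):
        """Toy Hebbian-style rule: AMPA receptors accumulate at a synapse only
        when glutamate release coincides with postsynaptic depolarization."""
        if glutamate and depolarized:
            ampa_weight = min(ampa_weight + increment, max_weight)
        return ampa_weight

    w = 0.0                           # a silent synapse: no AMPA receptors yet
    for _ in range(5):                # five glutamate + somatic-current pairings
        w = pair_and_unsilence(w, glutamate=True, depolarized=True)
    print(w)                          # 1.0 -> the synapse now carries current
    ```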

    The researchers found that converting silent synapses into active synapses was much easier than altering mature synapses.

    “If you start with an already functional synapse, that plasticity protocol doesn’t work,” Harnett says. “The synapses in the adult brain have a much higher threshold, presumably because you want those memories to be pretty resilient. You don’t want them constantly being overwritten. Filopodia, on the other hand, can be captured to form new memories.”

    “Flexible and robust”

    The findings offer support for the theory proposed by Abbott and Fusi that the adult brain includes highly plastic synapses that can be recruited to form new memories, the researchers say.

    “This paper is, as far as I know, the first real evidence that this is how it actually works in a mammalian brain,” Harnett says. “Filopodia allow a memory system to be both flexible and robust. You need flexibility to acquire new information, but you also need stability to retain the important information.”

    The researchers are now looking for evidence of these silent synapses in human brain tissue. They also hope to study whether the number or function of these synapses is affected by factors such as aging or neurodegenerative disease.

    “It’s entirely possible that by changing the amount of flexibility you’ve got in a memory system, it could become much harder to change your behaviors and habits or incorporate new information,” Harnett says. “You could also imagine finding some of the molecular players that are involved in filopodia and trying to manipulate some of those things to try to restore flexible memory as we age.”

    The research was funded by the Boehringer Ingelheim Fonds, the National Institutes of Health, the James W. and Patricia T. Poitras Fund at MIT, a Klingenstein-Simons Fellowship, a Vallee Foundation Scholarship, and a McKnight Scholarship.

    Science papers:
    Neuron
    Nature

    See the full article here.

    Comments are invited and will be appreciated, especially if the reader finds any errors which I can correct. Use “Reply”.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    MIT Seal

    USPS “Forever” postage stamps celebrating Innovation at MIT.

    MIT Campus

    The Massachusetts Institute of Technology is a private land-grant research university in Cambridge, Massachusetts. The institute has an urban campus that extends more than a mile (1.6 km) alongside the Charles River. The institute also encompasses a number of major off-campus facilities such as the MIT Lincoln Laboratory, the MIT Bates Research and Engineering Center, and the Haystack Observatory, as well as affiliated laboratories such as the Broad Institute of MIT and Harvard and the Whitehead Institute.

    Massachusetts Institute of Technology-Haystack Observatory Westford, Massachusetts, USA, Altitude 131 m (430 ft).

    Founded in 1861 in response to the increasing industrialization of the United States, Massachusetts Institute of Technology adopted a European polytechnic university model and stressed laboratory instruction in applied science and engineering. It has since played a key role in the development of many aspects of modern science, engineering, mathematics, and technology, and is widely known for its innovation and academic strength. It is frequently regarded as one of the most prestigious universities in the world.

    As of December 2020, 97 Nobel laureates, 26 Turing Award winners, and 8 Fields Medalists have been affiliated with MIT as alumni, faculty members, or researchers. In addition, 58 National Medal of Science recipients, 29 National Medals of Technology and Innovation recipients, 50 MacArthur Fellows, 80 Marshall Scholars, 3 Mitchell Scholars, 22 Schwarzman Scholars, 41 astronauts, and 16 Chief Scientists of the U.S. Air Force have been affiliated with The Massachusetts Institute of Technology. The university also has a strong entrepreneurial culture and MIT alumni have founded or co-founded many notable companies. Massachusetts Institute of Technology is a member of the Association of American Universities.

    Foundation and vision

    In 1859, a proposal was submitted to the Massachusetts General Court to use newly filled lands in Back Bay, Boston for a “Conservatory of Art and Science”, but the proposal failed. A charter for the incorporation of the Massachusetts Institute of Technology, proposed by William Barton Rogers, was signed by John Albion Andrew, the governor of Massachusetts, on April 10, 1861.

    Rogers, a professor from the University of Virginia, wanted to establish an institution to address rapid scientific and technological advances. He did not wish to found a professional school, but an institution combining elements of both professional and liberal education, proposing that:

    “The true and only practicable object of a polytechnic school is, as I conceive, the teaching, not of the minute details and manipulations of the arts, which can be done only in the workshop, but the inculcation of those scientific principles which form the basis and explanation of them, and along with this, a full and methodical review of all their leading processes and operations in connection with physical laws.”

    The Rogers Plan reflected the German research university model, emphasizing an independent faculty engaged in research, as well as instruction oriented around seminars and laboratories.

    Early developments

    Two days after The Massachusetts Institute of Technology was chartered, the first battle of the Civil War broke out. After a long delay through the war years, MIT’s first classes were held in the Mercantile Building in Boston in 1865. The new institute was founded as part of the Morrill Land-Grant Colleges Act to fund institutions “to promote the liberal and practical education of the industrial classes” and was a land-grant school. In 1863 under the same act, the Commonwealth of Massachusetts founded the Massachusetts Agricultural College, which developed as the University of Massachusetts Amherst. In 1866, the proceeds from land sales went toward new buildings in the Back Bay.

    The Massachusetts Institute of Technology was informally called “Boston Tech”. The institute adopted the European polytechnic university model and emphasized laboratory instruction from an early date. Despite chronic financial problems, the institute saw growth in the last two decades of the 19th century under President Francis Amasa Walker. Programs in electrical, chemical, marine, and sanitary engineering were introduced, new buildings were built, and the size of the student body increased to more than one thousand.

    The curriculum drifted to a vocational emphasis, with less focus on theoretical science. The fledgling school still suffered from chronic financial shortages which diverted the attention of the MIT leadership. During these “Boston Tech” years, Massachusetts Institute of Technology faculty and alumni rebuffed Harvard University president (and former MIT faculty) Charles W. Eliot’s repeated attempts to merge MIT with Harvard College’s Lawrence Scientific School. There would be at least six attempts to absorb MIT into Harvard. In its cramped Back Bay location, MIT could not afford to expand its overcrowded facilities, driving a desperate search for a new campus and funding. Eventually, the MIT Corporation approved a formal agreement to merge with Harvard, over the vehement objections of MIT faculty, students, and alumni. However, a 1917 decision by the Massachusetts Supreme Judicial Court effectively put an end to the merger scheme.

    In 1916, The Massachusetts Institute of Technology administration and the MIT charter crossed the Charles River on the ceremonial barge Bucentaur built for the occasion, to signify MIT’s move to a spacious new campus largely consisting of filled land on a one-mile-long (1.6 km) tract along the Cambridge side of the Charles River. The neoclassical “New Technology” campus was designed by William W. Bosworth and had been funded largely by anonymous donations from a mysterious “Mr. Smith”, starting in 1912. In January 1920, the donor was revealed to be the industrialist George Eastman of Rochester, New York, who had invented methods of film production and processing, and founded Eastman Kodak. Between 1912 and 1920, Eastman donated $20 million ($236.6 million in 2015 dollars) in cash and Kodak stock to MIT.

    Curricular reforms

    In the 1930s, President Karl Taylor Compton and Vice-President (effectively Provost) Vannevar Bush emphasized the importance of pure sciences like physics and chemistry and reduced the vocational practice required in shops and drafting studios. The Compton reforms “renewed confidence in the ability of the Institute to develop leadership in science as well as in engineering”. Unlike Ivy League schools, Massachusetts Institute of Technology catered more to middle-class families, and depended more on tuition than on endowments or grants for its funding. The school was elected to the Association of American Universities in 1934.

    Still, as late as 1949, the Lewis Committee lamented in its report on the state of education at The Massachusetts Institute of Technology that “the Institute is widely conceived as basically a vocational school”, a “partly unjustified” perception the committee sought to change. The report comprehensively reviewed the undergraduate curriculum, recommended offering a broader education, and warned against letting engineering and government-sponsored research detract from the sciences and humanities. The School of Humanities, Arts, and Social Sciences and the MIT Sloan School of Management were formed in 1950 to compete with the powerful Schools of Science and Engineering. Previously marginalized faculties in the areas of economics, management, political science, and linguistics emerged into cohesive and assertive departments by attracting respected professors and launching competitive graduate programs. The School of Humanities, Arts, and Social Sciences continued to develop under the successive terms of the more humanistically oriented presidents Howard W. Johnson and Jerome Wiesner between 1966 and 1980.

    The Massachusetts Institute of Technology’s involvement in military science surged during World War II. In 1941, Vannevar Bush was appointed head of the federal Office of Scientific Research and Development and directed funding to only a select group of universities, including MIT. Engineers and scientists from across the country gathered at the Massachusetts Institute of Technology’s Radiation Laboratory, established in 1940 to assist the British military in developing microwave radar. The work done there significantly affected both the war and subsequent research in the area. Other defense projects included gyroscope-based and other complex control systems for gunsight, bombsight, and inertial navigation under Charles Stark Draper’s Instrumentation Laboratory; the development of a digital computer for flight simulations under Project Whirlwind; and high-speed and high-altitude photography under Harold Edgerton. By the end of the war, The Massachusetts Institute of Technology became the nation’s largest wartime R&D contractor (attracting some criticism of Bush), employing nearly 4000 in the Radiation Laboratory alone and receiving in excess of $100 million ($1.2 billion in 2015 dollars) before 1946. Work on defense projects continued even after then. Post-war government-sponsored research at MIT included SAGE and guidance systems for ballistic missiles and Project Apollo.

    These activities affected The Massachusetts Institute of Technology profoundly. A 1949 report noted the lack of “any great slackening in the pace of life at the Institute” to match the return to peacetime, remembering the “academic tranquility of the prewar years”, though acknowledging the significant contributions of military research to the increased emphasis on graduate education and rapid growth of personnel and facilities. The faculty doubled and the graduate student body quintupled during the terms of Karl Taylor Compton, president of The Massachusetts Institute of Technology between 1930 and 1948; James Rhyne Killian, president from 1948 to 1957; and Julius Adams Stratton, chancellor from 1952 to 1957, whose institution-building strategies shaped the expanding university. By the 1950s, The Massachusetts Institute of Technology no longer simply benefited the industries with which it had worked for three decades, and it had developed closer working relationships with new patrons, philanthropic foundations and the federal government.

    In the late 1960s and early 1970s, student and faculty activists protested against the Vietnam War and the Massachusetts Institute of Technology’s defense research. In this period MIT’s various departments were researching helicopters, smart bombs, and counterinsurgency techniques for the war in Vietnam, as well as guidance systems for nuclear missiles. The Union of Concerned Scientists was founded on March 4, 1969 during a meeting of faculty members and students seeking to shift the emphasis from military research toward environmental and social problems. The Massachusetts Institute of Technology ultimately divested itself from the Instrumentation Laboratory and moved all classified research off-campus to the MIT Lincoln Laboratory facility in 1973 in response to the protests. The student body, faculty, and administration remained comparatively unpolarized during what was a tumultuous time for many other universities. Johnson was seen to be highly successful in leading his institution to “greater strength and unity” after these times of turmoil. However, six Massachusetts Institute of Technology students were sentenced to prison terms at this time and some former student leaders, such as Michael Albert and George Katsiaficas, are still indignant about MIT’s role in military research and its suppression of these protests. (Richard Leacock’s film, November Actions, records some of these tumultuous events.)

    In the 1980s, there was more controversy at The Massachusetts Institute of Technology over its involvement in SDI (space weaponry) and CBW (chemical and biological warfare) research. More recently, The Massachusetts Institute of Technology’s research for the military has included work on robots, drones and ‘battle suits’.

    Recent history

    The Massachusetts Institute of Technology has kept pace with and helped to advance the digital age. In addition to developing the predecessors to modern computing and networking technologies, students, staff, and faculty members at Project MAC, the Artificial Intelligence Laboratory, and the Tech Model Railroad Club wrote some of the earliest interactive computer video games like Spacewar! and created much of modern hacker slang and culture. Several major computer-related organizations have originated at MIT since the 1980s: Richard Stallman’s GNU Project and the subsequent Free Software Foundation were founded in the mid-1980s at the AI Lab; the MIT Media Lab was founded in 1985 by Nicholas Negroponte and Jerome Wiesner to promote research into novel uses of computer technology; the World Wide Web Consortium standards organization was founded at the Laboratory for Computer Science in 1994 by Tim Berners-Lee; the MIT OpenCourseWare project has made course materials for over 2,000 Massachusetts Institute of Technology classes available online free of charge since 2002; and the One Laptop per Child initiative to expand computer education and connectivity to children worldwide was launched in 2005.

    The Massachusetts Institute of Technology was named a sea-grant college in 1976 to support its programs in oceanography and marine sciences and was named a space-grant college in 1989 to support its aeronautics and astronautics programs. Despite diminishing government financial support over the past quarter century, MIT launched several successful development campaigns to significantly expand the campus: new dormitories and athletics buildings on west campus; the Tang Center for Management Education; several buildings in the northeast corner of campus supporting research into biology, brain and cognitive sciences, genomics, biotechnology, and cancer research; and a number of new “backlot” buildings on Vassar Street including the Stata Center. Construction on campus in the 2000s included expansions of the Media Lab, the Sloan School’s eastern campus, and graduate residences in the northwest. In 2006, President Hockfield launched the MIT Energy Research Council to investigate the interdisciplinary challenges posed by increasing global energy consumption.

    In 2001, inspired by the open source and open access movements, The Massachusetts Institute of Technology launched “OpenCourseWare” to make the lecture notes, problem sets, syllabi, exams, and lectures from the great majority of its courses available online for no charge, though without any formal accreditation for coursework completed. While the cost of supporting and hosting the project is high, OCW expanded in 2005 to include other universities as a part of the OpenCourseWare Consortium, which currently includes more than 250 academic institutions with content available in at least six languages. In 2011, The Massachusetts Institute of Technology announced it would offer formal certification (but not credits or degrees) to online participants completing coursework in its “MITx” program, for a modest fee. The “edX” online platform supporting MITx was initially developed in partnership with Harvard and its analogous “Harvardx” initiative. The courseware platform is open source, and other universities have already joined and added their own course content. In March 2009 the Massachusetts Institute of Technology faculty adopted an open-access policy to make its scholarship publicly accessible online.

    The Massachusetts Institute of Technology has its own police force. Three days after the Boston Marathon bombing of April 2013, MIT Police patrol officer Sean Collier was fatally shot by the suspects Dzhokhar and Tamerlan Tsarnaev, setting off a violent manhunt that shut down the campus and much of the Boston metropolitan area for a day. One week later, Collier’s memorial service was attended by more than 10,000 people, in a ceremony hosted by the Massachusetts Institute of Technology community with thousands of police officers from the New England region and Canada. On November 25, 2013, The Massachusetts Institute of Technology announced the creation of the Collier Medal, to be awarded annually to “an individual or group that embodies the character and qualities that Officer Collier exhibited as a member of The Massachusetts Institute of Technology community and in all aspects of his life”. The announcement further stated that “Future recipients of the award will include those whose contributions exceed the boundaries of their profession, those who have contributed to building bridges across the community, and those who consistently and selflessly perform acts of kindness”.

    In September 2017, the school announced the creation of an artificial intelligence research lab called the MIT-IBM Watson AI Lab. IBM will spend $240 million over the next decade, and the lab will be staffed by MIT and IBM scientists. In October 2018 MIT announced that it would open a new Schwarzman College of Computing dedicated to the study of artificial intelligence, named after lead donor and The Blackstone Group CEO Stephen Schwarzman. The focus of the new college is to study not just AI, but interdisciplinary AI education, and how AI can be used in fields as diverse as history and biology. The cost of buildings and new faculty for the new college is expected to be $1 billion upon completion.

    The Caltech/MIT Advanced LIGO (aLIGO) was designed and constructed by a team of scientists from the California Institute of Technology, the Massachusetts Institute of Technology, and industrial contractors, and funded by the National Science Foundation.

    Caltech/MIT Advanced LIGO

    It was designed to open the field of gravitational-wave astronomy through the detection of gravitational waves predicted by general relativity. Gravitational waves were detected for the first time by the LIGO detector in 2015. For contributions to the LIGO detector and the observation of gravitational waves, two Caltech physicists, Kip Thorne and Barry Barish, and Massachusetts Institute of Technology physicist Rainer Weiss won the Nobel Prize in Physics in 2017. Weiss, who is also a Massachusetts Institute of Technology graduate, designed the laser interferometric technique, which served as the essential blueprint for LIGO.

    The mission of The Massachusetts Institute of Technology is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of The Massachusetts Institute of Technology community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

     
  • richardmitnick 9:32 am on November 23, 2022
    Tags: "Neuromarker for ADHD could improve diagnosis of the disorder", , Brain Studies, , For children with attention-deficit/hyperactivity disorder (ADHD) timely intervention is key., , Yale researchers identified differences in brain structure and activity in children with ADHD that could serve as a more objective diagnostic tool in the future.,   

    From Yale University: “Neuromarker for ADHD could improve diagnosis of the disorder” 

    From Yale University

    11.23.22
    Mallory Locklear

    Media Contact
    Fred Mamoun
    fred.mamoun@yale.edu
    203-436-2643

    Yale researchers identified differences in brain structure and activity in children with ADHD that could serve as a more objective diagnostic tool in the future.

    Illustration by Michael S. Helfenbein.

    For children with attention-deficit/hyperactivity disorder (ADHD), timely intervention is key. But diagnoses typically rely on questionnaires and observations of a child’s behavior, which are subjective and can lead to delays in treatment.

    Yale researchers aim to establish a more objective measure of ADHD, and in a new study, they report an important step in that direction. Using brain imaging data from children with and without ADHD, they identified differences in brain structure and activity in children with ADHD that could serve as a neuromarker for the disorder.

    They will present their findings Nov. 27 at the Radiological Society of North America annual meeting.

    The subjectivity of ADHD assessments can cause children to be misdiagnosed or remain undiagnosed, explained Huang Lin, a research fellow at Yale School of Medicine and lead author of the study. Questionnaires given to a child’s parent or caregiver can be influenced by life events or stress, for example. The questionnaires also require caregivers to have spent a sufficient amount of time with the child, meaning children with less stable care may go undiagnosed. And as people age, they tend to show different symptoms, making diagnosis more difficult in older individuals.

    “When people get older, the hyperactivity aspect of the disorder is decreased,” said Lin. “That can make it more difficult to diagnose observationally, and without a diagnosis, people with ADHD may assume that what they’re experiencing is standard.”

    Lin and her colleagues used data from the Adolescent Brain Cognitive Development (ABCD) Study, which includes nearly 12,000 children from across the United States. The participants joined the study at age 9 or 10; researchers will continue tracking their biological and behavioral development into young adulthood, which will yield new data over the next few years. The demographics of the study participants mirror those of the U.S. population.

    “That the study group is representative of the greater U.S. population means our findings will be generalizable to the U.S. population as well,” said Lin.

    The researchers conducted a whole-brain analysis using images that measured brain structure and function in 7,805 9- to 10-year-olds. They found that the frontal cortex of the brain — an area responsible for functions like impulse control, attention, and working memory — was thinner in children with ADHD than in those without the disorder. Brain networks related to memory processing, alertness, and auditory processing were also different in children with ADHD. Further, white matter, which is composed of nerve fibers that project from one part of the brain to another, was thinner in children with ADHD. This could have implications for how different brain regions communicate with each other.

    The pervasiveness of the differences was surprising, said Lin.

    “I expected some brain regions to stand out. But we saw a more overall change throughout the entire brain,” she said.

    The pattern the researchers uncovered was sufficiently stable across study participants that the research team used it to train a machine learning algorithm to predict who has ADHD based on brain images alone — meaning it holds promise as a diagnostic tool going forward, they said.
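
    A sketch of the kind of image-derived classifier described here might look like the following; the feature construction, model choice, and synthetic data are assumptions for illustration, since the team's actual pipeline is not detailed in the article.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n_subjects, n_features = 7805, 400             # e.g., regional cortical
    X = rng.normal(size=(n_subjects, n_features))  # thickness + connectivity
    y = rng.integers(0, 2, size=n_subjects)        # stand-in ADHD labels

    # Standardize features, then fit a linear classifier; cross-validation
    # estimates how well the learned pattern generalizes to unseen children.
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores = cross_val_score(clf, X, y, cv=5)
    print(scores.mean())  # ~0.5 on random data; real features should do better
    ```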

    “The algorithm still needs further validation,” said Lin. “But once it is ready for clinical use, combining this more objective measure with the assessments already in use could allow more children to be accurately diagnosed in the future.”

    The findings also emphasize that ADHD is not simply a disorder of behavior.

    “Externalized behavior is certainly a part of ADHD, but there’s also a neurological correlate,” said Sam Payabvash, assistant professor of radiology and biomedical imaging at Yale School of Medicine and senior author of the study. “Better understanding of the neurological component will help with diagnosis and treatment in the future.”

    It may also reduce the stigma attached to mental illness.

    “If you measured someone’s blood pressure and found it was high, nobody would question that it was a condition that should be addressed. But a lot of people question diagnoses of mental illness,” said Lin. “Being able to measure it like we can blood pressure could help address that stigma.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Yale University is a private Ivy League research university in New Haven, Connecticut. Founded in 1701 as the Collegiate School, it is the third-oldest institution of higher education in the United States and one of the nine Colonial Colleges chartered before the American Revolution. The Collegiate School was renamed Yale College in 1718 to honor the school’s largest private benefactor for the first century of its existence, Elihu Yale. Yale University is consistently ranked as one of the top universities and is considered one of the most prestigious in the nation.

    Chartered by Connecticut Colony, the Collegiate School was established in 1701 by clergy to educate Congregational ministers before moving to New Haven in 1716. Originally restricted to theology and sacred languages, the curriculum began to incorporate humanities and sciences by the time of the American Revolution. In the 19th century, the college expanded into graduate and professional instruction, awarding the first PhD in the United States in 1861 and organizing as a university in 1887. Yale’s faculty and student populations grew after 1890 with rapid expansion of the physical campus and scientific research.

    Yale is organized into fourteen constituent schools: the original undergraduate college, the Yale Graduate School of Arts and Sciences and twelve professional schools. While the university is governed by the Yale Corporation, each school’s faculty oversees its curriculum and degree programs. In addition to a central campus in downtown New Haven, the university owns athletic facilities in western New Haven, a campus in West Haven, Connecticut, and forests and nature preserves throughout New England. As of June 2020, the university’s endowment was valued at $31.1 billion, the second largest of any educational institution. The Yale University Library, serving all constituent schools, holds more than 15 million volumes and is the third-largest academic library in the United States. Students compete in intercollegiate sports as the Yale Bulldogs in the NCAA Division I – Ivy League.

    As of October 2020, 65 Nobel laureates, five Fields Medalists, four Abel Prize laureates, and three Turing award winners have been affiliated with Yale University. In addition, Yale has graduated many notable alumni, including five U.S. Presidents, 19 U.S. Supreme Court Justices, 31 living billionaires, and many heads of state. Hundreds of members of Congress and many U.S. diplomats, 78 MacArthur Fellows, 252 Rhodes Scholars, 123 Marshall Scholars, and nine Mitchell Scholars have been affiliated with the university.

    Research

    Yale is a member of the Association of American Universities (AAU) and is classified among “R1: Doctoral Universities – Very high research activity”. According to the National Science Foundation, Yale spent $990 million on research and development in 2018, ranking it 15th in the nation.

    Yale’s faculty include 61 members of the National Academy of Sciences, 7 members of the National Academy of Engineering, and 49 members of the American Academy of Arts and Sciences. The college is, after normalization for institution size, the tenth-largest baccalaureate source of doctoral degree recipients in the United States, and the largest such source within the Ivy League.

    Yale’s English and Comparative Literature departments were part of the New Criticism movement. Of the New Critics, Robert Penn Warren, W.K. Wimsatt, and Cleanth Brooks were all Yale faculty. Later, the Yale Comparative Literature department became a center of American deconstruction. Jacques Derrida, the father of deconstruction, taught at the Department of Comparative Literature from the late 1970s to the mid-1980s. Several other Yale faculty members were also associated with deconstruction, forming the so-called “Yale School”. These included Paul de Man, who taught in the Departments of Comparative Literature and French; J. Hillis Miller and Geoffrey Hartman (both taught in the Departments of English and Comparative Literature); and Harold Bloom (English), whose theoretical position was always somewhat specific, and who ultimately took a very different path from the rest of this group. Yale’s history department has also originated important intellectual trends. Historians C. Vann Woodward and David Brion Davis are credited with beginning in the 1960s and 1970s an important stream of southern historians; likewise, David Montgomery, a labor historian, advised many of the current generation of labor historians in the country. Yale’s Music School and Department fostered the growth of music theory in the latter half of the 20th century. The Journal of Music Theory was founded there in 1957; Allen Forte and David Lewin were influential teachers and scholars.

    In addition to eminent faculty members, Yale research relies heavily on the presence of roughly 1,200 postdocs of various national and international origins working in the many laboratories in the sciences, social sciences, humanities, and professional schools of the university. The university progressively recognized this workforce with the recent creation of the Office for Postdoctoral Affairs and the Yale Postdoctoral Association.

    Notable alumni

    Over its history, Yale has produced many distinguished alumni in a variety of fields, ranging from the public to the private sector. According to 2020 data, around 71% of undergraduates join the workforce, while the next largest group, 16.6%, goes on to attend graduate or professional schools. Yale graduates have been recipients of 252 Rhodes Scholarships, 123 Marshall Scholarships, 67 Truman Scholarships, 21 Churchill Scholarships, and 9 Mitchell Scholarships. The university is also the second-largest producer of Fulbright Scholars, with a total of 1,199 in its history, and has produced 89 MacArthur Fellows. The U.S. Department of State Bureau of Educational and Cultural Affairs ranked Yale fifth among research institutions producing the most 2020–2021 Fulbright Scholars. Additionally, 31 living billionaires are Yale alumni.

    At Yale, one of the most popular undergraduate majors among juniors and seniors is political science, with many students going on to serve careers in government and politics. Former presidents who attended Yale as undergraduates include William Howard Taft, George H. W. Bush, and George W. Bush, while former presidents Gerald Ford and Bill Clinton attended Yale Law School. Former vice-president and influential antebellum era politician John C. Calhoun also graduated from Yale. Former world leaders include Italian prime minister Mario Monti, Turkish prime minister Tansu Çiller, Mexican president Ernesto Zedillo, German president Karl Carstens, Philippine president José Paciano Laurel, Latvian president Valdis Zatlers, Taiwanese premier Jiang Yi-huah, and Malawian president Peter Mutharika, among others. Prominent royals who graduated are Crown Princess Victoria of Sweden, and Olympia Bonaparte, Princess Napoléon.

    Yale alumni have had considerable presence in U.S. government in all three branches. On the U.S. Supreme Court, 19 justices have been Yale alumni, including current Associate Justices Sonia Sotomayor, Samuel Alito, Clarence Thomas, and Brett Kavanaugh. Numerous Yale alumni have been U.S. Senators, including current Senators Michael Bennet, Richard Blumenthal, Cory Booker, Sherrod Brown, Chris Coons, Amy Klobuchar, Ben Sasse, and Sheldon Whitehouse. Current and former cabinet members include Secretaries of State John Kerry, Hillary Clinton, Cyrus Vance, and Dean Acheson; U.S. Secretaries of the Treasury Oliver Wolcott, Robert Rubin, Nicholas F. Brady, Steven Mnuchin, and Janet Yellen; U.S. Attorneys General Nicholas Katzenbach, John Ashcroft, and Edward H. Levi; and many others. Peace Corps founder and American diplomat Sargent Shriver and public official and urban planner Robert Moses are Yale alumni.

    Yale has produced numerous award-winning authors and influential writers, like Nobel Prize in Literature laureate Sinclair Lewis and Pulitzer Prize winners Stephen Vincent Benét, Thornton Wilder, Doug Wright, and David McCullough. Academy Award winning actors, actresses, and directors include Jodie Foster, Paul Newman, Meryl Streep, Elia Kazan, George Roy Hill, Lupita Nyong’o, Oliver Stone, and Frances McDormand. Alumni from Yale have also made notable contributions to both music and the arts. Leading American composer from the 20th century Charles Ives, Broadway composer Cole Porter, Grammy award winner David Lang, and award-winning jazz pianist and composer Vijay Iyer all hail from Yale. Hugo Boss Prize winner Matthew Barney, famed American sculptor Richard Serra, President Barack Obama presidential portrait painter Kehinde Wiley, MacArthur Fellow and contemporary artist Sarah Sze, Pulitzer Prize winning cartoonist Garry Trudeau, and National Medal of Arts photorealist painter Chuck Close all graduated from Yale. Additional alumni include architect and Presidential Medal of Freedom winner Maya Lin, Pritzker Prize winner Norman Foster, and Gateway Arch designer Eero Saarinen. Journalists and pundits include Dick Cavett, Chris Cuomo, Anderson Cooper, William F. Buckley, Jr., and Fareed Zakaria.

    In business, Yale has had numerous alumni and former students go on to become founders of influential businesses, such as William Boeing (Boeing, United Airlines), Briton Hadden and Henry Luce (Time Magazine), Stephen A. Schwarzman (Blackstone Group), Frederick W. Smith (FedEx), Juan Trippe (Pan Am), Harold Stanley (Morgan Stanley), Bing Gordon (Electronic Arts), and Ben Silbermann (Pinterest). Other business people from Yale include former chairman and CEO of Sears Holdings Edward Lampert, former Time Warner president Jeffrey Bewkes, former PepsiCo chairperson and CEO Indra Nooyi, sports agent Donald Dell, and investor/philanthropist Sir John Templeton.

    Yale alumni distinguished in academia include literary critic and historian Henry Louis Gates; economists Irving Fisher, Mahbub ul Haq, and Nobel Prize laureate Paul Krugman; Nobel Prize in Physics laureates Ernest Lawrence and Murray Gell-Mann; Fields Medalist John G. Thompson; Human Genome Project leader and National Institutes of Health director Francis S. Collins; brain surgery pioneer Harvey Cushing; pioneering computer scientist Grace Hopper; influential mathematician and chemist Josiah Willard Gibbs; National Women’s Hall of Fame inductee and biochemist Florence B. Seibert; Turing Award recipient Ron Rivest; inventors Samuel F.B. Morse and Eli Whitney; Nobel Prize in Chemistry laureate John B. Goodenough; lexicographer Noah Webster; and theologians Jonathan Edwards and Reinhold Niebuhr.

    In the sporting arena, Yale alumni include baseball players Ron Darling and Craig Breslow and baseball executives Theo Epstein and George Weiss; football players Calvin Hill, Gary Fencik, Amos Alonzo Stagg, and “the Father of American Football” Walter Camp; ice hockey players Chris Higgins and Olympian Helen Resor; Olympic figure skaters Sarah Hughes and Nathan Chen; nine-time U.S. Squash men’s champion Julian Illingworth; Olympic swimmer Don Schollander; Olympic rowers Josh West and Rusty Wailes; Olympic sailor Stuart McNay; Olympic runner Frank Shorter; and others.

     
  • richardmitnick 1:04 pm on November 21, 2022
    Tags: "CfC": closed-form continuous-time neural network, "Solving brain dynamics gives rise to flexible machine-learning models", , Brain Studies, Differential equations enable us to compute the state of the world or a phenomenon as it evolves but not all the way through time — just step-by-step., , , Studying the brains of small species recently helped MIT researchers better model the interaction between neurons and synapses — the building blocks of natural and artificial neural networks., , , The team reached into a bag of mathematical tricks to find a “closed form” solution that models the entire description of a whole system in a single compute step., There is early evidence of Liquid CfC models in learning tasks in one environment from visual inputs and transferring their learned skills to an entirely new environment without additional training., This framework can help solve more complex machine learning tasks — enabling better representation learning — and should be the basic building blocks of any future embedded intelligence system., With the models one can compute the equations at any time in the future and at any time in the past.   

    From The Computer Science & Artificial Intelligence Laboratory (CSAIL) At The Massachusetts Institute of Technology: “Solving brain dynamics gives rise to flexible machine-learning models” 


    From The Computer Science & Artificial Intelligence Laboratory (CSAIL)

    At

    The Massachusetts Institute of Technology

    11.15.22
    Rachel Gordon

    Studying the brains of small species recently helped MIT researchers better model the interaction between neurons and synapses — the building blocks of natural and artificial neural networks — into a class of flexible, robust machine-learning models that learn on the job and can adapt to changing conditions. Image: Ramin Hasani/Stable Diffusion.

    Last year, MIT researchers announced that they had built “liquid” neural networks, inspired by the brains of small species: a class of flexible, robust machine learning models that learn on the job and can adapt to changing conditions, for real-world safety-critical tasks, like driving and flying. The flexibility of these “liquid” neural nets meant boosting the bloodline to our connected world, yielding better decision-making for many tasks involving time-series data, such as brain and heart monitoring, weather forecasting, and stock pricing.

    But these models become computationally expensive as their numbers of neurons and synapses increase, and they require clunky computer programs to solve their underlying, complicated math. And all of this math, similar to many physical phenomena, becomes harder to solve with size, meaning lots of small steps must be computed to arrive at a solution.

    Now, the same team of scientists has discovered a way to alleviate this bottleneck by solving the differential equation behind the interaction of two neurons through synapses, unlocking a new type of fast and efficient artificial intelligence algorithm. These models have the same characteristics as liquid neural nets — flexible, causal, robust, and explainable — but are orders of magnitude faster and scalable. This type of neural net could therefore be used for any task that involves getting insight into data over time, as the models are compact and adaptable even after training — while many traditional models are fixed. The underlying neuron model had no known solution since 1907 — the year its differential equation was introduced.

    The models, dubbed a “closed-form continuous-time” (CfC) neural network, outperformed state-of-the-art counterparts on a slew of tasks, with considerably higher speedups and performance in recognizing human activities from motion sensors, modeling physical dynamics of a simulated walker robot, and event-based sequential image processing. On a medical prediction task, for example, the new models were 220 times faster on a sampling of 8,000 patients. 

    A new paper on the work is published today in Nature Machine Intelligence [below].

    “The new machine-learning models we call ‘CfC’s’ replace the differential equation defining the computation of the neuron with a closed form approximation, preserving the beautiful properties of liquid networks without the need for numerical integration,” says MIT Professor Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and senior author on the new paper. “CfC models are causal, compact, explainable, and efficient to train and predict. They open the way to trustworthy machine learning for safety-critical applications.”
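
    The gated closed-form update reported for CfCs has the shape x(t) = σ(−f·t) ⊙ g + (1 − σ(−f·t)) ⊙ h, where f, g, and h are small learned functions of the state and input. The NumPy sketch below reduces those functions to single linear layers, which is an illustrative assumption rather than the paper's architecture.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    class TinyCfCCell:
        """Minimal sketch of a closed-form continuous-time (CfC) update:
        x(t) = sigmoid(-f*t) * g + (1 - sigmoid(-f*t)) * h, with f, g, h
        reduced to single linear heads (an illustrative simplification)."""

        def __init__(self, n_in, n_hidden, seed=0):
            rng = np.random.default_rng(seed)
            d = n_in + n_hidden
            self.Wf, self.Wg, self.Wh = (rng.normal(0.0, 0.1, (d, n_hidden))
                                         for _ in range(3))

        def step(self, x, u, t):
            z = np.concatenate([u, x])            # input joined with hidden state
            f = z @ self.Wf                       # data-dependent time constants
            g, h = np.tanh(z @ self.Wg), np.tanh(z @ self.Wh)
            gate = sigmoid(-f * t)                # explicit in elapsed time t
            return gate * g + (1.0 - gate) * h    # no numerical ODE solver

    cell = TinyCfCCell(n_in=3, n_hidden=8)
    x = np.zeros(8)
    for t, u in [(0.1, np.ones(3)), (0.7, np.zeros(3))]:  # irregular timestamps
        x = cell.step(x, u, t)
    print(x.shape)  # (8,)
    ```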

    Keeping things liquid 

    Differential equations enable us to compute the state of the world or a phenomenon as it evolves, but not all the way through time — just step-by-step. To model natural phenomena through time and understand previous and future behavior, like human activity recognition or a robot’s path, for example, the team reached into a bag of mathematical tricks to find just the ticket: a “closed form” solution that models the entire description of a whole system in a single compute step.

    With their models, one can compute this equation at any time in the future, and at any time in the past. Not only that, but the speed of computation is much faster because you don’t need to solve the differential equation step-by-step. 
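
    The difference is easy to see on a single leaky-integrator neuron, dx/dt = −x/τ + I, whose closed form is exact; this toy example is ours, not the paper's, but it shows why a closed-form model can jump straight to any time point while a numerical solver must march through every step.

    ```python
    import numpy as np

    tau, I, x0, T = 0.5, 2.0, 0.0, 3.0   # time constant, input, start, horizon

    def euler(dt):
        """Step-by-step numerical solution of dx/dt = -x/tau + I."""
        x = x0
        for _ in range(int(T / dt)):     # cost grows as the step size shrinks
            x += dt * (-x / tau + I)
        return x

    def closed_form(t):
        """Exact solution, evaluable directly at any past or future time t."""
        return x0 * np.exp(-t / tau) + I * tau * (1.0 - np.exp(-t / tau))

    print(euler(1e-4))      # ~0.9975 after 30,000 small steps
    print(closed_form(T))   # same answer in a single evaluation
    ```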

    Imagine an end-to-end neural network that receives driving input from a camera mounted on a car. The network is trained to generate outputs, like the car’s steering angle. In 2020, the team solved this by using liquid neural networks with 19 nodes, so 19 neurons plus a small perception module could drive a car. A differential equation describes each node of that system. With the closed-form solution, if you replace it inside this network, it would give you the exact behavior, as it’s a good approximation of the actual dynamics of the system. They can thus solve the problem with an even lower number of neurons, which means it would be faster and less computationally expensive. 

    These models can receive inputs as time series (events that happened in time), which could be used for classification, controlling a car, moving a humanoid robot, or forecasting financial and medical events. With all of these various modes, it can also increase accuracy, robustness, and performance, and, importantly, computation speed — which sometimes comes as a trade-off. 

    Solving this equation has far-reaching implications for advancing research in both natural and artificial intelligence systems. “When we have a closed-form description of neurons and synapses’ communication, we can build computational models of brains with billions of cells, a capability that is not possible today due to the high computational complexity of neuroscience models. The closed-form equation could facilitate such grand-level simulations and therefore opens new avenues of research for us to understand intelligence,” says MIT CSAIL Research Affiliate Ramin Hasani, first author on the new paper.

    Portable learning

    Moreover, there is early evidence of Liquid CfC models learning tasks in one environment from visual inputs and transferring their learned skills to an entirely new environment without additional training. This is called out-of-distribution generalization, which is one of the most fundamental open challenges of artificial intelligence research.

    “Neural network systems based on differential equations are tough to solve and scale to, say, millions and billions of parameters. Getting that description of how neurons interact with each other, not just the threshold, but solving the physical dynamics between cells enables us to build up larger-scale neural networks,” says Hasani. “This framework can help solve more complex machine learning tasks — enabling better representation learning — and should be the basic building blocks of any future embedded intelligence system.”

    “Recent neural network architectures, such as neural ODEs and liquid neural networks, have hidden layers composed of specific dynamical systems representing infinite latent states instead of explicit stacks of layers,” says Sildomar Monteiro, AI and Machine Learning Group lead at Aurora Flight Sciences, a Boeing company, who was not involved in this paper. “These implicitly-defined models have shown state-of-the-art performance while requiring far fewer parameters than conventional architectures. However, their practical adoption has been limited due to the high computational cost required for training and inference.” He adds that this paper “shows a significant improvement in the computation efficiency for this class of neural networks … [and] has the potential to enable a broader range of practical applications relevant to safety-critical commercial and defense systems.”

    Hasani and Mathias Lechner, a postdoc at MIT CSAIL, wrote the paper supervised by Rus, alongside Alexander Amini, a CSAIL postdoc; Lucas Liebenwein SM ’18, PhD ’21; Aaron Ray, an MIT electrical engineering and computer science PhD student and CSAIL affiliate; Max Tschaikowski, associate professor in computer science at Aalborg University in Denmark; and Gerald Teschl, professor of mathematics at the University of Vienna.

    Science paper:
    Nature Machine Intelligence
    See the science paper for instructive material with images.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition


    The Computer Science and Artificial Intelligence Laboratory (CSAIL) is a research institute at the Massachusetts Institute of Technology (MIT) formed by the 2003 merger of the Laboratory for Computer Science (LCS) and the Artificial Intelligence Laboratory (AI Lab). Housed within the Ray and Maria Stata Center, CSAIL is the largest on-campus laboratory as measured by research scope and membership. It is part of the Schwarzman College of Computing but is also overseen by the MIT Vice President of Research.

    Research activities

    CSAIL’s research activities are organized around a number of semi-autonomous research groups, each of which is headed by one or more professors or research scientists. These groups are divided up into seven general areas of research:

    Artificial intelligence
    Computational biology
    Graphics and vision
    Language and learning
    Theory of computation
    Robotics
    Systems (includes computer architecture, databases, distributed systems, networks and networked systems, operating systems, programming methodology, and software engineering among others)

    In addition, CSAIL hosts the World Wide Web Consortium (W3C).

    MIT Seal

    USPS “Forever” postage stamps celebrating Innovation at MIT.

    MIT Campus

    The Massachusetts Institute of Technology is a private land-grant research university in Cambridge, Massachusetts. The institute has an urban campus that extends more than a mile (1.6 km) alongside the Charles River. The institute also encompasses a number of major off-campus facilities such as the MIT Lincoln Laboratory, the MIT Bates Research and Engineering Center, and the Haystack Observatory, as well as affiliated laboratories such as the Broad Institute of MIT and Harvard and the Whitehead Institute.

    Massachusetts Institute of Technology Haystack Observatory, Westford, Massachusetts, USA, altitude 131 m (430 ft).

    Founded in 1861 in response to the increasing industrialization of the United States, Massachusetts Institute of Technology adopted a European polytechnic university model and stressed laboratory instruction in applied science and engineering. It has since played a key role in the development of many aspects of modern science, engineering, mathematics, and technology, and is widely known for its innovation and academic strength. It is frequently regarded as one of the most prestigious universities in the world.

    As of December 2020, 97 Nobel laureates, 26 Turing Award winners, and 8 Fields Medalists have been affiliated with MIT as alumni, faculty members, or researchers. In addition, 58 National Medal of Science recipients, 29 National Medals of Technology and Innovation recipients, 50 MacArthur Fellows, 80 Marshall Scholars, 3 Mitchell Scholars, 22 Schwarzman Scholars, 41 astronauts, and 16 Chief Scientists of the U.S. Air Force have been affiliated with The Massachusetts Institute of Technology. The university also has a strong entrepreneurial culture and MIT alumni have founded or co-founded many notable companies. Massachusetts Institute of Technology is a member of the Association of American Universities (AAU).

    Foundation and vision

    In 1859, a proposal was submitted to the Massachusetts General Court to use newly filled lands in Back Bay, Boston for a “Conservatory of Art and Science”, but the proposal failed. A charter for the incorporation of the Massachusetts Institute of Technology, proposed by William Barton Rogers, was signed by John Albion Andrew, the governor of Massachusetts, on April 10, 1861.

    Rogers, a professor from the University of Virginia, wanted to establish an institution to address rapid scientific and technological advances. He did not wish to found a professional school, but rather an institution combining elements of both professional and liberal education, proposing that:

    “The true and only practicable object of a polytechnic school is, as I conceive, the teaching, not of the minute details and manipulations of the arts, which can be done only in the workshop, but the inculcation of those scientific principles which form the basis and explanation of them, and along with this, a full and methodical review of all their leading processes and operations in connection with physical laws.”

    The Rogers Plan reflected the German research university model, emphasizing an independent faculty engaged in research, as well as instruction oriented around seminars and laboratories.

    Early developments

    Two days after Massachusetts Institute of Technology was chartered, the first battle of the Civil War broke out. After a long delay through the war years, MIT’s first classes were held in the Mercantile Building in Boston in 1865. The new institute was founded as part of the Morrill Land-Grant Colleges Act to fund institutions “to promote the liberal and practical education of the industrial classes” and was a land-grant school. In 1863, under the same act, the Commonwealth of Massachusetts founded the Massachusetts Agricultural College, which developed into the University of Massachusetts Amherst. In 1866, the proceeds from land sales went toward new buildings in the Back Bay.

    Massachusetts Institute of Technology was informally called “Boston Tech”. The institute adopted the European polytechnic university model and emphasized laboratory instruction from an early date. Despite chronic financial problems, the institute saw growth in the last two decades of the 19th century under President Francis Amasa Walker. Programs in electrical, chemical, marine, and sanitary engineering were introduced, new buildings were built, and the size of the student body increased to more than one thousand.

    The curriculum drifted to a vocational emphasis, with less focus on theoretical science. The fledgling school still suffered from chronic financial shortages which diverted the attention of the MIT leadership. During these “Boston Tech” years, Massachusetts Institute of Technology faculty and alumni rebuffed Harvard University president (and former MIT faculty) Charles W. Eliot’s repeated attempts to merge MIT with Harvard College’s Lawrence Scientific School. There would be at least six attempts to absorb MIT into Harvard. In its cramped Back Bay location, MIT could not afford to expand its overcrowded facilities, driving a desperate search for a new campus and funding. Eventually, the MIT Corporation approved a formal agreement to merge with Harvard, over the vehement objections of MIT faculty, students, and alumni. However, a 1917 decision by the Massachusetts Supreme Judicial Court effectively put an end to the merger scheme.

    In 1916, the Massachusetts Institute of Technology administration and the MIT charter crossed the Charles River on the ceremonial barge Bucentaur built for the occasion, to signify MIT’s move to a spacious new campus largely consisting of filled land on a one-mile-long (1.6 km) tract along the Cambridge side of the Charles River. The neoclassical “New Technology” campus was designed by William W. Bosworth and had been funded largely by anonymous donations from a mysterious “Mr. Smith”, starting in 1912. In January 1920, the donor was revealed to be the industrialist George Eastman of Rochester, New York, who had invented methods of film production and processing, and founded Eastman Kodak. Between 1912 and 1920, Eastman donated $20 million ($236.6 million in 2015 dollars) in cash and Kodak stock to MIT.

    Curricular reforms

    In the 1930s, President Karl Taylor Compton and Vice-President (effectively Provost) Vannevar Bush emphasized the importance of pure sciences like physics and chemistry and reduced the vocational practice required in shops and drafting studios. The Compton reforms “renewed confidence in the ability of the Institute to develop leadership in science as well as in engineering”. Unlike Ivy League schools, Massachusetts Institute of Technology catered more to middle-class families, and depended more on tuition than on endowments or grants for its funding. The school was elected to the Association of American Universities in 1934.

    Still, as late as 1949, the Lewis Committee lamented in its report on the state of education at Massachusetts Institute of Technology that “the Institute is widely conceived as basically a vocational school”, a “partly unjustified” perception the committee sought to change. The report comprehensively reviewed the undergraduate curriculum, recommended offering a broader education, and warned against letting engineering and government-sponsored research detract from the sciences and humanities. The School of Humanities, Arts, and Social Sciences and the MIT Sloan School of Management were formed in 1950 to compete with the powerful Schools of Science and Engineering. Previously marginalized faculties in the areas of economics, management, political science, and linguistics emerged into cohesive and assertive departments by attracting respected professors and launching competitive graduate programs. The School of Humanities, Arts, and Social Sciences continued to develop under the successive terms of the more humanistically oriented presidents Howard W. Johnson and Jerome Wiesner between 1966 and 1980.

    Massachusetts Institute of Technology’s involvement in military science surged during World War II. In 1941, Vannevar Bush was appointed head of the federal Office of Scientific Research and Development and directed funding to only a select group of universities, including MIT. Engineers and scientists from across the country gathered at Massachusetts Institute of Technology’s Radiation Laboratory, established in 1940 to assist the British military in developing microwave radar. The work done there significantly affected both the war and subsequent research in the area. Other defense projects included gyroscope-based and other complex control systems for gunsight, bombsight, and inertial navigation under Charles Stark Draper’s Instrumentation Laboratory; the development of a digital computer for flight simulations under Project Whirlwind; and high-speed and high-altitude photography under Harold Edgerton. By the end of the war, Massachusetts Institute of Technology became the nation’s largest wartime R&D contractor (attracting some criticism of Bush), employing nearly 4000 in the Radiation Laboratory alone and receiving in excess of $100 million ($1.2 billion in 2015 dollars) before 1946. Work on defense projects continued even after then. Post-war government-sponsored research at MIT included SAGE and guidance systems for ballistic missiles and Project Apollo.

    These activities affected Massachusetts Institute of Technology profoundly. A 1949 report noted the lack of “any great slackening in the pace of life at the Institute” to match the return to peacetime, remembering the “academic tranquility of the prewar years”, though acknowledging the significant contributions of military research to the increased emphasis on graduate education and rapid growth of personnel and facilities. The faculty doubled and the graduate student body quintupled during the terms of Karl Taylor Compton, president of Massachusetts Institute of Technology between 1930 and 1948; James Rhyne Killian, president from 1948 to 1957; and Julius Adams Stratton, chancellor from 1952 to 1957, whose institution-building strategies shaped the expanding university. By the 1950s, Massachusetts Institute of Technology no longer simply benefited the industries with which it had worked for three decades, and it had developed closer working relationships with new patrons, philanthropic foundations and the federal government.

    In the late 1960s and early 1970s, student and faculty activists protested against the Vietnam War and Massachusetts Institute of Technology’s defense research. In this period, Massachusetts Institute of Technology’s various departments were researching helicopters, smart bombs and counterinsurgency techniques for the war in Vietnam, as well as guidance systems for nuclear missiles. The Union of Concerned Scientists was founded on March 4, 1969 during a meeting of faculty members and students seeking to shift the emphasis on military research toward environmental and social problems. Massachusetts Institute of Technology ultimately divested itself from the Instrumentation Laboratory and moved all classified research off-campus to the MIT Lincoln Laboratory facility in 1973 in response to the protests. The student body, faculty, and administration remained comparatively unpolarized during what was a tumultuous time for many other universities. Johnson was seen to be highly successful in leading his institution to “greater strength and unity” after these times of turmoil. However, six Massachusetts Institute of Technology students were sentenced to prison terms at this time, and some former student leaders, such as Michael Albert and George Katsiaficas, are still indignant about MIT’s role in military research and its suppression of these protests. (Richard Leacock’s film, November Actions, records some of these tumultuous events.)

    In the 1980s, there was more controversy at Massachusetts Institute of Technology over its involvement in SDI (space weaponry) and CBW (chemical and biological warfare) research. More recently, Massachusetts Institute of Technology’s research for the military has included work on robots, drones and ‘battle suits’.

    Recent history

    Massachusetts Institute of Technology has kept pace with and helped to advance the digital age. In addition to developing the predecessors to modern computing and networking technologies, students, staff, and faculty members at Project MAC, the Artificial Intelligence Laboratory, and the Tech Model Railroad Club wrote some of the earliest interactive computer video games like Spacewar! and created much of modern hacker slang and culture. Several major computer-related organizations have originated at MIT since the 1980s: Richard Stallman’s GNU Project and the subsequent Free Software Foundation were founded in the mid-1980s at the AI Lab; the MIT Media Lab was founded in 1985 by Nicholas Negroponte and Jerome Wiesner to promote research into novel uses of computer technology; the World Wide Web Consortium standards organization was founded at the Laboratory for Computer Science in 1994 by Tim Berners-Lee; the MIT OpenCourseWare project has made course materials for over 2,000 Massachusetts Institute of Technology classes available online free of charge since 2002; and the One Laptop per Child initiative to expand computer education and connectivity to children worldwide was launched in 2005.

    Massachusetts Institute of Technology was named a sea-grant college in 1976 to support its programs in oceanography and marine sciences and was named a space-grant college in 1989 to support its aeronautics and astronautics programs. Despite diminishing government financial support over the past quarter century, MIT launched several successful development campaigns to significantly expand the campus: new dormitories and athletics buildings on west campus; the Tang Center for Management Education; several buildings in the northeast corner of campus supporting research into biology, brain and cognitive sciences, genomics, biotechnology, and cancer research; and a number of new “backlot” buildings on Vassar Street including the Stata Center. Construction on campus in the 2000s included expansions of the Media Lab, the Sloan School’s eastern campus, and graduate residences in the northwest. In 2006, President Hockfield launched the MIT Energy Research Council to investigate the interdisciplinary challenges posed by increasing global energy consumption.

    In 2001, inspired by the open source and open access movements, Massachusetts Institute of Technology launched OpenCourseWare to make the lecture notes, problem sets, syllabi, exams, and lectures from the great majority of its courses available online for no charge, though without any formal accreditation for coursework completed. While the cost of supporting and hosting the project is high, OCW expanded in 2005 to include other universities as a part of the OpenCourseWare Consortium, which currently includes more than 250 academic institutions with content available in at least six languages. In 2011, Massachusetts Institute of Technology announced it would offer formal certification (but not credits or degrees) to online participants completing coursework in its “MITx” program, for a modest fee. The “edX” online platform supporting MITx was initially developed in partnership with Harvard and its analogous “Harvardx” initiative. The courseware platform is open source, and other universities have already joined and added their own course content. In March 2009 the Massachusetts Institute of Technology faculty adopted an open-access policy to make its scholarship publicly accessible online.

    Massachusetts Institute of Technology has its own police force. Three days after the Boston Marathon bombing of April 2013, MIT Police patrol officer Sean Collier was fatally shot by the suspects Dzhokhar and Tamerlan Tsarnaev, setting off a violent manhunt that shut down the campus and much of the Boston metropolitan area for a day. One week later, Collier’s memorial service was attended by more than 10,000 people, in a ceremony hosted by the Massachusetts Institute of Technology community with thousands of police officers from the New England region and Canada. On November 25, 2013, Massachusetts Institute of Technology announced the creation of the Collier Medal, to be awarded annually to “an individual or group that embodies the character and qualities that Officer Collier exhibited as a member of the Massachusetts Institute of Technology community and in all aspects of his life”. The announcement further stated that “Future recipients of the award will include those whose contributions exceed the boundaries of their profession, those who have contributed to building bridges across the community, and those who consistently and selflessly perform acts of kindness”.

    In September 2017, the school announced the creation of an artificial intelligence research lab called the MIT-IBM Watson AI Lab. IBM will spend $240 million over the next decade, and the lab will be staffed by MIT and IBM scientists. In October 2018 MIT announced that it would open a new Schwarzman College of Computing dedicated to the study of artificial intelligence, named after lead donor and The Blackstone Group CEO Stephen Schwarzman. The focus of the new college is to study not just AI, but interdisciplinary AI education, and how AI can be used in fields as diverse as history and biology. The cost of buildings and new faculty for the new college is expected to be $1 billion upon completion.

    The Caltech/MIT Advanced aLIGO was designed and constructed by a team of scientists from the California Institute of Technology, the Massachusetts Institute of Technology, and industrial contractors, and funded by the National Science Foundation.

    Caltech/MIT Advanced aLIGO.

    It was designed to open the field of gravitational-wave astronomy through the detection of gravitational waves predicted by general relativity. Gravitational waves were detected for the first time by the LIGO detector in 2015. For contributions to the LIGO detector and the observation of gravitational waves, two Caltech physicists, Kip Thorne and Barry Barish, and Massachusetts Institute of Technology physicist Rainer Weiss won the Nobel Prize in Physics in 2017. Weiss, who is also a Massachusetts Institute of Technology graduate, designed the laser interferometric technique, which served as the essential blueprint for LIGO.

    The mission of Massachusetts Institute of Technology is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of The Massachusetts Institute of Technology community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

     
  • richardmitnick 9:25 am on November 21, 2022 Permalink | Reply
    Tags: "How big brains are made", , , , Brain Studies, Cephalopods can also communicate with one another and show signs of spatial learning and use tools to solve problems. They are so smart they even get bored and start making mischief., Cephalopods can quickly process information to transform shape and color and even texture blending in with their surroundings., Harvard study looks at how cephalopods develop their big brains.,   

    From “The Gazette” At Harvard University: “How big brains are made” 

    From “The Gazette”

    At

    Harvard University

    Harvard study looks at how cephalopods develop their big brains.

    11.16.22
    Juan Siliezar

    Four squid embryos in their egg sac. These are the squid species Doryteuthis pealeii. Credit: Kristen Koenig.

    Cephalopods are capable of some truly impressive behaviors. They can quickly process information to transform shape, color, and even texture, blending in with their surroundings. They can also communicate with one another, show signs of spatial learning, and use tools to solve problems. They are so smart they even get bored and start making mischief.

    It’s no secret what makes this all possible: These marine animals, which include octopus, squid, and their cuttlefish cousins, have the most complex brains of any invertebrates on the planet. What remains something of a mystery, however, is how cephalopods developed those big brains in the first place. A Harvard lab that studies the visual systems of these soft-bodied creatures (the visual system is where two-thirds of their central processing tissue is concentrated) believes it has come close to figuring it out.

    Researchers from the FAS Center for Systems Biology describe in a new study [Current Biology (below)] how they used a new live-imaging technique to watch neurons being created in squid embryos almost in real-time. They were then able to track those cells through the development of the nervous system in the retina.

    They were surprised to discover that these neural stem cells behaved very much like those in vertebrates during nervous-system development. The results suggest that while vertebrates and cephalopods diverged from one another 500 million years ago, the process by which both developed big brains was similar. In addition, the way the cells act, divide, and are shaped may essentially follow a kind of blueprint required for this kind of nervous system.

    “Our conclusions were surprising because a lot of what we know about nervous system development in vertebrates has long been thought to be special to that lineage,” said Kristen Koenig, a John Harvard Distinguished Fellow and senior author of the study. “By observing the fact that the process is very similar, what it suggested to us is that these two independently evolved, very large nervous systems are using the same mechanisms to build them. What that suggests is that those mechanisms — those tools — the animals use during development may be important for building big nervous systems.”

    The scientists from the Koenig Lab focused on the retina of a squid called Doryteuthis pealeii, more commonly known as the longfin inshore squid. The squid grow to be about a foot long and are abundant in the northwest Atlantic Ocean. The embryos look like adorable anime characters with big heads and eyes.

    The researchers employed similar techniques to those regularly used to study model organisms, like fruit flies and zebrafish. They created special tools and made use of cutting-edge microscopes that can take high-resolution images every 10 minutes for hours on end to see how individual cells behave. The researchers used fluorescent dyes to mark the cells so they could map them and track them.
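
    As a side note on what “track them” involves computationally, one common minimal approach is to link each cell’s centroid to the nearest centroid in the following frame. The sketch below is a generic illustration of that idea, not the Koenig Lab’s actual analysis pipeline; the coordinates and distance threshold are made up.

        # Toy example: greedy nearest-neighbor linking of cell centroids
        # between two consecutive time-lapse frames.
        import numpy as np

        def link_frames(prev_pts, next_pts, max_dist=10.0):
            """Match each centroid in prev_pts to its nearest unclaimed
            centroid in next_pts, refusing links longer than max_dist."""
            links = {}
            taken = set()
            for i, p in enumerate(prev_pts):
                d = np.linalg.norm(next_pts - p, axis=1)
                j = int(np.argmin(d))
                if d[j] <= max_dist and j not in taken:
                    links[i] = j
                    taken.add(j)
            return links

        # Three cells drifting slightly between frames (arbitrary units).
        frame_t0 = np.array([[10.0, 10.0], [40.0, 12.0], [25.0, 30.0]])
        frame_t1 = np.array([[11.5, 10.5], [39.0, 13.0], [26.0, 31.5]])
        print(link_frames(frame_t0, frame_t1))  # {0: 0, 1: 1, 2: 2}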

    This live-imaging technique allowed the team to observe stem cells called neural progenitor cells and how they are organized. The cells formed a special kind of structure called a pseudostratified epithelium. Its main feature is that the cells are elongated so they can be densely packed. Researchers also saw the nuclei of these cells move up and down before and after dividing. This movement is important for keeping the tissue organized and allowing for continued growth, they said.

    This type of structure is universally seen in brain and eye development in vertebrate species. It long has been considered one of the reasons the vertebrate nervous system could grow so large and complex. Scientists have observed examples of this type of neural epithelium in other animals, but the squid tissue was also strikingly similar to that of vertebrates in size, organization, and nucleus movement.

    The research was led by Francesca R. Napoli and Christina M. Daly, research assistants in the Koenig Lab.

    Next, the lab plans to look at how different cell types in cephalopod brains emerge. Koenig wants to determine whether they’re expressed at different times, how they decide to become one type of neuron versus another, and whether this action is similar across species.

    “One of the big takeaways from this type of work is just how valuable it is to study the diversity of life,” Koenig said. “By studying this diversity, you can actually really come back to fundamental ideas about even our own development and our own biomedically relevant questions. You can really speak to those questions.”

    Science paper:
    Current Biology

    Graphical abstract

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Harvard University campus

    Harvard University is the oldest institution of higher education in the United States, established in 1636 by vote of the Great and General Court of the Massachusetts Bay Colony. It was named after the College’s first benefactor, the young minister John Harvard of Charlestown, who upon his death in 1638 left his library and half his estate to the institution. A statue of John Harvard stands today in front of University Hall in Harvard Yard, and is perhaps the University’s best-known landmark.

    Harvard University has 12 degree-granting Schools in addition to the Radcliffe Institute for Advanced Study. The University has grown from nine students with a single master to an enrollment of more than 20,000 degree candidates including undergraduate, graduate, and professional students. There are more than 360,000 living alumni in the U.S. and over 190 other countries.

    The Massachusetts colonial legislature, the General Court, authorized Harvard University’s founding. In its early years, Harvard College primarily trained Congregational and Unitarian clergy, although it has never been formally affiliated with any denomination. Its curriculum and student body were gradually secularized during the 18th century, and by the 19th century, Harvard University had emerged as the central cultural establishment among the Boston elite. Following the American Civil War, President Charles William Eliot’s long tenure (1869–1909) transformed the college and affiliated professional schools into a modern research university; Harvard became a founding member of the Association of American Universities in 1900. James B. Conant led the university through the Great Depression and World War II; he liberalized admissions after the war.

    The university is composed of ten academic faculties plus the Radcliffe Institute for Advanced Study. Arts and Sciences offers study in a wide range of academic disciplines for undergraduates and for graduates, while the other faculties offer only graduate degrees, mostly professional. Harvard has three main campuses: the 209-acre (85 ha) Cambridge campus centered on Harvard Yard; an adjoining campus immediately across the Charles River in the Allston neighborhood of Boston; and the medical campus in Boston’s Longwood Medical Area. Harvard University’s endowment is valued at $41.9 billion, making it the largest of any academic institution. Endowment income helps enable the undergraduate college to admit students regardless of financial need and provide generous financial aid with no loans. The Harvard Library is the world’s largest academic library system, comprising 79 individual libraries holding about 20.4 million items.

    Harvard University has more alumni, faculty, and researchers who have won Nobel Prizes (161) and Fields Medals (18) than any other university in the world and more alumni who have been members of the U.S. Congress, MacArthur Fellows, Rhodes Scholars (375), and Marshall Scholars (255) than any other university in the United States. Its alumni also include eight U.S. presidents and 188 living billionaires, the most of any university. Fourteen Turing Award laureates have been Harvard affiliates. Students and alumni have also won 10 Academy Awards, 48 Pulitzer Prizes, and 108 Olympic medals (46 gold), and they have founded many notable companies.

    Colonial

    Harvard University was established in 1636 by vote of the Great and General Court of the Massachusetts Bay Colony. In 1638, it acquired British North America’s first known printing press. In 1639, it was named Harvard College after deceased clergyman John Harvard, an alumnus of the University of Cambridge who had left the school £779 and his library of some 400 volumes. The charter creating the Harvard Corporation was granted in 1650.

    A 1643 publication gave the school’s purpose as “to advance learning and perpetuate it to posterity, dreading to leave an illiterate ministry to the churches when our present ministers shall lie in the dust.” It trained many Puritan ministers in its early years and offered a classic curriculum based on the English university model—many leaders in the colony had attended the University of Cambridge—but conformed to the tenets of Puritanism. Harvard University has never affiliated with any particular denomination, though many of its earliest graduates went on to become clergymen in Congregational and Unitarian churches.

    Increase Mather served as president from 1681 to 1701. In 1708, John Leverett became the first president who was not also a clergyman, marking a turning of the college away from Puritanism and toward intellectual independence.

    19th century

    In the 19th century, Enlightenment ideas of reason and free will were widespread among Congregational ministers, putting those ministers and their congregations in tension with more traditionalist, Calvinist parties. When Hollis Professor of Divinity David Tappan died in 1803 and President Joseph Willard died a year later, a struggle broke out over their replacements. Henry Ware was elected to the Hollis chair in 1805, and the liberal Samuel Webber was appointed to the presidency two years later, signaling the shift from the dominance of traditional ideas at Harvard to the dominance of liberal, Arminian ideas.

    Charles William Eliot, president 1869–1909, eliminated the favored position of Christianity from the curriculum while opening it to student self-direction. Though Eliot was the crucial figure in the secularization of American higher education, he was motivated not by a desire to secularize education but by Transcendentalist Unitarian convictions influenced by William Ellery Channing and Ralph Waldo Emerson.

    20th century

    In the 20th century, Harvard University’s reputation grew as a burgeoning endowment and prominent professors expanded the university’s scope. Rapid enrollment growth continued as new graduate schools were begun and the undergraduate college expanded. Radcliffe College, established in 1879 as the female counterpart of Harvard College, became one of the most prominent schools for women in the United States. Harvard University became a founding member of the Association of American Universities in 1900.

    The student body in the early decades of the century was predominantly “old-stock, high-status Protestants, especially Episcopalians, Congregationalists, and Presbyterians.” A 1923 proposal by President A. Lawrence Lowell that Jews be limited to 15% of undergraduates was rejected, but Lowell did ban blacks from freshman dormitories.

    President James B. Conant reinvigorated creative scholarship to guarantee Harvard University’s preeminence among research institutions. He saw higher education as a vehicle of opportunity for the talented rather than an entitlement for the wealthy, so Conant devised programs to identify, recruit, and support talented youth. In 1943, he asked the faculty to make a definitive statement about what general education ought to be, at the secondary as well as at the college level. The resulting Report, published in 1945, was one of the most influential manifestos in 20th century American education.

    Between 1945 and 1960, admissions were opened up to bring in a more diverse group of students. No longer drawing mostly from select New England prep schools, the undergraduate college became accessible to striving middle class students from public schools; many more Jews and Catholics were admitted, but few blacks, Hispanics, or Asians. Throughout the rest of the 20th century, Harvard became more diverse.

    Harvard University’s graduate schools began admitting women in small numbers in the late 19th century. During World War II, students at Radcliffe College (which since 1879 had been paying Harvard University professors to repeat their lectures for women) began attending Harvard University classes alongside men. Women were first admitted to the medical school in 1945. Since 1971, Harvard University has controlled essentially all aspects of undergraduate admission, instruction, and housing for Radcliffe women. In 1999, Radcliffe was formally merged into Harvard University.

    21st century

    Drew Gilpin Faust, previously the dean of the Radcliffe Institute for Advanced Study, became Harvard University’s first woman president on July 1, 2007. She was succeeded by Lawrence Bacow on July 1, 2018.

     
  • richardmitnick 8:35 am on October 31, 2022 Permalink | Reply
    Tags: "In the Jungle of Neurons", "Neurotransmitters": chemical messengers, "Synapses": the junctions where two cells meet and are only separated by a tiny gap., , , , Brain Studies, Neuroscientists are looking closely at what goes on in our brains when we learn., Neurotransmitters such as dopamine or noradrenaline which trigger happy feelings or alert our attention strengthen the synaptic changes and thus memory formation., Researchers discovered that glial cells play an active part in the nervous system., The brain builds models of the world and then checks to see whether they correspond with real experiences. If the model is false it is revised., The hippocampus is important to the formation of declarative (or explicit) memories., The human brain comprises some 100 billion neurons each of which can have several thousand synaptic connections to other neurons., , There is increasing empirical evidence that glial cells play a key role in memory processes., Where in this network are memories formed?   

    From The University of Zürich (Universität Zürich) (CH): “In the Jungle of Neurons” 

    From The University of Zürich (Universität Zürich) (CH)

    10.31.22
    Stefan Stöcklin

    A big part of learning involves our memory. Neuroscientists are looking closely at what goes on in our brains when we learn, and are slowly unraveling the mysteries of this incredible ability of ours.

    View of a complex and dynamic network of neurons with their branched projections from the cerebral cortex of a mouse. The stained neurons were imaged with a modern light sheet microscope (mesospim.org). (Image: F. Voigt, W. Luo, C. Földy, F. Helmchen)

    In the Harry Potter books, a person’s memory can be extracted from their brain with a sieve and examined and manipulated from the outside. Memories appear as silver wisps and can be stored in bottles. Two thousand years earlier, Plato likened memory to a wax tablet in which our memories are stamped. But what do brain researchers think nowadays? How does the brain learn, and how does it absorb information from the environment and reactivate those experiences sometimes days or decades later? Staying with the Harry Potter metaphor: where in the brain are those silver wisps?

    Very little is known

    Finding answers to these questions is incredibly difficult. Scientists have learned countless details about neurons and synapses in recent years thanks to large-scale research programs: for example, they have deciphered the biochemical processes involved in stimulation and saltatory conduction, and understand the activity patterns triggered in various parts of the brain by external stimuli. But they are still lacking an in-depth understanding of the neural networks and a comprehensive theory of the processes in the over 100 billion neurons in our brains that can explain memory and memory processes. “Unfortunately, we still know very little,” says neuroscientist and head of the Neuroscience Center Zürich, Fritjof Helmchen.

    Neuroscientist Fritjof Helmchen. (Image: zVg)

    Helmchen studied physics and medicine and has been working on the brain for over a quarter of a century. Together with professor of developmental neurobiology Esther Stöckli, he has led the URPP Adaptive Brain Circuits in Development and Learning at UZH since 2020. There is probably no one better qualified to shed light on this complex issue.

    Sea slugs that can learn

    The journey of discovery starts in neurons and their synapses – the junctions where two cells meet and are only separated by a tiny gap. When a neuron produces or fires an electric signal, it transmits this via chemical messengers – called neurotransmitters – that it releases into this gap. This signal is then processed in the adjacent neurons. As US neuroscientist Eric Kandel showed in a groundbreaking experiment involving sea slugs, an amplification or weakening of the synaptic signal is central to learning.

    In Kandel’s experiment – for which he won the Nobel Prize in 2000 – the sea slugs learned to respond to an external pressure stimulus by adapting the strength of synaptic connections. “This principle still holds true, although the synaptic activity is not only influenced by the neurons involved, but also by other factors, such as the surrounding cells and neuromodulators,” says Helmchen.
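
    The principle is easy to caricature in a few lines of code. The sketch below is a toy Hebbian-style update, not a biophysical model of the sea-slug synapse, and the learning-rate and decay constants are arbitrary assumptions: a weight is strengthened when pre- and postsynaptic activity coincide, and slowly weakens when stimulation stops.

        # Toy synaptic plasticity: coincident activity potentiates the
        # connection; an idle synapse gradually weakens.
        def update_weight(w, pre, post, lr=0.1, decay=0.01):
            return w + lr * pre * post - decay * w

        w = 0.5
        for _ in range(20):               # repeated paired stimulation
            w = update_weight(w, pre=1.0, post=1.0)
        print(round(w, 3))                # weight has grown: strengthening

        for _ in range(20):               # stimulation stops
            w = update_weight(w, pre=0.0, post=0.0)
        print(round(w, 3))                # weight drifts back down: weakening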

    Patient H. Molaison

    The basic principle is plausible but doesn’t help us much. The human brain comprises some 100 billion neurons, each of which can have several thousand synaptic connections to other neurons. That results in a vast number of signaling pathways that can be amplified or weakened. Where in this network are memories formed? A look at the hippocampus – a region embedded deep in each side of the brain – brings some insight. Since the momentous operation on patient Henry Molaison, we have known that the hippocampus is important to the formation of declarative (or explicit) memories.

    Molaison underwent surgery in 1951 to treat epileptic seizures in which doctors removed parts of his temporal lobes and both hippocampi. Molaison paid a high price: while his epileptic seizures decreased, he was unable to store new experiences. He would very quickly forget what he had just been told or what he had just done. Only his short-term memory still worked – and his memories from the period before the operation.

    Navigating the world around us

    The hippocampus is essential to memory. “We think that when we learn, activity patterns in the neural networks are consolidated in the hippocampus and then moved to other parts of the brain,” says Fritjof Helmchen. Nobel Prize-winning experiments on the brain’s positioning system can help us understand what these neural maps could look like.

    Three neuroscientists won the prestigious prize in 2014 for their discovery of “place cells” and “grid cells” in the hippocampus. They proved that these specialized nerve cells build a navigational map of the world around us. If we return to a place, the same cells are activated and help us navigate it. Like a GPS, these specialized cells register where we are.

    Emotions reinforce memories

    “These activity patterns that position us in space and time are a model of how memories could be stored,” says Helmchen. As with navigation and positioning, our daily experiences or the learning of new facts would be encoded in characteristic activity patterns of special groups of nerve cells. The hippocampus is in turn involved as an interim storage facility where activity patterns are stored temporarily and repeated before they are filed away for the longer term in various areas of the cerebral cortex.

    When we recall an event, say a concert, the associated, stored neural patterns in the cerebral cortex are activated. This could explain why we don’t just remember the concert itself, but also the associated emotions, people or conversations. Like a giant spider web, a whole set of different regions of the brain is activated, particularly those associated with emotions. Neurotransmitters such as dopamine or noradrenaline, which trigger happy feelings or alert our attention, strengthen the synaptic changes and thus memory formation. But memory is also improved through the act of remembering, and when we sleep, as the brain solidifies and consolidates what we have experienced during the day.

    Jennifer Aniston Cells

    Memories are not only based on experiences, but also on abstract concepts and models of the world built by our brains. Neuroscientist Rodrigo Quian Quiroga conducted an experiment that showed that test subjects had neurons in the medial temporal lobe that responded to images of certain objects or people. These are specialized cells that are involved in the representation of a single object, regardless of the perspective. They are only activated if the person is shown a picture of the object or person in question, such as the Eiffel Tower, a briefcase or Jennifer Aniston. Because these “concept cells” were discovered using a picture of the American actress, they are known as Jennifer Aniston cells. The object in question can then appear in any context. “Objects and people are extracted and represented in certain neural groups,” says Fritjof Helmchen.

    The role of errors

    The neuroscientist mentions the term error minimization, which is also a fundamental concept in learning. The brain builds models of the world and then checks to see whether they correspond with real experiences. If the model is false, it is revised. Helmchen and his team have recently conducted experiments that convincingly corroborate this principle.

    In their experiments, researchers trained mice to adapt their behavior by rewarding them for a specific action, and then switching the behavior that would be rewarded. This revealed that the “old” rule, which no longer led to a reward, e.g. water, was initially stored in the activity patterns in the neurons of the cerebral cortex. Researchers were able to prove that the new rule was learned by the activity patterns being adapted on the basis of a strong error signal in the frontal lobe. According to Helmchen, error minimization is a general principle that the brain uses to interact with the environment.
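
    A simple delta rule captures the logic of such an experiment in miniature. The snippet below is a schematic illustration of error minimization, not the team’s actual model; the learning rate and reward values are arbitrary assumptions. An internal estimate is revised in proportion to the prediction error, and a rule switch produces exactly the kind of large error signal that drives relearning.

        # Schematic error-driven learning: revise an internal estimate of
        # reward in proportion to the mismatch with actual outcomes.
        def train(estimate, rewards, lr=0.3):
            for r in rewards:
                error = r - estimate      # prediction error
                estimate += lr * error    # revise the internal model
            return estimate

        v = 0.0
        v = train(v, [1.0] * 10)          # old rule: the action is rewarded
        print(round(v, 2))                # estimate approaches 1.0

        v = train(v, [0.0] * 10)          # rule switch: reward withdrawn;
        print(round(v, 2))                # large errors drive relearning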

    Overlooked supporting cells

    The principle of error correction sounds suspiciously like artificial intelligence. The algorithms of artificial, digital neural networks also ‘learn’ by gradually aligning their results with reality and altering the strength of their connections until the result matches. So is artificial intelligence well on its way to replicating our brains? Is it on a par with human intelligence? Brain researcher Helmchen shakes his head. What our brains are able to do, for example in terms of creativity and inventiveness, is miles away from artificial intelligence. “Our brains have many more tricks up their sleeves than artificial neural networks,” says Helmchen – even if we are only just starting to understand them.

    Our understanding of the remarkable processes in the brain becomes much less clear when glial cells, or glia, come into play. Previously, it was all about neurons and neural networks, which dominated brain research for decades. Until a few years ago, glia or supporting cells (of which there are about the same number in the brain as there are neurons) were overlooked, and over a century ago were thought to be merely passive and non-functional glue for neurons. For a long time, they were the wallflowers, until researchers discovered that glial cells play an active part in the nervous system. Eighty percent of glia are astrocytes: star-shaped cells with long projections that surround and connect with neurons. They modulate the signals received from neurons by releasing neurotransmitters of their own and diffusing them through ramified networks of astrocytes. There is increasing empirical evidence that glial cells play a key role in memory processes, even if that role is still poorly understood. They are also being closely studied as part of the URPP at UZH.

    Revolutionary techniques

    “We know a great deal about the brain and at the same time are largely clueless,” says Fritjof Helmchen. Amazingly, despite detailed knowledge of some processes, some basic understanding is lacking. For example, there are still many types of neuron that are unknown – Helmchen estimates the number to be over a hundred. As the researcher explains, it is inherently difficult to decipher the interrelationships of activity patterns spread over the entire cerebral cortex, but he believes that this is precisely where the key to understanding memory lies.

    At the same time, the neuroscientist is confident because a revolution is currently taking place in imaging techniques and other methods, which will allow neural activities across different areas of the brain to be observed in experiments over longer periods. And this also applies to learning processes. Thanks to these new possibilities, Helmchen expects “major progress” in the next one to two decades. It is quite possible that this research could reveal a previously unknown substrate for memories. What is less likely is that it will look anything like the memory wisps from Harry Potter.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The University of Zürich (Universität Zürich) (CH), located in the city of Zürich, is the largest university in Switzerland, with over 26,000 students. It was founded in 1833 from the existing colleges of theology, law, medicine and a new faculty of philosophy.

    Currently, the university has seven faculties: Philosophy, Human Medicine, Economic Sciences, Law, Mathematics and Natural Sciences, Theology and Veterinary Medicine. The university offers the widest range of subjects and courses of any Swiss higher education institution.

    As a member of the League of European Research Universities (LERU) and the Universitas 21 (U21) network, the University of Zürich belongs to Europe’s most prestigious research institutions. It joined U21, a global network of 27 research universities promoting research collaboration and the exchange of knowledge, in 2017.

    Numerous distinctions highlight the University’s international renown in the fields of medicine, immunology, genetics, neuroscience and structural biology as well as in economics. To date, the Nobel Prize has been conferred on twelve UZH scholars.

    Sharing Knowledge

    The academic excellence of the University of Zürich brings benefits to both the public and the private sectors not only in the Canton of Zürich, but throughout Switzerland. Knowledge is shared in a variety of ways: in addition to granting the general public access to its twelve museums and many of its libraries, the University makes findings from cutting-edge research available to the public in accessible and engaging lecture series and panel discussions.

    1. Identity of the University of Zürich

    Scholarship

    The University of Zürich (UZH) is an institution with a strong commitment to the free and open pursuit of scholarship.

    Scholarship is the acquisition, the advancement and the dissemination of knowledge in a methodological and critical manner.

    Academic freedom and responsibility

    To flourish, scholarship must be free from external influences, constraints and ideological pressures. The University of Zürich is committed to unrestricted freedom in research and teaching.

    Academic freedom calls for a high degree of responsibility, including reflection on the ethical implications of research activities for humans, animals and the environment.

    Universitas

    Work in all disciplines at the University is based on a scholarly inquiry into the realities of our world.

    As Switzerland’s largest university, the University of Zürich promotes wide diversity in both scholarship and in the fields of study offered. The University fosters free dialogue, respects the individual characteristics of the disciplines, and advances interdisciplinary work.

    2. The University of Zürich’s goals and responsibilities

    Basic principles

    UZH pursues scholarly research and teaching, and provides services for the benefit of the public.

    UZH has successfully positioned itself among the world’s foremost universities. The University attracts the best researchers and students, and promotes junior scholars at all levels of their academic career.

    UZH sets priorities in research and teaching by considering academic requirements and the needs of society. These priorities presuppose basic research and interdisciplinary methods.

    UZH strives to uphold the highest quality in all its activities.
    To secure and improve quality, the University regularly monitors and evaluates its performance.

    Research

    UZH contributes to the increase of knowledge through the pursuit of cutting-edge research.

    UZH is primarily a research institution. As such, it enables and expects its members to conduct research, and supports them in doing so.

    While basic research is the core focus at UZH, the University also pursues applied research.

     
  • richardmitnick 11:28 am on October 7, 2022 Permalink | Reply
    Tags: "Mapping human brain development", , , Brain Studies, Researchers at ETH Zürich are growing human brain-​like tissue from stem cells and are then mapping the cell types.,   

    From The Swiss Federal Institute of Technology in Zürich [ETH Zürich] [Eidgenössische Technische Hochschule Zürich] (CH): “Mapping human brain development” 

    From The Swiss Federal Institute of Technology in Zürich [ETH Zürich] [Eidgenössische Technische Hochschule Zürich] (CH)

    10.7.22
    Peter Rüegg

    Researchers at ETH Zürich are growing human brain-​like tissue from stem cells and are then mapping the cell types that occur in different brain regions and the genes that regulate their development.

    Brain organoid from human stem cells under the fluorescence microscope: the protein GLI3 is stained purple and marks neuronal precursor cells in forebrain regions of the organoid. Neurons are stained green. (Photograph: F. Sanchís Calleja, A. Jain, P. Wahle / ETH Zürich)

    The human brain is probably the most complex organ in the entire living world and has long been an object of fascination for researchers. However, studying the brain, and especially the genes and molecular switches that regulate and direct its development, is no easy task.

    To date, scientists have proceeded using animal models, primarily mice, but their findings cannot be transferred directly to humans. A mouse’s brain is structured differently and lacks the furrowed surface typical of the human brain. Cell cultures have thus far been of limited value in this field, as cells tend to spread over a large area when grown on a culture dish; this does not correspond to the natural three-dimensional structure of the brain.

    Mapping molecular fingerprints

    A group of researchers led by Barbara Treutlein, ETH Professor at the Department of Biosystems Science and Engineering in Basel, has now taken a new approach to studying the development of the human brain: they are growing and using organoids – millimetre-sized three-dimensional tissues that can be grown from what are known as pluripotent stem cells.

    Provided these stem cells receive the right stimulus, researchers can program them to become any kind of cell present in the body, including neurons. When the stem cells are aggregated into a small ball of tissue and then exposed to the appropriate stimulus, they can even self-organize and form a three-dimensional brain organoid with a complex tissue architecture.

    In a new study just published in Nature [below], Treutlein and her colleagues have now studied thousands of individual cells within a brain organoid at various points in time and in great detail. Their goal was to characterise the cells in molecular-genetic terms: in other words, the totality of all gene transcripts (transcriptome) as a measure of gene expression, but also the accessibility of the genome as a measure of regulatory activity. They have managed to represent this data as a kind of map showing the molecular fingerprint of each cell within the organoid.

    However, this procedure generates immense data sets: each cell in the organoid has 20,000 genes, and each organoid in turn consists of many thousands of cells. “This results in a gigantic matrix, and the only way we can solve it is with the help of suitable programs and machine learning,” explains Jonas Fleck, a doctoral student in Treutlein’s group and one of the study’s co-lead authors. To analyse all this data and predict gene regulation mechanisms, the researchers developed their own program. “We can use it to generate an entire interaction network for each individual gene and predict what will happen in real cells when that gene fails,” Fleck says.
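
    To give a feel for the scale involved, here is a toy numpy sketch of the kind of matrix in question. It is not the group’s actual program, and the data are made up: rows are cells, columns are genes, and a crude candidate “interaction network” is read off by thresholding pairwise expression correlations.

        # Toy cells-by-genes matrix and a naive co-expression network.
        import numpy as np

        rng = np.random.default_rng(1)
        n_cells, n_genes = 5000, 200      # real organoids: thousands of cells,
        X = rng.poisson(2.0, size=(n_cells, n_genes)).astype(float)  # ~20,000 genes

        # Plant one correlated pair so the toy network has an edge to find.
        X[:, 1] = X[:, 0] + rng.normal(scale=0.5, size=n_cells)

        # Correlate every gene with every other gene across cells.
        corr = np.corrcoef(X, rowvar=False)        # shape (n_genes, n_genes)

        # Keep only strong co-expression as candidate network edges.
        threshold = 0.2
        edges = [(i, j)
                 for i in range(n_genes)
                 for j in range(i + 1, n_genes)
                 if abs(corr[i, j]) >= threshold]
        print(f"{len(edges)} candidate edge(s) among {n_genes} genes")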

    Identifying genetic switches

    The aim of this study was to systematically identify those genetic switches that have a significant impact on the development of neurons in the different regions of brain organoids.

    With the help of a CRISPR-Cas9 system, the ETH researchers selectively switched off one gene in each cell, altogether about two dozen genes simultaneously in the entire organoid. This enabled them to find out what role the respective genes played in the development of the brain organoid.

    “This technique can be used to screen genes involved in disease. In addition, we can look at the effect these genes have on how different cells within the organoid develop,” explains Sophie Jansen, also a doctoral student in Treutlein’s group and the second co-lead author of the study.

    Map of a brain organoid: The colours of the cells shown as circles indicate different cell types. Right: Regulatory network of transcription factor genes that controls the development of a brain organoid. (Graphics: Barbara Treutlein / ETH Zürich)

    Checking pattern formation in the forebrain

    To test their theory, the researchers chose the GLI3 gene as an example. This gene is the blueprint for the transcription factor of the same name, a protein that docks onto certain sites on DNA in order to regulate another gene. When GLI3 is switched off, the cellular machinery is prevented from reading this gene and transcribing it into an RNA molecule.

    In mice, mutations in the GLI3 gene can lead to malformations in the central nervous system. Its role in human neuronal development was previously unexplored, but it is known that mutations in the gene lead to diseases such as Greig cephalopolysyndactyly syndrome and Pallister-Hall syndrome.

    Silencing this GLI3 gene enabled the researchers both to verify their theoretical predictions and to determine directly in the cell culture how the loss of this gene affected the brain organoid’s further development. “We have shown for the first time that the GLI3 gene is involved in the formation of forebrain patterns in humans. This had previously been shown only in mice,” Treutlein says.

    Model systems reflect developmental biology

    “The exciting thing about this research is that it lets you use genome-wide data from so many individual cells to postulate what roles individual genes play,” she explains. “What’s equally exciting in my opinion is that these model systems made in a Petri dish really do reflect developmental biology as we know it from mice.”

    Treutlein also finds it fascinating how the culture medium can give rise to self-organized tissue with structures comparable to those of the human brain – not only at the morphological level but also (as the researchers have shown in their latest study) at the level of gene regulation and pattern formation. “Organoids like this are truly an excellent way to study human developmental biology,” she points out.

    Versatile brain organoids

    Research on organoids made up of human cell material has the advantage that the findings are transferable to humans. They can be used to study not only basic developmental biology but also the role of genes in diseases or developmental brain disorders. For example, Treutlein and her colleagues are working with organoids of this type to investigate the genetic cause of autism and of heterotopia; in the latter, neurons appear outside their usual anatomical location in the cerebral cortex.

    Organoids may also be used for testing drugs, and possibly for culturing transplantable organs or organ parts. Treutlein confirms that the pharmaceutical industry is very interested in these cell cultures.

    However, growing organoids takes both time and effort. Moreover, each clump of cells develops individually rather than in a standardised way. That is why Treutlein and her team are working to improve the organoids and automate their manufacturing process.
    __________________________________________________
    Human Cell Atlas

    The research and mapping of brain organoids is embedded in the Human Developmental Cell Atlas; this, in turn, is part of the Human Cell Atlas. The Human Cell Atlas is an attempt by researchers worldwide both to map all cell types in the human body and to compile data on which genes are active in which cells at which times as well as on which genes might be involved in diseases. The head of the Human Cell Atlas project is Aviv Regev, a biology professor at MIT; she received an honorary doctorate from ETH Zürich in 2021. ETH Professor Barbara Treutlein is co-coordinating the Organoid Cell Atlas subsection, which aims to map all the cell stages that can be produced in cell culture and then to compare them with the original cells of the human body.
    __________________________________________________

    Science paper:
    Nature
    See the science paper for instructive material.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    ETH Zurich campus

    The Swiss Federal Institute of Technology in Zürich [ETH Zürich] [Eidgenössische Technische Hochschule Zürich] (CH) is a public research university in the city of Zürich, Switzerland. Founded by the Swiss Federal Government in 1854 with the stated mission to educate engineers and scientists, the school focuses exclusively on science, technology, engineering and mathematics. Like its sister institution The Swiss Federal Institute of Technology in Lausanne [EPFL-École Polytechnique Fédérale de Lausanne] (CH), it is part of The Swiss Federal Institutes of Technology Domain (ETH Domain), part of The Swiss Federal Department of Economic Affairs, Education and Research [EAER] [Eidgenössisches Departement für Wirtschaft, Bildung und Forschung] [Département fédéral de l’économie, de la formation et de la recherche] (CH).

    The university is an attractive destination for international students thanks to low tuition fees of 809 CHF per semester, PhD and graduate salaries that are amongst the world’s highest, and a world-class reputation in academia and industry. There are currently 22,200 students from over 120 countries, of which 4,180 are pursuing doctoral degrees. In the 2021 edition of the QS World University Rankings ETH Zürich is ranked 6th in the world and 8th by the Times Higher Education World Rankings 2020. In the 2020 QS World University Rankings by subject it is ranked 4th in the world for engineering and technology (2nd in Europe) and 1st for earth & marine science.

    As of November 2019, 21 Nobel laureates, 2 Fields Medalists, 2 Pritzker Prize winners, and 1 Turing Award winner have been affiliated with the Institute, including Albert Einstein. Other notable alumni include John von Neumann and Santiago Calatrava. It is a founding member of the IDEA League and the International Alliance of Research Universities (IARU) and a member of the CESAER network.

    ETH Zürich was founded on 7 February 1854 by the Swiss Confederation and began giving its first lectures on 16 October 1855 as a polytechnic institute (eidgenössische polytechnische schule) at various sites throughout the city of Zurich. It was initially composed of six faculties: architecture, civil engineering, mechanical engineering, chemistry, forestry, and an integrated department for the fields of mathematics, natural sciences, literature, and social and political sciences.

    It is locally still known as Polytechnikum, or simply as Poly, derived from the original name eidgenössische polytechnische schule, which translates to “federal polytechnic school”.

    ETH Zürich is a federal institute (i.e., under direct administration by the Swiss government), whereas The University of Zürich [Universität Zürich ] (CH) is a cantonal institution. The decision for a new federal university was heavily disputed at the time; the liberals pressed for a “federal university”, while the conservative forces wanted all universities to remain under cantonal control, worried that the liberals would gain more political power than they already had. In the beginning, both universities were co-located in the buildings of the University of Zürich.

    From 1905 to 1908, under the presidency of Jérôme Franel, the course program of ETH Zürich was restructured to that of a real university and ETH Zürich was granted the right to award doctorates. In 1909 the first doctorates were awarded. In 1911, it was given its current name, Eidgenössische Technische Hochschule. In 1924, another reorganization structured the university in 12 departments. However, it now has 16 departments.

    ETH Zürich, EPFL (Swiss Federal Institute of Technology in Lausanne) [École polytechnique fédérale de Lausanne](CH), and four associated research institutes form The Domain of the Swiss Federal Institutes of Technology (ETH Domain) [ETH-Bereich; Domaine des Écoles polytechniques fédérales] (CH) with the aim of collaborating on scientific projects.

    Reputation and ranking

    ETH Zürich is ranked among the top universities in the world. Popular rankings typically place it as the best university in continental Europe; it is consistently ranked among the top 1-5 universities in Europe and among the top 3-10 worldwide.

    Historically, ETH Zürich has achieved its reputation particularly in the fields of chemistry, mathematics and physics. There are 32 Nobel laureates who are associated with ETH Zürich, the most recent of whom is Richard F. Heck, awarded the Nobel Prize in chemistry in 2010. Albert Einstein is perhaps its most famous alumnus.

    In 2018, the QS World University Rankings placed ETH Zürich at 7th overall in the world. In 2015, ETH Zürich was ranked 5th in the world in Engineering, Science and Technology, just behind the Massachusetts Institute of Technology, Stanford University and University of Cambridge (UK). In 2015, ETH Zürich also ranked 6th in the world in Natural Sciences, and in 2016 ranked 1st in the world for Earth & Marine Sciences for the second consecutive year.

    In 2016, Times Higher Education World University Rankings ranked ETH Zürich 9th overall in the world and 8th in the world in the field of Engineering & Technology, just behind the Massachusetts Institute of Technology, Stanford University, California Institute of Technology, Princeton University, University of Cambridge (UK), Imperial College London (UK) and University of Oxford (UK).

    In a comparison of Swiss universities by swissUP Ranking and in rankings published by CHE comparing the universities of German-speaking countries, ETH Zürich traditionally is ranked first in natural sciences, computer science and engineering sciences.

    In the CHE Excellence Ranking survey on the quality of Western European graduate school programs in biology, chemistry, physics and mathematics, ETH Zürich was assessed as one of the three institutions with excellent programs in all the fields considered, the other two being Imperial College London (UK) and the University of Cambridge (UK).

     
  • richardmitnick 12:01 pm on January 10, 2022 Permalink | Reply
    Tags: "Turning Information into Action", , Brain Studies, Collaborations allow theorists like me to work with gifted experimentalists in a fruitful way., Computational neuroscience, How does the brain combine multiple sources of information across time to make decisions?, Jan Drugowitsch- assistant professor of neurobiology at The Blavatnik Institute at Harvard Medical School (US)., , Most decisions happen in an unconscious way using different sources of information., Place cells-a population of cells in the hippocampus of the brain that represent our location in space., Sensory perceptions on very short timescales, The adoption of more computational tools is in part a response to the many possibilities nowadays for collecting complex data., , The role of computation and the importance of collaboration in unraveling the mysteries of decision-making.   

    From The Harvard Medical School (US) : “Turning Information into Action” 

    harvard-medical-school-bloc

    From The Harvard Medical School (US)

    at

    Harvard University (US)
    News & Research

    December 15, 2021 [Just today in social media.]
    CATHERINE CARUSO

    Computational tools can help scientists understand how the brain makes split-second decisions.

    1
    Image: olaser/iStock/Getty Images Plus

    Our brains help us make countless decisions every day, from choosing whether to cross the road to selecting the most efficient route to the supermarket. Yet many of these decisions, even those that require our brains to factor in multiple sources of information at the same time, happen so quickly that we’re barely aware of the process involved.

    Jan Drugowitsch, assistant professor of neurobiology in The Blavatnik Institute at Harvard Medical School (US), is intrigued by this process. As a neurobiologist with a doctorate in machine learning, he uses a computational lens to study how the brain operates.

    He is particularly interested in how the brain takes in information about the world and uses this information to inform behavior. Drugowitsch’s lab focuses on theory, teaming up with experimentalists to test theories using computational tools.

    In a conversation with Harvard Medicine News, Drugowitsch delves into the details of his research on how the brain processes information to make split-second decisions. He also discusses the role of computation and the importance of collaboration in unraveling the mysteries of decision-making.

    HMNews: What aspects of the brain and behavior are you studying?

    Drugowitsch: A lot of our work focuses on sensory perceptions on very short timescales—from milliseconds to seconds—and how we turn those perceptions into decisions. For example, an everyday human experience is making a decision about crossing the road. To do this, we need to figure out if the traffic situation is safe, including whether we have enough time to cross before a car arrives.

    For most people, this decision happens in an unconscious way using different sources of information, such as the traffic flow on the left and right and the sound of oncoming cars. In my lab we are studying processes like this one that happen automatically and efficiently in the brain. We’re asking, how does the brain combine multiple sources of information across time to make these kinds of decisions?

    Over the last few years, we’ve been studying increasingly complex domains of how we make these choices. We’ve shown that many of these choices follow principles of statistical decision-making because the information we have is uncertain, so we have to gauge different sources of information against each other and ask, “Are we certain enough to commit to a choice?” My lab has been formulating statistical models that capture the process, including complexities such as the trade-off between speed and accuracy.
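    The canonical model in this family is the drift-diffusion model: noisy evidence accumulates over time until it crosses a decision bound, and moving the bound trades speed against accuracy. Here is a minimal simulation in Python; it is a generic textbook sketch under simple assumptions (constant drift, Gaussian noise, symmetric bounds), not the lab's actual model.

    import numpy as np

    rng = np.random.default_rng(0)

    def trial(drift=0.5, bound=1.0, dt=0.005, noise=1.0):
        """One decision: accumulate noisy evidence until a bound is hit."""
        x, t = 0.0, 0.0
        while abs(x) < bound:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        return t, x > 0  # reaction time, and whether the choice was correct

    for bound in (0.5, 1.0, 2.0):
        rts, correct = zip(*(trial(bound=bound) for _ in range(500)))
        print(f"bound {bound}: mean RT {np.mean(rts):.2f}s, "
              f"accuracy {np.mean(correct):.0%}")

    Raising the bound makes responses slower but more accurate, which is exactly the trade-off the statistical models have to capture.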

    Now, we are shifting to understanding more continuous behaviors such as navigation. For example, keeping track of direction during navigation is a process that doesn’t have discrete steps—we keep track of our direction on a constant basis, and use this information to make behavioral decisions. We want to know how the brain does this on a continuous timescale.
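    A simple way to picture this continuous tracking is path integration: an internal heading estimate updated from noisy self-motion signals, with errors accumulating unless sensory input corrects them. The sketch below is purely illustrative (made-up turn rate and noise level), not a model of any specific circuit.

    import numpy as np

    rng = np.random.default_rng(2)
    dt, steps = 0.01, 1000
    true_heading, estimate = 0.0, 0.0

    for _ in range(steps):
        omega = 0.5                                # true turn rate, rad/s
        sensed = omega + rng.normal(scale=0.3)     # noisy self-motion cue
        true_heading = (true_heading + omega * dt) % (2 * np.pi)
        estimate = (estimate + sensed * dt) % (2 * np.pi)

    # wrapped error between estimated and true heading
    err = np.angle(np.exp(1j * (estimate - true_heading)))
    print(f"heading drift after {steps * dt:.0f}s: {np.degrees(err):.1f} deg")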

    HMNews: You use computational tools in your research. What is computational neuroscience?

    Drugowitsch: There are currently two forms of computational neuroscience. Traditional computational neuroscience involves building models in the language of mathematics, physics, and engineering to describe hypotheses about how the brain performs computations. These computations are usually related to how the brain processes information about the world.

    There is also a newer form of computational neuroscience that has emerged with the ability to gather much larger datasets about the brain. This kind of computational neuroscience involves developing and using more sophisticated tools to process complex neural data. We use both in our work.

    A focus of my lab is how humans and animals deal with uncertain information. Essentially all of the information that we have about the world is uncertain, and handling uncertain information moves us into the realm of statistics. We use a lot of tools from statistics because they provide the adequate language to talk about beliefs about things in the world. More specifically, we use Bayesian statistics to formulate models of how uncertain information is processed in the abstract sense. Then we use tools from physics to define how this information processing that we’ve worked with on a statistical level can be realized in the brain. This is where biology comes in—it introduces constraints on how the brain operates and how it executes these statistical computations.
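    A textbook instance of such a Bayesian computation is precision-weighted cue combination: two noisy estimates of the same quantity, say the seen and the heard distance of an oncoming car, are fused by weighting each by its reliability. A minimal sketch with made-up numbers follows; it illustrates the statistical principle, not any specific model from the lab.

    def fuse(mu_a, var_a, mu_b, var_b):
        """Combine two Gaussian estimates, weighting by inverse variance."""
        w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
        mu = w_a * mu_a + (1 - w_a) * mu_b
        var = 1 / (1 / var_a + 1 / var_b)
        return mu, var

    # a reliable visual estimate (20 m) and a vaguer auditory one (26 m)
    mu, var = fuse(mu_a=20.0, var_a=4.0, mu_b=26.0, var_b=16.0)
    print(f"fused estimate: {mu:.1f} m, variance {var:.1f}")  # 21.2 m, 3.2

    The fused estimate lands closer to the more reliable cue and is less uncertain than either cue alone.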

    HMNews: Your recently published paper in Neuron about navigation in the brain uses some of the above approaches. Can you tell us a bit more about this work?

    Drugowitsch: Our research builds on an earlier experimental observation about place cells: a population of cells in the hippocampus of the brain that represent our location in space. This observation, made in mice and rats, is that while a rodent is standing still, place cells suddenly become active in a rapid sequence of bursts that seems to simulate the animal’s trajectory through the environment. There are two hypotheses about the role of this activity. One is that it helps us memorize what we’ve done before and move it to long-term memory. The other is that it helps us plan future navigation.

    Before addressing these hypotheses, we wanted to refine our understanding of what these bursts actually do by understanding the data better. We used existing data on rats foraging for food in a two-meter by two-meter environment and applied Bayesian statistical methods to gain a fuller picture of activity in place cells.
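    The standard form of such a decoder assumes Poisson spiking and asks which position best explains the observed spike counts given each cell's tuning curve. The sketch below uses invented Gaussian tuning curves and a flat prior; the paper's actual methods are more elaborate, so treat this only as the core idea.

    import numpy as np

    positions = np.linspace(0, 2, 200)        # a 2 m track, discretized
    centers = np.array([0.4, 1.0, 1.6])       # place-field centers (made up)

    def rates(pos):
        """Gaussian tuning curves, peaking at 10 Hz."""
        return 10 * np.exp(-((pos[:, None] - centers) ** 2) / (2 * 0.2 ** 2))

    def decode(counts, tau=0.02):
        """MAP position from spike counts in a tau-second window."""
        lam = rates(positions) * tau          # expected counts at each position
        log_post = (counts * np.log(lam + 1e-12) - lam).sum(axis=1)
        return positions[np.argmax(log_post)]

    print(decode(np.array([0, 3, 1])))        # spikes mostly from cell 2 -> ~1.15 m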

    Previously, scientists thought that only a small subset of the bursts in place cells simulated trajectories through open environments. However, we found that the majority of bursts are part of these trajectories. Additionally, the trajectories of these bursts feature momentum, as if the animal were actually moving through space even though it’s stationary.

    This is interesting because earlier work on activity of place cells during sleep found that the trajectories of those bursts don’t feature momentum. Thus, our findings suggest that bursts of activity in place cells may play a fundamentally different role depending on whether an animal is awake or asleep. Now that we have this information, we can move back to building computational models to understand how place cells help us plan and navigate through the world.

    HMNews: Why do you think neuroscience is moving in a computational direction?

    Drugowitsch: I think the adoption of more computational tools is in part a response to the many possibilities nowadays for collecting complex data. Previously, if we recorded from a single neuron while an animal did a simple task, we could interpret our data without using complex models. Now, we routinely record from hundreds or thousands of neurons in the brain while animals perform complex tasks, leading to data that can only be analyzed with complex computational models.

    There has been a realization that most neuroscientists need at least a basic understanding of how these computational models work, which has created a push towards greater literacy in computational neuroscience.

    To this end, I co-direct a certificate program in computational neuroscience for graduate students at HMS. The program started because we noticed an increasing demand for students to learn quantitative skills, yet the courses we offered in this area weren’t broad enough.

    Our aim is to develop new courses that provide students with the skills they need to understand the full array of computational tools being developed to analyze neuroscience data. We also want to increase cohesion of the computational neuroscience community at HMS, and provide more forums where students can discuss questions in the field.

    HMNews: What motivated you to pursue computational neuroscience?

    Drugowitsch: I wanted to become a computational neuroscientist because I strongly believe that understanding the brain requires a complexity of thinking that cannot be achieved by intuition alone—and a lot of traditional experiments rely on intuition.

    Very often I find that things are different than I expected, which strengthens my belief that we should build formal models of how the brain operates in order to make progress in our understanding. Formulating these models expands our ability to think about complex interactions in the brain that are beyond what we can hold in our heads. We’re outsourcing this complexity to tools that have been developed in math and physics.

    In general, I’m driven by curiosity, trying to figure out new things and trying to discover the principles that define how we operate.

    In my lab, we like to ask specific questions because this is the only way to make experimentally testable predictions. However, we hope to discover general principles that underlie these questions. If we are studying how an animal performs particular behaviors, we try to extract a generalization from that specific situation that we can test in another set of experiments. Computational neuroscience gives us the tools we need to explore these questions.

    HMNews: In your work, you often team up with colleagues from other branches of neurobiology. Why?

    Drugowitsch: Building theories and running experiments require a different set of skills, so collaborations allow theorists like me to work with gifted experimentalists in a fruitful way.

    There are many theories in computational neuroscience that remain untested, so by collaborating with experimentalists we can test those theories to see if they are supported by the data.

    In some cases, we work with scientists running experiments with humans. The benefit of human experiments is that the training is fast—humans can perform complex tasks right away. The disadvantage is that it’s hard to look into their brains.

    For other questions, especially those about specific neural connections, we collaborate with scientists studying animals. For example, we’re working with Rachel Wilson, who studies Drosophila [fruit fly] neurophysiology. We are asking, how does a specific neural circuit in the Drosophila brain perform specific computations? We hope that the motifs we discover can be generalized across species, including humans.

    In my lab, we may be able to develop blue-sky theories, but at the end of the day we need to connect those theories to data gathered in the real world. Working with people who conduct experiments allows us to do that.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    harvard-medical-school-campus

    The Harvard Medical School (US) community is dedicated to excellence and leadership in medicine, education, research and clinical care. To achieve our highest aspirations, and to ensure the success of all members of our community, we value and promote common ideals that center on collaboration and service, diversity, respect, integrity and accountability, lifelong learning, and wellness and balance. To be a citizen of this community means embracing a collegial spirit that fosters inclusion and promotes achievement.

    Harvard University campus

    Harvard University (US) is the oldest institution of higher education in the United States, established in 1636 by vote of the Great and General Court of the Massachusetts Bay Colony. It was named after the College’s first benefactor, the young minister John Harvard of Charlestown, who upon his death in 1638 left his library and half his estate to the institution. A statue of John Harvard stands today in front of University Hall in Harvard Yard, and is perhaps the University’s best-known landmark.

    Harvard University (US) has 12 degree-granting Schools in addition to the Radcliffe Institute for Advanced Study. The University has grown from nine students with a single master to an enrollment of more than 20,000 degree candidates including undergraduate, graduate, and professional students. There are more than 360,000 living alumni in the U.S. and over 190 other countries.

    The Massachusetts colonial legislature, the General Court, authorized Harvard University (US)’s founding. In its early years, Harvard College primarily trained Congregational and Unitarian clergy, although it has never been formally affiliated with any denomination. Its curriculum and student body were gradually secularized during the 18th century, and by the 19th century, Harvard University (US) had emerged as the central cultural establishment among the Boston elite. Following the American Civil War, President Charles William Eliot’s long tenure (1869–1909) transformed the college and affiliated professional schools into a modern research university; Harvard became a founding member of the Association of American Universities in 1900. James B. Conant led the university through the Great Depression and World War II; he liberalized admissions after the war.

    The university is composed of ten academic faculties plus the Radcliffe Institute for Advanced Study. Arts and Sciences offers study in a wide range of academic disciplines for undergraduates and for graduates, while the other faculties offer only graduate degrees, mostly professional. Harvard has three main campuses: the 209-acre (85 ha) Cambridge campus centered on Harvard Yard; an adjoining campus immediately across the Charles River in the Allston neighborhood of Boston; and the medical campus in Boston’s Longwood Medical Area. Harvard University (US)’s endowment is valued at $41.9 billion, making it the largest of any academic institution. Endowment income helps enable the undergraduate college to admit students regardless of financial need and provide generous financial aid with no loans. The Harvard Library is the world’s largest academic library system, comprising 79 individual libraries holding about 20.4 million items.

    Harvard University (US) has more alumni, faculty, and researchers who have won Nobel Prizes (161) and Fields Medals (18) than any other university in the world and more alumni who have been members of the U.S. Congress, MacArthur Fellows, Rhodes Scholars (375), and Marshall Scholars (255) than any other university in the United States. Its alumni also include eight U.S. presidents and 188 living billionaires, the most of any university. Fourteen Turing Award laureates have been Harvard affiliates. Students and alumni have also won 10 Academy Awards, 48 Pulitzer Prizes, and 108 Olympic medals (46 gold), and they have founded many notable companies.

    Colonial

    Harvard University (US) was established in 1636 by vote of the Great and General Court of the Massachusetts Bay Colony. In 1638, it acquired British North America’s first known printing press. In 1639, it was named Harvard College after deceased clergyman John Harvard, an alumnus of the University of Cambridge(UK) who had left the school £779 and his library of some 400 volumes. The charter creating the Harvard Corporation was granted in 1650.

    A 1643 publication gave the school’s purpose as “to advance learning and perpetuate it to posterity, dreading to leave an illiterate ministry to the churches when our present ministers shall lie in the dust.” It trained many Puritan ministers in its early years and offered a classic curriculum based on the English university model—many leaders in the colony had attended the University of Cambridge—but conformed to the tenets of Puritanism. Harvard University (US) has never affiliated with any particular denomination, though many of its earliest graduates went on to become clergymen in Congregational and Unitarian churches.

    Increase Mather served as president from 1681 to 1701. In 1708, John Leverett became the first president who was not also a clergyman, marking a turning of the college away from Puritanism and toward intellectual independence.

    19th century

    In the 19th century, Enlightenment ideas of reason and free will were widespread among Congregational ministers, putting those ministers and their congregations in tension with more traditionalist, Calvinist parties. When Hollis Professor of Divinity David Tappan died in 1803 and President Joseph Willard died a year later, a struggle broke out over their replacements. Henry Ware was elected to the Hollis chair in 1805, and the liberal Samuel Webber was appointed to the presidency two years later, signaling the shift from the dominance of traditional ideas at Harvard to the dominance of liberal, Arminian ideas.

    Charles William Eliot, president 1869–1909, eliminated the favored position of Christianity from the curriculum while opening it to student self-direction. Though Eliot was the crucial figure in the secularization of American higher education, he was motivated not by a desire to secularize education but by Transcendentalist Unitarian convictions influenced by William Ellery Channing and Ralph Waldo Emerson.

    20th century

    In the 20th century, Harvard University (US)’s reputation grew as a burgeoning endowment and prominent professors expanded the university’s scope. Rapid enrollment growth continued as new graduate schools were begun and the undergraduate college expanded. Radcliffe College, established in 1879 as the female counterpart of Harvard College, became one of the most prominent schools for women in the United States. Harvard University (US) became a founding member of the Association of American Universities in 1900.

    The student body in the early decades of the century was predominantly “old-stock, high-status Protestants, especially Episcopalians, Congregationalists, and Presbyterians.” A 1923 proposal by President A. Lawrence Lowell that Jews be limited to 15% of undergraduates was rejected, but Lowell did ban blacks from freshman dormitories.

    President James B. Conant reinvigorated creative scholarship to guarantee Harvard University (US)’s preeminence among research institutions. He saw higher education as a vehicle of opportunity for the talented rather than an entitlement for the wealthy, so Conant devised programs to identify, recruit, and support talented youth. In 1943, he asked the faculty to make a definitive statement about what general education ought to be, at the secondary as well as at the college level. The resulting Report, published in 1945, was one of the most influential manifestos in 20th century American education.

    Between 1945 and 1960, admissions were opened up to bring in a more diverse group of students. No longer drawing mostly from select New England prep schools, the undergraduate college became accessible to striving middle class students from public schools; many more Jews and Catholics were admitted, but few blacks, Hispanics, or Asians. Throughout the rest of the 20th century, Harvard became more diverse.

    Harvard University (US)’s graduate schools began admitting women in small numbers in the late 19th century. During World War II, students at Radcliffe College (which since 1879 had been paying Harvard University (US) professors to repeat their lectures for women) began attending Harvard University (US) classes alongside men. Women were first admitted to the medical school in 1945. Since 1971, Harvard University (US) has controlled essentially all aspects of undergraduate admission, instruction, and housing for Radcliffe women. In 1999, Radcliffe was formally merged into Harvard University (US).

    21st century

    Drew Gilpin Faust, previously the dean of the Radcliffe Institute for Advanced Study, became Harvard University (US)’s first woman president on July 1, 2007. She was succeeded by Lawrence Bacow on July 1, 2018.

     
  • richardmitnick 9:28 am on July 18, 2021 Permalink | Reply
    Tags: "Neurons Unexpectedly Encode Information in the Timing of Their Firing", , Artificial Intelligence researchers typically have to train artificial neural networks on hundreds or thousands of examples of a pattern or concept before the synapse strengthens., , , Brain Studies, , Information seems to be encoded through the strengthening of synapses only when two neurons fire within tens of milliseconds of each other., It’s really important not just how many [neuron activations] occur but when exactly they occur., Phase precession: a relationship between the continuous rhythm of a brain wave and the specific moments that neurons in that brain area activate., , Place cells: each of which is tuned to a specific region or “place field.”, , The closer you get to the center of a place field the faster the corresponding place cell fires., The pattern of phase precession was elusive in humans until now., There are other theories about our rapid learning abilities. And researchers stressed that it’s difficult to draw conclusions about any widespread role for phase precession., These studies suggest that phase precession allows the brain to link sequences of times; images; and events in the same way as it does spatial positions.   

    From Quanta Magazine : “Neurons Unexpectedly Encode Information in the Timing of Their Firing” 

    From Quanta Magazine

    July 7, 2021
    Elena Renken

    1
    Samuel Velasco/Quanta Magazine.

    For decades, neuroscientists have treated the brain somewhat like a Geiger counter: The rate at which neurons fire is taken as a measure of activity, just as a Geiger counter’s click rate indicates the strength of radiation. But new research suggests the brain may be more like a musical instrument. When you play the piano, how often you hit the keys matters, but the precise timing of the notes is also essential to the melody.

    “It’s really important not just how many [neuron activations] occur but when exactly they occur,” said Joshua Jacobs, a neuroscientist and biomedical engineer at Columbia University (US) who reported new evidence for this claim last month in Cell.

    For the first time, Jacobs and two coauthors spied neurons in the human brain encoding spatial information through the timing, rather than rate, of their firing. This temporal firing phenomenon is well documented in certain brain areas of rats, but the new study and others suggest it might be far more widespread in mammalian brains. “The more we look for it, the more we see it,” Jacobs said.

    Some researchers think the discovery might help solve a major mystery: how brains can learn so quickly.

    The phenomenon is called phase precession. It’s a relationship between the continuous rhythm of a brain wave — the overall ebb and flow of electrical signaling in an area of the brain — and the specific moments that neurons in that brain area activate. A theta brain wave, for instance, rises and falls in a consistent pattern over time, but neurons fire inconsistently, at different points on the wave’s trajectory. In this way, brain waves act like a clock, said one of the study’s coauthors, Salman Qasim, also of Columbia. They let neurons time their firings precisely so that they’ll land in range of other neurons’ firing — thereby forging connections between neurons.

    Researchers began noticing phase precession decades ago among the neurons in rat brains that encode information about spatial position. Human brains and rat brains both contain these so-called place cells, each of which is tuned to a specific region or “place field.” Our brains seem to scale these place fields to cover our current surroundings, whether that’s miles of freeway or the rooms of one’s home, said Kamran Diba, a neuroscientist at the University of Michigan (US). The closer you get to the center of a place field, the faster the corresponding place cell fires. As you leave one place field and enter another, the firing of the first place cell peters out, while that of the second picks up.

    But along with rate, there’s also timing: As the rat passes through a place field, the associated place cell fires earlier and earlier with respect to the cycle of the background theta wave. As the rat crosses from one place field into another, the very early firing of the first place cell occurs close in time with the late firing of the next place cell. Their near-coincident firings cause the synapse, or connection, between them to strengthen, and this coupling of the place cells ingrains the rat’s trajectory into the brain. (Information seems to be encoded through the strengthening of synapses only when two neurons fire within tens of milliseconds of each other.)
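    One classic account of how this timing pattern can arise, the dual-oscillator model, needs almost no machinery: if a place cell oscillates slightly faster than the theta rhythm, it necessarily fires a little earlier on each successive theta cycle. The toy sketch below illustrates that arithmetic; it is an idealization, not a claim about the actual mechanism in the new study.

    import numpy as np

    theta_f, cell_f = 8.0, 8.8                    # cell runs ~10% faster than theta
    spike_times = np.arange(1, 11) / cell_f       # one spike per cell cycle (idealized)
    phases = (spike_times * theta_f % 1.0) * 360  # phase within the theta cycle

    for t, p in zip(spike_times, phases):
        print(f"spike at {t:.3f}s -> theta phase {p:5.1f} deg")  # steadily earlier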

    Phase precession is obvious in rats. “It’s so prominent and prevalent in the rodent brain that it makes you want to assume it’s a generalizable mechanism,” Qasim said. Scientists had also identified phase precession in the spatial processing of bats and marmosets, but the pattern was elusive in humans until now.

    Monitoring individual neurons is too invasive to do on the average human study participant, but the Columbia team took advantage of data collected years ago from 13 epilepsy patients who had already had electrodes implanted to map the electrical signals of their seizures. The electrodes recorded the firings of individual neurons while patients steered their way through a virtual-reality simulation using a joystick. As the patients maneuvered themselves around, the researchers identified phase precession in 12% of the neurons they were monitoring.

    Pulling out these signals required sophisticated statistical analysis, because humans exhibit a more complicated pattern of overlapping brain waves than rodents do — and because less of our neural activity is devoted to navigation. But the researchers could say definitively that phase precession is there.

    Other research suggests that phase precession may be crucial beyond navigation. In animals, the phenomenon has been tied to non-spatial perceptions, including the processing of sounds and smells. And in humans, research co-authored by Jacobs last year found phase precession in time-sensitive brain cells [PNAS]. A not-yet-peer-reviewed preprint [bioRxiv] by cognitive scientists in France and the Netherlands indicated that processing serial images involved phase precession, too. Finally, in Jacobs’ new study, it was found not just in literal navigation, but also as the humans progressed toward abstract goals in the simulation.

    These studies suggest that phase precession allows the brain to link sequences of times; images; and events in the same way as it does spatial positions. “Finding that first evidence really opens the door for it to be some sort of universal coding mechanism in the brain — across mammalian species, possibly,” Qasim said. “You might be missing a whole lot of information coding if you’re not tracking the relative timing of neural activity.”

    Neuroscientists are, in fact, on the lookout for a new kind of coding in the brain to answer the longstanding question: How does the brain encode information so quickly? It’s understood that patterns in external data become ingrained in the firing patterns of the network through the strengthening and weakening of synaptic connections. But artificial intelligence researchers typically have to train artificial neural networks on hundreds or thousands of examples of a pattern or concept before the synapse strengths adjust enough for the network to learn the pattern. Mysteriously, humans can typically learn from just one or a handful of examples.

    Phase precession could play a role in that disparity. One hint of this comes from a study [Journal of Neuroscience] by Johns Hopkins University (US) researchers who found that phase precession showed up in rats learning an unfamiliar track — on their first lap. “As soon as you’re learning something, this pattern for learning sequences is already in place,” Qasim added. “That might facilitate very rapid learning of sequences.”

    Phase precession organizes the timing so that learning happens more often than it could otherwise. It arranges for neurons activated by related information to fire in quick-enough succession for the synapse between them to strengthen. “It would point to this notion that the brain is basically computing faster than you would imagine from rate coding alone,” Diba said.

    There are other theories about our rapid learning abilities. And researchers stressed that it’s difficult to draw conclusions about any widespread role for phase precession in the brain from the limited studies so far.

    Still, a thorough search for the phenomenon may be in order. Bradley Lega, a neurologist at The University of Texas Southwestern Medical Center (US), said, “There’s a lot of problems that phase precession can solve.”

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 9:14 am on September 21, 2020 Permalink | Reply
    Tags: "Can't get you outta my head: neuroscience study finds 'hidden' thoughts in visual part of brain", , Brain Studies, ,   

    From University of New South Wales: “Can’t get you outta my head: neuroscience study finds ‘hidden’ thoughts in visual part of brain” 

    U NSW bloc

    From University of New South Wales

    21 Sep 2020
    Sherry Landow

    Why is it so hard to control our thoughts? New research led by UNSW Sydney shows suppressed thoughts could be hiding in the visual part of our brains – without us even knowing.

    1
    You might have less control over your thoughts than you think. Image: Shutterstock.

    How much control do you have over your thoughts? What if you were specifically told not to think of something – like a pink elephant?

    A recent study led by UNSW psychologists has mapped what happens in the brain when a person tries to suppress a thought. The neuroscientists managed to ‘decode’ the complex brain activity using functional magnetic resonance imaging (fMRI) and an imaging algorithm.

    The findings suggest that even when a person succeeds in ignoring a thought, like the pink elephant, it can still exist in another part of the brain – without them being aware of it.

    “We were able to find visual representation of the thought – even when participants believed they successfully pushed the image out of their minds,” says Joel Pearson, senior author on the study and professor of cognitive neuroscience at UNSW Science.

    “This suggests mental images can form even when we’re trying to stop them.”

    The study, recently published in the Journal of Cognitive Neuroscience, tracked the brain activity in 15 participants as they completed several visualisations and thought suppression exercises.

    Participants were given a written prompt – either green broccoli or a red apple – and challenged not to think of it. To make this task even harder, they were asked to not replace the image with another thought.

    After 12 seconds, participants confirmed whether they were able to successfully suppress the image or if the thought suppression failed. Eight people were confident they’d successfully suppressed the images – but their brain scans told a different story.

    “The visual cortex – the part of the brain responsible for mental imagery – seemed to be producing thoughts without their awareness,” says Prof. Pearson.

    2
    Participants used the left side of their brains to come up with the thought, and the right side to try and suppress it. Photo: Unsplash.

    Each time a thought took place, neurons fired and drew oxygen-rich blood to the active region. This change in blood oxygenation, which the fMRI machine measures, created particular spatial patterns in the brain.

    The researchers decoded these spatial patterns using an algorithm called multivoxel pattern analysis (MVPA). The algorithm could distinguish brain patterns caused by the vegetable/fruit prompts.

    “MVPA is a type of decoding algorithm based in machine learning that allows us to read thoughts,” says Dr Roger Koenig-Robert, first author on the study and postdoctoral researcher at UNSW Science and Monash University.

    “Using this algorithm, we can see what people are imagining even when they’re not aware of it.”
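    At its core, MVPA treats decoding as a supervised classification problem over voxel activity patterns. A minimal sketch with synthetic data is below; real pipelines add preprocessing and cross-validation over scanner runs, and the voxel counts and noise levels here are invented.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_trials, n_voxels = 80, 500
    labels = rng.integers(0, 2, n_trials)        # 0 = apple, 1 = broccoli
    # a weak, fixed multivoxel pattern that differs between the two thoughts
    signal = np.outer(labels - 0.5, rng.normal(size=n_voxels))
    patterns = signal + rng.normal(scale=5.0, size=(n_trials, n_voxels))

    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, patterns, labels, cv=5)
    print(f"decoding accuracy: {scores.mean():.0%} (chance = 50%)")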

    The scans showed that participants used the left side of their brains to come up with the thought, and the right side to try and suppress it. Prof. Pearson hopes this functional brain mapping will help future researchers know which areas of the brain to target for potential intrusive thought therapies.

    “This study can help explain why forcefully trying not to think about something always fails,” he says.

    “For example, for someone trying to quit smoking, trying not to think about having a cigarette is a very bad strategy.”

    3
    Eight study participants were confident they’d successfully suppressed the images of the red apple or green broccoli, but their brain scans suggested otherwise. Photos: Shutterstock.

    These findings build on a behavioural study Prof. Pearson’s team at UNSW Science’s Future Minds Lab conducted last year, which tested how suppressed thoughts can influence perception.

    “We know that you can have conscious and unconscious perception in your visual cortex – for example, I can show someone an image of a spider, make the image invisible, but their brain will still process it,” says Prof. Pearson.

    “But until now, we didn’t know this also worked with thoughts.”

    Both studies point towards the elusive world of the ‘unconscious’, which Prof. Pearson plans to explore in his future work.

    “I’m interested in this idea that imagination can be unconscious – that these thoughts can appear and influence our behaviour, without us even noticing.

    “More evidence is starting to suggest unconscious thoughts do occur, and we can decode them.”

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

    U NSW Campus

    Welcome to UNSW Australia (The University of New South Wales), one of Australia’s leading research and teaching universities. At UNSW, we take pride in the broad range and high quality of our teaching programs. Our teaching gains strength and currency from our research activities, strong industry links and our international nature; UNSW has a strong regional and global engagement.

    In developing new ideas and promoting lasting knowledge we are creating an academic environment where outstanding students and scholars from around the world can be inspired to excel in their programs of study and research. Partnerships with both local and global communities allow UNSW to share knowledge, debate and research outcomes. UNSW’s public events include concert performances, open days and public forums on issues such as the environment, healthcare and global politics. We encourage you to explore the UNSW website so you can find out more about what we do.

     
  • richardmitnick 10:37 am on July 28, 2020 Permalink | Reply
    Tags: "Looking into the black box", , Brain Studies, ,   

    From MIT News: “Looking into the black box” 

    MIT News

    From MIT News

    July 27, 2020
    Sabbi Lall | McGovern Institute for Brain Research

    1
    Neural network

    Deep learning systems are revolutionizing technology around us, from voice recognition that pairs you with your phone to autonomous vehicles that are increasingly able to see and recognize obstacles ahead. But much of this success involves trial and error when it comes to the deep learning networks themselves. A group of MIT researchers recently reviewed [PNAS] their contributions to a better theoretical understanding of deep learning networks, providing direction for the field moving forward.

    “Deep learning was in some ways an accidental discovery,” explains Tomaso Poggio, investigator at the McGovern Institute for Brain Research, director of the Center for Brains, Minds, and Machines (CBMM), and the Eugene McDermott Professor in Brain and Cognitive Sciences. “We still do not understand why it works. A theoretical framework is taking form, and I believe that we are now close to a satisfactory theory. It is time to stand back and review recent insights.”

    Climbing data mountains

    Our current era is marked by a superabundance of data — data from inexpensive sensors of all types, text, the internet, and large amounts of genomic data being generated in the life sciences. Computers nowadays ingest these multidimensional datasets, creating a set of problems dubbed the “curse of dimensionality” by the late mathematician Richard Bellman.

    One of these problems is that representing a smooth, high-dimensional function requires an astronomically large number of parameters. We know that deep neural networks are particularly good at learning how to represent, or approximate, such complex data, but why? Understanding why could potentially help advance deep learning applications.
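    In symbols, the standard approximation-theory statement of the problem (a textbook bound, not the review's exact formulation) is that approximating a generic function on [0,1]^d with smoothness of order m to accuracy \epsilon requires a number of parameters on the order of

    \[
      N(\epsilon) \sim \epsilon^{-d/m},
    \]

    which grows exponentially with the dimension d: the curse.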


    Theoretical issues in deep networks

    “Deep learning is like electricity after Volta discovered the battery, but before Maxwell,” explains Poggio, who is the founding scientific advisor of The Core, MIT Quest for Intelligence, and an investigator in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. “Useful applications were certainly possible after Volta, but it was Maxwell’s theory of electromagnetism, this deeper understanding that then opened the way to the radio, the TV, the radar, the transistor, the computers, and the internet.”

    The theoretical treatment by Poggio, Andrzej Banburski, and Qianli Liao points to why deep learning might overcome data problems such as “the curse of dimensionality.” Their approach starts with the observation that many natural structures are hierarchical. To model the growth and development of a tree doesn’t require that we specify the location of every twig. Instead, a model can use local rules to drive branching hierarchically. The primate visual system appears to do something similar when processing complex data. When we look at natural images — including trees, cats, and faces — the brain successively integrates local image patches, then small collections of patches, and then collections of collections of patches.

    “The physical world is compositional — in other words, composed of many local physical interactions,” explains Qianli Liao, an author of the study, and a graduate student in the Department of Electrical Engineering and Computer Science and a member of the CBMM. “This goes beyond images. Language and our thoughts are compositional, and even our nervous system is compositional in terms of how neurons connect with each other. Our review explains theoretically why deep networks are so good at representing this complexity.”

    The intuition is that a hierarchical neural network should be better at approximating a compositional function than a single “layer” of neurons, even if the total number of neurons is the same. The technical part of their work identifies what “better at approximating” means and proves that the intuition is correct.
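    Schematically, the compositional functions in question look like a binary tree of two-variable constituents (a generic illustration, not the paper's exact notation):

    \[
      f(x_1,\dots,x_8) = h_3\big(\, h_{21}(h_{11}(x_1,x_2),\, h_{12}(x_3,x_4)),\;
                                    h_{22}(h_{13}(x_5,x_6),\, h_{14}(x_7,x_8)) \,\big).
    \]

    A deep network can mirror this tree, and in results of this kind its required size grows only modestly with the number of inputs, whereas a shallow network approximating the same function generically needs exponentially many units.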

    Generalization puzzle

    There is a second puzzle about what is sometimes called the unreasonable effectiveness of deep networks. Deep network models often have far more parameters than data to fit them, despite the mountains of data we produce these days. This situation ought to lead to what is called “overfitting,” where your current data fit the model well, but any new data fit the model terribly. This is dubbed poor generalization in conventional models. The conventional solution is to constrain some aspect of the fitting procedure. However, deep networks do not seem to require this constraint. Poggio and his colleagues prove that, in many cases, the process of training a deep network implicitly “regularizes” the solution, providing constraints.
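    A concrete toy instance of implicit regularization, with linear regression standing in for a deep network (where the analogous statements are much harder to prove): plain gradient descent on an overparameterized least-squares problem, started from zero, converges to the minimum-norm solution that fits the data exactly, a constraint nobody wrote into the loss.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(10, 100))     # 10 examples, 100 parameters
    y = rng.normal(size=10)

    w = np.zeros(100)
    for _ in range(2000):              # plain gradient descent on squared error
        w -= 0.005 * X.T @ (X @ w - y)

    # the minimum-norm interpolating solution, via the pseudoinverse
    w_min_norm = X.T @ np.linalg.solve(X @ X.T, y)
    print(np.allclose(w, w_min_norm, atol=1e-6))   # True: GD found it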

    The work has a number of implications going forward. Though deep learning is actively being applied in the world, this has so far occurred without a comprehensive underlying theory. A theory of deep learning that explains why and how deep networks work, and what their limitations are, will likely allow development of even much more powerful learning approaches.

    “In the long term, the ability to develop and build better intelligent machines will be essential to any technology-based economy,” explains Poggio. “After all, even in its current — still highly imperfect — state, deep learning is impacting, or about to impact, just about every aspect of our society and life.”

    See the full article here .


    Please help promote STEM in your local schools.


    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     