Tagged: Machine learning Toggle Comment Threads | Keyboard Shortcuts

  • richardmitnick 4:39 pm on June 2, 2023 Permalink | Reply
    Tags: "New Method Predicts Extreme Events More Accurately", Climate models have currently predicted a smaller variance in precipitation with a bias toward light rain., Data Science Institute, Earth and Environmental Engineering, Extreme Weather Events, Machine learning, Machine-learning algorithm will improve future projections, Missing piece in current algorithms: cloud organization, New algorithm predicts precipitation especially extreme events more accurately., The "Stochasticity": in the case of the variability of random fluctuations in precipitation intensity, Using AI to design neural network algorithm

    From The Fu Foundation School of Engineering and Applied Science At Columbia University: “New Method Predicts Extreme Events More Accurately” 

    5.23.23

    Holly Evarts
    Director of Strategic Communications and Media Relations
    (c) 347-453-7408
    (o) 212-854-3206
    holly.evarts@columbia.edu
    Columbia University

    New algorithm predicts precipitation, especially extreme events, more accurately.

    Columbia engineers have developed a machine-learning algorithm that will help researchers better understand and mitigate the impact of extreme weather events, which are becoming more frequent in our warming climate.

    Credit: “Rain Storm Colorado Springs Colorado” by Brokentaco/Flickr is licensed under CC BY 2.0.

    With the rise of extreme weather events, which are becoming more frequent in our warming climate, accurate predictions are becoming more critical for all of us, from farmers to city-dwellers to businesses around the world. To date, climate models have failed to accurately predict precipitation intensity, particularly extremes. While precipitation in nature is highly variable, with frequent extremes, climate models predict a narrower range of intensities, with a bias toward light rain.

    Missing piece in current algorithms: cloud organization

    Researchers have been working to develop algorithms that will improve prediction accuracy but, as Columbia Engineering climate scientists report, traditional climate model parameterizations have been missing a piece of information: a way to describe cloud structure and organization at scales so fine that they are not captured on the computational grid being used. These organization measurements affect predictions of both precipitation intensity and its stochasticity, the variability of random fluctuations in precipitation intensity. Up to now, there has not been an effective, accurate way to measure cloud structure and quantify its impact.

    A new study [PNAS (below)] from a team led by Pierre Gentine, director of the Learning the Earth with Artificial Intelligence and Physics (LEAP) Center, used global storm-resolving simulations and machine learning to create an algorithm that can deal separately with two different scales of cloud organization: those resolved by a climate model, and those that cannot be resolved as they are too small. This new approach addresses the missing piece of information in traditional climate model parameterizations and provides a way to predict precipitation intensity and variability more precisely.

    “Our findings are especially exciting because, for many years, the scientific community has debated whether to include cloud organization in climate models,” said Gentine, Maurice Ewing and J. Lamar Worzel Professor of Geophysics in the Departments of Earth and Environmental Engineering and Earth and Environmental Sciences and a member of the Data Science Institute. “Our work provides an answer to the debate and a novel solution for including organization, showing that including this information can significantly improve our prediction of precipitation intensity and variability.”

    Using AI to design neural network algorithm

    Sarah Shamekh, a PhD student working with Gentine, developed a neural network algorithm that learns the relevant information about the role of fine-scale cloud organization (unresolved scales) in precipitation. Because Shamekh did not define a metric or formula in advance, the model learns implicitly, on its own, how to measure the clustering of clouds, a metric of organization, and then uses this metric to improve the prediction of precipitation. Shamekh trained the algorithm on a high-resolution moisture field, encoding the degree of small-scale organization.

    “We discovered that our organization metric explains precipitation variability almost entirely and could replace a stochastic parameterization in climate models,” said Shamekh, lead author of the study, published May 8, 2023, by PNAS. “Including this information significantly improved precipitation prediction at the scale relevant to climate models, accurately predicting precipitation extremes and spatial variability.”

    Machine-learning algorithm will improve future projections

    The researchers are now using their machine-learning approach, which implicitly learns the sub-grid cloud organization metric, in climate models. This should significantly improve the prediction of precipitation intensity and variability, including extreme precipitation events, and enable scientists to better project future changes in the water cycle and extreme weather patterns in a warming climate.

    Future work

    This research also opens up new avenues of investigation, such as exploring the possibility of precipitation creating memory in the climate system, where the atmosphere retains information about recent weather conditions, which in turn influences atmospheric conditions later on. This new approach could have wide-ranging applications beyond precipitation modeling, including better modeling of the ice sheet and ocean surface.

    PNAS

    Fig. 1.
    Global storm-resolving model. Snapshot of a cloud scene on 24 February 2016 from SAM, part of the DYAMOND dataset. Ten randomly selected days of the tropical regions (displayed between the two white dashed lines) from this simulation are used for this analysis. The inset plot shows precipitation versus precipitable water for 10 days of SAM simulations. Lines show the precipitation conditionally averaged by 0.3-mm bins of precipitable water and for 1-K bins of free-tropospheric temperature. Scatter dots show the spread in precipitation for each bin of precipitable water and averaged free-tropospheric temperature across the simulation domain and time period.

    Fig. 2.
    Overview of the proposed framework for parameterizing precipitation. (A) Coarse-graining the high-resolution data. (B) Baseline-NN architecture: this network receives coarse-scale variables (e.g., SST and PW) as input and predicts coarse-scale precipitation. (C) Org-NN architecture: the left panel shows the autoencoder, which receives the high-resolution PW as input and reconstructs it after passing it through a bottleneck. The right panel shows the neural network that predicts coarse-scale precipitation; its input is the coarse-scale variables (as for the baseline network) as well as org extracted from the autoencoder. The two blocks are trained simultaneously.
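    For readers who want a concrete picture, the two-block setup sketched in Fig. 2 can be caricatured in a few dozen lines of plain Python. Everything below is invented for illustration (a linear one-number bottleneck, a synthetic data generator, made-up loss weights, finite-difference gradients); the actual Org-NN uses deep networks trained on storm-resolving simulation output.

```python
import random

random.seed(0)

N_SUB = 8  # sub-grid cells per coarse grid cell (illustrative size)

def make_sample():
    """Fake coarse cell: sub-grid PW field, its coarse mean, and 'precipitation'."""
    base = random.uniform(20, 60)        # coarse-scale precipitable water (mm)
    spread = random.uniform(0.0, 10.0)   # degree of sub-grid organization
    field = [base + random.gauss(0, spread) for _ in range(N_SUB)]
    pw = sum(field) / N_SUB
    precip = 0.05 * pw + 0.3 * spread    # extremes depend on the unresolved spread
    return field, pw, precip

data = [make_sample() for _ in range(30)]

# Parameters: encoder (N_SUB), decoder weights + biases (2 * N_SUB), predictor (3).
params = [random.uniform(-0.1, 0.1) for _ in range(3 * N_SUB + 3)]

def forward(p, field, pw):
    we = p[:N_SUB]
    wd = p[N_SUB:2 * N_SUB]
    bd = p[2 * N_SUB:3 * N_SUB]
    a, b, c = p[3 * N_SUB:]
    org = sum(w * x for w, x in zip(we, field))       # scalar bottleneck: "org"
    recon = [w * org + b0 for w, b0 in zip(wd, bd)]   # decoder reconstruction
    pred = a * pw + b * org + c                       # precipitation head
    return recon, pred

def loss(p):
    """Joint objective shared by both blocks, as in the 'trained simultaneously' setup."""
    total = 0.0
    for field, pw, precip in data:
        recon, pred = forward(p, field, pw)
        recon_mse = sum((r - x) ** 2 for r, x in zip(recon, field)) / N_SUB
        total += 0.001 * recon_mse + (pred - precip) ** 2
    return total / len(data)

def train(steps=120, lr=1e-4, eps=1e-4):
    """Plain gradient descent with finite-difference gradients (fine for a tiny model)."""
    for _ in range(steps):
        grad = []
        for i in range(len(params)):
            keep = params[i]
            params[i] = keep + eps
            up = loss(params)
            params[i] = keep - eps
            down = loss(params)
            params[i] = keep
            grad.append((up - down) / (2 * eps))
        for i, g in enumerate(grad):
            params[i] -= lr * g

loss_before = loss(params)
train()
loss_after = loss(params)
```

    The point is structural rather than scientific: one scalar passes from the autoencoder's bottleneck to the precipitation head, and both blocks descend a single shared loss, mirroring the figure's description of the two blocks being trained together.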

    See the science paper for instructive material with images.

    See the full article here.

    Comments are invited and will be appreciated, especially if the reader finds any errors which I can correct. Use “Reply”.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Columbia University Fu Foundation School of Engineering and Applied Science is the engineering and applied science school of Columbia University. It was founded as the School of Mines in 1863 and then the School of Mines, Engineering and Chemistry before becoming the School of Engineering and Applied Science. On October 1, 1997, the school was renamed in honor of Chinese businessman Z.Y. Fu, who had donated $26 million to the school.

    The Fu Foundation School of Engineering and Applied Science maintains a close research tie with other institutions including National Aeronautics and Space Administration, IBM, Massachusetts Institute of Technology, and The Earth Institute. Patents owned by the school generate over $100 million annually for the university. Faculty and alumni are responsible for technological achievements including the developments of FM radio and the maser.

    The School’s applied mathematics, biomedical engineering, computer science and financial engineering (operations research) programs are highly regarded and consistently rank among the best. The current faculty include 27 members of the National Academy of Engineering and one Nobel laureate. In all, the faculty and alumni of Columbia Engineering have won 10 Nobel Prizes in physics, chemistry, medicine, and economics.

    The school consists of approximately 300 undergraduates in each graduating class and maintains close links with its undergraduate liberal arts sister school Columbia College which shares housing with SEAS students.

    Original charter of 1754

    Included in the original charter for Columbia College was the direction to teach “the arts of Number and Measuring, of Surveying and Navigation […] the knowledge of […] various kinds of Meteors, Stones, Mines and Minerals, Plants and Animals, and everything useful for the Comfort, the Convenience and Elegance of Life.” Engineering has always been a part of Columbia, even before the establishment of any separate school of engineering.

    An early and influential graduate of the school was John Stevens, Class of 1768, who was instrumental in the establishment of U.S. patent law. Stevens procured many patents in early steamboat technology; operated the first steam ferry between New York and New Jersey; received the first railroad charter in the U.S.; built a pioneer locomotive; and amassed a fortune, which allowed his sons to found the Stevens Institute of Technology.

    When Columbia University first resided on Wall Street, engineering did not have a school under the Columbia umbrella. After Columbia outgrew its space on Wall Street, it relocated to what is now Midtown Manhattan in 1857. Then-President Barnard and the Trustees of the University, at the urging of Professor Thomas Egleston and General Vinton, approved the School of Mines in 1863. The intention was to establish a School of Mines and Metallurgy with a three-year program open to professionally motivated students with or without prior undergraduate training. It was officially founded in 1864 under the leadership of its first dean, Columbia professor Charles F. Chandler, and specialized in mining and mineralogical engineering.

    An example of work from a student at the School of Mines was William Barclay Parsons, Class of 1882. He was an engineer on the Chinese railway and on the Cape Cod and Panama Canals. Most importantly, he worked for New York as chief engineer of the city’s first subway system, the Interborough Rapid Transit Company. Opened in 1904, the subway’s electric cars took passengers from City Hall to Brooklyn, the Bronx, and the newly renamed and relocated Columbia University in Morningside Heights, its present location on the Upper West Side of Manhattan.

    Columbia U Campus
    Columbia University was founded in 1754 as King’s College by royal charter of King George II of Great Britain. It is the oldest institution of higher learning in the state of New York and the fifth oldest in the United States.

    University Mission Statement

    Columbia University is one of the world’s most important centers of research and at the same time a distinctive and distinguished learning environment for undergraduates and graduate students in many scholarly and professional fields. The University recognizes the importance of its location in New York City and seeks to link its research and teaching to the vast resources of a great metropolis. It seeks to attract a diverse and international faculty and student body, to support research and teaching on global issues, and to create academic relationships with many countries and regions. It expects all areas of the University to advance knowledge and learning at the highest level and to convey the products of its efforts to the world.

    Columbia University is a private Ivy League research university in New York City. Established in 1754 on the grounds of Trinity Church in Manhattan Columbia is the oldest institution of higher education in New York and the fifth-oldest institution of higher learning in the United States. It is one of nine colonial colleges founded prior to the Declaration of Independence, seven of which belong to the Ivy League. Columbia is ranked among the top universities in the world by major education publications.

    Columbia was established as King’s College by royal charter from King George II of Great Britain in reaction to the founding of Princeton College. It was renamed Columbia College in 1784 following the American Revolution, and in 1787 was placed under a private board of trustees headed by former students Alexander Hamilton and John Jay. In 1896, the campus was moved to its current location in Morningside Heights and renamed Columbia University.

    Columbia scientists and scholars have played an important role in scientific breakthroughs including brain-computer interface; the laser and maser; nuclear magnetic resonance; the first nuclear pile; the first nuclear fission reaction in the Americas; the first evidence for plate tectonics and continental drift; and much of the initial research and planning for the Manhattan Project during World War II. Columbia is organized into twenty schools, including four undergraduate schools and 15 graduate schools. The university’s research efforts include the Lamont–Doherty Earth Observatory, the Goddard Institute for Space Studies, and accelerator laboratories with major technology firms such as IBM. Columbia is a founding member of the Association of American Universities and was the first school in the United States to grant the M.D. degree. With over 14 million volumes, Columbia University Library is the third largest private research library in the United States.

    The university’s endowment stands at $11.26 billion in 2020, among the largest of any academic institution. As of October 2020, Columbia’s alumni, faculty, and staff have included: five Founding Fathers of the United States—among them a co-author of the United States Constitution and a co-author of the Declaration of Independence; three U.S. presidents; 29 foreign heads of state; ten justices of the United States Supreme Court, one of whom currently serves; 96 Nobel laureates; five Fields Medalists; 122 National Academy of Sciences members; 53 living billionaires; eleven Olympic medalists; 33 Academy Award winners; and 125 Pulitzer Prize recipients.

     
  • richardmitnick 3:53 pm on May 23, 2023 Permalink | Reply
    Tags: "How Artificial Intelligence is helping astronomers", AI algorithms have begun helping astronomers tame massive data sets and discover new knowledge about the universe., Better telescopes and more data, Machine learning

    From “EarthSky” : “How Artificial Intelligence is helping astronomers” 

    5.23.23
    Chris Impey | University of Arizona

    AI is helping astronomers

    “The famous first image of a black hole just got two times sharper.

    Researchers used computer simulations of black holes and machine learning to generate a revised version (right) of the famous first image of a black hole that was released back in 2019 (left). Credit: Medeiros et al. 2023 [below].

    A research team used artificial intelligence to dramatically improve upon its first image from 2019; the updated image shows the black hole at the center of the Messier 87 galaxy as darker and bigger than the first image depicted.

    I’m an astronomer who studies and has written about cosmology, black holes and exoplanets. Astronomers have been using AI for decades. In fact, in 1990, astronomers from the University of Arizona, where I am a professor, were among the first to use a type of AI called a neural network to study the shapes of galaxies.

    Since then, AI has spread into every field of astronomy. As the technology has become more powerful, AI algorithms have begun helping astronomers tame massive data sets and discover new knowledge about the universe.

    Better telescopes and more data

    As long as astronomy has been a science, it has involved trying to make sense of the multitude of objects in the night sky. That was relatively simple when the only tools were the unaided eye or a simple telescope, and all that we could see were a few thousand stars and a handful of planets.

    A hundred years ago, Edwin Hubble used newly built telescopes to show that the universe teems with not just stars and clouds of gas, but countless galaxies.

    Edwin Hubble identified a Cepheid variable in Messier 31 using the 100-inch Hooker telescope on Mt. Wilson, California.

    As telescopes have continued to improve, the sheer number of celestial objects humans can see and the amount of data astronomers need to sort through have both grown exponentially.

    For example, the soon-to-be-completed Vera Rubin Observatory in Chile will make images so large that it would take 1,500 high-definition TV screens to view each one in its entirety.

    Over 10 years it is expected to generate 0.5 exabytes of data – about 50,000 times the amount of information held in all of the books contained within the Library of Congress.

    There are 20 telescopes with mirrors larger than 20 feet (6 meters) in diameter.

    AI algorithms are the only way astronomers could ever hope to work through all of the data available to them today. There are a number of ways AI is proving useful in processing this data.

    One of the earliest uses of AI in astronomy was to pick out the multitude of faint galaxies hidden in the background of images. Image credit: J. Rigby, via NASA/ESA/CSA Webb. CC BY.

    Picking out patterns

    Astronomy often involves looking for needles in a haystack. About 99% of the pixels in an astronomical image contain background radiation, light from other sources or the blackness of space. Only 1% have the subtle shapes of faint galaxies.

    AI algorithms – in particular, neural networks that use many interconnected nodes and learn to recognize patterns – are perfectly suited for picking out the patterns of galaxies. Astronomers began using neural networks to classify galaxies in the early 2010s. Now the algorithms are so effective that they can classify galaxies with an accuracy of 98%.
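    As a toy illustration of the idea, a single trainable sigmoid "node" (the basic building block of those networks) can already separate two classes of image-patch features. Everything here is hypothetical: the two hand-made features (total brightness and an "elongation" score) and all numbers are invented, and real classifiers learn features from raw pixels with many such nodes.

```python
import math
import random

random.seed(1)

def features(is_galaxy):
    """Two invented features per patch: [brightness, elongation]."""
    if is_galaxy:  # brighter, more structured patches
        return [random.gauss(5.0, 1.0), random.gauss(2.0, 0.5)]
    return [random.gauss(1.0, 1.0), random.gauss(0.2, 0.5)]

# Alternating galaxy / background patches with labels 1.0 / 0.0.
train_set = [(features(i % 2 == 0), 1.0 if i % 2 == 0 else 0.0) for i in range(200)]

w = [0.0, 0.0]  # one weight per feature
b = 0.0         # bias

def predict(x):
    """Sigmoid node: probability the patch is a galaxy."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

# Plain stochastic gradient descent on the cross-entropy loss.
for _ in range(300):
    for x, y in train_set:
        g = predict(x) - y            # dLoss/dz for cross-entropy + sigmoid
        w[0] -= 0.05 * g * x[0]
        w[1] -= 0.05 * g * x[1]
        b -= 0.05 * g

accuracy = sum((predict(x) > 0.5) == (y == 1.0) for x, y in train_set) / len(train_set)
```

    On these well-separated synthetic classes the single node classifies nearly every patch correctly; the 98% figure quoted above refers to much deeper networks working on real, far messier images.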

    We’ve seen this in other areas of astronomy. Astronomers working on SETI, the Search for Extraterrestrial Intelligence, use radio telescopes to look for signals from distant civilizations. Early on, radio astronomers scanned charts by eye to look for unexplained anomalies. More recently, researchers harnessed 150,000 personal computers and 1.8 million citizen scientists to look for artificial radio signals. Now, researchers are using AI to sift through reams of data much more quickly and thoroughly than people can. This has allowed SETI efforts to cover more ground while also greatly reducing the number of false positive signals.

    Another example is the search for exoplanets. Astronomers discovered most of the 5,300 known exoplanets by measuring a dip in the amount of light coming from a star when a planet passes in front of it. AI tools can now pick out the signs of an exoplanet with 96% accuracy.
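    The transit idea described above is simple enough to sketch end-to-end: simulate a star's brightness with a small periodic dip, then scan a smoothed version of the series for runs that fall below a threshold. All numbers are made up and the dip depth is exaggerated for clarity; real pipelines chase far smaller signals in far messier noise.

```python
import random

random.seed(2)

def light_curve(n=600, period=100, width=8, depth=0.02, noise=0.002):
    """Synthetic brightness series: flat at 1.0 with a periodic transit dip."""
    flux = []
    for t in range(n):
        f = 1.0 + random.gauss(0, noise)
        if t % period < width:      # planet in front of the star
            f -= depth
        flux.append(f)
    return flux

def find_dips(flux, threshold=0.99, k=5):
    """Start indices of runs where the k-point moving average stays below threshold."""
    smooth = [sum(flux[i:i + k]) / k for i in range(len(flux) - k + 1)]
    dips, in_dip = [], False
    for i, f in enumerate(smooth):
        if f < threshold and not in_dip:
            dips.append(i)
            in_dip = True
        elif f >= threshold:
            in_dip = False
    return dips

flux = light_curve()
dips = find_dips(flux)   # one entry per transit
```

    The spacing between successive entries of `dips` recovers the orbital period, and the depth of each dip constrains the planet's size relative to its star; the AI tools mentioned above learn to make this call reliably on noisy, gap-riddled real light curves.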

    AI tools can help astronomers discover new exoplanets like TRAPPIST-1 b. Image credit: Joseph Olmsted (STScI), NASA/ESA/CSA Webb. CC BY.

    Making new discoveries

    AI has proved itself to be excellent at identifying known objects – like galaxies or exoplanets – that astronomers tell it to look for. But it is also quite powerful at finding objects or phenomena that are theorized but have not yet been discovered in the real world.

    Teams have used this approach to detect new exoplanets, learn about the ancestral stars that led to the formation and growth of the Milky Way, and predict the signatures of new types of gravitational waves.

    To do this, astronomers first use AI to convert theoretical models into observational signatures, including realistic levels of noise. Then they use machine learning to sharpen the ability of AI to detect the predicted phenomena.
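    That injection step can be illustrated with a classic, pre-AI stand-in for the learned detector: a matched filter. Below, a hypothetical "theoretical signature" is buried in simulated noise, and correlation against the template separates signal-bearing observations from empty ones. A machine-learning detector would be trained on exactly this kind of simulated data; the waveform, noise level, and threshold here are all invented.

```python
import math
import random

random.seed(3)

N = 64

# Hypothetical chirp-like signature; in practice this would come from a
# physical model, e.g. a simulated gravitational waveform.
TEMPLATE = [math.sin(0.02 * i * i) * math.exp(-(((i - N + 8) / 20.0) ** 2))
            for i in range(N)]

def observation(has_signal, noise=0.5):
    """Simulated data stream: noise, with the signature injected or not."""
    return [(s if has_signal else 0.0) + random.gauss(0, noise) for s in TEMPLATE]

def score(x):
    """Correlation against the known template: a matched filter."""
    return sum(a * b for a, b in zip(x, TEMPLATE))

# Decision threshold halfway between the expected scores of the two cases.
threshold = sum(s * s for s in TEMPLATE) / 2

trials = [(observation(flag), flag) for flag in [True, False] for _ in range(50)]
correct = sum((score(x) > threshold) == flag for x, flag in trials)
accuracy = correct / len(trials)
```

    Even this simple correlation detector separates injected signals from pure noise almost perfectly at this signal-to-noise ratio; the machine-learning versions earn their keep when the signature family is too large or too uncertain to enumerate as templates.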

    Finally, radio astronomers have also been using AI algorithms to sift through signals that don’t correspond to known phenomena. Recently a team from South Africa found a unique object that may be a remnant of the explosive merging of two supermassive black holes. If this proves to be true, the data will allow a new test of general relativity: Albert Einstein’s description of space-time.

    Not well explicated, this might be either of the two objects below, found with MeerKAT or HERA.

    A portion of the field centered on GRS 1915+105 as seen by the MeerKAT radio telescope at 1.28 GHz. Credit: Motta et al., 2023.

    MeerKAT 1.3 GHz radio continuum image of a newly discovered double relic associated with the galaxy cluster PSZ2 G277.93+12.34. Credit: Koribalski et al., 2023.

    Making predictions and plugging holes

    As in many areas of life recently, generative AI and large language models like ChatGPT are also making waves in the astronomy world.

    The team that created the first image of a black hole in 2019 used a generative AI to produce its new image [The Astrophysical Journal Letters (below)]. To do so, it first taught an AI how to recognize black holes by feeding it simulations of many kinds of black holes. Then, the team used the AI model it had built to fill in gaps in the massive amount of data collected by the radio telescopes on the black hole M87.

    Using this simulated data, the team was able to create a new image that is two times sharper than the original and is fully consistent with the predictions of general relativity.
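    The gap-filling step can be caricatured like this: learn (here, simply hard-code) basis patterns that stand in for structure gleaned from simulations, fit only the observed samples of a sparse measurement, and use the fitted model to reconstruct the holes. The basis, signal, and sampling pattern below are all invented; the real work learns its components from black-hole simulations and operates on interferometric visibility data.

```python
import math

N = 40

# Two hard-coded basis patterns standing in for components learned from
# simulations (purely illustrative).
BASIS = [
    [math.sin(2 * math.pi * i / N) for i in range(N)],
    [math.cos(2 * math.pi * i / N) for i in range(N)],
]

def fit_observed(samples):
    """Least-squares fit of the two basis coefficients using only observed (i, value) pairs."""
    a11 = sum(BASIS[0][i] ** 2 for i, _ in samples)
    a12 = sum(BASIS[0][i] * BASIS[1][i] for i, _ in samples)
    a22 = sum(BASIS[1][i] ** 2 for i, _ in samples)
    b1 = sum(BASIS[0][i] * v for i, v in samples)
    b2 = sum(BASIS[1][i] * v for i, v in samples)
    det = a11 * a22 - a12 * a12
    c1 = (b1 * a22 - b2 * a12) / det
    c2 = (a11 * b2 - a12 * b1) / det
    return c1, c2

# Ground truth we pretend the telescope only partly measured.
truth = [3.0 * BASIS[0][i] - 1.5 * BASIS[1][i] for i in range(N)]
observed = [(i, truth[i]) for i in range(N) if i % 3 == 0]  # ~2/3 of samples missing

c1, c2 = fit_observed(observed)
filled = [c1 * BASIS[0][i] + c2 * BASIS[1][i] for i in range(N)]
max_err = max(abs(f - t) for f, t in zip(filled, truth))
```

    Because the underlying signal really does live in the span of the assumed patterns, the reconstruction is essentially exact; the hard scientific question, which the simulations answer, is whether the learned patterns actually describe the object being observed.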

    Astronomers are also turning to AI to help tame the complexity of modern research. A team from the Harvard-Smithsonian Center for Astrophysics created a language model called astroBERT to read and organize 15 million scientific papers on astronomy. Another team, based at NASA, has even proposed using AI to prioritize astronomy projects, a process that astronomers engage in every 10 years.

    As AI has progressed, it has become an essential tool for astronomers. As telescopes get better, as data sets get larger and as AIs continue to improve, it is likely that this technology will play a central role in future discoveries about the universe.

    The Astrophysical Journal
    The Astrophysical Journal Letters
    Image of the black hole M87*, reconstructed at higher resolution using AI. Credit: https://www.360onhistory.com
    More instructive images are available in the science papers.

    See the full article here.


    Deborah Byrd created the EarthSky radio series in 1991 and founded EarthSky.org in 1994. Today, she serves as Editor-in-Chief of this website. She has won a galaxy of awards from the broadcasting and science communities, including having asteroid 3505 Byrd named in her honor. A science communicator and educator since 1976, Byrd believes in science as a force for good in the world and a vital tool for the 21st century. “Being an EarthSky editor is like hosting a big global party for cool nature-lovers,” she says.

     
  • richardmitnick 12:38 pm on May 1, 2023 Permalink | Reply
    Tags: "The Computer Scientist Peering Inside AI’s Black Boxes", At least 581 AI models involved in medical decisions have received authorization from the Food and Drug Administration., Cynthia Rudin, Cynthia Rudin wants machine learning models-responsible for increasingly important decisions-to show their work., Cynthia Rudin works to make machine learning more transparent through “interpretable” models., Machine learning, Machine learning models are incredibly powerful tools. They extract deeply hidden patterns in large data sets that our limited human brains can’t parse., Machine learning was designed to be black box — predictive models that are either too complicated for any human to understand or proprietary., Many algorithms are black boxes — either because they’re proprietary or because they’re too complicated for a human to understand., Rudin and her team set out to prove that even the most complex machine learning models can be transformed into interpretable glass boxes that show their work.

    From “Quanta Magazine” : “The Computer Scientist Peering Inside AI’s Black Boxes” Cynthia Rudin 

    From “Quanta Magazine”

    4.27.23
    Allison Parshall

    Cynthia Rudin works to make machine learning more transparent through “interpretable” models. Rudin at Duke University, where she studies machine learning. Credit: Alex M. Sanchez for Quanta Magazine.

    Cynthia Rudin wants machine learning models, responsible for increasingly important decisions, to show their work.

    Machine learning models are incredibly powerful tools. They extract deeply hidden patterns in large data sets that our limited human brains can’t parse. These complex algorithms, then, need to be incomprehensible “black boxes,” because a model that we could crack open and understand would be useless. Right?

    That’s all wrong, at least according to Cynthia Rudin, who studies interpretable machine learning at Duke University. She’s spent much of her career pushing for transparent but still accurate models to replace the black boxes favored by her field.

    The stakes are high. These opaque models are becoming more common in situations where their decisions have real consequences, like the decision to biopsy a potential tumor, grant bail or approve a loan application. Today, at least 581 AI models involved in medical decisions have received authorization from the Food and Drug Administration. Nearly 400 of them are aimed at helping radiologists detect abnormalities in medical imaging, like malignant tumors or signs of a stroke.

    Many of these algorithms are black boxes — either because they’re proprietary or because they’re too complicated for a human to understand. “It makes me very nervous,” Rudin said. “The whole framework of machine learning just needs to be changed when you’re working with something higher-stakes.”

    But changed to what? Recently, Rudin and her team set out to prove that even the most complex machine learning models, neural networks doing computer vision tasks, can be transformed into interpretable glass boxes that show their work to doctors.

    Rudin, who grew up outside Buffalo, New York, grew to share her father’s love of physics and math — he’s a medical physicist who helped calibrate X-ray machines — but she realized she preferred to solve problems with computers. Now she leads Duke’s Interpretable Machine Learning lab, where she and her colleagues scrutinize the most complex puzzle boxes in machine learning — neural networks — to create accurate models that show their work.

    Quanta spoke with Rudin about these efforts, ethical obligations in machine learning and weird computer poetry. The interview has been condensed and edited for clarity.

    Did you always dream of being a computer scientist?

    No, definitely not. As a kid, I wanted to be an orchestra conductor, or something like it. And I wanted to be a composer and write music.

    What kind of music?

    That’s the problem. I write French music from the turn of the previous century, like Ravel and Debussy. And then I realized that few people cared about that kind of music, so I decided not to pursue it as a career. As an undergraduate, I wanted to be an applied mathematician — but I went in the opposite direction, which was machine learning.

    When did you begin thinking about interpretability?

    After I graduated, I ended up working at Columbia with the New York City power company, Con Edison. And they were doing real-world work. We were supposed to predict which manholes were going to have a fire or an explosion — at the time, it was about 1% of the manholes in Manhattan every year. I joked that I was always trying to take a picture of myself on the “most likely to explode” manhole — though I never actually did.

    I found out very quickly that this was not a problem that machine learning was helping with, because the data was so messy. They had accounting records dating back to the 1890s. So we processed all the data and turned it into these tiny models that the company could understand and work with. It was interpretable machine learning, though I didn’t know that at the time.

    What did you know about interpretability back then?

    I didn’t really know anything about interpretability because they didn’t teach it to anyone. Machine learning was designed to be a black box — predictive models that are either too complicated for any human to understand or proprietary, somebody’s secret sauce. The whole idea was that you didn’t need to deal with the data; the algorithm would handle all that under the hood. It was so elegant, but that just made it very difficult to figure out what was going on.

    Many researchers accept that models of machine learning are “black boxes” and impossible for humans to understand, but Rudin has shown that interpretable variations can work just as well. Credit: Alex M. Sanchez for Quanta Magazine.

    But why does knowing what’s going on under the hood matter?

    If you want to trust a prediction, you need to understand how all the computations work. For example, in health care, you need to know if the model even applies to your patient. And it’s really hard to troubleshoot models if you don’t know what’s in them. Sometimes models depend on variables in ways that you might not like if you knew what they were doing. For example, with the power company in New York, we gave them a model that depended on the number of neutral cables. They looked at it and said, “Neutral cables? That should not be in your model. There’s something wrong.” And of course there was a flaw in the database, and if we hadn’t been able to pinpoint it, we would have had a serious problem. So it’s really useful to be able to see into the model so you can troubleshoot it.

    When did you first get concerned about non-transparent AI models in medicine?

    My dad is a medical physicist. Several years ago, he was going to medical physics and radiology conferences. I remember calling him on my way to work, and he was saying, “You’re not going to believe this, but all the AI sessions are full. AI is taking over radiology.” Then my student Alina [Barnett] roped us into studying [AI models that examine] mammograms. Then I realized, OK, hold on. They’re not using interpretable models. They’re using just these black boxes; then they’re trying to explain their results. Maybe we should do something about this.

    So we decided we would try to prove that you could construct interpretable models for mammography that did not lose accuracy over their black box counterparts. We just wanted to prove that it could be done.

    How do you make a radiology AI that shows its work?

    We decided to use case-based reasoning. That’s where you say, “Well, I think this thing looks like this other thing that I’ve seen before.” It’s like what Dr. House does with his patients in the TV show. Like: “This patient has a heart condition, and I’ve seen her condition before in a patient 20 years ago. This patient is a young woman, and that patient was an old man, but the heart condition is similar.” And so I can reason about this case in terms of that other case.

    We decided to do that with computer vision: “Well, this part of the image looks like that part of that image that I’ve seen before.” This would explain the reasoning process in a way that is similar to how a human might explain their reasoning about an image to another human.

    These are high-complexity models. They’re neural networks. But as long as they’re reasoning about a current case in terms of its relationship to past cases, that’s a constraint that forces the model to be interpretable. And we haven’t lost any accuracy compared to the benchmarks in computer vision.
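Rudin's actual mammography models learn these comparisons end-to-end inside a neural network, but the case-based idea itself is simple enough to sketch. Below is a hypothetical, minimal illustration — the class names, feature dimensions, and cosine scoring are all invented stand-ins, not her group's architecture — where a new case is classified by how closely its parts resemble stored prototypes from past cases:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical prototypes: for each class, a few feature vectors taken
# from past cases (random stand-ins here for learned features).
prototypes = {
    "benign": rng.normal(size=(3, 8)),
    "malignant": rng.normal(size=(3, 8)),
}

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify(patch_features):
    """Score each class by the best "this looks like that" match between
    any patch of the new case and any stored prototype of the class."""
    scores = {
        cls: max(cosine(p, q) for p in patch_features for q in protos)
        for cls, protos in prototypes.items()
    }
    return max(scores, key=scores.get), scores

patches = rng.normal(size=(5, 8))  # feature vectors for 5 patches of a new case
label, evidence = classify(patches)
print(label, {k: round(v, 3) for k, v in evidence.items()})
```

The interpretability comes from the evidence dictionary: every decision can be traced back to a specific patch matching a specific past case.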

    Would this ‘Dr. House’ technique work for other areas of health care?

    You could use case-based reasoning for anything. Once we had the mammography project established, my students Alina Barnett and Stark Guo, and a physician collaborator named Brandon Westover, transferred their knowledge directly to EEG scans for critically ill patients. It’s a similar neural architecture, and they trained it within a couple of months, very quick.

    If this approach is just as accurate as black boxes, why not use it for everything?

    Well, first of all, it’s much harder to train an interpretable model, because you have to think about the reasoning process and make sure that’s correct. For low-stakes decisions, it’s not really worth it. Like for advertising, if the ad gets to the right people and makes money, then people tend to be happy. But for high-stakes decisions, I think it’s worth that extra effort.

    Are there other ways to figure out what a neural network is doing?

    Around 2017, people started working on “explainability,” which was explaining the predictions of a black box. So you have some complicated function — like a neural network. You can think about these explanation methods as trying to approximate these functions. Or they might try to pick out which variables are important for a specific prediction.

    And that work has some serious problems with it. The explanations have to be wrong, because if their explanations were always right, you could just replace the black box with the explanations. And so the fact that the explainability people casually claim the same kinds of guarantees that the interpretability people are actually providing made me very uncomfortable, especially when it came to high-stakes decisions. Even with an explanation, you could have your freedom denied if you were a prisoner and truly not understand why. Or you could be denied a loan that would give you a house, and again, you wouldn’t be able to know why. They could give you some crappy explanation, and there’s nothing you could do about it, really.
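That gap between an explanation and the model it claims to explain can be made concrete with a toy example (this is an invented illustration, not any specific published explainability method): fit the best linear surrogate to a nonlinear "black box" and count how often the surrogate's decisions actually agree with it.

```python
import numpy as np

rng = np.random.default_rng(1)

def black_box(X):
    # A stand-in nonlinear decision function playing the black box.
    return (np.sin(3 * X[:, 0]) + X[:, 1] ** 2 > 1).astype(int)

X = rng.uniform(-2, 2, size=(2000, 2))
y = black_box(X)

# "Explanation": the best linear surrogate by least squares, thresholded
# into decisions. If it matched the box everywhere, the box itself would
# be unnecessary -- so a faithful-everywhere explanation can only exist
# if the box was simple all along.
A = np.c_[X, np.ones(len(X))]
w, *_ = np.linalg.lstsq(A, y, rcond=None)
surrogate = (A @ w > 0.5).astype(int)

fidelity = (surrogate == y).mean()
print(f"surrogate agrees with the black box on {fidelity:.0%} of inputs")
```

The disagreement rate (one minus the fidelity) is exactly the region where the "explanation" misleads — which is Rudin's point about why such explanations have to be wrong somewhere.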

    Are people taking interpretability more seriously now?

    I think so. It used to be that I would give a talk and some people would come up and yell at me after. And they’d be like, “We don’t need interpretable models; we just test it really carefully and it’s fine.” Now people are coming up afterward and saying, “Yeah, I agree with you, and I’m working on this too.” I think you still have the explainability people ruling the land at the moment — again, it’s easier to poke at a black box than it is to replace it. Those guys I haven’t managed to convince, and I view that as somewhat of a personal failure, but I’m working on it. [Laughs.] I’m hoping that this next generation will help me out.

    Would any low-stakes applications of machine learning benefit from more interpretability?

    People are working on interpretable models for natural language processing. These large language-generation models like ChatGPT are very difficult to understand. We’ve realized now that when they say something offensive, it would be useful to know why they did that. It’s really hard to troubleshoot these black box models. Before ChatGPT, I used to run our computer-generated poetry team at Duke. We were working with GPT-2, a predecessor to ChatGPT, and I often felt like we were trying to convince it to do something it really didn’t want to do. It just wasn’t good at figuring out which words generally make sense together.

    Why did you make computer-generated poetry?

    Well, I was hoping to do something meta-creative. The team started with sonnets, then went on to limericks. They wrote this paper called “There Once Was a Really Bad Poet, It Was Automated but You Didn’t Know It.” We forced the model to follow a certain template — like Mad Libs on steroids. There were a whole bunch of poems that were just a riot. It’s so fun when you get some weird piece of poetry that the computer wrote and you’re like, “Wow, that’s pretty funky.”

    But all of this was before ChatGPT, which has no trouble with text generation, even with very difficult constraints like rhyming and iambic pentameter. But ChatGPT taught me something important. If we don’t have interpretability on large scale language and image generation models, they are harder to control, which means they are likely to assist in propagating dangerous misinformation more quickly. So they changed my mind on the value of interpretability — even for low-stakes decisions it seems we need it.

    Do you ever use machine learning to compose music?

    We published a beautiful computer generation algorithm for four-part harmony that is fully interpretable, written by one of my students, Stephen Hahn. All of the co-authors were musicians, and we incorporated music theory into the algorithm. It isn’t a neural network, and it produces beautiful music.

    I mean, when we find a tiny little model for predicting whether someone will have a seizure, I think that’s beautiful, because it’s a very small pattern that someone can appreciate and use. And music is all about patterns. Poetry is all about patterns. They’re all beautiful patterns.

    See the full article here.

    Comments are invited and will be appreciated, especially if the reader finds any errors which I can correct. Use “Reply”.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 3:00 pm on April 13, 2023 Permalink | Reply
    Tags: "A Sharper Look at the First Image of a Black Hole", "PRIMO" was developed by EHT members Lia Medeiros (Institute for Advanced Study) and Dimitrios Psaltis (Georgia Tech) and Tod Lauer (NSF’s NOIRLab) and Feryal Ozel (Georgia Tech)., "PRIMO": principal-component interferometric modeling, , , , , , Machine learning, , The Event Horizon Telescope Collaboration,   

    From The NSF/ NOAO/NOIRLab (National Optical-Infrared Astronomy Research Laboratory) : “A Sharper Look at the First Image of a Black Hole” 

    From The NSF/ NOAO/NOIRLab (National Optical-Infrared Astronomy Research Laboratory)

    4.13.23
    Tod Lauer
    NSF’s NOIRLab
    Email: tod.lauer@noirlab.edu

    Charles Blue
    NSF’s NOIRLab
    Tel: +1 202 236 6324
    Email: charles.blue@noirlab.edu

    Machine learning reconstructs new image of Messier 87* from Event Horizon Telescope data

    1
    A team of researchers, including an astronomer with NSF’s NOIRLab, has developed a new machine-learning technique to enhance the fidelity and sharpness of radio interferometry images. To demonstrate the power of their new approach, which is called PRIMO [principal-component interferometric modeling], the team created a new, high-fidelity version of the iconic Event Horizon Telescope’s image of the supermassive black hole at the center of Messier 87, a giant elliptical galaxy located 55 million light-years from Earth. The image of the Messier 87* supermassive black hole originally published by the EHT collaboration in 2019 (left); and a new image generated by the PRIMO algorithm using the same data set (right). Credit: L. Medeiros (Institute for Advanced Study), D. Psaltis (Georgia Tech), T. Lauer (NSF’s NOIRLab), and F. Ozel (Georgia Tech)

    The iconic image of the supermassive black hole at the center of Messier 87 has received its first official makeover, thanks to a new machine-learning technique known as “PRIMO” [principal-component interferometric modeling]. This new image better illustrates the full extent of the object’s dark central region and the surprisingly narrow outer ring. To achieve this result, a team of researchers used the original 2017 data obtained by the Event Horizon Telescope (EHT) collaboration and created a new image that, for the first time, represents the full resolution of the EHT. [1]

    PRIMO, which stands for principal-component interferometric modeling, was developed by EHT members Lia Medeiros (Institute for Advanced Study), Dimitrios Psaltis (Georgia Tech), Tod Lauer (NSF’s NOIRLab), and Feryal Ozel (Georgia Tech). A paper describing their work is published in The Astrophysical Journal Letters [below].


    PRIMO Black Hole Simulations. Overview of simulations that were generated for the training set of the PRIMO algorithm.

    In 2017 the EHT collaboration used a network of seven radio telescopes at different locations around the world to form an Earth-sized virtual telescope with the power and resolution capable of observing the “shadow” of a black hole’s event horizon. [2] Though this technique allowed astronomers to see remarkably fine details, it lacked the collecting power of an actual Earth-sized telescope, leaving gaps in the data. The researchers’ new machine-learning technique helped fill in those gaps.

    “With our new machine-learning technique, PRIMO, we were able to achieve the maximum resolution of the current array,” says lead author Lia Medeiros. “Since we cannot study black holes up close, the detail in an image plays a critical role in our ability to understand its behavior. The width of the ring in the image is now smaller by about a factor of two, which will be a powerful constraint for our theoretical models and tests of gravity.”

    PRIMO relies on a branch of machine learning known as “dictionary learning”, which teaches computers certain rules by exposing them to thousands of examples. The power of this type of machine learning has been demonstrated in numerous ways, from creating Renaissance-style works of art to completing the unfinished work of Beethoven.

    To apply PRIMO to the EHT image of Messier 87*, the researchers had computers analyze over 30,000 high-fidelity simulated images of gas accreting onto a black hole, looking for common patterns across them. The results were then blended to provide a highly accurate representation of the EHT observations while simultaneously giving a high-fidelity estimate of the image’s missing structure. A paper on the algorithm itself was published earlier in The Astrophysical Journal [below], on 3 February 2023.
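The published PRIMO pipeline is far more sophisticated, but its core move — learn a compact set of components from many simulations, then fit those components to sparse measurements so the learned structure fills the gaps — can be sketched with plain PCA on synthetic data. Everything below (dimensions, ranks, the random "simulations") is a made-up stand-in, not the real algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for the training set: 500 simulated "images" (flattened to
# 64 values) drawn from a low-dimensional family, mimicking how
# accretion-flow simulations share common structure.
n_train, dim, rank = 500, 64, 5
basis_true = rng.normal(size=(rank, dim))
train = rng.normal(size=(n_train, rank)) @ basis_true

# "Dictionary learning" here is ordinary PCA via SVD: find the principal
# components the simulations have in common.
mean = train.mean(axis=0)
_, _, components = np.linalg.svd(train - mean, full_matrices=False)
dictionary = components[:rank]                       # (rank, dim)

# An observation with gaps, like sparse interferometric coverage:
truth = rng.normal(size=rank) @ basis_true
observed = rng.choice(dim, size=24, replace=False)   # only 24 of 64 "pixels"

# Fit dictionary coefficients to the observed entries alone, then
# reconstruct everywhere, filling the gaps from learned structure.
A = dictionary[:, observed].T
coeffs, *_ = np.linalg.lstsq(A, (truth - mean)[observed], rcond=None)
recon = mean + coeffs @ dictionary

err = np.linalg.norm(recon - truth) / np.linalg.norm(truth)
print(f"relative reconstruction error: {err:.2e}")
```

Because the hidden "image" really does live in the low-dimensional family the dictionary was trained on, 24 measurements suffice to pin down all 64 values — the analogue of PRIMO compensating for the EHT's incomplete coverage.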

    “PRIMO is a new approach to the difficult task of constructing images from EHT observations,” said Lauer. “It provides a way to compensate for the missing information about the object being observed, which is required to generate the image that would have been seen using a single gigantic radio telescope the size of the Earth.”

    The team confirmed that the newly rendered image is consistent with the EHT data and with theoretical expectations, including the bright ring of emission expected to be produced by hot gas falling into the black hole.

    The new image should lead to more accurate determinations of the mass of the Messier 87 black hole and the physical parameters that determine its present appearance. The data also provide an opportunity for researchers to place greater constraints on alternatives to the event horizon (based on the darker central brightness depression) and perform more robust tests of gravity (based on the narrower ring size). PRIMO can also be applied to additional EHT observations, including those of Sagittarius A*, the central black hole in our own Milky Way Galaxy.

    “The 2019 image was just the beginning,” said Medeiros. “If a picture is worth a thousand words, the data underlying that image have many more stories to tell. PRIMO will continue to be a critical tool in extracting such insights.”

    More information

    [1] One of the telescopes comprising the EHT, the South Pole Telescope, was not part of the Messier 87 observation. Since that time, the EHT has added additional telescopes to the array.

    [2] The shadow of a black hole is the closest we can come to an image of the black hole itself, a completely dark object from which light cannot escape. In the case of Messier 87, the black hole’s boundary — the event horizon from which the EHT takes its name — is around 2.5 times smaller than the shadow it casts and measures just under 40 billion kilometers across.

    Development of the PRIMO algorithm was enabled through the support of a National Science Foundation Astronomy and Astrophysics Postdoctoral Fellowship.

    The Astrophysical Journal Letters
    The Astrophysical Journal
    See the science papers for instructive material with images.

    See the full article here.

    Comments are invited and will be appreciated, especially if the reader finds any errors which I can correct. Use “Reply”.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    What is NOIRLab?

    NSF’s NOIRLab (National Optical-Infrared Astronomy Research Laboratory), the US center for ground-based optical-infrared astronomy, operates the international Gemini Observatory (a facility of National Science Foundation, NRC–Canada, ANID–Chile, MCTIC–Brazil, MINCyT–Argentina, and Korea Astronomy and Space Science Institute [한국천문연구원] (KR)), NOAO Kitt Peak National Observatory (KPNO), Cerro Tololo Inter-American Observatory(CL) (CTIO), the Community Science and Data Center (CSDC), and Vera C. Rubin Observatory (in cooperation with DOE’s SLAC National Accelerator Laboratory). It is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with NSF and is headquartered in Tucson, Arizona. The astronomical community is honored to have the opportunity to conduct astronomical research on Iolkam Du’ag (Kitt Peak) in Arizona, on Mauna Kea in Hawaiʻi, and on Cerro Tololo and Cerro Pachón in Chile. We recognize and acknowledge the very significant cultural role and reverence that these sites have to the Tohono O’odham Nation, to the Native Hawaiian community, and to the local communities in Chile, respectively.

    National Science Foundation NOIRLab’s Gemini North Frederick C Gillett telescope at Mauna Kea Observatory in Hawai’i Altitude 4,213 m (13,822 ft).

    The National Science Foundation NOIRLab National Optical Astronomy Observatory Gemini South telescope on the summit of Cerro Pachón at an altitude of 7200 feet. There are currently two telescopes commissioned on Cerro Pachón, Gemini South and the SOAR Telescope — Southern Astrophysics Research Telescope. A third, the Vera C. Rubin Observatory, is under construction.

    The National Science Foundation NOIRLab National Optical Astronomy Observatory Vera C. Rubin Observatory [LSST] Telescope currently under construction on the El Peñón peak at Cerro Pachón Chile, a 2,682-meter-high mountain in Coquimbo Region, in northern Chile, alongside the existing NSF NOIRLab NOAO The Association of Universities for Research in Astronomy (AURA) Gemini South Telescope and Southern Astrophysical Research Telescope.

    National Science Foundation NOIRLab National Optical Astronomy Observatory Kitt Peak National Observatory on Kitt Peak of the Quinlan Mountains in the Arizona-Sonoran Desert on the Tohono O’odham Nation, 88 kilometers (55 mi) west-southwest of Tucson, Arizona, Altitude 2,096 m (6,877 ft), annotated.

    NSF NOIRLab NOAO Cerro Tololo Inter-American Observatory(CL) approximately 80 km to the East of La Serena, Chile, at an altitude of 2200 meters.

    The NOAO-Community Science and Data Center

    This work is supported in part by The Department of Energy Office of Science.
    The Dark Energy Survey is a collaboration of more than 400 scientists from 26 institutions in seven countries. Funding for the DES Projects has been provided by the US Department of Energy Office of Science, The National Science Foundation, Ministry of Science and Education of Spain, The Science and Technology Facilities Council (UK), The Higher Education Funding Council for England (UK), The Swiss Federal Institute of Technology ETH Zürich [Eidgenössische Technische Hochschule Zürich](CH), The National Center for Supercomputing Applications at The University of Illinois at Urbana-Champaign, The Kavli Institute of Cosmological Physics at The University of Chicago, Center for Cosmology and AstroParticle Physics at The Ohio State University, Mitchell Institute for Fundamental Physics and Astronomy at The Texas A&M University, Brazil Funding Authority for Studies and Projects for Scientific and Technological Development [Financiadora de Estudos e Projetos](BR), Carlos Chagas Filho Foundation for Research Support of the State of Rio de Janeiro [Fundação Carlos Chagas Filho de Amparo à Pesquisa do Estado do Rio de Janeiro](BR), Ministry of Science, Technology, Innovation and Communications [Ministério da Ciência, Tecnologia, Inovação e Comunicações](BR), German Research Foundation [Deutsche Forschungsgemeinschaft](DE), and the collaborating institutions in the Dark Energy Survey.

    The National Center for Supercomputing Applications at The University of Illinois at Urbana-Champaign provides
    supercomputing and advanced digital resources for the nation’s science enterprise. At NCSA, The University of Illinois faculty, staff, students, and collaborators from around the globe use advanced digital resources to address research grand challenges for the benefit of science and society. NCSA has been advancing one-third of the Fortune 50® for more than 30 years by bringing industry, researchers, and students together to solve grand challenges at rapid speed and scale.
    The DOE Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time.

     
  • richardmitnick 7:23 am on April 5, 2023 Permalink | Reply
    Tags: , "Scientists use computational modeling to design 'ultrastable' materials", A really good MOF material for catalysis or for gas storage would have a very open structure., , , , , , Machine learning, MOF structures are good candidates for applications such as converting methane gas to methanol., Recently scientists have also begun to explore using MOFs to deliver drugs or imaging agents within the body., Scientists are interested in MOFs because they have a porous structure that makes them well-suited to applications involving gases., Scientists can control the overall structure of the metal organic framework by picking and choosing how you assemble different components., ,   

    From The School of Engineering At The Massachusetts Institute of Technology: “Scientists use computational modeling to design ‘ultrastable’ materials” 

    From The School of Engineering

    At

    The Massachusetts Institute of Technology

    4.4.23
    Anne Trafton

    1
    Materials known as metal-organic frameworks (MOFs) have a rigid, cage-like structure that lends itself to a variety of applications, from gas storage to drug delivery. Image: David Kastner.

    2
    MIT computational chemists developed a model that can analyze the features of a metal-organic framework structure and predict if it will be stable enough to be useful. Image: Courtesy of the researchers. [From 2021 paper below.]

    Materials known as metal-organic frameworks (MOFs) have a rigid, cage-like structure that lends itself to a variety of applications, from gas storage to drug delivery. By changing the building blocks that go into the materials, or the way they are arranged, researchers can design MOFs suited to different uses.

    However, not all possible MOF structures are stable enough to be deployed for applications such as catalyzing reactions or storing gases. To help researchers figure out which MOF structures might work best for a given application, MIT researchers have developed a computational approach that allows them to predict which structures will be the most stable.

    Using their computational model, the researchers have identified about 10,000 possible MOF structures that they classify as “ultrastable,” making them good candidates for applications such as converting methane gas to methanol.

    “When people come up with hypothetical MOF materials, they don’t necessarily know beforehand how stable that material is,” says Heather Kulik, an MIT associate professor of chemistry and chemical engineering, and the senior author of the study. “We used data and our machine-learning models to come up with building blocks that were expected to have high stability, and when we recombined those in ways that were considerably more diverse, our dataset was enriched with materials with higher stability than any previous set of hypothetical materials people had come up with.”

    MIT graduate student Aditya Nandy is the lead author of the paper, which appears today in the journal Matter [below]. Other authors are MIT postdoc Shuwen Yue, graduate students Changhwan Oh and Gianmarco Terrones, Chenru Duan PhD ’22, and Yongchul G. Chung, an associate professor of chemical and biomolecular engineering at Pusan National University.

    Modeling MOFs

    Scientists are interested in MOFs because they have a porous structure that makes them well-suited to applications involving gases, such as gas storage, separating similar gases from each other, or converting one gas to another. Recently, scientists have also begun to explore using them to deliver drugs or imaging agents within the body.

    The two main components of MOFs are secondary building units — organic molecules that incorporate metal atoms such as zinc or copper — and organic molecules called linkers, which connect the secondary building units. These parts can be combined together in many different ways, just like LEGO building blocks, Kulik says.

    “Because there are so many different types of LEGO blocks and ways you can assemble them, it gives rise to a combinatorial explosion of different possible metal organic framework materials,” she says. “You can really control the overall structure of the metal organic framework by picking and choosing how you assemble different components.”

    Currently, the most common way to design MOFs is through trial-and-error. More recently, researchers have begun to try computational approaches to designing these materials. Most such studies have been based on predictions of how well the material will work for a particular application, but they don’t always take into account the stability of the resulting material.

    “A really good MOF material for catalysis or for gas storage would have a very open structure, but once you have this open structure, it may be really hard to make sure that that material is also stable under long-term use,” Kulik says.

    In a 2021 study [Journal of the American Chemical Society (below)], Kulik reported a new model that she created by mining a few thousand papers on MOFs to find data on the temperature at which a given MOF would break down and whether particular MOFs can withstand the conditions needed to remove solvents used to synthesize them. She trained the computer model to predict those two features — known as thermal stability and activation stability — based on the molecules’ structure.

    In the new study, Kulik and her students used that model to identify about 500 MOFs with very high stability. Then, they broke those MOFs down into their most common building blocks — 120 secondary building units and 16 linkers.

    By recombining these building blocks using about 750 different types of architectures, including many that are not usually included in such models, the researchers generated about 50,000 new MOF structures.
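The scale of that recombination is easy to see with the paper's counts (120 secondary building units, 16 linkers, about 750 architectures). The sketch below enumerates naive combinations and applies a placeholder compatibility filter; the filter is a toy stand-in, since the real geometric and chemical assembly rules are not described in code here:

```python
import zlib
from itertools import islice, product

# Counts from the study: blocks mined from ~500 experimentally stable MOFs.
sbus = [f"sbu_{i}" for i in range(120)]       # secondary building units
linkers = [f"linker_{i}" for i in range(16)]  # organic linkers
nets = [f"net_{i}" for i in range(750)]       # crystal architectures

total = len(sbus) * len(linkers) * len(nets)
print(f"{total:,} naive combinations")  # 1,440,000

def is_compatible(sbu, linker, net):
    # Toy stand-in for real assembly rules; passes roughly 1 combo in 29.
    return zlib.crc32("|".join((sbu, linker, net)).encode()) % 29 == 0

# Lazily walk the combinatorial space, keeping only "valid" assemblies.
candidates = (c for c in product(sbus, linkers, nets) if is_compatible(*c))
first_few = list(islice(candidates, 3))
print(first_few)
```

The point of the generator is that the 1.44-million-combination space never has to be materialized at once; structures are proposed, filtered, and scored as a stream, which is how one gets from millions of naive combinations down to the roughly 50,000 structures the team actually evaluated.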

    “One of the things that was unique about our set was that we looked at a lot more diverse crystal symmetries than had ever been looked at before, but [we did so] using these building blocks that had only come from experimentally synthesized highly stable MOFs,” Kulik says.

    Ultrastability

    The researchers then used their computational models to predict how stable each of these 50,000 structures would be, and identified about 10,000 that they deemed ultrastable, both for thermal stability and activation stability.

    They also screened the structures for their “deliverable capacity” — a measure of a material’s ability to store and release gases. For this analysis, the researchers used methane gas, because capturing methane could be useful for removing it from the atmosphere or converting it to methanol. They found that the 10,000 ultrastable materials they identified had good deliverable capacities for methane and they were also mechanically stable, as measured by their predicted elastic modulus.
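That multi-criteria screen can be sketched as a simple boolean filter over predicted properties. The property values and thresholds below are entirely made up for illustration; in the study they come from the trained stability models and grand-canonical simulations, and the actual cutoffs differ:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000  # hypothetical generated structures

# Placeholder predictions per structure (invented distributions):
thermal_c = rng.normal(350, 60, n)        # thermal breakdown temperature, C
activation_p = rng.uniform(0, 1, n)       # prob. of surviving activation
capacity = rng.normal(150, 40, n)         # methane deliverable capacity
modulus_gpa = rng.lognormal(1.5, 0.5, n)  # predicted elastic modulus, GPa

# "Ultrastable" = high on both stability axes; then keep only candidates
# that are mechanically sound and store enough gas to be useful.
ultrastable = (thermal_c > 400) & (activation_p > 0.9)
useful = ultrastable & (capacity > 120) & (modulus_gpa > 5)
print(f"{ultrastable.sum()} ultrastable, {useful.sum()} also useful")
```

Because each predictor is near-zero-cost compared with synthesis or even simulation, this kind of vectorized filter is what makes screening tens of thousands of hypothetical structures practical.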

    “Designing a MOF requires consideration of many types of stability, but our models enable a near-zero-cost prediction of thermal and activation stability,” Nandy says. “By also understanding the mechanical stability of these materials, we provide a new way to identify promising materials.”

    The researchers also identified certain building blocks that tend to produce more stable materials. One of the secondary building units with the best stability was a molecule that contains gadolinium, a rare-earth metal. Another was a cobalt-containing porphyrin — a large organic molecule made of four interconnected rings.

    Students in Kulik’s lab are now working on synthesizing some of these MOF structures and testing them in the lab for their stability and potential catalytic ability and gas separation ability. The researchers have also made their database of ultrastable materials available for researchers interested in testing them for their own scientific applications.

    The research was funded by the U.S. Defense Advanced Research Projects Agency, a National Science Foundation Graduate Research Fellowship, the Office of Naval Research, the Department of Energy, an MIT Portugal Seed Fund, and the National Research Foundation of Korea.

    Matter
    Journal of the American Chemical Society 2021

    Graphical abstract
    2

    See the full article here.

    Comments are invited and will be appreciated, especially if the reader finds any errors which I can correct. Use “Reply”.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The MIT School of Engineering is one of the five schools of the Massachusetts Institute of Technology, located in Cambridge, Massachusetts. The School of Engineering has eight academic departments and two interdisciplinary institutes. The School grants SB, MEng, SM, engineer’s degrees, and PhD or ScD degrees. The school is the largest at MIT as measured by undergraduate and graduate enrollments and faculty members.

    Departments and initiatives:

    Departments:

    Aeronautics and Astronautics (Course 16)
    Biological Engineering (Course 20)
    Chemical Engineering (Course 10)
    Civil and Environmental Engineering (Course 1)
    Electrical Engineering and Computer Science (Course 6, joint department with MIT Schwarzman College of Computing)
    Materials Science and Engineering (Course 3)
    Mechanical Engineering (Course 2)
    Nuclear Science and Engineering (Course 22)

    Institutes:

    Institute for Medical Engineering and Science
    Health Sciences and Technology program (joint MIT-Harvard, “HST” in the course catalog)

    (Departments and degree programs are commonly referred to by course catalog numbers on campus.)

    Laboratories and research centers

    Abdul Latif Jameel Water and Food Systems Lab
    Center for Advanced Nuclear Energy Systems
    Center for Computational Engineering
    Center for Materials Science and Engineering
    Center for Ocean Engineering
    Center for Transportation and Logistics
    Industrial Performance Center
    Institute for Soldier Nanotechnologies
    Koch Institute for Integrative Cancer Research
    Laboratory for Information and Decision Systems
    Laboratory for Manufacturing and Productivity
    Materials Processing Center
    Microsystems Technology Laboratories
    MIT Lincoln Laboratory Beaver Works Center
    Novartis-MIT Center for Continuous Manufacturing
    Ocean Engineering Design Laboratory
    Research Laboratory of Electronics
    SMART Center
    Sociotechnical Systems Research Center
    Tata Center for Technology and Design

    MIT Seal

    USPS “Forever” postage stamps celebrating Innovation at MIT.

    Massachusetts Institute of Technology-Haystack Observatory Westford, Massachusetts, USA, Altitude 131 m (430 ft).

    4

    The Computer Science and Artificial Intelligence Laboratory (CSAIL)

    The Kavli Institute For Astrophysics and Space Research

    MIT’s Institute for Medical Engineering and Science is a research institute at the Massachusetts Institute of Technology

    The MIT Laboratory for Nuclear Science

    The MIT Media Lab

    The MIT Sloan School of Management

    Spectrum

    MIT.nano

    MIT Campus

    The Massachusetts Institute of Technology is a private land-grant research university in Cambridge, Massachusetts. The institute has an urban campus that extends more than a mile (1.6 km) alongside the Charles River. The institute also encompasses a number of major off-campus facilities such as the MIT Lincoln Laboratory, the MIT Bates Research and Engineering Center, and the Haystack Observatory, as well as affiliated laboratories such as the Broad Institute of MIT and Harvard and Whitehead Institute.

    Founded in 1861 in response to the increasing industrialization of the United States, Massachusetts Institute of Technology adopted a European polytechnic university model and stressed laboratory instruction in applied science and engineering. It has since played a key role in the development of many aspects of modern science, engineering, mathematics, and technology, and is widely known for its innovation and academic strength. It is frequently regarded as one of the most prestigious universities in the world.

    As of December 2020, 97 Nobel laureates, 26 Turing Award winners, and 8 Fields Medalists have been affiliated with MIT as alumni, faculty members, or researchers. In addition, 58 National Medal of Science recipients, 29 National Medals of Technology and Innovation recipients, 50 MacArthur Fellows, 80 Marshall Scholars, 3 Mitchell Scholars, 22 Schwarzman Scholars, 41 astronauts, and 16 Chief Scientists of the U.S. Air Force have been affiliated with The Massachusetts Institute of Technology. The university also has a strong entrepreneurial culture and MIT alumni have founded or co-founded many notable companies. Massachusetts Institute of Technology is a member of the Association of American Universities.

    Foundation and vision

    In 1859, a proposal was submitted to the Massachusetts General Court to use newly filled lands in Back Bay, Boston for a “Conservatory of Art and Science”, but the proposal failed. A charter for the incorporation of the Massachusetts Institute of Technology, proposed by William Barton Rogers, was signed by John Albion Andrew, the governor of Massachusetts, on April 10, 1861.

    Rogers, a professor from the University of Virginia, wanted to establish an institution to address rapid scientific and technological advances. He did not wish to found a professional school, but a combination with elements of both professional and liberal education, proposing that:

    “The true and only practicable object of a polytechnic school is, as I conceive, the teaching, not of the minute details and manipulations of the arts, which can be done only in the workshop, but the inculcation of those scientific principles which form the basis and explanation of them, and along with this, a full and methodical review of all their leading processes and operations in connection with physical laws.”

    The Rogers Plan reflected the German research university model, emphasizing an independent faculty engaged in research, as well as instruction oriented around seminars and laboratories.

    Early developments

    Two days after The Massachusetts Institute of Technology was chartered, the first battle of the Civil War broke out. After a long delay through the war years, MIT’s first classes were held in the Mercantile Building in Boston in 1865. The new institute was founded as part of the Morrill Land-Grant Colleges Act to fund institutions “to promote the liberal and practical education of the industrial classes” and was a land-grant school. In 1863, under the same act, the Commonwealth of Massachusetts founded the Massachusetts Agricultural College, which developed into the University of Massachusetts Amherst. In 1866, the proceeds from land sales went toward new buildings in the Back Bay.

    The Massachusetts Institute of Technology was informally called “Boston Tech”. The institute adopted the European polytechnic university model and emphasized laboratory instruction from an early date. Despite chronic financial problems, the institute saw growth in the last two decades of the 19th century under President Francis Amasa Walker. Programs in electrical, chemical, marine, and sanitary engineering were introduced, new buildings were built, and the size of the student body increased to more than one thousand.

    The curriculum drifted to a vocational emphasis, with less focus on theoretical science. The fledgling school still suffered from chronic financial shortages which diverted the attention of the MIT leadership. During these “Boston Tech” years, Massachusetts Institute of Technology faculty and alumni rebuffed Harvard University president (and former MIT faculty) Charles W. Eliot’s repeated attempts to merge MIT with Harvard College’s Lawrence Scientific School. There would be at least six attempts to absorb MIT into Harvard. In its cramped Back Bay location, MIT could not afford to expand its overcrowded facilities, driving a desperate search for a new campus and funding. Eventually, the MIT Corporation approved a formal agreement to merge with Harvard, over the vehement objections of MIT faculty, students, and alumni. However, a 1917 decision by the Massachusetts Supreme Judicial Court effectively put an end to the merger scheme.

    In 1916, The Massachusetts Institute of Technology administration and the MIT charter crossed the Charles River on the ceremonial barge Bucentaur built for the occasion, to signify MIT’s move to a spacious new campus largely consisting of filled land on a one-mile-long (1.6 km) tract along the Cambridge side of the Charles River. The neoclassical “New Technology” campus was designed by William W. Bosworth and had been funded largely by anonymous donations from a mysterious “Mr. Smith”, starting in 1912. In January 1920, the donor was revealed to be the industrialist George Eastman of Rochester, New York, who had invented methods of film production and processing, and founded Eastman Kodak. Between 1912 and 1920, Eastman donated $20 million ($236.6 million in 2015 dollars) in cash and Kodak stock to MIT.

    Curricular reforms

    In the 1930s, President Karl Taylor Compton and Vice-President (effectively Provost) Vannevar Bush emphasized the importance of pure sciences like physics and chemistry and reduced the vocational practice required in shops and drafting studios. The Compton reforms “renewed confidence in the ability of the Institute to develop leadership in science as well as in engineering”. Unlike Ivy League schools, Massachusetts Institute of Technology catered more to middle-class families, and depended more on tuition than on endowments or grants for its funding. The school was elected to the Association of American Universities in 1934.

    Still, as late as 1949, the Lewis Committee lamented in its report on the state of education at The Massachusetts Institute of Technology that “the Institute is widely conceived as basically a vocational school”, a “partly unjustified” perception the committee sought to change. The report comprehensively reviewed the undergraduate curriculum, recommended offering a broader education, and warned against letting engineering and government-sponsored research detract from the sciences and humanities. The School of Humanities, Arts, and Social Sciences and the MIT Sloan School of Management were formed in 1950 to compete with the powerful Schools of Science and Engineering. Previously marginalized faculties in the areas of economics, management, political science, and linguistics emerged into cohesive and assertive departments by attracting respected professors and launching competitive graduate programs. The School of Humanities, Arts, and Social Sciences continued to develop under the successive terms of the more humanistically oriented presidents Howard W. Johnson and Jerome Wiesner between 1966 and 1980.

    The Massachusetts Institute of Technology’s involvement in military science surged during World War II. In 1941, Vannevar Bush was appointed head of the federal Office of Scientific Research and Development and directed funding to only a select group of universities, including MIT. Engineers and scientists from across the country gathered at the Massachusetts Institute of Technology’s Radiation Laboratory, established in 1940 to assist the British military in developing microwave radar. The work done there significantly affected both the war and subsequent research in the area. Other defense projects included gyroscope-based and other complex control systems for gunsight, bombsight, and inertial navigation under Charles Stark Draper’s Instrumentation Laboratory; the development of a digital computer for flight simulations under Project Whirlwind; and high-speed and high-altitude photography under Harold Edgerton. By the end of the war, The Massachusetts Institute of Technology had become the nation’s largest wartime R&D contractor (attracting some criticism of Bush), employing nearly 4,000 in the Radiation Laboratory alone and receiving in excess of $100 million ($1.2 billion in 2015 dollars) before 1946. Work on defense projects continued even after the war ended. Post-war government-sponsored research at MIT included SAGE and guidance systems for ballistic missiles and Project Apollo.

    These activities affected The Massachusetts Institute of Technology profoundly. A 1949 report noted the lack of “any great slackening in the pace of life at the Institute” to match the return to peacetime, remembering the “academic tranquility of the prewar years”, though acknowledging the significant contributions of military research to the increased emphasis on graduate education and rapid growth of personnel and facilities. The faculty doubled and the graduate student body quintupled during the terms of Karl Taylor Compton, president of The Massachusetts Institute of Technology between 1930 and 1948; James Rhyne Killian, president from 1948 to 1957; and Julius Adams Stratton, chancellor from 1952 to 1957, whose institution-building strategies shaped the expanding university. By the 1950s, The Massachusetts Institute of Technology no longer simply benefited the industries with which it had worked for three decades, and it had developed closer working relationships with new patrons, philanthropic foundations and the federal government.

    In the late 1960s and early 1970s, student and faculty activists protested against the Vietnam War and the Massachusetts Institute of Technology’s defense research. In this period, the Massachusetts Institute of Technology’s various departments were researching helicopters, smart bombs, and counterinsurgency techniques for the war in Vietnam, as well as guidance systems for nuclear missiles. The Union of Concerned Scientists was founded on March 4, 1969, during a meeting of faculty members and students seeking to shift the emphasis from military research toward environmental and social problems. The Massachusetts Institute of Technology ultimately divested itself from the Instrumentation Laboratory and moved all classified research off-campus to the MIT Lincoln Laboratory facility in 1973 in response to the protests. The student body, faculty, and administration remained comparatively unpolarized during what was a tumultuous time for many other universities. Johnson was seen to be highly successful in leading his institution to “greater strength and unity” after these times of turmoil. However, six Massachusetts Institute of Technology students were sentenced to prison terms at this time, and some former student leaders, such as Michael Albert and George Katsiaficas, remain indignant about MIT’s role in military research and its suppression of these protests. (Richard Leacock’s film, November Actions, records some of these tumultuous events.)

    In the 1980s, there was more controversy at The Massachusetts Institute of Technology over its involvement in SDI (space weaponry) and CBW (chemical and biological warfare) research. More recently, The Massachusetts Institute of Technology’s research for the military has included work on robots, drones and ‘battle suits’.

    Recent history

    The Massachusetts Institute of Technology has kept pace with and helped to advance the digital age. In addition to developing the predecessors to modern computing and networking technologies, students, staff, and faculty members at Project MAC, the Artificial Intelligence Laboratory, and the Tech Model Railroad Club wrote some of the earliest interactive computer video games like Spacewar! and created much of modern hacker slang and culture. Several major computer-related organizations have originated at MIT since the 1980s: Richard Stallman’s GNU Project and the subsequent Free Software Foundation were founded in the mid-1980s at the AI Lab; the MIT Media Lab was founded in 1985 by Nicholas Negroponte and Jerome Wiesner to promote research into novel uses of computer technology; the World Wide Web Consortium standards organization was founded at the Laboratory for Computer Science in 1994 by Tim Berners-Lee; the MIT OpenCourseWare project has made course materials for over 2,000 Massachusetts Institute of Technology classes available online free of charge since 2002; and the One Laptop per Child initiative to expand computer education and connectivity to children worldwide was launched in 2005.

    The Massachusetts Institute of Technology was named a sea-grant college in 1976 to support its programs in oceanography and marine sciences and was named a space-grant college in 1989 to support its aeronautics and astronautics programs. Despite diminishing government financial support over the past quarter century, MIT launched several successful development campaigns to significantly expand the campus: new dormitories and athletics buildings on west campus; the Tang Center for Management Education; several buildings in the northeast corner of campus supporting research into biology, brain and cognitive sciences, genomics, biotechnology, and cancer research; and a number of new “backlot” buildings on Vassar Street including the Stata Center. Construction on campus in the 2000s included expansions of the Media Lab, the Sloan School’s eastern campus, and graduate residences in the northwest. In 2006, President Hockfield launched the MIT Energy Research Council to investigate the interdisciplinary challenges posed by increasing global energy consumption.

    In 2001, inspired by the open source and open access movements, The Massachusetts Institute of Technology launched “OpenCourseWare” to make the lecture notes, problem sets, syllabi, exams, and lectures from the great majority of its courses available online for no charge, though without any formal accreditation for coursework completed. While the cost of supporting and hosting the project is high, OCW expanded in 2005 to include other universities as a part of the OpenCourseWare Consortium, which currently includes more than 250 academic institutions with content available in at least six languages. In 2011, The Massachusetts Institute of Technology announced it would offer formal certification (but not credits or degrees) to online participants completing coursework in its “MITx” program, for a modest fee. The “edX” online platform supporting MITx was initially developed in partnership with Harvard and its analogous “Harvardx” initiative. The courseware platform is open source, and other universities have already joined and added their own course content. In March 2009 the Massachusetts Institute of Technology faculty adopted an open-access policy to make its scholarship publicly accessible online.

    The Massachusetts Institute of Technology has its own police force. Three days after the Boston Marathon bombing of April 2013, MIT Police patrol officer Sean Collier was fatally shot by the suspects Dzhokhar and Tamerlan Tsarnaev, setting off a violent manhunt that shut down the campus and much of the Boston metropolitan area for a day. One week later, Collier’s memorial service was attended by more than 10,000 people, in a ceremony hosted by the Massachusetts Institute of Technology community with thousands of police officers from the New England region and Canada. On November 25, 2013, The Massachusetts Institute of Technology announced the creation of the Collier Medal, to be awarded annually to “an individual or group that embodies the character and qualities that Officer Collier exhibited as a member of The Massachusetts Institute of Technology community and in all aspects of his life”. The announcement further stated that “Future recipients of the award will include those whose contributions exceed the boundaries of their profession, those who have contributed to building bridges across the community, and those who consistently and selflessly perform acts of kindness”.

    In September 2017, the school announced the creation of an artificial intelligence research lab called the MIT-IBM Watson AI Lab. IBM will spend $240 million over the next decade, and the lab will be staffed by MIT and IBM scientists. In October 2018 MIT announced that it would open a new Schwarzman College of Computing dedicated to the study of artificial intelligence, named after lead donor and The Blackstone Group CEO Stephen Schwarzman. The focus of the new college is to study not just AI, but interdisciplinary AI education, and how AI can be used in fields as diverse as history and biology. The cost of buildings and new faculty for the new college is expected to be $1 billion upon completion.

    The Caltech/MIT Advanced LIGO (aLIGO) was designed and constructed by a team of scientists from the California Institute of Technology, the Massachusetts Institute of Technology, and industrial contractors, and was funded by the National Science Foundation.

    Caltech/MIT Advanced LIGO

    It was designed to open the field of gravitational-wave astronomy through the detection of gravitational waves predicted by general relativity. Gravitational waves were detected for the first time by the LIGO detector in 2015. For contributions to the LIGO detector and the observation of gravitational waves, two Caltech physicists, Kip Thorne and Barry Barish, and Massachusetts Institute of Technology physicist Rainer Weiss won the Nobel Prize in Physics in 2017. Weiss, who is also a Massachusetts Institute of Technology graduate, designed the laser interferometric technique, which served as the essential blueprint for LIGO.

    The mission of The Massachusetts Institute of Technology is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of The Massachusetts Institute of Technology community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

     
  • richardmitnick 9:31 am on January 16, 2023
    Tags: "LAIs": long-acting injectables, "University of Toronto scientists use AI to fast-track drug formulation development", Machine learning, Machine-learning algorithms can be used to predict experimental drug release from long-acting injectables (LAI) and can also help guide the design of new LAIs., Reducing ‘trial and error’ for new drug development, Theoretical and Quantum Chemistry

    From The University of Toronto (CA): “University of Toronto scientists use AI to fast-track drug formulation development” 

    From The University of Toronto (CA)

    1.11.23
    Kate Richards | Leslie Dan Faculty of Pharmacy

    1
    Researchers Christine Allen and Alán Aspuru-Guzik used machine learning to predict experimental drug release from long-acting injectables (photo by Steve Southon)

    In a bid to reduce the time and cost associated with developing promising new medicines, University of Toronto scientists have successfully tested the use of artificial intelligence to guide the design of long-acting injectable drug formulations.

    The study, published this week in Nature Communications [below], was led by Professor Christine Allen in the Leslie Dan Faculty of Pharmacy and Alán Aspuru-Guzik in the departments of chemistry and computer science in the Faculty of Arts & Science.

    Fig. 1: Schematic demonstrating traditional and data-driven formulation development approaches for long-acting injectables (LAIs).
    2
    [a] Selected routes of administration for FDA-approved LAI formulations. [b] Typical trial-and-error loop commonly employed during the development of LAIs termed “traditional LAI formulation development”. [c] Workflow employed in this study to train and analyze machine learning (ML) models to accelerate the design of new LAI systems, termed “Data-driven LAI formulation development”.

    Their multidisciplinary research shows that machine-learning algorithms can be used to predict experimental drug release from long-acting injectables (LAI) and can also help guide the design of new LAIs.

    “This study takes a critical step towards data-driven drug formulation development with an emphasis on long-acting injectables,” said Allen, who is a member of U of T’s Acceleration Consortium, a global initiative that uses artificial intelligence and automation to accelerate the discovery of materials and molecules needed for a sustainable future.

    “We’ve seen how machine learning has enabled incredible leap-step advances in the discovery of new molecules that have the potential to become medicines. We are now working to apply the same techniques to help us design better drug formulations and, ultimately, better medicines.”

    Considered one of the most promising therapeutic strategies for the treatment of chronic diseases, long-acting injectables are a class of advanced drug delivery systems that are designed to release their cargo over extended periods of time to achieve a prolonged therapeutic effect. This approach can help patients better adhere to their medication regimen, reduce side effects and increase efficacy when injected close to the site of action in the body.

    However, achieving the optimal amount of drug release over the desired period of time requires the development of a wide array of formulation candidates through extensive and time-consuming experiments. This trial-and-error approach has created a significant bottleneck in LAI development compared to more conventional types of drug formulation.

    “AI is transforming the way we do science. It helps accelerate discovery and optimization. This is a perfect example of a ‘before AI’ and an ‘after AI’ moment and shows how drug delivery can be impacted by this multidisciplinary research,” said Aspuru-Guzik, who is director of the Acceleration Consortium and holds the CIFAR Artificial Intelligence Research Chair at the Vector Institute in Toronto and the Canada 150 Research Chair in Theoretical and Quantum Chemistry.

    3
    From left: Zeqing Bao, PhD trainee in pharmaceutical sciences, and Riley Hickman, PhD trainee in chemistry, are co-authors on the study published in Nature Communications (photo by Steve Southon)

    Reducing ‘trial and error’ for new drug development

    To investigate whether machine-learning tools could accurately predict the rate of drug release, the research team trained and evaluated a series of 11 different models, including multiple linear regression (MLR), random forest (RF), light gradient boosting machine (lightGBM) and neural networks (NN). The data set used to train the selected panel of machine learning models was constructed from previously published studies by the authors and other research groups.

    “Once we had the data set, we split it into two subsets: one used for training the models and one for testing,” said Pauric Bannigan, research associate with the Allen research group at the Leslie Dan Faculty of Pharmacy. “We then asked the models to predict the results of the test set and directly compared with previous experimental data. We found that the tree-based models, and specifically lightGBM, delivered the most accurate predictions.”

    As a next step, the team worked to apply these predictions and illustrate how machine learning models might be used to inform the design of new LAIs by using advanced analytical techniques to extract design criteria from the lightGBM model. This allowed the design of a new LAI formulation for a drug currently used to treat ovarian cancer.

    Expectations around the speed with which new drug formulations are developed have heightened drastically since the onset of the COVID-19 pandemic.

    “We’ve seen in the pandemic that there was a need to design a new formulation in weeks, to catch up with evolving variants. Allowing for new formulations to be developed in a short period of time, relative to what has been done in the past using conventional methods, is crucially important so that patients can benefit from new therapies,” Allen said, explaining that the research team is also investigating using machine learning to support the development of novel mRNA and lipid nanoparticle formulations.

    More robust databases needed for future advances

    The results of the current study signal the potential for machine learning to reduce reliance on trial-and-error testing. However, Allen and the research team identify that the lack of available open-source data sets in pharmaceutical sciences represents a significant challenge to future progress.

    “When we began this project, we were surprised by the lack of data reported across numerous studies using polymeric microparticles,” Allen said. “This meant the studies and the work that went into them couldn’t be leveraged to develop the machine learning models we need to propel advances in this space. There is a real need to create robust databases in pharmaceutical sciences that are open access and available for all so that we can work together to advance the field.”

    To that end, Allen and the research team have published their datasets and code on the open-source platform Zenodo.

    “For this study our goal was to lower the barrier of entry to applying machine learning in pharmaceutical sciences,” Bannigan said. “We’ve made our data sets fully available so others can hopefully build on this work. We want this to be the start of something and not the end of the story for machine learning in drug formulation.”

    The study was supported by the Natural Sciences and Engineering Research Council of Canada, the Defense Advanced Research Projects Agency and the Vector Institute.

    Science paper:
    Nature Communications

    See the full article here.

    Comments are invited and will be appreciated, especially if the reader finds any errors which I can correct. Use “Reply”.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    The University of Toronto (CA) is a public research university in Toronto, Ontario, Canada, located on the grounds that surround Queen’s Park. It was founded by royal charter in 1827 as King’s College, the oldest university in the province of Ontario.

    Originally controlled by the Church of England, the university assumed its present name in 1850 upon becoming a secular institution.

    As a collegiate university, it comprises eleven colleges each with substantial autonomy on financial and institutional affairs and significant differences in character and history. The university also operates two satellite campuses located in Scarborough and Mississauga.

    University of Toronto has evolved into Canada’s leading institution of learning, discovery and knowledge creation. We are proud to be one of the world’s top research-intensive universities, driven to invent and innovate.

    Our students have the opportunity to learn from and work with preeminent thought leaders through our multidisciplinary network of teaching and research faculty, alumni and partners.

    The ideas, innovations and actions of more than 560,000 graduates continue to have a positive impact on the world.

    Academically, the University of Toronto is noted for movements and curricula in literary criticism and communication theory, known collectively as the Toronto School.

    The university was the birthplace of insulin and stem cell research, and was the site of the first electron microscope in North America; the identification of the first black hole, Cygnus X-1; multi-touch technology; and the development of the theory of NP-completeness.

    The university was one of several universities involved in early research of deep learning. It receives the most annual scientific research funding of any Canadian university and is one of two members of the Association of American Universities outside the United States, the other being McGill University (CA).

    The Varsity Blues are the athletic teams that represent the university in intercollegiate league matches, with ties to gridiron football, rowing and ice hockey. The earliest recorded instance of gridiron football occurred at University of Toronto’s University College in November 1861.

    The university’s Hart House is an early example of the North American student centre, simultaneously serving cultural, intellectual, and recreational interests within its large Gothic-revival complex.

    The University of Toronto has educated three Governors General of Canada, four Prime Ministers of Canada, three foreign leaders, and fourteen Justices of the Supreme Court. As of March 2019, ten Nobel laureates, five Turing Award winners, 94 Rhodes Scholars, and one Fields Medalist have been affiliated with the university.

    Early history

    The founding of a colonial college had long been the desire of John Graves Simcoe, the first Lieutenant-Governor of Upper Canada and founder of York, the colonial capital. As a University of Oxford (UK)-educated military commander who had fought in the American Revolutionary War, Simcoe believed a college was needed to counter the spread of republicanism from the United States. The Upper Canada Executive Committee recommended in 1798 that a college be established in York.

    On March 15, 1827, a royal charter was formally issued by King George IV, proclaiming “from this time one College, with the style and privileges of a University … for the education of youth in the principles of the Christian Religion, and for their instruction in the various branches of Science and Literature … to continue for ever, to be called King’s College.” The granting of the charter was largely the result of intense lobbying by John Strachan, the influential Anglican Bishop of Toronto who took office as the college’s first president. The original three-storey Greek Revival school building was built on the present site of Queen’s Park.

    Under Strachan’s stewardship, King’s College was a religious institution closely aligned with the Church of England and the British colonial elite, known as the Family Compact. Reformist politicians opposed the clergy’s control over colonial institutions and fought to have the college secularized. In 1849, after a lengthy and heated debate, the newly elected responsible government of the Province of Canada voted to rename King’s College as the University of Toronto and severed the school’s ties with the church. Having anticipated this decision, the enraged Strachan had resigned a year earlier to open Trinity College as a private Anglican seminary. University College was created as the nondenominational teaching branch of the University of Toronto. During the American Civil War, the threat of a Union blockade on British North America prompted the creation of the University Rifle Corps, which saw battle in resisting the Fenian raids on the Niagara border in 1866. The Corps was part of the Reserve Militia led by Professor Henry Croft.

    Established in 1878, the School of Practical Science was the precursor to the Faculty of Applied Science and Engineering, which has been nicknamed Skule since its earliest days. While the Faculty of Medicine opened in 1843, medical teaching was conducted by proprietary schools from 1853 until 1887, when the faculty absorbed the Toronto School of Medicine. Meanwhile, the university continued to set examinations and confer medical degrees. The university opened the Faculty of Law in 1887, followed by the Faculty of Dentistry in 1888, when the Royal College of Dental Surgeons became an affiliate. Women were first admitted to the university in 1884.

    A devastating fire in 1890 gutted the interior of University College and destroyed 33,000 volumes from the library, but the university restored the building and replenished its library within two years. Over the next two decades, a collegiate system took shape as the university arranged federation with several ecclesiastical colleges, including Strachan’s Trinity College in 1904. The university operated the Royal Conservatory of Music from 1896 to 1991 and the Royal Ontario Museum from 1912 to 1968; both still retain close ties with the university as independent institutions. The University of Toronto Press was founded in 1901 as Canada’s first academic publishing house. The Faculty of Forestry, founded in 1907 with Bernhard Fernow as dean, was Canada’s first university faculty devoted to forest science. In 1910, the Faculty of Education opened its laboratory school, the University of Toronto Schools.

    World wars and post-war years

    The First and Second World Wars curtailed some university activities as undergraduate and graduate men eagerly enlisted. Intercollegiate athletic competitions and the Hart House Debates were suspended, although exhibition and interfaculty games were still held. The David Dunlap Observatory in Richmond Hill opened in 1935, followed by the University of Toronto Institute for Aerospace Studies in 1949. The university opened satellite campuses in Scarborough in 1964 and in Mississauga in 1967. The university’s former affiliated schools at the Ontario Agricultural College and Glendon Hall became fully independent of the University of Toronto, becoming part of the University of Guelph (CA) in 1964 and York University (CA) in 1965, respectively. Beginning in the 1980s, reductions in government funding prompted more rigorous fundraising efforts.

    Since 2000

    In 2000, Kin-Yip Chun was reinstated as a professor of the university after he launched an unsuccessful lawsuit against the university alleging racial discrimination. In 2017, a human rights application was filed against the university by one of its students for allegedly delaying the investigation of a sexual assault and being dismissive of their concerns. In 2018, the university cleared one of its professors of allegations of discrimination and antisemitism in an internal investigation after a complaint was filed by one of its students.

    The University of Toronto was the first Canadian university to amass a financial endowment greater than $1 billion, in 2007. On September 24, 2020, the university announced a $250 million gift to the Faculty of Medicine from businessman and philanthropist James C. Temerty, the largest single philanthropic donation in Canadian history. This broke the previous record for the school, set in 2019 when Gerry Schwartz and Heather Reisman jointly donated $100 million for the creation of a 750,000-square-foot innovation and artificial intelligence centre.

    Research

    Since 1926, the University of Toronto has been a member of the Association of American Universities, a consortium of the leading North American research universities. The university manages by far the largest annual research budget of any university in Canada, with sponsored direct-cost expenditures of $878 million in 2010. In 2018, the University of Toronto was named the top research university in Canada by Research Infosource, with a sponsored research income (external sources of funding) of $1,147.584 million in 2017. In the same year, the university’s faculty averaged a sponsored research income of $428,200, while graduate students averaged $63,700. The federal government was the largest source of funding, with grants from the Canadian Institutes of Health Research, the Natural Sciences and Engineering Research Council, and the Social Sciences and Humanities Research Council amounting to about one-third of the research budget. About eight percent of research funding came from corporations, mostly in the healthcare industry.

    The first practical electron microscope was built by the physics department in 1938. During World War II, the university developed the G-suit, a life-saving garment worn by Allied fighter pilots and later adopted for use by astronauts. Development of the infrared chemiluminescence technique improved analyses of energy behaviours in chemical reactions. In 1963, the asteroid 2104 Toronto was discovered at the David Dunlap Observatory (CA) in Richmond Hill and named after the university. In 1972, studies on Cygnus X-1 led to the publication of the first observational evidence of the existence of black holes. Toronto astronomers have also discovered the Uranian moons Caliban and Sycorax; the dwarf galaxies Andromeda I, II and III; and the supernova SN 1987A. A pioneer in computing technology, the university designed and built UTEC, one of the world’s first operational computers, and later purchased Ferut, the second commercial computer after UNIVAC I. Multi-touch technology was developed at Toronto, with applications ranging from handheld devices to collaboration walls. The AeroVelo Atlas, which won the Igor I. Sikorsky Human Powered Helicopter Competition in 2013, was developed by the university’s team of students and graduates and was tested in Vaughan.

    The discovery of insulin at the University of Toronto in 1921 is considered among the most significant events in the history of medicine. The stem cell was discovered at the university in 1963, forming the basis for bone marrow transplantation and all subsequent research on adult and embryonic stem cells. This was the first of many findings at Toronto relating to stem cells, including the identification of pancreatic and retinal stem cells. The cancer stem cell was first identified in 1997 by Toronto researchers, who have since found stem cell associations in leukemia, brain tumors, and colorectal cancer. Medical inventions developed at Toronto include the glycaemic index, the infant cereal Pablum, the use of protective hypothermia in open-heart surgery, and the first artificial cardiac pacemaker. The first successful single-lung transplant was performed at Toronto in 1981, followed by the first nerve transplant in 1988 and the first double-lung transplant in 1989. Researchers identified the maturation promoting factor that regulates cell division and discovered the T-cell receptor, which triggers responses of the immune system. The university is credited with isolating the genes that cause Fanconi anemia, cystic fibrosis, and early-onset Alzheimer’s disease, among numerous other diseases. Between 1914 and 1972, the university operated the Connaught Medical Research Laboratories, now part of the pharmaceutical corporation Sanofi-Aventis. Among the research conducted at the laboratory was the development of gel electrophoresis.

    The University of Toronto is the primary research presence supporting one of the world’s largest concentrations of biotechnology firms. More than 5,000 principal investigators reside within 2 kilometres (1.2 mi) of the university grounds in Toronto’s Discovery District, conducting $1 billion of medical research annually. MaRS Discovery District is a research park that serves commercial enterprises and the university’s technology transfer ventures. In 2008, the university disclosed 159 inventions and had 114 active start-up companies. Its SciNet Consortium operates the most powerful supercomputer in Canada.

     
  • richardmitnick 9:56 am on January 11, 2023 Permalink | Reply
    Tags: "AI to monitor changes to globally important glacier", , , , , Crevassing is an important component of ice shelf dynamics., , Machine learning, , , Un-corking the flow of ice - a process known as "unbuttressing", Using radar satellite images   

    From The University of Leeds (UK) And The University of Bristol (UK): “AI to monitor changes to globally important glacier” 

    U Leeds bloc

    From The University of Leeds (UK)

    And

    The University of Bristol (UK)

    1.9.23

    1
    Crevasses on Antarctic ice shelves change the material properties of the ice and influence their flow-speed. Research shows this coupling to be relevant but more complicated than previously thought for the Thwaites Glacier Ice Tongue. Credit: Dr Anna Hogg, University of Leeds.

    Scientists have developed AI to track the development of crevasses – or fractures – on the Thwaites Glacier ice tongue in west Antarctica.

    Crevasses are indicators of stress building up in the glacier.

    A team of researchers from the University of Leeds and the University of Bristol has adapted an AI algorithm, originally developed to identify cells in microscope images, to spot crevasses forming in the ice in satellite images.

    Thwaites is a particularly important part of the Antarctic Ice Sheet because it holds enough ice to raise global sea levels by around 60 centimetres and is considered by many to be at risk of rapid retreat, threatening coastal communities around the world.

    Use of AI will allow scientists to more accurately monitor and model changes to this important glacier. 

    Published in the journal Nature Geoscience [below], the research focussed on the part of the glacier system where the ice flows into the sea and begins to float. The point where this happens is known as the grounding line, and it forms the start of the Thwaites Eastern Ice Shelf and the Thwaites Glacier ice tongue, which is also an ice shelf.

    Despite being small in comparison to the size of the entire glacier, changes to these ice shelves could have wide-ranging implications for the whole glacier system and future sea-level rise. 

    The scientists wanted to know whether crevassing, or fracture formation, was more likely to occur with changes in the speed of the ice flow.

    2
    Scientists have mapped the crevasses on the Thwaites Glacier Ice Tongue through time using deep learning. This new research marks a change in the way in which the structural and dynamic properties of ice shelves can be investigated. Credit: Trystan Surawy-Stepney, University of Leeds.

    Developing the algorithm

    Using machine learning, the researchers taught a computer to look at radar satellite images and identify changes over the last decade. The images were taken by the European Space Agency’s Sentinel-1 satellites, which can “see” through the top layer of snow and onto the glacier, revealing the fractured surface of the ice normally hidden from sight.
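    At its core, the mapping step described above is pixel-wise image segmentation: filter the radar image, then classify each location as fracture or intact ice. The study trains a deep neural network for this; the sketch below is only a toy-scale stand-in that shows the same convolve-and-classify idea with a hand-built Sobel edge filter on a synthetic image (all values and names here are illustrative assumptions, not the study's method).

```python
import numpy as np

def conv2d(img, kernel):
    """Naive 'valid' 2-D cross-correlation, for illustration only."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def crevasse_mask(radar_img, threshold):
    """Highlight sharp linear discontinuities with Sobel filters,
    then threshold the gradient magnitude into a binary mask."""
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    sobel_y = sobel_x.T
    gx = conv2d(radar_img, sobel_x)
    gy = conv2d(radar_img, sobel_y)
    return np.hypot(gx, gy) > threshold

# Synthetic 'radar image': smooth ice with one crevasse-like discontinuity.
img = np.zeros((8, 8))
img[:, 4] = 5.0  # a bright vertical fracture
mask = crevasse_mask(img, threshold=4.0)
```

In the real pipeline a trained network replaces the fixed filter and learned weights decide what counts as a crevasse, but the input/output shape of the problem (image in, fracture mask out) is the same.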

    The analysis revealed that over the last six years, the Thwaites Glacier ice tongue has sped up and slowed down twice, by around 40% each time, accelerating from four km/year to six km/year before slowing. This is a substantial increase in the magnitude and frequency of speed changes compared with past records.

    The study found a complex interplay between crevasse formation and speed of the ice flow. When the ice flow quickens or slows, more crevasses are likely to form. In turn, the increase in crevasses causes the ice to change speed as the level of friction between the ice and underlying rock alters.

    Dr Anna Hogg, a glaciologist in the Satellite Ice Dynamics group at Leeds and an author on the study, said: “Dynamic changes on ice shelves are traditionally thought to occur on timescales of decades to centuries, so it was surprising to see this huge glacier speed up and slow down so quickly.”

    “The study also demonstrates the key role that fractures play in un-corking the flow of ice, a process known as ‘unbuttressing’.

    3
    Scientists have used radar imagery from the European Space Agency’s Sentinel-1 satellites to measure flow speed of the Thwaites Glacier Ice Tongue (shown) and analyse its structural integrity using deep learning. Credit: Benjamin J. Davison, University of Leeds.

    “Ice sheet models must be evolved to account for the fact that ice can fracture, which will allow us to measure future sea level contributions more accurately.”

    Trystan Surawy-Stepney, lead author of the paper and a doctoral researcher at Leeds, added: “The nice thing about this study is the precision with which the crevasses were mapped.

    “It has been known for a while that crevassing is an important component of ice shelf dynamics and this study demonstrates that this link can be studied on a large scale with beautiful resolution, using computer vision techniques applied to the deluge of satellite images acquired each week.” 

    Satellites orbiting the Earth provide scientists with new data over the most remote and inaccessible regions of Antarctica. The radar on board Sentinel-1 allows places like Thwaites Glacier to be imaged day or night, every week, all year round.

    Dr Mark Drinkwater of the European Space Agency commented: “Studies like this would not be possible without the large volume of high-resolution data provided by Sentinel-1. By continuing to plan future missions, we can carry on supporting work like this and broaden the scope of scientific research on vital areas of the Earth’s climate system.”

    As for Thwaites Glacier ice tongue, it remains to be seen whether such short-term changes have any impact on the long-term dynamics of the glacier, or whether they are simply isolated symptoms of an ice shelf close to its end. 

    Science paper:
    Nature Geoscience

    See the full article here.

    Comments are invited and will be appreciated, especially if the reader finds any errors which I can correct. Use “Reply”.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The University of Bristol (UK) is one of the most popular and successful universities in the UK and was ranked within the top 50 universities in the world in the QS World University Rankings 2018.

    The University of Bristol is at the cutting edge of global research. We have made innovations in areas ranging from cot death prevention to nanotechnology.

    The University has had a reputation for innovation since its founding in 1876. Our research tackles some of the world’s most pressing issues in areas as diverse as infection and immunity, human rights, climate change, and cryptography and information security.

    The University currently has 40 Fellows of the Royal Society and 15 of the British Academy – a remarkable achievement for a relatively small institution.

    We aim to bring together the best minds in individual fields, and encourage researchers from different disciplines and institutions to work together to find lasting solutions to society’s pressing problems.

    We are involved in numerous international research collaborations and integrate practical experience in our curriculum, so that students work on real-life projects in partnership with business, government and community sectors.

    U Leeds Campus

    The University of Leeds is a public research university in Leeds, West Yorkshire, England. It was established in 1874 as the Yorkshire College of Science. In 1884 it merged with the Leeds School of Medicine (established 1831) and was renamed Yorkshire College. It became part of the federal Victoria University in 1887, joining Owens College (which became The University of Manchester (UK)) and University College Liverpool (which became The University of Liverpool (UK)). In 1904 a royal charter was granted to the University of Leeds by King Edward VII.

    The university has 36,330 students, making it the fifth-largest university in the UK (out of 169). From 2006 to the present, the university has consistently ranked within the top five in the United Kingdom for the number of applications received (alongside the University of Manchester, The Manchester Metropolitan University (UK), The University of Nottingham (UK) and The University of Edinburgh (SCT)). Leeds had an income of £751.7 million in 2020/21, of which £130.1 million was from research grants and contracts. The university has financial endowments of £90.5 million (2020–21), ranking outside the top ten British universities by financial endowment.

    Notable alumni include current Leader of the Labour Party Keir Starmer, former Secretary of State Jack Straw, former co-chairman of the Conservative Party Sayeeda Warsi, Piers Sellers (NASA astronaut) and six Nobel laureates.

    The university’s history is linked to the development of Leeds as an international centre for the textile industry and clothing manufacture in the United Kingdom during the Victorian era. The university’s roots can be traced back to the formation of schools of medicine in English cities to serve the general public.

    Before 1900, only six universities had been established in England and Wales: The University of Oxford (UK) (founded c. 1096–1201), The University of Cambridge (UK) (c. 1201), The University of London (UK) (1836), The University of Durham (UK) (1837), Victoria University (UK) (1880), and The University of Wales Trinity Saint David [Prifysgol Cymru Y Drindod Dewi Sant] (WLS) (1893).

    The Victoria University was established in Manchester in 1880 as a federal university in the North of England, instead of the government elevating Owens College to a university and granting it a royal charter. Owens College was the sole college of Victoria University from 1880 to 1884; in 1887 Yorkshire College became the third college to join the university.

    Leeds gained its first university on 3 November 1887, when the Yorkshire College joined the federal Victoria University. The Victoria University had been established by royal charter in 1880, with Owens College at first the only member college. Leeds now found itself in an educational union with close social cousins from Manchester and Liverpool.

    Unlike Owens College, the Leeds campus of the Victoria University had never barred women from its courses. However, it was not until special facilities were provided at the Day Training College in 1896 that women began enrolling in significant numbers. The first female student to begin a course at the university was Lilias Annie Clark, who studied Modern Literature and Education.

    The Victoria (Leeds) University was a short-lived concept, as the constituent institutions in Manchester and Liverpool were keen to establish themselves as separate, independent universities. This was partially due to the benefits a university had for the cities of Liverpool and Manchester, whilst the institutions were also unhappy with the practical difficulties posed by maintaining a federal arrangement across broad distances. The interests of the universities and their respective cities in creating independent institutions were further spurred by the granting of a charter to the University of Birmingham in 1900 after lobbying from Joseph Chamberlain.

    Following a Royal Charter and Act of Parliament in 1903, the newly formed University of Liverpool began the fragmentation of the Victoria University by being the first member to gain independence. The University of Leeds soon followed suit and was granted a royal charter as an independent body by King Edward VII in 1904.

    The Victoria University continued after the break-up of the group, with an amended constitution and renamed as the Victoria University of Manchester (though “Victoria” was usually omitted from its name except in formal usage) until September 2004. On 1 October 2004 a merger with the University of Manchester Institute of Science and Technology was enacted to form The University of Manchester.

    In December 2004, financial pressures forced the university’s governing body (the Council) to decide to close the Bretton campus. Activities at Bretton were moved to the main university campus in the summer of 2007 (allowing all Bretton-based students to complete their studies there). There was substantial opposition to the closure by the Bretton students. The university’s other satellite site, Manygates in Wakefield, also closed, but Lifelong Learning and Healthcare programmes are continuing on a new site next to Wakefield College.

    In May 2006, the university began re-branding itself to consolidate its visual identity and promote one consistent image. A new logo was produced, based on the one used during the centenary celebrations in 2004, to replace the combined use of the modified university arms and the Parkinson Building, which had been in use since 2004. The university arms are still used in their original form, for ceremonial purposes only. Four university colours were also specified: green, red, black and beige.

    Leeds provides the local community with over 2,000 university student volunteers. With 8,700 staff employed in 2019-20, the university is the third largest employer in Leeds and contributes around £1.23bn a year to the local economy – students add a further £211m through rents and living costs.

    The university’s educational partnerships have included providing formal accreditation of degree awards to The Leeds Arts University (UK) and The Leeds Trinity University (UK), although the latter now has the power to award its own degrees. The College of the Resurrection, an Anglican theological college in Mirfield with monastic roots, has, since its inception in 1904, been affiliated to the university, and ties remain close. The university is also a founding member of The Northern Consortium (UK).

    In August 2010, the university was one of the most targeted institutions by students entering the UCAS clearing process for 2010 admission, which matches undersubscribed courses to students who did not meet their firm or insurance choices. The university was one of nine The Russell Group Association(UK) universities offering extremely limited places to “exceptional” students after the universities in Birmingham, Bristol, Cambridge, Edinburgh and Oxford declared they would not enter the process due to courses being full to capacity.

    On 12 October 2010, The Refectory of the Leeds University Union hosted a live edition of the Channel 4 News, with students, academics and economists expressing their reaction to the Browne Review, an independent review of Higher Education funding and student finance conducted by John Browne, Baron Browne of Madingley. University of Leeds Vice-Chancellor and Russell Group chairman Michael Arthur participated, giving an academic perspective alongside current vice-chancellor of The Kingston University (UK) and former Pro Vice-Chancellor and Professor of Education at the University of Leeds, Sir Peter Scott. Midway through the broadcast a small group of protesters against the potential rise of student debt entered the building before being restrained and evacuated.

    In 2016, The University of Leeds became University of the Year 2017 in The Times and The Sunday Times’ Good University Guide. The university has risen to 13th place overall, which reflects impressive results in student experience, high entry standards, services and facilities, and graduate prospects.

    In 2018, the University of Leeds was ranked 93rd in the world. The university currently enrols 30,842 students, and average tuition fees range from US$12,000 to US$14,000.

    Research

    Many of the academic departments have specialist research facilities, for use by staff and students to support research from internationally significant collections in university libraries to state-of-the-art laboratories. These include those hosted at the Institute for Transport Studies, such as the University of Leeds Driving Simulator which is one of the most advanced worldwide in a research environment, allowing transport researchers to watch driver behaviour in accurately controlled laboratory conditions without the risks associated with a live, physical environment.

    With extensive links to the St James’s University Hospital through the Leeds School of Medicine, the university operates a range of high-tech research laboratories for biomedical and physical sciences, food and engineering – including clean rooms for bionanotechnology and plant science greenhouses. The university is connected to Leeds General Infirmary and the institute of molecular medicine based at St James’s University Hospital which aids integration of research and practice in the medical field.

    The university also operates research facilities in the aviation field, including an Airbus A320 flight simulator. The simulator was devised to promote the safety and efficiency of flight operations; students use it to develop their reactions to critical situations such as engine failure, display malfunctions and freak weather.

    In addition to these facilities, many university departments conduct research in their respective fields. There are also various research centres, including Leeds University Centre for African Studies.

    Leeds was ranked joint 19th (along with The University of St Andrews (SCT)) amongst multi-faculty institutions in the UK for the quality (GPA) of its research and 10th for its Research Power in the 2014 Research Excellence Framework.

    In 2014–15, Leeds was ranked as the 10th most-targeted British university by graduate employers, a two-place decrease from 8th position in the 2014 rankings.

    The 2021 The Times Higher Education World University Rankings ranked Leeds as 153rd in the world. The university ranks 84th in the world in the CWTS Leiden Ranking. Leeds is ranked 91st in the world (and 15th in the UK) in the 2021 QS World University Rankings.

    The university won the biennially awarded Queen’s Anniversary Prize in 2009 for services to engineering and technology. The honour was awarded to the university’s Institute for Transport Studies (ITS), which for over forty years has been a world leader in transport teaching and research.

    The university is a founding member of The Russell Group Association (UK), comprising the leading research-intensive universities in the UK, as well as the N8 Group for research collaboration, The Worldwide Universities Network (UK), The Association of Commonwealth Universities (UK), The European University Association (EU), The White Rose University Consortium (UK), the Santander Network and the CDIO Initiative. It is also affiliated to Universities UK. The Leeds University Business School holds the ‘Triple Crown’ of accreditations from the Association to Advance Collegiate Schools of Business, the Association of MBAs and the European Quality Improvement System.

     
  • richardmitnick 8:24 am on January 9, 2023 Permalink | Reply
    Tags: "Entrepreneurial Milestones in Life Sciences", , , , Machine learning, Measuring the many proteins in a tumor sample in high resolution., , Quantitative Biomedicine, Spatial single-cell proteomics, The field of "image-based systems biology",   

    From The University of Zürich (Universität Zürich) (CH): “Entrepreneurial Milestones in Life Sciences” 

    From The University of Zürich (Universität Zürich) (CH)

    1.9.23
    Nathalie Huber
    English translations by Philip Isler

    UZH Spin-Offs in 2022

    Three new spin-offs were founded at UZH in 2022, transferring scientific findings into industry practice. The business ventures explore new perspectives in the fight against cancer, space factories to produce human tissue, and ways to accelerate the development of novel drugs.

    1
    The goal of the UZH spin-off Navignostics is to enable a more precise cancer diagnosis for patients. (Image: iStock / utah778)

    At UZH, new ideas evolve into pioneering technologies of the future. Last year, three groups of business founders with roots at UZH took the entrepreneurial leap and signed a licensing agreement with UZH. Their spin-offs emerged from life sciences research conducted at the Faculty of Medicine and the Faculty of Science. 

    Precision diagnostics, bespoke therapies

    Despite a wide variety of available drugs and treatment options, many people still succumb to cancer. Every tumor is unique, making it difficult to find the ideal treatment for each patient. The spin-off Navignostics develops novel diagnostic methods to perform advanced tumor sample analyses. “We want to help specialists find targeted immuno-oncology therapies that are tailored to the individual cancer patient’s tumor phenotype,” says Bernd Bodenmiller, professor of Quantitative Biomedicine.

    Navignostics leverages spatial single-cell proteomics, an approach developed by Bodenmiller and his research group that measures the many proteins in a tumor sample at high resolution. This enables clinicians to use algorithms to determine the cell types present in the tumor, which of the cells’ processes are deregulated, and how the tumor cells affect the surrounding cells. The aim is to use these data and artificial intelligence to recommend therapies tailored to the individual cancer patient.
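    Determining “the cell types present” from per-cell protein measurements is typically an unsupervised clustering step. As a hedged, toy-scale sketch (not Navignostics’ actual pipeline; the data are synthetic and the function name is ours), two well-separated “cell types” in a three-protein space can be recovered with a minimal two-cluster k-means:

```python
import numpy as np

def kmeans2(X, iters=10):
    """Minimal 2-cluster k-means, deterministically seeded with the
    first point and the point farthest from it."""
    c0 = X[0]
    c1 = X[np.argmax(np.linalg.norm(X - c0, axis=1))]
    centroids = np.stack([c0, c1])
    for _ in range(iters):
        # Assign each cell's protein-expression vector to its nearest centroid.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its assigned cells.
        centroids = np.stack([X[labels == j].mean(axis=0) for j in (0, 1)])
    return labels

# Toy data: two well-separated "cell types" measured on 3 proteins.
rng = np.random.default_rng(1)
type_a = rng.normal(loc=0.0, scale=0.1, size=(10, 3))
type_b = rng.normal(loc=5.0, scale=0.1, size=(10, 3))
X = np.vstack([type_a, type_b])
labels = kmeans2(X)
```

Real spatial proteomics data are far higher-dimensional and noisier, and production pipelines use more robust clustering, but the assign-then-update loop above is the core idea.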

    Navignostics is currently providing pharmaceutical companies with various services to support them in developing cancer drugs and companion diagnostics, or to improve the chances of success of their clinical trials. Thanks to its successful round of seed financing (CHF 7.5 million), the spin-off can accelerate the development of its first diagnostic product and step up its cooperation with clinical, pharma and biotech partners.

    Human tissue from space

    The ambitious goal of Prometheus Life Technologies AG is to set up a factory that can produce human tissue – in space, no less. The spin-off wants to use the microgravity environment in space to manufacture three-dimensional organ-like tissues – dubbed organoids – using human stem cells. These tissues only grow three-dimensionally in zero gravity. On Earth and in labs, they require highly complicated auxiliary structures to do so. “At the moment, there’s an unmet demand for 3D organoids,” says Oliver Ullrich, director of the UZH Space Hub and co-inventor.

    These tissues are particularly popular among pharmaceutical companies, as they enable them to carry out toxicological trials on human tissue without first having to use animal models. Organoids produced from a patient’s stem cells could also one day be used as the building blocks for transplants to treat damaged organs, as the number of donated organs is nowhere near enough to meet the worldwide demand. Further opportunities for growth arise from replacing 2D with the more in-vivo-like 3D cell cultures.

    The spin-off’s technology is based on a previous joint project of UZH and Airbus. The research and development phase included comprehensive experiments on the ground as well as two successful production tests aboard the International Space Station (ISS). The whole process, from idea to commercialization, originated, developed and matured in the UZH Space Hub. Prometheus Life Technologies AG already won a high-ranking international award last month. The spin-off was selected as the winner of the Reef Starter Innovation Challenge, an innovation engine powered by Orbital Reef, a mixed-use space station to be built in the Earth’s lower orbit.

    Mapping drug activity contexts

    Just as statements shouldn’t be considered out of context, the effects of drugs need to be seen in a bigger picture. Founded by Lucas Pelkmans, professor of molecular biology, Apricot Therapeutics specializes in mapping drug activity contexts, or DACs. “We’re the first pharmaceutical company worldwide that focuses on DACs, and our goal is to drive forward the development of novel and innovative drugs,” Pelkmans says. The technology used by the spin-off is based on Pelkmans’ pioneering discovery that it is possible to predict the behavior of individual cells by mapping their surroundings using multi-scale microscopy and imaging technology. DACs capture how the various spatial organizations of our individual cells cause drugs to have variable effects.

    Apricot Therapeutics’ technology platform is based on methods in the field of “image-based systems biology”, for which the spin-off is currently evaluating two patent applications. The goal of the spin-off is to develop a procedure to measure all DACs relevant for drug activity and use machine learning to predict cellular responses to drugs with unprecedented accuracy. The company is the first to apply novel genomics 3.0 technologies to predict drug activity and treatment outcomes. Future clients include pharmaceutical companies, biotech and medtech start-ups, diagnostic centers, clinicians and research laboratories.
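    As a loose illustration of the “predict cellular responses from measured context features” idea (this is not Apricot’s actual method; the features, labels and model below are entirely synthetic assumptions), a minimal classifier can be trained to map per-cell context vectors to a binary drug response:

```python
import numpy as np

# Synthetic stand-in for per-cell "context" features and drug responses.
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))                       # 3 context features per cell
true_w = np.array([2.0, -1.0, 0.5])               # hidden ground-truth weights
y = (X @ true_w + rng.normal(scale=0.1, size=n) > 0).astype(float)

# Minimal logistic regression trained by batch gradient descent.
w = np.zeros(3)
b = 0.0
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))        # predicted response probability
    w -= lr * (X.T @ (p - y)) / n                 # gradient of the log-loss
    b -= lr * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = float(np.mean(pred == y))
```

On this nearly separable toy data the fitted model recovers the response labels almost perfectly; the point is only the shape of the task (context features in, predicted response out), not the model choice.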

    Here are some of the milestones: 

    Successful cooperation

    Biotech company Molecular Partners concluded a licensing agreement with Novartis for Ensovibep, a drug against Covid-19. Molecular Partners sold the drug’s worldwide rights to Novartis for a one-time payment of CHF 150 million and a 22 percent royalty on sales. 
    Neuroimmune entered into a licensing agreement with AstraZeneca subsidiary Alexion to develop and market the NI006 heart drug. The spin-off also stepped up its cooperation with Japanese company Ono Pharmaceutical in the field of neurodegenerative diseases with the aim of co-developing new drugs.

    Medtech firsts

    Clemedi rolled out Tuberculini in 2022. The molecular test for drug-resistant tuberculosis can deliver results within 48 hours. 
    CUTISS AG received certification from Swissmedic that allows the UZH spin-off to manufacture personalized human skin transplants in its Schlieren facilities. On-site production increases the company’s flexibility and production capacity. In addition, CUTISS was awarded a tissue graft patent by the European Patent Office. 
    Oncobit AG obtained CE marking for its first product, oncobit™ PM. This marking, granted by European regulatory authorities, guarantees that the product can be used without restrictions throughout Europe. oncobit™ PM can be used to monitor treatment response, minimal residual disease, and disease recurrence in melanoma patients.

    New capital

    ImmunOs Therapeutics AG completed a highly successful financing round, raising over CHF 72 million. The biopharmaceutical company develops novel therapeutics for the treatment of cancer and autoimmune diseases.  
    Schlieren-based Kuros Biosciences AG announced a capital increase of CHF 6 million. The spin-off develops spinal fusion technologies that ease the burden of back pain.
    Invasight AG successfully raised CHF 4.5 million. Founded in 2020, the biotech spin-off develops protein-protein interaction antagonists (PPIAs) against invasive cancers.

    KOVE Medical and OxyPrem were each awarded an EIC Accelerator Grant funded by the State Secretariat for Education, Research and Innovation (SERI) to promote groundbreaking innovations by Swiss start-ups. KOVE is developing a method for prenatal surgical interventions, while OxyPrem is producing a device to monitor oxygen supply to the brain.

    See the full article here .

    Comments are invited and will be appreciated, especially if the reader finds any errors which I can correct. Use “Reply”.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The University of Zürich (Universität Zürich) (CH), located in the city of Zürich, is the largest university in Switzerland, with over 26,000 students. It was founded in 1833 from the existing colleges of theology, law, medicine and a new faculty of philosophy.

    Currently, the university has seven faculties: Philosophy, Human Medicine, Economic Sciences, Law, Mathematics and Natural Sciences, Theology and Veterinary Medicine. The university offers the widest range of subjects and courses of any Swiss higher education institution.

    The university is a member of the League of European Research Universities (EU) (LERU) and of Universitas 21 (U21), a global network of 27 research universities that promotes research collaboration and the exchange of knowledge.

    Numerous distinctions highlight the University’s international renown in the fields of medicine, immunology, genetics, neuroscience and structural biology as well as in economics. To date, the Nobel Prize has been conferred on twelve UZH scholars.

    Sharing Knowledge

    The academic excellence of the University of Zürich brings benefits to both the public and the private sectors not only in the Canton of Zürich, but throughout Switzerland. Knowledge is shared in a variety of ways: in addition to granting the general public access to its twelve museums and many of its libraries, the University makes findings from cutting-edge research available to the public in accessible and engaging lecture series and panel discussions.

    1. Identity of the University of Zürich

    Scholarship

    The University of Zürich (UZH) is an institution with a strong commitment to the free and open pursuit of scholarship.

    Scholarship is the acquisition, the advancement and the dissemination of knowledge in a methodological and critical manner.

    Academic freedom and responsibility

    To flourish, scholarship must be free from external influences, constraints and ideological pressures. The University of Zürich is committed to unrestricted freedom in research and teaching.

    Academic freedom calls for a high degree of responsibility, including reflection on the ethical implications of research activities for humans, animals and the environment.

    Universitas

    Work in all disciplines at the University is based on a scholarly inquiry into the realities of our world.

    As Switzerland’s largest university, the University of Zürich promotes wide diversity in both scholarship and in the fields of study offered. The University fosters free dialogue, respects the individual characteristics of the disciplines, and advances interdisciplinary work.

    2. The University of Zürich’s goals and responsibilities

    Basic principles

    UZH pursues scholarly research and teaching, and provides services for the benefit of the public.

    UZH has successfully positioned itself among the world’s foremost universities. The University attracts the best researchers and students, and promotes junior scholars at all levels of their academic career.

    UZH sets priorities in research and teaching by considering academic requirements and the needs of society. These priorities presuppose basic research and interdisciplinary methods.

    UZH strives to uphold the highest quality in all its activities.
    To secure and improve quality, the University regularly monitors and evaluates its performance.

    Research

    UZH contributes to the increase of knowledge through the pursuit of cutting-edge research.

    UZH is primarily a research institution. As such, it enables and expects its members to conduct research, and supports them in doing so.

    While basic research is the core focus at UZH, the University also pursues applied research.

     
  • richardmitnick 2:13 pm on January 8, 2023 Permalink | Reply
    Tags: "Unpacking the 'black box' to build better AI models", , , , , , Computer Science and Artificial Intelligence Laboratory (CSAIL), , From butterflies to bioinformatics, Machine learning, , , Stefanie Jegelka, Stefanie Jegelka seeks to understand how machine-learning models behave to help researchers build more robust models for applications in biology and computer vision and optimization and more., Teaching models to learn,   

    From The Massachusetts Institute of Technology: “Unpacking the ‘black box’ to build better AI models” Stefanie Jegelka 

    From The Massachusetts Institute of Technology

    1.8.23
    Adam Zewe

    Stefanie Jegelka seeks to understand how machine-learning models behave, to help researchers build more robust models for applications in biology, computer vision, optimization, and more.

    1
    Stefanie Jegelka, a newly tenured associate professor in the Department of Electrical Engineering and Computer Science at MIT, develops algorithms for deep learning applications and studies how deep learning models behave and what they can learn. Photo: M. Scott Brauer.

    2
    “What I really loved about MIT, from the very beginning, was that the people really care deeply about research and creativity. That is what I appreciate the most about MIT. The people here really value originality and digging deep into research,” Jegelka says. Photo: M. Scott Brauer.

    When deep learning models are deployed in the real world, perhaps to detect financial fraud from credit card activity or identify cancer in medical images, they are often able to outperform humans.

    But what exactly are these deep learning models learning? Does a model trained to spot skin cancer in clinical images, for example, actually learn the colors and textures of cancerous tissue, or is it flagging some other features or patterns?

    These powerful machine-learning models are typically based on artificial neural networks that can have millions of nodes that process data to make predictions. Due to their complexity, researchers often call these models “black boxes” because even the scientists who build them don’t understand everything that is going on under the hood.

    Stefanie Jegelka isn’t satisfied with that “black box” explanation. A newly tenured associate professor in the MIT Department of Electrical Engineering and Computer Science, Jegelka is digging deep into deep learning to understand what these models can learn and how they behave, and how to build certain prior information into these models.

    “At the end of the day, what a deep-learning model will learn depends on so many factors. But building an understanding that is relevant in practice will help us design better models, and also help us understand what is going on inside them so we know when we can deploy a model and when we can’t. That is critically important,” says Jegelka, who is also a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Institute for Data, Systems, and Society (IDSS).

    Jegelka is particularly interested in optimizing machine-learning models when input data are in the form of graphs. Graph data pose specific challenges: For instance, information in the data consists of both information about individual nodes and edges, as well as the structure — what is connected to what. In addition, graphs have mathematical symmetries that need to be respected by the machine-learning model so that, for instance, the same graph always leads to the same prediction. Building such symmetries into a machine-learning model is usually not easy.
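To make the symmetry requirement concrete, here is a toy sketch (not from Jegelka's own work; the function name and numbers are purely illustrative) of why a sum-based readout over node features is permutation-invariant: relabeling the nodes of the same graph cannot change the graph-level output.

```python
# Toy illustration: a graph-level readout that sums node feature vectors
# is permutation-invariant -- reordering the nodes of the same graph
# yields the identical aggregate, so the same graph always gets the
# same prediction downstream.

def sum_readout(node_features):
    """Aggregate per-node feature vectors into one graph-level vector."""
    dim = len(node_features[0])
    return [sum(f[i] for f in node_features) for i in range(dim)]

# The same 3-node graph under two different node orderings.
features = [[1.0, 0.0], [0.5, 2.0], [0.0, 1.0]]
permuted = [features[2], features[0], features[1]]

assert sum_readout(features) == sum_readout(permuted)  # identical output
```

By contrast, an order-sensitive aggregation (e.g., concatenating node features in index order) would break this symmetry, which is why building it in "is usually not easy" once models grow more expressive.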

    Take molecules, for instance. Molecules can be represented as graphs, with vertices that correspond to atoms and edges that correspond to chemical bonds between them. Drug companies may want to use deep learning to rapidly predict the properties of many molecules, narrowing down the number they must physically test in the lab.

    Jegelka studies methods to build mathematical machine-learning models that can effectively take graph data as an input and output something else, in this case a prediction of a molecule’s chemical properties. This is particularly challenging since a molecule’s properties are determined not only by the atoms within it, but also by the connections between them.
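As an illustration of the general idea, assuming a simple mean-over-neighbors aggregation (a generic message-passing scheme, not the specific models studied in the article), one update round over a molecular graph might look like:

```python
# Hypothetical single message-passing step over a molecular graph:
# each atom (node) updates its feature vector using the features of
# its bonded neighbors. The graph and feature values are illustrative.

def message_passing_step(adjacency, features):
    """One round: new_h[v] = h[v] + mean of neighbor feature vectors."""
    new_features = []
    for v, neighbors in enumerate(adjacency):
        if not neighbors:                      # isolated atom: unchanged
            new_features.append(list(features[v]))
            continue
        agg = [0.0] * len(features[v])
        for u in neighbors:
            for i, x in enumerate(features[u]):
                agg[i] += x / len(neighbors)   # running mean over neighbors
        new_features.append([h + a for h, a in zip(features[v], agg)])
    return new_features

# Water-like toy graph: node 0 (O) bonded to nodes 1 and 2 (H);
# 1-d features here are just atomic numbers.
adjacency = [[1, 2], [0], [0]]
features = [[8.0], [1.0], [1.0]]
print(message_passing_step(adjacency, features))  # [[9.0], [9.0], [9.0]]
```

Stacking several such rounds lets information flow along bonds, so a final readout can reflect both the atoms and the connectivity between them, which is the point made above.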

    Other examples of machine learning on graphs include traffic routing, chip design, and recommender systems.

    Designing these models is made even more difficult by the fact that data used to train them are often different from data the models see in practice. Perhaps the model was trained using small molecular graphs or traffic networks, but the graphs it sees once deployed are larger or more complex.

    In this case, what can researchers expect this model to learn, and will it still work in practice if the real-world data are different?

    “Your model is not going to be able to learn everything because of some hardness problems in computer science, but what you can learn and what you can’t learn depends on how you set the model up,” Jegelka says.

    She approaches this question by combining her passion for algorithms and discrete mathematics with her excitement for machine learning.

    From butterflies to bioinformatics

    Jegelka grew up in a small town in Germany and became interested in science when she was a high school student; a supportive teacher encouraged her to participate in an international science competition. She and her teammates from the U.S. and Singapore won an award for a website about butterflies that they created in three languages.

    “For our project, we took images of wings with a scanning electron microscope at a local university of applied sciences. I also got the opportunity to use a high-speed camera at Mercedes Benz — this camera usually filmed combustion engines — which I used to capture a slow-motion video of the movement of a butterfly’s wings. That was the first time I really got in touch with science and exploration,” she recalls.

    Intrigued by both biology and mathematics, Jegelka decided to study bioinformatics at the University of Tübingen and the University of Texas-Austin. She had a few opportunities to conduct research as an undergraduate, including an internship in computational neuroscience at Georgetown University, but wasn’t sure what career to follow.

    When she returned for her final year of college, Jegelka moved in with two roommates who were working as research assistants at the MPG Institute in Tübingen.

    “They were working on machine learning, and that sounded really cool to me. I had to write my bachelor’s thesis, so I asked at the institute if they had a project for me. I started working on machine learning at the MPG Institute and I loved it. I learned so much there, and it was a great place for research,” she says.

    She stayed on at the MPG Institute to complete a master’s thesis, and then embarked on a PhD in machine learning at the MPG Institute and the Swiss Federal Institute of Technology.

    During her PhD, she explored how concepts from discrete mathematics can help improve machine-learning techniques.

    Teaching models to learn

    The more Jegelka learned about machine learning, the more intrigued she became by the challenges of understanding how models behave, and how to steer this behavior.

    “You can do so much with machine learning, but only if you have the right model and data. It is not just a black-box thing where you throw it at the data and it works. You actually have to think about it, its properties, and what you want the model to learn and do,” she says.

    After completing a postdoc at the University of California-Berkeley, Jegelka was hooked on research and decided to pursue a career in academia. She joined the faculty at MIT in 2015 as an assistant professor.

    “What I really loved about MIT, from the very beginning, was that the people really care deeply about research and creativity. That is what I appreciate the most about MIT. The people here really value originality and depth in research,” she says.

    That focus on creativity has enabled Jegelka to explore a broad range of topics.

    In collaboration with other faculty at MIT, she studies machine-learning applications in biology, imaging, computer vision, and materials science.

    But what really drives Jegelka is probing the fundamentals of machine learning, and most recently, the issue of robustness. Often, a model performs well on training data, but its performance deteriorates when it is deployed on slightly different data. Building prior knowledge into a model can make it more reliable, but understanding what information the model needs to be successful and how to build it in is not so simple, she says.

    She is also exploring methods to improve the performance of machine-learning models for image classification.

    Image classification models are everywhere, from the facial recognition systems on mobile phones to tools that identify fake accounts on social media. These models need massive amounts of data for training, but since it is expensive for humans to hand-label millions of images, researchers often use unlabeled datasets to pretrain models instead.

    These models then reuse the representations they have learned when they are fine-tuned later for a specific task.

    Ideally, researchers want the model to learn as much as it can during pretraining, so it can apply that knowledge to its downstream task. But in practice, these models often learn only a few simple correlations — like that one image has sunshine and one has shade — and use these “shortcuts” to classify images.

    “We showed that this is a problem in ‘contrastive learning,’ which is a standard technique for pre-training, both theoretically and empirically. But we also show that you can influence the kinds of information the model will learn to represent by modifying the types of data you show the model. This is one step toward understanding what models are actually going to do in practice,” she says.
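For readers unfamiliar with contrastive learning, the pre-training objective referred to here is typically an InfoNCE-style loss: pull two augmented views of the same image together, push views of other images apart. The sketch below is a generic textbook form, not the formulation from the work Jegelka describes.

```python
# Generic InfoNCE-style contrastive loss on toy 2-d embeddings.
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """-log( exp(s_pos/t) / (exp(s_pos/t) + sum_j exp(s_neg_j/t)) )."""
    pos = math.exp(cosine(anchor, positive) / temperature)
    negs = sum(math.exp(cosine(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + negs))

# A well-matched positive (two views of the "same image") gives low loss...
low = info_nce([1.0, 0.0], [0.9, 0.1], [[-1.0, 0.0], [0.0, 1.0]])
# ...while a mismatched positive gives high loss.
high = info_nce([1.0, 0.0], [-1.0, 0.0], [[0.9, 0.1], [0.0, 1.0]])
assert low < high
```

The shortcut problem arises because any feature that separates positives from negatives, however superficial (sunshine vs. shade), drives this loss down, so the choice of augmentations and training data shapes what the model actually learns to represent.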

    Researchers still don’t understand everything that goes on inside a deep-learning model, or the details of how they can influence what a model learns and how it behaves, but Jegelka looks forward to continuing to explore these topics.

    “Often in machine learning, we see something happen in practice and we try to understand it theoretically. This is a huge challenge. You want to build an understanding that matches what you see in practice, so that you can do better. We are still just at the beginning of understanding this,” she says.

    Outside the lab, Jegelka is a fan of music, art, traveling, and cycling. But these days, she enjoys spending most of her free time with her preschool-aged daughter.

    See the full article here .

    Comments are invited and will be appreciated, especially if the reader finds any errors which I can correct. Use “Reply”.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    MIT Seal

    MIT Campus

    The Massachusetts Institute of Technology is a private land-grant research university in Cambridge, Massachusetts. The institute has an urban campus that extends more than a mile (1.6 km) alongside the Charles River. The institute also encompasses a number of major off-campus facilities such as the MIT Lincoln Laboratory, the MIT Bates Research and Engineering Center, and the Haystack Observatory, as well as affiliated laboratories such as the Broad Institute of MIT and Harvard and the Whitehead Institute.

    Massachusetts Institute of Technology-Haystack Observatory, Westford, Massachusetts, USA, Altitude 131 m (430 ft).

    4

    The Computer Science and Artificial Intelligence Laboratory (CSAIL)

    From The Kavli Institute For Astrophysics and Space Research

    MIT’s Institute for Medical Engineering and Science is a research institute at the Massachusetts Institute of Technology

    The MIT Laboratory for Nuclear Science

    The MIT Media Lab

    The MIT School of Engineering

    The MIT Sloan School of Management

    Spectrum

    Founded in 1861 in response to the increasing industrialization of the United States, Massachusetts Institute of Technology adopted a European polytechnic university model and stressed laboratory instruction in applied science and engineering. It has since played a key role in the development of many aspects of modern science, engineering, mathematics, and technology, and is widely known for its innovation and academic strength. It is frequently regarded as one of the most prestigious universities in the world.

    As of December 2020, 97 Nobel laureates, 26 Turing Award winners, and 8 Fields Medalists have been affiliated with MIT as alumni, faculty members, or researchers. In addition, 58 National Medal of Science recipients, 29 National Medals of Technology and Innovation recipients, 50 MacArthur Fellows, 80 Marshall Scholars, 3 Mitchell Scholars, 22 Schwarzman Scholars, 41 astronauts, and 16 Chief Scientists of the U.S. Air Force have been affiliated with The Massachusetts Institute of Technology. The university also has a strong entrepreneurial culture and MIT alumni have founded or co-founded many notable companies. Massachusetts Institute of Technology is a member of the Association of American Universities.

    Foundation and vision

    In 1859, a proposal was submitted to the Massachusetts General Court to use newly filled lands in Back Bay, Boston for a “Conservatory of Art and Science”, but the proposal failed. A charter for the incorporation of the Massachusetts Institute of Technology, proposed by William Barton Rogers, was signed by John Albion Andrew, the governor of Massachusetts, on April 10, 1861.

    Rogers, a professor from the University of Virginia, wanted to establish an institution to address rapid scientific and technological advances. He did not wish to found a professional school, but rather an institution combining elements of both professional and liberal education, proposing that:

    “The true and only practicable object of a polytechnic school is, as I conceive, the teaching, not of the minute details and manipulations of the arts, which can be done only in the workshop, but the inculcation of those scientific principles which form the basis and explanation of them, and along with this, a full and methodical review of all their leading processes and operations in connection with physical laws.”

    The Rogers Plan reflected the German research university model, emphasizing an independent faculty engaged in research, as well as instruction oriented around seminars and laboratories.

    Early developments

    Two days after The Massachusetts Institute of Technology was chartered, the first battle of the Civil War broke out. After a long delay through the war years, MIT’s first classes were held in the Mercantile Building in Boston in 1865. The new institute was founded as part of the Morrill Land-Grant Colleges Act to fund institutions “to promote the liberal and practical education of the industrial classes” and was a land-grant school. In 1863, under the same act, the Commonwealth of Massachusetts founded the Massachusetts Agricultural College, which developed into the University of Massachusetts Amherst. In 1866, the proceeds from land sales went toward new buildings in the Back Bay.

    The Massachusetts Institute of Technology was informally called “Boston Tech”. The institute adopted the European polytechnic university model and emphasized laboratory instruction from an early date. Despite chronic financial problems, the institute saw growth in the last two decades of the 19th century under President Francis Amasa Walker. Programs in electrical, chemical, marine, and sanitary engineering were introduced, new buildings were built, and the size of the student body increased to more than one thousand.

    The curriculum drifted to a vocational emphasis, with less focus on theoretical science. The fledgling school still suffered from chronic financial shortages which diverted the attention of the MIT leadership. During these “Boston Tech” years, Massachusetts Institute of Technology faculty and alumni rebuffed Harvard University president (and former MIT faculty) Charles W. Eliot’s repeated attempts to merge MIT with Harvard College’s Lawrence Scientific School. There would be at least six attempts to absorb MIT into Harvard. In its cramped Back Bay location, MIT could not afford to expand its overcrowded facilities, driving a desperate search for a new campus and funding. Eventually, the MIT Corporation approved a formal agreement to merge with Harvard, over the vehement objections of MIT faculty, students, and alumni. However, a 1917 decision by the Massachusetts Supreme Judicial Court effectively put an end to the merger scheme.

    In 1916, The Massachusetts Institute of Technology administration and the MIT charter crossed the Charles River on the ceremonial barge Bucentaur built for the occasion, to signify MIT’s move to a spacious new campus largely consisting of filled land on a one-mile-long (1.6 km) tract along the Cambridge side of the Charles River. The neoclassical “New Technology” campus was designed by William W. Bosworth and had been funded largely by anonymous donations from a mysterious “Mr. Smith”, starting in 1912. In January 1920, the donor was revealed to be the industrialist George Eastman of Rochester, New York, who had invented methods of film production and processing, and founded Eastman Kodak. Between 1912 and 1920, Eastman donated $20 million ($236.6 million in 2015 dollars) in cash and Kodak stock to MIT.

    Curricular reforms

    In the 1930s, President Karl Taylor Compton and Vice-President (effectively Provost) Vannevar Bush emphasized the importance of pure sciences like physics and chemistry and reduced the vocational practice required in shops and drafting studios. The Compton reforms “renewed confidence in the ability of the Institute to develop leadership in science as well as in engineering”. Unlike Ivy League schools, Massachusetts Institute of Technology catered more to middle-class families, and depended more on tuition than on endowments or grants for its funding. The school was elected to the Association of American Universities in 1934.

    Still, as late as 1949, the Lewis Committee lamented in its report on the state of education at The Massachusetts Institute of Technology that “the Institute is widely conceived as basically a vocational school”, a “partly unjustified” perception the committee sought to change. The report comprehensively reviewed the undergraduate curriculum, recommended offering a broader education, and warned against letting engineering and government-sponsored research detract from the sciences and humanities. The School of Humanities, Arts, and Social Sciences and the MIT Sloan School of Management were formed in 1950 to compete with the powerful Schools of Science and Engineering. Previously marginalized faculties in the areas of economics, management, political science, and linguistics emerged into cohesive and assertive departments by attracting respected professors and launching competitive graduate programs. The School of Humanities, Arts, and Social Sciences continued to develop under the successive terms of the more humanistically oriented presidents Howard W. Johnson and Jerome Wiesner between 1966 and 1980.

    The Massachusetts Institute of Technology’s involvement in military science surged during World War II. In 1941, Vannevar Bush was appointed head of the federal Office of Scientific Research and Development and directed funding to only a select group of universities, including MIT. Engineers and scientists from across the country gathered at the Massachusetts Institute of Technology’s Radiation Laboratory, established in 1940 to assist the British military in developing microwave radar. The work done there significantly affected both the war and subsequent research in the area. Other defense projects included gyroscope-based and other complex control systems for gunsight, bombsight, and inertial navigation under Charles Stark Draper’s Instrumentation Laboratory; the development of a digital computer for flight simulations under Project Whirlwind; and high-speed and high-altitude photography under Harold Edgerton. By the end of the war, The Massachusetts Institute of Technology had become the nation’s largest wartime R&D contractor (attracting some criticism of Bush), employing nearly 4,000 in the Radiation Laboratory alone and receiving in excess of $100 million ($1.2 billion in 2015 dollars) before 1946. Work on defense projects continued even after then. Post-war government-sponsored research at MIT included SAGE and guidance systems for ballistic missiles and Project Apollo.

    These activities affected The Massachusetts Institute of Technology profoundly. A 1949 report noted the lack of “any great slackening in the pace of life at the Institute” to match the return to peacetime, remembering the “academic tranquility of the prewar years”, though acknowledging the significant contributions of military research to the increased emphasis on graduate education and rapid growth of personnel and facilities. The faculty doubled and the graduate student body quintupled during the terms of Karl Taylor Compton, president of The Massachusetts Institute of Technology between 1930 and 1948; James Rhyne Killian, president from 1948 to 1957; and Julius Adams Stratton, chancellor from 1952 to 1957, whose institution-building strategies shaped the expanding university. By the 1950s, The Massachusetts Institute of Technology no longer simply benefited the industries with which it had worked for three decades, and it had developed closer working relationships with new patrons, philanthropic foundations and the federal government.

    In the late 1960s and early 1970s, student and faculty activists protested against the Vietnam War and The Massachusetts Institute of Technology’s defense research. In this period, Massachusetts Institute of Technology’s various departments were researching helicopters, smart bombs, and counterinsurgency techniques for the war in Vietnam, as well as guidance systems for nuclear missiles. The Union of Concerned Scientists was founded on March 4, 1969, during a meeting of faculty members and students seeking to shift the emphasis on military research toward environmental and social problems. The Massachusetts Institute of Technology ultimately divested itself from the Instrumentation Laboratory and moved all classified research off-campus to the MIT Lincoln Laboratory facility in 1973 in response to the protests. The student body, faculty, and administration remained comparatively unpolarized during what was a tumultuous time for many other universities. Johnson was seen to be highly successful in leading his institution to “greater strength and unity” after these times of turmoil. However, six Massachusetts Institute of Technology students were sentenced to prison terms at this time and some former student leaders, such as Michael Albert and George Katsiaficas, are still indignant about MIT’s role in military research and its suppression of these protests. (Richard Leacock’s film, November Actions, records some of these tumultuous events.)

    In the 1980s, there was more controversy at The Massachusetts Institute of Technology over its involvement in SDI (space weaponry) and CBW (chemical and biological warfare) research. More recently, The Massachusetts Institute of Technology’s research for the military has included work on robots, drones and ‘battle suits’.

    Recent history

    The Massachusetts Institute of Technology has kept pace with and helped to advance the digital age. In addition to developing the predecessors to modern computing and networking technologies, students, staff, and faculty members at Project MAC, the Artificial Intelligence Laboratory, and the Tech Model Railroad Club wrote some of the earliest interactive computer video games like Spacewar! and created much of modern hacker slang and culture. Several major computer-related organizations have originated at MIT since the 1980s: Richard Stallman’s GNU Project and the subsequent Free Software Foundation were founded in the mid-1980s at the AI Lab; the MIT Media Lab was founded in 1985 by Nicholas Negroponte and Jerome Wiesner to promote research into novel uses of computer technology; the World Wide Web Consortium standards organization was founded at the Laboratory for Computer Science in 1994 by Tim Berners-Lee; the MIT OpenCourseWare project has made course materials for over 2,000 Massachusetts Institute of Technology classes available online free of charge since 2002; and the One Laptop per Child initiative to expand computer education and connectivity to children worldwide was launched in 2005.

    The Massachusetts Institute of Technology was named a sea-grant college in 1976 to support its programs in oceanography and marine sciences and was named a space-grant college in 1989 to support its aeronautics and astronautics programs. Despite diminishing government financial support over the past quarter century, MIT launched several successful development campaigns to significantly expand the campus: new dormitories and athletics buildings on west campus; the Tang Center for Management Education; several buildings in the northeast corner of campus supporting research into biology, brain and cognitive sciences, genomics, biotechnology, and cancer research; and a number of new “backlot” buildings on Vassar Street including the Stata Center. Construction on campus in the 2000s included expansions of the Media Lab, the Sloan School’s eastern campus, and graduate residences in the northwest. In 2006, President Hockfield launched the MIT Energy Research Council to investigate the interdisciplinary challenges posed by increasing global energy consumption.

    In 2001, inspired by the open source and open access movements, The Massachusetts Institute of Technology launched “OpenCourseWare” to make the lecture notes, problem sets, syllabi, exams, and lectures from the great majority of its courses available online for no charge, though without any formal accreditation for coursework completed. While the cost of supporting and hosting the project is high, OCW expanded in 2005 to include other universities as a part of the OpenCourseWare Consortium, which currently includes more than 250 academic institutions with content available in at least six languages. In 2011, The Massachusetts Institute of Technology announced it would offer formal certification (but not credits or degrees) to online participants completing coursework in its “MITx” program, for a modest fee. The “edX” online platform supporting MITx was initially developed in partnership with Harvard and its analogous “Harvardx” initiative. The courseware platform is open source, and other universities have already joined and added their own course content. In March 2009 the Massachusetts Institute of Technology faculty adopted an open-access policy to make its scholarship publicly accessible online.

    The Massachusetts Institute of Technology has its own police force. Three days after the Boston Marathon bombing of April 2013, MIT Police patrol officer Sean Collier was fatally shot by the suspects Dzhokhar and Tamerlan Tsarnaev, setting off a violent manhunt that shut down the campus and much of the Boston metropolitan area for a day. One week later, Collier’s memorial service was attended by more than 10,000 people, in a ceremony hosted by the Massachusetts Institute of Technology community with thousands of police officers from the New England region and Canada. On November 25, 2013, The Massachusetts Institute of Technology announced the creation of the Collier Medal, to be awarded annually to “an individual or group that embodies the character and qualities that Officer Collier exhibited as a member of The Massachusetts Institute of Technology community and in all aspects of his life”. The announcement further stated that “Future recipients of the award will include those whose contributions exceed the boundaries of their profession, those who have contributed to building bridges across the community, and those who consistently and selflessly perform acts of kindness”.

    In September 2017, the school announced the creation of an artificial intelligence research lab called the MIT-IBM Watson AI Lab. IBM will spend $240 million over the next decade, and the lab will be staffed by MIT and IBM scientists. In October 2018 MIT announced that it would open a new Schwarzman College of Computing dedicated to the study of artificial intelligence, named after lead donor and The Blackstone Group CEO Stephen Schwarzman. The focus of the new college is to study not just AI, but interdisciplinary AI education, and how AI can be used in fields as diverse as history and biology. The cost of buildings and new faculty for the new college is expected to be $1 billion upon completion.

    The Caltech/MIT Advanced aLIGO was designed and constructed by a team of scientists from California Institute of Technology, Massachusetts Institute of Technology, and industrial contractors, and funded by the National Science Foundation.

    Caltech/MIT Advanced aLIGO

    It was designed to open the field of gravitational-wave astronomy through the detection of gravitational waves predicted by general relativity. Gravitational waves were detected for the first time by the LIGO detector in 2015. For contributions to the LIGO detector and the observation of gravitational waves, two Caltech physicists, Kip Thorne and Barry Barish, and Massachusetts Institute of Technology physicist Rainer Weiss won the Nobel Prize in physics in 2017. Weiss, who is also a Massachusetts Institute of Technology graduate, designed the laser interferometric technique, which served as the essential blueprint for the LIGO.

    The mission of The Massachusetts Institute of Technology is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of The Massachusetts Institute of Technology community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

     
  • richardmitnick 11:40 am on December 13, 2022 Permalink | Reply
    Tags: "Glassy Discovery Offers Computational Windfall to Researchers Across Disciplines", A counterintuitive algorithmic strategy called “metadynamics”, , , , , Computational protein folding, , Crystals, Finding rare low-energy canyons in glassy materials., Folding peptide sequences into proteins, Glassy materials, Machine learning, , ,   

    From The School of Engineering and Applied Science At The University of Pennsylvania: “Glassy Discovery Offers Computational Windfall to Researchers Across Disciplines” 

    From The School of Engineering and Applied Science

    At

    U Penn bloc

    The University of Pennsylvania

    12.5.22
    Devorah Fischler

    1
    Penn Engineers used a counterintuitive algorithmic strategy called “metadynamics” to find rare low-energy canyons in glassy materials. Their breakthrough suggests the algorithm may have a wide range of useful scientific applications, potentially speeding up the pace of computational protein folding and eliminating the need for large data sets in machine learning. (Image credit: Dariusz Jemielniak)

    John Crocker had expected to see a flat line — a familiar horizontal track with some slight peaks and valleys — but the plot of energy in front of him dove sharply downward.

    “It’s a once-in-a-lifetime finding,” says Crocker. “It was as if the simulation had unexpectedly fallen into a deep canyon on an energy surface. This was lucky for two reasons. Firstly, it turned out to be a game changer for our study of glassy materials. And secondly, similar canyons have the potential to help others grappling with the same computational obstacles we face in our field, from computer scientists working on machine learning algorithms to bioengineers studying protein folding. We ended up with significant results because we were curious enough to try a method that shouldn’t have worked. But it did.”

    The method is metadynamics, a computational approach to exploring energy landscapes. Its counterintuitive application is the subject of a recent publication in PNAS [below] from a group of Penn Engineers at the University of Pennsylvania led by Crocker, Professor and Graduate Group Chair in the Department of Chemical and Biomolecular Engineering (CBE), along with Robert Riggleman, Associate Professor in CBE, and Amruthesh Thirumalaiswamy, Ph.D. student in CBE.

    Most solids are glasses (or glassy). We categorize the rest as crystals. These categorizations are not limited to glass or crystal as we might imagine them, but instead indicate how atoms in any solid are arranged. Crystals have neat, repetitive atomic structures. Glasses, however, are amorphous. Their atoms and molecules take on a vast number of disordered configurations.

    2
    Glassy and crystal solids.

    Glassy configurations get stuck while pursuing — as all systems do — their most stable, lowest energy states. Given enough time, glasses will still very slowly relax in energy, but their disordered atoms make it a slow and difficult process.

    Low-energy, stable glasses, or “ideal glasses,” are the key to a storehouse of knowledge that researchers are keen to unlock.

    Seeking to understand and eventually replicate the conditions of glassy materials that overcome the obstacles of their own atomic quirks, scientists use both experimental and theoretical approaches.

    Labs have, for example, melted and re-cooled fossilized amber to develop processes for recreating the encouraging effects that millions of years have had on its glassy pursuit of low-energy states. Crocker’s team, affiliated with the cross-disciplinary Penn Institute for Computational Science (PICS), explores physical structures with mathematical models.

    “We use computational models to simulate the positions and movements of atoms in different glasses,” says Thirumalaiswamy. “In order to keep track of a material’s particles, which are so numerous and dynamic they are impossible to visualize in three dimensions, we need to represent them mathematically in high-dimensional virtual spaces. If we have 300 atoms, for example, we need to represent them in 900 dimensions. We call these energy landscapes. We then investigate the landscapes, navigating them almost like explorers.”
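
    The bookkeeping Thirumalaiswamy describes — N particles in three dimensions becoming a single point in a 3N-dimensional configuration space — can be sketched in a few lines. This is an illustration only: the soft-sphere pair potential below is a generic stand-in, since the article does not specify the glass-former model the team actually simulates.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # 300 particles in 3D -> one point in a 900-dimensional configuration space.
    n_particles = 300
    positions = rng.uniform(0.0, 10.0, size=(n_particles, 3))
    config_point = positions.ravel()          # shape (900,)

    def pair_energy(config, n=n_particles):
        """Energy of one configuration point on the landscape.

        A generic soft-sphere repulsion (zero beyond unit separation)
        stands in for the unspecified glass-former potential.
        """
        pos = config.reshape(n, 3)
        diff = pos[:, None, :] - pos[None, :, :]
        r = np.sqrt((diff ** 2).sum(-1))
        iu = np.triu_indices(n, k=1)          # count each pair once
        r = r[iu]
        return float(np.sum(np.where(r < 1.0, (1.0 - r) ** 2, 0.0)))

    print(config_point.shape, pair_energy(config_point))
    ```

    Each such configuration point is one location on the energy landscape; relaxing or exploring the glass means moving this single 900-dimensional point around.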

    In these computational models, single configuration points, digests of atomic movement, tell the story of a glass’ energy levels. They show where a glass has gotten stuck and where it might have achieved a low-energy state.

    The problem is that until now, researchers have not been able to navigate landscapes efficiently enough to find these rare instances of stability.

    “Most studies do random walks around high-dimensional landscapes at enormous computational cost. It would take an infinite amount of time to find anything of interest. The landscapes are immense, and these walks are repetitive, wasting large amounts of time fixed in a single state before moving on to the next one,” says Riggleman.

    And so, they took a chance in trying metadynamics, a method that seemed destined to fail.

    Metadynamics is an algorithmic strategy developed to explore the entire landscape and avoid repetition. It assigns a penalty for going back to the same place twice. Metadynamics never works in high-dimensional spaces, however, because it takes too long to construct the penalties, canceling out the strategy’s potential for efficiency.
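
    The penalty mechanism can be sketched on a toy problem. Below is a minimal plain-metadynamics walk on a one-dimensional double-well landscape — not the team's glass model — where Gaussian "hills" deposited at visited positions gradually fill the starting well until the walker escapes over the barrier. The hill height, hill width, temperature and step size are all illustrative choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def U(x):
        """Toy double-well energy landscape with minima at x = -1 and x = +1."""
        return (x ** 2 - 1.0) ** 2

    # Gaussian hills deposited at visited positions penalize revisiting them.
    hill_centers = []
    HILL_HEIGHT, HILL_WIDTH = 0.1, 0.2

    def bias(x):
        if not hill_centers:
            return 0.0
        c = np.array(hill_centers)
        return float(np.sum(HILL_HEIGHT * np.exp(-(x - c) ** 2 / (2 * HILL_WIDTH ** 2))))

    def biased_energy(x):
        return U(x) + bias(x)

    # Metropolis walk on the biased landscape; deposit a hill every 10 steps.
    x, step, beta = -1.0, 0.1, 5.0
    visited = [x]
    for i in range(2000):
        x_new = x + rng.normal(0.0, step)
        dE = biased_energy(x_new) - biased_energy(x)
        if dE < 0 or rng.random() < np.exp(-beta * dE):
            x = x_new
        if i % 10 == 0:
            hill_centers.append(x)
        visited.append(x)

    # The accumulated bias pushes the walker out of the starting well,
    # so both minima (x near -1 and x near +1) get visited.
    print(min(visited), max(visited))
    ```

    The inefficiency the researchers describe is visible here: each bias evaluation sums over every hill ever deposited, and in a 900-dimensional landscape the number of hills needed to build a useful penalty surface grows prohibitively — which is why the method "shouldn't have worked" until the low-dimensional canyon floors made it tractable.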

    Yet as the researchers watched their configuration energy trend downward, they realized it had succeeded.

    “We couldn’t have guessed it, but the landscapes proved to have these canyons with floors that are only two- or three-dimensional,” says Crocker. “Our algorithm literally fell right in. We found regularly occurring low-energy configurations in several different glasses with a method we think could be revolutionary for other disciplines as well.”

    The potential applications of the Crocker Lab canyons are wide-ranging.

    In the two decades since the Human Genome Project finished its mapping, scientists have been using computational models to fold peptide sequences into proteins. Proteins that fold well in nature have, through evolution, found ways to explore low-energy states analogous to those of ideal glasses.

    Theoretical studies of proteins use energy landscapes to learn about the folding processes that create the functional (or dysfunctional) foundations for biological health. Yet measuring these structures takes time, money and energy that scientists and the populations they aim to serve don’t have to spare. Bogged down by the same computational inefficiencies that glassy materials researchers face, genomic scientists may find similar successes with metadynamics-based approaches, accelerating the pace of medical research.

    Machine learning processes have a lot in common with random walks in high-dimensional space. Training artificial intelligence takes an enormous amount of computational time and power and still has a long way to go in terms of predictive accuracy.

    A neural net needs to “see,” for example, thousands to millions of faces in order to acquire enough skill for facial recognition. With a more strategic computational process, machine learning could become faster, cheaper and more accessible. The metadynamics algorithm may have the potential to overcome the need for the huge and costly datasets typical of the process.

    Not only would this provide solutions for industry efficiency, but it could also democratize AI, allowing people with modest resources to do their own training and development.

    “We’re conjecturing that the landscapes in these different fields have similar geometric structures to ours,” says Crocker. “We suspect there might be a deep mathematical reason for why these canyons exist, and they may be present in these other related systems. This is our invitation; we look forward to the dialogue it begins.”

    This work was supported by NSF-Division of Material Research 1609525 and 1720530 and computational resources provided by XSEDE (Extreme Science and Engineering Discovery Environment) through TG-DMR150034.

    Science paper:
    PNAS

    See the full article here .

    Comments are invited and will be appreciated, especially if the reader finds any errors which I can correct. Use “Reply”.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The School of Engineering and Applied Science is an undergraduate and graduate school of The University of Pennsylvania. The School offers programs that emphasize hands-on study of engineering fundamentals (with an offering of approximately 300 courses) while encouraging students to leverage the educational offerings of the broader University. Engineering students can also take advantage of research opportunities through interactions with Penn’s School of Medicine, School of Arts and Sciences and the Wharton School.

    Penn Engineering offers bachelors, masters and Ph.D. degree programs in contemporary fields of engineering study. The nationally ranked bioengineering department offers the School’s most popular undergraduate degree program. The Jerome Fisher Program in Management and Technology, offered in partnership with the Wharton School, allows students to simultaneously earn a Bachelor of Science degree in Economics as well as a Bachelor of Science degree in Engineering. SEAS also offers several masters programs, which include: Executive Master’s in Technology Management, Master of Biotechnology, Master of Computer and Information Technology, Master of Computer and Information Science and a Master of Science in Engineering in Telecommunications and Networking.

    History

    The study of engineering at The University of Pennsylvania can be traced back to 1850 when the University trustees adopted a resolution providing for a professorship of “Chemistry as Applied to the Arts”. In 1852, the study of engineering was further formalized with the establishment of the School of Mines, Arts and Manufactures. The first Professor of Civil and Mining Engineering was appointed in 1852. The first graduate of the school received his Bachelor of Science degree in 1854. Since that time, the school has grown to six departments. In 1973, the school was renamed as the School of Engineering and Applied Science.

    The early growth of the school benefited from the generosity of two Philadelphians: John Henry Towne and Alfred Fitler Moore. Towne, a mechanical engineer and railroad developer, bequeathed the school a gift of $500,000 upon his death in 1875. The main administration building for the school still bears his name. Moore was a successful entrepreneur who made his fortune manufacturing telegraph cable. A 1923 gift from Moore established the Moore School of Electrical Engineering, which is the birthplace of the first electronic general-purpose Turing-complete digital computer, ENIAC, in 1946.

    During the latter half of the 20th century the school continued to break new ground. In 1958, Barbara G. Mandell became the first woman to enroll as an undergraduate in the School of Engineering. In 1965, the university acquired two sites that were formerly used as U.S. Army Nike Missile Base (PH 82L and PH 82R) and created the Valley Forge Research Center. In 1976, the Management and Technology Program was created. In 1990, a Bachelor of Applied Science in Biomedical Science and Bachelor of Applied Science in Environmental Science were first offered, followed by a master’s degree in Biotechnology in 1997.

    The school continues to expand with the addition of the Melvin and Claire Levine Hall for computer science in 2003, Skirkanich Hall for Bioengineering in 2006, and the Krishna P. Singh Center for Nanotechnology in 2013.

    Academics

    Penn’s School of Engineering and Applied Science is organized into six departments:

    Bioengineering
    Chemical and Biomolecular Engineering
    Computer and Information Science
    Electrical and Systems Engineering
    Materials Science and Engineering
    Mechanical Engineering and Applied Mechanics

    The school’s Department of Bioengineering, originally named Biomedical Electronic Engineering, consistently garners a top-ten ranking at both the undergraduate and graduate level from U.S. News & World Report. The department also houses the George H. Stephenson Foundation Educational Laboratory & Bio-MakerSpace (aka Biomakerspace) for training undergraduate through PhD students. It is Philadelphia’s and Penn’s only Bio-MakerSpace and it is open to the Penn community, encouraging a free flow of ideas, creativity, and entrepreneurship between Bioengineering students and students throughout the university.

    Founded in 1893, the Department of Chemical and Biomolecular Engineering is “America’s oldest continuously operating degree-granting program in chemical engineering.”

    The Department of Electrical and Systems Engineering is recognized for its research in electroscience, systems science and network systems and telecommunications.

    Originally established in 1946 as the School of Metallurgical Engineering, the Materials Science and Engineering Department “includes cutting edge programs in nanoscience and nanotechnology, biomaterials, ceramics, polymers, and metals.”

    The Department of Mechanical Engineering and Applied Mechanics draws its roots from the Department of Mechanical and Electrical Engineering, which was established in 1876.

    Each department houses one or more degree programs. The Chemical and Biomolecular Engineering, Materials Science and Engineering, and Mechanical Engineering and Applied Mechanics departments each house a single degree program.

    Bioengineering houses two programs (both a Bachelor of Science in Engineering degree as well as a Bachelor of Applied Science degree). Electrical and Systems Engineering offers four Bachelor of Science in Engineering programs: Electrical Engineering, Systems Engineering, Computer Engineering, and the Networked & Social Systems Engineering, the latter two of which are co-housed with Computer and Information Science (CIS). The CIS department, like Bioengineering, offers Computer and Information Science programs under both bachelor programs. CIS also houses Digital Media Design, a program jointly operated with PennDesign.

    Research

    Penn’s School of Engineering and Applied Science is a research institution. SEAS research strives to advance science and engineering and to achieve a positive impact on society.

    U Penn campus

    Academic life at University of Pennsylvania is unparalleled, with 100 countries and every U.S. state represented in one of the Ivy League’s most diverse student bodies. Consistently ranked among the top 10 universities in the country, Penn enrolls 10,000 undergraduate students and welcomes an additional 10,000 students to our world-renowned graduate and professional schools.

    Penn’s award-winning educators and scholars encourage students to pursue inquiry and discovery, follow their passions, and address the world’s most challenging problems through an interdisciplinary approach.

    The University of Pennsylvania is a private Ivy League research university in Philadelphia, Pennsylvania. The university claims a founding date of 1740 and is one of the nine colonial colleges chartered prior to the U.S. Declaration of Independence. Benjamin Franklin, Penn’s founder and first president, advocated an educational program that trained leaders in commerce, government, and public service, similar to a modern liberal arts curriculum.

    Penn has four undergraduate schools as well as twelve graduate and professional schools. Schools enrolling undergraduates include the College of Arts and Sciences; the School of Engineering and Applied Science; the Wharton School; and the School of Nursing. Penn’s “One University Policy” allows students to enroll in classes in any of Penn’s twelve schools. Among its highly ranked graduate and professional schools are a law school whose first professor wrote the first draft of the United States Constitution, the first school of medicine in North America (Perelman School of Medicine, 1765), and the first collegiate business school (Wharton School, 1881).

    Penn is also home to the first “student union” building and organization (Houston Hall, 1896), the first Catholic student club in North America (Newman Center, 1893), the first double-decker college football stadium (Franklin Field, 1924 when second deck was constructed), and Morris Arboretum, the official arboretum of the Commonwealth of Pennsylvania. The first general-purpose electronic computer (ENIAC) was developed at Penn and formally dedicated in 1946. In 2019, the university had an endowment of $14.65 billion, the sixth-largest endowment of all universities in the United States, as well as a research budget of $1.02 billion. The university’s athletics program, the Quakers, fields varsity teams in 33 sports as a member of the NCAA Division I Ivy League conference.

    As of 2018, distinguished alumni and/or Trustees include three U.S. Supreme Court justices; 32 U.S. senators; 46 U.S. governors; 163 members of the U.S. House of Representatives; eight signers of the Declaration of Independence and seven signers of the U.S. Constitution (four of whom signed both representing two-thirds of the six people who signed both); 24 members of the Continental Congress; 14 foreign heads of state and two presidents of the United States, including Donald Trump. As of October 2019, 36 Nobel laureates; 80 members of the American Academy of Arts and Sciences; 64 billionaires; 29 Rhodes Scholars; 15 Marshall Scholars and 16 Pulitzer Prize winners have been affiliated with the university.

    History

    The University of Pennsylvania considers itself the fourth-oldest institution of higher education in the United States, though this is contested by Princeton University and Columbia University. The university also considers itself as the first university in the United States with both undergraduate and graduate studies.

    In 1740, a group of Philadelphians joined together to erect a great preaching hall for the traveling evangelist George Whitefield, who toured the American colonies delivering open-air sermons. The building was designed and built by Edmund Woolley and was the largest building in the city at the time, drawing thousands of people the first time Whitefield preached in it. It was initially planned to serve as a charity school as well, but a lack of funds forced plans for the chapel and school to be suspended. According to Franklin’s autobiography, it was in 1743 when he first had the idea to establish an academy, “thinking the Rev. Richard Peters a fit person to superintend such an institution”. However, Peters declined a casual inquiry from Franklin and nothing further was done for another six years. In the fall of 1749, now more eager to create a school to educate future generations, Benjamin Franklin circulated a pamphlet titled Proposals Relating to the Education of Youth in Pensilvania, his vision for what he called a “Public Academy of Philadelphia”. Unlike the other colonial colleges that existed in 1749—Harvard University, William & Mary, Yale University, and The College of New Jersey—Franklin’s new school would not focus merely on education for the clergy. He advocated an innovative concept of higher education, one which would teach both the ornamental knowledge of the arts and the practical skills necessary for making a living and doing public service. The proposed program of study could have become the nation’s first modern liberal arts curriculum, although it was never implemented because Anglican priest William Smith (1727-1803), who became the first provost, and other trustees strongly preferred the traditional curriculum.

    Franklin assembled a board of trustees from among the leading citizens of Philadelphia, the first such non-sectarian board in America. At the first meeting of the 24 members of the board of trustees on November 13, 1749, the issue of where to locate the school was a prime concern. Although a lot across Sixth Street from the old Pennsylvania State House (later renamed and famously known since 1776 as “Independence Hall”), was offered without cost by James Logan, its owner, the trustees realized that the building erected in 1740, which was still vacant, would be an even better site. The original sponsors of the dormant building still owed considerable construction debts and asked Franklin’s group to assume their debts and, accordingly, their inactive trusts. On February 1, 1750, the new board took over the building and trusts of the old board. On August 13, 1751, the “Academy of Philadelphia”, using the great hall at 4th and Arch Streets, took in its first secondary students. A charity school also was chartered on July 13, 1753 by the intentions of the original “New Building” donors, although it lasted only a few years. On June 16, 1755, the “College of Philadelphia” was chartered, paving the way for the addition of undergraduate instruction. All three schools shared the same board of trustees and were considered to be part of the same institution. The first commencement exercises were held on May 17, 1757.

    The institution of higher learning was known as the College of Philadelphia from 1755 to 1779. In 1779, not trusting then-provost the Reverend William Smith’s “Loyalist” tendencies, the revolutionary State Legislature created a University of the State of Pennsylvania. The result was a schism, with Smith continuing to operate an attenuated version of the College of Philadelphia. In 1791, the legislature issued a new charter, merging the two institutions into a new University of Pennsylvania with twelve men from each institution on the new board of trustees.

    Penn has three claims to being the first university in the United States, according to university archives director Mark Frazier Lloyd: the 1765 founding of the first medical school in America made Penn the first institution to offer both “undergraduate” and professional education; the 1779 charter made it the first American institution of higher learning to take the name of “University”; and existing colleges were established as seminaries (although, as detailed earlier, Penn adopted a traditional seminary curriculum as well).

    After being located in downtown Philadelphia for more than a century, the campus was moved across the Schuylkill River to property purchased from the Blockley Almshouse in West Philadelphia in 1872, where it has since remained in an area now known as University City. Although Penn began operating as an academy or secondary school in 1751 and obtained its collegiate charter in 1755, it initially designated 1750 as its founding date; this is the year that appears on the first iteration of the university seal. Sometime later in its early history, Penn began to consider 1749 as its founding date and this year was referenced for over a century, including at the centennial celebration in 1849. In 1899, the board of trustees voted to adjust the founding date earlier again, this time to 1740, the date of “the creation of the earliest of the many educational trusts the University has taken upon itself”. The board of trustees voted in response to a three-year campaign by Penn’s General Alumni Society to retroactively revise the university’s founding date to appear older than Princeton University, which had been chartered in 1746.

    Research, innovations and discoveries

    Penn is classified as an “R1” doctoral university: “Highest research activity.” Its economic impact on the Commonwealth of Pennsylvania for 2015 amounted to $14.3 billion. Penn’s research expenditures in the 2018 fiscal year were $1.442 billion, the fourth largest in the U.S. In fiscal year 2019 Penn received $582.3 million in funding from the National Institutes of Health.

    In line with its well-known interdisciplinary tradition, Penn’s research centers often span two or more disciplines. In the 2010–2011 academic year alone, five interdisciplinary research centers were created or substantially expanded; these include the Center for Health-care Financing; the Center for Global Women’s Health at the Nursing School; the $13 million Morris Arboretum’s Horticulture Center; the $15 million Jay H. Baker Retailing Center at Wharton; and the $13 million Translational Research Center at Penn Medicine. With these additions, Penn now counts 165 research centers hosting a research community of over 4,300 faculty and over 1,100 postdoctoral fellows, 5,500 academic support staff and graduate student trainees. To further assist the advancement of interdisciplinary research President Amy Gutmann established the “Penn Integrates Knowledge” title awarded to selected Penn professors “whose research and teaching exemplify the integration of knowledge”. These professors hold endowed professorships and joint appointments between Penn’s schools.

    Penn is also among the most prolific producers of doctoral students. With 487 PhDs awarded in 2009, Penn ranks third in the Ivy League, only behind Columbia University and Cornell University (Harvard University did not report data). It also has one of the highest numbers of post-doctoral appointees (933 in number for 2004–2007), ranking third in the Ivy League (behind Harvard and Yale University) and tenth nationally.

    In most disciplines Penn professors’ productivity is among the highest in the nation and first in the fields of epidemiology, business, communication studies, comparative literature, languages, information science, criminal justice and criminology, social sciences and sociology. According to the National Research Council nearly three-quarters of Penn’s 41 assessed programs were placed in ranges including the top 10 rankings in their fields, with more than half of these in ranges including the top five rankings in these fields.

    Penn’s research tradition has historically been complemented by innovations that shaped higher education. In addition to establishing the first medical school; the first university teaching hospital; the first business school; and the first student union, Penn was also the cradle of other significant developments. In 1852, Penn Law was the first law school in the nation to publish a law journal still in existence (then called The American Law Register, now the Penn Law Review, one of the most cited law journals in the world). Under the deanship of William Draper Lewis, the law school was also one of the first schools to emphasize legal teaching by full-time professors instead of practitioners, a system that is still followed today. The Wharton School was home to several pioneering developments in business education. It established the first research center in a business school in 1921 and the first center for entrepreneurship in 1973, and it regularly introduced novel curricula for which BusinessWeek wrote, “Wharton is on the crest of a wave of reinvention and change in management education”.

    Several major scientific discoveries have also taken place at Penn. The university is probably best known as the place where the first general-purpose electronic computer (ENIAC) was born in 1946 at the Moore School of Electrical Engineering.

    ENIAC UPenn

    It was here also where the world’s first spelling and grammar checkers were created, as well as the popular COBOL programming language. Penn can also boast some of the most important discoveries in the field of medicine. The dialysis machine used as an artificial replacement for lost kidney function was conceived and devised out of a pressure cooker by William Inouye while he was still a student at Penn Med; the Rubella and Hepatitis B vaccines were developed at Penn; the discovery of cancer’s link with genes; cognitive therapy; Retin-A (the cream used to treat acne), Resistin; the Philadelphia gene (linked to chronic myelogenous leukemia) and the technology behind PET Scans were all discovered by Penn Med researchers. More recent gene research has led to the discovery of the genes for fragile X syndrome, the most common form of inherited mental retardation; spinal and bulbar muscular atrophy, a disorder marked by progressive muscle wasting; and Charcot–Marie–Tooth disease, a progressive neurodegenerative disease that affects the hands, feet and limbs.

    Conductive polymer was also developed at Penn by Alan J. Heeger, Alan MacDiarmid and Hideki Shirakawa, an invention that earned them the Nobel Prize in Chemistry. On faculty since 1965, Ralph L. Brinster developed the scientific basis for in vitro fertilization and the transgenic mouse at Penn and was awarded the National Medal of Science in 2010. The theory of superconductivity was also partly developed at Penn, by then-faculty member John Robert Schrieffer (along with John Bardeen and Leon Cooper). The university has also contributed major advancements in the fields of economics and management. Among the many discoveries are conjoint analysis, widely used as a predictive tool especially in market research; Simon Kuznets’s method of measuring Gross National Product; the Penn effect (the observation that consumer price levels in richer countries are systematically higher than in poorer ones) and the “Wharton Model” developed by Nobel-laureate Lawrence Klein to measure and forecast economic activity. The idea behind Health Maintenance Organizations also belonged to Penn professor Robert Eilers, who put it into practice during then-President Nixon’s health reform in the 1970s.

    International partnerships

    Students can study abroad for a semester or a year at partner institutions such as the London School of Economics (UK), University of Barcelona [Universitat de Barcelona] (ES), Paris Institute of Political Studies [Institut d’études politiques de Paris] (FR), University of Queensland (AU), University College London (UK), King’s College London (UK), Hebrew University of Jerusalem (IL) and University of Warwick (UK).

     