Tagged: Computing

  • richardmitnick 2:10 pm on September 28, 2021
    Tags: "The co-evolution of particle physics and computing", , Computing, , , , ,   

    From Symmetry: “The co-evolution of particle physics and computing” 


    09/28/21
    Stephanie Melchor

    Illustration by Sandbox Studio, Chicago with Ariel Davis.

    Over time, particle physics and astrophysics and computing have built upon one another’s successes. That co-evolution continues today.

    In the mid-twentieth century, particle physicists were peering deeper into the history and makeup of the universe than ever before. Over time, their calculations became too complex to fit on a blackboard—or to farm out to armies of human “computers” doing calculations by hand.

    To deal with this, they developed some of the world’s earliest electronic computers.

    Physics has played an important role in the history of computing. The transistor—the switch that controls the flow of electrical signal within a computer—was invented by a group of physicists at Bell Labs. The incredible computational demands of particle physics and astrophysics experiments have consistently pushed the boundaries of what is possible. They have encouraged the development of new technologies to handle tasks from dealing with avalanches of data to simulating interactions on the scales of both the cosmos and the quantum realm.

    But this influence doesn’t just go one way. Computing plays an essential role in particle physics and astrophysics as well. As computing has grown increasingly more sophisticated, its own progress has enabled new scientific discoveries and breakthroughs.

    Illustration by Sandbox Studio, Chicago with Ariel Davis.

    Managing an onslaught of data

    In 1973, scientists at DOE’s Fermi National Accelerator Laboratory (US) in Illinois got their first big mainframe computer: a 7-year-old hand-me-down from DOE’s Lawrence Berkeley National Laboratory (US). Called the CDC 6600, it weighed about 6 tons. Over the next five years, Fermilab added five more large mainframe computers to its collection.

    Then came the completion of the Tevatron—at the time, the world’s highest-energy particle accelerator—which would provide the particle beams for numerous experiments at the lab.

    FNAL/Tevatron map

    FNAL/Tevatron accelerator

    FNAL/Tevatron CDF detector

    FNAL/Tevatron DØ detector

    By the mid-1990s, two four-story particle detectors would begin selecting, storing and analyzing data from millions of particle collisions per second at the Tevatron. Called the Collider Detector at Fermilab and the DØ detector, these new experiments threatened to overwhelm the lab’s computational abilities.

    In December of 1983, a committee of physicists and computer scientists released a 103-page report highlighting the “urgent need for an upgrading of the laboratory’s computer facilities.” The report said the lab “should continue the process of catching up” in terms of computing ability, and that “this should remain the laboratory’s top computing priority for the next few years.”

    Instead of simply buying more large computers (which were incredibly expensive), the committee suggested a new approach: They recommended increasing computational power by distributing the burden over clusters or “farms” of hundreds of smaller computers.

    Thanks to Intel’s 1971 development of a new commercially available microprocessor the size of a domino, computers were shrinking. Fermilab was one of the first national labs to try the concept of clustering these smaller computers together, treating each particle collision as a computationally independent event that could be analyzed on its own processor.
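
    The computational insight behind the farm approach is that collision events are statistically independent, so they can be spread across many cheap processors with no coordination beyond collecting the results. A minimal sketch of that pattern in Python (the event data and the analyze_event function are hypothetical stand-ins, not the lab's actual reconstruction code):

```python
from multiprocessing import Pool

def analyze_event(event):
    """Reconstruct one collision event in isolation.

    Hypothetical placeholder: a real experiment would run track finding,
    calorimeter clustering, etc. on the raw detector data.
    """
    tracks = [hit * 2 for hit in event["hits"]]   # stand-in computation
    return {"event_id": event["event_id"], "n_tracks": len(tracks)}

if __name__ == "__main__":
    # Fake "raw data": each event is independent of every other event.
    events = [{"event_id": i, "hits": list(range(i % 5 + 1))} for i in range(1000)]

    # A "farm" of worker processes, each analyzing whole events on its own.
    with Pool(processes=8) as farm:
        results = farm.map(analyze_event, events)

    print(f"analyzed {len(results)} events")
```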

    Like many new ideas in science, it wasn’t accepted without some pushback.

    Joel Butler, a physicist at Fermilab who was on the computing committee, recalls, “There was a big fight about whether this was a good idea or a bad idea.”

    A lot of people were enchanted with the big computers, he says. They were impressive-looking and reliable, and people knew how to use them. And then along came “this swarm of little tiny devices, packaged in breadbox-sized enclosures.”

    The computers were unfamiliar, and the companies building them weren’t well-established. On top of that, it wasn’t clear how well the clustering strategy would work.

    As for Butler? “I raised my hand [at a meeting] and said, ‘Good idea’—and suddenly my entire career shifted from building detectors and beamlines to doing computing,” he chuckles.

    Not long afterward, an innovation sparked by the needs of particle physics enabled another leap in computing. In 1989, Tim Berners-Lee, a computer scientist at the European Organization for Nuclear Research [Organisation européenne pour la recherche nucléaire] [Europäische Organisation für Kernforschung] (CH) [CERN], launched the World Wide Web to help CERN physicists share data with research collaborators all over the world.

    To be clear, Berners-Lee didn’t create the internet—that was already underway in the form of the ARPANET, developed by the US Department of Defense.

    ARPANET

    But the ARPANET connected only a few hundred computers, and it was difficult to share information across machines with different operating systems.

    The web Berners-Lee created was an application that ran on the internet, like email, and started as a collection of documents connected by hyperlinks. To get around the problem of accessing files between different types of computers, he developed HTML (HyperText Markup Language), a markup language that formatted and displayed files in a web browser independent of the local computer’s operating system.

    Berners-Lee also developed the first web browser, allowing users to access files stored on the first web server (Berners-Lee’s computer at CERN).

    NCSA Mosaic browser

    Netscape

    He implemented the concept of a URL (Uniform Resource Locator), specifying how and where to access desired web pages.
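
    To get a feel for how those pieces fit together, here is a minimal sketch using Python's standard library: a toy web server that returns an HTML document, which any browser on any operating system can render, at a URL the server defines. The page content is invented for illustration and bears no relation to CERN's original server code.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"""<html>
  <head><title>Hypothetical physics notes</title></head>
  <body>
    <h1>Shared results</h1>
    <p>Any browser on any operating system renders this the same way.</p>
    <a href="/">A hyperlink back to this page</a>
  </body>
</html>"""

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # self.path is the path component of the requested URL, e.g. "/"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE)

if __name__ == "__main__":
    # Serving at http://localhost:8000/ -- scheme, host, port and path
    # together form the URL that locates this document.
    HTTPServer(("localhost", 8000), Handler).serve_forever()
```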

    What started out as an internal project to help particle physicists share data within their institution fundamentally changed not just computing, but how most people experience the digital world today.

    Back at Fermilab, cluster computing wound up working well for handling the Tevatron data. Eventually, it became an industry standard, adopted by tech giants like Google and Amazon.

    Over the next decade, other US national laboratories adopted the idea, too. DOE’s SLAC National Accelerator Laboratory (US)—then called Stanford Linear Accelerator Center—transitioned from big mainframes to clusters of smaller computers to prepare for its own extremely data-hungry experiment, BaBar.

    SLAC National Accelerator Laboratory(US) BaBar

    Both SLAC and Fermilab were also early adopters of Berners-Lee’s web server. The labs set up the first two websites in the United States, paving the way for this innovation to spread across the continent.

    In 1989, in recognition of the growing importance of computing in physics, Fermilab Director John Peoples elevated the computing department to a full-fledged division. The head of a division reports directly to the lab director, making it easier to get resources and set priorities. Physicist Tom Nash formed the new Computing Division, along with Butler and two other scientists, Irwin Gaines and Victoria White. Butler led the division from 1994 to 1998.

    High-performance computing in particle physics and astrophysics

    These computational systems worked well for particle physicists for a long time, says Berkeley Lab astrophysicist Peter Nugent. That is, until Moore’s Law started grinding to a halt.

    Moore’s Law is the idea that the number of transistors in a circuit will double every two years, making computers faster and cheaper. The term was coined in the mid-1970s, and the trend held reliably for decades. But now, computer manufacturers are starting to hit the physical limit of how many tiny transistors they can cram onto a single microchip.

    Because of this, says Nugent, particle physicists have been looking to take advantage of high-performance computing instead.

    Nugent says high-performance computing is “something more than a cluster, or a cloud-computing environment that you could get from Google or AWS, or at your local university.”

    What it typically means, he says, is that you have high-speed networking between computational nodes, allowing them to share information with each other very, very quickly. When you are computing on up to hundreds of thousands of nodes simultaneously, it massively speeds up the process.

    On a single traditional computer, he says, 100 million CPU hours translates to more than 11,000 years of continuous calculations. But for scientists using a high-performance computing facility at Berkeley Lab, DOE’s Argonne National Laboratory (US) or DOE’s Oak Ridge National Laboratory (US), 100 million hours is a typical, large allocation for one year.
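
    The arithmetic behind that comparison is simple enough to check directly (assuming, for the sketch, a single core running around the clock and a facility with 100,000 nodes):

```python
# 100 million CPU-hours on one core, running nonstop:
cpu_hours = 100_000_000
hours_per_year = 24 * 365          # ~8,760 hours in a year
years_serial = cpu_hours / hours_per_year
print(f"{years_serial:,.0f} years on a single core")   # ~11,416 years

# Spread over, say, 100,000 nodes of an HPC facility:
nodes = 100_000
hours_parallel = cpu_hours / nodes
print(f"{hours_parallel:,.0f} hours (~{hours_parallel/24:.0f} days) across {nodes:,} nodes")
```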

    Although astrophysicists have always relied on high-performance computing for simulating the birth of stars or modeling the evolution of the cosmos, Nugent says they are now using it for their data analysis as well.

    This includes rapid image-processing computations that have enabled the observations of several supernovae, including SN 2011fe, captured just hours after it exploded. “We found it just a few hours after it exploded, all because we were able to run these pipelines so efficiently and quickly,” Nugent says.

    According to Berkeley Lab physicist Paolo Calafiura, particle physicists also use high-performance computing for simulations—for modeling not the evolution of the cosmos, but rather what happens inside a particle detector. “Detector simulation is significantly the most computing-intensive problem that we have,” he says.

    Scientists need to evaluate multiple possibilities for what can happen when particles collide. To properly correct for detector effects when analyzing particle detector experiments, they need to simulate more data than they collect. “If you collect 1 billion collision events a year,” Calafiura says, “you want to simulate 10 billion collision events.”

    Calafiura says that right now, he’s more worried about finding a way to store all of the simulated and actual detector data than he is about producing it, but he knows that won’t last.

    “When does physics push computing?” he says. “When computing is not good enough… We see that in five years, computers will not be powerful enough for our problems, so we are pushing hard with some radically new ideas, and lots of detailed optimization work.”

    That’s why the Department of Energy’s Exascale Computing Project aims to build, in the next few years, computers capable of performing a quintillion (that is, a billion billion) operations per second, roughly 1,000 times faster than the petascale machines that preceded them.

    Depiction of ANL ALCF Cray Intel SC18 Shasta Aurora exascale supercomputer, to be built at DOE’s Argonne National Laboratory.

    The exascale computers will also be used for other applications ranging from precision medicine to climate modeling to national security.

    Machine learning and quantum computing

    Innovations in computer hardware have enabled astrophysicists to push the kinds of simulations and analyses they can do. For example, Nugent says, the introduction of graphics processing units [GPUs] has sped up astrophysicists’ ability to do calculations used in machine learning, leading to an explosive growth of machine learning in astrophysics.

    With machine learning, which uses algorithms and statistics to identify patterns in data, astrophysicists can simulate entire universes in microseconds.

    Machine learning has been important in particle physics as well, says Fermilab scientist Nhan Tran. “[Physicists] have very high-dimensional data, very complex data,” he says. “Machine learning is an optimal way to find interesting structures in that data.”

    The same way a computer can be trained to tell the difference between cats and dogs in pictures, it can learn how to identify particles from physics datasets, distinguishing between things like pions and photons.

    Tran says using computation this way can accelerate discovery. “As physicists, we’ve been able to learn a lot about particle physics and nature using non-machine-learning algorithms,” he says. “But machine learning can drastically accelerate and augment that process—and potentially provide deeper insight into the data.”
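
    As a hedged illustration of what such a classifier looks like in code, here is a toy pion-versus-photon separator built with scikit-learn on synthetic features; the variables, numbers and labels are invented for the example and are not real experiment data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "detector" features: [shower_width, energy_fraction_in_core].
# Photons (label 1) tend to make narrow showers; pions (label 0) wider ones.
n = 2000
photons = np.column_stack([rng.normal(1.0, 0.3, n), rng.normal(0.9, 0.05, n)])
pions   = np.column_stack([rng.normal(2.0, 0.5, n), rng.normal(0.7, 0.10, n)])

X = np.vstack([photons, pions])
y = np.concatenate([np.ones(n), np.zeros(n)])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a simple classifier and check how well it separates the two classes.
clf = LogisticRegression().fit(X_train, y_train)
print(f"photon/pion separation accuracy: {clf.score(X_test, y_test):.2f}")
```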

    And while teams of researchers are busy building exascale computers, others are hard at work trying to build another type of supercomputer: the quantum computer.

    Remember Moore’s Law? Previously, engineers were able to make computer chips faster by shrinking the size of electrical circuits, reducing the amount of time it takes for electrical signals to travel. “Now our technology is so good that literally the distance between transistors is the size of an atom,” Tran says. “So we can’t keep scaling down the technology and expect the same gains we’ve seen in the past.”

    To get around this, some researchers are redefining how computation works at a fundamental level—like, really fundamental.

    The basic unit of data in a classical computer is called a bit, which can hold one of two values: 1, if it has an electrical signal, or 0, if it has none. But in quantum computing, data is stored in quantum systems—things like electrons, which have either up or down spins, or photons, which are polarized either vertically or horizontally. These data units are called “qubits.”

    Here’s where it gets weird. Through a quantum property called superposition, qubits have more than just two possible states. An electron can be up, down, or in a variety of states in between.

    What does this mean for computing? A collection of three classical bits can exist in only one of eight possible configurations: 000, 001, 010, 011, 100, 101, 110 or 111. But through superposition, three qubits can be in all eight of these configurations at once. A quantum computer can use that information to tackle problems that are impossible to solve with a classical computer.
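
    One way to see the difference is to write the states down: three classical bits are a single index between 0 and 7, while three qubits are described by a vector of eight complex amplitudes, one per configuration. A small numpy illustration of that bookkeeping (a simulation of the mathematics, not of a quantum computer):

```python
import numpy as np

# Three classical bits: exactly one of the 8 configurations at a time.
classical_state = 0b101   # the single configuration "101"

# Three qubits: a length-8 vector of amplitudes over all configurations.
# An equal superposition puts weight 1/sqrt(8) on every basis state.
amplitudes = np.full(8, 1 / np.sqrt(8))

for i, amp in enumerate(amplitudes):
    print(f"|{i:03b}>  amplitude {amp:.3f}  probability {abs(amp)**2:.3f}")

# Probabilities sum to 1: a measurement collapses to one configuration.
print("total probability:", np.sum(np.abs(amplitudes) ** 2))
```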

    Fermilab scientist Aaron Chou likens quantum problem-solving to throwing a pebble into a pond. The ripples move through the water in every possible direction, “simultaneously exploring all of the possible things that it might encounter.”

    In contrast, a classical computer can only move in one direction at a time.

    But this makes quantum computers faster than classical computers only when it comes to solving certain types of problems. “It’s not like you can take any classical algorithm and put it on a quantum computer and make it better,” says University of California, Santa Barbara physicist John Martinis, who helped build Google’s quantum computer.

    Although quantum computers work in a fundamentally different way than classical computers, designing and building them wouldn’t be possible without traditional computing laying the foundation, Martinis says. “We’re really piggybacking on a lot of the technology of the last 50 years or more.”

    The kinds of problems that are well suited to quantum computing are intrinsically quantum mechanical in nature, says Chou.

    For instance, Martinis says, consider quantum chemistry. Solving quantum chemistry problems with classical computers is so difficult, he says, that 10 to 15% of the world’s supercomputer usage is currently dedicated to the task. “Quantum chemistry problems are hard for the very reason why a quantum computer is powerful”—because to complete them, you have to consider all the different quantum-mechanical states of all the individual atoms involved.

    Because making better quantum computers would be so useful in physics research, and because building them requires skills and knowledge that physicists possess, physicists are ramping up their quantum efforts. In the United States, the National Quantum Initiative Act of 2018 called for the National Institute of Standards and Technology (US), the National Science Foundation (US) and the Department of Energy (US) to support programs, centers and consortia devoted to quantum information science.

    Coevolution requires cooperation

    In the early days of computational physics, the line between who was a particle physicist and who was a computer scientist could be fuzzy. Physicists used commercially available microprocessors to build custom computers for experiments. They also wrote much of their own software—ranging from printer drivers to the software that coordinated the analysis between the clustered computers.

    Nowadays, roles have somewhat shifted. Most physicists use commercially available devices and software, allowing them to focus more on the physics, Butler says. But some people, like Anshu Dubey, work right at the intersection of the two fields. Dubey is a computational scientist at DOE’s Argonne National Laboratory (US) who works with computational physicists.

    When a physicist needs to computationally interpret or model a phenomenon, sometimes they will sign up a student or postdoc in their research group for a programming course or two and then ask them to write the code to do the job. Although these codes are mathematically complex, Dubey says, they aren’t logically complex, making them relatively easy to write.

    A simulation of a single physical phenomenon can be neatly packaged within fairly straightforward code. “But the real world doesn’t want to cooperate with you in terms of its modularity and encapsularity,” she says.

    Multiple forces are always at play, so to accurately model real-world complexity, you have to use more complex software—ideally software that doesn’t become impossible to maintain as it gets updated over time. “All of a sudden,” says Dubey, “you start to require people who are creative in their own right—in terms of being able to architect software.”

    That’s where people like Dubey come in. At Argonne, Dubey develops software that researchers use to model complex multi-physics systems—incorporating processes like fluid dynamics, radiation transfer and nuclear burning.
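
    The architectural point can be sketched abstractly: each physics process becomes a module with a common interface, and a driver composes them every timestep. The module names and update rules below are invented placeholders, not Argonne's actual framework:

```python
# A toy operator-splitting driver: each physics module advances the same
# state by one timestep, and the driver composes them. (Illustrative only.)

class FluidDynamics:
    def advance(self, state, dt):
        state["density"] *= 1.0 - 0.01 * dt              # placeholder update
        return state

class RadiationTransfer:
    def advance(self, state, dt):
        state["temperature"] *= 1.0 - 0.02 * dt          # placeholder update
        return state

class NuclearBurning:
    def advance(self, state, dt):
        state["temperature"] += 0.05 * dt * state["density"]  # placeholder
        return state

def run(modules, state, dt, steps):
    for _ in range(steps):
        for module in modules:        # operator splitting: one module at a time
            state = module.advance(state, dt)
    return state

if __name__ == "__main__":
    physics = [FluidDynamics(), RadiationTransfer(), NuclearBurning()]
    final = run(physics, {"density": 1.0, "temperature": 1.0}, dt=0.1, steps=100)
    print(final)
```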

    Hiring computer scientists for research projects in physics and other fields of science can be a challenge, Dubey says. Most funding agencies specify that research money can be used to hire students and postdocs, but not to pay for software development or dedicated engineers. “There is no viable career path in academia for people whose careers are like mine,” she says.

    In an ideal world, universities would establish endowed positions for a team of research software engineers in physics departments with a nontrivial amount of computational research, Dubey says. These engineers would write reliable, well-architected code, and their institutional knowledge would stay with a team.

    Physics and computing have been closely intertwined for decades. However the two develop—toward new analyses using artificial intelligence, for example, or toward the creation of better and better quantum computers—it seems they will remain on this path together.

    See the full article here.



    Please help promote STEM in your local schools.


    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.


     
  • richardmitnick 10:58 am on August 13, 2021
    Tags: "University of Washington and Microsoft researchers develop 'nanopore-tal' enabling cells to talk to computers", A commercially available nanopore array — in this case the Oxford Nanopore Technologies MinION device., A new class of reporter proteins that can be directly read by a commercially available nanopore sensing device., , , , Computing, Genetically encoded reporter proteins have been a mainstay of biotechnology research., Scientists are currently working to scale up the number of "NTERs" to hundreds; thousands; maybe even millions more., The new system-dubbed “Nanopore-addressable protein Tags Engineered as Reporters” also known as NanoporeTERs or NTERs for short., This is a fundamentally new interface between cells and computers., University of Washington Paul G. Allen College of Electrical and Computer of Engineering (US)   

    From University of Washington Paul G. Allen College of Electrical and Computer Engineering (US): “University of Washington and Microsoft researchers develop ‘nanopore-tal’ enabling cells to talk to computers”


    August 12, 2021

    MISL researcher Nicolas Cardozo pipettes cell cultures containing NanoporeTERs onto a portable MinION nanopore sensing device for processing as professor Jeff Nivala looks on. Credit: Dennis Wise/University of Washington.

    Genetically encoded reporter proteins have been a mainstay of biotechnology research, allowing scientists to track gene expression, understand intracellular processes and debug engineered genetic circuits. But conventional reporting schemes that rely on fluorescence and other optical approaches come with practical limitations that could cast a shadow over the field’s future progress. Now, thanks to a team of researchers at the University of Washington and Microsoft, scientists are about to see reporter proteins in a whole new light.

    In a paper published today in the journal Nature Biotechnology, members of the Molecular Information Systems Laboratory housed at the UW’s Paul G. Allen School of Computer Science & Engineering introduce a new class of reporter proteins that can be directly read by a commercially available nanopore sensing device. The new system ― dubbed “Nanopore-addressable protein Tags Engineered as Reporters” also known as NanoporeTERs or NTERs for short ― can perform multiplexed detection of protein expression levels from bacterial and human cell cultures far beyond the capacity of existing techniques.

    You could say the new system offers a “nanopore-tal” into what is happening inside these complex biological systems where, up until this point, scientists have largely been operating in the dark.

    “NanoporeTERs offer a new and richer lexicon for engineered cells to express themselves and shed new light on the factors they are designed to track. They can tell us a lot more about what is happening in their environment all at once,” said co-lead author Nicolas Cardozo, a graduate student in the UW’s molecular engineering Ph.D. program. “We’re essentially making it possible for these cells to ‘talk’ to computers about what’s happening in their surroundings at a new level of detail, scale and efficiency that will enable deeper analysis than what we could do before.”

    Raw nanopore signals streaming from the MinION device, which contains an array of hundreds of nanopore sensors; each color represents data from an individual nanopore. The team uses machine learning to interpret these signals as NanoporeTERs barcodes. Credit: Dennis Wise/University of Washington.

    Conventional methods that employ optical reporter proteins, such as green fluorescent protein (GFP), are limited in the number of distinct genetic outputs that they can track simultaneously due to their overlapping spectral properties. For example, it’s difficult to distinguish between more than three different fluorescent protein colors, limiting multiplexed reporting to a maximum of three outputs. In contrast, NTERs were designed to carry distinct protein “barcodes” composed of strings of amino acids that, when used in combination, enable a degree of multiplexing approaching an order of magnitude more. These synthetic proteins are secreted outside of the cell into the surrounding environment, where they are collected and directly analyzed using a commercially available nanopore array — in this case the Oxford Nanopore Technologies MinION device. To make nanopore analysis possible, the NTER proteins were engineered with charged “tails” that get pulled into the tiny nanopore sensors by an electric field. Machine learning is then used to classify their electrical signals in order to determine the output levels of each NTER barcode.
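
    In outline, that last step is a supervised classification problem: each raw current trace is reduced to features and mapped to one of the known barcodes. The heavily simplified sketch below uses synthetic signals and a generic classifier; it does not reproduce the paper's actual pipeline or features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def synthetic_trace(barcode_id, length=500):
    """Fake nanopore current trace whose mean level encodes the barcode."""
    return rng.normal(loc=barcode_id, scale=0.5, size=length)

def features(trace):
    """Reduce a raw trace to a few summary features."""
    return [trace.mean(), trace.std(), np.percentile(trace, 90)]

# Build a labeled training set for, say, 20 hypothetical NTER barcodes.
n_barcodes, per_class = 20, 50
X = np.array([features(synthetic_trace(b))
              for b in range(n_barcodes) for _ in range(per_class)])
y = np.repeat(np.arange(n_barcodes), per_class)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Classify a new, unseen trace.
print("predicted barcode:", clf.predict([features(synthetic_trace(7))])[0])
```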

    “This is a fundamentally new interface between cells and computers,” explained Allen School research professor and corresponding author Jeff Nivala. “One analogy I like to make is that fluorescent protein reporters are like lighthouses, and NanoporeTERs are like messages in a bottle. Lighthouses are really useful for communicating a physical location, as you can literally see where the signal is coming from, but it’s hard to pack more information into that kind of signal. A message in a bottle, on the other hand, can pack a lot of information into a very small vessel, and you can send many of them off to another location to be read. You might lose sight of the precise physical location where the messages were sent, but for many applications that’s not going to be an issue.”

    In developing this new, more expressive vessel, Nivala and his colleagues eschewed time-consuming sample preparation and the need for other specialized laboratory equipment, minimizing both latency and cost. The NTERs scheme is also highly extensible. As a proof of concept, the team developed a library of more than 20 distinct tags; according to co-lead author Karen Zhang, the potential is significantly greater.

    Co-authors of the Nature Biotechnology paper (left to right): Karen Zhang, Nicolas Cardozo, Kathryn Doroschak and Jeff Nivala. Not pictured: Aerilynn Nguyen, Zoheb Siddiqui, Nicholas Bogard, Karin Strauss and Luis Ceze. Credit: Tara Brown Photography.

    “We are currently working to scale up the number of NTERs to hundreds; thousands; maybe even millions more,” Zhang, who graduated this year from the UW with bachelor’s degrees in biochemistry and microbiology, explained. “The more we have, the more things we can track. We’re particularly excited about the potential in single-cell proteomics, but this could also be a game-changer in terms of our ability to do multiplexed biosensing to diagnose disease and even target therapeutics to specific areas inside the body. And debugging complicated genetic circuit designs would become a whole lot easier and much less time consuming if we could measure the performance of all the components in parallel instead of by trial and error.”

    MISL researchers have made novel use of the ONT MinION device before. Allen School alumna Kathryn Doroschak (Ph.D., ‘21), one of the lead co-authors of this paper, was also involved in an earlier project in which she and her teammates developed a molecular tagging system to replace conventional inventory control methods. That system relied on barcodes comprising synthetic strands of DNA that could be decoded on demand using the portable ONT reader. This time, she and her colleagues went a step further in demonstrating how versatile such devices can be.

    “This is the first paper to show how a commercial nanopore sensor device can be repurposed for applications other than the DNA and RNA sequencing for which they were originally designed,” explained Doroschak. “This is exciting as a precursor for nanopore technology becoming more accessible and ubiquitous in the future. You can already plug a nanopore device into your cell phone; I could envision someday having a choice of ‘molecular apps’ that will be relatively inexpensive and widely available outside of traditional genomics.”

    Additional co-authors of the paper include research assistants Aerilynn Nguyen and Zoheb Siddiqui; former postdoc Nicholas Bogard; Allen School affiliate professor Karin Strauss, a senior principal research manager at Microsoft; and Allen School professor Luis Ceze.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    About the University of Washington Paul G. Allen College of Electrical and Computer Engineering (US)

    Mission, Facts, and Stats

    Our mission is to develop outstanding engineers and ideas that change the world.

    Faculty:
    275 faculty (25.2% women)
    Achievements:

    128 NSF Young Investigator/Early Career Awards since 1984
    32 Sloan Foundation Research Awards
    2 MacArthur Foundation Fellows (2007 and 2011)

    A national leader in educating engineers, each year the College turns out new discoveries, inventions and top-flight graduates, all contributing to the strength of our economy and the vitality of our community.

    Engineering innovation

    People: Innovation at UW ECE is exemplified by our outstanding faculty and by the exceptional group of students they advise and mentor. Students receive a robust education through a strong technical foundation, group project work and hands-on research opportunities. Our faculty work in dynamic research areas with diverse opportunities for projects and collaborations. Through their research, they address complex global challenges in health, energy, technology and the environment, and receive significant research and education grants.

    Impact: We continue to expand our innovation ecosystem by promoting an entrepreneurial mindset in our teaching and through diverse partnerships. The field of electrical and computer engineering is at the forefront of solving emerging societal challenges, empowered by innovative ideas from our community. As our department evolves, we are dedicated to expanding our faculty and student body to meet the growing demand for engineers. We welcomed six new faculty hires in the 2018-2019 academic year. Our meaningful connections and collaborations place the department as a leader in the field.

    Engineers drive the innovation economy and are vital to solving society’s most challenging problems. The College of Engineering is a key part of a world-class research university in a thriving hub of aerospace, biotechnology, global health and information technology innovation. Over 50% of UW startups in FY18 came from the College of Engineering.

    Commitment to diversity and access

    The College of Engineering is committed to developing and supporting a diverse student body and faculty that reflect and elevate the populations we serve. We are a national leader in women in engineering; 25.5% of our faculty are women compared to 17.4% nationally. We offer a robust set of diversity programs for students and faculty.
    Research and commercialization

    The University of Washington is an engine of economic growth, today ranked third in the nation for the number of startups launched each year, with 65 companies having been started in the last five years alone by UW students and faculty, or with technology developed here. The College of Engineering is a key contributor to these innovations, and engineering faculty, students or technology are behind half of all UW startups. In FY19, UW received $1.58 billion in total research awards from federal and nonfederal sources.


    The University of Washington (US) is one of the world’s preeminent public universities. Our impact on individuals, on our region, and on the world is profound — whether we are launching young people into a boundless future or confronting the grand challenges of our time through undaunted research and scholarship. Ranked number 10 in the world in Shanghai Jiao Tong University rankings and educating more than 54,000 students annually, our students and faculty work together to turn ideas into impact and in the process transform lives and our world.

    So what defines us —the students, faculty and community members at the University of Washington? Above all, it’s our belief in possibility and our unshakable optimism. It’s a connection to others, both near and far. It’s a hunger that pushes us to tackle challenges and pursue progress. It’s the conviction that together we can create a world of good. Join us on the journey.

    The University of Washington (US) is a public research university in Seattle, Washington, United States. Founded in 1861, University of Washington is one of the oldest universities on the West Coast; it was established in downtown Seattle approximately a decade after the city’s founding to aid its economic development. Today, the university’s 703-acre main Seattle campus is in the University District above the Montlake Cut, within the urban Puget Sound region of the Pacific Northwest. The university has additional campuses in Tacoma and Bothell. Overall, University of Washington encompasses over 500 buildings and over 20 million gross square feet of space, including one of the largest library systems in the world with more than 26 university libraries, as well as the UW Tower, lecture halls, art centers, museums, laboratories, stadiums, and conference centers. The university offers bachelor’s, master’s, and doctoral degrees through 140 departments in various colleges and schools, sees a total student enrollment of roughly 46,000 annually, and functions on a quarter system.

    University of Washington is a member of the Association of American Universities(US) and is classified among “R1: Doctoral Universities – Very high research activity”. According to the National Science Foundation(US), UW spent $1.41 billion on research and development in 2018, ranking it 5th in the nation. As the flagship institution of the six public universities in Washington state, it is known for its medical, engineering and scientific research as well as its highly competitive computer science and engineering programs. Additionally, University of Washington continues to benefit from its deep historic ties and major collaborations with numerous technology giants in the region, such as Amazon, Boeing, Nintendo, and particularly Microsoft. Paul G. Allen, Bill Gates and others spent significant time at Washington computer labs for a startup venture before founding Microsoft and other ventures. The University of Washington’s 22 varsity sports teams are also highly competitive, competing as the Huskies in the Pac-12 Conference of the NCAA Division I, representing the United States at the Olympic Games, and other major competitions.

    The university has been affiliated with many notable alumni and faculty, including 21 Nobel Prize laureates and numerous Pulitzer Prize winners, Fulbright Scholars, Rhodes Scholars and Marshall Scholars.

    In 1854, territorial governor Isaac Stevens recommended the establishment of a university in the Washington Territory. Prominent Seattle-area residents, including Methodist preacher Daniel Bagley, saw this as a chance to add to the city’s potential and prestige. Bagley learned of a law that allowed United States territories to sell land to raise money in support of public schools. At the time, Arthur A. Denny, one of the founders of Seattle and a member of the territorial legislature, aimed to increase the city’s importance by moving the territory’s capital from Olympia to Seattle. However, Bagley eventually convinced Denny that the establishment of a university would assist more in the development of Seattle’s economy. Two universities were initially chartered, but later the decision was repealed in favor of a single university in Lewis County provided that locally donated land was available. When no site emerged, Denny successfully petitioned the legislature to reconsider Seattle as a location in 1858.

    In 1861, scouting began for an appropriate 10-acre (4 ha) site in Seattle to serve as a new university campus. Arthur and Mary Denny donated eight acres, while fellow pioneers Edward Lander, and Charlie and Mary Terry, donated two acres on Denny’s Knoll in downtown Seattle. More specifically, this tract was bounded by 4th Avenue to the west, 6th Avenue to the east, Union Street to the north, and Seneca Street to the south.

    John Pike, for whom Pike Street is named, was the university’s architect and builder. It was opened on November 4, 1861, as the Territorial University of Washington. The legislature passed articles incorporating the University, and establishing its Board of Regents in 1862. The school initially struggled, closing three times: in 1863 for low enrollment, and again in 1867 and 1876 due to a shortage of funds. The university awarded its first graduate, Clara Antoinette McCarty Wilt, a bachelor’s degree in science in 1876.

    19th century relocation

    By the time Washington state entered the Union in 1889, both Seattle and the University had grown substantially. University of Washington’s total undergraduate enrollment increased from 30 to nearly 300 students, and the campus’s relative isolation in downtown Seattle faced encroaching development. A special legislative committee, headed by University of Washington graduate Edmond Meany, was created to find a new campus to better serve the growing student population and faculty. The committee eventually selected a site on the northeast of downtown Seattle called Union Bay, which was the land of the Duwamish, and the legislature appropriated funds for its purchase and construction. In 1895, the University relocated to the new campus by moving into the newly built Denny Hall. The University Regents tried and failed to sell the old campus, eventually settling with leasing the area. This would later become one of the University’s most valuable pieces of real estate in modern-day Seattle, generating millions in annual revenue with what is now called the Metropolitan Tract. The original Territorial University building was torn down in 1908, and its former site now houses the Fairmont Olympic Hotel.

    The sole-surviving remnants of Washington’s first building are four 24-foot (7.3 m), white, hand-fluted cedar, Ionic columns. They were salvaged by Edmond S. Meany, one of the University’s first graduates and former head of its history department. Meany and his colleague, Dean Herbert T. Condon, dubbed the columns as “Loyalty,” “Industry,” “Faith”, and “Efficiency”, or “LIFE.” The columns now stand in the Sylvan Grove Theater.

    20th century expansion

    Organizers of the 1909 Alaska-Yukon-Pacific Exposition eyed the still largely undeveloped campus as a prime setting for their world’s fair. They came to an agreement with Washington’s Board of Regents that allowed them to use the campus grounds for the exposition, surrounding today’s Drumheller Fountain facing towards Mount Rainier. In exchange, organizers agreed Washington would take over the campus and its development after the fair’s conclusion. This arrangement led to a detailed site plan and several new buildings, prepared in part by John Charles Olmsted. The plan was later incorporated into the overall University of Washington campus master plan, permanently affecting the campus layout.

    Both World Wars brought the military to campus, with certain facilities temporarily lent to the federal government. In spite of this, subsequent post-war periods were times of dramatic growth for the University. The period between the wars saw a significant expansion of the upper campus. Construction of the Liberal Arts Quadrangle, known to students as “The Quad,” began in 1916 and continued to 1939. The University’s architectural centerpiece, Suzzallo Library, was built in 1926 and expanded in 1935.

    After World War II, further growth came with the G.I. Bill. Among the most important developments of this period was the opening of the School of Medicine in 1946, which is now consistently ranked as the top medical school in the United States. It would eventually lead to the University of Washington Medical Center, ranked by U.S. News and World Report as one of the top ten hospitals in the nation.

    In 1942, all persons of Japanese ancestry in the Seattle area were forced into inland internment camps as part of Executive Order 9066 following the attack on Pearl Harbor. During this difficult time, university president Lee Paul Sieg took an active and sympathetic leadership role in advocating for and facilitating the transfer of Japanese American students to universities and colleges away from the Pacific Coast to help them avoid the mass incarceration. Nevertheless many Japanese American students and “soon-to-be” graduates were unable to transfer successfully in the short time window or receive diplomas before being incarcerated. It was only many years later that they would be recognized for their accomplishments during the University of Washington’s Long Journey Home ceremonial event that was held in May 2008.

    From 1958 to 1973, the University of Washington saw a tremendous growth in student enrollment, its faculties and operating budget, and also its prestige under the leadership of Charles Odegaard. University of Washington student enrollment had more than doubled to 34,000 as the baby boom generation came of age. However, this era was also marked by high levels of student activism, as was the case at many American universities. Much of the unrest focused around civil rights and opposition to the Vietnam War. In response to anti-Vietnam War protests by the late 1960s, the University Safety and Security Division became the University of Washington Police Department.

    Odegaard instituted a vision of building a “community of scholars”, convincing the Washington State legislature to increase investment in the University. Washington senators, such as Henry M. Jackson and Warren G. Magnuson, also used their political clout to gather research funds for the University of Washington. The results included an increase in the operating budget from $37 million in 1958 to over $400 million in 1973, solidifying University of Washington as a top recipient of federal research funds in the United States. The establishment of technology giants such as Microsoft, Boeing and Amazon in the local area also proved to be highly influential in the University of Washington’s fortunes, not only improving graduate prospects but also helping to attract millions of dollars in university and research funding through its distinguished faculty and extensive alumni network.

    21st century

    In 1990, the University of Washington opened its additional campuses in Bothell and Tacoma. Although originally intended for students who have already completed two years of higher education, both schools have since become four-year universities with the authority to grant degrees. The first freshman classes at these campuses started in fall 2006. Today both Bothell and Tacoma also offer a selection of master’s degree programs.

    In 2012, the University began exploring plans and governmental approval to expand the main Seattle campus, including significant increases in student housing, teaching facilities for the growing student body and faculty, as well as expanded public transit options. The University of Washington light rail station was completed in March 2015, connecting Seattle’s Capitol Hill neighborhood to the University of Washington Husky Stadium within five minutes of rail travel time. It offers a previously unavailable option of transportation into and out of the campus, designed specifically to reduce dependence on private vehicles, bicycles and local King County buses.

    University of Washington has been listed as a “Public Ivy” in Greene’s Guides since 2001, and is an elected member of the Association of American Universities. Among the faculty by 2012, there have been 151 members of American Association for the Advancement of Science, 68 members of the National Academy of Sciences(US), 67 members of the American Academy of Arts and Sciences, 53 members of the National Academy of Medicine(US), 29 winners of the Presidential Early Career Award for Scientists and Engineers, 21 members of the National Academy of Engineering(US), 15 Howard Hughes Medical Institute Investigators, 15 MacArthur Fellows, 9 winners of the Gairdner Foundation International Award, 5 winners of the National Medal of Science, 7 Nobel Prize laureates, 5 winners of Albert Lasker Award for Clinical Medical Research, 4 members of the American Philosophical Society, 2 winners of the National Book Award, 2 winners of the National Medal of Arts, 2 Pulitzer Prize winners, 1 winner of the Fields Medal, and 1 member of the National Academy of Public Administration. Among UW students by 2012, there were 136 Fulbright Scholars, 35 Rhodes Scholars, 7 Marshall Scholars and 4 Gates Cambridge Scholars. UW is recognized as a top producer of Fulbright Scholars, ranking 2nd in the US in 2017.

    The Academic Ranking of World Universities (ARWU) has consistently ranked University of Washington as one of the top 20 universities worldwide every year since its first release. In 2019, University of Washington ranked 14th worldwide out of 500 by the ARWU, 26th worldwide out of 981 in the Times Higher Education World University Rankings, and 28th worldwide out of 101 in the Times World Reputation Rankings. Meanwhile, QS World University Rankings ranked it 68th worldwide, out of over 900.

    U.S. News & World Report ranked University of Washington 8th out of nearly 1,500 universities worldwide for 2021, with University of Washington’s undergraduate program tied for 58th among 389 national universities in the U.S. and tied for 19th among 209 public universities.

    In 2019, it ranked 10th among the universities around the world by SCImago Institutions Rankings. In 2017, the Leiden Ranking, which focuses on science and the impact of scientific publications among the world’s 500 major universities, ranked University of Washington 12th globally and 5th in the U.S.

    In 2019, Kiplinger Magazine’s review of “top college values” named University of Washington 5th for in-state students and 10th for out-of-state students among U.S. public colleges, and 84th overall out of 500 schools. In the Washington Monthly National University Rankings University of Washington was ranked 15th domestically in 2018, based on its contribution to the public good as measured by social mobility, research, and promoting public service.

     
  • richardmitnick 9:43 pm on July 26, 2021
    Tags: "Midgard - a paradigm shift in data center technology", , Communications, Computing,   

    From Swiss Federal Institute of Technology in Lausanne [EPFL-École Polytechnique Fédérale de Lausanne] (CH): “Midgard – a paradigm shift in data center technology” 


    ©stock.adobe.com

    EPFL researchers have pioneered an innovative approach to implementing virtual memory in data centers, which will greatly increase server efficiency.

    As big data, used by everything from AI to the Internet of Things, increasingly dominates our modern lives, cloud computing has grown massively in importance. It relies heavily on the use of virtual memory with one data server running many services for many different customers all at the same time, using virtual memory to process these services and to keep each customer’s data secure from the others.

    However, the way this virtual memory is deployed dates back to the 1960s, and the fact that memory capacity keeps growing is actually beginning to slow things down. For example, data centers that provide services such as social networks or business analytics spend more than 20% of their processing time on virtual memory address translation and protection checks. That means that any gains made in this area will represent a huge benefit in efficiency.

    Midgard: saving energy in the cloud

    Now, researchers working with EPFL’s Ecocloud Center for Sustainable Cloud Computing have developed Midgard, a software-modeled prototype that demonstrates a proof of concept for greatly increasing server efficiency. Their research paper, Rebooting Virtual Memory with Midgard, has just been presented at ISCA ’21, the world’s flagship conference in computer architecture, and is the first of several steps toward demonstrating a fully working system.

    “Midgard is a technology that can allow for growing memory capacity, while continuing to guarantee the security of the data of each user in the cloud services,” explains Professor Babak Falsafi, Founding Director of the Ecocloud Center and one of the paper’s authors. “With Midgard, the all-important data lookups and protection checks are done directly in on-chip memory rather than virtual memory, removing so much of the traditional hierarchy of lookups and translations that it scores a net gain in efficiency, even as more memory is deployed,” he continued.
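
    To see where that "hierarchy of lookups" comes from, here is a toy model of conventional virtual-to-physical address translation: a multi-level page-table walk that must happen (or be cached in a TLB) for every memory access. This sketches the conventional scheme Midgard aims to streamline; it is not a model of Midgard itself.

```python
# Toy 4-level page-table walk for a 48-bit virtual address with 4 KiB pages.
# Each level consumes 9 bits of the address; the last 12 bits are the offset.

LEVELS = 4
BITS_PER_LEVEL = 9
PAGE_OFFSET_BITS = 12

def translate(virtual_addr, page_tables):
    """Walk the page-table tree; each level is one extra memory lookup."""
    table = page_tables          # root table (what a CR3-style register points to)
    for level in range(LEVELS):
        shift = PAGE_OFFSET_BITS + BITS_PER_LEVEL * (LEVELS - 1 - level)
        index = (virtual_addr >> shift) & ((1 << BITS_PER_LEVEL) - 1)
        table = table[index]     # in hardware: one memory access per level
    frame = table                # leaf entry holds the physical frame number
    return (frame << PAGE_OFFSET_BITS) | (virtual_addr & ((1 << PAGE_OFFSET_BITS) - 1))

# Build a tiny page-table tree mapping one virtual page to physical frame 0x42.
root = {}
vaddr = 0x0000_7f12_3456_7abc
node = root
for level in range(LEVELS):
    shift = PAGE_OFFSET_BITS + BITS_PER_LEVEL * (LEVELS - 1 - level)
    index = (vaddr >> shift) & ((1 << BITS_PER_LEVEL) - 1)
    node = node.setdefault(index, {} if level < LEVELS - 1 else 0x42)

print(hex(translate(vaddr, root)))   # four lookups just to resolve one address
```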

    In recent testing, the Midgard system lagged standard performance by 5% at low loads, but with 256 MB of aggregate cache it was able to outperform traditional systems in terms of virtual memory overheads.

    An outstanding feature of Midgard technology is that, while it does represent a paradigm shift, it is compatible with existing operating systems such as Windows, MacOS and Linux. Future work will address the wide spectrum of topics needed to realize Midgard in real systems, such as compatibility development, packaging strategies and maintenance plans.

    For more information about Midgard click here.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    EPFL campus

    The Swiss Federal Institute of Technology in Lausanne [EPFL-École polytechnique fédérale de Lausanne] (CH) is a research institute and university in Lausanne, Switzerland, that specializes in natural sciences and engineering. It is one of the two Swiss Federal Institutes of Technology, and it has three main missions: education, research and technology transfer.

    The QS World University Rankings ranks EPFL(CH) 14th in the world across all fields in their 2020/2021 ranking, whereas Times Higher Education World University Rankings ranks EPFL(CH) as the world’s 19th best school for Engineering and Technology in 2020.

    EPFL(CH) is located in the French-speaking part of Switzerland; the sister institution in the German-speaking part of Switzerland is the Swiss Federal Institute of Technology ETH Zürich [Eidgenössische Technische Hochschule Zürich)](CH) . Associated with several specialized research institutes, the two universities form the Domain of the Swiss Federal Institutes of Technology (ETH Domain) [ETH-Bereich; Domaine des Écoles polytechniques fédérales] (CH) which is directly dependent on the Federal Department of Economic Affairs, Education and Research. In connection with research and teaching activities, EPFL(CH) operates a nuclear reactor CROCUS; a Tokamak Fusion reactor; a Blue Gene/Q Supercomputer; and P3 bio-hazard facilities.

    The roots of modern-day EPFL(CH) can be traced back to the foundation of a private school under the name École spéciale de Lausanne in 1853 at the initiative of Louis Rivier, a graduate of the École Centrale Paris (FR), and John Gay, then professor and rector of the Académie de Lausanne. At its inception it had only 11 students and its offices were located at Rue du Valentin in Lausanne. In 1869, it became the technical department of the public Académie de Lausanne. When the Académie was reorganised and acquired the status of a university in 1890, the technical faculty changed its name to École d’ingénieurs de l’Université de Lausanne. In 1946, it was renamed the École polytechnique de l’Université de Lausanne (EPUL). In 1969, the EPUL was separated from the rest of the University of Lausanne and became a federal institute under its current name. EPFL(CH), like ETH Zürich(CH), is thus directly controlled by the Swiss federal government. In contrast, all other universities in Switzerland are controlled by their respective cantonal governments. Following the nomination of Patrick Aebischer as president in 2000, EPFL(CH) has started to develop into the field of life sciences. It absorbed the Swiss Institute for Experimental Cancer Research (ISREC) in 2008.

    In 1946, there were 360 students. In 1969, EPFL(CH) had 1,400 students and 55 professors. In the past two decades the university has grown rapidly and as of 2012 roughly 14,000 people study or work on campus, about 9,300 of these being Bachelor, Master or PhD students. The environment at modern day EPFL(CH) is highly international with the school attracting students and researchers from all over the world. More than 125 countries are represented on the campus and the university has two official languages, French and English.

    Organization

    EPFL is organised into eight schools, themselves formed of institutes that group research units (laboratories or chairs) around common themes:

    School of Basic Sciences (SB, Jan S. Hesthaven)

    Institute of Mathematics (MATH, Victor Panaretos)
    Institute of Chemical Sciences and Engineering (ISIC, Lyndon Emsley)
    Institute of Physics (IPHYS, Harald Brune)
    European Centre of Atomic and Molecular Computations (CECAM, Ignacio Pagonabarraga Mora)
    Bernoulli Center (CIB, Nicolas Monod)
    Biomedical Imaging Research Center (CIBM, Rolf Gruetter)
    Interdisciplinary Center for Electron Microscopy (CIME, Cécile Hébert)
    Max Planck-EPFL Centre for Molecular Nanosciences and Technology (CMNT, Thomas Rizzo)
    Swiss Plasma Center (SPC, Ambrogio Fasoli)
    Laboratory of Astrophysics (LASTRO, Jean-Paul Kneib)

    School of Engineering (STI, Ali Sayed)

    Institute of Electrical Engineering (IEL, Giovanni De Micheli)
    Institute of Mechanical Engineering (IGM, Thomas Gmür)
    Institute of Materials (IMX, Véronique Michaud)
    Institute of Microengineering (IMT, Olivier Martin)
    Institute of Bioengineering (IBI, Matthias Lütolf)

    School of Architecture, Civil and Environmental Engineering (ENAC, Claudia R. Binder)

    Institute of Architecture (IA, Luca Ortelli)
    Civil Engineering Institute (IIC, Eugen Brühwiler)
    Institute of Urban and Regional Sciences (INTER, Philippe Thalmann)
    Environmental Engineering Institute (IIE, David Andrew Barry)

    School of Computer and Communication Sciences (IC, James Larus)

    Algorithms & Theoretical Computer Science
    Artificial Intelligence & Machine Learning
    Computational Biology
    Computer Architecture & Integrated Systems
    Data Management & Information Retrieval
    Graphics & Vision
    Human-Computer Interaction
    Information & Communication Theory
    Networking
    Programming Languages & Formal Methods
    Security & Cryptography
    Signal & Image Processing
    Systems

    School of Life Sciences (SV, Gisou van der Goot)

    Bachelor-Master Teaching Section in Life Sciences and Technologies (SSV)
    Brain Mind Institute (BMI, Carmen Sandi)
    Institute of Bioengineering (IBI, Melody Swartz)
    Swiss Institute for Experimental Cancer Research (ISREC, Douglas Hanahan)
    Global Health Institute (GHI, Bruno Lemaitre)
    Ten Technology Platforms & Core Facilities (PTECH)
    Center for Phenogenomics (CPG)
    NCCR Synaptic Bases of Mental Diseases (NCCR-SYNAPSY)

    College of Management of Technology (CDM)

    Swiss Finance Institute at EPFL (CDM-SFI, Damir Filipovic)
    Section of Management of Technology and Entrepreneurship (CDM-PMTE, Daniel Kuhn)
    Institute of Technology and Public Policy (CDM-ITPP, Matthias Finger)
    Institute of Management of Technology and Entrepreneurship (CDM-MTEI, Ralf Seifert)
    Section of Financial Engineering (CDM-IF, Julien Hugonnier)

    College of Humanities (CDH, Thomas David)

    Human and social sciences teaching program (CDH-SHS, Thomas David)

    EPFL Middle East (EME, Dr. Franco Vigliotti)

    Section of Energy Management and Sustainability (MES, Prof. Maher Kayal)

    In addition to the eight schools there are seven closely related institutions

    Swiss Cancer Centre
    Center for Biomedical Imaging (CIBM)
    Centre for Advanced Modelling Science (CADMOS)
    École cantonale d’art de Lausanne (ECAL)
    Campus Biotech
    Wyss Center for Bio- and Neuro-engineering
    Swiss National Supercomputing Centre

     
  • richardmitnick 8:24 am on July 24, 2021 Permalink | Reply
    Tags: "20 Years Ago Steve Jobs Built the ‘Coolest Computer Ever.’ It Bombed", Apple Computer, , Computing, The Power Mac G4 Cube,   

    From WIRED: “20 Years Ago Steve Jobs Built the ‘Coolest Computer Ever.’ It Bombed”

    From WIRED

    07.24.2020 [Re-issued 7.24.21]
    Steven Levy

    Plus: An interview from the archives, the most-read story in WIRED history, and bottled-up screams.

    1
    The Power Mac G4 Cube, released in 2000 and discontinued in 2001, violated the wisdom of Jobs’ product plan. Photograph: Apple/Getty Images.

    The Plain View

    This month marks the 20th anniversary of the Power Mac G4 Cube, which debuted July 19, 2000. It also marks the 19th anniversary of Apple’s announcement that it was “putting the Cube on ice”. That’s not my joke, it’s Apple’s, straight from the headline of its July 3, 2001, press release that officially pulled the plug.

    The idea of such a quick turnaround was nowhere in the mind of Apple CEO Steve Jobs on the eve of the product’s announcement at that summer 2000 Macworld Expo. I was reminded of this last week, as I listened to a cassette tape recorded 20 years prior, almost to the day. It documented a two-hour session with Jobs in Cupertino, California, shortly before the launch. The main reason he had summoned me to Apple’s headquarters was sitting under the cover of a dark sheet of fabric on the long table in the boardroom of One Infinite Loop.

    “We have made the coolest computer ever,” he told me. “I guess I’ll just show it to you.”

    He yanked off the fabric, exposing an 8-inch stump of transparent plastic with a block of electronics suspended inside. It looked less like a computer than a toaster born from an immaculate conception between Philip K. Dick and Ludwig Mies van der Rohe. (But the fingerprints were, of course, Jony Ive’s.) Alongside it were two speakers encased in Christmas-ornament-sized, glasslike spheres.

    “The Cube,” Jobs said, in a stage whisper, hardly containing his excitement.

    He began by emphasizing that while the Cube was powerful, it was air-cooled. (Jobs hated fans. Hated them.) He demonstrated how it didn’t have a power switch, but could sense a wave of your hand to turn on the juice. He showed me how Apple had eliminated the tray that held CDs—with the Cube, you just hovered the disk over the slot and the machine inhaled it.

    And then he got to the plastics. It was as if Jobs had taken to heart that guy in The Graduate who gave career advice to Benjamin Braddock. “We are doing more with plastics than anyone else in the world,” he told me. “These are all specially formulated, and it’s all proprietary, just us. It took us six months just to formulate these plastics. They make bulletproof vests out of it! And it’s incredibly sturdy, and it’s just beautiful! There’s never been anything like that. How do you make something like that? Nobody ever made anything like that! Isn’t that beautiful? I think it’s stunning!”

    I admitted it was gorgeous. But I had a question for him. Earlier in the conversation, he had drawn Apple’s product matrix, four squares representing laptop and desktop, high and low end. Since returning to Apple in 1997, he had filled in all the quadrants with the iMac, Power Mac, iBook, and PowerBook. The Cube violated the wisdom of his product plan. It didn’t have the power features of the high-end Power Mac, like slots or huge storage. And it was way more expensive than the low-end iMac, even before you paid for the separate display that Cube owners needed. Knowing I was risking his ire, I asked him: Just who was going to buy this?

    Jobs didn’t miss a beat. “That’s easy!” he said. “A ton of people who are pros. Every designer is going to buy one.”

    Here was his justification for violating his matrix theory: “We realized there was an incredible opportunity to make something in the middle, sort of a love child, that was truly a breakthrough,” he said. The implicit message was that it was so great that people would alter their buying patterns to purchase one.

    That didn’t happen. For one thing, the price was prohibitive—by the time you bought the display, it was almost three times the price of an iMac and even more than some PowerMacs. By and large, people don’t spend their art budget on computers.

    That wasn’t the only issue with the G4 Cube. Those plastics were hard to manufacture, and people reported flaws. The air cooling had problems. If you left a sheet of paper on top of the device, it would shut down to prevent overheating. And because it had no On button, a stray wave of your hand would send the machine into action, like it or not.

    In any case, the G4 Cube failed to push buttons on the computer-buying public. Jobs told me it would sell millions. But Apple sold fewer than 150,000 units. The apotheosis of Apple design was also the apex of Apple hubris. Listening to the tape, I was struck by how much Jobs had been drunk on the elixir of aesthetics. “Do you really want to put a hole in this thing and put a button there?” Jobs asked me, justifying the lack of a power switch. “Look at the energy we put into this slot drive so you wouldn’t have a tray, and you want to ruin that and put a button in?”

    But here is something else about Jobs and the Cube that speaks not of failure but why he was a successful leader: Once it was clear that his Cube was a brick, he was quick to cut his losses and move on.

    In a 2017 talk at University of Oxford (UK), Apple CEO Tim Cook talked about the G4 Cube, which he described as “a spectacular commercial failure, from the first day, almost.” But Jobs’ reaction to the bad sales figures showed how quickly, when it became necessary, he could abandon even a product dear to his heart. “Steve, of everyone I’ve known in life,” Cook said at Oxford, “could be the most avid proponent of some position, and within minutes or days, if new information came out, you would think that he never ever thought that before.”

    But he did think it, and I have the tape to prove it. Happy birthday to Steve Jobs’ digital love child.
    ______________________________________________________________________________________________________________

    Time Travel

    My July 2000 Newsweek article about the Cube came with a sidebar of excerpts from my interview with Steve Jobs. Here are a few:

    Levy: Last January you dropped the “interim” from your CEO title. Has this had any impact?

    Jobs: No, even when I first came and wasn’t sure how long I’d be here, I made decisions for the long term. The reason I finally changed the title was that it was becoming sort of a joke. And I don’t want anything at Apple to become a joke.

    Levy: Rumors have recirculated about you becoming CEO of Disney. Is there anything about running a media giant that appeals to you?

    Jobs: I was thinking of giving you a witty answer, like “Isn’t that what I’m doing now?” But no, it doesn’t appeal to me at all. I’m a product person. I believe it’s possible to express your feelings and your caring about things from your products, whether that product is a computer system or Toy Story 2. It’s wonderful to make a pure expression of something and then make a million copies. Like the G4 Cube. There will be a million copies of this out there.

    Levy: The G4 Cube reminds a lot of people that your previous company, Next, also made a cube-shaped machine.

    Jobs: Yeah, we did one before. Cubes are very efficient spaces. What makes this one [special] for me is not the fact that it’s a cube but it’s like a brain in a beaker. It’s just hanging from this perfectly clear, pristine crystal enclosure. That’s what’s so drop-dead about it. It’s incredibly functional. The whole thing is perfect.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 11:17 am on June 16, 2021 Permalink | Reply
    Tags: "New software turns ‘mental handwriting’ into words on computer screens", , Computing,   

    From Stanford University Engineering: “New software turns ‘mental handwriting’ into words on computer screens”

    From Stanford University Engineering

    The new “mindwriting” technology enables a man with immobilized limbs to create text messages nearly as fast as people who use their thumbs to tap words onto smartphone keyboards.

    1
    New software and hardware tap into the brain to convert thoughts about handwriting into text on a computer screen. Credit: Aaron Burden/Unsplash

    Stanford University investigators have coupled artificial intelligence software with a device, called a brain-computer interface (BCI), implanted in the brain of a man with full-body paralysis.

    The software was able to decode information from the BCI to quickly convert the man’s thoughts about handwriting into text on a computer screen.

    The man was able to write using this approach more than twice as quickly as he could using a previous method developed by the Stanford researchers, who reported those findings in 2017 in the journal eLife.

    The new findings, published online May 12 in Nature, could spur further advances benefiting hundreds of thousands of Americans, and millions globally, who’ve lost the use of their upper limbs or their ability to speak due to spinal-cord injuries, strokes or amyotrophic lateral sclerosis, also known as Lou Gehrig’s disease, said Jaimie Henderson, professor of neurosurgery.

    “This approach allowed a person with paralysis to compose sentences at speeds nearly comparable to those of able-bodied adults of the same age typing on a smartphone,” said Henderson, the John and Jene Blume–Robert and Ruth Halperin Professor at Stanford Medicine. “The goal is to restore the ability to communicate by text.”

    The participant in the study produced text at a rate of about 18 words per minute. By comparison, able-bodied people of the same age can punch out about 23 words per minute on a smartphone.

    The participant, referred to as T5, lost practically all movement below the neck because of a spinal-cord injury in 2007. Nine years later, Henderson placed two brain-computer-interface chips, each the size of a baby aspirin, on the left side of T5’s brain. Each chip has 100 electrodes that pick up signals from neurons firing in the part of the motor cortex – a region of the brain’s outermost surface – that governs hand movement.

    Those neural signals are sent via wires to a computer, where artificial intelligence algorithms decode the signals and surmise T5’s intended hand and finger motion. The algorithms were designed in Stanford’s Neural Prosthetics Translational Laboratory, co-directed by Henderson and Krishna Shenoy, the Hong Seh and Vivian W. M. Lim Professor in the School of Engineering and professor of electrical engineering.

    Shenoy and Henderson, who have been collaborating on BCIs since 2005, are the senior co-authors of the new study. The lead author is Frank Willett, a research scientist in the Neural Prosthetics Translational Laboratory and with the Howard Hughes Medical Institute (HHMI) (US).

    “We’ve learned that the brain retains its ability to prescribe fine movements a full decade after the body has lost its ability to execute those movements,” Willett said. “And we’ve learned that complicated intended motions involving changing speeds and curved trajectories, like handwriting, can be interpreted more easily and more rapidly by the artificial intelligence algorithms we’re using than can simpler intended motions like moving a cursor in a straight path at a steady speed. Alphabetical letters are different from one another, so they’re easier to tell apart.”

    In the 2017 study, three participants with limb paralysis, including T5 – all with BCIs placed in the motor cortex – were asked to concentrate on using an arm and hand to move a cursor from one key to the next on a computer-screen keyboard display, then to focus on clicking on that key.

    In that study, T5 set what was until now the all-time record: copying displayed sentences at about 40 characters per minute. Another study participant was able to write extemporaneously, selecting whatever words she wanted, at 24.4 characters per minute.

    If the paradigm underlying the 2017 study was analogous to typing, the model for the new Nature study is analogous to handwriting. T5 concentrated on trying to write individual letters of the alphabet on an imaginary legal pad with an imaginary pen, despite his inability to move his arm or hand. He repeated each letter 10 times, permitting the software to “learn” to recognize the neural signals associated with his effort to write that particular letter.
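    The training idea described here, collecting repeated examples of each character and then matching new attempts against them, can be sketched in a few lines of Python. This is only a toy illustration on synthetic data: the study’s actual decoder used far more sophisticated machine-learning methods on real electrode recordings, and the 100-value “signal” below is an assumption made purely for the demo, echoing the 100 electrodes per chip mentioned above.

    import numpy as np

    # Toy demo only: synthetic "activity patterns" stand in for real recordings.
    rng = np.random.default_rng(0)
    letters = list("abcdefghijklmnopqrstuvwxyz")
    true_patterns = {c: rng.normal(size=100) for c in letters}   # 100 values per attempt (assumption)

    def noisy(pattern):
        """One simulated attempt at writing a letter: the pattern plus noise."""
        return pattern + rng.normal(scale=0.5, size=pattern.shape)

    # "Training": average 10 repetitions of each letter into a reference template.
    templates = {c: np.mean([noisy(p) for _ in range(10)], axis=0)
                 for c, p in true_patterns.items()}

    def decode(signal):
        """Classify a new attempt by the nearest template."""
        return min(templates, key=lambda c: np.linalg.norm(signal - templates[c]))

    print(decode(noisy(true_patterns["q"])))   # usually prints 'q'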

    In numerous multi-hour sessions that followed, T5 was presented with groups of sentences and instructed to make a mental effort to “handwrite” each one. No uppercase letters were employed. Examples of the sentences were “i interrupted, unable to keep silent,” and “within thirty seconds the army had landed.” Over time, the algorithms improved their ability to differentiate among the neural firing patterns typifying different characters. The algorithms’ interpretation of whatever letter T5 was attempting to write appeared on the computer screen after a roughly half-second delay.

    In further sessions, T5 was instructed to copy sentences the algorithms had never been exposed to. He was eventually able to generate 90 characters, or about 18 words, per minute. Later, asked to give his answers to open-ended questions, which required some pauses for thought, he generated 73.8 characters (close to 15 words, on average) per minute, tripling the previous free-composition record set in the 2017 study.

    T5’s sentence-copying error rate was about one mistake in every 18 or 19 attempted characters. His free-composition error rate was about one in every 11 or 12 characters. When the researchers used an after-the-fact autocorrect function – similar to the ones incorporated into our smartphone keyboards – to clean things up, those error rates were markedly lower: below 1% for copying, and just over 2% for freestyle.

    These error rates are quite low compared with other BCIs, said Shenoy, who is also a Howard Hughes Medical Institute investigator.

    “While handwriting can approach 20 words per minute, we tend to speak around 125 words per minute, and this is another exciting direction that complements handwriting. If combined, these systems could together offer even more options for patients to communicate effectively,” Shenoy said.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Stanford Engineering has been at the forefront of innovation for nearly a century, creating pivotal technologies that have transformed the worlds of information technology, communications, health care, energy, business and beyond.

    The school’s faculty, students and alumni have established thousands of companies and laid the technological and business foundations for Silicon Valley. Today, the school educates leaders who will make an impact on global problems and seeks to define what the future of engineering will look like.
    Mission

    Our mission is to seek solutions to important global problems and educate leaders who will make the world a better place by using the power of engineering principles, techniques and systems. We believe it is essential to educate engineers who possess not only deep technical excellence, but the creativity, cultural awareness and entrepreneurial skills that come from exposure to the liberal arts, business, medicine and other disciplines that are an integral part of the Stanford experience.

    Our key goals are to:

    Conduct curiosity-driven and problem-driven research that generates new knowledge and produces discoveries that provide the foundations for future engineered systems
    Deliver world-class, research-based education to students and broad-based training to leaders in academia, industry and society
    Drive technology transfer to Silicon Valley and beyond with deeply and broadly educated people and transformative ideas that will improve our society and our world.

    The Future of Engineering

    The engineering school of the future will look very different from what it looks like today. So, in 2015, we brought together a wide range of stakeholders, including mid-career faculty, students and staff, to address two fundamental questions: In what areas can the School of Engineering make significant world‐changing impact, and how should the school be configured to address the major opportunities and challenges of the future?

    One key output of the process is a set of 10 broad, aspirational questions on areas where the School of Engineering would like to have an impact in 20 years. The committee also returned with a series of recommendations that outlined actions across three key areas — research, education and culture — where the school can deploy resources and create the conditions for Stanford Engineering to have significant impact on those challenges.

    Stanford University

    Leland and Jane Stanford founded the University to “promote the public welfare by exercising an influence on behalf of humanity and civilization.” Stanford opened its doors in 1891, and more than a century later, it remains dedicated to finding solutions to the great challenges of the day and to preparing our students for leadership in today’s complex world. Stanford is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto. Since 1952, more than 54 Stanford faculty, staff, and alumni have won the Nobel Prize, including 19 current faculty members.

    Stanford University Seal

     
  • richardmitnick 3:52 pm on May 13, 2021 Permalink | Reply
    Tags: "Harnessing the hum of fluorescent lights for more efficient computing", A team led by University of Michigan researchers has developed a material that’s at least twice as “magnetostrictive” and far less costly than other materials in its class., , Computing, Magnetoelectric chips could make everything from massive data centers to cell phones far more energy efficient., Magnetoelectric devices use magnetic fields instead of electricity to store the digital ones and zeros of binary data.   

    From University of Michigan: “Harnessing the hum of fluorescent lights for more efficient computing”

    U Michigan bloc

    From University of Michigan

    May 12, 2021

    Contacts:
    Gabe Cherry
    gcherry@umich.edu,

    Nicole Casal Moore
    ncmoore@umich.edu

    1
    The property that makes fluorescent lights buzz could power a new generation of more efficient computing devices that store data with magnetic fields, rather than electricity.

    A team led by University of Michigan researchers has developed a material that’s at least twice as “magnetostrictive” and far less costly than other materials in its class. In addition to computing, it could also lead to better magnetic sensors for medical and security devices.

    Magnetostriction, which causes the buzz of fluorescent lights and electrical transformers, occurs when a material’s shape and magnetic field are linked—that is, a change in shape causes a change in magnetic field. The property could be key to a new generation of computing devices called magnetoelectrics.

    Magnetoelectric chips could make everything from massive data centers to cell phones far more energy efficient, slashing the electricity requirements of the world’s computing infrastructure.

    Made of a combination of iron and gallium, the material is detailed in a paper published May 12 in Nature Communications. The team is led by U-M materials science and engineering professor John Heron and includes researchers from Intel; Cornell University (US); University of California-Berkeley (US); University of Wisconsin (US); Purdue University (US) and elsewhere.

    Magnetoelectric devices use magnetic fields instead of electricity to store the digital ones and zeros of binary data. Tiny pulses of electricity cause them to expand or contract slightly, flipping their magnetic field from positive to negative or vice versa. Because they don’t require a steady stream of electricity, as today’s chips do, they use a fraction of the energy.

    “A key to making magnetoelectric devices work is finding materials whose electrical and magnetic properties are linked,” Heron said. “And more magnetostriction means that a chip can do the same job with less energy.”

    Cheaper magnetoelectric devices with a tenfold improvement

    Most of today’s magnetostrictive materials use rare-earth elements, which are too scarce and costly to be used in the quantities needed for computing devices. But Heron’s team has found a way to coax high levels of magnetostriction from inexpensive iron and gallium.

    Ordinarily, explains Heron, the magnetostriction of iron-gallium alloy increases as more gallium is added. But those increases level off and eventually begin to fall as the higher amounts of gallium begin to form an ordered atomic structure.

    So the research team used a process called low-temperature molecular-beam epitaxy to essentially freeze atoms in place, preventing them from forming an ordered structure as more gallium was added. This way, Heron and his team were able to double the amount of gallium in the material, netting a tenfold increase in magnetostriction compared to unmodified iron-gallium alloys.

    “Low-temperature molecular-beam epitaxy is an extremely useful technique—it’s a little bit like spray painting with individual atoms,” Heron said. “And ‘spray painting’ the material onto a surface that deforms slightly when a voltage is applied also made it easy to test its magnetostrictive properties.”

    Researchers are working with Intel’s MESO program

    The magnetoelectric devices made in the study are several microns in size—large by computing standards. But the researchers are working with Intel to find ways to shrink them to a more useful size that will be compatible with the company’s magnetoelectric spin-orbit device (or MESO) program, one goal of which is to push magnetoelectric devices into the mainstream.

    “Intel is great at scaling things and at the nuts and bolts of making a technology actually work at the super-small scale of a computer chip,” Heron said. “They’re very invested in this project and we’re meeting with them regularly to get feedback and ideas on how to ramp up this technology to make it useful in the computer chips that they call MESO.”

    While a device that uses the material is likely decades away, Heron’s lab has filed for patent protection through the U-M Office of Technology Transfer.

    The research is supported by IMRA America and the National Science Foundation (grant numbers NNCI-1542081, EEC-1160504, DMR-1719875 and DMR-1539918).

    Other researchers on the paper include U-M associate professor of materials science and engineering Emmanouil Kioupakis; U-M assistant professor of materials science and engineering Robert Hovden; and U-M graduate student research assistants Peter Meisenheimer and Suk Hyun Sung.

    See the full article here.



    Please support STEM education in your local school system

    Stem Education Coalition

    U Michigan Campus

    The University of Michigan (U-M, UM, UMich, or U of M), frequently referred to simply as Michigan, is a public research university located in Ann Arbor, Michigan, United States. Originally founded in 1817 in Detroit as the Catholepistemiad, or University of Michigania, 20 years before the Michigan Territory officially became a state, the University of Michigan is the state’s oldest university. The university moved to Ann Arbor in 1837 onto 40 acres (16 ha) of what is now known as Central Campus. Since its establishment in Ann Arbor, the university campus has expanded to include more than 584 major buildings with a combined area of more than 34 million gross square feet (781 acres or 3.16 km²), and has two satellite campuses located in Flint and Dearborn. The University was one of the founding members of the Association of American Universities (US).

    Considered one of the foremost research universities in the United States, the university has very high research activity and its comprehensive graduate program offers doctoral degrees in the humanities, social sciences, and STEM fields (Science, Technology, Engineering and Mathematics) as well as professional degrees in business, medicine, law, pharmacy, nursing, social work and dentistry. Michigan’s body of living alumni (as of 2012) comprises more than 500,000. Besides academic life, Michigan’s athletic teams compete in Division I of the NCAA and are collectively known as the Wolverines. They are members of the Big Ten Conference.

     
  • richardmitnick 7:31 am on January 22, 2021 Permalink | Reply
    Tags: "Stanford researchers combine processors and memory on multiple hybrid chips to run AI on battery-powered smart devices", , , “Memory wall”, Computing, RRAM, , , The "Illusion System"   

    From Stanford University: “Stanford researchers combine processors and memory on multiple hybrid chips to run AI on battery-powered smart devices” 

    Stanford University Name
    From Stanford University

    Stanford University Engineering

    January 11, 2021
    Tom Abate
    Stanford Engineering
    tabate@stanford.edu

    In traditional electronics, separate chips process and store data, wasting energy as they toss data back and forth over what engineers call a “memory wall.” New algorithms combine several energy-efficient hybrid chips to create the illusion of one mega–AI chip.

    1
    Hardware and software innovations give eight chips the illusion that they’re one mega-chip working together to run AI. Credit: Stocksy / Drea Sullivan.

    Smartwatches and other battery-powered electronics would be even smarter if they could run AI algorithms. But efforts to build AI-capable chips for mobile devices have so far hit a wall – the so-called “memory wall” that separates data processing and memory chips that must work together to meet the massive and continually growing computational demands imposed by AI.

    “Transactions between processors and memory can consume 95 percent of the energy needed to do machine learning and AI, and that severely limits battery life,” said computer scientist Subhasish Mitra, senior author of a new study published in Nature Electronics.

    Now, a team that includes Stanford computer scientist Mary Wootters and electrical engineer H.-S. Philip Wong has designed a system that can run AI tasks faster, and with less energy, by harnessing eight hybrid chips, each with its own data processor built right next to its own memory storage.

    This paper builds on the team’s prior development of a new memory technology, called RRAM, that stores data even when power is switched off – like flash memory – only faster and more energy efficiently. Their RRAM advance enabled the Stanford researchers to develop an earlier generation of hybrid chips that worked alone. Their latest design incorporates a critical new element: algorithms that meld the eight separate hybrid chips into one energy-efficient AI-processing engine.

    “If we could have built one massive, conventional chip with all the processing and memory needed, we’d have done so, but the amount of data it takes to solve AI problems makes that a dream,” Mitra said. “Instead, we trick the hybrids into thinking they’re one chip, which is why we call this the Illusion System.”

    The researchers developed Illusion as part of the Electronics Resurgence Initiative (ERI), a $1.5 billion program sponsored by the Defense Advanced Research Projects Agency. DARPA, which helped spawn the internet more than 50 years ago, is supporting research investigating workarounds to Moore’s Law, which has driven electronic advances by shrinking transistors. But transistors can’t keep shrinking forever.

    “To surpass the limits of conventional electronics, we’ll need new hardware technologies and new ideas about how to use them,” Wootters said.

    The Stanford-led team built and tested its prototype with help from collaborators at the French research institute CEA-Leti and at Nanyang Technological University in Singapore. The team’s eight-chip system is just the beginning. In simulations, the researchers showed how systems with 64 hybrid chips could run AI applications seven times faster than current processors, using one-seventh as much energy.

    Such capabilities could one day enable Illusion Systems to become the brains of augmented and virtual reality glasses that would use deep neural networks to learn by spotting objects and people in the environment, and provide wearers with contextual information – imagine an AR/VR system to help birdwatchers identify unknown specimens.

    Stanford graduate student Robert Radway, who is first author of the Nature Electronics study, said the team also developed new algorithms to recompile existing AI programs, written for today’s processors, to run on the new multi-chip systems. Collaborators from Facebook helped the team test AI programs that validated their efforts. Next steps include increasing the processing and memory capabilities of individual hybrid chips and demonstrating how to mass produce them cheaply.

    “The fact that our fabricated prototype is working as we expected suggests we’re on the right track,” said Wong, who believes Illusion Systems could be ready for marketability within three to five years.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Stanford University campus. No image credit

    Stanford University

    Leland and Jane Stanford founded the University to “promote the public welfare by exercising an influence on behalf of humanity and civilization.” Stanford opened its doors in 1891, and more than a century later, it remains dedicated to finding solutions to the great challenges of the day and to preparing our students for leadership in today’s complex world. Stanford is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto. Since 1952, more than 54 Stanford faculty, staff, and alumni have won the Nobel Prize, including 19 current faculty members.

    Stanford University Seal

     
  • richardmitnick 11:56 am on December 11, 2020 Permalink | Reply
    Tags: "How the Slowest Computer Programs Illuminate Math’s Fundamental Limits", , BusyBeaverology, Computing, , The search for long-running computer programs can illuminate the state of mathematical knowledge and even tell us what’s knowable.   

    From Quanta Magazine: “How the Slowest Computer Programs Illuminate Math’s Fundamental Limits” 

    From Quanta Magazine

    December 10, 2020
    John Pavlus

    1
    A visualization of the longest-running five-rule Turing machine currently known. Each column of pixels represents one step in the computation, moving from left to right. Black squares show where the machine has printed a 1. The far right column shows the state of the computation when the Turing machine halts. Credit: Quanta Magazine/Peter Krumins.

    Programmers normally want to minimize the time their code takes to execute. But in 1962, the Hungarian mathematician Tibor Radó posed the opposite problem. He asked: How long can a simple computer program possibly run before it terminates? Radó nicknamed these maximally inefficient but still functional programs “busy beavers.”

    Finding these programs has been a fiendishly diverting puzzle for programmers and other mathematical hobbyists ever since it was popularized in Scientific American’s “Computer Recreations” column in 1984. But in the last several years, the busy beaver game, as it’s known, has become an object of study in its own right, because it has yielded connections to some of the loftiest concepts and open problems in mathematics.

    “In math, there is a very permeable boundary between what’s an amusing recreation and what is actually important,” said Scott Aaronson, a theoretical computer scientist at the University of Texas, Austin who recently published a survey of progress in “BusyBeaverology.”

    The recent work suggests that the search for long-running computer programs can illuminate the state of mathematical knowledge, and even tell us what’s knowable. According to researchers, the busy beaver game provides a concrete benchmark for evaluating the difficulty of certain problems, such as the unsolved Goldbach conjecture and Riemann hypothesis. It even offers a glimpse of where the logical bedrock underlying math breaks down. The logician Kurt Gödel proved the existence of such mathematical terra incognita nearly a century ago. But the busy beaver game can show where it actually lies on a number line, like an ancient map depicting the edge of the world.

    An Uncomputable Computer Game

    The busy beaver game is all about the behavior of Turing machines — the primitive, idealized computers conceived by Alan Turing in 1936. A Turing machine performs actions on an endless strip of tape divided into squares. It does so according to a list of rules. The first rule might say:

    “If the square contains a 0, replace it with a 1, move one square to the right and consult rule 2. If the square contains a 1, leave the 1, move one square to the left and consult rule 3.”

    Each rule has this forking choose-your-own-adventure style. Some rules say to jump back to previous rules; eventually there’s a rule containing an instruction to “halt.” Turing proved that this simple kind of computer is capable of performing any possible calculation, given the right instructions and enough time.

    As Turing noted in 1936, in order to compute something, a Turing machine must eventually halt — it can’t get trapped in an infinite loop. But he also proved that there’s no reliable, repeatable method for distinguishing machines that halt from machines that simply run forever — a fact known as the halting problem.

    The busy beaver game asks: Given a certain number of rules, what’s the maximum number of steps that a Turing machine can take before halting?

    For instance, if you’re only allowed one rule, and you want to ensure that the Turing machine halts, you’re forced to include the halt instruction right away. The busy beaver number of a one-rule machine, or BB(1), is therefore 1.

    But adding just a few more rules instantly blows up the number of machines to consider. Of 6,561 possible machines with two rules, the one that runs the longest — six steps — before halting is the busy beaver. But some others simply run forever. None of these are the busy beaver, but how do you definitively rule them out? Turing proved that there’s no way to automatically tell whether a machine that runs for a thousand or a million steps won’t eventually terminate.
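    The rule format above translates almost directly into code. Below is a minimal Turing machine simulator in Python, offered as an illustrative sketch rather than any of the researchers’ actual software. The sample rule table is the commonly cited two-rule champion; running it reproduces the six-step record mentioned above.

    # Rules map (state, symbol read) -> (symbol to write, move, next state).
    def run(rules, max_steps=10**6):
        tape, head, state, steps = {}, 0, 'A', 0
        while state != 'HALT' and steps < max_steps:
            symbol = tape.get(head, 0)                  # blank squares read as 0
            write, move, state = rules[(state, symbol)]
            tape[head] = write
            head += 1 if move == 'R' else -1
            steps += 1
        return steps, sum(tape.values())

    # The commonly cited two-rule champion: halts after 6 steps, leaving four 1s.
    two_rule_champion = {
        ('A', 0): (1, 'R', 'B'), ('A', 1): (1, 'L', 'B'),
        ('B', 0): (1, 'L', 'A'), ('B', 1): (1, 'R', 'HALT'),
    }
    print(run(two_rule_champion))   # prints (6, 4), matching BB(2) = 6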

    That’s why finding busy beavers is so hard. There’s no general approach for identifying the longest-running Turing machines with an arbitrary number of instructions; you have to puzzle out the specifics of each case on its own. In other words, the busy beaver game is, in general, “uncomputable.”

    Proving that BB(2) = 6 and that BB(3) = 107 was difficult enough that Radó’s student Shen Lin earned a doctorate for the work in 1965. Radó considered BB(4) “entirely hopeless,” but the case was finally solved in 1983. Beyond that, the values virtually explode; researchers have identified a five-rule Turing machine, for instance, that runs for 47,176,870 steps before stopping, so BB(5) is at least that big. BB(6) is at least 7.4 × 10^36,534. Proving the exact values “will need new ideas and new insights, if it can be done at all,” said Aaronson.

    Threshold of Unknowability

    William Gasarch, a computer scientist at the University of Maryland, College Park, said he’s less intrigued by the prospect of pinning down busy beaver numbers than by “the general concept that it’s actually uncomputable.” He and other mathematicians are mainly interested in using the game as a yardstick for gauging the difficulty of important open problems in mathematics — or for figuring out what is mathematically knowable at all.

    The Goldbach conjecture, for instance, asks whether every even integer greater than 2 is the sum of two primes. Proving the conjecture true or false would be an epochal event in number theory, allowing mathematicians to better understand the distribution of prime numbers. In 2015, an anonymous GitHub user named Code Golf Addict published code for a 27-rule Turing machine that halts if — and only if — the Goldbach conjecture is false. It works by counting upward through all even integers greater than 4; for each one, it grinds through all the possible ways to get that integer by adding two others, checking whether the pair is prime. When it finds a suitable pair of primes, it moves up to the next even integer and repeats the process. If it finds an even integer that can’t be summed by a pair of prime numbers, it halts.
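    The machine’s search strategy is easier to see in ordinary code than in Turing machine rules. The sketch below mirrors that strategy in Python; it is not the 27-rule machine itself, and the limit argument is an artificial cutoff added so the demonstration terminates.

    # A plain-Python mirror of the machine's strategy, not the machine itself.
    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def goldbach_counterexample(limit):
        """Return the first even integer that is not a sum of two primes,
        checking everything up to `limit`, or None if no counterexample appears."""
        for n in range(4, limit + 1, 2):
            if not any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1)):
                return n          # the analogue of the Turing machine halting
        return None               # kept running to the cutoff without halting

    print(goldbach_counterexample(10_000))   # prints None; no counterexample is known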

    Running this mindless machine isn’t a practical way to solve the conjecture, because we can’t know if it will ever halt until it does. But the busy beaver game sheds some light on the problem. If it were possible to compute BB(27), that would provide a ceiling on how long we’d have to wait for the Goldbach conjecture to be settled automatically. That’s because BB(27) corresponds to the maximum number of steps this 27-rule Turing machine would have to execute in order to halt (if it ever did). If we knew that number, we could run the Turing machine for exactly that many steps. If it halted by that point, we’d know the Goldbach conjecture was false. But if it went that many steps and didn’t halt, we’d know for certain that it never would — thus proving the conjecture true.

    The rub is that BB(27) is such an incomprehensibly huge number that even writing it down, much less running the Goldbach-falsifying machine for that many steps, isn’t remotely possible in our physical universe. Nevertheless, that incomprehensibly huge number is still an exact figure whose magnitude, according to Aaronson, represents “a statement about our current knowledge” of number theory.

    In 2016, Aaronson established a similar result in collaboration with Yuri Matiyasevich and Stefan O’Rear. They identified a 744-rule Turing machine that halts if and only if the Riemann hypothesis is false. The Riemann hypothesis also concerns the distribution of prime numbers and is one of the Clay Mathematics Institute’s “Millennium Problems” worth $1 million. Aaronson’s machine will deliver an automatic solution in BB(744) steps. (It works by essentially the same mindless process as the Goldbach machine, iterating upward until it finds a counterexample.)

    Of course, BB(744) is an even more unattainably large number than BB(27). But working to pin down something easier, like BB(5), “may actually turn up some new number theory questions that are interesting in their own right,” Aaronson said. For instance, the mathematician Pascal Michel proved in 1993 that the record-holding five-rule Turing machine exhibits behavior similar to that of the function described in the Collatz conjecture, another famous open problem in number theory.

    “So much of math can be encoded as a question of, ‘Does this Turing machine halt or not?’” Aaronson said. “If you knew all the busy beaver numbers, then you could settle all of those questions.”

    More recently, Aaronson has used a busy-beaver-derived yardstick to gauge what he calls “the threshold of unknowability” for entire systems of mathematics. Gödel’s famous incompleteness theorems of 1931 proved that any set of basic axioms that could serve as a possible logical foundation for mathematics is doomed to one of two fates: Either the axioms will be inconsistent, leading to contradictions (like proving that 0 = 1), or they’ll be incomplete, unable to prove some true statements about numbers (like the fact that 2 + 2 = 4). The axiomatic system underpinning almost all modern math, known as Zermelo-Fraenkel (ZF) set theory, has its own Gödelian boundaries — and Aaronson wanted to use the busy beaver game to establish where they are.

    In 2016, he and his graduate student Adam Yedidia specified a 7,910-rule Turing machine that would only halt if ZF set theory is inconsistent. This means BB(7,910) is a calculation that eludes the axioms of ZF set theory. Those axioms can’t be used to prove that BB(7,910) represents one number instead of another, which is like not being able to prove that 2 + 2 = 4 instead of 5.

    O’Rear subsequently devised a much simpler 748-rule machine that halts if ZF is inconsistent — essentially moving the threshold of unknowability closer, from BB(7,910) to BB(748). “That is a kind of a dramatic thing, that the number [of rules] is not completely ridiculous,” said Harvey Friedman, a mathematical logician and emeritus professor at Ohio State University. Friedman thinks that the number can be brought down even further: “I think maybe 50 is the right answer.” Aaronson suspects that the true threshold may be as close as BB(20).

    Whether near or far, such thresholds of unknowability definitely exist. “This is the vision of the world that we have had since Gödel,” said Aaronson. “The busy beaver function is another way of making it concrete.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 12:21 pm on November 20, 2020 Permalink | Reply
    Tags: "A robot that tells growers when to water crops is on the way", , , Computing,   

    From UC Riverside: “A robot that tells growers when to water crops is on the way” 

    UC Riverside bloc

    From UC Riverside

    November 19, 2020
    Holly Ober

    1
    Researchers are creating an autonomous mobile robot to sample leaves and measure their water potential.
    The base robot for the new plant-moisture-measuring system researchers are developing will navigate rows of crops to reach individual leaves and stems.

    Every backyard gardener knows how hard it can be to tell when to water the plants. Multiply that by tens or hundreds of acres and it’s easy to see the challenges growers face keeping their crops healthy while managing water resources wisely.

    To determine water needs accurately, growers hand-pluck individual leaves from plants, put them in pressure chambers, and apply air pressure to see when water begins to leak from the leaf stems. That kind of testing is time consuming and means growers can only reach so many areas of a field each day and cannot test as frequently as needed to accurately determine optimal irrigation scheduling patterns.

    A group of researchers from UC Riverside and UC Merced have received a grant for more than $1 million from the U.S. Department of Agriculture through the National Science Foundation’s National Robotics Initiative to address these challenges. From UC Riverside are Assistant Professor Konstantinos Karydis and Professor Amit K. Roy-Chowdhury, both from the Department of Electrical and Computer Engineering. UC Merced, which leads the effort, is represented by Stefano Carpin, professor of computer science; and Joshua Viers, professor of environmental engineering.

    As part of the project, the group is developing a robotic pressure chamber that can autonomously sample leaves and immediately test them on site to provide the freshest data. The system will work to gather data even in large fields, and over a period of time, rather than just providing a snapshot.

    Frequently updated data can help growers better plan irrigation schedules to conserve water, optimize the time and effort spent by crop specialists tasked with determining and analyzing leaf water potential, and help decrease some of the costs in the food-production chain.

    Current measuring techniques involve collecting leaf samples and transporting them to an off-site location, where testers can use very accurate, expensive pressure chambers; or sampling and analyzing leaf samples in the field using hand-held pressure chambers.

    “In the first category, leaf samples can get mixed up, making it impossible to track them back to the specific areas of the field they came from,” Karydis said. “In addition, the properties of the leaf might vary given the time elapsed between being sampled and being analyzed, which in turn may yield misleading results.”

    Hand-held instruments in the field can be less accurate, but testing can be done multiple times with different leaves from the same plants. This method is time- and labor-intensive, and must be undertaken by specially trained personnel.

    Carpin has already worked with colleagues at UC Davis and UC Berkeley to create the Robot-Assisted Precision Irrigation Delivery, or RAPID, system, which travels along rows of crops adjusting irrigation flows according to sensor data that tells the robot precisely what’s needed for each plant.

    The project will use the same mobile base robot as in RAPID but equip it with a custom-made robotic leaf sampler and pressure chamber being designed by the researchers at UC Riverside, and pair it with drones that can survey the fields and direct the robot to areas of interest.

    “Using this process, growers could survey plants all day long, even in large fields,” Carpin said.

    The four-year project will support graduate students as well as summer research opportunities for undergraduates. The project has four phases: development of the chamber; developing machine vision so the robot can “see” the water coming from the leaf stems; coordinating multiple robots — in the air and on the ground; and evaluation.

    The researchers plan to have the first set of automated pressure chamber prototypes fabricated by spring 2021, and to evaluate their performance and refine designs in controlled settings over spring and summer 2021. They expect to have a completed setup by winter 2022, so they can begin controlled field testing.

    “We have to be quick about it because if we miss a peak growing season, we have to wait another nine months for the next one,” Carpin said. “We’d like to be able to start testing next summer and test every summer, and we need to be able to maximize the tests.”

    When all of the components have been designed, the designs and code will be made open source, and all the data collected during the project will be made available to the scientific community, the researchers wrote in their proposal.

    The project came about after Carpin and Viers, director of the Center for Information Technology Research in the Interest of Society, or CITRIS, at UC Merced, had been talking with area farmers about the challenges of growing almonds and grapes. Karydis and Roy-Chowdhury had been hearing the same challenges from citrus and avocado growers in the Riverside area, so the four partnered up.

    “California agriculture presents a challenge in terms of scalability,” Carpin said. “But this is an exciting collaboration because we’ll get to develop a system that will work on different kinds of crops.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    UC Riverside Campus

    The University of California, Riverside is one of 10 universities within the prestigious University of California system, and the only UC located in Inland Southern California.

    Widely recognized as one of the most ethnically diverse research universities in the nation, UCR’s current enrollment is more than 21,000 students, with a goal of 25,000 students by 2020. The campus is in the midst of a tremendous growth spurt with new and remodeled facilities coming on-line on a regular basis.

    We are located approximately 50 miles east of downtown Los Angeles. UCR is also within easy driving distance of dozens of major cultural and recreational sites, as well as desert, mountain and coastal destinations.

     
  • richardmitnick 7:43 am on August 24, 2020 Permalink | Reply
    Tags: "Computer Search Settles 90-Year-Old Math Problem", , As of this past fall the question remained unresolved only for seven-dimensional space., Computing, In 1940 Oskar Perron proved that the conjecture is true for spaces in dimensions one through six., Keller’s conjecture, , Posed 90 years ago by Ott-Heinrich Keller is a problem about covering spaces with identical tiles., , The answer comes packaged with a long proof explaining why it’s right., The authors of the new work (cited) solved the problem using 40 computers., The Mysterious Seventh Dimension   

    From Quanta Magazine: “Computer Search Settles 90-Year-Old Math Problem” 

    From Quanta Magazine

    August 19, 2020
    Kevin Hartnett

    By translating Keller’s conjecture into a computer-friendly search for a type of graph, researchers have finally resolved a problem about covering spaces with tiles.

    1
    Olena Shmahalo/Quanta Magazine.

    A team of mathematicians has finally finished off Keller’s conjecture, but not by working it out themselves. Instead, they taught a fleet of computers to do it for them.

    Keller’s conjecture, posed 90 years ago by Ott-Heinrich Keller, is a problem about covering spaces with identical tiles. It asserts that if you cover a two-dimensional space with two-dimensional square tiles, at least two of the tiles must share an edge. It makes the same prediction for spaces of every dimension — that in covering, say, 12-dimensional space using 12-dimensional “square” tiles, you will end up with at least two tiles that abut each other exactly.

    Over the years, mathematicians have chipped away at the conjecture, proving it true for some dimensions and false for others. As of this past fall the question remained unresolved only for seven-dimensional space.

    But a new computer-generated proof has finally resolved the problem. The proof, posted online last October [The Resolution of Keller’s Conjecture], is the latest example of how human ingenuity, combined with raw computing power, can answer some of the most vexing problems in mathematics.

    The authors of the new work — Joshua Brakensiek of Stanford University, Marijn Heule and John Mackey of Carnegie Mellon University, and David Narváez of the Rochester Institute of Technology — solved the problem using 40 computers. After a mere 30 minutes, the machines produced a one-word answer: Yes, the conjecture is true in seven dimensions. And we don’t have to take their conclusion on faith.

    The answer comes packaged with a long proof explaining why it’s right. The argument is too sprawling to be understood by human beings, but it can be verified by a separate computer program as correct.

    In other words, even if we don’t know what the computers did to solve Keller’s conjecture, we can assure ourselves they did it correctly.

    The Mysterious Seventh Dimension

    It’s easy to see that Keller’s conjecture is true in two-dimensional space. Take a piece of paper and try to cover it with equal-sized squares, with no gaps between the squares and no overlapping. You won’t get far before you realize that at least two of the squares need to share an edge. If you have blocks lying around it’s similarly easy to see that the conjecture is true in three-dimensional space. In 1930, Keller conjectured that this relationship holds for corresponding spaces and tiles of any dimension.

    Early results supported Keller’s prediction. In 1940, Oskar Perron proved that the conjecture is true for spaces in dimensions one through six. But more than 50 years later, a new generation of mathematicians found the first counterexample to the conjecture: Jeffrey Lagarias and Peter Shor proved that the conjecture is false in dimension 10 in 1992.

    2
    Samuel Velasco/Quanta Magazine; source: https://www.cs.cmu.edu/~mheule/Keller/

    A simple argument shows that once the conjecture is false in one dimension, it’s necessarily false in all higher dimensions. So after Lagarias and Shor, the only unsettled dimensions were seven, eight and nine. In 2002, Mackey proved Keller’s conjecture false in dimension eight (and therefore also in dimension nine).

    That left just dimension seven open — it was either the highest dimension where the conjecture holds or the lowest dimension where it fails.

    “Nobody knows exactly what’s going on there,” said Heule.

    Connect the Dots

    As mathematicians chipped away at the problem over the decades, their methods changed. Perron worked out the first six dimensions with pencil and paper, but by the 1990s, researchers had learned how to translate Keller’s conjecture into a completely different form — one that allowed them to apply computers to the problem.

    The original formulation of Keller’s conjecture is about smooth, continuous space. Within that space, there are infinitely many ways of placing infinitely many tiles. But computers aren’t good at solving problems involving infinite options — to work their magic they need some kind of discrete, finite object to think about.

    In 1990, Keresztély Corrádi and Sándor Szabó came up with just such a discrete object. They proved that you can ask questions about this object that are equivalent to Keller’s conjecture — so that if you prove something about these objects, you necessarily prove Keller’s conjecture as well. This effectively reduced a question about infinity to an easier problem about the arithmetic of a few numbers.

    Here’s how it works.

    Say you want to solve Keller’s conjecture in dimension two. Corrádi and Szabó came up with a method for doing this by building what they called a Keller graph.

    To start, imagine 16 dice on a table, each positioned so that the face with two dots is facing up. (The fact that it’s two dots reflects the fact that you’re addressing the conjecture for dimension two; we’ll see why it’s 16 dice in a moment.) Now color each dot using any of four colors: red, green, white or black.

    The positions of dots on a single die are not interchangeable: Think of one position as representing an x-coordinate and the other as representing a y-coordinate. Once the dice are colored, we’ll start drawing lines, or edges, between pairs of dice if two conditions hold: The dice have dots in one position that are different colors, and in the other position they have dots whose colors are not only different but paired, with red and green forming one pair and black and white the other.

    3
    Samuel Velasco/Quanta Magazine; source: https://www.cs.cmu.edu/~mheule/Keller/

    So, for example, if one die has two red dots and the other has two black dots, they’re not connected: While they meet the criteria for one position (different colors), they don’t meet the criteria for the other (paired colors). However, if one die is colored red-black and the other is colored green-green they are connected, because they have paired colors in one position (red-green) and different colors in the other (black-green).

    There are 16 possible ways of using four colors to color two dots (that’s why we’re working with 16 dice). Array all 16 possibilities in front of you. Connect all pairs of dice that fit the rule. Now for the crucial question: Can you find four dice that are all connected to each other?

    Such fully connected subsets of dice are called a clique. If you can find one, you’ve proved Keller’s conjecture false in dimension two. But you can’t, because it won’t exist. The fact that there’s no clique of four dice means Keller’s conjecture is true in dimension two.
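    That dice-and-colors procedure is small enough to check by brute force. The Python sketch below is an illustration, not the provers the researchers actually used; it encodes the four colors as 0 through 3, with the assumption that paired colors sit exactly two apart, builds the 16 dice, and confirms that no four of them are mutually connected.

    from itertools import combinations, product

    # Colors 0-3; "paired" colors are assumed to sit two apart, so (0, 2) and
    # (1, 3) play the roles of red/green and black/white.
    def connected(u, v):
        """Dice u and v get an edge if some position holds paired colors and
        some other position holds colors that merely differ."""
        n = len(u)
        for i in range(n):
            if abs(u[i] - v[i]) == 2:                     # paired colors at position i
                if any(u[j] != v[j] for j in range(n) if j != i):
                    return True
        return False

    def keller_has_clique(n):
        """Brute-force search of the dimension-n graph for a clique of size 2**n."""
        dice = list(product(range(4), repeat=n))          # 16 dice when n == 2
        return any(all(connected(a, b) for a, b in combinations(group, 2))
                   for group in combinations(dice, 2 ** n))

    print(keller_has_clique(2))   # False: no 4-clique, so the conjecture holds in dimension 2

    The same function with n = 7 describes, in spirit, the search that settled the remaining case: a hunt for a clique of size 2^7 = 128 among far more dice, which is why the real proof needed 40 computers and much cleverer pruning than brute force.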

    The dice are not literally the tiles at issue in Keller’s conjecture, but you can think of each die as representing a tile. Think of the colors assigned to the dots as coordinates which situate the dice in space. And think of the existence of an edge as a description of how two dice are positioned relative to each other.

    If two dice have the exact same colors, they represent tiles that are in the exact same position in space. If they have no colors in common and no paired colors (one die is black-white and the other is green-red), they represent tiles that would partially overlap — which, remember, is not allowed in the tiling. If the two dice have one set of paired colors and one set of the same color (one is red-black and the other is green-black) they represent tiles that share a face.

    Finally, and most importantly, if they have one set of paired colors and another set of colors that are merely different — that is, if they’re connected by an edge — it means the dice represent tiles that are touching each other, but shifted off each other slightly, so that their faces don’t exactly align. This is the condition you really want to investigate. Dice that are connected by an edge represent tiles that are connected without sharing a face — exactly the kind of tiling arrangement needed to disprove Keller’s conjecture.

    “They need to touch each other, but they can’t fully touch each other,” Heule said.

    3
    Samuel Velasco/Quanta Magazine

    Scaling Up

    Thirty years ago, Corrádi and Szabó proved that mathematicians can use this procedure to address Keller’s conjecture in any dimension by adjusting the parameters of the experiment. To prove Keller’s conjecture in three dimensions you might use 216 dice with three dots on a face, and maybe three pairs of colors (though there’s flexibility on this point). Then you’d look for eight dice (2³) among them that are fully connected to each other using the same two conditions we used before.

    As a general rule, to prove Keller’s conjecture in dimension n, you use dice with n dots and try to find a clique of size 2^n. You can think of this clique as representing a kind of “super tile” (made up of 2^n smaller tiles) that could cover the entire n-dimensional space.

    So if you can find this super tile (that itself contains no face-sharing tiles), you can use translated, or shifted, copies of it to cover the entire space with tiles that don’t share a face, thus disproving Keller’s conjecture.

    “If you succeed, you can cover the whole space by translation. The block with no common face will extend to the whole tiling,” said Lagarias, who is now at the University of Michigan.

    Mackey disproved Keller’s conjecture in dimension eight by finding a clique of 256 dice (2^8), so answering Keller’s conjecture for dimension seven required looking for a clique of 128 dice (2^7). Find that clique, and you’ve proved Keller’s conjecture false in dimension seven. Prove that such a clique can’t exist, on the other hand, and you’ve proved the conjecture true.
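
    The dimension-two sketch above generalizes directly, and a short illustration (again a sketch, not the researchers' code) shows why brute force stops being an option. With seven dots and six colors there are 6^7 = 279,936 possible dice, and a clique of 128 must be found among them, far too many combinations to check one by one.

```python
from itertools import product

def keller_dice(n, num_colors):
    """All dice for dimension n: every way of coloring n dot positions."""
    return list(product(range(num_colors), repeat=n))

def connected(d1, d2, num_colors):
    """Generalized edge rule: the dice differ in at least two positions, and in at
    least one position the colors are paired (here, colors half a palette apart)."""
    half = num_colors // 2
    differ = [i for i in range(len(d1)) if d1[i] != d2[i]]
    has_paired = any(abs(d1[i] - d2[i]) == half for i in differ)
    return len(differ) >= 2 and has_paired

# Settling Keller's conjecture in dimension n means deciding whether 2^n dice
# exist that are all pairwise connected.
n, num_colors = 7, 6
print(len(keller_dice(n, num_colors)), 2 ** n)   # 279936 dice, clique target 128
```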

    Unfortunately, finding a clique of 128 dice is a particularly thorny problem. In previous work, researchers could use the fact that dimensions eight and 10 can be “factored,” in a sense, into lower-dimensional spaces that are easier to work with. No such luck here.

    “Dimension seven is bad because it’s prime, which meant that you couldn’t split it into lower-dimensional things,” Lagarias said. “So there was no choice but to deal with the full combinatorics of these graphs.”

    Seeking out a clique of size 128 may be a difficult task for the unassisted human brain, but it’s exactly the kind of question a computer is good at answering — especially if you give it a little help.

    The Language of Logic

    To turn the search for cliques into a problem that computers can grapple with, you need a representation of the problem that uses propositional logic. It’s a type of logical reasoning that incorporates a set of constraints.

    Let’s say you and two friends are planning a party. The three of you are trying to put together the guest list, but you have somewhat competing interests. Maybe you want to either invite Avery or exclude Kemba. One of your co-planners wants to invite Kemba or Brad or both of them. Your other co-planner, with an ax to grind, wants to leave off Avery or Brad or both of them. Given these constraints, you could ask: Is there a guest list that satisfies all three party planners?

    In computer science terms, this type of question is known as a satisfiability problem. You solve it by describing it in what’s called a propositional formula that in this case looks like this, where the letters A, K and B stand for the potential guests: (A OR NOT K) AND (K OR B) AND (NOT A OR NOT B).

    The computer evaluates this formula by plugging in either 0 or 1 for each variable. A 0 means the variable is false, or turned off, and a 1 means it’s true, or turned on. So if you put in a 0 for “A” it means Avery is not invited, while a 1 means she is. There are lots of ways of assigning 1s and 0s to this simple formula — or building the guest list — and it’s possible that after running through them the computer will conclude it’s not possible to satisfy all the competing demands. In this case, though, there are two ways of assigning 1s and 0s that work for everyone: A = 1, K = 1, B = 0 (meaning inviting Avery and Kemba) and A = 0, K = 0, B = 1 (meaning inviting just Brad).
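
    The toy formula is small enough to check exhaustively with a few lines of code. The sketch below is a plain brute-force check, not an actual SAT solver; it simply tries all eight assignments and prints the two that work.

```python
from itertools import product

# The party-planning formula: (A OR NOT K) AND (K OR B) AND (NOT A OR NOT B)
def satisfied(A, K, B):
    return (A or not K) and (K or B) and (not A or not B)

# Try all 2^3 = 8 ways of assigning 0 (false) or 1 (true) to the three variables.
for A, K, B in product([0, 1], repeat=3):
    if satisfied(A, K, B):
        print(f"A={A}, K={K}, B={B}")

# Output:
#   A=0, K=0, B=1   (invite just Brad)
#   A=1, K=1, B=0   (invite Avery and Kemba)
```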

    A computer program that solves propositional logic statements like this is called a SAT solver, where “SAT” stands for “satisfiability.” It searches the possible combinations of variables and produces a one-word answer: Either YES, there is a way to satisfy the formula, or NO, there’s not.

    “You just decide whether each variable is true or false in a way to make the whole formula true, and if you can do it the formula is satisfiable, and if you can’t the formula is unsatisfiable,” said Thomas Hales of the University of Pittsburgh.

    The question of whether it’s possible to find a clique of size 128 is a similar kind of problem. It can also be written as a propositional formula and plugged into a SAT solver. Start with a large number of dice with seven dots apiece and six possible colors. Can you color the dots such that 128 dice can be connected to each other according to the specified rules? In other words, is there a way of assigning colors that makes the clique possible?

    The propositional formula that captures this question about cliques is quite long, containing 39,000 different variables. Each can be assigned one of two values (0 or 1). As a result, the number of possible permutations of variables, or ways of arranging colors on the dice, is 2^39,000 — a very, very big number.
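
    To give a flavor of what such a formula looks like, here is a rough sketch of how the coloring part of the problem might be written down for a SAT solver, with one Boolean variable for each combination of die, dot position and color. This is an illustration of the general approach only; the encoding actually used in the proof is more elaborate, which is part of how it arrives at roughly 39,000 variables rather than the few thousand below.

```python
from itertools import combinations

DICE, DOTS, COLORS = 128, 7, 6   # clique of size 2^7, seven dots per die, six colors

# Boolean variable var(i, j, c) means "die i has color c at dot position j".
# Variables are numbered starting at 1, the convention SAT solvers expect.
def var(i, j, c):
    return i * DOTS * COLORS + j * COLORS + c + 1

clauses = []
for i in range(DICE):
    for j in range(DOTS):
        # Each dot position gets at least one color...
        clauses.append([var(i, j, c) for c in range(COLORS)])
        # ...and at most one color (no two different colors on the same dot).
        for c1, c2 in combinations(range(COLORS), 2):
            clauses.append([-var(i, j, c1), -var(i, j, c2)])

# The connection rules (every pair of the 128 dice must be linked by an edge)
# would be encoded as further clauses over these same variables.
print(len(clauses))   # 14,336 clauses so far, just for the coloring constraints
```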

    To answer Keller’s conjecture for dimension seven, a computer would have to check every one of those combinations — either ruling them all out (meaning no clique of size 128 exists, and Keller is true in dimension seven) or finding just one that works (meaning Keller is false).

    “If you had a naive computer check all possible [configurations], it would be this 324-digit number of cases,” Mackey said. It would take the world’s fastest computers until the end of time before they’d exhausted all the possibilities.

    But the authors of the new work figured out how computers could arrive at a definitive conclusion without actually having to check every possibility. Efficiency is the key.

    Hidden Efficiencies

    Mackey recalls the day when, in his eyes, the project really came together. He was standing in front of a blackboard in his office at Carnegie Mellon University discussing the problem with two of his co-authors, Heule and Brakensiek, when Heule suggested a way of structuring the search so that it could be completed in a reasonable amount of time.

    “There was real intellectual genius at work there in my office that day,” Mackey said. “It was like watching Wayne Gretzky, like watching LeBron James in the NBA Finals. I have goose bumps right now [just thinking about it].”

    There are many ways you might grease the search for a particular Keller graph. Imagine that you have many dice on a table and you’re trying to arrange 128 of them in a way that satisfies the rules of a Keller graph. Maybe you arrange 12 of them correctly, but you can’t find a way to add the next die. At that point, you can rule out all the configurations of 128 dice that involve that unworkable starting configuration of 12 dice.

    “If you know the first five things you’ve assigned don’t fit together, you don’t have to look at any of the other variables, and that generally cuts the search down a whole lot,” said Shor, who is now at the Massachusetts Institute of Technology.

    Another form of efficiency involves symmetry. When objects are symmetric, we think of them as being in some sense the same. This sameness allows you to understand an entire object just by studying a portion of it: Glimpse half a human face and you can reconstruct the whole visage.

    Similar shortcuts work for Keller graphs. Imagine, again, that you’re arranging dice on a table. Maybe you start at the center of the table and build out a configuration to the left. You lay four dice, then hit a roadblock. Now you’ve ruled out one starting configuration — and all configurations based on it. But you can also rule out the mirror image of that starting configuration — the arrangement of dice you get when you position the dice the same way, but building out to the right instead.

    “If you can find a way of doing satisfiability problems that takes into account the symmetries in an intelligent way, then you’ve made the problem much easier,” said Hales.

    The four collaborators took advantage of these kinds of search efficiencies in a new way — in particular, they automated considerations about symmetries, where previous work had relied on mathematicians working practically by hand to deal with them.
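
    As a concrete, hypothetical illustration of what breaking a symmetry can look like in an encoding of this kind: because the dice in a clique can be relabeled, and colors can be swapped in ways that preserve the pairing, any clique that exists can be transformed into one in which the first die carries a fixed, canonical coloring. Pinning that die down with a handful of extra clauses discards only mirror-image copies of solutions, never genuinely new ones. The lines below continue the encoding sketch from earlier (reusing its var, clauses and DOTS) and are meant as a generic example of the idea, not as the exact constraints the researchers used.

```python
# Hypothetical symmetry-breaking clauses, continuing the earlier encoding sketch:
# fix die 0 to a canonical coloring (color 0 at every dot position). Any clique
# that exists can be relabeled into this form, so no real solutions are lost.
for j in range(DOTS):
    clauses.append([var(0, j, 0)])
```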

    They ultimately streamlined the search for a clique of size 128 so that instead of checking 2^39,000 configurations, their SAT solver only had to search about 1 billion (2^30). This turned a search that might have taken eons into a morning chore. Finally, after just half an hour of computations, they had an answer.

    “The computers said no, so we know the conjecture does hold,” said Heule. There is no way of coloring 128 dice so that they’re all connected to each other, so Keller’s conjecture is true in dimension seven: Any arrangement of tiles that covers the space inevitably includes at least two tiles that share a face.

    The computers actually delivered a lot more than a one-word answer. They supported it with a long proof — 200 gigabytes in size — justifying their conclusion.

    The proof is much more than a readout of all the configurations of variables the computers checked. It’s a logical argument which establishes that the desired clique couldn’t possibly exist. The four researchers fed the Keller proof into a formal proof checker — a computer program that traced the logic of the argument — and confirmed it works.

    “You don’t just go through all the cases and not find anything, you go through all the cases and you’re able to write a proof that this thing doesn’t exist,” Mackey said. “You’re able to write a proof of unsatisfiability.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     