Tagged: WIRED

  • richardmitnick 7:13 am on July 21, 2019
    Tags: "Where Do Supermassive Black Holes Come From?", Caltech/MIT Advanced aLigo and Advanced VIRGO, WIRED

    From Western University, CA and WIRED: “Where Do Supermassive Black Holes Come From?” 

    From Western University Canada

    Scott Woods, Western University, Illustration of supermassive black hole
    via

    WIRED

    Wired logo
    NASA

    June 28, 2019

    Researchers decipher the history of supermassive black holes in the early universe.

    At Western University
    MEDIA CONTACT:
    Jeff Renaud, Senior Media Relations Officer,
    519-661-2111, ext. 85165,
    519-520-7281 (mobile),
    jrenaud9@uwo.ca, @jeffrenaud99

    07.18.19
    From Wired
    Meredith Fore

    NASA

    A pair of researchers at Western University in Ontario, Canada, developed their model by looking at quasars: supermassive black holes that are actively accreting matter.

    Astronomers have a pretty good idea of how most black holes form: A massive star dies, and after it goes supernova, the remaining mass (if there’s enough of it) collapses under the force of its own gravity, leaving behind a black hole that’s between five and 50 times the mass of our Sun. What this tidy origin story fails to explain is where supermassive black holes, which range from 100,000 to tens of billions of times the mass of the Sun, come from. These monsters exist at the center of almost all galaxies in the universe, and some emerged only 690 million years after the Big Bang. In cosmic terms, that’s practically the blink of an eye—not nearly long enough for a star to be born, collapse into a black hole, and eat enough mass to become supermassive.

    One long-standing explanation for this mystery, known as the direct-collapse theory, hypothesizes that ancient black holes somehow got big without the benefit of a supernova stage. Now a pair of researchers at Western University in Ontario, Canada—Shantanu Basu and Arpan Das—have found some of the first solid observational evidence for the theory. As they described late last month in The Astrophysical Journal Letters, they did it by looking at quasars.

    Quasars are supermassive black holes that continuously suck in, or accrete, large amounts of matter; they get a special name because the stuff falling into them emits bright radiation, making them easier to observe than many other kinds of black holes. The distribution of their masses—how many are bigger, how many are smaller, and how many are in between—is the main indicator of how they formed.

    Astrophysicists at Western University have found evidence that black holes can form directly, without emerging from a stellar remnant. The production of black holes in this manner in the early universe may explain the presence of extremely massive black holes at a very early stage in the history of our universe.

    After analyzing that information, Basu and Das proposed that the supermassive black holes might have arisen from a chain reaction. They can’t say exactly where the seeds of the black holes came from in the first place, but they think they know what happened next. Each time one of the nascent black holes accreted matter, it would radiate energy, which would heat up neighboring gas clouds. A hot gas cloud collapses more easily than a cold one; with each big meal, the black hole would emit more energy, heating up other gas clouds, and so on. This fits the conclusions of several other astronomers, who believe that the population of supermassive black holes increased at an exponential rate in the universe’s infancy.

    “This is indirect observational evidence that black holes originate from direct-collapses and not from stellar remnants,” says Basu, an astronomy professor at Western who is internationally recognized as an expert in the early stages of star formation and protoplanetary disk evolution.

    Basu and Das developed the new mathematical model by calculating the mass function of supermassive black holes that form over a limited time period and undergo rapid exponential growth in mass. That growth can be regulated by the Eddington limit, which is set by a balance between radiation and gravitational forces, or can even exceed it by a modest factor.

    “Supermassive black holes only had a short time period where they were able to grow fast and then at some point, because of all the radiation in the universe created by other black holes and stars, their production came to a halt,” explains Basu. “That’s the direct-collapse scenario.”

    But at some point, the chain reaction stopped. As more and more black holes—and stars and galaxies—were born and started radiating energy and light, the gas clouds evaporated. “The overall radiation field in the universe becomes too strong to allow such large amounts of gas to collapse directly,” Basu says. “And so the whole process comes to an end.” He and Das estimate that the chain reaction lasted about 150 million years.

    The generally accepted speed limit for black hole growth is called the Eddington rate, a balance between the outward force of radiation and the inward force of gravity. This speed limit can theoretically be exceeded if the matter is collapsing fast enough; the Basu and Das model suggests black holes were accreting matter at three times the Eddington rate for as long as the chain reaction was happening. For astronomers regularly dealing with numbers in the millions, billions, and trillions, three is quite modest.
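    To get a feel for why a factor of three matters, here is a rough sketch of the exponential-growth arithmetic, using the standard Eddington e-folding argument rather than Basu and Das's actual mass-function model. The 10,000-solar-mass seed and the 10 percent radiative efficiency below are illustrative assumptions, not figures from the paper; the 3x Eddington ratio and the 150-million-year window are the numbers quoted above.

    import math

    EDDINGTON_TIME_MYR = 450.0  # sigma_T*c/(4*pi*G*m_p), roughly 450 million years

    def final_mass(seed_msun, eddington_ratio, efficiency, duration_myr):
        # the e-folding time shortens for lower radiative efficiency or super-Eddington accretion
        efold_myr = EDDINGTON_TIME_MYR * efficiency / ((1.0 - efficiency) * eddington_ratio)
        return seed_msun * math.exp(duration_myr / efold_myr)

    seed = 1.0e4  # solar masses; an assumed seed, purely for illustration
    print(f"Eddington-limited, 150 Myr: {final_mass(seed, 1.0, 0.1, 150.0):.1e} solar masses")
    print(f"3x Eddington, 150 Myr: {final_mass(seed, 3.0, 0.1, 150.0):.1e} solar masses")

    At the Eddington rate the assumed seed gains only a factor of about 20 in 150 million years; at three times that rate it gains a factor of roughly 8,000, which is the kind of head start the chain-reaction picture needs.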

    “If the numbers had turned out crazy, like you need 100 times the Eddington accretion rate, or the production period is 2 billion years, or 10 years,” Basu says, “then we’d probably have to conclude that the model is wrong.”

    There are many other theories for how direct-collapse black holes could be created: Perhaps halos of dark matter formed ultramassive quasi-stars that then collapsed, or dense clusters of regular mass stars merged and then collapsed.

    For Basu and Das, one strength of their model is that it doesn’t depend on how the giant seeds were created. “It’s not dependent on some person’s very specific scenario, specific chain of events happening in a certain way,” Basu says. “All this requires is that some very massive black holes did form in the early universe, and they formed in a chain reaction process, and it only lasted a brief time.”

    The ability to see a supermassive black hole forming is still out of reach; existing telescopes can’t look that far back yet. But that may change in the next decade as powerful new tools come online, including the James Webb Space Telescope, the Wide Field Infrared Survey Telescope, and the Laser Interferometer Space Antenna—all of which will operate in space—as well as the Large Synoptic Survey Telescope, based in Chile.

    NASA/ESA/CSA Webb Telescope annotated

    NASA/WFIRST

    Gravity is talking. Lisa will listen. Dialogos of Eide

    ESA/LISA Pathfinder

    ESA/NASA eLISA space based, the future of gravitational wave research

    LSST Camera, built at SLAC

    LSST telescope, currently under construction at Cerro Pachón Chile, a 2,682-meter-high mountain in Coquimbo Region, in northern Chile, alongside the existing Gemini South and Southern Astrophysical Research Telescopes.

    In the next five or 10 years, Basu adds, as the “mountain of data” comes in, models like his and his colleague’s will help astronomers interpret what they see.

    Avi Loeb, one of the pioneers of direct-collapse black hole theory and the director of the Black Hole Initiative at Harvard, is especially excited for the Laser Interferometer Space Antenna. Set to launch in the 2030s, it will allow scientists to measure gravitational waves—fine ripples in the fabric of space-time—more accurately than ever before.

    “We have already started the era of gravitational wave astronomy with stellar-mass black holes,” he says, referring to the black hole mergers detected by the ground-based Laser Interferometer Gravitational-Wave Observatory.

    Its space-based counterpart, Loeb anticipates, could provide a better “census” of the supermassive black hole population.

    For Basu, the question of how supermassive black holes are created is “one of the big chinks in the armor” of our current understanding of the universe. The new model “is a way of making everything work according to current observations,” he says. But Das remains open to any surprises delivered by the spate of new detectors—since surprises, after all, are often how science progresses.

    Caltech/MIT Advanced aLIGO



    VIRGO Gravitational Wave interferometer, near Pisa, Italy


    Caltech/MIT Advanced aLIGO Hanford, WA, USA installation


    Caltech/MIT Advanced aLIGO detector installation, Livingston, LA, USA

    LSC LIGO Scientific Collaboration


    Cornell SXS, the Simulating eXtreme Spacetimes (SXS) project

    Gravitational waves. Credit: MPI for Gravitational Physics/W.Benger

    Gravity is talking. Lisa will listen. Dialogos of Eide

    ESA/eLISA the future of gravitational wave research

    Localizations of gravitational-wave signals detected by LIGO (GW150914, LVT151012, GW151226, GW170104) and, more recently, by the LIGO-Virgo network (GW170814, GW170817) after Virgo came online in August 2017.


    Skymap showing how adding Virgo to LIGO helps in reducing the size of the source-likely region in the sky. (Credit: Giuseppe Greco, Virgo Urbino group)

    See the full WIRED article here .
    See the full Western University article here .

    The University of Western Ontario (UWO), corporately branded as Western University as of 2012 and commonly shortened to Western, is a public research university in London, Ontario, Canada. The main campus is on 455 hectares (1,120 acres) of land, surrounded by residential neighbourhoods, with the Thames River bisecting the campus’s eastern portion. The university operates twelve academic faculties and schools. It is a member of the U15, a group of research-intensive universities in Canada.

    The university was founded on 7 March 1878 by Bishop Isaac Hellmuth of the Anglican Diocese of Huron as the Western University of London, Ontario. It incorporated Huron University College, which had been founded in 1863. The first four faculties were Arts, Divinity, Law and Medicine. The Western University of London became non-denominational in 1908. Since 1919, the university has affiliated with several denominational colleges. The university grew substantially in the post-World War II era, as a number of faculties and schools were added to the university.

    Western is a co-educational university, with more than 24,000 students, and with over 306,000 living alumni worldwide. Notable alumni include government officials, academics, business leaders, Nobel Laureates, Rhodes Scholars, and distinguished fellows. Western’s varsity teams, known as the Western Mustangs, compete in the Ontario University Athletics conference of U Sports.

    Wired logo

    WIRED

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 9:33 am on June 28, 2019
    Tags: "Jony Ive Is Leaving Apple", WIRED

    From WIRED: “Jony Ive Is Leaving Apple” 

    Wired logo

    From WIRED

    Jony Ive. iMore

    The man who designed the iMac, the iPod, the iPhone—and even the Apple Store—is leaving Apple. Jony Ive announced in an interview with the Financial Times on Thursday that he was departing the company after more than two decades to start LoveFrom, a creative agency that will count Apple as its first client. The transition will start later this year, and LoveFrom will formally launch in 2020.

    Ive has been an indispensable leader at Apple and the chief guide of the company’s aesthetic vision. His role took on even greater importance after Apple cofounder Steve Jobs died of pancreatic cancer in 2011. Apple will not immediately appoint a new chief design officer. Instead, Alan Dye, who leads Apple’s user interface team, and Evans Hankey, head of industrial design, will report directly to Apple’s chief operating officer, Jeff Williams, according to the Financial Times.

    “This just seems like a natural and gentle time to make this change,” Ive said in the interview, somewhat perplexingly. Apple’s business is currently weathering many changes: slumping iPhone sales, an increasingly tense trade war between President Trump’s administration and China, the April departure of retail chief Angela Ahrendts. The company is also in the midst of a pivot away from hardware devices to software services.

    It’s not clear exactly what LoveFrom will work on, and Ive was relatively vague about the nature of the firm, though he said he will continue to work on technology and health care. Another Apple design employee, Marc Newson, is also leaving to join the new venture. This isn’t the first time the pair have worked on a non-Apple project together. In 2013, they designed a custom Leica camera that was sold at auction to benefit the Global Fund to Fight AIDS, Tuberculosis and Malaria.

    During an interview with Anna Wintour last November at the WIRED25 summit, Ive discussed the creative process and how he sees his responsibility as a mentor at Apple. “I still think it’s so remarkable that ideas that can become so powerful and so literally world-changing,” he said. “But those same ideas at the beginning are shockingly fragile. I think the creative process doesn’t naturally or easily sit in a large group of people.”

    Ive left the London design studio Tangerine and moved to California to join Apple in 1992. He became senior vice president of industrial design in 1997, after Jobs returned to the company. The next year, the iMac G3 was released, which would prove to be Ive’s first major hit, helping to turn around Apple’s then struggling business. He later helped oversee the design of Apple’s new headquarters, Apple Park.

    “It’s frustrating to talk about this building in terms of absurd, large numbers,” Ive told WIRED’s Steven Levy when the campus opened in 2017. “It makes for an impressive statistic, but you don’t live in an impressive statistic. While it is a technical marvel to make glass at this scale, that’s not the achievement. The achievement is to make a building where so many people can connect and collaborate and walk and talk.”

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 9:34 am on May 27, 2019
    Tags: "These Hidden Women Helped Invent Chaos Theory", Margaret Hamilton, Miss Ellen Fetter, Royal McBee LGP-30, Strange attractor, The butterfly effect, WIRED

    From WIRED: “These Hidden Women Helped Invent Chaos Theory” 

    Wired logo

    From WIRED

    Ellen Fetter and Margaret Hamilton were responsible for programming the enormous 1960s-era computer that would uncover strange attractors and other hallmarks of chaos theory. Credit: Olena Shmahalo/Quanta Magazine

    Ellen Fetter in 1963, the year Lorenz’s seminal paper came out. Courtesy of Ellen Gille

    Margaret Hamilton. Photo: Wikimedia Commons

    A little over half a century ago, chaos started spilling out of a famous experiment. It came not from a petri dish, a beaker or an astronomical observatory, but from the vacuum tubes and diodes of a Royal McBee LGP-30.

    Royal McBee LGP-30. Credit: Ed Thelen

    This “desk” computer—it was the size of a desk—weighed some 800 pounds and sounded like a passing propeller plane. It was so loud that it even got its own office on the fifth floor in Building 24, a drab structure near the center of the Massachusetts Institute of Technology.


    Instructions for the computer came from down the hall, from the office of a meteorologist named Edward Norton Lorenz.

    The story of chaos is usually told like this: Using the LGP-30, Lorenz made paradigm-wrecking discoveries. In 1961, having programmed a set of equations into the computer that would simulate future weather, he found that tiny differences in starting values could lead to drastically different outcomes. This sensitivity to initial conditions, later popularized as the butterfly effect, made predicting the far future a fool’s errand. But Lorenz also found that these unpredictable outcomes weren’t quite random, either. When visualized in a certain way, they seemed to prowl around a shape called a strange attractor.

    About a decade later, chaos theory started to catch on in scientific circles. Scientists soon encountered other unpredictable natural systems that looked random even though they weren’t: the rings of Saturn, blooms of marine algae, Earth’s magnetic field, the number of salmon in a fishery. Then chaos went mainstream with the publication of James Gleick’s Chaos: Making a New Science in 1987. Before long, Jeff Goldblum, playing the chaos theorist Ian Malcolm, was pausing, stammering and charming his way through lines about the unpredictability of nature in Jurassic Park.

    All told, it’s a neat narrative. Lorenz, “the father of chaos,” started a scientific revolution on the LGP-30. It is quite literally a textbook case for how the numerical experiments that modern science has come to rely on—in fields ranging from climate science to ecology to astrophysics—can uncover hidden truths about nature.

    But in fact, Lorenz was not the one running the machine. There’s another story, one that has gone untold for half a century. A year and a half ago, an MIT scientist happened across a name he had never heard before and started to investigate. The trail he ended up following took him into the MIT archives, through the stacks of the Library of Congress, and across three states and five decades to find information about the women who, today, would have been listed as co-authors on that seminal paper. And that material, shared with Quanta, provides a fuller, fairer account of the birth of chaos.

    The Birth of Chaos

    In the fall of 2017, the geophysicist Daniel Rothman, co-director of MIT’s Lorenz Center, was preparing for an upcoming symposium. The meeting would honor Lorenz, who died in 2008, so Rothman revisited Lorenz’s epochal paper, a masterwork on chaos titled Deterministic Nonperiodic Flow. Published in 1963, it has since attracted thousands of citations, and Rothman, having taught this foundational material to class after class, knew it like an old friend. But this time he saw something he hadn’t noticed before. In the paper’s acknowledgments, Lorenz had written, “Special thanks are due to Miss Ellen Fetter for handling the many numerical computations.”

    “Jesus … who is Ellen Fetter?” Rothman recalls thinking at the time. “It’s one of the most important papers in computational physics and, more broadly, in computational science,” he said. And yet he couldn’t find anything about this woman. “Of all the volumes that have been written about Lorenz, the great discovery — nothing.”

    With further online searches, however, Rothman found a wedding announcement from 1963. Ellen Fetter had married John Gille, a physicist, and changed her name. A colleague of Rothman’s then remembered that a graduate student named Sarah Gille had studied at MIT in the 1990s in the very same department as Lorenz and Rothman. Rothman reached out to her, and it turned out that Sarah Gille, now a physical oceanographer at the University of California, San Diego, was Ellen and John’s daughter. Through this connection, Rothman was able to get Ellen Gille, née Fetter, on the phone. And that’s when he learned another name, the name of the woman who had preceded Fetter in the job of programming Lorenz’s first meetings with chaos: Margaret Hamilton.

    When Margaret Hamilton arrived at MIT in the summer of 1959, with a freshly minted math degree from Earlham College, Lorenz had only recently bought and taught himself to use the LGP-30. Hamilton had no prior training in programming either. Then again, neither did anyone else at the time. “He loved that computer,” Hamilton said. “And he made me feel the same way about it.”

    For Hamilton, these were formative years. She recalls being out at a party at three or four a.m., realizing that the LGP-30 wasn’t set to produce results by the next morning, and rushing over with a few friends to start it up. Another time, frustrated by all the things that had to be done to make another run after fixing an error, she devised a way to bypass the computer’s clunky debugging process. To Lorenz’s delight, Hamilton would take the paper tape that fed the machine, roll it out the length of the hallway, and edit the binary code with a sharp pencil. “I’d poke holes for ones, and I’d cover up with Scotch tape the others,” she said. “He just got a kick out of it.”

    There were desks in the computer room, but because of the noise, Lorenz, his secretary, his programmer and his graduate students all shared the other office. The plan was to use the desk computer, then a total novelty, to test competing strategies of weather prediction in a way you couldn’t do with pencil and paper.

    First, though, Lorenz’s team had to do the equivalent of catching the Earth’s atmosphere in a jar. Lorenz idealized the atmosphere in 12 equations that described the motion of gas in a rotating, stratified fluid. Then the team coded them in.

    Sometimes the “weather” inside this simulation would simply repeat like clockwork. But Lorenz found a more interesting and more realistic set of solutions that generated weather that wasn’t periodic. The team set up the computer to slowly print out a graph of how one or two variables—say, the latitude of the strongest westerly winds—changed over time. They would gather around to watch this imaginary weather, even placing little bets on what the program would do next.

    And then one day it did something really strange. This time they had set up the printer not to make a graph, but simply to print out time stamps and the values of a few variables at each time. As Lorenz later recalled, they had re-run a previous weather simulation with what they thought were the same starting values, reading off the earlier numbers from the previous printout. But those weren’t actually the same numbers. The computer was keeping track of numbers to six decimal places, but the printer, to save space on the page, had rounded them to only the first three decimal places.

    After the second run started, Lorenz went to get coffee. The new numbers that emerged from the LGP-30 while he was gone looked at first like the ones from the previous run. This new run had started in a very similar place, after all. But the errors grew exponentially. After about two months of imaginary weather, the two runs looked nothing alike. This system was still deterministic, with no random chance intruding between one moment and the next. Even so, its hair-trigger sensitivity to initial conditions made it unpredictable.
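    The article does not reproduce Lorenz's 12-equation weather model, but the same behavior is easy to demonstrate with the simpler three-variable convection system he published later (described further below). A minimal sketch, assuming the standard parameter values and an arbitrary starting point: one run keeps six decimal places, the other keeps only three, mimicking the truncated printout.

    import numpy as np

    def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # one crude Euler step of the three-variable Lorenz system
        x, y, z = state
        dxdt = sigma * (y - x)
        dydt = x * (rho - z) - y
        dzdt = x * y - beta * z
        return state + dt * np.array([dxdt, dydt, dzdt])

    full = np.array([1.001201, 2.003404, 3.005607])  # arbitrary "six decimal place" start
    rounded = np.round(full, 3)                      # what a three-decimal printout preserves

    for step in range(1, 3001):
        full = lorenz_step(full)
        rounded = lorenz_step(rounded)
        if step % 500 == 0:
            print(f"t = {step * 0.01:5.1f}   separation = {np.linalg.norm(full - rounded):10.6f}")

    The difference introduced by the truncation, less than one part in a thousand, grows by orders of magnitude within a few dozen simulated time units, after which the two runs are effectively unrelated.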

    This meant that in chaotic systems the smallest fluctuations get amplified. Weather predictions fail once they reach some point in the future because we can never measure the initial state of the atmosphere precisely enough. Or, as Lorenz would later present the idea, even a seagull flapping its wings might eventually make a big difference to the weather. (In 1972, the seagull was deposed when a conference organizer, unable to check back about what Lorenz wanted to call an upcoming talk, wrote his own title that switched the metaphor to a butterfly.)

    Many accounts, including the one in Gleick’s book, date the discovery of this butterfly effect to 1961, with the paper following in 1963. But in November 1960, Lorenz described it during the Q&A session following a talk he gave at a conference on numerical weather prediction in Tokyo. After his talk, a question came from a member of the audience: “Did you change the initial condition just slightly and see how much different results were?”

    “As a matter of fact, we tried out that once with the same equation to see what could happen,” Lorenz said. He then started to explain the unexpected result, which he wouldn’t publish for three more years. “He just gives it all away,” Rothman said now. But no one at the time registered it enough to scoop him.

    In the summer of 1961, Hamilton moved on to another project, but not before training her replacement. Two years after Hamilton first stepped on campus, Ellen Fetter showed up at MIT in much the same fashion: a recent graduate of Mount Holyoke with a degree in math, seeking any sort of math-related job in the Boston area, eager and able to learn. She interviewed with a woman who ran the LGP-30 in the nuclear engineering department, who recommended her to Hamilton, who hired her.

    Once Fetter arrived in Building 24, Lorenz gave her a manual and a set of programming problems to practice, and before long she was up to speed. “He carried a lot in his head,” she said. “He would come in with maybe one yellow sheet of paper, a legal piece of paper in his pocket, pull it out, and say, ‘Let’s try this.’”

    The project had progressed meanwhile. The 12 equations produced fickle weather, but even so, that weather seemed to prefer a narrow set of possibilities among all possible states, forming a mysterious cluster which Lorenz wanted to visualize. Finding that difficult, he narrowed his focus even further. From a colleague named Barry Saltzman, he borrowed just three equations that would describe an even simpler nonperiodic system, a beaker of water heated from below and cooled from above.

    Here, again, the LGP-30 chugged its way into chaos. Lorenz identified three properties of the system corresponding roughly to how fast convection was happening in the idealized beaker, how the temperature varied from side to side, and how the temperature varied from top to bottom. The computer tracked these properties moment by moment.

    The properties could also be represented as a point in space. Lorenz and Fetter plotted the motion of this point. They found that over time, the point would trace out a butterfly-shaped fractal structure now called the Lorenz attractor. The trajectory of the point—of the system—would never retrace its own path. And as before, two systems setting out from two minutely different starting points would soon be on totally different tracks. But just as profoundly, wherever you started the system, it would still head over to the attractor and start doing chaotic laps around it.
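    That last point can be sketched the same way, again with the standard three-equation system and textbook parameter values (assumptions here, since the article gives no numbers): two runs launched from very different starting points both settle onto the same bounded, butterfly-shaped region.

    import numpy as np
    from scipy.integrate import solve_ivp

    def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = state
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    t_eval = np.linspace(0.0, 60.0, 6000)
    runs = {"near origin": [0.1, 0.0, 0.0], "far away": [40.0, -35.0, 90.0]}

    for label, start in runs.items():
        sol = solve_ivp(lorenz, (0.0, 60.0), start, t_eval=t_eval, rtol=1e-8)
        x, y, z = sol.y[:, 2000:]  # drop the early transient approach to the attractor
        print(f"{label:11s}: x in [{x.min():6.1f}, {x.max():6.1f}], z in [{z.min():6.1f}, {z.max():6.1f}]")

    Despite starting nowhere near each other, both trajectories end up doing laps through the same narrow range of values; that shared region is the attractor.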

    The attractor and the system’s sensitivity to initial conditions would eventually be recognized as foundations of chaos theory. Both were published in the landmark 1963 paper. But for a while only meteorologists noticed the result. Meanwhile, Fetter married John Gille and moved with him when he went to Florida State University and then to Colorado. They stayed in touch with Lorenz and saw him at social events. But she didn’t realize how famous he had become.


    Still, the notion of small differences leading to drastically different outcomes stayed in the back of her mind. She remembered the seagull, flapping its wings. “I always had this image that stepping off the curb one way or the other could change the course of any field,” she said.

    Flight Checks

    After leaving Lorenz’s group, Hamilton embarked on a different path, achieving a level of fame that rivals or even exceeds that of her first coding mentor. At MIT’s Instrumentation Laboratory, starting in 1965, she headed the onboard flight software team for the Apollo project.

    Her code held up when the stakes were life and death—even when a mis-flipped switch triggered alarms that interrupted the astronaut’s displays right as Apollo 11 approached the surface of the moon. Mission Control had to make a quick choice: land or abort. But trusting the software’s ability to recognize errors, prioritize important tasks, and recover, the astronauts kept going.

    Hamilton, who popularized the term “software engineering,” later led the team that wrote the software for Skylab, the first US space station. She founded her own company in Cambridge in 1976, and in recent years her legacy has been celebrated again and again. She won NASA’s Exceptional Space Act Award in 2003 and received the Presidential Medal of Freedom in 2016. In 2017 she garnered arguably the greatest honor of all: a Margaret Hamilton Lego minifigure.

    Fetter, for her part, continued to program at Florida State after leaving Lorenz’s group at MIT. After a few years, she left her job to raise her children. In the 1970s, she took computer science classes at the University of Colorado, toying with the idea of returning to programming, but she eventually took a tax preparation job instead. By the 1980s, the demographics of programming had shifted. “After I sort of got put off by a couple of job interviews, I said forget it,” she said. “They went with young, techy guys.”

    Chaos only reentered her life through her daughter, Sarah. As an undergraduate at Yale in the 1980s, Sarah Gille sat in on a class about scientific programming. The case they studied? Lorenz’s discoveries on the LGP-30. Later, Sarah studied physical oceanography as a graduate student at MIT, joining the same overarching department as both Lorenz and Rothman, who had arrived a few years earlier. “One of my office mates in the general exam, the qualifying exam for doing research at MIT, was asked: How would you explain chaos theory to your mother?” she said. “I was like, whew, glad I didn’t get that question.”

    The Changing Value of Computation

    Today, chaos theory is part of the scientific repertoire. In a study published just last month, researchers concluded that no amount of improvement in data gathering or in the science of weather forecasting will allow meteorologists to produce useful forecasts that stretch more than 15 days out. (Lorenz had suggested a similar two-week cap to weather forecasts in the mid-1960s.)

    But the many retellings of chaos’s birth say little to nothing about how Hamilton and Ellen Gille wrote the specific programs that revealed the signatures of chaos. “This is an all-too-common story in the histories of science and technology,” wrote Jennifer Light, the department head for MIT’s Science, Technology and Society program, in an email to Quanta. To an extent, we can chalk up that omission to the tendency of storytellers to focus on solitary geniuses. But it also stems from tensions that remain unresolved today.

    First, coders in general have seen their contributions to science minimized from the beginning. “It was seen as rote,” said Mar Hicks, a historian at the Illinois Institute of Technology. “The fact that it was associated with machines actually gave it less status, rather than more.” But beyond that, and contributing to it, many programmers in this era were women.

    In addition to Hamilton and the woman who coded in MIT’s nuclear engineering department, Ellen Gille recalls a woman on an LGP-30 doing meteorology next door to Lorenz’s group. Another woman followed Gille in the job of programming for Lorenz. An analysis of official U.S. labor statistics shows that in 1960, women held 27 percent of computing and math-related jobs.

    The percentage has been stuck there for a half-century. In the mid-1980s, the fraction of women pursuing bachelor’s degrees in programming even started to decline. Experts have argued over why. One idea holds that early personal computers were marketed preferentially to boys and men. Then when kids went to college, introductory classes assumed a detailed knowledge of computers going in, which alienated young women who didn’t grow up with a machine at home. Today, women programmers describe a self-perpetuating cycle where white and Asian male managers hire people who look like all the other programmers they know. Outright harassment also remains a problem.

    Hamilton and Gille, however, still speak of Lorenz’s humility and mentorship in glowing terms. Before later chroniclers left them out, Lorenz thanked them in the literature in the same way he thanked Saltzman, who provided the equations Lorenz used to find his attractor. This was common at the time. Gille recalls that in all her scientific programming work, only once did someone include her as a co-author after she contributed computational work to a paper; she said she was “stunned” because of how unusual that was.

    Computation in science has become even more indispensable, of course. For recent breakthroughs like the first image of a black hole, the hard part was not figuring out which equations described the system, but how to leverage computers to understand the data.

    Today, many programmers leave science not because their role isn’t appreciated, but because coding is better compensated in industry, said Alyssa Goodman, an astronomer at Harvard University and an expert in computing and data science. “In the 1960s, there was no such thing as a data scientist, there was no such thing as Netflix or Google or whoever, that was going to suck in these people and really, really value them,” she said.

    Still, for coder-scientists in academic systems that measure success by paper citations, things haven’t changed all that much. “If you are a software developer who may never write a paper, you may be essential,” Goodman said. “But you’re not going to be counted that way.”

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 11:56 am on May 12, 2019
    Tags: "A Bizarre Form of Water May Exist All Over the Universe", Creating a shock wave that raised the water’s pressure to millions of atmospheres and its temperature to thousands of degrees., Experts say the discovery of superionic ice vindicates computer predictions which could help material physicists craft future substances with bespoke properties., Laboratory for Laser Energetics, Superionic ice, Superionic ice can now claim the mantle of Ice XVIII., Superionic ice is black and hot. A cube of it would weigh four times as much as a normal one., Superionic ice is either another addition to water’s already cluttered array of avatars or something even stranger., Superionic ice would conduct electricity like a metal with the hydrogens playing the usual role of electrons., The discovery of superionic ice potentially solves decades-old puzzles about the composition of “ice giant” worlds., The fields around the solar system’s other planets seem to be made up of strongly defined north and south poles without much other structure., The magnetic fields emanating from Uranus and Neptune looked lumpier and more complex with more than two poles., The probe Voyager 2 had sailed into the outer solar system uncovering something strange about the magnetic fields of the ice giants Uranus and Neptune., What giant icy planets like Uranus and Neptune might be made of, WIRED

    From University of Rochester Laboratory for Laser Energetics via WIRED: “A Bizarre Form of Water May Exist All Over the Universe” 

    U Rochester bloc

    From University of Rochester

    U Rochester’s Laboratory for Laser Energetics

    via

    Wired logo

    WIRED

    The discovery of superionic ice potentially solves the puzzle of what giant icy planets like Uranus and Neptune are made of. They’re now thought to have gaseous, mixed-chemical outer shells, a liquid layer of ionized water below that, a solid layer of superionic ice comprising the bulk of their interiors, and rocky centers. Credit: @iammoteh/Quanta Magazine.

    Recently at the Laboratory for Laser Energetics in Brighton, New York, one of the world’s most powerful lasers blasted a droplet of water, creating a shock wave that raised the water’s pressure to millions of atmospheres and its temperature to thousands of degrees. X-rays that beamed through the droplet in the same fraction of a second offered humanity’s first glimpse of water under those extreme conditions.

    The X-rays revealed that the water inside the shock wave didn’t become a superheated liquid or gas. Paradoxically—but just as physicists squinting at screens in an adjacent room had expected—the atoms froze solid, forming crystalline ice.

    “You hear the shot,” said Marius Millot of Lawrence Livermore National Laboratory in California, and “right away you see that something interesting was happening.” Millot co-led the experiment with Federica Coppari, also of Livermore.

    The findings, published this week in Nature, confirm the existence of “superionic ice,” a new phase of water with bizarre properties. Unlike the familiar ice found in your freezer or at the north pole, superionic ice is black and hot. A cube of it would weigh four times as much as a normal one. It was first theoretically predicted more than 30 years ago, and although it has never been seen until now, scientists think it might be among the most abundant forms of water in the universe.

    Across the solar system, at least, more water probably exists as superionic ice—filling the interiors of Uranus and Neptune—than in any other phase, including the liquid form sloshing in oceans on Earth, Europa and Enceladus. The discovery of superionic ice potentially solves decades-old puzzles about the composition of these “ice giant” worlds.

    Including the hexagonal arrangement of water molecules found in common ice, known as “ice Ih,” scientists had already discovered a bewildering 18 architectures of ice crystal. After ice I, which comes in two forms, Ih and Ic, the rest are numbered II through XVII in order of their discovery. (Yes, there is an Ice IX, but it exists only under contrived conditions, unlike the fictional doomsday substance in Kurt Vonnegut’s novel Cat’s Cradle.)

    Superionic ice can now claim the mantle of Ice XVIII. It’s a new crystal, but with a twist. All the previously known water ices are made of intact water molecules, each with one oxygen atom linked to two hydrogens. But superionic ice, the new measurements confirm, isn’t like that. It exists in a sort of surrealist limbo, part solid, part liquid. Individual water molecules break apart. The oxygen atoms form a cubic lattice, but the hydrogen atoms spill free, flowing like a liquid through the rigid cage of oxygens.

    A time-integrated photograph of the X-ray diffraction experiment at the University of Rochester’s Laboratory for Laser Energetics. Giant lasers focus on a water sample to compress it into the superionic phase. Additional laser beams generate an X-ray flash off an iron foil, allowing the researchers to take a snapshot of the compressed water layer. Credit: Millot, Coppari, Kowaluk (LLNL)

    Experts say the discovery of superionic ice vindicates computer predictions, which could help material physicists craft future substances with bespoke properties. And finding the ice required ultrafast measurements and fine control of temperature and pressure, advancing experimental techniques. “All of this would not have been possible, say, five years ago,” said Christoph Salzmann at University College London, who discovered ices XIII, XIV and XV. “It will have a huge impact, for sure.”

    Depending on whom you ask, superionic ice is either another addition to water’s already cluttered array of avatars or something even stranger. Because its water molecules break apart, said the physicist Livia Bove of France’s National Center for Scientific Research and Pierre and Marie Curie University, it’s not quite a new phase of water. “It’s really a new state of matter,” she said, “which is rather spectacular.”

    Puzzles Put on Ice

    Physicists have been after superionic ice for years—ever since a primitive computer simulation led by Pierfranco Demontis in 1988 predicted [Physical Review Letters] water would take on this strange, almost metal-like form if you pushed it beyond the map of known ice phases.

    Under extreme pressure and heat, the simulations suggested, water molecules break. With the oxygen atoms locked in a cubic lattice, “the hydrogens now start to jump from one position in the crystal to another, and jump again, and jump again,” said Millot. The jumps between lattice sites are so fast that the hydrogen atoms—which are ionized, making them essentially positively charged protons—appear to move like a liquid.

    This suggested superionic ice would conduct electricity, like a metal, with the hydrogens playing the usual role of electrons. Having these loose hydrogen atoms gushing around would also boost the ice’s disorder, or entropy. In turn, that increase in entropy would make this ice much more stable than other kinds of ice crystals, causing its melting point to soar upward.

    But all this was easy to imagine and hard to trust. The first models used simplified physics, hand-waving their way through the quantum nature of real molecules. Later simulations folded in more quantum effects but still sidestepped the actual equations required to describe multiple quantum bodies interacting, which are too computationally difficult to solve. Instead, they relied on approximations, raising the possibility that the whole scenario could be just a mirage in a simulation. Experiments, meanwhile, couldn’t make the requisite pressures without also generating enough heat to melt even this hardy substance.

    As the problem simmered, though, planetary scientists developed their own sneaking suspicions that water might have a superionic ice phase. Right around the time when the phase was first predicted, the probe Voyager 2 had sailed into the outer solar system, uncovering something strange about the magnetic fields of the ice giants Uranus and Neptune.

    The fields around the solar system’s other planets seem to be made up of strongly defined north and south poles, without much other structure. It’s almost as if they have just bar magnets in their centers, aligned with their rotation axes. Planetary scientists chalk this up to “dynamos”: interior regions where conductive fluids rise and swirl as the planet rotates, sprouting massive magnetic fields.

    By contrast, the magnetic fields emanating from Uranus and Neptune looked lumpier and more complex, with more than two poles. They also don’t align as closely to their planets’ rotation. One way to produce this would be to somehow confine the conducting fluid responsible for the dynamo into just a thin outer shell of the planet, instead of letting it reach down into the core.

    But the idea that these planets might have solid cores, which are incapable of generating dynamos, didn’t seem realistic. If you drilled into these ice giants, you would expect to first encounter a layer of ionic water, which would flow, conduct currents and participate in a dynamo. Naively, it seems like even deeper material, at even hotter temperatures, would also be a fluid. “I used to always make jokes that there’s no way the interiors of Uranus and Neptune are actually solid,” said Sabine Stanley at Johns Hopkins University. “But now it turns out they might actually be.”

    Ice on Blast

    Now, finally, Coppari, Millot and their team have brought the puzzle pieces together.

    In an earlier experiment, published last February [Nature Physics], the physicists built indirect evidence for superionic ice. They squeezed a droplet of room-temperature water between the pointy ends of two cut diamonds. By the time the pressure rose to about a gigapascal, roughly 10 times that at the bottom of the Marianas Trench, the water had transformed into a tetragonal crystal called ice VI. By about 2 gigapascals, it had switched into ice VII, a denser, cubic form transparent to the naked eye that scientists recently discovered also exists in tiny pockets inside natural diamonds.

    Then, using the OMEGA laser at the Laboratory for Laser Energetics, Millot and colleagues targeted the ice VII, still between diamond anvils. As the laser hit the surface of the diamond, it vaporized material upward, effectively rocketing the diamond away in the opposite direction and sending a shock wave through the ice. Millot’s team found their super-pressurized ice melted at around 4,700 degrees Celsius, about as expected for superionic ice, and that it did conduct electricity thanks to the movement of charged protons.

    Federica Coppari, a physicist at Lawrence Livermore National Laboratory, with an X-ray diffraction image plate that she and her colleagues used to discover ice XVIII, also known as superionic ice. Credit: Eugene Kowaluk/Laboratory for Laser Energetics

    With those predictions about superionic ice’s bulk properties settled, the new study led by Coppari and Millot took the next step of confirming its structure. “If you really want to prove that something is crystalline, then you need X-ray diffraction,” Salzmann said.

    Their new experiment skipped ices VI and VII altogether. Instead, the team simply smashed water with laser blasts between diamond anvils. Billionths of a second later, as shock waves rippled through and the water began crystallizing into nanometer-size ice cubes, the scientists used 16 more laser beams to vaporize a thin sliver of iron next to the sample. The resulting hot plasma flooded the crystallizing water with X-rays, which then diffracted from the ice crystals, allowing the team to discern their structure.

    Atoms in the water had rearranged into the long-predicted but never-before-seen architecture, Ice XVIII: a cubic lattice with oxygen atoms at every corner and the center of each face. “It’s quite a breakthrough,” Coppari said.

    “The fact that the existence of this phase is not an artifact of quantum molecular dynamic simulations, but is real—that’s very comforting,” Bove said.

    And this kind of successful cross-check between simulations and real superionic ice suggests the ultimate “dream” of materials physics researchers might soon be within reach. “You tell me what properties you want in a material, and we’ll go to the computer and figure out theoretically what material and what kind of crystal structure you would need,” said Raymond Jeanloz, a member of the discovery team based at University of California, Berkeley. “The community at large is getting close.”

    The new analyses also hint that although superionic ice does conduct some electricity, it’s a mushy solid. It would flow over time, but not truly churn. Inside Uranus and Neptune, then, fluid layers might stop about 8,000 kilometers down into the planet, where an enormous mantle of sluggish, superionic ice like Millot’s team produced begins. That would limit most dynamo action to shallower depths, accounting for the planets’ unusual fields.

    Other planets and moons in the solar system likely don’t host the right interior sweet spots of temperature and pressure to allow for superionic ice. But many ice giant-sized exoplanets might, suggesting the substance could be common inside icy worlds throughout the galaxy.

    Of course, though, no real planet contains just water. The ice giants in our solar system also mix in chemical species like methane and ammonia. The extent to which superionic behavior actually occurs in nature is “going to depend on whether these phases still exist when we mix water with other materials,” Stanley said. So far, that isn’t clear, although other researchers have argued [Science] superionic ammonia should also exist.

    Aside from extending their research to other materials, the team also hopes to keep zeroing in on the strange, almost paradoxical duality of their superionic crystals. Just capturing the lattice of oxygen atoms “is clearly the most challenging experiment I have ever done,” said Millot. They haven’t yet seen the ghostly, interstitial flow of protons through the lattice. “Technologically, we are not there yet,” Coppari said, “but the field is growing very fast.”

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    University of Rochester Laboratory for Laser Energetics

    The Laboratory for Laser Energetics (LLE) is a scientific research facility which is part of the University of Rochester’s south campus, located in Brighton, New York. The lab was established in 1970 and its operations since then have been funded jointly, mainly by the United States Department of Energy, the University of Rochester and the New York State government. The Laser Lab was commissioned to serve as a center for investigations of high-energy physics, specifically those involving the interaction of extremely intense laser radiation with matter. Many types of scientific experiments are performed at the facility with a strong emphasis on inertial confinement, direct drive, laser-induced fusion, fundamental plasma physics and astrophysics using OMEGA. In June of 1995, OMEGA became the world’s highest-energy ultraviolet laser. The lab shares its building with the Center for Optoelectronics and Imaging and the Center for Optics Manufacturing. The Robert L. Sproull Center for Ultra High Intensity Laser Research was opened in 2005 and houses the OMEGA EP laser, which was completed in May 2008.

    The laboratory is unique in conducting big science on a university campus. More than 180 Ph.D.s have been awarded for research done at the LLE. During summer months the lab sponsors a program that involves local-area high school juniors in the research being done at the laboratory. Most of the projects are done on current research that is led by senior scientists at the lab.

    U Rochester Campus

    The University of Rochester is one of the country’s top-tier research universities. Our 158 buildings house more than 200 academic majors, more than 2,000 faculty and instructional staff, and some 10,500 students—approximately half of whom are women.

    Learning at the University of Rochester is also on a very personal scale. Rochester remains one of the smallest and most collegiate among top research universities, with smaller classes, a low 10:1 student to teacher ratio, and increased interactions with faculty.

     
  • richardmitnick 2:32 pm on April 17, 2019
    Tags: “Not only is wind power less expensive but you can place the turbines in deeper water and do it less expensively than before.”, Environment advocates worry that offshore wind platform construction will damage sound-sensitive marine mammals like whales and dolphins., Even though cables can stretch further somebody still has to pay to bring this electricity back on land, Fishermen fear they will be shut out from fishing grounds, GE last year unveiled an even bigger turbine the 12 MW Haliade-X, In Denmark and Germany the governments pay for these connections and to convert the turbine’s alternating current (AC) to direct current (DC) for long-distance transmission., Offshore wind developers must also be sensitive to neighbors who don’t like power cables coming ashore near their homes, The potential is to generate more than 2000 gigawatts of capacity or 7200 terawatt-hours of electricity generation per year., US officials say there’s a lot of room for offshore wind to grow in US coastal waters, Vineyard Wind project, Wind Power finally catches hold, WIRED   

    From WIRED: “Offshore Wind Farms Are Spinning Up in the US—At Last” 

    Wired logo

    From WIRED

    Christopher Furlong/Getty Images

    On June 1, the Pilgrim nuclear plant in Massachusetts will shut down, a victim of rising costs and a technology that is struggling to remain economically viable in the United States. But the electricity generated by the aging nuclear station soon will be replaced by another carbon-free source: a fleet of 84 offshore wind turbines rising nearly 650 feet above the ocean’s surface.

    The developers of the Vineyard Wind project say their turbines—anchored about 14 miles south of Martha’s Vineyard—will generate 800 megawatts of electricity once they start spinning sometime in 2022. That’s equivalent to the output of a large coal-fired power plant and more than Pilgrim’s 640 megawatts.

    “Offshore wind has arrived,” says Erich Stephens, chief development officer for Vineyard Wind, a developer based in New Bedford, Massachusetts, that is backed by Danish and Spanish wind energy firms. He explains that the costs have fallen enough to make developers take it seriously. “Not only is wind power less expensive, but you can place the turbines in deeper water, and do it less expensively than before.”

    Last week, the Massachusetts Department of Public Utilities awarded Vineyard Wind a 20-year contract to provide electricity at 8.9 cents per kilowatt-hour. That’s about a third the cost of other renewables (such as Canadian hydropower), and it’s estimated that ratepayers will save $1.3 billion in energy costs over the life of the deal.

    Can offshore wind pick up the slack from Pilgrim and other fading nukes? Its proponents think so, as long as they can respond to concerns about potential harm to fisheries and marine life, as well as successfully connect to the existing power grid on land. Wind power is nothing new in the US, with 56,000 turbines in 41 states, Guam, and Puerto Rico producing a total of 96,433 MW nationwide. But wind farms located offshore, where wind blows steady and strong, unobstructed by buildings or mountains, have yet to start cranking.

    In recent years, however, the turbines have grown bigger and the towers taller, able to generate three times more power than they could five years ago. The technology needed to install them farther away from shore has improved as well, making them more palatable to nearby communities. When it comes to wind turbines, bigger is better, says David Hattery, practice group coordinator for power at K&L Gates, a Seattle law firm that represents wind power manufacturers and developers. Bigger turbines and blades perform better under the forces generated by strong ocean winds. “Turbulence wears out bearings and gear boxes,” Hattery said. “What you don’t want offshore is a turbine that breaks down. It is very expensive to fix it.”

    In the race to get big, Vineyard Wind plans to use a 9.5 MW turbine with a 174-meter diameter rotor, a giant by the standard of most wind farms. But GE last year unveiled an even bigger turbine, the 12 MW Haliade-X. When complete in 2021, each turbine will have a 220-meter wingspan (tip to tip) and be able to generate enough electricity to light 16,000 European homes. GE is building these beasts for offshore farms in Europe, where wind power now generates 14 percent of the continent’s electricity (compared to 6.5 percent in the US). “We feel that we have just the right machine at just the right time,” says John Lavelle, CEO of GE Renewable Energy’s Offshore Wind business.

    US officials say there’s a lot of room for offshore wind to grow in US coastal waters, with the potential to generate more than 2,000 gigawatts of capacity, or 7,200 terawatt-hours of electricity generation per year, according to the US Department of Energy. That’s nearly double the nation’s current electricity use. Even if only 1 percent of that potential is captured, nearly 6.5 million homes could be powered by offshore wind energy.
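    Those Department of Energy figures hang together on a quick back-of-the-envelope check; the sketch below derives the implied capacity factor and per-home consumption from the article's own numbers rather than quoting any new statistics.

    capacity_gw = 2000.0     # potential offshore wind capacity quoted above
    generation_twh = 7200.0  # potential annual generation quoted above
    hours_per_year = 8760.0

    capacity_factor = generation_twh * 1000.0 / (capacity_gw * hours_per_year)
    print(f"Implied average capacity factor: {capacity_factor:.0%}")  # about 41%

    one_percent_twh = 0.01 * generation_twh  # 72 TWh per year
    homes_powered = 6.5e6                    # from the article
    per_home_mwh = one_percent_twh * 1e6 / homes_powered
    print(f"Implied consumption per home: {per_home_mwh:.1f} MWh per year")  # about 11 MWh

    A capacity factor a bit over 40 percent and roughly 11 megawatt-hours per household per year are both plausible values, so the quoted potential and the homes-powered claim are consistent with each other.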

    Of course, getting these turbines built and spinning takes years of planning and dozens of federal and state permits. The federal government made things a bit easier in the past five years with new rules governing where to put the turbines. The Bureau of Ocean Energy Management (a division of the Department of Interior) now sets boundaries for offshore leases and accepts bids from commercial enterprises to develop wind farms.

    The first offshore project was a 30 MW, five-turbine wind farm that went live at the end of 2016. Developed by Deepwater Wind, the installation replaced diesel generators that once serviced the resorts of Block Island, Rhode Island. Now there are 15 active proposals for wind farms along the East Coast, and others are in the works for California, Hawaii, South Carolina, and New York.

    By having federal planners determine where to put the turbines, developers hope to avoid the debacle that was Cape Wind. Cape Wind was proposed for Nantucket Sound, a shallow area between Nantucket, Martha’s Vineyard, and Cape Cod. Developers began it with high hopes back in 2001, but pulled the plug in 2017 after years of court battles with local residents, fishermen, and two powerful American families: the Kennedys and the Koch brothers, both of whom could see the turbines from their homes.

    Like an extension cord that won’t reach all the way to the living room, the undersea cables available at the time were limited in length, which left Cape Wind’s developers stuck in Nantucket Sound. But new undersea transmission capability means the turbines can be located farther offshore, away from beachfront homes, commercial shipping lanes, or whale migration routes.

    Even though cables can stretch further, somebody still has to pay to bring this electricity back on land, says Mark McGranaghan, vice president of integrated grid for the Electric Power Research Institute. McGranaghan says that in Denmark and Germany the governments pay for these connections and for the offshore electrical substations that convert the turbine’s alternating current (AC) to direct current (DC) for long-distance transmission. Here in the US, he predicts these costs will likely have to be paid by utility ratepayers or state taxpayers. “Offshore wind is totally real and we know how to do it,” McGranaghan says. “One of the things that comes up is who pays for the infrastructure to bring the power back.”

    It’s not just money. Offshore wind developers must also be sensitive to neighbors who don’t like power cables coming ashore near their homes, fishermen who fear they will be shut out from fishing grounds, or environmental advocates who worry that offshore wind platform construction will damage sound-sensitive marine mammals like whales and dolphins.

    Still, maybe that’s an easier job than finding a safe place to put all the radioactive waste that keeps piling up around Pilgrim and the nation’s 97 other nuclear reactors.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 9:57 am on April 7, 2019 Permalink | Reply
    Tags: "How Google Is Cramming More Data Into Its New Atlantic Cable", , , Google says the fiber-optic cable it's building across the Atlantic Ocean will be the fastest of its kind. Fiber-optic networks work by sending light over thin strands of glass., Japanese tech giant NEC says it has technology that will enable long-distance undersea cables with 16 fiber-optic pairs., The current growth in new cables is driven less by telcos and more by companies like Google Facebook and Microsoft, Today most long-distance undersea cables contain six or eight fiber-optic pairs., Vijay Vusirikala head of network architecture and optical engineering at Google says the company is already contemplating 24-pair cables., WIRED   

    From WIRED: “How Google Is Cramming More Data Into Its New Atlantic Cable” 

    Wired logo

    From WIRED

    04.05.19
    Klint Finley

    1
    Fiber-optic cable being loaded onto a ship owned by SubCom, which is working with Google to build the world’s fastest undersea data connection. Bill Gallery/SubCom.

    1

    Google says the fiber-optic cable it’s building across the Atlantic Ocean will be the fastest of its kind. When the cable goes live next year, the company estimates it will transmit around 250 terabits per second, fast enough to zap all the contents of the Library of Congress from Virginia to France three times every second. That’s about 56 percent faster than Facebook and Microsoft’s Marea cable, which can transmit about 160 terabits per second between Virginia and Spain.
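
    Those figures are easy to sanity-check; the ~10 TB assumed below for one copy of the Library of Congress’s digitized collection is an assumption, not a number from the article.

```python
# Rough arithmetic behind the cable comparison above.
dunant_tbps = 250                           # Google's estimate for Dunant
marea_tbps = 160                            # Facebook/Microsoft's Marea
print(round((dunant_tbps / marea_tbps - 1) * 100))  # ~56 percent faster

bytes_per_s = dunant_tbps * 1e12 / 8        # terabits -> bytes per second
loc_copy_bytes = 10e12                      # assumed ~10 TB per Library copy
print(round(bytes_per_s / loc_copy_bytes, 1))       # ~3 copies per second
```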

    Fiber-optic networks work by sending light over thin strands of glass. Fiber-optic cables, which are about the diameter of a garden hose, enclose multiple pairs of these fibers. Google’s new cable is so fast because it carries more fiber pairs. Today, most long-distance undersea cables contain six or eight fiber-optic pairs. Google said Friday that its new cable, dubbed Dunant, is expected to be the first to include 12 pairs, thanks to new technology developed by Google and SubCom, which designs, manufactures, and deploys undersea cables.

    Dunant might not be the fastest for long: Japanese tech giant NEC says it has technology that will enable long-distance undersea cables with 16 fiber-optic pairs. And Vijay Vusirikala, head of network architecture and optical engineering at Google, says the company is already contemplating 24-pair cables.

    The surge in intercontinental cables, and their increasing capacity, reflect continual growth in internet traffic. They enable activists to livestream protests to distant countries, help companies buy and sell products around the world, and facilitate international romances. “Many people still believe international telecommunications are conducted by satellite,” says NEC executive Atsushi Kuwahara. “That was true in 1980, but nowadays, 99 percent of international telecommunications is submarine.”

    So much capacity is being added that, for the moment, it’s outstripping demand. Animations featured in a recent New York Times article illustrated the exploding number of undersea cables since 1989. That growth is continuing. Alan Mauldin of the research firm Telegeography says only about 30 percent of the potential capacity of major undersea cable routes is currently in use—and more than 60 new cables are planned to enter service by 2021. That summons memories of the 1990s Dotcom Bubble, when telecoms buried far more fiber in both the ground and the ocean than they would need for years to come.

    3
    A selection of fiber-optic cable products made by SubCom. Brian Smith/SubCom.

    But the current growth in new cables is driven less by telcos and more by companies like Google, Facebook, and Microsoft that crave ever more bandwidth for the streaming video, photos, and other data scuttling between their global data centers. And experts say that as undersea cable technologies improve, it’s not crazy for companies to build newer, faster routes between continents, even with so much fiber already lying idle in the ocean.

    Controlling Their Own Destiny

    Mauldin says that although there’s still lots of capacity available, companies like Google and Facebook prefer to have dedicated capacity for their own use. That’s part of why big tech companies have either invested in new cables through consortia or, in some cases, built their own cables.

    “When we do our network planning, it’s important to know if we’ll have the capacity in the network,” says Google’s Vusirikala. “One way to know is by building our own cables, controlling our own destiny.”

    Another factor is diversification. Having more cables means there are alternate routes for data if a cable breaks or malfunctions. At the same time, more people outside Europe and North America are tapping the internet, often through smartphones. That’s prompted companies to think about new routes, like between North and South America, or between Europe and Africa, says Mike Hollands, an executive at European data center company Interxion. The Marea cable ticks both of those boxes, giving Facebook and Microsoft faster routes to North Africa and the Middle East, while also creating an alternate path to Europe in case one or more of the traditional routes were disrupted by something like an earthquake.

    Cost Per Bit

    There are financial incentives for the tech companies as well. By owning the cables instead of leasing them from telcos, Google and other tech giants can potentially save money in the long term, Mauldin says.

    The cost to build and deploy a new undersea cable isn’t dropping. But as companies find ways to pump more data through these cables more quickly, their value increases.

    There are a few ways to increase the performance of a fiber-optic communications system. One is to increase the energy used to push the data from one end to the other. The catch is that to keep the data signal from degrading, undersea cables need repeaters roughly every 100 kilometers, Vusirikala explains. Those repeaters amplify not just the signal, but any noise introduced along the way, diminishing the value of boosting the energy.

    4
    A rendering of one of SubCom’s specialized Reliance-class cable ships. SubCom.

    You can also increase the amount of data that each fiber pair within a fiber-optic cable can carry. A technique called “dense wavelength division multiplexing” now enables more than 100 wavelengths to be sent along a single fiber pair.
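
    To see how those wavelengths multiply out, here is an illustrative breakdown; the per-wavelength data rate is an assumed ballpark for modern coherent optics, not a published Dunant specification.

```python
# Illustrative only: how a ~250 Tb/s cable total might decompose.
fiber_pairs = 12              # Dunant's pair count, per the article
wavelengths_per_pair = 100    # order of magnitude enabled by DWDM
gbps_per_wavelength = 200     # assumed channel rate, not a published spec
total_tbps = fiber_pairs * wavelengths_per_pair * gbps_per_wavelength / 1000
print(total_tbps)             # 240.0, in the ballpark of ~250 Tb/s
```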

    Or you can pack more fiber pairs into a cable. Traditionally each pair in a fiber-optic cable required two repeater components called “pumps.” The pumps take up space inside the repeater casing, so adding more pumps would require changes to the way undersea cable systems are built, deployed, and maintained, says SubCom CTO Georg Mohs.

    To get around that problem, SubCom and others are using a technique called space-division multiplexing (SDM) to allow four repeater pumps to power four fiber pairs. That will reduce the capacity of each pair, but cutting the required number of pumps in half allows them to add additional pairs that more than make up for it, Mohs says.
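
    The trade-off Mohs describes can be sketched with made-up numbers; the pump budget and the per-pair capacity penalty below are assumptions chosen only to show the shape of the argument.

```python
# Why fewer pumps per pair can raise total capacity (illustrative numbers).
pump_budget = 24                      # assumed pumps that fit in one repeater
traditional_pairs = pump_budget // 2  # 2 pumps per pair, the old approach
sdm_pairs = pump_budget // 1          # 1 pump per pair under SDM
traditional_capacity = traditional_pairs * 1.0  # relative capacity per pair
sdm_capacity = sdm_pairs * 0.7                  # assume ~30% less per pair
print(traditional_capacity, sdm_capacity)       # 12.0 vs 16.8
```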

    “This had been in our toolkit before,” Mohs says, but like other companies, SubCom has been more focused on adding more wavelengths per fiber pair.

    The result: Cables that can move more data than ever before. That means the total cost per bit of data sent across the cable is lower.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 12:50 pm on March 18, 2019 Permalink | Reply
    Tags: "AI Algorithms Are Now Shockingly Good at Doing Science", , WIRED   

    From Quanta via WIRED: “AI Algorithms Are Now Shockingly Good at Doing Science” 

    Quanta Magazine
    Quanta Magazine

    via

    Wired logo

    From WIRED

    3.17.19
    Dan Falk

    1
    Whether probing the evolution of galaxies or discovering new chemical compounds, algorithms are detecting patterns no humans could have spotted. Rachel Suggs/Quanta Magazine

    No human, or team of humans, could possibly keep up with the avalanche of information produced by many of today’s physics and astronomy experiments. Some of them record terabytes of data every day—and the torrent is only increasing. The Square Kilometer Array, a radio telescope slated to switch on in the mid-2020s, will generate about as much data traffic each year as the entire internet.

    SKA Square Kilometer Array

    The deluge has many scientists turning to artificial intelligence for help. With minimal human input, AI systems such as artificial neural networks—computer-simulated networks of neurons that mimic the function of brains—can plow through mountains of data, highlighting anomalies and detecting patterns that humans could never have spotted.

    Of course, the use of computers to aid in scientific research goes back about 75 years, and the method of manually poring over data in search of meaningful patterns originated millennia earlier. But some scientists are arguing that the latest techniques in machine learning and AI represent a fundamentally new way of doing science. One such approach, known as generative modeling, can help identify the most plausible theory among competing explanations for observational data, based solely on the data, and, importantly, without any preprogrammed knowledge of what physical processes might be at work in the system under study. Proponents of generative modeling see it as novel enough to be considered a potential “third way” of learning about the universe.

    Traditionally, we’ve learned about nature through observation. Think of Johannes Kepler poring over Tycho Brahe’s tables of planetary positions and trying to discern the underlying pattern. (He eventually deduced that planets move in elliptical orbits.) Science has also advanced through simulation. An astronomer might model the movement of the Milky Way and its neighboring galaxy, Andromeda, and predict that they’ll collide in a few billion years. Both observation and simulation help scientists generate hypotheses that can then be tested with further observations. Generative modeling differs from both of these approaches.

    Milkdromeda: Andromeda (left) in Earth’s night sky in 3.75 billion years. NASA

    “It’s basically a third approach, between observation and simulation,” says Kevin Schawinski, an astrophysicist and one of generative modeling’s most enthusiastic proponents, who worked until recently at the Swiss Federal Institute of Technology in Zurich (ETH Zurich). “It’s a different way to attack a problem.”

    Some scientists see generative modeling and other new techniques simply as power tools for doing traditional science. But most agree that AI is having an enormous impact, and that its role in science will only grow. Brian Nord, an astrophysicist at Fermi National Accelerator Laboratory who uses artificial neural networks to study the cosmos, is among those who fear there’s nothing a human scientist does that will be impossible to automate. “It’s a bit of a chilling thought,” he said.


    Discovery by Generation

    Ever since graduate school, Schawinski has been making a name for himself in data-driven science. While working on his doctorate, he faced the task of classifying thousands of galaxies based on their appearance. Because no readily available software existed for the job, he decided to crowdsource it—and so the Galaxy Zoo citizen science project was born.

    Galaxy Zoo via Astrobites

    Beginning in 2007, ordinary computer users helped astronomers by logging their best guesses as to which galaxy belonged in which category, with majority rule typically leading to correct classifications. The project was a success, but, as Schawinski notes, AI has made it obsolete: “Today, a talented scientist with a background in machine learning and access to cloud computing could do the whole thing in an afternoon.”

    Schawinski turned to the powerful new tool of generative modeling in 2016. Essentially, generative modeling asks how likely it is, given condition X, that you’ll observe outcome Y. The approach has proved incredibly potent and versatile. As an example, suppose you feed a generative model a set of images of human faces, with each face labeled with the person’s age. As the computer program combs through these “training data,” it begins to draw a connection between older faces and an increased likelihood of wrinkles. Eventually it can “age” any face that it’s given—that is, it can predict what physical changes a given face of any age is likely to undergo.

    3
    None of these faces is real. The faces in the top row (A) and left-hand column (B) were constructed by a generative adversarial network (GAN) using building-block elements of real faces. The GAN then combined basic features of the faces in A, including their gender, age and face shape, with finer features of faces in B, such as hair color and eye color, to create all the faces in the rest of the grid. NVIDIA

    The best-known generative modeling systems are “generative adversarial networks” (GANs). After adequate exposure to training data, a GAN can repair images that have damaged or missing pixels, or it can make blurry photographs sharp. GANs learn to infer the missing information by means of a competition (hence the term “adversarial”): One part of the network, known as the generator, generates fake data, while a second part, the discriminator, tries to distinguish fake data from real data. As the program runs, both halves get progressively better. You may have seen some of the hyper-realistic, GAN-produced “faces” that have circulated recently — images of “freakishly realistic people who don’t actually exist,” as one headline put it.
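
    The competition can be sketched in a few dozen lines. The toy below, written with PyTorch, is a minimal illustration of the generator-versus-discriminator loop described above, not anything used in the research discussed here: the 1-D Gaussian “training data,” the tiny network sizes, and the hyperparameters are all illustrative assumptions.

```python
import torch
import torch.nn as nn

def real_samples(n):
    # Stand-in "training data": a 1-D Gaussian centered at 4.0.
    return torch.randn(n, 1) * 0.5 + 4.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Discriminator: label real samples 1 and generated samples 0.
    real = real_samples(64)
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: try to make the discriminator call its fakes real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Mean of generated samples; it should drift toward the real mean of ~4.0.
print(G(torch.randn(1000, 8)).mean().item())
```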

    More broadly, generative modeling takes sets of data (typically images, but not always) and breaks each of them down into a set of basic, abstract building blocks — scientists refer to this as the data’s “latent space.” The algorithm manipulates elements of the latent space to see how this affects the original data, and this helps uncover physical processes that are at work in the system.

    The idea of a latent space is abstract and hard to visualize, but as a rough analogy, think of what your brain might be doing when you try to determine the gender of a human face. Perhaps you notice hairstyle, nose shape, and so on, as well as patterns you can’t easily put into words. The computer program is similarly looking for salient features among data: Though it has no idea what a mustache is or what gender is, if it’s been trained on data sets in which some images are tagged “man” or “woman,” and in which some have a “mustache” tag, it will quickly deduce a connection.

    In a paper published in December in Astronomy & Astrophysics, Schawinski and his ETH Zurich colleagues Dennis Turp and Ce Zhang used generative modeling to investigate the physical changes that galaxies undergo as they evolve. (The software they used treats the latent space somewhat differently from the way a generative adversarial network treats it, so it is not technically a GAN, though similar.) Their model created artificial data sets as a way of testing hypotheses about physical processes. They asked, for instance, how the “quenching” of star formation—a sharp reduction in formation rates—is related to the increasing density of a galaxy’s environment.

    For Schawinski, the key question is how much information about stellar and galactic processes could be teased out of the data alone. “Let’s erase everything we know about astrophysics,” he said. “To what degree could we rediscover that knowledge, just using the data itself?”

    First, the galaxy images were reduced to their latent space; then, Schawinski could tweak one element of that space in a way that corresponded to a particular change in the galaxy’s environment—the density of its surroundings, for example. Then he could re-generate the galaxy and see what differences turned up. “So now I have a hypothesis-generation machine,” he explained. “I can take a whole bunch of galaxies that are originally in a low-density environment and make them look like they’re in a high-density environment, by this process.” Schawinski, Turp and Zhang saw that, as galaxies go from low- to high-density environments, they become redder in color, and their stars become more centrally concentrated. This matches existing observations about galaxies, Schawinski said. The question is why this is so.

    The next step, Schawinski says, has not yet been automated: “I have to come in as a human, and say, ‘OK, what kind of physics could explain this effect?’” For the process in question, there are two plausible explanations: Perhaps galaxies become redder in high-density environments because they contain more dust, or perhaps they become redder because of a decline in star formation (in other words, their stars tend to be older). With a generative model, both ideas can be put to the test: Elements in the latent space related to dustiness and star formation rates are changed to see how this affects galaxies’ color. “And the answer is clear,” Schawinski said. Redder galaxies are “where the star formation had dropped, not the ones where the dust changed. So we should favor that explanation.”

    4
    Using generative modeling, astrophysicists could investigate how galaxies change when they go from low-density regions of the cosmos to high-density regions, and what physical processes are responsible for these changes. K. Schawinski et al.; doi: 10.1051/0004-6361/201833800

    The approach is related to traditional simulation, but with critical differences. A simulation is “essentially assumption-driven,” Schawinski said. “The approach is to say, ‘I think I know what the underlying physical laws are that give rise to everything that I see in the system.’ So I have a recipe for star formation, I have a recipe for how dark matter behaves, and so on. I put all of my hypotheses in there, and I let the simulation run. And then I ask: Does that look like reality?” What he’s done with generative modeling, he said, is “in some sense, exactly the opposite of a simulation. We don’t know anything; we don’t want to assume anything. We want the data itself to tell us what might be going on.”

    The apparent success of generative modeling in a study like this obviously doesn’t mean that astronomers and graduate students have been made redundant—but it appears to represent a shift in the degree to which learning about astrophysical objects and processes can be achieved by an artificial system that has little more at its electronic fingertips than a vast pool of data. “It’s not fully automated science—but it demonstrates that we’re capable of at least in part building the tools that make the process of science automatic,” Schawinski said.

    Generative modeling is clearly powerful, but whether it truly represents a new approach to science is open to debate. For David Hogg, a cosmologist at New York University and the Flatiron Institute (which, like Quanta, is funded by the Simons Foundation), the technique is impressive but ultimately just a very sophisticated way of extracting patterns from data—which is what astronomers have been doing for centuries.


    In other words, it’s an advanced form of observation plus analysis. Hogg’s own work, like Schawinski’s, leans heavily on AI; he’s been using neural networks to classify stars according to their spectra and to infer other physical attributes of stars using data-driven models. But he sees his work, as well as Schawinski’s, as tried-and-true science. “I don’t think it’s a third way,” he said recently. “I just think we as a community are becoming far more sophisticated about how we use the data. In particular, we are getting much better at comparing data to data. But in my view, my work is still squarely in the observational mode.”

    Hardworking Assistants

    Whether they’re conceptually novel or not, it’s clear that AI and neural networks have come to play a critical role in contemporary astronomy and physics research. At the Heidelberg Institute for Theoretical Studies, the physicist Kai Polsterer heads the astroinformatics group — a team of researchers focused on new, data-centered methods of doing astrophysics. Recently, they’ve been using a machine-learning algorithm to extract redshift information from galaxy data sets, a previously arduous task.

    Polsterer sees these new AI-based systems as “hardworking assistants” that can comb through data for hours on end without getting bored or complaining about the working conditions. These systems can do all the tedious grunt work, he said, leaving you “to do the cool, interesting science on your own.”

    But they’re not perfect. In particular, Polsterer cautions, the algorithms can only do what they’ve been trained to do. The system is “agnostic” regarding the input. Give it a galaxy, and the software can estimate its redshift and its age — but feed that same system a selfie, or a picture of a rotting fish, and it will output a (very wrong) age for that, too. In the end, oversight by a human scientist remains essential, he said. “It comes back to you, the researcher. You’re the one in charge of doing the interpretation.”

    For his part, Nord, at Fermilab, cautions that it’s crucial that neural networks deliver not only results, but also error bars to go along with them, as every undergraduate is trained to do. In science, if you make a measurement and don’t report an estimate of the associated error, no one will take the results seriously, he said.

    Like many AI researchers, Nord is also concerned about the impenetrability of results produced by neural networks; often, a system delivers an answer without offering a clear picture of how that result was obtained.

    Yet not everyone feels that a lack of transparency is necessarily a problem. Lenka Zdeborová, a researcher at the Institute of Theoretical Physics at CEA Saclay in France, points out that human intuitions are often equally impenetrable. You look at a photograph and instantly recognize a cat—“but you don’t know how you know,” she said. “Your own brain is in some sense a black box.”

    It’s not only astrophysicists and cosmologists who are migrating toward AI-fueled, data-driven science. Quantum physicists like Roger Melko of the Perimeter Institute for Theoretical Physics and the University of Waterloo in Ontario have used neural networks to solve some of the toughest and most important problems in that field, such as how to represent the mathematical “wave function” describing a many-particle system.

    Perimeter Institute in Waterloo, Canada


    AI is essential because of what Melko calls “the exponential curse of dimensionality.” That is, the possibilities for the form of a wave function grow exponentially with the number of particles in the system it describes. The difficulty is similar to trying to work out the best move in a game like chess or Go: You try to peer ahead to the next move, imagining what your opponent will play, and then choose the best response, but with each move, the number of possibilities proliferates.

    Of course, AI systems have mastered both of these games—chess, decades ago, and Go in 2016, when an AI system called AlphaGo defeated a top human player. They are similarly suited to problems in quantum physics, Melko says.

    The Mind of the Machine

    Whether Schawinski is right in claiming that he’s found a “third way” of doing science, or whether, as Hogg says, it’s merely traditional observation and data analysis “on steroids,” it’s clear AI is changing the flavor of scientific discovery, and it’s certainly accelerating it. How far will the AI revolution go in science?

    Occasionally, grand claims are made regarding the achievements of a “robo-scientist.” A decade ago, an AI robot chemist named Adam investigated the genome of baker’s yeast and worked out which genes are responsible for making certain amino acids. (Adam did this by observing strains of yeast that had certain genes missing, and comparing the results to the behavior of strains that had the genes.) Wired’s headline read, “Robot Makes Scientific Discovery All by Itself.”

    More recently, Lee Cronin, a chemist at the University of Glasgow, has been using a robot to randomly mix chemicals, to see what sorts of new compounds are formed.

    Monitoring the reactions in real-time with a mass spectrometer, a nuclear magnetic resonance machine, and an infrared spectrometer, the system eventually learned to predict which combinations would be the most reactive. Even if it doesn’t lead to further discoveries, Cronin has said, the robotic system could allow chemists to speed up their research by about 90 percent.

    Last year, another team of scientists at ETH Zurich used neural networks to deduce physical laws from sets of data. Their system, a sort of robo-Kepler, rediscovered the heliocentric model of the solar system from records of the position of the sun and Mars in the sky, as seen from Earth, and figured out the law of conservation of momentum by observing colliding balls. Since physical laws can often be expressed in more than one way, the researchers wonder if the system might offer new ways—perhaps simpler ways—of thinking about known laws.

    These are all examples of AI kick-starting the process of scientific discovery, though in every case, we can debate just how revolutionary the new approach is. Perhaps most controversial is the question of how much information can be gleaned from data alone—a pressing question in the age of stupendously large (and growing) piles of it. In The Book of Why (2018), the computer scientist Judea Pearl and the science writer Dana Mackenzie assert that data are “profoundly dumb.” Questions about causality “can never be answered from data alone,” they write. “Anytime you see a paper or a study that analyzes the data in a model-free way, you can be certain that the output of the study will merely summarize, and perhaps transform, but not interpret the data.” Schawinski sympathizes with Pearl’s position, but he described the idea of working with “data alone” as “a bit of a straw man.” He’s never claimed to deduce cause and effect that way, he said. “I’m merely saying we can do more with data than we often conventionally do.”

    Another oft-heard argument is that science requires creativity, and that—at least so far—we have no idea how to program that into a machine. (Simply trying everything, like Cronin’s robo-chemist, doesn’t seem especially creative.) “Coming up with a theory, with reasoning, I think demands creativity,” Polsterer said. “Every time you need creativity, you will need a human.” And where does creativity come from? Polsterer suspects it is related to boredom—something that, he says, a machine cannot experience. “To be creative, you have to dislike being bored. And I don’t think a computer will ever feel bored.” On the other hand, words like “creative” and “inspired” have often been used to describe programs like Deep Blue and AlphaGo. And the struggle to describe what goes on inside the “mind” of a machine is mirrored by the difficulty we have in probing our own thought processes.

    Schawinski recently left academia for the private sector; he now runs a startup called Modulos, which employs a number of ETH scientists and, according to its website, works “in the eye of the storm of developments in AI and machine learning.” Whatever obstacles may lie between current AI technology and full-fledged artificial minds, he and other experts feel that machines are poised to do more and more of the work of human scientists. Whether there is a limit remains to be seen.

    “Will it be possible, in the foreseeable future, to build a machine that can discover physics or mathematics that the brightest humans alive are not able to do on their own, using biological hardware?” Schawinski wonders. “Will the future of science eventually necessarily be driven by machines that operate on a level that we can never reach? I don’t know. It’s a good question.”

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 1:10 pm on March 10, 2019 Permalink | Reply
    Tags: A quantum computer would greatly speed up analysis of the collisions hopefully finding evidence of supersymmetry much sooner—or at least allowing us to ditch the theory and move on., And they’ve been waiting for decades. Google is in the race as are IBM Microsoft Intel and a clutch of startups academic groups and the Chinese government., , At the moment researchers spend weeks and months sifting through the debris from proton-proton collisions in the LCH trying to find exotic heavy sister-particles to all our known particles of matter., “This is a marathon” says David Reilly who leads Microsoft’s quantum lab at the University of Sydney Australia. “And it's only 10 minutes into the marathon.”, , , CERN-Future Circular Collider, For CERN the quantum promise could for instance help its scientists find evidence of supersymmetry or SUSY which so far has proven elusive., HL-LHC-High-Luminosity LHC, IBM has steadily been boosting the number of qubits on its quantum computers starting with a meagre 5-qubit computer then 16- and 20-qubit machines and just recently showing off its 50-qubit processor, In a bid to make sense of the impending data deluge some at CERN are turning to the emerging field of quantum computing., In a quantum computer each circuit can have one of two values—either one (on) or zero (off) in binary code; the computer turns the voltage in a circuit on or off to make it work., In theory a quantum computer would process all the states a qubit can have at once and with every qubit added to its memory size its computational power should increase exponentially., Last year physicists from the California Institute of Technology in Pasadena and the University of Southern California managed to replicate the discovery of the Higgs boson found at the LHC in 2012, None of the competing teams have come close to reaching even the first milestone., , , , The quest has now lasted decades and a number of physicists are questioning if the theory behind SUSY is really valid., Traditional computers—be it an Apple Watch or the most powerful supercomputer—rely on tiny silicon transistors that work like on-off switches to encode bits of data., Venture capitalists invested some $250 million in various companies researching quantum computing in 2018 alone., WIRED   

    From WIRED: “Inside the High-Stakes Race to Make Quantum Computers Work” 

    Wired logo

    From WIRED

    03.08.19
    Katia Moskvitch

    1
    View Pictures/Getty Images

    Deep beneath the Franco-Swiss border, the Large Hadron Collider is sleeping.

    LHC

    CERN map


    CERN LHC Tunnel

    CERN LHC particles

    But it won’t be quiet for long. Over the coming years, the world’s largest particle accelerator will be supercharged, increasing the number of proton collisions per second by a factor of two and a half.

    Once the work is complete in 2026, researchers hope to unlock some of the most fundamental questions in the universe. But with the increased power will come a deluge of data the likes of which high-energy physics has never seen before. And, right now, humanity has no way of knowing what the collider might find.

    To understand the scale of the problem, consider this: When it shut down in December 2018, the LHC generated about 300 gigabytes of data every second, adding up to 25 petabytes (PB) annually. For comparison, you’d have to spend 50,000 years listening to music to go through 25 PB of MP3 songs, while the human brain can store memories equivalent to just 2.5 PB of binary data. To make sense of all that information, the LHC data was pumped out to 170 computing centers in 42 countries [http://greybook.cern.ch/]. It was this global collaboration that helped discover the elusive Higgs boson, part of the Higgs field believed to give mass to elementary particles of matter.
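
    The 50,000-year figure roughly checks out if you assume typical 128 kbps MP3 audio; the bitrate is an assumption, not something stated in the article.

```python
# Rough check of the MP3 comparison above, assuming 128 kbps audio.
petabytes = 25
total_bytes = petabytes * 1e15
mp3_bytes_per_s = 128_000 / 8              # 128 kilobits per second
seconds = total_bytes / mp3_bytes_per_s
print(round(seconds / (3600 * 24 * 365)))  # ~49,500 years of listening
```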

    CERN CMS Higgs Event


    CERN ATLAS Higgs Event

    To process the looming data torrent, scientists at the European Organization for Nuclear Research, or CERN, will need 50 to 100 times more computing power than they have at their disposal today. A proposed Future Circular Collider, four times the size of the LHC and 10 times as powerful, would create an impossibly large quantity of data, at least twice as much as the LHC.

    CERN FCC Future Circular Collider map

    In a bid to make sense of the impending data deluge, some at CERN are turning to the emerging field of quantum computing. Powered by the very laws of nature the LHC is probing, such a machine could potentially crunch the expected volume of data in no time at all. What’s more, it would speak the same language as the LHC. While numerous labs around the world are trying to harness the power of quantum computing, it is the future work at CERN that makes it particularly exciting research. There’s just one problem: Right now, there are only prototypes; nobody knows whether it’s actually possible to build a reliable quantum device.

    Traditional computers—be it an Apple Watch or the most powerful supercomputer—rely on tiny silicon transistors that work like on-off switches to encode bits of data.

    ORNL IBM AC922 SUMMIT supercomputer, No.1 on the TOP500. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    Each circuit can have one of two values—either one (on) or zero (off) in binary code; the computer turns the voltage in a circuit on or off to make it work.

    A quantum computer is not limited to this “either/or” way of thinking. Its memory is made up of quantum bits, or qubits—tiny particles of matter like atoms or electrons. And qubits can do “both/and,” meaning that they can be in a superposition of all possible combinations of zeros and ones; they can be all of those states simultaneously.

    For CERN, the quantum promise could, for instance, help its scientists find evidence of supersymmetry, or SUSY, which so far has proven elusive.

    Standard Model of Supersymmetry via DESY

    At the moment, researchers spend weeks and months sifting through the debris from proton-proton collisions in the LHC, trying to find exotic, heavy sister-particles to all our known particles of matter. The quest has now lasted decades, and a number of physicists are questioning if the theory behind SUSY is really valid. A quantum computer would greatly speed up analysis of the collisions, hopefully finding evidence of supersymmetry much sooner—or at least allowing us to ditch the theory and move on.

    A quantum device might also help scientists understand the evolution of the early universe, the first few minutes after the Big Bang. Physicists are pretty confident that back then, our universe was nothing but a strange soup of subatomic particles called quarks and gluons. To understand how this quark-gluon plasma has evolved into the universe we have today, researchers simulate the conditions of the infant universe and then test their models at the LHC, with multiple collisions. Performing a simulation on a quantum computer, governed by the same laws that govern the very particles that the LHC is smashing together, could lead to a much more accurate model to test.

    Beyond pure science, banks, pharmaceutical companies, and governments are also waiting to get their hands on computing power that could be tens or even hundreds of times greater than that of any traditional computer.

    And they’ve been waiting for decades. Google is in the race, as are IBM, Microsoft, Intel and a clutch of startups, academic groups, and the Chinese government. The stakes are incredibly high. Last October, the European Union pledged to give $1 billion to over 5,000 European quantum technology researchers over the next decade, while venture capitalists invested some $250 million in various companies researching quantum computing in 2018 alone. “This is a marathon,” says David Reilly, who leads Microsoft’s quantum lab at the University of Sydney, Australia. “And it’s only 10 minutes into the marathon.”

    Despite the hype surrounding quantum computing and the media frenzy triggered by every announcement of a new qubit record, none of the competing teams have come close to reaching even the first milestone, fancily called quantum supremacy—the moment when a quantum computer performs at least one specific task better than a standard computer. Any kind of task, even if it is totally artificial and pointless. There are plenty of rumors in the quantum community that Google may be close, although if true, it would give the company bragging rights at best, says Michael Biercuk, a physicist at the University of Sydney and founder of quantum startup Q-CTRL. “It would be a bit of a gimmick—an artificial goal,” says Reilly. “It’s like concocting some mathematical problem that really doesn’t have an obvious impact on the world just to say that a quantum computer can solve it.”

    That’s because the first real checkpoint in this race is much further away. Called quantum advantage, it would see a quantum computer outperform normal computers on a truly useful task. (Some researchers use the terms quantum supremacy and quantum advantage interchangeably.) And then there is the finish line, the creation of a universal quantum computer. The hope is that it would deliver a computational nirvana with the ability to perform a broad range of incredibly complex tasks. At stake is the design of new molecules for life-saving drugs, helping banks to adjust the riskiness of their investment portfolios, a way to break all current cryptography and develop new, stronger systems, and for scientists at CERN, a way to glimpse the universe as it was just moments after the Big Bang.

    Slowly but surely, work is already underway. Federico Carminati, a physicist at CERN, admits that today’s quantum computers wouldn’t give researchers anything more than classical machines, but, undeterred, he’s started tinkering with IBM’s prototype quantum device via the cloud while waiting for the technology to mature. It’s the latest baby step in the quantum marathon. The deal between CERN and IBM was struck in November last year at an industry workshop organized by the research organization.

    Set up to exchange ideas and discuss potential collaborations, the event had CERN’s spacious auditorium packed to the brim with researchers from Google, IBM, Intel, D-Wave, Rigetti, and Microsoft. Google detailed its tests of Bristlecone, a 72-qubit machine. Rigetti was touting its work on a 128-qubit system. Intel showed that it was in close pursuit with 49 qubits. For IBM, physicist Ivano Tavernelli took to the stage to explain the company’s progress.

    IBM has steadily been boosting the number of qubits on its quantum computers, starting with a meagre 5-qubit computer, then 16- and 20-qubit machines, and just recently showing off its 50-qubit processor.

    IBM iconic image of Quantum computer

    Carminati listened to Tavernelli, intrigued, and during a much needed coffee break approached him for a chat. A few minutes later, CERN had added a quantum computer to its impressive technology arsenal. CERN researchers are now starting to develop entirely new algorithms and computing models, aiming to grow together with the device. “A fundamental part of this process is to build a solid relationship with the technology providers,” says Carminati. “These are our first steps in quantum computing, but even if we are coming relatively late into the game, we are bringing unique expertise in many fields. We are experts in quantum mechanics, which is at the base of quantum computing.”

    The attraction of quantum devices is obvious. Take standard computers. The prediction by former Intel CEO Gordon Moore in 1965 that the number of components in an integrated circuit would double roughly every two years has held true for more than half a century. But many believe that Moore’s law is about to hit the limits of physics. Since the 1980s, however, researchers have been pondering an alternative. The idea was popularized by Richard Feynman, an American physicist at Caltech in Pasadena. During a lecture in 1981, he lamented that computers could not really simulate what was happening at a subatomic level, with tricky particles like electrons and photons that behave like waves but also dare to exist in two states at once, a phenomenon known as quantum superposition.

    Feynman proposed to build a machine that could. “I’m not happy with all the analyses that go with just the classical theory, because nature isn’t classical, dammit,” he told the audience back in 1981. “And if you want to make a simulation of nature, you’d better make it quantum mechanical, and by golly it’s a wonderful problem, because it doesn’t look so easy.”

    And so the quantum race began. Qubits can be made in different ways, but the rule is that two qubits can be both in state A, both in state B, one in state A and one in state B, or vice versa, so there are four probabilities in total. And you won’t know what state a qubit is in until you measure it and the qubit is yanked out of its quantum world of probabilities into our mundane physical reality.

    In theory, a quantum computer would process all the states a qubit can have at once, and with every qubit added to its memory size, its computational power should increase exponentially. So, for three qubits, there are eight states to work with simultaneously, for four, 16; for 10, 1,024; and for 20, a whopping 1,048,576 states. You don’t need a lot of qubits to quickly surpass the memory banks of the world’s most powerful modern supercomputers—meaning that for specific tasks, a quantum computer could find a solution much faster than any regular computer ever would. Add to this another crucial concept of quantum mechanics: entanglement. It means that qubits can be linked into a single quantum system, where operating on one affects the rest of the system. This way, the computer can harness the processing power of both simultaneously, massively increasing its computational ability.
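
    The bookkeeping behind those numbers is just powers of two, as a quick check shows.

```python
# Number of basis states an n-qubit register can hold in superposition.
for n in (3, 4, 10, 20, 50):
    print(n, 2 ** n)  # 8, 16, 1024, 1048576, 1125899906842624
```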

    While a number of companies and labs are competing in the quantum marathon, many are running their own races, taking different approaches. One device has even been used by a team of researchers to analyze CERN data, albeit not at CERN. Last year, physicists from the California Institute of Technology in Pasadena and the University of Southern California managed to replicate the discovery of the Higgs boson, found at the LHC in 2012, by sifting through the collider’s troves of data using a quantum computer manufactured by D-Wave, a Canadian firm based in Burnaby, British Columbia. The findings didn’t arrive any quicker than on a traditional computer, but, crucially, the research showed a quantum machine could do the work.

    One of the oldest runners in the quantum race, D-Wave announced back in 2007 that it had built a fully functioning, commercially available 16-qubit quantum computer prototype—a claim that’s controversial to this day. D-Wave focuses on a technology called quantum annealing, based on the natural tendency of real-world quantum systems to find low-energy states (a bit like a spinning top that inevitably will fall over). A D-Wave quantum computer imagines the possible solutions of a problem as a landscape of peaks and valleys; each coordinate represents a possible solution and its elevation represents its energy. Annealing allows you to set up the problem, and then let the system fall into the answer—in about 20 milliseconds. As it does so, it can tunnel through the peaks as it searches for the lowest valleys. It finds the lowest point in the vast landscape of solutions, which corresponds to the best possible outcome—although it does not attempt to fully correct for any errors, inevitable in quantum computation. D-Wave is now working on a prototype of a universal annealing quantum computer, says Alan Baratz, the company’s chief product officer.
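
    The “landscape of peaks and valleys” picture is easy to play with on an ordinary computer. The toy below runs classical simulated annealing, a conventional cousin of what D-Wave’s hardware does physically, on a made-up 1-D energy function; the landscape, cooling schedule, and step sizes are all illustrative assumptions.

```python
import math, random

def energy(x):
    # Hypothetical landscape: a broad bowl with ripples (many local minima).
    return 0.1 * x**2 + math.sin(3 * x)

x = random.uniform(-10, 10)       # start from a random candidate solution
temperature = 5.0
while temperature > 1e-3:
    candidate = x + random.uniform(-1, 1)  # propose a nearby solution
    delta = energy(candidate) - energy(x)
    # Always accept downhill moves; sometimes accept uphill ones while "hot",
    # which lets the search climb out of shallow valleys early on.
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
    temperature *= 0.999                   # cool slowly
print(round(x, 2), round(energy(x), 3))    # near the lowest valley (x ~ -0.5)
```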

    Apart from D-Wave’s quantum annealing, there are three other main approaches to try and bend the quantum world to our whim: integrated circuits, topological qubits and ions trapped with lasers. CERN is placing high hopes on the first method but is closely watching other efforts too.

    IBM, whose computer Carminati has just started using, as well as Google and Intel, all make quantum chips with integrated circuits—quantum gates—that are superconducting, a state when certain metals conduct electricity with zero resistance. Each quantum gate holds a pair of very fragile qubits. Any noise will disrupt them and introduce errors—and in the quantum world, noise is anything from temperature fluctuations to electromagnetic and sound waves to physical vibrations.

    To isolate the chip from the outside world as much as possible and get the circuits to exhibit quantum mechanical effects, it needs to be supercooled to extremely low temperatures. At the IBM quantum lab in Zurich, the chip is housed in a white tank—a cryostat—suspended from the ceiling. The temperature inside the tank is a steady 10 millikelvin or –273 degrees Celsius, a fraction above absolute zero and colder than outer space. But even this isn’t enough.

    Just working with the quantum chip, when scientists manipulate the qubits, causes noise. “The outside world is continually interacting with our quantum hardware, damaging the information we are trying to process,” says physicist John Preskill at the California Institute of Technology, who in 2012 coined the term quantum supremacy. It’s impossible to get rid of the noise completely, so researchers are trying to suppress it as much as possible, hence the ultracold temperatures to achieve at least some stability and allow more time for quantum computations.

    “My job is to extend the lifetime of qubits, and we’ve got four of them to play with,” says Matthias Mergenthaler, a postdoctoral researcher from Oxford University working at IBM’s Zurich lab. That doesn’t sound like a lot, but, he explains, it’s not so much the number of qubits that counts but their quality, meaning qubits with as low a noise level as possible, to ensure they last as long as possible in superposition and allow the machine to compute. And it’s here, in the fiddly world of noise reduction, that quantum computing hits up against one of its biggest challenges. Right now, the device you’re reading this on probably performs at a level similar to that of a quantum computer with 30 noisy qubits. But if you can reduce the noise, then the quantum computer is many times more powerful.

    Once the noise is reduced, researchers try to correct any remaining errors with the help of special error-correcting algorithms, run on a classical computer. The problem is, such error correction works qubit by qubit, so the more qubits there are, the more errors the system has to cope with. Say a computer makes an error once every 1,000 computational steps; it doesn’t sound like much, but after 1,000 or so operations, the program will output incorrect results. To be able to achieve meaningful computations and surpass standard computers, a quantum machine has to have about 1,000 relatively low-noise qubits whose errors are corrected as thoroughly as possible. When you put them all together, these 1,000 qubits will make up what researchers call a logical qubit. None yet exist—so far, the best that prototype quantum devices have achieved is error correction for up to 10 qubits. That’s why these prototypes are called noisy intermediate-scale quantum computers (NISQ), a term also coined by Preskill in 2017.
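
    That back-of-the-envelope claim follows from assuming independent errors at a fixed rate, as a two-line calculation shows.

```python
# Chance a run of N steps finishes error-free, at 1 error per 1,000 steps.
p_error = 1 / 1000
for n in (100, 1000, 5000):
    print(n, round((1 - p_error) ** n, 3))  # 0.905, 0.368, 0.007
```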

    For Carminati, it’s clear the technology isn’t ready yet. But that isn’t really an issue. At CERN the challenge is to be ready to unlock the power of quantum computers when and if the hardware becomes available. “One exciting possibility will be to perform very, very accurate simulations of quantum systems with a quantum computer—which in itself is a quantum system,” he says. “Other groundbreaking opportunities will come from the blend of quantum computing and artificial intelligence to analyze big data, a very ambitious proposition at the moment, but central to our needs.”

    But some physicists think NISQ machines will stay just that—noisy—forever. Gil Kalai, a professor at Yale University, says that error correcting and noise suppression will never be good enough to allow any kind of useful quantum computation. And it’s not even due to technology, he says, but to the fundamentals of quantum mechanics. Interacting systems have a tendency for errors to be connected, or correlated, he says, meaning errors will affect many qubits simultaneously. Because of that, it simply won’t be possible to create error-correcting codes that keep noise levels low enough for a quantum computer with the required large number of qubits.

    “My analysis shows that noisy quantum computers with a few dozen qubits deliver such primitive computational power that it will simply not be possible to use them as the building blocks we need to build quantum computers on a wider scale,” he says. Among scientists, such skepticism is hotly debated. The blogs of Kalai and fellow quantum skeptics are forums for lively discussion, as was a recent much-shared article titled “The Case Against Quantum Computing”—followed by its rebuttal, “The Case Against the Case Against Quantum Computing.”

    For now, the quantum critics are in a minority. “Provided the qubits we can already correct keep their form and size as we scale, we should be okay,” says Ray Laflamme, a physicist at the University of Waterloo in Ontario, Canada. The crucial thing to watch out for right now is not whether scientists can reach 50, 72, or 128 qubits, but whether scaling quantum computers to this size significantly increases the overall rate of error.

    3
    The Quantum Nano Centre in Canada is one of numerous big-budget research and development labs focussed on quantum computing. James Brittain/Getty Images

    Others believe that the best way to suppress noise and create logical qubits is by making qubits in a different way. At Microsoft, researchers are developing topological qubits—although its array of quantum labs around the world has yet to create a single one. If it succeeds, these qubits would be much more stable than those made with integrated circuits. Microsoft’s idea is to split a particle—for example an electron—in two, creating Majorana fermion quasi-particles. They were theorized back in 1937, and in 2012 researchers at Delft University of Technology in the Netherlands, working at Microsoft’s condensed matter physics lab, obtained the first experimental evidence of their existence.

    “You will only need one of our qubits for every 1,000 of the other qubits on the market today,” says Chetan Nayak, general manager of quantum hardware at Microsoft. In other words, every single topological qubit would be a logical one from the start. Reilly believes that researching these elusive qubits is worth the effort, despite years with little progress, because if one is created, scaling such a device to thousands of logical qubits would be much easier than with a NISQ machine. “It will be extremely important for us to try out our code and algorithms on different quantum simulators and hardware solutions,” says Carminati. “Sure, no machine is ready for prime time quantum production, but neither are we.”

    Another company Carminati is watching closely is IonQ, a US startup that spun out of the University of Maryland. It uses the third main approach to quantum computing: trapping ions. They are naturally quantum, having superposition effects right from the start and at room temperature, meaning that they don’t have to be supercooled like the integrated circuits of NISQ machines. Each ion is a singular qubit, and researchers trap them with special tiny silicon ion traps and then use lasers to run algorithms by varying the times and intensities at which each tiny laser beam hits the qubits. The beams encode data to the ions and read it out from them by getting each ion to change its electronic states.

    In December, IonQ unveiled its commercial device, capable of hosting 160 ion qubits and performing simple quantum operations on a string of 79 qubits. Still, right now, ion qubits are just as noisy as those made by Google, IBM, and Intel, and neither IonQ nor any other labs around the world experimenting with ions have achieved quantum supremacy.

    As the noise and hype surrounding quantum computers rumbles on, at CERN, the clock is ticking. The collider will wake up in just five years, ever mightier, and all that data will have to be analyzed. A non-noisy, error-corrected quantum computer will then come in quite handy.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 1:54 pm on February 7, 2019 Permalink | Reply
    Tags: , , , , Now You Can Join the Search for Killer Asteroids, , , , WIRED   

    From WIRED: “Now You Can Join the Search for Killer Asteroids” 

    Wired logo

    From WIRED

    02.07.19
    Sarah Scoles

    1
    A Hawaii observatory just put the largest astronomical data trove ever online, making it free and accessible so anyone can hunt for new cosmic phenomena. R. White/STScI/PS1 Science Consortium

    If you want to watch sunrise from the national park at the top of Mount Haleakala, the volcano that makes up around 75 percent of the island of Maui, you have to make a reservation. At 10,023 feet, the summit provides a spectacular—and very popular, ticket-controlled—view.

    2
    Looking into the Haleakalā crater

    Just about a mile down the road from the visitors’ center sits “Science City,” where civilian and military telescopes curl around the road, their domes bubbling up toward the sky. Like the park’s visitors, they’re looking out beyond Earth’s atmosphere—toward the Sun, satellites, asteroids, or distant galaxies. And one of them, called the Panoramic Survey Telescope and Rapid Response System, or Pan-STARRS, just released the biggest digital astro-dataset ever, amounting to 1.6 petabytes, the equivalent of around 500,000 HD movies.
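
    That comparison implies a file size of roughly 3 GB per film, a plausible assumption for HD video.

```python
# What the "500,000 HD movies" comparison implies about file size.
petabytes = 1.6
movies = 500_000
print(petabytes * 1e6 / movies)  # ~3.2 GB per movie (1 PB taken as 1e6 GB)
```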

    Pan-STARRS1 Telescope, U Hawaii, situated at Haleakala Observatories near the summit of Haleakala in Hawaii, USA, altitude 3,052 m (10,013 ft)

    From its start in 2010, Pan-STARRS has been watching the 75 percent of the sky it can see from its perch and recording cosmic states and changes on its 1.4-billion-pixel camera. It even discovered the strange ‘Oumuamua, the interstellar object that a Harvard astronomer has suggested could be an alien spaceship.

    3
    An artist’s rendering of the first recorded visitor to the solar system, ‘Oumuamua.
    Aunt_Spray/Getty Images

    Big surveys like this one, which watch swaths of sky agnostically rather than homing in on specific stuff, represent a big chunk of modern astronomy. They are an efficient, pseudo-egalitarian way to collect data, uncover the unexpected, and allow for discovery long after the lens cap closes. With better computing power, astronomers can see the universe not just as it was and is but also as it’s changing, by comparing, say, how a given part of the sky looks on Tuesday to how it looks on Wednesday. Pan-STARRS’s latest data dump, in particular, gives everyone access to the in-process cosmos, opening up the “time domain” to all earthlings with a good internet connection.
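
    At its simplest, that Tuesday-versus-Wednesday comparison is image differencing: subtract an aligned reference image from a new exposure and flag any pixel that changes by much more than the noise. The snippet below is a bare-bones illustration on synthetic data, not the Pan-STARRS pipeline, which also has to handle alignment, PSF matching, and artifact rejection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two aligned exposures of the same synthetic field, each with sky noise.
tuesday = rng.normal(100.0, 5.0, size=(256, 256))
wednesday = tuesday + rng.normal(0.0, 5.0, size=(256, 256))
wednesday[120, 80] += 200.0          # a new transient appears on Wednesday

diff = wednesday - tuesday
noise = np.std(diff)
candidates = np.argwhere(np.abs(diff) > 5 * noise)   # 5-sigma changes only

# The injected transient at (120, 80) should be the flagged pixel.
print(candidates)
```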

    Pan-STARRS, like all projects, was once just an idea. It started around the turn of this century, when astronomers Nick Kaiser, John Tonry, and Gerry Luppino, from Hawaii’s Institute for Astronomy, suggested that relatively “modest” telescopes—hooked to huge cameras—were the best way to image large skyfields.

    Today, that idea has morphed into Pan-STARRS, a many-pixeled instrument attached to a 1.8-meter telescope (big optical telescopes may measure around 10 meters). It takes multiple images of each part of the sky to show how it’s changing. Over the course of four years, Pan-STARRS imaged the heavens above 12 times, using five different filters. These pictures may show supernovae flaring up and dimming back down, active galaxies whose centers glare as their black holes digest material, and strange bursts from cataclysmic events. “When you visit the same piece of sky again and again, you can recognize, ‘Oh, this galaxy has a new star in it that was not there when we were there a year or three months ago,’” says Rick White, an astronomer at the Space Telescope Science Institute, which hosts Pan-STARRS’s archive. In this way, Pan-STARRS is a forerunner of the massive Large Synoptic Survey Telescope, or LSST, which will snap 800 panoramic images every evening, with a 3.2-billion-pixel camera, capturing the whole sky twice a week.

    LSST


    LSST Camera, built at SLAC



    LSST telescope, currently under construction on the El Peñón peak at Cerro Pachón Chile, a 2,682-meter-high mountain in Coquimbo Region, in northern Chile, alongside the existing Gemini South and Southern Astrophysical Research Telescopes.

    Plus, by comparing bright dots that move between images, astronomers can uncover closer-by objects, like rocks whose path might sweep uncomfortably close to Earth.
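
    Picking out those moving bright dots is, in outline, a catalog cross-match: a detection in one epoch with no counterpart at the same position in another epoch is a candidate mover. Here is a minimal sketch using astropy’s catalog matching; the positions are invented, and a real asteroid pipeline would also link detections across many epochs and fit orbits.

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

# Detections from two epochs of the same field (RA/Dec in degrees, made up).
epoch1 = SkyCoord(ra=[150.001, 150.010, 150.020] * u.deg,
                  dec=[2.001, 2.010, 2.020] * u.deg)
epoch2 = SkyCoord(ra=[150.001, 150.010, 150.035] * u.deg,
                  dec=[2.001, 2.010, 2.031] * u.deg)

# For each epoch-2 detection, find its nearest epoch-1 neighbour.
idx, sep2d, _ = epoch2.match_to_catalog_sky(epoch1)

# Anything without a close counterpart has moved (or is new).
movers = np.where(sep2d > 2 * u.arcsec)[0]
print(movers)   # -> [2]: the third detection shifted between epochs
```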

    That latter part is not just interesting to scientists, but to the military too. “It’s considered a defense function to find asteroids that might cause us to go extinct,” says White. That’s (at least part of) why the Air Force, which also operates a satellite-tracking system on Haleakala, pushed $60 million into Pan-STARRS’s development. NASA, the state of Hawaii, a consortium of scientists, and some private donations ponied up the rest.

    But when the telescope first got to work, its operations hit some snags. Its initial images were about half as sharp as they should have been, because the system that adjusted the telescope’s mirror to make up for distortions wasn’t working right.

    Also, the Air Force redacted parts of the sky. It used software called “Magic” to detect streaks of light that might be satellites (including the US government’s own). Magic masked those streaks, essentially placing a dead-pixel black bar across that section of sky, “to prevent the determination of any orbital element of the artificial satellite before the images left the [Institute for Astronomy] servers,” according to a recent paper by the Pan-STARRS group. In December 2011, the Air Force “dropped the requirement,” says the article. The magic was gone, and the scientists reprocessed the original raw data, removing the black boxes.

    The first tranche of data, from the world’s most substantial digital sky survey, came in December 2016. It was full of stars, galaxies, space rocks, and strangeness. The telescope and its associated scientists have already found an eponymous comet, crafted a 3D model of the Milky Way’s dust, unearthed way-old active galaxies, and spotted everyone’s favorite probably-not-an-alien-spaceship, ’Oumuamua.

    The real deal, though, entered the world late last month, when astronomers publicly released and put online all the individual snapshots, including auto-generated catalogs of some 800 million objects. With that dataset, astronomers and regular people everywhere (once they’ve read a fair number of help-me files) can check out a patch of sky and see how it evolved as time marched on. The curious can do more of the “time domain” science Pan-STARRS was made for: catching explosions, watching rocks, and squinting at unexplained bursts.
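
    For anyone who wants to poke at those catalogs directly, the archive can be queried programmatically. The sketch below uses astroquery’s MAST interface; treat the keyword values (catalog name, data release, table) as assumptions to check against the MAST documentation rather than a verified recipe.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord
from astroquery.mast import Catalogs

# An arbitrary patch of sky within Pan-STARRS coverage (the Crab Nebula).
position = SkyCoord(ra=83.633 * u.deg, dec=22.014 * u.deg)

# Keyword values below are assumptions to verify against the MAST docs.
sources = Catalogs.query_region(position,
                                radius=0.05 * u.deg,
                                catalog="Panstarrs",
                                data_release="dr2",
                                table="mean")

print(len(sources))           # number of cataloged objects in the cone
print(sources.colnames[:10])  # peek at the available columns
```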

    Pan-STARRS might never have gotten its observations online if NASA hadn’t seen its own future in the observatory’s massive data pileup. That 1.6-petabyte archive is now housed at the Space Telescope Science Institute, in Maryland, in a repository called the Mikulski Archive for Space Telescopes. The Institute is also the home of bytes from Hubble, Kepler, GALEX, and 15 other missions, mostly belonging to NASA. “At the beginning they didn’t have any commitment to release the data publicly,” says White. “It’s such a large quantity they didn’t think they could manage to do it.” The Institute, though, welcomed this outsider data in part so it could learn how to deal with such huge quantities.

    The hope is that Pan-STARRS’s freely available data will make a big contribution to astronomy. Just look at the discoveries people publish using Hubble data, says White. “The majority of papers being published are from archival data, by scientists that have no connection to the original observations,” he says. That, he believes, will hold true for Pan-STARRS too.

    But surveys are beautiful not just because they can be shared online. They’re also A+ because their observations aren’t narrow. In much of astronomy, scientists look at specific objects in specific ways at specific times. Maybe they zoom in on the magnetic field of pulsar J1745–2900, or the hydrogen gas in the farthest reaches of the Milky Way’s Perseus arm, or that one alien spaceship rock. Those observations are perfect for that individual astronomer to learn about that field, arm, or ship—but they’re not as great for anything or anyone else. Surveys, on the other hand, serve everyone.

    “The Sloan Digital Sky Survey set the standard for these huge survey projects,” says White. Sloan, which started operations in 2000, is on its fourth iteration, collecting light with telescopes at Apache Point Observatory in New Mexico and Las Campanas Observatory in Northern Chile.

    SDSS 2.5 meter Telescope at Apache Point Observatory, near Sunspot NM, USA, Altitude 2,788 meters (9,147 ft)

    Universe map Sloan Digital Sky Survey (SDSS) 2dF Galaxy Redshift Survey

    Carnegie Las Campanas Observatory, in the southern Atacama Desert of Chile in the Atacama Region, approximately 100 kilometres (62 mi) northeast of the city of La Serena, near the southern end of the desert and over 2,500 m (8,200 ft) high

    From the early universe to the modern state of the Milky Way’s union, Sloan data has painted a full-on portrait of the universe that, like those creepy Renaissance portraits, will stick around for years to come.

    Over in a different part of New Mexico, on the high Plains of San Agustin, radio astronomers recently set the Very Large Array’s sights on a new survey. Having started in 2017, the Very Large Array Sky Survey is still at the beginning of its seven years of operation.

    NRAO/Karl G. Jansky Expanded Very Large Array, on the Plains of San Agustin fifty miles west of Socorro, NM, USA, at an elevation of 6970 ft (2124 m)

    But astronomers don’t have to wait for it to finish its observations, as happened with the first Pan-STARRS survey. “Within several days of the data coming off the telescope, the images are available to everybody,” says Brian Kent, who, since 2012, has worked on the software that processes the data. Which is no small task: For every four hours of skywatching, the telescope spits out 300 gigabytes, which the software then has to make useful and usable. “You have to put the collective smarts of the astronomers into the software,” he says.
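
    For scale, 300 gigabytes every four hours is a sustained rate of roughly 20 megabytes per second, before any of the processing Kent describes:

```python
bytes_per_block = 300e9            # 300 GB per four-hour observing block (decimal GB assumed)
seconds_per_block = 4 * 3600
print(bytes_per_block / seconds_per_block / 1e6)   # ~20.8 MB/s sustained
```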

    Kent is excited about the same kinds of time-domain discoveries as White is: about seeing the universe at work rather than as a set of static images. Including the chronological dimension is hot in astronomy right now, from these surveys to future instruments like the LSST and the massive Square Kilometre Array, a radio telescope that will spread across two continents.

    SKA Square Kilometer Array

    SKA Murchison Widefield Array, Boolardy station in outback Western Australia, at the Murchison Radio-astronomy Observatory (MRO)


    Australian Square Kilometre Array Pathfinder (ASKAP) is a radio telescope array located at Murchison Radio-astronomy Observatory (MRO) in the Australian Mid West. ASKAP consists of 36 identical parabolic antennas, each 12 metres in diameter, working together as a single instrument with a total collecting area of approximately 4,000 square metres.

    SKA LOFAR core (“superterp”) near Exloo, Netherlands

    SKA South Africa


    SKA Meerkat telescope, 90 km outside the small Northern Cape town of Carnarvon, SA


    SKA Meerkat telescope, South African design

    Now, as of late January, anyone can access all of those observations, containing phenomena astronomers don’t yet know about and that—hey, who knows—you could beat them to discovering.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 12:38 pm on February 1, 2019 Permalink | Reply
    Tags: A project in eastern Tennessee quietly exceeded the scale of any corporate AI lab. It was run by the US government., , , Nvidia powerful graphics processors, ORNL SUMMIT supercomputer unveiled-world's most powerful in 2018, Summit has a hybrid architecture and each node contains multiple IBM POWER9 CPUs and NVIDIA Volta GPUs all connected together with NVIDIA’s high-speed NVLink, , TensorFlow machine-learning software, The World’s Fastest Supercomputer Breaks an AI Record, WIRED   

    From Oak Ridge National Laboratory via WIRED: “The World’s Fastest Supercomputer Breaks an AI Record” 

    i1

    From Oak Ridge National Laboratory

    via

    Wired logo

    WIRED

    ORNL IBM AC922 SUMMIT supercomputer. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    1
    Oak Ridge National Lab’s Summit supercomputer became the world’s most powerful in 2018, reclaiming that title from China for the first time in five years.
    Carlos Jones/Oak Ridge National Lab

    Along America’s west coast, the world’s most valuable companies are racing to make artificial intelligence smarter. Google and Facebook have boasted of experiments using billions of photos and thousands of high-powered processors. But late last year, a project in eastern Tennessee quietly exceeded the scale of any corporate AI lab. It was run by the US government.

    The record-setting project involved the world’s most powerful supercomputer, Summit, at Oak Ridge National Lab. The machine captured that crown in June last year, reclaiming the title for the US after five years of China topping the list. As part of a climate research project, the giant computer booted up a machine-learning experiment that ran faster than any before.

    Summit, which occupies an area equivalent to two tennis courts, used more than 27,000 powerful graphics processors in the project. It tapped their power to train deep-learning algorithms, the technology driving AI’s frontier, chewing through the exercise at a rate of a billion billion operations per second, a pace known in supercomputing circles as an exaflop.
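
    A back-of-the-envelope division puts that exaflop in per-chip terms. The GPU count below assumes Summit’s full complement of 27,648 Volta GPUs, consistent with the article’s “more than 27,000”; the answer, a few tens of teraflops per GPU, is plausible only because the run used reduced-precision tensor-core arithmetic.

```python
total_ops_per_second = 1e18      # one exaflop, as quoted for the run
gpu_count = 27_648               # assumed: Summit's full GPU complement
print(total_ops_per_second / gpu_count / 1e12)   # ~36 teraflops per GPU
```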

    “Deep learning has never been scaled to such levels of performance before,” says Prabhat, who leads a research group at the National Energy Research Scientific Computing Center at Lawrence Berkeley National Lab. (He goes by one name.) His group collaborated with researchers at Summit’s home base, Oak Ridge National Lab.

    Fittingly, the world’s most powerful computer’s AI workout was focused on one of the world’s largest problems: climate change. Tech companies train algorithms to recognize faces or road signs; the government scientists trained theirs to detect weather patterns like cyclones in the copious output from climate simulations that spool out a century’s worth of three-hour forecasts for Earth’s atmosphere. (It’s unclear how much power the project used or how much carbon that spewed into the air.)

    The Summit experiment has implications for the future of both AI and climate science. The project demonstrates the scientific potential of adapting deep learning to supercomputers, which traditionally simulate physical and chemical processes such as nuclear explosions, black holes, or new materials. It also shows that machine learning can benefit from more computing power—if you can find it—boding well for future breakthroughs.

    “We didn’t know until we did it that it could be done at this scale,” says Rajat Monga, an engineering director at Google. He and other Googlers helped the project by adapting the company’s open-source TensorFlow machine-learning software to Summit’s giant scale.

    Most work on scaling up deep learning has taken place inside the data centers of internet companies, where relatively loosely connected servers tackle problems by splitting them up rather than working as one giant computer. Supercomputers like Summit have a different architecture, with specialized high-speed connections linking their thousands of processors into a single system that can work as a whole. Until recently, there had been relatively little work on adapting machine learning to that kind of hardware.

    Monga says working to adapt TensorFlow to Summit’s scale will also inform Google’s efforts to expand its internal AI systems. Engineers from Nvidia also helped out on the project, by making sure the machine’s tens of thousands of Nvidia graphics processors worked together smoothly.
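
    The Summit code itself isn’t shown in the article, but the general pattern, synchronous data-parallel training in which every worker sees a shard of the data and gradients are averaged across workers each step, can be sketched with stock TensorFlow. This is a generic illustration using tf.distribute on a toy model, not the customized software the Oak Ridge, Google, and Nvidia engineers actually ran at scale.

```python
import tensorflow as tf

# Synchronous data-parallel training sketch (not the Summit code).
# MultiWorkerMirroredStrategy averages gradients across all workers each
# step; on a cluster the worker topology comes from the TF_CONFIG
# environment variable set by the job launcher, while a standalone run
# simply uses the local devices.
strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    # A toy stand-in for a climate-pattern segmentation network.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu",
                               input_shape=(128, 128, 4)),
        tf.keras.layers.Conv2D(1, 1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")

# Synthetic data standing in for simulated atmospheric fields and cyclone masks.
images = tf.random.normal((64, 128, 128, 4))
masks = tf.cast(tf.random.uniform((64, 128, 128, 1)) > 0.95, tf.float32)

model.fit(images, masks, batch_size=8, epochs=1)
```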

    Finding ways to put more computing power behind deep-learning algorithms has played a major part in the technology’s recent ascent. The technology that Siri uses to recognize your voice and Waymo vehicles use to read road signs burst into usefulness in 2012 after researchers adapted it to run on Nvidia graphics processors.

    In an analysis published last May, researchers from OpenAI, a San Francisco research institute cofounded by Elon Musk, calculated that the amount of computing power in the largest publicly disclosed machine-learning experiments has doubled roughly every 3.43 months since 2012; that would mean an 11-fold increase each year. That progression has helped bots from Google parent Alphabet defeat champions at tough board games and videogames, and fueled a big jump in the accuracy of Google’s translation service.
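
    The two numbers in that OpenAI estimate are consistent, since a doubling every 3.43 months compounds to roughly an 11-fold increase over a year:

```python
doubling_period_months = 3.43
print(2 ** (12 / doubling_period_months))   # ~11.3x growth per year
```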

    Google and other companies are now creating new kinds of chips customized for AI to continue that trend. Google has said that “pods” tightly integrating 1,000 of its AI chips—dubbed tensor processing units, or TPUs—can provide 100 petaflops of computing power, one-tenth the rate Summit achieved on its AI experiment.

    The Summit project’s contribution to climate science is to show how giant-scale AI could improve our understanding of future weather patterns. When researchers generate century-long climate predictions, reading the resulting forecast is a challenge. “Imagine you have a YouTube movie that runs for 100 years. There’s no way to find all the cats and dogs in it by hand,” says Prabhat of Lawrence Berkeley. The software typically used to automate the process is imperfect, he says. Summit’s results showed that machine learning can do it better, which should help predict storm impacts such as flooding or physical damage. The Summit results won Oak Ridge, Lawrence Berkeley, and Nvidia researchers the Gordon Bell Prize for boundary-pushing work in supercomputing.

    Running deep learning on supercomputers is a new idea that’s come along at a good moment for climate researchers, says Michael Pritchard, a professor at the University of California, Irvine. The slowing pace of improvements to conventional processors had led engineers to stuff supercomputers with growing numbers of graphics chips, where performance has grown more reliably. “There came a point where you couldn’t keep growing computing power in the normal way,” Pritchard says.

    That shift posed some challenges to conventional simulations, which had to be adapted. It also opened the door to embracing the power of deep learning, which is a natural fit for graphics chips. That could give us a clearer view of our climate’s future. Pritchard’s group showed last year that deep learning can generate more realistic simulations of clouds inside climate forecasts, which could improve forecasts of changing rainfall patterns.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.

    i2

     