Tagged: WIRED

  • richardmitnick 11:56 am on May 12, 2019
    Tags: "A Bizarre Form of Water May Exist All Over the Universe", , , Creating a shock wave that raised the water’s pressure to millions of atmospheres and its temperature to thousands of degrees., Experts say the discovery of superionic ice vindicates computer predictions which could help material physicists craft future substances with bespoke properties., Laboratory for Laser Energetics, , Superionic ice, Superionic ice can now claim the mantle of Ice XVIII., Superionic ice is black and hot. A cube of it would weigh four times as much as a normal one., Superionic ice is either another addition to water’s already cluttered array of avatars or something even stranger., Superionic ice would conduct electricity like a metal with the hydrogens playing the usual role of electrons., The discovery of superionic ice potentially solves decades-old puzzles about the composition of “ice giant” worlds., The fields around the solar system’s other planets seem to be made up of strongly defined north and south poles without much other structure., The magnetic fields emanating from Uranus and Neptune looked lumpier and more complex with more than two poles., The probe Voyager 2 had sailed into the outer solar system uncovering something strange about the magnetic fields of the ice giants Uranus and Neptune., , What giant icy planets like Uranus and Neptune might be made of, WIRED   

    From University of Rochester Laboratory for Laser Energetics via WIRED: “A Bizarre Form of Water May Exist All Over the Universe” 

    U Rochester bloc

    From University of Rochester

    U Rochester’s Laboratory for Laser Energetics

    via

    Wired logo

    WIRED

    The discovery of superionic ice potentially solves the puzzle of what giant icy planets like Uranus and Neptune are made of. They’re now thought to have gaseous, mixed-chemical outer shells, a liquid layer of ionized water below that, a solid layer of superionic ice comprising the bulk of their interiors, and rocky centers. Credit: @iammoteh/Quanta Magazine.

    Recently at the Laboratory for Laser Energetics in Brighton, New York, one of the world’s most powerful lasers blasted a droplet of water, creating a shock wave that raised the water’s pressure to millions of atmospheres and its temperature to thousands of degrees. X-rays that beamed through the droplet in the same fraction of a second offered humanity’s first glimpse of water under those extreme conditions.

    The X-rays revealed that the water inside the shock wave didn’t become a superheated liquid or gas. Paradoxically—but just as physicists squinting at screens in an adjacent room had expected—the atoms froze solid, forming crystalline ice.

    “You hear the shot,” said Marius Millot of Lawrence Livermore National Laboratory in California, and “right away you see that something interesting was happening.” Millot co-led the experiment with Federica Coppari, also of Livermore.

    The findings, published this week in Nature, confirm the existence of “superionic ice,” a new phase of water with bizarre properties. Unlike the familiar ice found in your freezer or at the north pole, superionic ice is black and hot. A cube of it would weigh four times as much as a normal one. It was first theoretically predicted more than 30 years ago, and although it had never been seen until now, scientists think it might be among the most abundant forms of water in the universe.

    Across the solar system, at least, more water probably exists as superionic ice—filling the interiors of Uranus and Neptune—than in any other phase, including the liquid form sloshing in oceans on Earth, Europa and Enceladus. The discovery of superionic ice potentially solves decades-old puzzles about the composition of these “ice giant” worlds.

    Including the hexagonal arrangement of water molecules found in common ice, known as “ice Ih,” scientists had already discovered a bewildering 18 architectures of ice crystal. After ice I, which comes in two forms, Ih and Ic, the rest are numbered II through XVII in order of their discovery. (Yes, there is an Ice IX, but it exists only under contrived conditions, unlike the fictional doomsday substance in Kurt Vonnegut’s novel Cat’s Cradle.)

    Superionic ice can now claim the mantle of Ice XVIII. It’s a new crystal, but with a twist. All the previously known water ices are made of intact water molecules, each with one oxygen atom linked to two hydrogens. But superionic ice, the new measurements confirm, isn’t like that. It exists in a sort of surrealist limbo, part solid, part liquid. Individual water molecules break apart. The oxygen atoms form a cubic lattice, but the hydrogen atoms spill free, flowing like a liquid through the rigid cage of oxygens.

    A time-integrated photograph of the X-ray diffraction experiment at the University of Rochester’s Laboratory for Laser Energetics. Giant lasers focus on a water sample to compress it into the superionic phase. Additional laser beams generate an X-ray flash off an iron foil, allowing the researchers to take a snapshot of the compressed water layer. Credit: Millot, Coppari, Kowaluk (LLNL)

    Experts say the discovery of superionic ice vindicates computer predictions, which could help material physicists craft future substances with bespoke properties. And finding the ice required ultrafast measurements and fine control of temperature and pressure, advancing experimental techniques. “All of this would not have been possible, say, five years ago,” said Christoph Salzmann at University College London, who discovered ices XIII, XIV and XV. “It will have a huge impact, for sure.”

    Depending on whom you ask, superionic ice is either another addition to water’s already cluttered array of avatars or something even stranger. Because its water molecules break apart, said the physicist Livia Bove of France’s National Center for Scientific Research and Pierre and Marie Curie University, it’s not quite a new phase of water. “It’s really a new state of matter,” she said, “which is rather spectacular.”

    Puzzles Put on Ice

    Physicists have been after superionic ice for years—ever since a primitive computer simulation led by Pierfranco Demontis in 1988 predicted [Physical Review Letters] water would take on this strange, almost metal-like form if you pushed it beyond the map of known ice phases.

    Under extreme pressure and heat, the simulations suggested, water molecules break. With the oxygen atoms locked in a cubic lattice, “the hydrogens now start to jump from one position in the crystal to another, and jump again, and jump again,” said Millot. The jumps between lattice sites are so fast that the hydrogen atoms—which are ionized, making them essentially positively charged protons—appear to move like a liquid.

    This suggested superionic ice would conduct electricity, like a metal, with the hydrogens playing the usual role of electrons. Having these loose hydrogen atoms gushing around would also boost the ice’s disorder, or entropy. In turn, that increase in entropy would make this ice much more stable than other kinds of ice crystals, causing its melting point to soar upward.

    But all this was easy to imagine and hard to trust. The first models used simplified physics, hand-waving their way through the quantum nature of real molecules. Later simulations folded in more quantum effects but still sidestepped the actual equations required to describe multiple quantum bodies interacting, which are too computationally difficult to solve. Instead, they relied on approximations, raising the possibility that the whole scenario could be just a mirage in a simulation. Experiments, meanwhile, couldn’t make the requisite pressures without also generating enough heat to melt even this hardy substance.

    As the problem simmered, though, planetary scientists developed their own sneaking suspicions that water might have a superionic ice phase. Right around the time when the phase was first predicted, the probe Voyager 2 had sailed into the outer solar system, uncovering something strange about the magnetic fields of the ice giants Uranus and Neptune.

    The fields around the solar system’s other planets seem to be made up of strongly defined north and south poles, without much other structure. It’s almost as if they have just bar magnets in their centers, aligned with their rotation axes. Planetary scientists chalk this up to “dynamos”: interior regions where conductive fluids rise and swirl as the planet rotates, sprouting massive magnetic fields.

    By contrast, the magnetic fields emanating from Uranus and Neptune looked lumpier and more complex, with more than two poles. They also don’t align as closely to their planets’ rotation. One way to produce this would be to somehow confine the conducting fluid responsible for the dynamo into just a thin outer shell of the planet, instead of letting it reach down into the core.

    But the idea that these planets might have solid cores, which are incapable of generating dynamos, didn’t seem realistic. If you drilled into these ice giants, you would expect to first encounter a layer of ionic water, which would flow, conduct currents and participate in a dynamo. Naively, it seems like even deeper material, at even hotter temperatures, would also be a fluid. “I used to always make jokes that there’s no way the interiors of Uranus and Neptune are actually solid,” said Sabine Stanley at Johns Hopkins University. “But now it turns out they might actually be.”

    Ice on Blast

    Now, finally, Coppari, Millot and their team have brought the puzzle pieces together.

    In an earlier experiment, published last February [Nature Physics], the physicists built indirect evidence for superionic ice. They squeezed a droplet of room-temperature water between the pointy ends of two cut diamonds. By the time the pressure rose to about a gigapascal, roughly 10 times that at the bottom of the Marianas Trench, the water had transformed into a tetragonal crystal called ice VI. By about 2 gigapascals, it had switched into ice VII, a denser, cubic form transparent to the naked eye that scientists recently discovered also exists in tiny pockets inside natural diamonds.
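
    As a rough sanity check on those numbers, the short Python sketch below converts the quoted pressures into atmospheres and compares them with the pressure at the bottom of the deepest ocean trench. The conversion factor and trench pressure are standard reference values assumed for this check, not figures from the paper.

```python
# Rough unit check for the pressures quoted above (values approximate).
GPA_TO_ATM = 9869.23          # 1 gigapascal in standard atmospheres
trench_pressure_gpa = 0.1086  # roughly 1,086 bar at the bottom of the Mariana Trench

ice_vi_gpa = 1.0   # onset of ice VI in the diamond-anvil experiment
ice_vii_gpa = 2.0  # onset of ice VII

print(f"Ice VI pressure  ~ {ice_vi_gpa * GPA_TO_ATM:,.0f} atm")
print(f"Ice VII pressure ~ {ice_vii_gpa * GPA_TO_ATM:,.0f} atm")
print(f"Ratio to deepest ocean trench: {ice_vi_gpa / trench_pressure_gpa:.0f}x")  # ~10x, as quoted
```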

    Then, using the OMEGA laser at the Laboratory for Laser Energetics, Millot and colleagues targeted the ice VII, still between diamond anvils. As the laser hit the surface of the diamond, it vaporized material upward, effectively rocketing the diamond away in the opposite direction and sending a shock wave through the ice. Millot’s team found their super-pressurized ice melted at around 4,700 degrees Celsius, about as expected for superionic ice, and that it did conduct electricity thanks to the movement of charged protons.

    Federica Coppari, a physicist at Lawrence Livermore National Laboratory, with an x-ray diffraction image plate that she and her colleagues used to discover ice XVIII, also known as superionic ice. Credit: Eugene Kowaluk/Laboratory for Laser Energetics

    With those predictions about superionic ice’s bulk properties settled, the new study led by Coppari and Millot took the next step of confirming its structure. “If you really want to prove that something is crystalline, then you need X-ray diffraction,” Salzmann said.

    Their new experiment skipped ices VI and VII altogether. Instead, the team simply smashed water with laser blasts between diamond anvils. Billionths of a second later, as shock waves rippled through and the water began crystallizing into nanometer-size ice cubes, the scientists used 16 more laser beams to vaporize a thin sliver of iron next to the sample. The resulting hot plasma flooded the crystallizing water with X-rays, which then diffracted from the ice crystals, allowing the team to discern their structure.

    Atoms in the water had rearranged into the long-predicted but never-before-seen architecture, Ice XVIII: a cubic lattice with oxygen atoms at every corner and the center of each face. “It’s quite a breakthrough,” Coppari said.
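
    For readers curious how a diffraction pattern pins down such a lattice, here is a minimal Python sketch of Bragg’s law applied to a face-centered-cubic crystal. The lattice constant and X-ray wavelength below are placeholder values chosen for illustration, not the numbers measured in the experiment.

```python
import numpy as np

# Illustrative sketch: Bragg's law (lambda = 2 * d * sin(theta)) applied to a
# face-centered-cubic (fcc) lattice like the oxygen sublattice described above.
a = 3.3            # hypothetical fcc lattice constant, angstroms
wavelength = 1.85  # hypothetical X-ray wavelength, angstroms

def allowed_fcc(h, k, l):
    """The fcc structure factor is nonzero only when h, k, l are all even or all odd."""
    return len({h % 2, k % 2, l % 2}) == 1

for h, k, l in [(1, 1, 1), (2, 0, 0), (2, 2, 0), (3, 1, 1)]:
    if not allowed_fcc(h, k, l):
        continue
    d = a / np.sqrt(h**2 + k**2 + l**2)                   # interplanar spacing
    theta = np.degrees(np.arcsin(wavelength / (2 * d)))   # Bragg angle
    print(f"({h}{k}{l}): d = {d:.2f} A, 2-theta = {2 * theta:.1f} deg")
```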

    “The fact that the existence of this phase is not an artifact of quantum molecular dynamics simulations, but is real—that’s very comforting,” Bove said.

    And this kind of successful cross-check between simulations and real superionic ice suggests the ultimate “dream” of material physics researchers might soon be within reach. “You tell me what properties you want in a material, and we’ll go to the computer and figure out theoretically what material and what kind of crystal structure you would need,” said Raymond Jeanloz, a member of the discovery team based at the University of California, Berkeley. “The community at large is getting close.”

    The new analyses also hint that although superionic ice does conduct some electricity, it’s a mushy solid. It would flow over time, but not truly churn. Inside Uranus and Neptune, then, fluid layers might stop about 8,000 kilometers down into the planet, where an enormous mantle of sluggish, superionic ice like the one Millot’s team produced begins. That would limit most dynamo action to shallower depths, accounting for the planets’ unusual fields.

    Other planets and moons in the solar system likely don’t host the right interior sweet spots of temperature and pressure to allow for superionic ice. But many ice giant-sized exoplanets might, suggesting the substance could be common inside icy worlds throughout the galaxy.

    Of course, though, no real planet contains just water. The ice giants in our solar system also mix in chemical species like methane and ammonia. The extent to which superionic behavior actually occurs in nature is “going to depend on whether these phases still exist when we mix water with other materials,” Stanley said. So far, that isn’t clear, although other researchers have argued [Science] superionic ammonia should also exist.

    Aside from extending their research to other materials, the team also hopes to keep zeroing in on the strange, almost paradoxical duality of their superionic crystals. Just capturing the lattice of oxygen atoms “is clearly the most challenging experiment I have ever done,” said Millot. They haven’t yet seen the ghostly, interstitial flow of protons through the lattice. “Technologically, we are not there yet,” Coppari said, “but the field is growing very fast.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    University of Rochester Laboratory for Laser Energetics

    The Laboratory for Laser Energetics (LLE) is a scientific research facility which is part of the University of Rochester’s south campus, located in Brighton, New York. The lab was established in 1970, and its operations since then have been funded jointly, mainly by the United States Department of Energy, the University of Rochester, and the New York State government. The Laser Lab was commissioned to serve as a center for investigations of high-energy physics, specifically those involving the interaction of extremely intense laser radiation with matter. Many types of scientific experiments are performed at the facility, with a strong emphasis on inertial confinement, direct drive, laser-induced fusion, fundamental plasma physics, and astrophysics using OMEGA. In June 1995, OMEGA became the world’s highest-energy ultraviolet laser. The lab shares its building with the Center for Optoelectronics and Imaging and the Center for Optics Manufacturing. The Robert L. Sproull Center for Ultra High Intensity Laser Research was opened in 2005 and houses the OMEGA EP laser, which was completed in May 2008.

    The laboratory is unique in conducting big science on a university campus. More than 180 Ph.D.s have been awarded for research done at the LLE. During summer months the lab sponsors a program for high school students which involves local-area high school juniors in the research being done at the laboratory. Most of the projects are done on current research that is led by senior scientists at the lab.

    U Rochester Campus

    The University of Rochester is one of the country’s top-tier research universities. Our 158 buildings house more than 200 academic majors, more than 2,000 faculty and instructional staff, and some 10,500 students—approximately half of whom are women.

    Learning at the University of Rochester is also on a very personal scale. Rochester remains one of the smallest and most collegiate among top research universities, with smaller classes, a low 10:1 student to teacher ratio, and increased interactions with faculty.

     
  • richardmitnick 2:32 pm on April 17, 2019
    Tags: Vineyard Wind project, Wind Power finally catches hold, WIRED

    From WIRED: “Offshore Wind Farms Are Spinning Up in the US—At Last” 

    Wired logo

    From WIRED

    Christopher Furlong/Getty Images

    On June 1, the Pilgrim nuclear plant in Massachusetts will shut down, a victim of rising costs and a technology that is struggling to remain economically viable in the United States. But the electricity generated by the aging nuclear station soon will be replaced by another carbon-free source: a fleet of 84 offshore wind turbines rising nearly 650 feet above the ocean’s surface.

    The developers of the Vineyard Wind project say their turbines—anchored about 14 miles south of Martha’s Vineyard—will generate 800 megawatts of electricity once they start spinning sometime in 2022. That’s equivalent to the output of a large coal-fired power plant and more than Pilgrim’s 640 megawatts.

    “Offshore wind has arrived,” says Erich Stephens, chief development officer for Vineyard Wind, a developer based in New Bedford, Massachusetts, that is backed by Danish and Spanish wind energy firms. He explains that the costs have fallen enough to make developers take it seriously. “Not only is wind power less expensive, but you can place the turbines in deeper water, and do it less expensively than before.”

    Last week, the Massachusetts Department of Public Utilities awarded Vineyard Wind a 20-year contract to provide electricity at 8.9 cents per kilowatt-hour. That’s about a third the cost of other renewables (such as Canadian hydropower), and it’s estimated that ratepayers will save $1.3 billion in energy costs over the life of the deal.

    Can offshore wind pick up the slack from Pilgrim and other fading nukes? Its proponents think so, as long as they can respond to concerns about potential harm to fisheries and marine life, as well as successfully connect to the existing power grid on land. Wind power is nothing new in the US, with 56,000 turbines in 41 states, Guam, and Puerto Rico producing a total of 96,433 MW nationwide. But wind farms located offshore, where the wind blows steady and strong, unobstructed by buildings or mountains, have yet to start cranking.

    In recent years, however, the turbines have grown bigger and the towers taller, able to generate three times more power than they could five years ago. The technology needed to install them farther away from shore has improved as well, making them more palatable to nearby communities. When it comes to wind turbines, bigger is better, says David Hattery, practice group coordinator for power at K&L Gates, a Seattle law firm that represents wind power manufacturers and developers. Bigger turbines and blades perform better under the forces generated by strong ocean winds. “Turbulence wears out bearings and gear boxes,” Hattery said. “What you don’t want offshore is a turbine that breaks down. It is very expensive to fix it.”

    In the race to get big, Vineyard Wind plans to use a 9.5 MW turbine with a 174-meter diameter rotor, a giant by the standard of most wind farms. But GE last year unveiled an even bigger turbine, the 12 MW Haliade-X. When complete in 2021, each turbine will have a 220-meter wingspan (tip to tip) and be able to generate enough electricity to light 16,000 European homes. GE is building these beasts for offshore farms in Europe, where wind power now generates 14 percent of the continent’s electricity (compared to 6.5 percent in the US). “We feel that we have just the right machine at just the right time,” says John Lavelle, CEO of GE Renewable Energy’s Offshore Wind business.

    US officials say there’s a lot of room for offshore wind to grow in US coastal waters, with the potential to generate more than 2,000 gigawatts of capacity, or 7,200 terawatt-hours of electricity generation per year, according to the US Department of Energy. That’s nearly double the nation’s current electricity use. Even if only 1 percent of that potential is captured, nearly 6.5 million homes could be powered by offshore wind energy.
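
    The DOE figures above imply a couple of numbers worth checking. The short sketch below derives the capacity factor and per-household consumption hidden in those statistics; both derived values are this check’s own arithmetic, not quantities stated in the article.

```python
# Back-of-the-envelope check of the DOE offshore-wind figures quoted above.
capacity_gw = 2_000        # potential offshore capacity, GW
generation_twh = 7_200     # potential annual generation, TWh
hours_per_year = 8_760

capacity_factor = generation_twh * 1e12 / (capacity_gw * 1e9 * hours_per_year)
print(f"Implied capacity factor: {capacity_factor:.0%}")         # ~41%

homes_powered = 6.5e6      # homes served by 1% of the potential, per DOE
per_home_mwh = 0.01 * generation_twh * 1e6 / homes_powered
print(f"Implied household use: {per_home_mwh:.1f} MWh/year")     # ~11 MWh, a typical US home
```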

    Of course, getting these turbines built and spinning takes years of planning and dozens of federal and state permits. The federal government made things a bit easier in the past five years with new rules governing where to put the turbines. The Bureau of Ocean Energy Management (a division of the Department of Interior) now sets boundaries for offshore leases and accepts bids from commercial enterprises to develop wind farms.

    The first offshore project was a 30 MW, five-turbine wind farm that went live at the end of 2016. Developed by Deepwater Wind, the installation replaced diesel generators that once serviced the resorts of Block Island, Rhode Island. Now there are 15 active proposals for wind farms along the East Coast, and others are in the works for California, Hawaii, South Carolina, and New York.

    By having federal planners determine where to put the turbines, developers hope to avoid the debacle that was Cape Wind. Cape Wind was proposed for Nantucket Sound, a shallow area between Nantucket, Martha’s Vineyard, and Cape Cod. Developers began it with high hopes back in 2001, but pulled the plug in 2017 after years of court battles with local residents, fishermen, and two powerful American families: the Kennedys and the Koch brothers, both of whom could see the turbines from their homes.

    Like an extension cord that won’t reach all the way to the living room, Cape Wind’s developers were stuck in Nantucket Sound because existing undersea cables were limited in length. But new undersea transmission capability means the turbines can be located further offshore, away from beachfront homes, commercial shipping lanes, or whale migration routes.

    Even though cables can stretch further, somebody still has to pay to bring this electricity back on land, says Mark McGranaghan, vice president of integrated grid for the Electric Power Research Institute. McGranaghan says that in Denmark and Germany the governments pay for these connections and for the offshore electrical substations that convert the turbine’s alternating current (AC) to direct current (DC) for long-distance transmission. Here in the US, he predicts these costs will likely have to be paid by utility ratepayers or state taxpayers. “Offshore wind is totally real and we know how to do it,” McGranaghan says. “One of the things that comes up is who pays for the infrastructure to bring the power back.”

    It’s not just money. Offshore wind developers must also be sensitive to neighbors who don’t like power cables coming ashore near their homes, fishermen who fear they will be shut out from fishing grounds, or environmental advocates who worry that offshore wind platform construction will damage sound-sensitive marine mammals like whales and dolphins.

    Still, maybe that’s an easier job than finding a safe place to put all the radioactive waste that keeps piling up around Pilgrim and the nation’s 97 other nuclear reactors.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 9:57 am on April 7, 2019
    Tags: "How Google Is Cramming More Data Into Its New Atlantic Cable", , , Google says the fiber-optic cable it's building across the Atlantic Ocean will be the fastest of its kind. Fiber-optic networks work by sending light over thin strands of glass., Japanese tech giant NEC says it has technology that will enable long-distance undersea cables with 16 fiber-optic pairs., The current growth in new cables is driven less by telcos and more by companies like Google Facebook and Microsoft, Today most long-distance undersea cables contain six or eight fiber-optic pairs., Vijay Vusirikala head of network architecture and optical engineering at Google says the company is already contemplating 24-pair cables., WIRED   

    From WIRED: “How Google Is Cramming More Data Into Its New Atlantic Cable” 

    Wired logo

    From WIRED

    04.05.19
    Klint Finley

    Fiber-optic cable being loaded onto a ship owned by SubCom, which is working with Google to build the world’s fastest undersea data connection. Bill Gallery/SubCom.


    Google says the fiber-optic cable it’s building across the Atlantic Ocean will be the fastest of its kind. When the cable goes live next year, the company estimates it will transmit around 250 terabits per second, fast enough to zap all the contents of the Library of Congress from Virginia to France three times every second. That’s about 56 percent faster than Facebook and Microsoft’s Marea cable, which can transmit about 160 terabits per second between Virginia and Spain.

    Fiber-optic networks work by sending light over thin strands of glass. Fiber-optic cables, which are about the diameter of a garden hose, enclose multiple pairs of these fibers. Google’s new cable is so fast because it carries more fiber pairs. Today, most long-distance undersea cables contain six or eight fiber-optic pairs. Google said Friday that its new cable, dubbed Dunant, is expected to be the first to include 12 pairs, thanks to new technology developed by Google and SubCom, which designs, manufactures, and deploys undersea cables.
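
    The headline numbers above are easy to verify. The snippet below reproduces the “56 percent faster” comparison and also shows the per-fiber-pair throughput implied by the pair counts; the per-pair figure is derived here, not quoted in the article.

```python
# Quick comparison of the capacities quoted above.
dunant_tbps, dunant_pairs = 250, 12
marea_tbps, marea_pairs = 160, 8

speedup = dunant_tbps / marea_tbps - 1
print(f"Dunant vs. Marea: {speedup:.0%} faster")   # ~56%
print(f"Per fiber pair: Dunant {dunant_tbps / dunant_pairs:.1f} Tbps, "
      f"Marea {marea_tbps / marea_pairs:.1f} Tbps")
```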

    Dunant might not be the fastest for long: Japanese tech giant NEC says it has technology that will enable long-distance undersea cables with 16 fiber-optic pairs. And Vijay Vusirikala, head of network architecture and optical engineering at Google, says the company is already contemplating 24-pair cables.

    The surge in intercontinental cables, and their increasing capacity, reflect continual growth in internet traffic. They enable activists to livestream protests to distant countries, help companies buy and sell products around the world, and facilitate international romances. “Many people still believe international telecommunications are conducted by satellite,” says NEC executive Atsushi Kuwahara. “That was true in 1980, but nowadays, 99 percent of international telecommunications is submarine.”

    So much capacity is being added that, for the moment, it’s outstripping demand. Animations featured in a recent New York Times article illustrated the exploding number of undersea cables since 1989. That growth is continuing. Alan Mauldin of the research firm Telegeography says only about 30 percent of the potential capacity of major undersea cable routes is currently in use—and more than 60 new cables are planned to enter service by 2021. That summons memories of the 1990s Dotcom Bubble, when telecoms buried far more fiber in both the ground and the ocean than they would need for years to come.

    A selection of fiber-optic cable products made by SubCom. Brian Smith/SubCom.

    But the current growth in new cables is driven less by telcos and more by companies like Google, Facebook, and Microsoft that crave ever more bandwidth for the streaming video, photos, and other data scuttling between their global data centers. And experts say that as undersea cable technologies improve, it’s not crazy for companies to build newer, faster routes between continents, even with so much fiber already lying idle in the ocean.

    Controlling Their Own Destiny

    Mauldin says that although there’s still lots of capacity available, companies like Google and Facebook prefer to have dedicated capacity for their own use. That’s part of why big tech companies have either invested in new cables through consortia or, in some cases, built their own cables.

    “When we do our network planning, it’s important to know if we’ll have the capacity in the network,” says Google’s Vusirikala. “One way to know is by building our own cables, controlling our own destiny.”

    Another factor is diversification. Having more cables means there are alternate routes for data if a cable breaks or malfunctions. At the same time, more people outside Europe and North America are tapping the internet, often through smartphones. That’s prompted companies to think about new routes, like between North and South America, or between Europe and Africa, says Mike Hollands, an executive at European data center company Interxion. The Marea cable ticks both of those boxes, giving Facebook and Microsoft faster routes to North Africa and the Middle East, while also creating an alternate path to Europe in case one or more of the traditional routes were disrupted by something like an earthquake.

    Cost Per Bit

    There are financial incentives for the tech companies as well. By owning the cables instead of leasing them from telcos, Google and other tech giants can potentially save money in the long term, Mauldin says.

    The cost to build and deploy a new undersea cable isn’t dropping. But as companies find ways to pump more data through these cables more quickly, their value increases.

    There are a few ways to increase the performance of a fiber-optic communications system. One is to increase the energy used to push the data from one end to the other. The catch is that to keep the data signal from degrading, undersea cables need repeaters roughly every 100 kilometers, Vusirikala explains. Those repeaters amplify not just the signal, but any noise introduced along the way, diminishing the value of boosting the energy.

    A rendering of one of SubCom’s specialized Reliance-class cable ships. SubCom.

    You can also increase the amount of data that each fiber pair within a fiber-optic cable can carry. A technique called “dense wavelength division multiplexing” now enables more than 100 wavelengths to be sent along a single fiber pair.

    Or you can pack more fiber pairs into a cable. Traditionally each pair in a fiber-optic cable required two repeater components called “pumps.” The pumps take up space inside the repeater casing, so adding more pumps would require changes to the way undersea cable systems are built, deployed, and maintained, says SubCom CTO Georg Mohs.

    To get around that problem, SubCom and others are using a technique called space-division multiplexing (SDM) to allow four repeater pumps to power four fiber pairs. That will reduce the capacity of each pair, but cutting the required number of pumps in half allows them to add additional pairs that more than make up for it, Mohs says.

    “This had been in our toolkit before,” Mohs says, but like other companies, SubCom has been more focused on adding more wavelengths per fiber pair.

    The result: Cables that can move more data than ever before. That means the total cost per bit of data sent across the cable is lower.
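
    To make the two levers described above concrete, here is a toy capacity model: total throughput is roughly fiber pairs times wavelengths per pair times data rate per wavelength. All the numbers plugged in are illustrative placeholders, not SubCom’s or Google’s actual design parameters.

```python
# Toy capacity model combining DWDM (wavelengths per pair) and SDM (more pairs).
def cable_capacity_tbps(pairs, wavelengths_per_pair, gbps_per_wavelength):
    return pairs * wavelengths_per_pair * gbps_per_wavelength / 1_000

# Conventional design: fewer, individually more powerful pairs.
conventional = cable_capacity_tbps(pairs=8, wavelengths_per_pair=100, gbps_per_wavelength=200)

# SDM-style design: each pair carries somewhat less, but there are more pairs.
sdm_style = cable_capacity_tbps(pairs=12, wavelengths_per_pair=100, gbps_per_wavelength=170)

print(f"Conventional: {conventional:.0f} Tbps, SDM-style: {sdm_style:.0f} Tbps")
```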

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 12:50 pm on March 18, 2019
    Tags: "AI Algorithms Are Now Shockingly Good at Doing Science", , WIRED   

    From Quanta via WIRED: “AI Algorithms Are Now Shockingly Good at Doing Science” 

    Quanta Magazine
    Quanta Magazine

    via

    Wired logo

    From WIRED

    3.17.19
    Dan Falk

    Whether probing the evolution of galaxies or discovering new chemical compounds, algorithms are detecting patterns no humans could have spotted. Rachel Suggs/Quanta Magazine

    No human, or team of humans, could possibly keep up with the avalanche of information produced by many of today’s physics and astronomy experiments. Some of them record terabytes of data every day—and the torrent is only increasing. The Square Kilometer Array, a radio telescope slated to switch on in the mid-2020s, will generate about as much data traffic each year as the entire internet.

    SKA Square Kilometer Array

    The deluge has many scientists turning to artificial intelligence for help. With minimal human input, AI systems such as artificial neural networks—computer-simulated networks of neurons that mimic the function of brains—can plow through mountains of data, highlighting anomalies and detecting patterns that humans could never have spotted.

    Of course, the use of computers to aid in scientific research goes back about 75 years, and the method of manually poring over data in search of meaningful patterns originated millennia earlier. But some scientists are arguing that the latest techniques in machine learning and AI represent a fundamentally new way of doing science. One such approach, known as generative modeling, can help identify the most plausible theory among competing explanations for observational data, based solely on the data, and, importantly, without any preprogrammed knowledge of what physical processes might be at work in the system under study. Proponents of generative modeling see it as novel enough to be considered a potential “third way” of learning about the universe.

    Traditionally, we’ve learned about nature through observation. Think of Johannes Kepler poring over Tycho Brahe’s tables of planetary positions and trying to discern the underlying pattern. (He eventually deduced that planets move in elliptical orbits.) Science has also advanced through simulation. An astronomer might model the movement of the Milky Way and its neighboring galaxy, Andromeda, and predict that they’ll collide in a few billion years. Both observation and simulation help scientists generate hypotheses that can then be tested with further observations. Generative modeling differs from both of these approaches.

    Milkdromeda -Andromeda on the left-Earth’s night sky in 3.75 billion years-NASA

    “It’s basically a third approach, between observation and simulation,” says Kevin Schawinski, an astrophysicist and one of generative modeling’s most enthusiastic proponents, who worked until recently at the Swiss Federal Institute of Technology in Zurich (ETH Zurich). “It’s a different way to attack a problem.”

    Some scientists see generative modeling and other new techniques simply as power tools for doing traditional science. But most agree that AI is having an enormous impact, and that its role in science will only grow. Brian Nord, an astrophysicist at Fermi National Accelerator Laboratory who uses artificial neural networks to study the cosmos, is among those who fear there’s nothing a human scientist does that will be impossible to automate. “It’s a bit of a chilling thought,” he said.


    Discovery by Generation

    Ever since graduate school, Schawinski has been making a name for himself in data-driven science. While working on his doctorate, he faced the task of classifying thousands of galaxies based on their appearance. Because no readily available software existed for the job, he decided to crowdsource it—and so the Galaxy Zoo citizen science project was born.

    Galaxy Zoo via Astrobites

    Beginning in 2007, ordinary computer users helped astronomers by logging their best guesses as to which galaxy belonged in which category, with majority rule typically leading to correct classifications. The project was a success, but, as Schawinski notes, AI has made it obsolete: “Today, a talented scientist with a background in machine learning and access to cloud computing could do the whole thing in an afternoon.”

    Schawinski turned to the powerful new tool of generative modeling in 2016. Essentially, generative modeling asks how likely it is, given condition X, that you’ll observe outcome Y. The approach has proved incredibly potent and versatile. As an example, suppose you feed a generative model a set of images of human faces, with each face labeled with the person’s age. As the computer program combs through these “training data,” it begins to draw a connection between older faces and an increased likelihood of wrinkles. Eventually it can “age” any face that it’s given—that is, it can predict what physical changes a given face of any age is likely to undergo.

    None of these faces is real. The faces in the top row (A) and left-hand column (B) were constructed by a generative adversarial network (GAN) using building-block elements of real faces. The GAN then combined basic features of the faces in A, including their gender, age and face shape, with finer features of faces in B, such as hair color and eye color, to create all the faces in the rest of the grid. NVIDIA

    The best-known generative modeling systems are “generative adversarial networks” (GANs). After adequate exposure to training data, a GAN can repair images that have damaged or missing pixels, or it can make blurry photographs sharp. It learns to infer the missing information by means of a competition (hence the term “adversarial”): One part of the network, known as the generator, generates fake data, while a second part, the discriminator, tries to distinguish fake data from real data. As the program runs, both halves get progressively better. You may have seen some of the hyper-realistic, GAN-produced “faces” that have circulated recently — images of “freakishly realistic people who don’t actually exist,” as one headline put it.
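
    For readers who want to see the generator-versus-discriminator competition in code, below is a deliberately tiny GAN trained on one-dimensional toy data (a Gaussian), written in PyTorch. It is a minimal sketch of the general technique, not the image-scale networks used in the work described here; the network sizes, learning rates, and data are all invented for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n):
    """'Real' data: samples from the Gaussian the generator must learn to imitate."""
    return torch.randn(n, 1) * 0.5 + 3.0

# Generator maps 8-dimensional noise to a single number; discriminator outputs a logit.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Train the discriminator to tell real samples from generated ones.
    real = real_batch(64)
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Train the generator to fool the discriminator.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

samples = G(torch.randn(1000, 8))
print(f"generated mean ~ {samples.mean().item():.2f}, "
      f"std ~ {samples.std().item():.2f} (target: 3.0, 0.5)")
```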

    More broadly, generative modeling takes sets of data (typically images, but not always) and breaks each of them down into a set of basic, abstract building blocks — scientists refer to this as the data’s “latent space.” The algorithm manipulates elements of the latent space to see how this affects the original data, and this helps uncover physical processes that are at work in the system.

    The idea of a latent space is abstract and hard to visualize, but as a rough analogy, think of what your brain might be doing when you try to determine the gender of a human face. Perhaps you notice hairstyle, nose shape, and so on, as well as patterns you can’t easily put into words. The computer program is similarly looking for salient features among data: Though it has no idea what a mustache is or what gender is, if it’s been trained on data sets in which some images are tagged “man” or “woman,” and in which some have a “mustache” tag, it will quickly deduce a connection.

    In a paper published in December in Astronomy & Astrophysics, Schawinski and his ETH Zurich colleagues Dennis Turp and Ce Zhang used generative modeling to investigate the physical changes that galaxies undergo as they evolve. (The software they used treats the latent space somewhat differently from the way a generative adversarial network treats it, so it is not technically a GAN, though similar.) Their model created artificial data sets as a way of testing hypotheses about physical processes. They asked, for instance, how the “quenching” of star formation—a sharp reduction in formation rates—is related to the increasing density of a galaxy’s environment.

    For Schawinski, the key question is how much information about stellar and galactic processes could be teased out of the data alone. “Let’s erase everything we know about astrophysics,” he said. “To what degree could we rediscover that knowledge, just using the data itself?”

    First, the galaxy images were reduced to their latent space; then, Schawinski could tweak one element of that space in a way that corresponded to a particular change in the galaxy’s environment—the density of its surroundings, for example. Then he could re-generate the galaxy and see what differences turned up. “So now I have a hypothesis-generation machine,” he explained. “I can take a whole bunch of galaxies that are originally in a low-density environment and make them look like they’re in a high-density environment, by this process.” Schawinski, Turp and Zhang saw that, as galaxies go from low- to high-density environments, they become redder in color, and their stars become more centrally concentrated. This matches existing observations about galaxies, Schawinski said. The question is why this is so.

    The next step, Schawinski says, has not yet been automated: “I have to come in as a human, and say, ‘OK, what kind of physics could explain this effect?’” For the process in question, there are two plausible explanations: Perhaps galaxies become redder in high-density environments because they contain more dust, or perhaps they become redder because of a decline in star formation (in other words, their stars tend to be older). With a generative model, both ideas can be put to the test: Elements in the latent space related to dustiness and star formation rates are changed to see how this affects galaxies’ color. “And the answer is clear,” Schawinski said. Redder galaxies are “where the star formation had dropped, not the ones where the dust changed. So we should favor that explanation.”
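
    The “tweak one latent variable, regenerate, compare” loop can be sketched with a toy stand-in. In the snippet below, the decoder is a hand-written function invented purely for illustration; in the actual study the decoder was a learned network acting on galaxy images, and the latent variables were not hand-labeled like this.

```python
import numpy as np

# Purely illustrative stand-in for the loop described above: perturb one latent
# element, re-generate, and compare the resulting observables.
def toy_decoder(latent):
    """Map a made-up latent vector to two mock observables: color and concentration."""
    density, dust, star_formation = latent
    color = 0.2 * density + 0.3 * dust - 0.5 * star_formation   # larger = redder
    concentration = 0.6 * density + 0.1 * star_formation
    return np.array([color, concentration])

baseline = np.array([0.2, 0.3, 0.8])   # "low-density environment" latent vector
tweaked = baseline.copy()
tweaked[0] = 0.9                       # push only the "density" element up

print("low-density  (color, concentration):", toy_decoder(baseline))
print("high-density (color, concentration):", toy_decoder(tweaked))
```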

    Using generative modeling, astrophysicists could investigate how galaxies change when they go from low-density regions of the cosmos to high-density regions, and what physical processes are responsible for these changes. K. Schawinski et al.; doi: 10.1051/0004-6361/201833800

    The approach is related to traditional simulation, but with critical differences. A simulation is “essentially assumption-driven,” Schawinski said. “The approach is to say, ‘I think I know what the underlying physical laws are that give rise to everything that I see in the system.’ So I have a recipe for star formation, I have a recipe for how dark matter behaves, and so on. I put all of my hypotheses in there, and I let the simulation run. And then I ask: Does that look like reality?” What he’s done with generative modeling, he said, is “in some sense, exactly the opposite of a simulation. We don’t know anything; we don’t want to assume anything. We want the data itself to tell us what might be going on.”

    The apparent success of generative modeling in a study like this obviously doesn’t mean that astronomers and graduate students have been made redundant—but it appears to represent a shift in the degree to which learning about astrophysical objects and processes can be achieved by an artificial system that has little more at its electronic fingertips than a vast pool of data. “It’s not fully automated science—but it demonstrates that we’re capable of at least in part building the tools that make the process of science automatic,” Schawinski said.

    Generative modeling is clearly powerful, but whether it truly represents a new approach to science is open to debate. For David Hogg, a cosmologist at New York University and the Flatiron Institute (which, like Quanta, is funded by the Simons Foundation), the technique is impressive but ultimately just a very sophisticated way of extracting patterns from data—which is what astronomers have been doing for centuries.


    In other words, it’s an advanced form of observation plus analysis. Hogg’s own work, like Schawinski’s, leans heavily on AI; he’s been using neural networks to classify stars according to their spectra and to infer other physical attributes of stars using data-driven models. But he sees his work, as well as Schawinski’s, as tried-and-true science. “I don’t think it’s a third way,” he said recently. “I just think we as a community are becoming far more sophisticated about how we use the data. In particular, we are getting much better at comparing data to data. But in my view, my work is still squarely in the observational mode.”

    Hardworking Assistants

    Whether they’re conceptually novel or not, it’s clear that AI and neural networks have come to play a critical role in contemporary astronomy and physics research. At the Heidelberg Institute for Theoretical Studies, the physicist Kai Polsterer heads the astroinformatics group — a team of researchers focused on new, data-centered methods of doing astrophysics. Recently, they’ve been using a machine-learning algorithm to extract redshift information from galaxy data sets, a previously arduous task.
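
    As a flavor of that kind of task, the sketch below trains an off-the-shelf regressor to recover redshifts from a few photometric “colors.” The data are synthetic stand-ins generated on the spot, not a real survey catalog, and the choice of model is arbitrary.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic "photometric redshift" problem: predict redshift from a few colors.
rng = np.random.default_rng(0)
n = 5000
redshift = rng.uniform(0.0, 2.0, n)
colors = np.column_stack([
    0.8 * redshift + rng.normal(0, 0.1, n),    # fake colors that correlate
    -0.3 * redshift + rng.normal(0, 0.1, n),   # with redshift, plus noise
    0.5 * np.sin(redshift) + rng.normal(0, 0.1, n),
])

X_train, X_test, y_train, y_test = train_test_split(colors, redshift, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)
print(f"typical redshift error: {np.mean(np.abs(pred - y_test)):.3f}")
```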

    Polsterer sees these new AI-based systems as “hardworking assistants” that can comb through data for hours on end without getting bored or complaining about the working conditions. These systems can do all the tedious grunt work, he said, leaving you “to do the cool, interesting science on your own.”

    But they’re not perfect. In particular, Polsterer cautions, the algorithms can only do what they’ve been trained to do. The system is “agnostic” regarding the input. Give it a galaxy, and the software can estimate its redshift and its age — but feed that same system a selfie, or a picture of a rotting fish, and it will output a (very wrong) age for that, too. In the end, oversight by a human scientist remains essential, he said. “It comes back to you, the researcher. You’re the one in charge of doing the interpretation.”

    For his part, Nord, at Fermilab, cautions that it’s crucial that neural networks deliver not only results, but also error bars to go along with them, as every undergraduate is trained to do. In science, if you make a measurement and don’t report an estimate of the associated error, no one will take the results seriously, he said.

    Like many AI researchers, Nord is also concerned about the impenetrability of results produced by neural networks; often, a system delivers an answer without offering a clear picture of how that result was obtained.

    Yet not everyone feels that a lack of transparency is necessarily a problem. Lenka Zdeborová, a researcher at the Institute of Theoretical Physics at CEA Saclay in France, points out that human intuitions are often equally impenetrable. You look at a photograph and instantly recognize a cat—“but you don’t know how you know,” she said. “Your own brain is in some sense a black box.”

    It’s not only astrophysicists and cosmologists who are migrating toward AI-fueled, data-driven science. Quantum physicists like Roger Melko of the Perimeter Institute for Theoretical Physics and the University of Waterloo in Ontario have used neural networks to solve some of the toughest and most important problems in that field, such as how to represent the mathematical “wave function” describing a many-particle system.

    Perimeter Institute in Waterloo, Canada


    AI is essential because of what Melko calls “the exponential curse of dimensionality.” That is, the possibilities for the form of a wave function grow exponentially with the number of particles in the system it describes. The difficulty is similar to trying to work out the best move in a game like chess or Go: You try to peer ahead to the next move, imagining what your opponent will play, and then choose the best response, but with each move, the number of possibilities proliferates.
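
    A few lines of Python make the “exponential curse” concrete: describing N spin-1/2 particles exactly takes 2^N complex amplitudes, which outgrows any classical memory almost immediately. The byte count per amplitude assumes double-precision complex numbers.

```python
# The "exponential curse" in numbers: a system of N spin-1/2 particles needs
# 2**N complex amplitudes to write down its wave function exactly.
BYTES_PER_AMPLITUDE = 16  # one complex number in double precision

for n_particles in (10, 30, 50, 300):
    amplitudes = 2 ** n_particles
    bytes_needed = amplitudes * BYTES_PER_AMPLITUDE
    print(f"N = {n_particles:>3}: {amplitudes:.3e} amplitudes, "
          f"~{bytes_needed / 1e9:.3g} GB")
```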

    Of course, AI systems have mastered both of these games—chess, decades ago, and Go in 2016, when an AI system called AlphaGo defeated a top human player. They are similarly suited to problems in quantum physics, Melko says.

    The Mind of the Machine

    Whether Schawinski is right in claiming that he’s found a “third way” of doing science, or whether, as Hogg says, it’s merely traditional observation and data analysis “on steroids,” it’s clear AI is changing the flavor of scientific discovery, and it’s certainly accelerating it. How far will the AI revolution go in science?

    Occasionally, grand claims are made regarding the achievements of a “robo-scientist.” A decade ago, an AI robot chemist named Adam investigated the genome of baker’s yeast and worked out which genes are responsible for making certain amino acids. (Adam did this by observing strains of yeast that had certain genes missing, and comparing the results to the behavior of strains that had the genes.) Wired’s headline read, “Robot Makes Scientific Discovery All by Itself.”

    More recently, Lee Cronin, a chemist at the University of Glasgow, has been using a robot to randomly mix chemicals, to see what sorts of new compounds are formed.

    Monitoring the reactions in real-time with a mass spectrometer, a nuclear magnetic resonance machine, and an infrared spectrometer, the system eventually learned to predict which combinations would be the most reactive. Even if it doesn’t lead to further discoveries, Cronin has said, the robotic system could allow chemists to speed up their research by about 90 percent.

    Last year, another team of scientists at ETH Zurich used neural networks to deduce physical laws from sets of data. Their system, a sort of robo-Kepler, rediscovered the heliocentric model of the solar system from records of the position of the sun and Mars in the sky, as seen from Earth, and figured out the law of conservation of momentum by observing colliding balls. Since physical laws can often be expressed in more than one way, the researchers wonder if the system might offer new ways—perhaps simpler ways—of thinking about known laws.
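
    A stripped-down version of that idea, rediscovering a conservation law from raw collision data, can be sketched in a few lines. The simulated collisions and the list of candidate quantities below are invented for this illustration and are not the ETH team’s method.

```python
import numpy as np

# Toy "law discovery": simulate 1-D elastic collisions, then test which
# candidate quantity stays constant across the collision.
rng = np.random.default_rng(1)
m1, m2 = rng.uniform(1, 5, 1000), rng.uniform(1, 5, 1000)
v1, v2 = rng.uniform(-3, 3, 1000), rng.uniform(-3, 3, 1000)

# Post-collision velocities for perfectly elastic 1-D collisions.
u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)

candidates = {
    "sum of velocities": (v1 + v2, u1 + u2),
    "total momentum":    (m1 * v1 + m2 * v2, m1 * u1 + m2 * u2),
    "total speed":       (np.abs(v1) + np.abs(v2), np.abs(u1) + np.abs(u2)),
}
for name, (before, after) in candidates.items():
    drift = np.mean(np.abs(after - before))
    print(f"{name:>18}: mean |change| = {drift:.2e}")
# Only "total momentum" comes out (numerically) unchanged.
```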

    These are all examples of AI kick-starting the process of scientific discovery, though in every case, we can debate just how revolutionary the new approach is. Perhaps most controversial is the question of how much information can be gleaned from data alone—a pressing question in the age of stupendously large (and growing) piles of it. In The Book of Why (2018), the computer scientist Judea Pearl and the science writer Dana Mackenzie assert that data are “profoundly dumb.” Questions about causality “can never be answered from data alone,” they write. “Anytime you see a paper or a study that analyzes the data in a model-free way, you can be certain that the output of the study will merely summarize, and perhaps transform, but not interpret the data.” Schawinski sympathizes with Pearl’s position, but he described the idea of working with “data alone” as “a bit of a straw man.” He’s never claimed to deduce cause and effect that way, he said. “I’m merely saying we can do more with data than we often conventionally do.”

    Another oft-heard argument is that science requires creativity, and that—at least so far—we have no idea how to program that into a machine. (Simply trying everything, like Cronin’s robo-chemist, doesn’t seem especially creative.) “Coming up with a theory, with reasoning, I think demands creativity,” Polsterer said. “Every time you need creativity, you will need a human.” And where does creativity come from? Polsterer suspects it is related to boredom—something that, he says, a machine cannot experience. “To be creative, you have to dislike being bored. And I don’t think a computer will ever feel bored.” On the other hand, words like “creative” and “inspired” have often been used to describe programs like Deep Blue and AlphaGo. And the struggle to describe what goes on inside the “mind” of a machine is mirrored by the difficulty we have in probing our own thought processes.

    Schawinski recently left academia for the private sector; he now runs a startup called Modulos, which employs a number of ETH scientists and, according to its website, works “in the eye of the storm of developments in AI and machine learning.” Whatever obstacles may lie between current AI technology and full-fledged artificial minds, he and other experts feel that machines are poised to do more and more of the work of human scientists. Whether there is a limit remains to be seen.

    “Will it be possible, in the foreseeable future, to build a machine that can discover physics or mathematics that the brightest humans alive are not able to do on their own, using biological hardware?” Schawinski wonders. “Will the future of science eventually necessarily be driven by machines that operate on a level that we can never reach? I don’t know. It’s a good question.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 1:10 pm on March 10, 2019
    Tags: CERN-Future Circular Collider, HL-LHC-High-Luminosity LHC, WIRED

    From WIRED: “Inside the High-Stakes Race to Make Quantum Computers Work” 

    Wired logo

    From WIRED

    03.08.19
    Katia Moskvitch

    View Pictures/Getty Images

    Deep beneath the Franco-Swiss border, the Large Hadron Collider is sleeping.

    LHC

    CERN map


    CERN LHC Tunnel

    CERN LHC particles

    But it won’t be quiet for long. Over the coming years, the world’s largest particle accelerator will be supercharged, increasing the number of proton collisions per second by a factor of two and a half.

    Once the work is complete in 2026, researchers hope to unlock some of the most fundamental questions in the universe. But with the increased power will come a deluge of data the likes of which high-energy physics has never seen before. And, right now, humanity has no way of knowing what the collider might find.

    To understand the scale of the problem, consider this: When it shut down in December 2018, the LHC generated about 300 gigabytes of data every second, adding up to 25 petabytes (PB) annually. For comparison, you’d have to spend 50,000 years listening to music to go through 25 PB of MP3 songs, while the human brain can store memories equivalent to just 2.5 PB of binary data. To make sense of all that information, the LHC data was pumped out to 170 computing centers in 42 countries [http://greybook.cern.ch/]. It was this global collaboration that helped discover the elusive Higgs boson, part of the Higgs field believed to give mass to elementary particles of matter.
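
    As a rough back-of-the-envelope check of those comparisons, here is a quick sketch in Python; the roughly 4 MB and four minutes per MP3 song are assumed round numbers, not figures from the article.

        PB = 1e15                                   # bytes in a petabyte
        annual_archive = 25 * PB                    # ~25 PB stored per year
        mp3_size = 4e6                              # ~4 MB per song (assumed)
        minutes_per_song = 4                        # (assumed)

        songs = annual_archive / mp3_size
        years_of_listening = songs * minutes_per_song / (60 * 24 * 365)
        print(f"{songs:.1e} songs, roughly {years_of_listening:,.0f} years of listening")
        # roughly six billion songs and about 48,000 years, consistent with the ~50,000 quoted above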

    CERN CMS Higgs Event


    CERN ATLAS Higgs Event

    To process the looming data torrent, scientists at the European Organization for Nuclear Research, or CERN, will need 50 to 100 times more computing power than they have at their disposal today. A proposed Future Circular Collider, four times the size of the LHC and 10 times as powerful, would create an impossibly large quantity of data, at least twice as much as the LHC.

    CERN FCC Future Circular Collider map

    In a bid to make sense of the impending data deluge, some at CERN are turning to the emerging field of quantum computing. Powered by the very laws of nature the LHC is probing, such a machine could potentially crunch the expected volume of data in no time at all. What’s more, it would speak the same language as the LHC. While numerous labs around the world are trying to harness the power of quantum computing, it is the future work at CERN that makes it particularly exciting research. There’s just one problem: Right now, there are only prototypes; nobody knows whether it’s actually possible to build a reliable quantum device.

    Traditional computers—be it an Apple Watch or the most powerful supercomputer—rely on tiny silicon transistors that work like on-off switches to encode bits of data.

    ORNL IBM AC922 SUMMIT supercomputer, No.1 on the TOP500. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    Each circuit can have one of two values—either one (on) or zero (off) in binary code; the computer turns the voltage in a circuit on or off to make it work.

    A quantum computer is not limited to this “either/or” way of thinking. Its memory is made up of quantum bits, or qubits—tiny particles of matter like atoms or electrons. And qubits can do “both/and,” meaning that they can be in a superposition of all possible combinations of zeros and ones; they can be all of those states simultaneously.

    For CERN, the quantum promise could, for instance, help its scientists find evidence of supersymmetry, or SUSY, which so far has proven elusive.

    Standard Model of Supersymmetry via DESY

    At the moment, researchers spend weeks and months sifting through the debris from proton-proton collisions in the LHC, trying to find exotic, heavy sister-particles to all our known particles of matter. The quest has now lasted decades, and a number of physicists are questioning whether the theory behind SUSY is really valid. A quantum computer would greatly speed up analysis of the collisions, hopefully finding evidence of supersymmetry much sooner—or at least allowing us to ditch the theory and move on.

    A quantum device might also help scientists understand the evolution of the early universe, the first few minutes after the Big Bang. Physicists are pretty confident that back then, our universe was nothing but a strange soup of subatomic particles called quarks and gluons. To understand how this quark-gluon plasma has evolved into the universe we have today, researchers simulate the conditions of the infant universe and then test their models at the LHC, with multiple collisions. Performing a simulation on a quantum computer, governed by the same laws that govern the very particles that the LHC is smashing together, could lead to a much more accurate model to test.

    Beyond pure science, banks, pharmaceutical companies, and governments are also waiting to get their hands on computing power that could be tens or even hundreds of times greater than that of any traditional computer.

    And they’ve been waiting for decades. Google is in the race, as are IBM, Microsoft, Intel and a clutch of startups, academic groups, and the Chinese government. The stakes are incredibly high. Last October, the European Union pledged to give $1 billion to over 5,000 European quantum technology researchers over the next decade, while venture capitalists invested some $250 million in various companies researching quantum computing in 2018 alone. “This is a marathon,” says David Reilly, who leads Microsoft’s quantum lab at the University of Sydney, Australia. “And it’s only 10 minutes into the marathon.”

    Despite the hype surrounding quantum computing and the media frenzy triggered by every announcement of a new qubit record, none of the competing teams have come close to reaching even the first milestone, fancily called quantum supremacy—the moment when a quantum computer performs at least one specific task better than a standard computer. Any kind of task, even if it is totally artificial and pointless. There are plenty of rumors in the quantum community that Google may be close, although if true, it would give the company bragging rights at best, says Michael Biercuk, a physicist at the University of Sydney and founder of quantum startup Q-CTRL. “It would be a bit of a gimmick—an artificial goal,” says Reilly. “It’s like concocting some mathematical problem that really doesn’t have an obvious impact on the world just to say that a quantum computer can solve it.”

    That’s because the first real checkpoint in this race is much further away. Called quantum advantage, it would see a quantum computer outperform normal computers on a truly useful task. (Some researchers use the terms quantum supremacy and quantum advantage interchangeably.) And then there is the finish line, the creation of a universal quantum computer. The hope is that it would deliver a computational nirvana with the ability to perform a broad range of incredibly complex tasks. At stake is the design of new molecules for life-saving drugs, helping banks to adjust the riskiness of their investment portfolios, a way to break all current cryptography and develop new, stronger systems, and for scientists at CERN, a way to glimpse the universe as it was just moments after the Big Bang.

    Slowly but surely, work is already underway. Federico Carminati, a physicist at CERN, admits that today’s quantum computers wouldn’t give researchers anything more than classical machines, but, undeterred, he’s started tinkering with IBM’s prototype quantum device via the cloud while waiting for the technology to mature. It’s the latest baby step in the quantum marathon. The deal between CERN and IBM was struck in November last year at an industry workshop organized by the research organization.

    Set up to exchange ideas and discuss potential collab­orations, the event had CERN’s spacious auditorium packed to the brim with researchers from Google, IBM, Intel, D-Wave, Rigetti, and Microsoft. Google detailed its tests of Bristlecone, a 72-qubit machine. Rigetti was touting its work on a 128-qubit system. Intel showed that it was in close pursuit with 49 qubits. For IBM, physicist Ivano Tavernelli took to the stage to explain the company’s progress.

    IBM has steadily been boosting the number of qubits on its quantum computers, starting with a meagre 5-qubit computer, then 16- and 20-qubit machines, and just recently showing off its 50-qubit processor.

    IBM iconic image of Quantum computer

    Carminati listened to Tavernelli, intrigued, and during a much needed coffee break approached him for a chat. A few minutes later, CERN had added a quantum computer to its impressive technology arsenal. CERN researchers are now starting to develop entirely new algorithms and computing models, aiming to grow together with the device. “A fundamental part of this process is to build a solid relationship with the technology providers,” says Carminati. “These are our first steps in quantum computing, but even if we are coming relatively late into the game, we are bringing unique expertise in many fields. We are experts in quantum mechanics, which is at the base of quantum computing.”

    The attraction of quantum devices is obvious. Take standard computers. The prediction by former Intel CEO Gordon Moore in 1965 that the number of components in an integrated circuit would double roughly every two years has held true for more than half a century. But many believe that Moore’s law is about to hit the limits of physics. Since the 1980s, however, researchers have been pondering an alternative. The idea was popularized by Richard Feynman, an American physicist at Caltech in Pasadena. During a lecture in 1981, he lamented that computers could not really simulate what was happening at a subatomic level, with tricky particles like electrons and photons that behave like waves but also dare to exist in two states at once, a phenomenon known as quantum superposition.

    Feynman proposed to build a machine that could. “I’m not happy with all the analyses that go with just the classical theory, because nature isn’t classical, dammit,” he told the audience back in 1981. “And if you want to make a simulation of nature, you’d better make it quantum mechanical, and by golly it’s a wonderful problem, because it doesn’t look so easy.”

    And so the quantum race began. Qubits can be made in different ways, but the rule is that two qubits can be both in state A, both in state B, one in state A and one in state B, or vice versa, so there are four possible combinations in total. And you won’t know what state a qubit is in until you measure it and the qubit is yanked out of its quantum world of probabilities into our mundane physical reality.

    In theory, a quantum computer would process all the states a qubit can have at once, and with every qubit added to its memory size, its computational power should increase exponentially. So, for three qubits, there are eight states to work with simultaneously; for four, 16; for 10, 1,024; and for 20, a whopping 1,048,576 states. You don’t need a lot of qubits to quickly surpass the memory banks of the world’s most powerful modern supercomputers—meaning that for specific tasks, a quantum computer could find a solution much faster than any regular computer ever would. Add to this another crucial concept of quantum mechanics: entanglement. It means that qubits can be linked into a single quantum system, where operating on one affects the rest of the system. This way, the computer can harness the processing power of all its qubits simultaneously, massively increasing its computational ability.
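
    As a minimal illustration of where that exponential growth comes from (a sketch in Python, not CERN’s code): the joint state of n qubits is a vector of 2^n complex amplitudes, so the state doubles in length with every qubit added.

        import numpy as np

        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate

        def uniform_superposition(n):
            """Put n qubits, each starting in |0>, into an equal superposition."""
            state = np.array([1.0])                      # empty register
            one_qubit = H @ np.array([1.0, 0.0])         # a single qubit after a Hadamard
            for _ in range(n):
                state = np.kron(state, one_qubit)        # the state vector doubles in length
            return state

        for n in (3, 4, 10, 20):
            print(n, "qubits ->", uniform_superposition(n).size, "amplitudes")
        # 3 -> 8, 4 -> 16, 10 -> 1024, 20 -> 1048576, matching the counts above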

    While a number of companies and labs are competing in the quantum marathon, many are running their own races, taking different approaches. One device has even been used by a team of researchers to analyze CERN data, albeit not at CERN. Last year, physicists from the California Institute of Technology in Pasadena and the University of Southern California managed to replicate the discovery of the Higgs boson, found at the LHC in 2012, by sifting through the collider’s troves of data using a quantum computer manufactured by D-Wave, a Canadian firm based in Burnaby, British Columbia. The findings didn’t arrive any quicker than on a traditional computer, but, crucially, the research showed a quantum machine could do the work.

    One of the oldest runners in the quantum race, D-Wave announced back in 2007 that it had built a fully functioning, commercially available 16-qubit quantum computer prototype—a claim that’s controversial to this day. D-Wave focuses on a technology called quantum annealing, based on the natural tendency of real-world quantum systems to find low-energy states (a bit like a spinning top that inevitably will fall over). A D-Wave quantum computer imagines the possible solutions of a problem as a landscape of peaks and valleys; each coordinate represents a possible solution and its elevation represents its energy. Annealing allows you to set up the problem, and then let the system fall into the answer—in about 20 milliseconds. As it does so, it can tunnel through the peaks as it searches for the lowest valleys. It finds the lowest point in the vast landscape of solutions, which corresponds to the best possible outcome—although it does not attempt to fully correct for any errors, inevitable in quantum computation. D-Wave is now working on a prototype of a universal annealing quantum computer, says Alan Baratz, the company’s chief product officer.
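
    For intuition about that “landscape of peaks and valleys,” here is a classical simulated-annealing toy in Python on a made-up one-dimensional energy function. It is only a loose analogy to D-Wave’s hardware: a quantum annealer can tunnel through barriers, whereas this sketch hops over them thermally.

        import math
        import random

        def energy(x):
            # An arbitrary bumpy 1-D "landscape" with several valleys.
            return 0.1 * x * x + math.sin(3 * x)

        def anneal(steps=20000, temp=2.0, cooling=0.9995):
            x = random.uniform(-10, 10)
            for _ in range(steps):
                candidate = x + random.gauss(0, 0.5)
                dE = energy(candidate) - energy(x)
                if dE < 0 or random.random() < math.exp(-dE / temp):
                    x = candidate              # accept downhill moves, and some uphill ones
                temp *= cooling                # gradually "freeze" the system
            return x, energy(x)

        print(anneal())   # typically settles in one of the deeper valleys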

    Apart from D-Wave’s quantum annealing, there are three other main approaches to try and bend the quantum world to our whim: integrated circuits, topological qubits and ions trapped with lasers. CERN is placing high hopes on the first method but is closely watching other efforts too.

    IBM, whose computer Carminati has just started using, as well as Google and Intel, all make quantum chips with integrated circuits—quantum gates—that are superconducting, a state when certain metals conduct electricity with zero resistance. Each quantum gate holds a pair of very fragile qubits. Any noise will disrupt them and introduce errors—and in the quantum world, noise is anything from temperature fluctuations to electromagnetic and sound waves to physical vibrations.

    To isolate the chip from the outside world as much as possible and get the circuits to exhibit quantum mechanical effects, it needs to be supercooled to extremely low temperatures. At the IBM quantum lab in Zurich, the chip is housed in a white tank—a cryostat—suspended from the ceiling. The temperature inside the tank is a steady 10 millikelvin or –273 degrees Celsius, a fraction above absolute zero and colder than outer space. But even this isn’t enough.

    Just working with the quantum chip, when scientists manipulate the qubits, causes noise. “The outside world is continually interacting with our quantum hardware, damaging the information we are trying to process,” says physicist John Preskill at the California Institute of Technology, who in 2012 coined the term quantum supremacy. It’s impossible to get rid of the noise completely, so researchers are trying to suppress it as much as possible, hence the ultracold temperatures to achieve at least some stability and allow more time for quantum computations.

    “My job is to extend the lifetime of qubits, and we’ve got four of them to play with,” says Matthias Mergenthaler, an Oxford University postdoc working at IBM’s Zurich lab. That doesn’t sound like a lot, but, he explains, it’s not so much the number of qubits that counts but their quality, meaning qubits with as low a noise level as possible, to ensure they last as long as possible in superposition and allow the machine to compute. And it’s here, in the fiddly world of noise reduction, that quantum computing hits up against one of its biggest challenges. Right now, the device you’re reading this on probably performs at a level similar to that of a quantum computer with 30 noisy qubits. But if you can reduce the noise, then the quantum computer is many times more powerful.

    Once the noise is reduced, researchers try to correct any remaining errors with the help of special error-correcting algorithms, run on a classical computer. The problem is, such error correction works qubit by qubit, so the more qubits there are, the more errors the system has to cope with. Say a computer makes an error once every 1,000 computational steps; it doesn’t sound like much, but after 1,000 or so operations, the program will output incorrect results. To achieve meaningful computations and surpass standard computers, a quantum machine needs about 1,000 relatively low-noise qubits with their errors corrected as far as possible. When you put them all together, these 1,000 qubits will make up what researchers call a logical qubit. None yet exist—so far, the best that prototype quantum devices have achieved is error correction for up to 10 qubits. That’s why these prototypes are called noisy intermediate-scale quantum computers (NISQ), a term also coined by Preskill in 2017.
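
    The arithmetic behind that example is simple compounding; the numbers below are the same assumed round figures, shown in Python.

        p = 1e-3                      # one error per 1,000 steps, as in the example above
        for steps in (100, 1_000, 10_000):
            p_clean = (1 - p) ** steps
            print(f"{steps:>6} steps: {p_clean:.3g} probability of an error-free run")
        # 100 steps: 0.905, 1,000 steps: 0.368, 10,000 steps: about 4.5e-05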

    For Carminati, it’s clear the technology isn’t ready yet. But that isn’t really an issue. At CERN the challenge is to be ready to unlock the power of quantum computers when and if the hardware becomes available. “One exciting possibility will be to perform very, very accurate simulations of quantum systems with a quantum computer—which in itself is a quantum system,” he says. “Other groundbreaking opportunities will come from the blend of quantum computing and artificial intelligence to analyze big data, a very ambitious proposition at the moment, but central to our needs.”

    But some physicists think NISQ machines will stay just that—noisy—forever. Gil Kalai, a professor at Yale University, says that error correcting and noise suppression will never be good enough to allow any kind of useful quantum computation. And it’s not even due to technology, he says, but to the fundamentals of quantum mechanics. Interacting systems have a tendency for errors to be connected, or correlated, he says, meaning errors will affect many qubits simultaneously. Because of that, it simply won’t be possible to create error-correcting codes that keep noise levels low enough for a quantum computer with the required large number of qubits.

    “My analysis shows that noisy quantum computers with a few dozen qubits deliver such primitive computational power that it will simply not be possible to use them as the building blocks we need to build quantum computers on a wider scale,” he says. Among scientists, such skepticism is hotly debated. The blogs of Kalai and fellow quantum skeptics are forums for lively discussion, as was a recent much-shared article titled “The Case Against Quantum Computing”—followed by its rebuttal, “The Case Against the Case Against Quantum Computing.”

    For now, the quantum critics are in a minority. “Provided the qubits we can already correct keep their form and size as we scale, we should be okay,” says Ray Laflamme, a physicist at the University of Waterloo in Ontario, Canada. The crucial thing to watch out for right now is not whether scientists can reach 50, 72, or 128 qubits, but whether scaling quantum computers to this size significantly increases the overall rate of error.

    3
    The Quantum Nano Centre in Canada is one of numerous big-budget research and development labs focussed on quantum computing. James Brittain/Getty Images

    Others believe that the best way to suppress noise and create logical qubits is by making qubits in a different way. At Microsoft, researchers are developing topological qubits—although its array of quantum labs around the world has yet to create a single one. If it succeeds, these qubits would be much more stable than those made with integrated circuits. Microsoft’s idea is to split a particle—for example an electron—in two, creating Majorana fermion quasi-particles. They were theorized back in 1937, and in 2012 researchers at Delft University of Technology in the Netherlands, working at Microsoft’s condensed matter physics lab, obtained the first experimental evidence of their existence.

    “You will only need one of our qubits for every 1,000 of the other qubits on the market today,” says Chetan Nayak, general manager of quantum hardware at Microsoft. In other words, every single topological qubit would be a logical one from the start. Reilly believes that researching these elusive qubits is worth the effort, despite years with little progress, because if one is created, scaling such a device to thousands of logical qubits would be much easier than with a NISQ machine. “It will be extremely important for us to try out our code and algorithms on different quantum simulators and hardware solutions,” says Carminati. “Sure, no machine is ready for prime time quantum production, but neither are we.”

    Another company Carminati is watching closely is IonQ, a US startup that spun out of the University of Maryland. It uses the third main approach to quantum computing: trapping ions. They are naturally quantum, having superposition effects right from the start and at room temperature, meaning that they don’t have to be supercooled like the integrated circuits of NISQ machines. Each ion is a singular qubit, and researchers trap them with special tiny silicon ion traps and then use lasers to run algorithms by varying the times and intensities at which each tiny laser beam hits the qubits. The beams encode data to the ions and read it out from them by getting each ion to change its electronic states.

    In December, IonQ unveiled its commercial device, capable of hosting 160 ion qubits and performing simple quantum operations on a string of 79 qubits. Still, right now, ion qubits are just as noisy as those made by Google, IBM, and Intel, and neither IonQ nor any other labs around the world experimenting with ions have achieved quantum supremacy.

    As the noise and hype surrounding quantum computers rumbles on, at CERN, the clock is ticking. The collider will wake up in just five years, ever mightier, and all that data will have to be analyzed. A non-noisy, error-corrected quantum computer will then come in quite handy.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 1:54 pm on February 7, 2019 Permalink | Reply
    Tags: , , , , Now You Can Join the Search for Killer Asteroids, , , , WIRED   

    From WIRED: “Now You Can Join the Search for Killer Asteroids” 

    Wired logo

    From WIRED

    02.07.19
    Sarah Scoles

    1
    A Hawaii observatory just put the largest astronomical data trove ever online, making it free and accessible so anyone can hunt for new cosmic phenomena. R. White/STScI/PS1 Science Consortium

    If you want to watch sunrise from the national park at the top of Mount Haleakala, the volcano that makes up around 75 percent of the island of Maui, you have to make a reservation. At 10,023 feet, the summit provides a spectacular—and very popular, ticket-controlled—view.

    2
    Looking into the Haleakalā crater

    Just about a mile down the road from the visitors’ center sits “Science City,” where civilian and military telescopes curl around the road, their domes bubbling up toward the sky. Like the park’s visitors, they’re looking out beyond Earth’s atmosphere—toward the Sun, satellites, asteroids, or distant galaxies. And one of them, called the Panoramic Survey Telescope and Rapid Response System, or Pan-STARRS, just released the biggest digital astro-dataset ever, amounting to 1.6 petabytes, the equivalent of around 500,000 HD movies.

    Pan-STARRS1 Telescope, U Hawaii, situated at Haleakala Observatories near the summit of Haleakala in Hawaii, USA, altitude 3,052 m (10,013 ft)

    From its start in 2010, Pan-STARRS has been watching the 75 percent of the sky it can see from its perch and recording cosmic states and changes on its 1.4-billion-pixel camera. It even discovered the strange ‘Oumuamua, the interstellar object that a Harvard astronomer has suggested could be an alien spaceship.

    3
    An artist’s rendering of the first recorded visitor to the solar system, ‘Oumuamua.
    Aunt_Spray/Getty Images

    Big surveys like this one, which watch swaths of sky agnostically rather than homing in on specific stuff, represent a big chunk of modern astronomy. They are an efficient, pseudo-egalitarian way to collect data, uncover the unexpected, and allow for discovery long after the lens cap closes. With better computing power, astronomers can see the universe not just as it was and is but also as it’s changing, by comparing, say, how a given part of the sky looks on Tuesday to how it looks on Wednesday. Pan-STARRS’s latest data dump, in particular, gives everyone access to the in-process cosmos, opening up the “time domain” to all earthlings with a good internet connection.
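
    At its core, the “time domain” idea is image differencing: subtract two exposures of the same patch of sky and flag what changed. Here is a heavily simplified Python sketch using simulated arrays rather than real survey frames; actual pipelines also align, resample, and PSF-match the images first.

        import numpy as np

        rng = np.random.default_rng(0)
        tuesday = rng.normal(100.0, 5.0, size=(256, 256))        # simulated exposure, night 1
        wednesday = tuesday + rng.normal(0.0, 5.0, size=(256, 256))
        wednesday[40, 170] += 300.0                               # inject a "new" transient

        diff = wednesday - tuesday
        threshold = 5 * diff.std()                                # crude significance cut
        ys, xs = np.nonzero(diff > threshold)
        print([(int(y), int(x)) for y, x in zip(ys, xs)])         # -> [(40, 170)]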

    Pan-STARRS, like all projects, was once just an idea. It started around the turn of this century, when astronomers Nick Kaiser, John Tonry, and Gerry Luppino, from Hawaii’s Institute for Astronomy, suggested that relatively “modest” telescopes—hooked to huge cameras—were the best way to image large skyfields.

    Today, that idea has morphed into Pan-STARRS, a many-pixeled instrument attached to a 1.8-meter telescope (big optical telescopes may measure around 10 meters). It takes multiple images of each part of the sky to show how it’s changing. Over the course of four years, Pan-STARRS imaged the heavens above 12 times, using five different filters. These pictures may show supernovae flaring up and dimming back down, active galaxies whose centers glare as their black holes digest material, and strange bursts from cataclysmic events. “When you visit the same piece of sky again and again, you can recognize, ‘Oh, this galaxy has a new star in it that was not there when we were there a year or three months ago,’” says Rick White, an astronomer at the Space Telescope Science Institute, which hosts Pan-STARRS’s archive. In this way, Pan-STARRS is a forerunner of the massive Large Synoptic Survey Telescope, or LSST, which will snap 800 panoramic images every evening, with a 3.2-billion-pixel camera, capturing the whole sky twice a week.

    LSST


    LSST Camera, built at SLAC



    LSST telescope, currently under construction on the El Peñón peak at Cerro Pachón Chile, a 2,682-meter-high mountain in Coquimbo Region, in northern Chile, alongside the existing Gemini South and Southern Astrophysical Research Telescopes.

    Plus, by comparing bright dots that move between images, astronomers can uncover closer-by objects, like rocks whose path might sweep uncomfortably close to Earth.

    That latter part is not just interesting to scientists, but to the military too. “It’s considered a defense function to find asteroids that might cause us to go extinct,” says White. That’s (at least part of) why the Air Force, which also operates a satellite-tracking system on Haleakala, pushed $60 million into Pan-STARRS’s development. NASA, the state of Hawaii, a consortium of scientists, and some private donations ponied up the rest.

    But when the telescope first got to work, its operations hit some snags. Its initial images were about half as sharp as they should have been, because the system that adjusted the telescope’s mirror to make up for distortions wasn’t working right.

    Also, the Air Force redacted parts of the sky. It used software called “Magic” to detect streaks of light that might be satellites (including the US government’s own). Magic masked those streaks, essentially placing a dead-pixel black bar across that section of sky, “to prevent the determination of any orbital element of the artificial satellite before the images left the [Institute for Astronomy] servers,” according to a recent paper by the Pan-STARRS group. In December 2011, the Air Force “dropped the requirement,” says the article. The magic was gone, and the scientists reprocessed the original raw data, removing the black boxes.

    The first tranche of data, from the world’s most substantial digital sky survey, came in December 2016. It was full of stars, galaxies, space rocks, and strangeness. The telescope and its associated scientists have already found an eponymous comet, crafted a 3D model of the Milky Way’s dust, unearthed way-old active galaxies, and spotted everyone’s favorite probably-not-an-alien-spaceship, ’Oumuamua.

    The real deal, though, entered the world late last month, when astronomers publicly released and put online all the individual snapshots, including auto-generated catalogs of some 800 million objects. With that dataset, astronomers and regular people everywhere (once they’ve read a fair number of help-me files) can check out a patch of sky and see how it evolved as time marched on. The curious can do more of the “time domain” science Pan-STARRS was made for: catching explosions, watching rocks, and squinting at unexplained bursts.
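
    In practice, “checking out a patch of sky” starts with a positional cut on a catalog. The sketch below uses a tiny made-up table; the column names and values are hypothetical stand-ins for the positions and multi-epoch photometry the released catalogs actually provide.

        import pandas as pd

        # A toy stand-in for a catalog extract (made-up rows and column names).
        catalog = pd.DataFrame({
            "obj_id":  [1, 2, 3, 4],
            "ra":      [150.12, 150.31, 150.44, 151.02],   # degrees
            "dec":     [2.10, 2.22, 2.47, 2.05],           # degrees
            "mag_std": [0.02, 0.45, 0.03, 0.60],           # brightness scatter across epochs
        })

        # Select a patch of sky, then flag objects whose brightness varied strongly.
        patch = catalog[catalog["ra"].between(150.0, 150.5) & catalog["dec"].between(2.0, 2.5)]
        variable = patch[patch["mag_std"] > 0.3]
        print(len(patch), "objects in the patch,", len(variable), "flagged as variable")
        # 3 objects in the patch, 1 flagged as variable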

    Pan-STARRS might never have gotten its observations online if NASA hadn’t seen its own future in the observatory’s massive data pileup. That 1.6-petabyte archive is now housed at the Space Telescope Science Institute, in Maryland, in a repository called the Mikulski Archive for Space Telescopes. The Institute is also the home of bytes from Hubble, Kepler, GALEX, and 15 other missions, mostly belonging to NASA. “At the beginning they didn’t have any commitment to release the data publicly,” says White. “It’s such a large quantity they didn’t think they could manage to do it.” The Institute, though, welcomed this outsider data in part so it could learn how to deal with such huge quantities.

    The hope is that Pan-STARRS’s freely available data will make a big contribution to astronomy. Just look at the discoveries people publish using Hubble data, says White. “The majority of papers being published are from archival data, by scientists that have no connection to the original observations,” he says. That, he believes, will hold true for Pan-STARRS too.

    But surveys are beautiful not just because they can be shared online. They’re also A+ because their observations aren’t narrow. In much of astronomy, scientists look at specific objects in specific ways at specific times. Maybe they zoom in on the magnetic field of pulsar J1745–2900, or the hydrogen gas in the farthest reaches of the Milky Way’s Perseus arm, or that one alien spaceship rock. Those observations are perfect for that individual astronomer to learn about that field, arm, or ship—but they’re not as great for anything or anyone else. Surveys, on the other hand, serve everyone.

    “The Sloan Digital Sky Survey set the standard for these huge survey projects,” says White. Sloan, which started operations in 2000, is on its fourth iteration, collecting light with telescopes at Apache Point Observatory in New Mexico and Las Campanas Observatory in Northern Chile.

    SDSS 2.5 meter Telescope at Apache Point Observatory, near Sunspot NM, USA, Altitude 2,788 meters (9,147 ft)

    Universe map Sloan Digital Sky Survey (SDSS) 2dF Galaxy Redshift Survey

    Carnegie Las Campanas Observatory in the southern Atacama Desert of Chile in the Atacama Region approximately 100 kilometres (62 mi) northeast of the city of La Serena,near the southern end and over 2,500 m (8,200 ft) high

    From the early universe to the modern state of the Milky Way’s union, Sloan data has painted a full-on portrait of the universe that, like those creepy Renaissance portraits, will stick around for years to come.

    Over in a different part of New Mexico, on the high Plains of San Agustin, radio astronomers recently set the Very Large Array’s sights on a new survey. Having started in 2017, the Very Large Array Sky Survey is still at the beginning of its seven years of operation.

    NRAO/Karl V Jansky Expanded Very Large Array, on the Plains of San Agustin fifty miles west of Socorro, NM, USA, at an elevation of 6970 ft (2124 m)

    But astronomers don’t have to wait for it to finish its observations, as happened with the first Pan-STARRS survey. “Within several days of the data coming off the telescope, the images are available to everybody,” says Brian Kent, who, since 2012, has worked on the software that processes the data. Which is no small task: For every four hours of skywatching, the telescope spits out 300 gigabytes, which the software then has to make useful and usable. “You have to put the collective smarts of the astronomers into the software,” he says.

    Kent is excited about the same kinds of time-domain discoveries as White is: about seeing the universe at work rather than as a set of static images. Including the chronological dimension is hot in astronomy right now, from these surveys to future instruments like the LSST and the massive Square Kilometre Array, a radio telescope that will spread across two continents.

    SKA Square Kilometer Array

    SKA Murchison Widefield Array, Boolardy station in outback Western Australia, at the Murchison Radio-astronomy Observatory (MRO)


    Australian Square Kilometre Array Pathfinder (ASKAP) is a radio telescope array located at Murchison Radio-astronomy Observatory (MRO) in the Australian Mid West. ASKAP consists of 36 identical parabolic antennas, each 12 metres in diameter, working together as a single instrument with a total collecting area of approximately 4,000 square metres.

    SKA LOFAR core (“superterp”) near Exloo, Netherlands

    SKA South Africa


    SKA Meerkat telescope, 90 km outside the small Northern Cape town of Carnarvon, SA


    SKA Meerkat telescope, 90 km outside the small Northern Cape town of Carnarvon, SA

    SKA Meerkat telescope, South African design

    Now, as of late January, anyone can access all of those observations, containing phenomena astronomers don’t yet know about and that—hey, who knows—you could beat them to discovering.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 12:38 pm on February 1, 2019 Permalink | Reply
    Tags: A project in eastern Tennessee quietly exceeded the scale of any corporate AI lab. It was run by the US government., , , Nvidia powerful graphics processors, ORNL SUMMIT supercomputer unveiled-world's most powerful in 2018, Summit has a hybrid architecture and each node contains multiple IBM POWER9 CPUs and NVIDIA Volta GPUs all connected together with NVIDIA’s high-speed NVLink, , TensorFlow machine-learning software, The World’s Fastest Supercomputer Breaks an AI Record, WIRED   

    From Oak Ridge National Laboratory via WIRED: “The World’s Fastest Supercomputer Breaks an AI Record”

    i1

    From Oak Ridge National Laboratory

    via

    Wired logo

    WIRED

    ORNL IBM AC922 SUMMIT supercomputer. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    1
    Oak Ridge National Lab’s Summit supercomputer became the world’s most powerful in 2018, reclaiming that title from China for the first time in five years.
    Carlos Jones/Oak Ridge National Lab

    Along America’s west coast, the world’s most valuable companies are racing to make artificial intelligence smarter. Google and Facebook have boasted of experiments using billions of photos and thousands of high-powered processors. But late last year, a project in eastern Tennessee quietly exceeded the scale of any corporate AI lab. It was run by the US government.

    The record-setting project involved the world’s most powerful supercomputer, Summit, at Oak Ridge National Lab. The machine captured that crown in June last year, reclaiming the title for the US after five years of China topping the list. As part of a climate research project, the giant computer booted up a machine-learning experiment that ran faster than any before.

    Summit, which occupies an area equivalent to two tennis courts, used more than 27,000 powerful graphics processors in the project. It tapped their power to train deep-learning algorithms, the technology driving AI’s frontier, chewing through the exercise at a rate of a billion billion operations per second, a pace known in supercomputing circles as an exaflop.
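
    For a sense of scale, a one-line check (assuming the work is spread evenly across the processors):

        ops_per_second = 1e18      # an exaflop: a billion billion operations per second
        gpus = 27_000
        print(f"{ops_per_second / gpus:.1e} operations per second per GPU")   # roughly 3.7e13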

    “Deep learning has never been scaled to such levels of performance before,” says Prabhat, who leads a research group at the National Energy Research Scientific Computing Center at Lawrence Berkeley National Lab. (He goes by one name.) His group collaborated with researchers at Summit’s home base, Oak Ridge National Lab.

    Fittingly, the world’s most powerful computer’s AI workout was focused on one of the world’s largest problems: climate change. Tech companies train algorithms to recognize faces or road signs; the government scientists trained theirs to detect weather patterns like cyclones in the copious output from climate simulations that spool out a century’s worth of three-hour forecasts for Earth’s atmosphere. (It’s unclear how much power the project used or how much carbon that spewed into the air.)

    The Summit experiment has implications for the future of both AI and climate science. The project demonstrates the scientific potential of adapting deep learning to supercomputers, which traditionally simulate physical and chemical processes such as nuclear explosions, black holes, or new materials. It also shows that machine learning can benefit from more computing power—if you can find it—boding well for future breakthroughs.

    “We didn’t know until we did it that it could be done at this scale,” says Rajat Monga, an engineering director at Google. He and other Googlers helped the project by adapting the company’s open-source TensorFlow machine-learning software to Summit’s giant scale.

    Most work on scaling up deep learning has taken place inside the data centers of internet companies, where servers work together on problems by splitting them up, because they are connected relatively loosely, not bound into one giant computer. Supercomputers like Summit have a different architecture, with specialized high-speed connections linking their thousands of processors into a single system that can work as a whole. Until recently, there has been relatively little work on adapting machine learning to work on that kind of hardware.

    Monga says working to adapt TensorFlow to Summit’s scale will also inform Google’s efforts to expand its internal AI systems. Engineers from Nvidia also helped out on the project, by making sure the machine’s tens of thousands of Nvidia graphics processors worked together smoothly.

    Finding ways to put more computing power behind deep-learning algorithms has played a major part in the technology’s recent ascent. The technology that Siri uses to recognize your voice and Waymo vehicles use to read road signs burst into usefulness in 2012 after researchers adapted it to run on Nvidia graphics processors.

    In an analysis published last May, researchers from OpenAI, a San Francisco research institute cofounded by Elon Musk, calculated that the amount of computing power in the largest publicly disclosed machine-learning experiments has doubled roughly every 3.43 months since 2012; that would mean an 11-fold increase each year. That progression has helped bots from Google parent Alphabet defeat champions at tough board games and videogames, and fueled a big jump in the accuracy of Google’s translation service.
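
    That annual factor follows directly from the quoted doubling time:

        doubling_time_months = 3.43
        annual_factor = 2 ** (12 / doubling_time_months)
        print(f"roughly {annual_factor:.0f}x more compute per year")   # about 11x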

    Google and other companies are now creating new kinds of chips customized for AI to continue that trend. Google has said that “pods” tightly integrating 1,000 of its AI chips—dubbed tensor processing units, or TPUs—can provide 100 petaflops of computing power, one-tenth the rate Summit achieved on its AI experiment.

    The Summit project’s contribution to climate science is to show how giant-scale AI could improve our understanding of future weather patterns. When researchers generate century-long climate predictions, reading the resulting forecast is a challenge. “Imagine you have a YouTube movie that runs for 100 years. There’s no way to find all the cats and dogs in it by hand,” says Prabhat of Lawrence Berkeley. The software typically used to automate the process is imperfect, he says. Summit’s results showed that machine learning can do it better, which should help predict storm impacts such as flooding or physical damage. The Summit results won Oak Ridge, Lawrence Berkeley, and Nvidia researchers the Gordon Bell Prize for boundary-pushing work in supercomputing.

    Running deep learning on supercomputers is a new idea that’s come along at a good moment for climate researchers, says Michael Pritchard, a professor at the University of California, Irvine. The slowing pace of improvements to conventional processors had led engineers to stuff supercomputers with growing numbers of graphics chips, where performance has grown more reliably. “There came a point where you couldn’t keep growing computing power in the normal way,” Pritchard says.

    That shift posed some challenges to conventional simulations, which had to be adapted. It also opened the door to embracing the power of deep learning, which is a natural fit for graphics chips. That could give us a clearer view of our climate’s future. Pritchard’s group showed last year that deep learning can generate more realistic simulations of clouds inside climate forecasts, which could improve forecasts of changing rainfall patterns.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.

    i2

     
  • richardmitnick 1:27 pm on December 19, 2018 Permalink | Reply
    Tags: , , , , , WIRED   

    From WIRED: “Dark Matter Hunters Pivot After Years of Failed Searches” 

    Wired logo

    From WIRED

    12.19.18
    Sophia Chen

    1
    NASA Goddard

    Physicists are remarkably frank: they don’t know what dark matter is made of.

    “We’re all scratching our heads,” says physicist Reina Maruyama of Yale University.

    “The gut feeling is that 80 percent of it is one thing, and 20 percent of it is something else,” says physicist Gray Rybka of the University of Washington. Why does he think this? It’s not because of science. “It’s a folk wisdom,” he says.

    Peering through telescopes, researchers have found a deluge of evidence for dark matter. Galaxies, they’ve observed, rotate far faster than their visible mass allows. The established equations of gravity dictate that those galaxies should fall apart, like pieces of cake batter flinging off a spinning hand mixer. The prevailing thought is that some invisible material—dark matter—must be holding those galaxies together. Observations suggest that dark matter consists of diffuse material “sort of like a cotton ball,” says Maruyama, who co-leads a dark matter research collaboration called COSINE-100.

    2
    Jay Hyun Jo/DM-Ice/KIMS

    Here on Earth, though, clues are scant. Given the speed that galaxies rotate, dark matter should make up 85 percent of the matter in the universe, including on our provincial little home planet. But only one experiment, a detector in Italy named DAMA, has ever registered compelling evidence of the stuff on Earth.

    DAMA-LIBRA at Gran Sasso


    Gran Sasso LABORATORI NAZIONALI del GRAN SASSO, located in the Abruzzo region of central Italy

    “There have been hints in other experiments, but DAMA is the only one with robust signals,” says Maruyama, who is unaffiliated with the experiment. For two decades, DAMA has consistently measured a varying signal that peaks in June and dips in December. The signal suggests that dark matter hits Earth at different rates corresponding to its location in its orbit, which matches theoretical predictions.
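
    The shape usually fitted to such a signal is a constant rate plus a yearly cosine peaking in early June, when Earth’s orbital motion adds to the Sun’s velocity through the galactic halo. A minimal Python sketch; the baseline and modulation amplitude below are placeholder values, not DAMA’s measured ones.

        import numpy as np

        def expected_rate(day, S0=1.0, Sm=0.02, t0=152):
            """Constant rate plus an annual modulation peaking near day 152 (about June 2)."""
            return S0 + Sm * np.cos(2 * np.pi * (day - t0) / 365.25)

        days = np.arange(365)
        rate = expected_rate(days)
        print("peak on day", int(days[rate.argmax()]), "| minimum on day", int(days[rate.argmin()]))
        # peak on day 152 (early June), minimum on day 335 (early December)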

    But the search has yielded few other promising signals. This year, several detectors reported null findings. XENON1T, a collaboration whose detector is located in the same Italian lab as DAMA, announced they hadn’t found anything this May.

    XENON1T at Gran Sasso LABORATORI NAZIONALI del GRAN SASSO, located in the Abruzzo region of central Italy

    PandaX, a China-based experiment, reported in July that it also hadn’t found anything.

    PandaX II Dark Matter experiment at Jin-ping Underground Laboratory (CJPL) in Sichuan, China

    Even DAMA’s results have been called into question: In December, Maruyama’s team published that their detector, a South-Korea based DAMA replica made of some 200 pounds of sodium iodide crystal, failed to reproduce its Italian predecessor’s results.

    These experiments are all designed to search for a specific dark matter candidate, a theorized class of particles known as Weakly Interacting Massive Particles, or WIMPs, that should be about a million times heavier than an electron. WIMPs have dominated dark matter research for years, and Miguel Zumalacárregui is tired of them. About a decade ago, when Zumalacárregui was still a PhD student, WIMP researchers were already promising an imminent discovery. “They’re just coming back empty-handed,” says Zumalacárregui, now an astrophysicist at the University of California, Berkeley.

    He’s not the only one with WIMP fatigue. “In some ways, I grew tired of WIMPs long ago,” says Rybka. Rybka is co-leading an experiment that is pursuing another dark matter candidate: a dainty particle called an axion, roughly a billion times lighter than an electron and much lighter than the WIMP. In April, the Axion Dark Matter Experiment collaboration announced that they’d finally tweaked their detector to be sensitive enough to detect axions.

    Inside the ADMX experiment hall at the University of Washington Credit Mark Stone U. of Washington

    The detector acts sort of like an AM radio, says Rybka. A strong magnet inside the machine would convert incoming axions into radio waves, which the detector would then pick up. “Given that we don’t know the exact mass of the axion, we don’t know which frequency to tune to,” says Rybka. “So we slowly turn the knob while listening, and mostly we hear noise. But someday, hopefully, we’ll tune to the right frequency, and we’ll hear that pure tone.”
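
    A toy version of that scan, in Python (illustrative only; the real detector physics is far more involved): average many noisy power spectra at one tuning step and look for a bin that sticks out above the noise floor.

        import numpy as np

        rng = np.random.default_rng(1)
        n_bins, n_spectra = 10_000, 100
        frequencies = np.linspace(600e6, 700e6, n_bins)        # Hz, an arbitrary scan band

        # Average many noisy power spectra, then inject a faint excess into one bin
        # to stand in for the axion's "pure tone."
        power = rng.exponential(1.0, size=(n_spectra, n_bins)).mean(axis=0)
        power[4321] += 1.0

        threshold = 1.0 + 5.0 / np.sqrt(n_spectra)             # ~5 sigma above the noise floor
        hits = np.nonzero(power > threshold)[0]
        print("candidate frequencies (MHz):", frequencies[hits] / 1e6)   # expect ~643.2 MHz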

    He is betting on axions because they would also resolve a piece of another long-standing puzzle in physics: exactly how quarks bind together to form atomic nuclei. “It seems too good to just be a coincidence, that this theory from nuclear physics happens to make the right amount of dark matter,” says Rybka.

    As Rybka’s team sifts through earthly data for signs of axions, astrophysicists look to the skies for leads. In a paper published in October, Zumalacárregui and a colleague ruled out an old idea that dark matter was mostly made of black holes. They reached this conclusion by looking through two decades of supernova observations. When a supernova passes behind a black hole, the black hole’s gravity bends the supernova’s light to make it appear brighter. The brighter the light, the more massive the black hole. So by tabulating the brightness of hundreds of supernovae, they calculated that black holes that are at least one-hundredth the mass of the sun can account for up to 40 percent of dark matter, and no more.

    “We’re at a point where our best theories seem to be breaking,” says astrophysicist Jamie Farnes of Oxford University. “We clearly need some kind of new idea. There’s something key we’re missing about how the universe is working.”

    Farnes is trying to fill that void. In a paper published in December [Astronomy and Astrophysics], he proposed that dark matter could be a weird fluid that moves toward you if you try to push it away. He created a simplistic simulation of the universe containing this fluid and found that it could potentially also explain why the universe is expanding, another long-standing mystery in physics. He is careful to point out that his ideas are speculative, and it is still unclear whether they are consistent with prior telescope observations and dark matter experiments.

    WIMPs could still be dark matter as well, despite enthusiasm for new approaches. Maruyama’s Korean experiment has ruled out “the canonical, vanilla WIMP that most people talk about,” she says, but lesser-known WIMP cousins are still on the table.

    It’s important to remember, as physicists clutch onto their favorite theories—regardless of how refreshing they are—that they need corroborating data. “The universe doesn’t care what is beautiful or elegant,” says Farnes. Nor does it care about what’s trendy. Guys, the universe might be really uncool.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 9:01 am on October 3, 2018 Permalink | Reply
    Tags: , , Transmission electron microscopes, WIRED   

    From WIRED: “New Microscope Shows the Quantum World in Crazy Detail” 

    Wired logo

    From WIRED

    9.21.18
    Sophia Chen

    1
    Scientists at Lawrence Berkeley National Lab use the microscope to painstakingly map every single atom in a nanoparticle. Here, they surveyed a tiny iron platinum cluster under the microscope and virtually picked it apart. Colin Ophus

    The transmission electron microscope was designed to break records. Using its beam of electrons, scientists have glimpsed many types of viruses for the first time. They’ve used it to study parts of biological cells like ribosomes and mitochondria. You can see individual atoms with it.

    Scanning transmission electron microscope Wikipedia Materialscientist

    Custom-designed scanning transmission electron microscope at Cornell University by David Muller/Cornell University

    But experts have recently unlocked new potential for the machine. “It’s been a very dramatic and sudden shift,” says physicist David Muller of Cornell University. “It was a little bit like everyone was flying biplanes, and all of a sudden, here’s a jetliner.”

    For one thing, Muller’s team has set a new record. Publishing in Nature this July, they used their scope to take the highest-resolution images to date. To do this, they had to create special lenses to better focus the electrons, sort of like “glasses” for the microscope, he says. They also developed a super-sensitive camera, capable of quickly registering single electrons. Their new images show a razor-thin layer, just two atoms thick, of molybdenum and sulfur atoms bonded together. Not only could they distinguish between individual atoms, they could even see them when they were only about 0.4 angstroms apart, half the length of a chemical bond. They could even spot a gap where a sulfur atom was missing in the material’s otherwise repeating pattern. “They could do this primarily because their electron camera is so good,” says physicist Colin Ophus of Lawrence Berkeley National Lab, who was not involved with the work.

    2
    Each dot in this image is a single molybdenum or sulfur atom from two overlapping but twisted atom-thick sheets. Cornell University’s transmission electron microscope, which took this image, broke the record for highest-resolution microscope this July. David Muller/Cornell University

    Now the rest of the field is clamoring to outfit their scopes with similar cameras, says Muller. “You can see all sorts of things you couldn’t before,” he says. In particular, Muller is studying thin materials, one to two atoms thick, that exhibit unusual properties. For example, physicists recently discovered that one type of thin material, when layered in a certain way, becomes superconducting. Muller thinks that the microscope could help reveal the underlying mechanisms behind such properties.

    When it comes to magnifying the minuscule, electrons are fundamentally better than visible light. That’s because electrons, which have wavelike properties due to quantum mechanics, have wavelengths a thousand times shorter. Shorter wavelengths produce higher resolution, much like finer thread can create more intricate embroidery. “Electron microscopes are pretty much the only game in town if you want to look at things on the atomic scale,” says physicist Ben McMorran of the University of Oregon. Pelting a material with electrons and detecting the ones that have traveled through produces a detailed image of that material.
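
    The standard relativistic de Broglie formula makes that point concrete. The 300 kV accelerating voltage below is an assumed, typical value; in practice the resolution records above are set by lens aberrations and detectors rather than by the wavelength itself, which is why better “glasses” and cameras matter so much.

        import math

        h = 6.626e-34       # Planck constant, J*s
        m_e = 9.109e-31     # electron mass, kg
        e = 1.602e-19       # elementary charge, C
        c = 2.998e8         # speed of light, m/s

        V = 300e3           # accelerating voltage in volts (assumed, typical for a TEM)
        wavelength = h / math.sqrt(2 * m_e * e * V * (1 + e * V / (2 * m_e * c**2)))
        print(f"electron wavelength = {wavelength * 1e12:.2f} pm")   # about 1.97 pm
        # Green light, for comparison, has a wavelength of roughly 500,000 pm (500 nm).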

    But high resolution isn’t the machine’s only trick. In a paper recently accepted to Nano Letters [not recovered], a team led by McMorran has developed a new type of image you can take with the microscope. This method can image materials normally transparent to electrons, such as lightweight atoms like lithium. It should allow scientists to study and improve lithium-based batteries with atomic detail.

    There’s more. By measuring a property of the electron called its phase, they can actually map the electric and magnetic fields inside the material, says Fehmi Yasin, a physics graduate student at the University of Oregon. “This technique can tease more information out of the electrons,” he says.

    These new capabilities can help scientists like Mary Scott, a physicist at the University of California, Berkeley, who studies nanoparticles smaller than a bacterium. Scott has spent long hours photographing these tiny inanimate clumps under an electron microscope. Using a special rig, she carefully tilts the sample to get as many angles as possible. Then, from those images, she creates an extremely precise 3-D model, accurate down to the atom. In 2017, she and her team mapped the exact locations of 23,000 atoms in a single silver and platinum nanoparticle. The point of such painstaking models is to study how individual atoms contribute to a property of the material—how strong or conductive it is, for example. The new techniques could help Scott examine those material properties more easily.

    But the ultimate goal of such experiments isn’t merely to study the materials. Eventually, scientists like Scott want to turn atoms into Legos: to assemble them, brick by brick, into brand new materials. But even tiny changes in a material’s atomic composition or structure can alter its function, says Scott, and no one fully understands why. The microscope images can teach them how and why atoms lock together.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 5:08 pm on September 16, 2018 Permalink | Reply
    Tags: Astronomers Have Found the Universe's Missing Matter, , , , , WIRED   

    From WIRED: “Astronomers Have Found the Universe’s Missing Matter” 

    Wired logo

    from WIRED

    1
    A computer simulation of the hot gas between galaxies hinted at the location of the universe’s missing matter. Princeton University/Renyue Cen.

    09.16.18
    Katia Moskvitch

    For decades, some of the atomic matter in the universe had not been located. Recent papers reveal where it’s been hiding. [No papers cited]

    Astronomers have finally found the last of the missing universe. It’s been hiding since the mid-1990s, when researchers decided to inventory all the “ordinary” matter in the cosmos—stars and planets and gas, anything made out of atomic parts. (This isn’t “dark matter,” which remains a wholly separate enigma.) They had a pretty good idea of how much should be out there, based on theoretical studies of how matter was created during the Big Bang. Studies of the cosmic microwave background (CMB)—the leftover light from the Big Bang—would confirm these initial estimates.

    So they added up all the matter they could see—stars and gas clouds and the like, all the so-called baryons. They were able to account for only about 10 percent of what there should be. And when they considered that ordinary matter makes up only 15 percent of all matter in the universe—dark matter makes up the rest—they had only inventoried a mere 1.5 percent of all matter in the universe.
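    Spelled out, the bookkeeping in that census looks like this (the values are the ones quoted in the paragraph above):

# The census arithmetic from the paragraph above, written out explicitly.
baryon_share_of_all_matter = 0.15  # ordinary matter as a fraction of all matter
observed_share_of_baryons  = 0.10  # fraction of baryons astronomers could see

observed_share_of_all_matter = observed_share_of_baryons * baryon_share_of_all_matter
print(f"{observed_share_of_all_matter:.1%} of all matter accounted for")  # 1.5%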

    Now, in a series of three recent papers, astronomers have identified the final chunks of all the ordinary matter in the universe. (They are still deeply perplexed as to what makes up dark matter.) And despite the fact that it took so long to identify it all, researchers spotted it right where they had expected it to be all along: in extensive tendrils of hot gas that span the otherwise empty chasms between galaxies, more properly known as the warm-hot intergalactic medium, or WHIM.

    Early indications that there might be extensive spans of effectively invisible gas between galaxies came from computer simulations done in 1998. “We wanted to see what was happening to all the gas in the universe,” said Jeremiah Ostriker, a cosmologist at Princeton University who constructed one of those simulations along with his colleague Renyue Cen. The two ran simulations of gas movements in the universe acted on by gravity, light, supernova explosions and all the forces that move matter in space. “We concluded that the gas will accumulate in filaments that should be detectable,” he said.

    Except they weren’t — not yet.

    “It was clear from the early days of cosmological simulations that many of the baryons would be in a hot, diffuse form — not in galaxies,” said Ian McCarthy, an astrophysicist at Liverpool John Moores University. Astronomers expected these hot baryons to conform to a cosmic superstructure, one made of invisible dark matter, that spanned the immense voids between galaxies. The gravitational force of the dark matter would pull gas toward it and heat the gas up to millions of degrees. Unfortunately, hot, diffuse gas is extremely difficult to find.

    To spot the hidden filaments, two independent teams of researchers searched for precise distortions in the CMB, the afterglow of the Big Bang. As that light from the early universe streams across the cosmos, it can be affected by the regions that it’s passing through. In particular, the electrons in hot, ionized gas (such as the WHIM) should interact with photons from the CMB in a way that imparts some additional energy to those photons. The CMB’s spectrum should get distorted.
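    That distortion is the thermal Sunyaev-Zel’dovich effect, whose strength is usually quoted as the Compton y-parameter. A back-of-the-envelope estimate shows why a single filament is such a faint target; the electron density, temperature and path length below are representative WHIM-like values assumed purely for illustration, not numbers from the papers.

# Order-of-magnitude sketch of the thermal Sunyaev-Zel'dovich distortion from
# one filament:  y = (sigma_T * k_B / (m_e c^2)) * n_e * T_e * path_length
sigma_T = 6.6524e-29   # Thomson cross-section, m^2
k_B     = 1.3807e-23   # Boltzmann constant, J/K
m_e_c2  = 8.1871e-14   # electron rest energy, J

n_e = 1e-5 * 1e6       # assumed electron density: 1e-5 cm^-3, converted to m^-3
T_e = 1e6              # assumed electron temperature: about a million degrees
L   = 3 * 3.086e22     # assumed path through the filament: ~3 megaparsecs, in m

y = sigma_T * (k_B * T_e / m_e_c2) * n_e * L
print(f"y ~ {y:.1e}")  # of order 1e-8, far below Planck's per-pixel sensitivity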

    Unfortunately, the best maps of the CMB (provided by the Planck satellite) showed no such distortions. Either the gas wasn’t there, or the effect was too subtle to show up.

    CMB per ESA/Planck


    ESA/Planck 2009 to 2013

    But the two teams of researchers were determined to make the filaments visible. From increasingly detailed computer simulations of the universe, they knew that gas should stretch between massive galaxies like cobwebs across a windowsill. Planck wasn’t able to see the gas between any single pair of galaxies. So the researchers figured out a way to amplify the faint signal by stacking up to a million galaxy pairs.

    First, the scientists looked through catalogs of known galaxies to find appropriate galaxy pairs — galaxies that were sufficiently massive, and that were at the right distance apart, to produce a relatively thick cobweb of gas between them. Then the astrophysicists went back to the Planck data, identified where each pair of galaxies was located, and then essentially cut out that region of the sky using digital scissors. With over a million clippings in hand (in the case of the study led by Anna de Graaff, a Ph.D. student at the University of Edinburgh), they rotated each one and zoomed it in or out so that all the pairs of galaxies appeared to be in the same position. They then stacked a million galaxy pairs on top of one another. (A group led by Hideki Tanimura at the Institute of Space Astrophysics in Orsay, France, combined 260,000 pairs of galaxies.) At last, the individual threads — ghostly filaments of diffuse hot gas — suddenly became visible.

    2
    (A) Images of one million galaxy pairs were aligned and added together. (B) Astronomers mapped all the gas within the actual galaxies. (C) By subtracting the galaxies (B) from the initial image (A), researchers revealed filamentary gas hiding in intergalactic space. Adapted by Quanta Magazine
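    A heavily simplified sketch of that stacking step is below. The Compton-y map and the galaxy-pair catalogue are random stand-ins rather than real Planck data, and the published analyses also rescale each cutout to a common pair separation and subtract the galaxies themselves; the sketch only shows the core idea that averaging many aligned patches beats down the noise.

# Schematic stacking sketch: cut a patch of the y-map around each galaxy pair,
# rotate it so the pair lies along the horizontal axis, and average the patches
# so a faint bridge would add up coherently while the noise averages down.
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(0)
y_map = rng.normal(0.0, 1e-6, size=(2000, 2000))  # fake, noise-only y-map

# fake catalogue: (x1, y1, x2, y2) pixel positions of each galaxy pair
pairs = [(x, y, x + dx, y + dy)
         for x, y, dx, dy in rng.integers(
             [200, 200, 10, 10], [1800, 1800, 30, 30], size=(1000, 4))]

patch = 64
stack = np.zeros((patch, patch))
for x1, y1, x2, y2 in pairs:
    cx, cy = int((x1 + x2) // 2), int((y1 + y2) // 2)  # midpoint of the pair
    half = patch // 2
    cutout = y_map[cy - half:cy + half, cx - half:cx + half]
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
    stack += rotate(cutout, angle, reshape=False, mode="nearest")

stack /= len(pairs)
print(f"per-pixel noise dropped from ~1e-06 to ~{stack.std():.1e}")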

    The technique has its pitfalls. The interpretation of the results, said Michael Shull, an astronomer at the University of Colorado at Boulder, requires assumptions about the temperature and spatial distribution of the hot gas. And because of the stacking of signals, “one always worries about ‘weak signals’ that are the result of combining large numbers of data,” he said. “As is sometimes found in opinion polls, one can get erroneous results when one has outliers or biases in the distribution that skew the statistics.”

    In part because of these concerns, the cosmological community didn’t consider the case settled. What was needed was an independent way of measuring the hot gas. This summer, one arrived.

    Lighthouse Effect

    While the first two teams of researchers were stacking signals together, a third team followed a different approach. They observed a distant quasar — a bright beacon from billions of light-years away — and used it to detect gas in the seemingly empty intergalactic spaces through which the light traveled. It was like examining the beam of a faraway lighthouse in order to study the fog around it.

    Usually when astronomers do this, they try to look for light that has been absorbed by atomic hydrogen, since it is the most abundant element in the universe. Unfortunately, this option was out. The WHIM is so hot that it ionizes hydrogen, stripping its single electron away. The result is a plasma of free protons and electrons that don’t absorb any light.

    So the group decided to look for another element instead: oxygen. While there’s not nearly as much oxygen as hydrogen in the WHIM, atomic oxygen has eight electrons, as opposed to hydrogen’s one. The heat from the WHIM strips most of those electrons away, but not all. The team, led by Fabrizio Nicastro of the National Institute for Astrophysics in Rome, tracked the light that was absorbed by oxygen that had lost all but two of its electrons. They found two pockets of hot intergalactic gas along the line of sight. The oxygen “provides a tracer of the much larger reservoir of hydrogen and helium gas,” said Shull, who is a member of Nicastro’s team. The researchers then extrapolated the amount of gas they found between Earth and this particular quasar to the universe as a whole. The result suggested that they had located the missing 30 percent, the share of ordinary matter that earlier surveys of stars, galaxies and cooler intergalactic gas had still left unaccounted for.

    The number also agrees nicely with the findings from the CMB studies. “The groups are looking at different pieces of the same puzzle and are coming up with the same answer, which is reassuring, given the differences in their methods,” said Mike Boylan-Kolchin, an astronomer at the University of Texas, Austin.

    The next step, said Shull, is to observe more quasars with next-generation X-ray and ultraviolet telescopes with greater sensitivity. “The quasar we observed was the best and brightest lighthouse that we could find. Other ones will be fainter, and the observations will take longer,” he said. But for now, the takeaway is clear. “We conclude that the missing baryons have been found,” their team wrote.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

     