Tagged: Google

  • richardmitnick 9:57 am on April 7, 2019
    Tags: "How Google Is Cramming More Data Into Its New Atlantic Cable", Google, "Google says the fiber-optic cable it's building across the Atlantic Ocean will be the fastest of its kind", "Japanese tech giant NEC says it has technology that will enable long-distance undersea cables with 16 fiber-optic pairs", "The current growth in new cables is driven less by telcos and more by companies like Google, Facebook and Microsoft", "Today most long-distance undersea cables contain six or eight fiber-optic pairs", "Vijay Vusirikala, head of network architecture and optical engineering at Google, says the company is already contemplating 24-pair cables"

    From WIRED: “How Google Is Cramming More Data Into Its New Atlantic Cable” 

    From WIRED

    04.05.19
    Klint Finley

    Fiber-optic cable being loaded onto a ship owned by SubCom, which is working with Google to build the world’s fastest undersea data connection. Bill Gallery/SubCom.

    Google says the fiber-optic cable it’s building across the Atlantic Ocean will be the fastest of its kind. When the cable goes live next year, the company estimates it will transmit around 250 terabits per second, fast enough to zap all the contents of the Library of Congress from Virginia to France three times every second. That’s about 56 percent faster than Facebook and Microsoft’s Marea cable, which can transmit about 160 terabits per second between Virginia and Spain.
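
    The arithmetic behind that comparison is easy to check. A quick sketch, using the capacities as quoted in the article:

```python
# Quoted capacities (the article's estimates), in terabits per second.
dunant_tbps = 250  # Google's Dunant cable
marea_tbps = 160   # Facebook/Microsoft's Marea cable

# Relative speedup of Dunant over Marea.
speedup = (dunant_tbps - marea_tbps) / marea_tbps
print(f"Dunant is about {speedup:.0%} faster than Marea")  # about 56%
```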

    Fiber-optic networks work by sending light over thin strands of glass. Fiber-optic cables, which are about the diameter of a garden hose, enclose multiple pairs of these fibers. Google’s new cable is so fast because it carries more fiber pairs. Today, most long-distance undersea cables contain six or eight fiber-optic pairs. Google said Friday that its new cable, dubbed Dunant, is expected to be the first to include 12 pairs, thanks to new technology developed by Google and SubCom, which designs, manufactures, and deploys undersea cables.
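
    Dividing the quoted total by the pair count gives a rough per-pair figure. A back-of-the-envelope sketch; the even split across pairs is an assumption:

```python
total_tbps = 250   # Dunant's estimated total capacity
fiber_pairs = 12   # the first cable expected to carry 12 pairs

per_pair_tbps = total_tbps / fiber_pairs
print(f"~{per_pair_tbps:.1f} Tbps per fiber pair")  # roughly 20.8
```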

    Dunant might not be the fastest for long: Japanese tech giant NEC says it has technology that will enable long-distance undersea cables with 16 fiber-optic pairs. And Vijay Vusirikala, head of network architecture and optical engineering at Google, says the company is already contemplating 24-pair cables.

    The surge in intercontinental cables, and their increasing capacity, reflect continual growth in internet traffic. They enable activists to livestream protests to distant countries, help companies buy and sell products around the world, and facilitate international romances. “Many people still believe international telecommunications are conducted by satellite,” says NEC executive Atsushi Kuwahara. “That was true in 1980, but nowadays, 99 percent of international telecommunications is submarine.”

    So much capacity is being added that, for the moment, it’s outstripping demand. Animations featured in a recent New York Times article illustrated the exploding number of undersea cables since 1989. That growth is continuing. Alan Mauldin of the research firm TeleGeography says only about 30 percent of the potential capacity of major undersea cable routes is currently in use—and more than 60 new cables are planned to enter service by 2021. That summons memories of the 1990s dot-com bubble, when telecoms buried far more fiber in both the ground and the ocean than they would need for years to come.

    A selection of fiber-optic cable products made by SubCom. Brian Smith/SubCom.

    But the current growth in new cables is driven less by telcos and more by companies like Google, Facebook, and Microsoft that crave ever more bandwidth for the streaming video, photos, and other data shuttling between their global data centers. And experts say that as undersea cable technologies improve, it’s not crazy for companies to build newer, faster routes between continents, even with so much fiber already lying idle in the ocean.

    Controlling Their Own Destiny

    Mauldin says that although there’s still lots of capacity available, companies like Google and Facebook prefer to have dedicated capacity for their own use. That’s part of why big tech companies have either invested in new cables through consortia or, in some cases, built their own cables.

    “When we do our network planning, it’s important to know if we’ll have the capacity in the network,” says Google’s Vusirikala. “One way to know is by building our own cables, controlling our own destiny.”

    Another factor is diversification. Having more cables means there are alternate routes for data if a cable breaks or malfunctions. At the same time, more people outside Europe and North America are tapping the internet, often through smartphones. That’s prompted companies to think about new routes, like between North and South America, or between Europe and Africa, says Mike Hollands, an executive at European data center company Interxion. The Marea cable ticks both of those boxes, giving Facebook and Microsoft faster routes to North Africa and the Middle East, while also creating an alternate path to Europe in case one or more of the traditional routes were disrupted by something like an earthquake.

    Cost Per Bit

    There are financial incentives for the tech companies as well. By owning the cables instead of leasing them from telcos, Google and other tech giants can potentially save money in the long term, Mauldin says.

    The cost to build and deploy a new undersea cable isn’t dropping. But as companies find ways to pump more data through these cables more quickly, their value increases.

    There are a few ways to increase the performance of a fiber-optic communications system. One is to increase the energy used to push the data from one end to the other. The catch is that to keep the data signal from degrading, undersea cables need repeaters roughly every 100 kilometers, Vusirikala explains. Those repeaters amplify not just the signal, but any noise introduced along the way, diminishing the value of boosting the energy.
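
    At that spacing, a transatlantic run needs dozens of repeaters, each re-amplifying noise along with signal. A rough count, assuming a route length of about 6,400 km (the route length is an assumption, not a figure from the article):

```python
import math

route_km = 6400            # assumed Virginia-to-France route length
repeater_spacing_km = 100  # spacing quoted by Vusirikala

repeaters = math.ceil(route_km / repeater_spacing_km)
print(f"~{repeaters} repeaters on the route")
```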

    A rendering of one of SubCom’s specialized Reliance-class cable ships. SubCom.

    You can also increase the amount of data that each fiber pair within a fiber-optic cable can carry. A technique called “dense wavelength division multiplexing” now enables more than 100 wavelengths to be sent along a single fiber pair.
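
    Combining the numbers above gives a feel for the per-wavelength data rate. A rough sketch; it assumes exactly 100 wavelengths per pair and an even split of capacity:

```python
total_bps = 250e12          # Dunant's estimated capacity, bits per second
fiber_pairs = 12
wavelengths_per_pair = 100  # "more than 100" per the article

per_wavelength_gbps = total_bps / (fiber_pairs * wavelengths_per_pair) / 1e9
print(f"~{per_wavelength_gbps:.0f} Gbps per wavelength")
```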

    Or you can pack more fiber pairs into a cable. Traditionally each pair in a fiber-optic cable required two repeater components called “pumps.” The pumps take up space inside the repeater casing, so adding more pumps would require changes to the way undersea cable systems are built, deployed, and maintained, says SubCom CTO Georg Mohs.

    To get around that problem, SubCom and others are using a technique called space-division multiplexing (SDM) to allow four repeater pumps to power four fiber pairs. That will reduce the capacity of each pair, but cutting the required number of pumps in half allows them to add additional pairs that more than make up for it, Mohs says.
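
    The pump arithmetic works out like this, as a sketch of the trade Mohs describes:

```python
pairs = 4

# Conventional design: two dedicated pumps per fiber pair.
conventional_pumps = 2 * pairs

# SDM design described above: four pumps shared across all four pairs.
sdm_pumps = 4

print(conventional_pumps, "->", sdm_pumps)
assert sdm_pumps == conventional_pumps // 2  # pumps cut in half
```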

    “This had been in our toolkit before,” Mohs says, but like other companies, SubCom has been more focused on adding more wavelengths per fiber pair.

    The result: Cables that can move more data than ever before. That means the total cost per bit of data sent across the cable is lower.
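
    Cost per bit is just capital cost spread over delivered capacity. A toy illustration with entirely hypothetical build costs (the article quotes no dollar figures):

```python
# Hypothetical build costs and capacities -- for illustration only.
def cost_per_tbps(build_cost_usd, capacity_tbps):
    """Capital cost per terabit-per-second of capacity."""
    return build_cost_usd / capacity_tbps

old = cost_per_tbps(300e6, 160)  # e.g. a Marea-class cable
new = cost_per_tbps(300e6, 250)  # same spend, Dunant-class capacity
assert new < old  # more capacity per dollar means a lower cost per bit
```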

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 8:54 am on September 8, 2018
    Tags: Google, Google Dataset Search, JASMIN supercomputer, NERC, "UK dataset expertise informs Google's new dataset search"

    From Science and Technology Facilities Council: “UK dataset expertise informs Google’s new dataset search” 


    From Science and Technology Facilities Council

    6 September 2018

    False colour image of Europe captured by Sentinel 3. (Credit: contains modified Copernicus Sentinel data (2018))

    ESA Sentinel 3

    Experts from UK Research and Innovation have contributed to a search tool newly launched by Google that aims to help scientists, policy makers and other user groups more easily find the data required for their work and their stories, or simply to satisfy their intellectual curiosity.

    In today’s world, scientists in many disciplines and a growing number of journalists live and breathe data. There are many thousands of data repositories on the web, providing access to millions of datasets; and local and national governments around the world publish their data as well. As part of the UK Research and Innovation commitment to easy access to data, their experts worked with Google to help develop the Dataset Search, launched today.

    Similar to how Google Scholar works, Dataset Search lets users find datasets wherever they’re hosted, whether it’s a publisher’s site, a digital library, or an author’s personal web page.

    Google approached UK Research and Innovation’s Natural Environment Research Council (NERC) and Science and Technology Facilities Council (STFC) to help ensure their world-leading environmental datasets were included. These organisations have a long heritage of managing huge, complex datasets on the atmosphere, oceans, climate change, and even the solar system; that expertise, embodied by Dr Sarah Callaghan, the Data and Programme Manager at UKRI’s national space laboratory STFC RAL Space, led to them working with Google on the project.

    Dr Sarah Callaghan said: “In RAL Space we manage, archive and distribute thousands of terabytes of data to make it available to scientific researchers and other interested parties. My experience making datasets findable, usable and interoperable enabled me to advise Google on their Dataset Search and how to best display their search results.”

    “I was able to draw on my work with NERC and STFC datasets, not only in just archiving and managing data for the long term and the scientific record, but also helping users to understand if a dataset is the right one for their purposes.”

    Temperature of Europe during the April 2018 heatwave. (Credit: contains modified Copernicus Sentinel data (2018))

    To create Dataset Search, Google developed guidelines for dataset providers to describe their data in a way that search engines can better understand the content of their pages. These guidelines include salient information about datasets: who created the dataset, when it was published, how the data was collected, what the terms are for using the data, etc. This enables search engines to collect and link this information, analyse where different versions of the same dataset might be, and find publications that may be describing or discussing the dataset. The approach is based on an open standard for describing this information (schema.org). Many STFC and NERC datasets for environmental data are already described in this way and are particularly good examples of findable, user-friendly datasets.
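
    In practice this means embedding a schema.org `Dataset` record in the page, typically as JSON-LD. A minimal sketch; the dataset name, publisher and licence below are placeholders, not a real NERC record:

```python
import json

# A minimal schema.org/Dataset description of the kind the article refers to.
# All field values here are hypothetical placeholders.
dataset = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "Example UK rainfall observations",
    "description": "Illustrative metadata record only.",
    "creator": {"@type": "Organization", "name": "Example Data Centre"},
    "datePublished": "2018-09-06",
    "license": "https://example.org/open-licence",
}

# This JSON-LD would sit in a <script type="application/ld+json"> tag
# so that search engines can parse it.
print(json.dumps(dataset, indent=2))
```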

    “Standardised ways of describing data allow us to help researchers by building tools and services that make it easier to find and use data,” said Dr Callaghan. “If people don’t know what datasets exist, they won’t know how to look for what they need to solve their environmental problems. For example, an ecologist might not know where to go to find, or how to access, the rainfall data needed to understand a changing habitat. Making data easier to find will help introduce researchers from a variety of disciplines to the vast amount of data I and my colleagues manage for NERC and STFC.”

    The new Google Dataset Search offers references to most datasets in environmental and social sciences, as well as data from other disciplines including government data and data provided by news organisations.

    Professor Tim Wheeler, Director of Research and Innovation at NERC, said: “NERC is constantly working to raise awareness of the wealth of environmental information held within its Data Centres, and to improve access to it. This new tool will make it easier than ever for the public, business and science professionals to find and access the data that they’re looking for. We want to get as many people as possible interested in and able to benefit from data collected by the environmental science that we fund.”

    NERC JASMIN supercomputer based at STFC’s Rutherford Appleton Laboratory (Credit: STFC)

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    STFC Hartree Centre

    Helping build a globally competitive, knowledge-based UK economy

    We are a world-leading multi-disciplinary science organisation, and our goal is to deliver economic, societal, scientific and international benefits to the UK and its people – and more broadly to the world. Our strength comes from our distinct but interrelated functions:

    Universities: we support university-based research, innovation and skills development in astronomy, particle physics, nuclear physics, and space science
    Scientific Facilities: we provide access to world-leading, large-scale facilities across a range of physical and life sciences, enabling research, innovation and skills training in these areas
    National Campuses: we work with partners to build National Science and Innovation Campuses based around our National Laboratories to promote academic and industrial collaboration and translation of our research to market through direct interaction with industry
    Inspiring and Involving: we help ensure a future pipeline of skilled and enthusiastic young people by using the excitement of our sciences to encourage wider take-up of STEM subjects in school and future life (science, technology, engineering and mathematics)

    We support an academic community of around 1,700 in particle physics, nuclear physics, and astronomy including space science, who work at more than 50 universities and research institutes in the UK, Europe, Japan and the United States, including a rolling cohort of more than 900 PhD students.

    STFC-funded universities produce physics postgraduates with outstanding high-end scientific, analytic and technical skills who on graduation enjoy almost full employment. Roughly half of our PhD students continue in research, sustaining national capability and creating the bedrock of the UK’s scientific excellence. The remainder – much valued for their numerical, problem solving and project management skills – choose equally important industrial, commercial or government careers.

    Our large-scale scientific facilities in the UK and Europe are used by more than 3,500 users each year, carrying out more than 2,000 experiments and generating around 900 publications. The facilities provide a range of research techniques using neutrons, muons, lasers and x-rays, and high performance computing and complex analysis of large data sets.

    They are used by scientists across a huge variety of science disciplines ranging from the physical and heritage sciences to medicine, biosciences, the environment, energy, and more. These facilities provide a massive productivity boost for UK science, as well as unique capabilities for UK industry.

    Our two Campuses are based around our Rutherford Appleton Laboratory at Harwell in Oxfordshire, and our Daresbury Laboratory in Cheshire – each of which offers a different cluster of technological expertise that underpins and ties together diverse research fields.

    The combination of access to world-class research facilities and scientists, office and laboratory space, business support, and an environment which encourages innovation has proven a compelling combination, attracting start-ups, SMEs and large blue chips such as IBM and Unilever.

    We think our science is awesome – and we know students, teachers and parents think so too. That’s why we run an extensive Public Engagement and science communication programme, ranging from loans to schools of Moon Rocks, funding support for academics to inspire more young people, embedding public engagement in our funded grant programme, and running a series of lectures, travelling exhibitions and visits to our sites across the year.

    Ninety per cent of physics undergraduates say that they were attracted to the course by our sciences, and applications for physics courses are up – despite an overall decline in university enrolment.

     
  • richardmitnick 11:01 am on September 5, 2018
    Tags: Google

    From Duke University via The News&Observer: “Look out, IBM. A Duke-led group is also a player in quantum computing” 

    Duke Bloc
    Duke Crest

    From Duke University

    via

    The News&Observer

    August 13, 2018
    Ray Gronberg

    Duke University professors Iman Marvian, Jungsang Kim and Kenneth Brown, gathered here in Kim’s lab in the Chesterfield Building in downtown Durham, are working together to develop a quantum computer that relies on “trapped ion” technology. The National Science Foundation and the federal Intelligence Advanced Research Projects Activity are helping fund the project. Les Todd LKT Photography, Inc.

    There’s a group based at Duke University that thinks it can out-do IBM in the quantum-computing game, and it just got another $15 million in funding from the U.S. government.

    Quantum computing – IBM

    The National Science Foundation grant is helping underwrite a consortium led by professors Jungsang Kim and Ken Brown that’s previously received backing from the federal Intelligence Advanced Research Projects Activity.

    Kim said the group is developing a quantum computer that has “up to a couple dozen qubits” of computational power and reckons it’s a year or so from being operational. The word qubit is the quantum-computing world’s equivalent of normal computing’s “bit” when it comes to gauging processing ability, and each additional qubit represents a doubling of that power.
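
    That doubling is just the exponential growth of the quantum state space: n qubits can represent 2^n basis states at once. A one-liner makes the scale obvious:

```python
# State-space size doubles with each added qubit.
for qubits in (2, 24, 50):
    print(f"{qubits} qubits -> {2 ** qubits:,} basis states")
```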

    “One of the goals of this [grant] is to establish the hardware so we can allow researchers to work on the software and systems optimization,” Kim said of the National Science Foundation grant the agency awarded on Aug. 6.

    Two or three dozen qubits might not sound like a lot when IBM says it has built and tested a 50-qubit machine. But the Duke-led research group is approaching the problem from an entirely different angle.

    The “trapped-ion” design it’s using could hold qubits steady in its internal memory for much longer than superconducting designs like those IBM is working on, Brown said.

    Superconducting designs — which operate at extremely cold temperatures — “are a bit faster” than trapped-ion ones and are the focus of “a much larger industrial effort,” Brown said.

    That speed-versus-resilience tradeoff could matter because IBM says its machines can hold a qubit steady in memory for only up to about 90 microseconds. That means processing runs have to be short, on the order of no more than a couple of seconds total.
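
    The coherence budget translates directly into how many gate operations fit in one run. A rough sketch, assuming a per-gate time of about 100 nanoseconds (a typical superconducting figure, not one quoted in the article):

```python
coherence_s = 90e-6   # ~90 microseconds, per IBM's figure above
gate_time_s = 100e-9  # assumed per-gate duration (not from the article)

max_gates = round(coherence_s / gate_time_s)
print(f"~{max_gates} sequential gates before coherence is lost")
```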

    “One thing that’s becoming clear in the community is, the thing we need to scale is not just the number of qubits but also the quality of operations,” said Brown, who in January traded a faculty post at Georgia Tech for a new one at Duke. “If you have a huge number of qubits but the operations are not very good, you effectively have a bad classical computer.”

    Kim added that designers working on quantum computers have to look for the same kind of breakthrough in thinking about the technology that the Wright brothers brought to the development of flight.

    Just as the Wrights and other people working in the field in the late 19th and early 20th centuries figured out that mimicking birds was a developmental dead end, the builders of quantum computers “have to start with something that’s fundamentally quantum and build the right technology to scale it,” Kim said. “You don’t build quantum computers by mimicking classical computers.”

    But for now, the government agencies that are subsidizing the field are backing different approaches and waiting to see what pans out.

    The Aug. 6 grant is the third big one Kim’s lab has secured, building on awards from IARPA in 2010 and 2016 that together brought it about $54.5 million in funding. But in both those rounds of funding, teams from IBM were also among those getting awards from the federal agency, which funds what it calls “high-risk/high-payoff” research for the intelligence community.

    The stakes are so high because quantum computing could become a breakthrough technology. It exploits the physics of subatomic particles in hopes of developing a machine that can process data that exists in multiple states at once, rather than the binary 1 or 0 of traditional computing.

    IBM and the government aren’t the only heavy hitters involved. Google has a quantum-computing project of its own that’s grown with help from IARPA funding.

    Google’s Quantum Dream Machine

    Kim and other people involved in the Duke-led group have also formed a company called IonQ that’s received investment from Google and Amazon.

    The Duke-led group also includes teams from the University of Maryland, the University of Chicago and Tufts University that are working on hardware, software and applications development, respectively, Duke officials say. Researchers from the University of New Mexico, MIT, the National Institute of Standards and Technology and the University of California-Berkeley are also involved.

    Duke doesn’t have quantum computing all to itself in the Triangle, as in the spring IBM made N.C. State University part of its Q Network, a group of businesses, universities and government agencies that can use IBM’s quantum machines via the cloud.

    But the big difference between the N.C. State and Duke efforts is that State’s focus is on developing the future workforce and beginning to push software development, while at Duke it’s more fundamentally about trying to develop the technology.

    Not that software is a side issue, mind.

    “If I had a quantum computer with 60 qubits, I know there are algorithms I can run on it that I can’t simulate with my regular computers,” Brown said, explaining that the technology requires new thinking there, too. “That’s a weird place to be.”

    The quantum project is important enough that Duke has backed it with faculty hires. Brown had been collaborating with Kim’s group for a while, but elected to move to Duke from Georgia Tech after Duke officials decided to conduct what Kim termed “a cluster hire” of quantum specialists.

    Brown joined Kim in the Pratt School of Engineering’s electrical and computer engineering department. A search for someone to fill an endowed chair in physics continues.

    Another professor involved, Iman Marvian, also joined the Duke faculty at the start of 2018 thanks to the university’s previously announced “quantitative initiative.” A quantum information theorist, he got a joint appointment in physics and engineering. He came to Duke from MIT after a post-doc stint at the Boston school.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Duke Campus

    Younger than most other prestigious U.S. research universities, Duke University consistently ranks among the very best. Duke’s graduate and professional schools — in business, divinity, engineering, the environment, law, medicine, nursing and public policy — are among the leaders in their fields. Duke’s home campus is situated on nearly 9,000 acres in Durham, N.C, a city of more than 200,000 people. Duke also is active internationally through the Duke-NUS Graduate Medical School in Singapore, Duke Kunshan University in China and numerous research and education programs across the globe. More than 75 percent of Duke students pursue service-learning opportunities in Durham and around the world through DukeEngage and other programs that advance the university’s mission of “knowledge in service to society.”

     
  • richardmitnick 4:58 pm on November 14, 2017
    Tags: Google, Quantum Circuits Company, "Robert Schoelkopf is at the forefront of a worldwide effort to build the world’s first quantum computer"

    From NYT: “Yale Professors Race Google and IBM to the First Quantum Computer” 

    The New York Times

    NOV. 13, 2017
    CADE METZ

    Prof. Robert Schoelkopf inside a lab at Yale University. Quantum Circuits, the start-up he has created with two of his fellow professors, is located just down the road. Credit Roger Kisby for The New York Times

    Robert Schoelkopf is at the forefront of a worldwide effort to build the world’s first quantum computer. Such a machine, if it can be built, would use the seemingly magical principles of quantum mechanics to solve problems today’s computers never could.

    Three giants of the tech world — Google, IBM, and Intel — are using a method pioneered by Mr. Schoelkopf, a Yale University professor, and a handful of other physicists as they race to build a machine that could significantly accelerate everything from drug discovery to artificial intelligence. So is a Silicon Valley start-up called Rigetti Computing. And though it has remained under the radar until now, those four quantum projects have another notable competitor: Robert Schoelkopf.

    After their research helped fuel the work of so many others, Mr. Schoelkopf and two other Yale professors have started their own quantum computing company, Quantum Circuits.

    Based just down the road from Yale in New Haven, Conn., and backed by $18 million in funding from the venture capital firm Sequoia Capital and others, the start-up is another sign that quantum computing — for decades a distant dream of the world’s computer scientists — is edging closer to reality.

    “In the last few years, it has become apparent to us and others around the world that we know enough about this that we can build a working system,” Mr. Schoelkopf said. “This is a technology that we can begin to commercialize.”

    Quantum computing systems are difficult to understand because they do not behave like the everyday world we live in. But this counterintuitive behavior is what allows them to perform calculations at a rate that would not be possible on a typical computer.

    Today’s computers store information as “bits,” with each transistor holding either a 1 or a 0. But thanks to something called the superposition principle — behavior exhibited by subatomic particles like electrons and photons, the fundamental particles of light — a quantum bit, or “qubit,” can store a 1 and a 0 at the same time. This means two qubits can hold four values at once. As you expand the number of qubits, the machine becomes exponentially more powerful.

    Todd Holmdahl, who oversees the quantum project at Microsoft, said he envisioned a quantum computer as something that could instantly find its way through a maze. “A typical computer will try one path and get blocked and then try another and another and another,” he said. “A quantum computer can try all paths at the same time.”

    The trouble is that storing information in a quantum system for more than a short amount of time is very difficult, and this short “coherence time” leads to errors in calculations. But over the past two decades, Mr. Schoelkopf and other physicists have worked to solve this problem using what are called superconducting circuits. They have built qubits from materials that exhibit quantum properties when cooled to extremely low temperatures.

    With this technique, they have shown that, every three years or so, they can improve coherence times by a factor of 10. This is known as Schoelkopf’s Law, a playful ode to Moore’s Law, the rule that says the number of transistors on computer chips will double every two years.
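
    Schoelkopf’s Law can be written down just as compactly as Moore’s: a factor-of-ten improvement roughly every three years. A small sketch of the projection (the baseline is arbitrary; only the ratio matters):

```python
def coherence_improvement(years):
    # Schoelkopf's Law as described above: tenfold improvement
    # roughly every three years.
    return 10 ** (years / 3)

# After two decades of such progress, relative to the baseline:
print(f"{coherence_improvement(21):,.0f}x improvement")
assert coherence_improvement(3) / coherence_improvement(0) == 10
```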

    Professor Schoelkopf, left, and Prof. Michel Devoret working on a device that can reach extremely low temperatures to allow a quantum computing device to function. Credit Roger Kisby for The New York Times

    “Schoelkopf’s Law started as a joke, but now we use it in many of our research papers,” said Isaac Chuang, a professor at the Massachusetts Institute of Technology. “No one expected this would be possible, but the improvement has been exponential.”

    These superconducting circuits have become the primary area of quantum computing research across the industry. One of Mr. Schoelkopf’s former students now leads the quantum computing program at IBM. The founder of Rigetti Computing studied with Michel Devoret, one of the other Yale professors behind Quantum Circuits.

    In recent months, after grabbing a team of top researchers from the University of California, Santa Barbara, Google indicated it is on the verge of using this method to build a machine that can achieve “quantum supremacy” — when a quantum machine performs a task that would be impossible on your laptop or any other machine that obeys the laws of classical physics.

    There are other areas of research that show promise. Microsoft, for example, is betting on particles known as anyons. But superconducting circuits appear likely to be the first systems that will bear real fruit.

    The belief is that quantum machines will eventually analyze the interactions between physical molecules with a precision that is not possible today, something that could radically accelerate the development of new medications. Google and others also believe that these systems can significantly accelerate machine learning, the field of teaching computers to learn tasks on their own by analyzing data or experiments with certain behavior.

    A quantum computer could also be able to break the encryption algorithms that guard the world’s most sensitive corporate and government data. With so much at stake, it is no surprise that so many companies are betting on this technology, including start-ups like Quantum Circuits.

    The deck is stacked against the smaller players, because the big-name companies have so much more money to throw at the problem. But start-ups have their own advantages, even in such a complex and expensive area of research.

    “Small teams of exceptional people can do exceptional things,” said Bill Coughran, who helped oversee the creation of Google’s vast internet infrastructure and is now investing in Mr. Schoelkopf’s company as a partner at Sequoia. “I have yet to see large teams inside big companies doing anything tremendously innovative.”

    Though Quantum Circuits is using the same quantum method as its bigger competitors, Mr. Schoelkopf argued that his company has an edge because it is tackling the problem differently. Rather than building one large quantum machine, it is constructing a series of tiny machines that can be networked together. He said this will make it easier to correct errors in quantum calculations — one of the main difficulties in building one of these complex machines.

    But each of the big companies insists that it holds an advantage — and each is loudly trumpeting its progress, even if a working machine is still years away.

    Mr. Coughran said that he and Sequoia envision Quantum Circuits evolving into a company that can deliver quantum computing to any business or researcher that needs it. Another investor, Canaan’s Brendan Dickinson, said that if a company like this develops a viable quantum machine, it will become a prime acquisition target.

    “The promise of a large quantum computer is incredibly powerful,” Mr. Dickinson said. “It will solve problems we can’t even imagine right now.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 3:06 pm on September 1, 2017 Permalink | Reply
    Tags: A Grand Tour of the Ocean Basins, , , Google   

    From Eos: “A Grand Tour of the Ocean Basins” 

    AGU bloc

    AGU
    Eos news bloc

    Eos

    9/1/17
    Declan G. De Paor

    A new teaching resource facilitates plate tectonic studies using a Google Earth virtual guided tour of ocean basins around the world.

    1
    Google Earth images provide detailed views of Earth’s continents and oceans. Custom overlays enhance the images, turning them into resources for instructors and students studying plate tectonic theory and other topics. A new online teaching resource takes advantage of Google Earth to offer a virtual tour of the world’s ocean basins, providing insights into the processes that shape oceans and continents. This Google Earth image displays data overlays showing ages of the ocean floor, together with tectonic plate boundaries. Credit: Models: Age of the Lithosphere for Google Earth and Using Google Earth to Explore Plate Tectonics. All figures showing Google Earth are ©2017, Google Inc. Images: PGC/NASA, Landsat/Copernicus, USGS. Data: SIO, NOAA, U.S. Navy, NGA, GEBC, USGS.

    Students, especially those at the beginner levels, are often presented with simplistic visualizations of plate tectonics that lack the rich detail and recent science available to researchers. Yet plate tectonics’ ability to explain fine details of the continental and oceanic lithosphere is the strongest available verification of this theory. Presenting more of this detail in a real-world setting can help motivate students to study the processes that mold Earth’s oceans and continents.

    Google Earth allows instructors and students to explore Earth’s oceans and continents in considerable detail. The images in this open access, online resource provide a striking portrait of the planet’s continents and oceans. A user can browse this virtual globe’s features and explore in fine detail mountain ranges, geological faults, ocean basins, and much more.

    Properly annotated, Google Earth can also provide insights into the geophysical processes that created the world as we see it today. It can serve as an informative tool for students and instructors in their study of tectonic plates, bringing to life the geological significance of features such as the famous Ring of Fire that girdles the Pacific.

    Our project, Google Earth for Onsite and Distance Education (GEODE), has now added a Grand Tour of the Ocean Basins to its website to provide such help. This tour gives instructors a way to become familiar with details of Earth’s tectonic story and to stay up to date about new insights into tectonic processes. They can then better respond to, and provide context for, on-the-spot questions from students as they become caught up in the images they view on Google Earth.

    The tour was designed for geoscience majors, but an instructor could edit it to suit general education or high school courses. Students can use the documentation as a self-study tool, even if they do not have extensive prior knowledge of tectonic processes.

    A Teaching Sequence

    The tour is organized in a teaching sequence, beginning with the East African Rift, continuing through the Red Sea and Gulf of Aden into the Arabian Sea. The tour proceeds to the passive margins of Antarctica, which lead tourists to the South Atlantic, North Atlantic, and Arctic oceans. En route, students visit thinned continental shelves and abandoned ocean basins (where seafloor spreading no longer occurs). The Lesser Antilles Arc and Scotia Arc serve as an introduction to Pacific continental arcs, transform boundaries, island arcs, and marginal basins. The tour ends with ophiolites—slivers of ocean thrust onto land—in Oman.

    The tour uses a series of Google Earth placemarks (map pin icons), with descriptions and illustrations in a separate Portable Document Format (PDF) file. We provide plate tectonic context by combining two superlative resources: ocean floor ages from the Age of the Lithosphere for Google Earth website (based on Müller et al. [2008]) and the plate boundary model from Laurel Goodell’s Science Education Resource Center page (based on Bird [2003]).

    Not Your Grandmother’s Plate Tectonics

    Our virtual tour of ocean basins includes lots of up-to-date local details, thanks largely to recent research that takes advantage of precise data provided by satellite-based GPS. Just as your car’s GPS receiver tells you how fast you are traveling and in what direction, highly sensitive GPS devices record plate velocities, even though plates move only at about the rate your fingernails grow. Researchers no longer regard plates as absolutely rigid: Internal plate deformation was first documented in the Indian Ocean [Wiens et al., 1985].

    GPS surveys and seismic records reveal large regions of deformation along diffuse boundaries between tectonic plates, where the movement is not along one well-defined plane. Instead, movement involves microplates: relatively rigid parts of plates that move with significantly differing velocities. For example, tour stop 9, the eastern Indian Ocean, shows the presence of widespread diffuse deformation in the Indian, Australian, and Capricorn plates (Figure 1). For mechanical reasons, these microplates tend to pivot about points separating regions of diffuse extension from compression, represented by white circle icons in the Google Earth tour.
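    The fingernail-pace plate motions and rotation poles described above can be put into numbers: a point on a rigid plate rotating about an Euler pole moves at v = ωR sin Δ, where Δ is the point's angular distance from the pole. A minimal sketch in Python; the function name and the rotation rate are illustrative, not values from the article.

    ```python
    import math

    def plate_speed_mm_per_yr(omega_deg_per_myr, angular_dist_deg, radius_km=6371.0):
        """Surface speed of a point on a rigid plate rotating about an Euler pole.

        v = omega * R * sin(delta), where delta is the angular distance from
        the pole to the point. Returns speed in mm/yr.
        """
        omega_rad_per_yr = math.radians(omega_deg_per_myr) / 1e6
        return omega_rad_per_yr * radius_km * 1e6 * math.sin(math.radians(angular_dist_deg))

    # Illustrative values: a plate rotating at 0.5 deg/Myr, measured at a
    # point 90 degrees from the Euler pole.
    speed = plate_speed_mm_per_yr(0.5, 90.0)
    print(f"{speed:.0f} mm/yr")  # about 56 mm/yr (roughly the rate fingernails grow)
    ```

    Points near the pole itself barely move at all, which is why the poles in Figure 1 sit in regions of little deformation.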

    Beyond Atlantic Style and Pacific Style

    Our Google Earth tour also allows us to address misconceptions about the boundaries between tectonic plates and between oceans and continents. Some of the most persistent misconceptions concern the differences between active plate boundaries and passive continental margins.

    A bit of background first: Active plate boundaries can be divergent (mid-ocean ridges), convergent (subduction and collision zones), or transform (e.g., the San Andreas Fault). At passive continental margins, oceanic lithosphere and continental lithosphere are welded together along the fossilized line of initial continental rifting. A person in our Google Earth tour will encounter numerous examples of both active plate boundaries and passive continental margins.

    3
    Fig. 1. This image from the grand tour illustrates diffuse deformation on the Indian, Australian, and Capricorn plates. Areas of extension are shaded gray; areas of contraction are yellow. Thick dashed lines mark the median lines of the zones of diffuse deformation. They define a diffuse triple junction. Open circles are poles of relative rotation of pairs of plates (a third pole may already be subducted under the Sunda Plate). These poles occupy regions of little deformation between the extensional and contractional zones. Purple dotted lines outline continental shelves. Credit: Based on data from Royer and Gordon [1997]

    Misconceptions arise from the introductory level on, where teachers present students with two basic cross sections of ocean basins: Atlantic style with two passive continental margins and Pacific style with two active plate boundaries. Students commonly draw cross sections with two symmetrical active convergent plate boundaries even though there is no such ocean basin on Earth.

    Symmetrical passive margins do exist, however: They border large regions of oceanic crust, including, for example, the North and South Atlantic oceans, the western portion of the Indian Ocean within the Arabian Sea, and the Southern Ocean between Australia and Antarctica as well as between Africa and Antarctica. But active basins are always asymmetrical, with ridges often far from the middle of the ocean basin. Seafloor spreading is generally symmetrical about ocean ridges (except for local instances of ridge jump), but there is no reason for subduction to occur at the same rate on either side of an ocean basin; hence, ridges migrate as they spread, and in places, they reach a trench and are subducted.

    Our grand tour presents lithospheric cross sections of the Pacific crust to scale, with its eastern 4,000-kilometer-wide Nazca Plate and western 12,000-kilometer-wide Pacific Plate. It also highlights the eastern Indian Ocean, with its passive margin against Madagascar and active plate boundary against Burma-Sumatra, the scene of the devastating, tsunami-generating earthquake of 26 December 2004 (Figure 2).

    3
    Fig. 2. An ocean can be bounded by a passive continental margin on one side and an active plate boundary on the other. In such cases, the spreading ridge is never in the middle of the ocean. A traverse from Madagascar in the west to Sumatra in the east serves as a modern-day analogue for times during the evolution of the Iapetus Ocean that was consumed in the Appalachian-Caledonian Orogeny.

    This combination of passive continental margin and active plate boundary serves as a good modern analogue for the Iapetus Ocean, the ocean that separated paleo–North America from paleo-Europe and paleo-Africa before the collisions that created the Appalachians, Caledonides, and associated mountains. Models of those mountain-building events involve a collision of active and passive sides of the ocean basin at times as Iapetus was consumed.

    Sampling Diversity in Ocean Basins

    The grand tour also visits many of the diverse features of Earth’s ocean basins. A significant amount of oceanic crust resides in failed or abandoned basins bounded by passive margins. Such regions include the Gulf of Mexico, the Labrador Sea and Baffin Bay between Canada and Greenland, the Bay of Biscay between France and Spain, the western Mediterranean, and the Tasman and Coral seas east of Australia, all of which are visited on the tour.

    Many offshore regions are underlain by oceanic crust that developed in marginal basins behind island arcs such as Japan and the Mariana Islands, and the tour visits these regions as well. Because the west side of the Pacific’s oceanic crust is so much older than the east, it is colder and denser and subducts steeply and rapidly. Consequently, trenches marking the initiation of subduction roll back eastward, like a Michael Jackson moonwalk. The resultant “trench suction” forces open multiple back-arc basins to the west of the main Pacific basin, with their own miniature spreading ridges.

    A third type of minor ocean basin is created by side-stepping transform fault arrays as in the Gulf of California and on the northern border of the Caribbean Plate. In such locations, transform faults are long, and spreading ridge segments are short.

    Finally, there are numerous oceanic plateaus with relatively thick crust derived from large igneous provinces or small submerged continental fragments. Examples of all of the above are included in our tour.

    Triple Junctions and Hot Spots

    The tour makes stops at triple junctions, where three major plates meet. At some locations, triangular microplates without any bounding continental margins grow, as exemplified by the Galápagos Microplate (Figure 3, tour stop 38). Researchers have found strong evidence that one such paleomicroplate grew to become the Pacific Plate (Figure 4) [Boschman and van Hinsbergen, 2016]. The Pacific oceanic crust never had passive continental margins. It was born at sea!

    5
    Fig. 3. Stop 38 on the Grand Tour of the Ocean Basins focuses on the Galápagos Microplate (designated µ in the image) on the East Pacific Rise. It sits at a triple junction where three large plates meet. The Galápagos hot spot to the east was probably instrumental in the location of the triple junction. Credit: Based on data from Schouten et al. [2008]

    Oceans are also home to mantle hot spot trails unrelated to plate boundaries. The grand tour visits the well-known Hawaiian Islands–Emperor Seamount trail. Numerous other trails are easily recognizable in Google Earth.

    File Formats and System Requirements

    The tour is presented in two file formats: Keyhole Markup Language (KML)—the format of Google Earth custom content—and an associated PDF file. Google Earth puts descriptive text and imagery into placemark balloons, which can obscure the surface of the map. Because these balloons cannot be dragged to one side, simultaneous viewing of KML maps and PDF descriptive documents is the solution. Dual monitors, twin projectors, or pairs of laptops make for the best viewing for personal study, lecture presentation, and student collaboration.

    The PDF document is laid out in frames suited to reading on digital devices. Each frame contains a block of text and associated imagery. Instructors may omit or rearrange tour stops to suit the needs of their courses. Because KML is human-readable, such rearrangements can be done in a text editor. Note that the KML file must be viewed on a desktop or laptop computer (Mac, Windows, or Linux) because Google Earth for mobile devices is highly limited.
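    Because KML is XML, the rearranging described above can also be scripted rather than done by hand in a text editor. A minimal sketch using Python's standard library; the two tour-stop names are hypothetical stand-ins, not the actual GEODE placemarks.

    ```python
    import xml.etree.ElementTree as ET

    # A minimal two-placemark KML document (hypothetical tour stops).
    # KML 2.2 uses the OGC namespace below.
    KML_NS = "http://www.opengis.net/kml/2.2"
    ET.register_namespace("", KML_NS)

    kml_text = f"""<kml xmlns="{KML_NS}">
      <Document>
        <Placemark><name>Stop 2: Red Sea</name></Placemark>
        <Placemark><name>Stop 1: East African Rift</name></Placemark>
      </Document>
    </kml>"""

    root = ET.fromstring(kml_text)
    doc = root.find(f"{{{KML_NS}}}Document")
    placemarks = doc.findall(f"{{{KML_NS}}}Placemark")

    # Reorder stops by name; an instructor could apply any ordering here.
    for pm in placemarks:
        doc.remove(pm)
    for pm in sorted(placemarks, key=lambda p: p.find(f"{{{KML_NS}}}name").text):
        doc.append(pm)

    print([pm.find(f"{{{KML_NS}}}name").text for pm in doc])
    # ['Stop 1: East African Rift', 'Stop 2: Red Sea']
    ```

    Writing the modified tree back out with `ET.ElementTree(root).write(...)` would produce a rearranged tour file that Google Earth can open directly.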

    5
    Fig. 4. Stop 39 on the Grand Tour of the Ocean Basins looks at the formation of the Pacific Plate. The oldest isochrons are not seen at the western subduction zone with the Eurasian Plate and associated marginal basins; rather, the oldest oceanic crust forms a Russian doll–style set of nested triangles, suggesting that the Pacific Plate started as a triangular microplate growing from a triple junction, just like the Galápagos Plate today.

    Trying It Out for Yourself

    The KML and PDF files are available for download. The KML download contains a simple network link to an online KML document so that updates occur automatically whenever the document is opened in Google Earth.

    The author invites suggestions for continuously improving this resource.

    Acknowledgments

    Development was supported by the National Science Foundation under grant NSF DUE 1323419, “Google Earth for Onsite and Distance Education (GEODE).” Any opinions, findings, and conclusions or recommendations are those of the author and do not necessarily reflect the views of the National Science Foundation. Thanks are owed to the Eos editors and to two anonymous reviewers for very helpful suggestions that improved the submitted manuscript.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Eos is the leading source for trustworthy news and perspectives about the Earth and space sciences and their impact. Its namesake is Eos, the Greek goddess of the dawn, who represents the light shed on understanding our planet and its environment in space by the Earth and space sciences.

     
  • richardmitnick 10:57 am on September 19, 2016 Permalink | Reply
    Tags: , Google, ,   

    From New Scientist- “Revealed: Google’s plan for quantum computer supremacy” 

    NewScientist

    New Scientist

    31 August 2016 [This just now appeared in social media.]
    Jacob Aron

    1
    Superconducting qubits are tops. UCSB

    The field of quantum computing is undergoing a rapid shake-up, and engineers at Google have quietly set out a plan to dominate.

    SOMEWHERE in California, Google is building a device that will usher in a new era for computing. It’s a quantum computer, the largest ever made, designed to prove once and for all that machines exploiting exotic physics can outperform the world’s top supercomputers.

    And New Scientist has learned it could be ready sooner than anyone expected – perhaps even by the end of next year.

    The quantum computing revolution has been a long time coming. In the 1980s, theorists realised that a computer based on quantum mechanics had the potential to vastly outperform ordinary, or classical, computers at certain tasks. But building one was another matter. Only recently has a quantum computer that can beat a classical one gone from a lab curiosity to something that could actually happen. Google wants to create the first.

    The firm’s plans are secretive, and Google declined to comment for this article. But researchers contacted by New Scientist all believe it is on the cusp of a breakthrough, following presentations at conferences and private meetings.

    “They are definitely the world leaders now, there is no doubt about it,” says Simon Devitt at the RIKEN Center for Emergent Matter Science in Japan. “It’s Google’s to lose. If Google’s not the group that does it, then something has gone wrong.”

    We have had a glimpse of Google’s intentions. Last month, its engineers quietly published a paper detailing their plans (arxiv.org/abs/1608.00263). Their goal, audaciously named quantum supremacy, is to build the first quantum computer capable of performing a task no classical computer can.

    “It’s a blueprint for what they’re planning to do in the next couple of years,” says Scott Aaronson at the University of Texas at Austin, who has discussed the plans with the team.

    So how will they do it? Quantum computers process data as quantum bits, or qubits. Unlike classical bits, these can store a mixture of both 0 and 1 at the same time, thanks to the principle of quantum superposition. It’s this potential that gives quantum computers the edge at certain problems, like factoring large numbers. But ordinary computers are also pretty good at such tasks. Showing quantum computers are better would require thousands of qubits, which is far beyond our current technical ability.

    Instead, Google wants to claim the prize with just 50 qubits. That’s still an ambitious goal – publicly, they have only announced a 9-qubit computer – but one within reach.

    To help it succeed, Google has brought the fight to quantum’s home turf. It is focusing on a problem that is fiendishly difficult for ordinary computers but that a quantum computer will do naturally: simulating the behaviour of a random arrangement of quantum circuits.

    Any small variation in the input into those quantum circuits can produce a massively different output, so it’s difficult for the classical computer to cheat with approximations to simplify the problem. “They’re doing a quantum version of chaos,” says Devitt. “The output is essentially random, so you have to compute everything.”

    To push classical computing to the limit, Google turned to Edison, one of the most advanced supercomputers in the world, housed at the US National Energy Research Scientific Computing Center. Google had it simulate the behaviour of quantum circuits on increasingly large grids of qubits, up to a 6 × 7 grid of 42 qubits.

    This computation is difficult because as the grid size increases, the amount of memory needed to store everything balloons rapidly. A 6 × 4 grid needed just 268 megabytes, less than is found in your average smartphone. The 6 × 7 grid demanded 70 terabytes, roughly 10,000 times that of a high-end PC.

    Google stopped there because going to the next size up is currently impossible: a 48-qubit grid would require 2.252 petabytes of memory, almost double that of the top supercomputer in the world. If Google can solve the problem with a 50-qubit quantum computer, it will have beaten every other computer in existence.
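    The memory figures quoted above follow directly from storing all 2^n complex amplitudes of an n-qubit state. A quick check, assuming 16 bytes per double-precision complex amplitude; the 48-qubit figure in the article corresponds to 8 bytes per amplitude, i.e. single precision.

    ```python
    def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
        """Memory to store a full n-qubit state vector: 2**n complex amplitudes."""
        return (2 ** n_qubits) * bytes_per_amplitude

    # 6 x 4 grid, 24 qubits: ~268 MB, matching the figure in the article.
    print(state_vector_bytes(24) / 1e6)   # 268.435456
    # 6 x 7 grid, 42 qubits: ~70 TB.
    print(state_vector_bytes(42) / 1e12)  # ~70.4
    # 48 qubits at 8 bytes per amplitude: ~2.25 PB, matching the article's
    # figure; double precision would need roughly twice that.
    print(state_vector_bytes(48, 8) / 1e15)
    ```

    Each additional qubit doubles the requirement, which is why classical simulation hits a hard wall just short of 50 qubits.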

    Eyes on the prize

    By setting out this clear test, Google hopes to avoid the problems that have plagued previous claims of quantum computers outperforming ordinary ones – including some made by Google.

    Last year, the firm announced it had solved certain problems 100 million times faster than a classical computer by using a D-Wave quantum computer, a commercially available device with a controversial history. Experts immediately dismissed the results, saying they weren’t a fair comparison.

    Google purchased its D-Wave computer in 2013 to figure out whether it could be used to improve search results and artificial intelligence. The following year, the firm hired John Martinis at the University of California, Santa Barbara, to design its own superconducting qubits. “His qubits are way higher quality,” says Aaronson.

    It’s Martinis and colleagues who are now attempting to achieve quantum supremacy with 50 qubits, and many believe they will get there soon. “I think this is achievable within two or three years,” says Matthias Troyer at the Swiss Federal Institute of Technology in Zurich. “They’ve showed concrete steps on how they will do it.”

    Martinis and colleagues have discussed a number of timelines for reaching this milestone, says Devitt. The earliest is by the end of this year, but that is unlikely. “I’m going to be optimistic and say maybe at the end of next year,” he says. “If they get it done even within the next five years, that will be a tremendous leap forward.”

    The first successful quantum supremacy experiment won’t give us computers capable of solving any problem imaginable – based on current theory, those will need to be much larger machines. But having a working, small computer could drive innovation, or augment existing computers, making it the start of a new era.

    Aaronson compares it to the first self-sustaining nuclear reaction, achieved by the Manhattan project in Chicago in 1942. “It might be a thing that causes people to say, if we want a fully scalable quantum computer, let’s talk numbers: how many billions of dollars?” he says.

    Solving the challenges of building a 50-qubit device will prepare Google to construct something bigger. “It’s absolutely progress toward building a fully scalable machine,” says Ian Walmsley at the University of Oxford.

    For quantum computers to be truly useful in the long run, we will also need robust quantum error correction, a technique to mitigate the fragility of quantum states. Martinis and others are already working on this, but it will take longer than achieving quantum supremacy.

    Still, achieving supremacy won’t be dismissed.

    “Once a system hits quantum supremacy and is showing clear scale-up behaviour, it will be a flare in the sky to the private sector,” says Devitt. “It’s ready to move out of the labs.”

    “The field is moving much faster than expected,” says Troyer. “It’s time to move quantum computing from science to engineering and really build devices.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 8:03 pm on July 24, 2016 Permalink | Reply
    Tags: , Google, ,   

    From Science Alert: “Google’s quantum computer just accurately simulated a molecule for the first time” 

    ScienceAlert

    Science Alert

    22 JUL 2016
    DAVID NIELD

    1

    It’s a quantum world, we’re just living in it.

    Google’s engineers just achieved a milestone in quantum computing: they’ve produced the first completely scalable quantum simulation of a hydrogen molecule.

    That’s big news, because it shows similar devices could help us unlock the quantum secrets hidden in the chemistry that surrounds us.

    Researchers working with the Google team were able to accurately simulate the energy of hydrogen (H2) molecules, and if we can repeat the trick for other molecules, we could see the benefits in everything from solar cells to medicines.

    These types of predictions are often impossible for ‘classical’ computers or take an extremely long time – working out the energy of something like a propane (C3H8) molecule would take a supercomputer in the region of 10 days.

    To achieve the feat, Google’s engineers teamed up with researchers from Harvard University, Lawrence Berkeley National Labs, UC Santa Barbara, Tufts University, and University College London in the UK.

    “While the energies of molecular hydrogen can be computed classically (albeit inefficiently), as one scales up quantum hardware it becomes possible to simulate even larger chemical systems, including classically intractable ones,” writes Google Quantum Software Engineer Ryan Babbush.

    Chemical reactions are quantum in nature, because they form highly entangled quantum superposition states. In other words, each particle’s state can’t be described independently of the others, and that causes problems for computers used to dealing in binary values of 1s and 0s.

    Enter Google’s universal quantum computer, which deals in qubits – bits that themselves can be in a state of superposition, representing both 1 and 0 at the same time.

    To run the simulation, the engineers used a supercooled quantum circuit driven by an algorithm called the variational quantum eigensolver (VQE) – a hybrid approach in which a classical optimiser repeatedly adjusts the parameters of the quantum circuit until the energy it measures is minimised.
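    In essence, a VQE pairs a quantum circuit with a classical optimiser that tunes the circuit's parameters until the measured energy stops decreasing. A purely classical sketch of that loop for a toy one-qubit Hamiltonian; the coefficients are illustrative placeholders, not the values from the Google/Harvard hydrogen experiment.

    ```python
    import numpy as np

    # Toy one-qubit Hamiltonian H = g0*I + g1*Z + g2*X with made-up coefficients.
    g0, g1, g2 = -0.5, 0.3, 0.2
    I = np.eye(2)
    Z = np.array([[1.0, 0.0], [0.0, -1.0]])
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    H = g0 * I + g1 * Z + g2 * X

    def energy(theta):
        """Expectation <psi|H|psi> for the ansatz |psi> = cos(t/2)|0> + sin(t/2)|1>."""
        psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
        return psi @ H @ psi

    # "Variational" step: scan the single parameter and keep the lowest energy.
    # (A real VQE measures energy on quantum hardware instead of computing it.)
    thetas = np.linspace(0, 2 * np.pi, 2001)
    vqe_energy = min(energy(t) for t in thetas)

    exact_ground = np.linalg.eigvalsh(H)[0]
    print(f"VQE estimate: {vqe_energy:.6f}, exact: {exact_ground:.6f}")
    ```

    The variational estimate converges on the exact ground-state eigenvalue, which is the same kind of agreement the hydrogen curves show.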

    2
    Credit: Google

    When the results of the VQE were compared against the exact energy of the hydrogen molecule, the curves matched almost exactly, as you can see in the graph above.

    Babbush explains that going from qualitative and descriptive chemistry simulations to quantitative and predictive ones “could modernise the field so dramatically that the examples imaginable today are just the tip of the iceberg”.

    We’re dealing with the very first steps of modelling reality, and Google says we could start to see applications in all kinds of systems involving chemistry: improved batteries, flexible electronics, new types of materials, and more.

    One potential use is modelling the way bacteria produce fertiliser. The way humans produce fertiliser is extremely inefficient in environmental terms, consuming 1-2 percent of the world’s energy each year – so any improvements in understanding the chemical reactions involved could produce massive gains.

    It’s still early days though, and while we’ve described Google’s hardware as a quantum computer for simplicity’s sake, there’s still an ongoing debate over whether we’ve cracked the quantum computing code just yet.

    Some say Google’s machine is still a prototype, part-quantum computer rather than the real deal. But while the scientists discuss the ins and outs of that argument, at least we’re starting to reap the benefits of the technology – and can look forward to a near future where computing power is almost unimaginable.

    The findings are published in Physical Review X.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 8:32 pm on September 28, 2015 Permalink | Reply
    Tags: , Google, ,   

    From WIRED: “The Other Way A Quantum Computer Could Revive Moore’s Law” 

    Wired logo

    Wired

    09.28.15
    Cade Metz

    1
    D-Wave’s quantum chip. Google

    Google is upgrading its quantum computer. Known as the D-Wave, Google’s machine is making the leap from 512 qubits—the fundamental building block of a quantum computer—to more than 1,000 qubits. And according to the company that built the system, this leap doesn’t require a significant increase in power, something that could augur well for the progress of quantum machines.

    Together with NASA and the Universities Space Research Association, or USRA, Google operates its quantum machine at the NASA Ames Research center not far from its Mountain View, California headquarters. Today, D-Wave Systems, the Canadian company that built the machine, said it has agreed to provide regular upgrades to the system—keeping it “state-of-the-art”—for the next seven years. Colin Williams, director of business development and strategic partnerships for D-Wave, calls this “the biggest deal in the company’s history.” The system is also used by defense giant Lockheed Martin, among others.

    Though the D-Wave machine is less powerful than many scientists hope quantum computers will one day be, the leap to 1,000 qubits represents an exponential improvement in what the machine is capable of. What is it capable of? Google and its partners are still trying to figure that out. But Google has said it’s confident there are situations where the D-Wave can outperform today’s non-quantum machines, and scientists at the University of Southern California have published research suggesting that the D-Wave exhibits behavior beyond classical physics.

    Over the life of Google’s contract, if all goes according to plan, the performance of the system will continue to improve. But there’s another characteristic to consider. Williams says that as D-Wave expands the number of qubits, the amount of power needed to operate the system stays roughly the same. “We can increase performance with constant power consumption,” he says. At a time when today’s computer chip makers are struggling to get more performance out of the same power envelope, the D-Wave goes against the trend.

    The Qubit

    A quantum computer operates according to the principles of quantum mechanics, the physics of very small things, such as electrons and photons. In a classical computer, a transistor stores a single “bit” of information. If the transistor is “on,” it holds a 1, and if it’s “off,” it holds a 0. But in a quantum computer, thanks to what’s called the superposition principle, information is held in a quantum system that can exist in two states at the same time. This “qubit” can store a 0 and 1 simultaneously.

    Two qubits, then, can hold four values at any given time (00, 01, 10, and 11). And as you keep increasing the number of qubits, you exponentially increase the power of the system. The problem is that building a qubit is an extremely difficult thing. If you read information from a quantum system, it “decoheres.” Basically, it turns into a classical bit that houses only a single value.
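    That exponential growth is easy to see in a classical state-vector simulation (a sketch of the bookkeeping, not of how real quantum hardware works):

    ```python
    import numpy as np

    # A qubit state is a 2-amplitude vector; |+> is an equal superposition of 0 and 1.
    plus = np.array([1.0, 1.0]) / np.sqrt(2)

    # Joining qubits takes the tensor (Kronecker) product, so the number of
    # amplitudes doubles with every qubit: 2 qubits -> 4 values, n -> 2**n.
    state = plus
    for _ in range(1, 10):
        state = np.kron(state, plus)

    print(len(state))          # 1024 amplitudes for 10 qubits
    print(np.sum(state ** 2))  # probabilities still sum to 1
    ```

    Ten qubits already require 1,024 amplitudes, which is why adding qubits grows the system's capacity so quickly.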

    D-Wave believes it has found a way around this problem. It released its first machine, spanning 16 qubits, in 2007. Together with NASA, Google started testing the machine when it reached 512 qubits a few years back. Each qubit, D-Wave says, is a superconducting circuit—a tiny loop of flowing current—and these circuits are dropped to extremely low temperatures so that the current flows in both directions at once. The machine then performs calculations using algorithms that, in essence, determine the probability that a collection of circuits will emerge in a particular pattern when the temperature is raised.

    Reversing the Trend

    Some have questioned whether the system truly exhibits quantum properties. But researchers at USC say that the system appears to display a phenomenon called “quantum annealing” that suggests it’s truly operating in the quantum realm. Regardless, the D-Wave is not a general quantum computer—that is, it’s not a computer for just any task. But D-Wave says the machine is well-suited to “optimization” problems, where you’re facing many, many different ways forward and must pick the best option, and to machine learning, where computers teach themselves tasks by analyzing large amounts of data.

    D-Wave says that most of the power needed to run the system is related to the extreme cooling. The entire system consumes about 15 kilowatts of power, while the quantum chip itself uses a fraction of a microwatt. “Most of the power,” Williams says, “is being used to run the refrigerator.” This means that the company can continue to improve its performance without significantly expanding the power it has to use. At the moment, that’s not hugely important. But in a world where classical computers are approaching their limits, it at least provides some hope that the trend can be reversed.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 9:53 am on August 18, 2015 Permalink | Reply
    Tags: , Google,   

    From WIRED: “How Much Can You Save With Solar Panels? Just Ask Google” 

    Wired logo

    Wired

    08.18.15
    Cade Metz

    1
    Google

    If you’re considering solar power but aren’t quite sure it’s worth the expense, Google wants to point you in the right direction. Tapping its trove of satellite imagery and the latest in artificial intelligence, the company is offering a new online service that will instantly estimate how much you’ll save with a roof full of solar panels.

    3
    The first three concentrated solar power (CSP) units of Spain’s Solnova Solar Power Station in the foreground, with the PS10 and PS20 solar power towers in the background

    On Monday, the company unveiled Project Sunroof, a tool that calculates your home’s solar power potential using the same high-resolution aerial photos Google Earth uses to map the planet. After creating a 3-D model of your roof, the service estimates how much sun will hit those solar panels during the year and how much money the panels could save you over the next two decades. “People search Google all the time to learn about solar,” says Google’s Joel Conkling. “But it would be much more helpful if they could learn whether their particular roof is a good fit.”
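    Google hasn’t published Sunroof’s model, but the general shape of such an estimate is straightforward: usable roof area times sunlight times panel efficiency, converted to kilowatt-hours and priced at the local electricity rate. Here is a hedged sketch, where every default (efficiency, rate, irradiance, the 20-year horizon) is an illustrative assumption, not Sunroof’s actual parameters:

```python
def estimate_solar_savings(usable_roof_m2, sun_hours_per_day,
                           panel_efficiency=0.18, rate_usd_per_kwh=0.20,
                           years=20, irradiance_w_per_m2=1000):
    """Very rough lifetime savings: irradiance * area * efficiency,
    converted to kWh/day, scaled to a year, and priced at the local
    electricity rate. All defaults are illustrative assumptions."""
    daily_kwh = (usable_roof_m2 * irradiance_w_per_m2 / 1000
                 * panel_efficiency * sun_hours_per_day)
    annual_kwh = daily_kwh * 365
    return annual_kwh * rate_usd_per_kwh * years

# e.g. 30 m^2 of usable roof averaging 5 sun-hours per day
savings = estimate_solar_savings(30, 5)
```

    Sunroof’s real advantage over a formula like this is in the inputs: it derives the usable area and sun-hours from a 3-D model of your particular roof rather than from flat assumptions.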

    2
    Google

    The service is now available for homes in the San Francisco Bay Area, central California, and the greater Boston area. Google is headquartered in California, you see, and project creator Carl Elkin lives in Boston. Based in the company’s Cambridge offices, Elkin typically works on Google’s search engine, but he developed Project Sunroof during his “20 percent time”—that slice of the work week Googlers can use for independent projects.

    How Google Parses Your Roof

    Elkin’s own home has solar panels, and he once volunteered with Solarize Massachusetts to promote solar in the Bay State. He and Google see Project Sunroof pushing solar use further still. “People want to go solar but don’t understand how cheap it is,” Elkin says. “I wanted people to understand that they can actually save money.”

    As Google notes in a blog post announcing Project Sunroof, the time is ripe for such a tool. “This is an extremely useful thing,” says Roland Winston, a professor at the University of California, Merced, who specializes in solar energy. “Solar technology is cheaper than ever.” Indeed, others have developed services along these lines, including academics and companies like Geostellar and Mapdwell.

    But Google’s service is a bit different. It has Google behind it—and the company is taking a particularly comprehensive approach. In analyzing satellite images of your home, Google uses “deep learning” neural networks to separate your roof from the surrounding trees and shadows. “Even a strong solar advocate like me wouldn’t recommend putting solar panels on your trees,” Elkin says. Mimicking the web of neurons in the human brain, this sort of neural network is the same technology used to recognize faces on Facebook or instantly translate from one language to another on Skype.
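    Sunroof’s actual segmentation uses deep neural networks, as described above. As a toy stand-in that shows what “separating your roof from the surrounding trees” means at the pixel level, here is a crude greenness rule — the thresholding logic is purely illustrative and nothing like the real model:

```python
# Toy stand-in for the segmentation step: label each pixel of an
# aerial tile as "roof" or "not roof" by color. Sunroof uses deep
# neural networks; this greenness rule is illustrative only.
def segment_roof(image):
    """image: list of rows of (r, g, b) tuples, values 0-255.
    Returns a mask with True where the pixel looks like roof
    (i.e., where green does not dominate the other channels)."""
    mask = []
    for row in image:
        mask.append([not (g > r and g > b) for (r, g, b) in row])
    return mask

tile = [[(120, 120, 125), (60, 140, 60)],
        [(118, 119, 121), (55, 130, 50)]]
mask = segment_roof(tile)  # grey pixels -> roof, green pixels -> tree
```

    A deep network learns a far richer version of this decision from labeled imagery — handling shadows, shingle colors, and overhanging branches that a fixed color rule cannot.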

    Project Sunroof also simulates the shadows that typically cover your home on any given day (see animation above), and it tracks local weather patterns. “We’re able to show how much energy is hitting each part of your roof,” Conkling says. And if you like, you can further hone the company’s calculations by providing how much you typically spend on electricity (otherwise, the service relies on public utility rates in your area).

    Beyond Elkin’s personal crusade, Google has a long history of advocating for solar power. In addition to investing in solar as a means of powering its global network of data centers, the company has previously invested in residential solar projects. But this isn’t mere charity work. Project Sunroof also recommends solar providers in your area, and Google plans to eventually take a referral fee from these providers. “We want to help people understand the potential of solar power,” says Conkling. “But we can make some money off of that as well.”

    See the full article here.


     
  • richardmitnick 2:34 pm on January 22, 2013 Permalink | Reply
    Tags: , Google, ScienceSprings   

    A Note About the ScienceSprings Fan Page at Facebook 

    Some time ago, as an experiment, I started a Fan Page for ScienceSprings at Facebook. I assumed it would be a total flop. I mean, you know, it is hard enough to get people to be interested in science; how far can one go in asking for their allegiance?

    A problem occurred through my own ignorance: since I “Liked” the page, entries there went through to my own Facebook page, and I assumed that they went to all of my “friends” at Facebook. But my daughter let me know that was not the case. She commented that she had not seen anything from me in quite a while. So, I put a note on the Fan Page that I needed to stop using it.

    Now, just today, a friend explained to me how it all works. So I set about bringing the Fan Page up to date from about December 8, 2012, until the present.

    Well, the digirati at Facebook went nuts, told me I was going “too fast” and “blocked” me for two days. Too fast? What does “digital” mean? How can one go “too fast”?

    Anyway, now that I know how it works, I will be re-energizing the page for all of those interested.

    BTW, ScienceSprings is now also at Google+. You can search “Richard Mitnick”, or, I am told, you can actually search “ScienceSprings”. If you are using Google+, please search me up and add me to your circles.

     