Tagged: New Scientist

  • richardmitnick 1:48 pm on December 13, 2016 Permalink | Reply
    Tags: , New Scientist,   

    From New Scientist: “Quantum computers ditch all the lasers for easier engineering” 

    New Scientist

    7 December 2016
    Michael Brooks

    Lasers are not the only option. Richard Kail/Science Photo Library

    They will be the ultimate multitaskers – but quantum computers might take a bit of juggling to operate. Now, a team has simplified their inner workings.

    Computers that take advantage of quantum laws allowing particles to exist in multiple states at the same time promise to run millions of calculations at once. One of the candidate technologies involves ion traps, which hold and manipulate charged particles, called ions, to encode information.

    But to make a processor that works faster than a classical computer would require millions of such traps, each controlled with its own precisely aligned laser – making it extremely complicated.

    Now, Winfried Hensinger at the University of Sussex in the UK and his colleagues have replaced the millions of lasers with some static magnets and a handful of electromagnetic fields. “Our invention has led to a radical simplification of the engineering required, which means we are now able to construct a large-scale device,” he says.

    In their scheme, each ion is trapped by four permanent magnets, with a controllable voltage across the trap. The entire device is bathed in a set of tuned microwave and radio-frequency electromagnetic fields. Tweaking the voltage shifts the ions to a different position in the magnetic field, changing their state.

    Promising technology

    The researchers have already used this idea to build and operate a quantum logic gate, a building block of a processor. This particular gate involves entangling two ions – in other words, linking their quantum states such that they are fully dependent on each other. Hensinger says this is the most difficult kind of logic gate to build.
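
    For readers who want a concrete picture of what an entangling gate does, here is a minimal numerical sketch (not the Sussex group’s actual implementation, just the textbook two-qubit circuit) in which a CNOT gate turns two independent qubits into a fully entangled Bell pair:

    import numpy as np

    # Single-qubit basis state |0>
    zero = np.array([1, 0], dtype=complex)

    # Hadamard puts the first qubit into an equal superposition of |0> and |1>
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

    # CNOT flips the second qubit only when the first qubit is |1>
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)

    # Start both qubits in |0>, superpose the first, then entangle with CNOT
    state = CNOT @ np.kron(H @ zero, zero)
    print(state)  # [0.707, 0, 0, 0.707]: measuring one qubit now fixes the other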

    Manas Mukherjee at the National University of Singapore is impressed with the new technology. “It’s a promising development, with good potential for scaling up,” he says.

    That’s exactly what the team is planning: they hope to have a trial device containing tens of ions ready within four years.

    Because the device relies on existing technologies, such as silicon-chip manufacturing techniques, there are no known roadblocks to scaling it up into a useful quantum computer.

    It won’t be plain sailing, though. Scaling up will mean creating magnetic fields that vary in strength over relatively short distances. This is a significant engineering challenge, says Mukherjee. Then there’s the challenge of handling waste heat, which becomes more problematic as the processor gets bigger. “As with any architecture, you need low heating rates,” he says.

    Journal reference: Physical Review Letters, DOI: 10.1103/PhysRevLett.117.220501

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 11:07 am on November 25, 2016 Permalink | Reply
    Tags: , , New Scientist, ,   

    From New Scientist: “Gravity may have chased light in the early universe” 

    New Scientist

    23 November 2016
    Michael Brooks

    Getting up to speed. Manuela Schewe-Behnisch/EyeEm/Getty

    It’s supposed to be the most fundamental constant in physics, but the speed of light may not always have been the same. This twist on a controversial idea could overturn our standard cosmological wisdom.

    In 1998, Joao Magueijo at Imperial College London proposed that the speed of light might vary, to solve what cosmologists call the horizon problem: the universe reached a uniform temperature long before heat-carrying photons, which travel at the speed of light, had time to reach all corners of the universe.

    The standard way to explain this conundrum is an idea called inflation, which suggests that the universe went through a short period of rapid expansion early on – so the temperature evened out when the cosmos was smaller, then it suddenly grew. But we don’t know why inflation started, or stopped. So Magueijo has been looking for alternatives.

    Now, in a paper to be published on 28 November in Physical Review D, he and Niayesh Afshordi at the Perimeter Institute in Canada have laid out a new version of the idea – and this one is testable. They suggest that in the early universe, light and gravity propagated at different speeds.

    If photons moved faster than gravity just after the big bang, that would have let them get far enough for the universe to reach an equilibrium temperature much more quickly, the team say.

    A testable theory

    What really excites Magueijo about the idea is that it makes a specific prediction about the cosmic microwave background (CMB). This radiation, which fills the universe, was created shortly after the big bang and contains a “fossilised” imprint of the conditions of the universe.

    CMB per ESA/Planck

    In Magueijo and Afshordi’s model, certain details about the CMB reflect the way the speed of light and the speed of gravity vary as the temperature of the universe changes. They found that there was an abrupt change at a certain point, when the ratio of the speeds of light and gravity rapidly went to infinity.

    This fixes a value called the spectral index, which describes the initial density ripples in the universe, at 0.96478 – a value that can be checked against future measurements. The latest figure, reported by the CMB-mapping Planck satellite in 2015, places the spectral index at about 0.968, which is tantalisingly close.
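
    As a rough sense of how close that is: taking an uncertainty of about ±0.006 on the Planck 2015 spectral index (an approximate figure assumed here purely for illustration), the prediction sits only about half a standard deviation from the measurement.

    # Back-of-the-envelope comparison of prediction and measurement.
    # The 0.006 uncertainty is an assumed, approximate figure for Planck 2015.
    predicted = 0.96478
    measured = 0.968
    sigma = 0.006

    difference = measured - predicted
    print(f"difference = {difference:.5f} ({difference / sigma:.1f} sigma)")
    # difference = 0.00322 (0.5 sigma): well inside current error bars, so sharper
    # future CMB measurements are needed to confirm or rule the model out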

    ESA/Planck

    If more data reveals a mismatch, the theory can be discarded. “That would be great – I won’t have to think about these theories again,” Magueijo says. “This whole class of theories in which the speed of light varies with respect to the speed of gravity will be ruled out.”

    But no measurement will rule out inflation entirely, because it doesn’t make specific predictions. “There is a huge space of possible inflationary theories, which makes testing the basic idea very difficult,” says Peter Coles at Cardiff University, UK. “It’s like nailing jelly to the wall.”

    That makes it all the more important to explore alternatives like varying light speeds, he adds.

    John Webb of the University of New South Wales in Sydney, Australia, has worked for many years on the idea that constants may vary, and is “very impressed” by Magueijo and Afshordi’s prediction. “A testable theory is a good theory,” he says.

    The implications could be profound. Physicists have long known there is a mismatch in the way the universe operates on its smallest scales and at its highest energies, and have sought a theory of quantum gravity to unite them. If there is a good fit between Magueijo’s theory and observations, it could bridge this gap, adding to our understanding of the universe’s first moments.

    “We have a model of the universe that embraces the idea there must be new physics at some point,” Magueijo says. “It’s complicated, obviously, but I think ultimately there will be a way of informing quantum gravity from this kind of cosmology.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 10:04 am on October 22, 2016 Permalink | Reply
    Tags: , , KIC 9832227, New Scientist, Red nova   

    From New Scientist: “Double star may light up the sky as rare red nova in six years” 

    New Scientist

    21 October 2016
    Ken Croswell

    Is another one around the corner? NASA / Hubble Heritage Team (AURA/STScI)

    A dim binary star is behaving exactly as expected if it is about to explode as a “red nova”. If that happens, in 2022 or so it could shine as brightly as the North Star.

    Dozens of ordinary novae – the temporary flare-ups of white dwarf stars stealing gas from their companion star – explode in our galaxy every year. These novae turn blue.

    In recent years, however, astronomers have discovered a rare type of nova that turns red instead. At peak brightness, many red novae rival the most luminous stars in the galaxy.

    A red nova in 2008 gave us a clue as to why these explosions happen: observations made before the blast revealed that the nova was the result of two stars orbiting each other merging into one.

    The two stars were in a so-called contact binary, orbiting so closely that they touched. If Earth circled a contact binary, our suns would look like a fiery peanut.

    Despite their exotic appearance, contact binaries are common, with nearly 40,000 known in our galaxy. Now, new observations show that one, named KIC 9832227, could be about to explode as a red nova.

    Boom star

    “My colleagues like to call it the ‘Boom Star’,” says Larry Molnar of Calvin College in Grand Rapids, Michigan.

    The binary is roughly 1700 light years from Earth, in the constellation Cygnus. The two stars whirl around each other every 11 hours.

    In 2013 and 2014, Molnar’s team discovered two things about KIC 9832227 that suggest an imminent explosion: the orbital period is decreasing, and it’s doing so at an ever-faster rate.

    This is exactly what the contact binary that sparked the 2008 red nova did. The orbital period shrank because the two stars circled each other faster as they spiralled closer together.
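
    A shrinking period shows up most clearly in an “observed minus calculated” (O−C) diagram of eclipse times: relative to a fixed-period ephemeris, the eclipses arrive progressively earlier, tracing a downward curve. The sketch below uses made-up numbers, not the team’s data, and assumes the simplest case of a constant period derivative (an accelerating decrease, as reported here, would curve even more strongly):

    import numpy as np

    # Toy eclipse-timing data, NOT the real KIC 9832227 measurements
    P0 = 0.458      # assumed orbital period in days (roughly 11 hours)
    Pdot = -1e-7    # assumed constant period derivative, days per day
    cycles = np.arange(0, 4000, 200)

    # With a constant Pdot, eclipse times pick up a quadratic term
    eclipse_times = P0 * cycles + 0.5 * P0 * Pdot * cycles**2

    # O-C residuals relative to a fixed-period ephemeris curve downward
    o_minus_c = eclipse_times - P0 * cycles

    # Fitting a parabola to the residuals recovers the period derivative
    quad = np.polyfit(cycles, o_minus_c, 2)[0]
    print(f"recovered Pdot = {2 * quad / P0:.1e} days per day")  # ~ -1e-07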

    Unfortunately, other effects can mimic this decrease in orbital period. For example, a third star can pull the binary toward us so that its light takes less time to reach Earth, creating the illusion that the two stars are circling each other faster. So additional observations were needed to figure out what KIC 9832227 was likely to do.

    In late 2015, astronomers in Bulgaria observed the star with a 30-centimetre telescope, and found that its period is still shrinking at an ever-faster clip. “A stellar merger is a real possibility,” says Alexander Kurtenkov of the University of Sofia.

    Molnar’s team finds this trend persisting into 2016. “At this point, I think we have a serious candidate,” he says.

    His latest observations, made with 40-centimetre telescopes in Michigan and New Mexico, put the date of the potential explosion between 2021 and 2023. But he cautions that another three years of observations are required before he can rule out alternatives. By then, if the orbital period keeps shrinking faster and faster, an impending explosion will be very likely. If it calms down, there might be a different outcome.

    KIC 9832227 is currently 12th magnitude – visible only through a telescope. But if it brightens by 10 magnitudes, as the 2008 red nova did, it will be as bright as the North Star and the brightest stars of the Big Dipper, and easily visible to the naked eye.
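
    The 10-magnitude jump sounds modest but is enormous: each magnitude is a factor of about 2.512 in brightness, so 10 magnitudes is a factor of 10,000.

    # Each magnitude is a factor of 10**0.4 (~2.512) in brightness
    delta_mag = 10
    brightness_ratio = 10 ** (0.4 * delta_mag)
    print(brightness_ratio)  # 10000.0: a 10-magnitude rise is a 10,000-fold brightening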

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 3:11 pm on October 19, 2016 Permalink | Reply
    Tags: , , EEW: Earthquake Early Warning at UC Berkeley, New Scientist, , Why San Francisco’s next quake could be much bigger than feared   

    From New Scientist: “Why San Francisco’s next quake could be much bigger than feared” 

    New Scientist

    19 October 2016
    Chelsea Whyte

    Geological faults lie beneath the San Francisco Bay Area. USGS/ESA

    Since reports hit last year that a potentially massive earthquake could destroy vast tracts of the west coast of the United States, my phone has rung regularly with concerned family members from the Pacific coast asking one question: how big could it possibly be?

    In the San Francisco Bay Area, new findings now show a connection between two fault lines that could result in a major earthquake clocking in at magnitude 7.4.

    At that magnitude, it would radiate five times more energy than the 1989 Loma Prieta earthquake that killed dozens, injured thousands, and cost billions of dollars in direct damage.
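
    The “five times more energy” figure follows from the standard scaling of radiated seismic energy with moment magnitude, roughly a factor of 10^(1.5 ΔM); the 6.9 used below for Loma Prieta is the commonly quoted moment magnitude, assumed here for the comparison.

    # Radiated seismic energy scales roughly as 10**(1.5 * magnitude)
    m_linked_quake = 7.4   # potential Hayward-Rodgers Creek quake (from the article)
    m_loma_prieta = 6.9    # commonly quoted magnitude of the 1989 quake (assumed)

    energy_ratio = 10 ** (1.5 * (m_linked_quake - m_loma_prieta))
    print(f"{energy_ratio:.1f}x")  # ~5.6x, matching the "five times more energy" above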

    “The concerning thing with the Hayward and Rodgers Creek faults is that they’ve accumulated enough stress to be released in a major earthquake. They’re, in a sense, primed,” says Janet Watt, a geophysicist at the US Geological Survey who led the study.

    The Hayward fault’s average time between quakes is 140 years, and the last one was 148 years ago.

    “In the next 30 years, there’s a 33 per cent chance of a magnitude 6.7 or greater,” she says. These two faults combined cover 190 kilometres running parallel to their famous neighbour, the San Andreas fault, from Santa Rosa in the north down through San Pablo Bay and south right under Berkeley stadium.

    Sweeping the bay

    To map the faults, Watt and her team scanned back and forth across the bay for magnetic anomalies that crop up near fault lines. They also swept the bay with a high-frequency acoustic instrument called a chirp, imaging the faults’ relationship below the sea floor with sound waves, in a similar way to how a bat uses echolocation to “see” the shape of a cave.

    “A direct connection makes it easier for a larger earthquake to occur that ruptures both faults,” says Roland Bürgmann at the University of California, Berkeley, who studies faults in the area.

    The Hayward and Rodgers Creek faults [Science Advances] combined could produce an earthquake releasing five times more energy than the Hayward fault alone.

    “It doesn’t mean the two faults couldn’t rupture together without the connection,” says Bürgmann. “And it doesn’t mean that smaller earthquakes couldn’t occur on one or the other of the two faults most of the time.”

    But it makes the scenario of the larger, linked quake more likely, he says.

    Be prepared

    Bürgmann and his colleagues have found a similar connection between the southern end of the Hayward fault and the Calaveras fault, suggesting that they ought to be treated as one continuous fault. This new work follows that fault even farther north.

    So what do I tell my mom next time she calls?

    “Most important continues to be improving preparedness at all levels,” says Bürgmann. That includes better construction, personal readiness supplies, and the implementation of earthquake early warning systems, which use sensors triggered by the first signs of a quake to send out alerts ahead of the most violent shaking.

    The state and federal governments support building such a warning system in California, an effort led by Berkeley’s Seismological Laboratory.

    See the full article here.

    Quake-Catcher Network

    IF YOU LIVE IN AN EARTHQUAKE PRONE AREA, ESPECIALLY IN CALIFORNIA, YOU CAN EASILY JOIN THE QUAKE-CATCHER NETWORK

    The Quake-Catcher Network is a collaborative initiative for developing the world’s largest, low-cost strong-motion seismic network by utilizing sensors in and attached to internet-connected computers. With your help, the Quake-Catcher Network can provide a better understanding of earthquakes and give early warning to schools, emergency response systems, and others. The Quake-Catcher Network also provides educational software designed to help teach about earthquakes and earthquake hazards.

    After almost eight years at Stanford, and a year at Caltech, the QCN project is moving to the University of Southern California Dept. of Earth Sciences. QCN will be sponsored by the Incorporated Research Institutions for Seismology (IRIS) and the Southern California Earthquake Center (SCEC).

    The Quake-Catcher Network is a distributed computing network that links volunteer hosted computers into a real-time motion sensing network. QCN is one of many scientific computing projects that runs on the world-renowned distributed computing platform Berkeley Open Infrastructure for Network Computing (BOINC).

    The volunteer computers monitor vibrational sensors called MEMS accelerometers, and digitally transmit “triggers” to QCN’s servers whenever strong new motions are observed. QCN’s servers sift through these signals, and determine which ones represent earthquakes, and which ones represent cultural noise (like doors slamming, or trucks driving by).
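
    QCN’s exact trigger criteria aren’t described here, but a common way to flag “strong new motion” in a seismic stream is a short-term/long-term average (STA/LTA) ratio. The sketch below is a generic illustration of that idea with synthetic data, not QCN’s actual algorithm:

    import numpy as np

    def sta_lta_ratio(amp, sta_len=50, lta_len=1000):
        """Trailing short-term / long-term average of absolute amplitude.
        A generic strong-motion trigger, not QCN's actual algorithm."""
        c = np.concatenate(([0.0], np.cumsum(np.abs(amp))))
        sta = (c[sta_len:] - c[:-sta_len]) / sta_len   # windows ending at samples sta_len-1 .. N-1
        lta = (c[lta_len:] - c[:-lta_len]) / lta_len   # windows ending at samples lta_len-1 .. N-1
        return sta[len(sta) - len(lta):] / np.maximum(lta, 1e-12)

    # Synthetic accelerometer trace: quiet background plus a burst of shaking
    rng = np.random.default_rng(0)
    trace = rng.normal(0.0, 0.001, 20000)
    trace[12000:12500] += 0.05

    ratio = sta_lta_ratio(trace)
    sample = np.argmax(ratio > 4.0) + 1000 - 1   # convert ratio index back to a sample index
    print("trigger near sample", sample)         # close to 12000, where the shaking starts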

    There are two categories of sensors used by QCN: 1) internal mobile device sensors, and 2) external USB sensors.

    Mobile Devices: MEMS sensors are often included in laptops, games, cell phones, and other electronic devices for hardware protection, navigation, and game control. When these devices are still and connected to QCN, QCN software monitors the internal accelerometer for strong new shaking. Unfortunately, these devices are rarely secured to the floor, so they may bounce around when a large earthquake occurs. While this is less than ideal for characterizing the regional ground shaking, many such sensors can still provide useful information about earthquake locations and magnitudes.

    USB Sensors: MEMS sensors can be mounted to the floor and connected to a desktop computer via a USB cable. These sensors have several advantages over mobile device sensors. 1) Mounted to the floor, they measure shaking more reliably than mobile devices. 2) These sensors typically have lower noise and better resolution of 3D motion. 3) Desktops are often left on and do not move. 4) The USB sensor is physically removed from the game, phone, or laptop, so human interaction with the device doesn’t reduce the sensor’s performance. 5) USB sensors can be aligned to north, so we know which directions the horizontal “X” and “Y” axes correspond to.

    If you are a science teacher at a K-12 school, please apply for a free USB sensor and accompanying QCN software. QCN has been able to purchase sensors to donate to schools in need. If you are interested in donating to the program or requesting a sensor, click here.

    BOINC is a leader in the fields of distributed computing, grid computing and citizen cyberscience. BOINC, more properly the Berkeley Open Infrastructure for Network Computing, was developed at UC Berkeley.

    Earthquake safety is a responsibility shared by billions worldwide. The Quake-Catcher Network (QCN) provides software so that individuals can join together to improve earthquake monitoring, earthquake awareness, and the science of earthquakes. QCN links existing networked laptops and desktops in the hope of forming the world’s largest strong-motion seismic network.

    QCN Quake-Catcher Network map

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 9:33 am on October 19, 2016 Permalink | Reply
    Tags: , , New Scientist, The Boötes void   

    From New Scientist: “Space is full of gigantic holes that are bigger than we expected” 

    New Scientist

    18 October 2016
    Joshua Sokol

    Very empty space. Richard Powell

    Face it, the vast darkness of space is a little eerie. It’s no wonder we usually prefer to focus on the bright spots. But it’s in the void that we might find our best explanations of the cosmos.

    In 1923, Edwin Hubble showed that the universe was far larger than expected by discovering that what we thought were swirls of gas on the edge of our own galaxy were actually galaxies in their own right: lonely “island universes” we could spot across an empty sea of black. That led to a comforting thought – we now know that even the darkest patch of sky, when seen through the telescope named after Hubble, is dotted with clumps of luminous stuff like our Milky Way.

    But there’s another view of the universe, like the horror cliché of flipping an image to its photonegative. Since 1981, when astronomers found a vacant expanse called the Boötes void, we’ve also known that the universe has holes of cold, dark, lonely nothing that are larger than anyone expected. To truly understand the universe, we may have to gaze into the abyss.

    A bubble in space

    The Boötes void, which you will assuredly not see if you look at Boötes, the “ploughman” constellation adjacent to the Big Dipper, is a rough sphere about 280 million light years in diameter.

    Galaxy-wise, it’s a ghost town. When we first saw the void, we found only one galaxy inside. Since then, we’ve detected only a few dozen more. By contrast, the Virgo Supercluster, a smaller region that includes the Milky Way, contains over 2000 galaxies.
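
    To put “ghost town” into numbers, compare galaxy densities, taking “a few dozen” as roughly 60 galaxies and assuming a round figure of about 110 million light years for the Virgo Supercluster’s diameter (both assumptions are for illustration only):

    from math import pi

    def sphere_volume(diameter):
        return (4 / 3) * pi * (diameter / 2) ** 3

    # Bootes void: ~60 known galaxies in a ~280-million-light-year sphere (assumed count)
    bootes_density = 60 / sphere_volume(280e6)

    # Virgo Supercluster: ~2000 galaxies; the ~110 Mly diameter is an assumed round figure
    virgo_density = 2000 / sphere_volume(110e6)

    print(f"galaxies are ~{virgo_density / bootes_density:.0f}x sparser in the void")  # ~550x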

    As residents of the Milky Way, humans are able to see one large nearby galaxy, Andromeda, with our naked eyes. The proximity of Andromeda helped Edwin Hubble look at its individual stars to unlock the true scope of the universe. If our galaxy were in the Boötes void, our nearest peers would be much farther away – perhaps allowing us to fancy ourselves at the center of the cosmos for longer.

    This is no statistical accident. At very large scales, the universe is often described as a cosmic web, with strands of invisible dark matter undergirding the universe’s luminous structure. It might be better here to think of it as cosmic foam, like soap bubbles in a bathtub. Just as it’s sudsy where bubbles intersect, galaxy clusters concentrate in walls, filaments and intersections. In between is mostly void.

    Making peace with the vacuum

    The problem was that the Boötes void was just too big. Voids grow because their dense edges have a much stronger gravitational pull than anything at their centres. But the universe wasn’t yet old enough to have inflated such a big bubble.

    For an explanation, we had to wait until the 1998 discovery of dark energy: a cosmic pressure that forces empty regions of space to expand as if someone was blowing air into each of the universe’s soap bubbles all at once.

    Many astronomers, now in a boom of cataloging and mapping voids, think these spooky regions that expose the naked fabric of the universe could point to the next big discovery.

    Soon, statistical analyses of their shapes may be able to help us measure dark energy, gravity and any mysterious new forces better than ever before. And in the process, perhaps, they will help us learn to embrace the emptiness.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 2:34 pm on October 7, 2016 Permalink | Reply
    Tags: , First farm to grow veg in a desert using only sun and seawater, New Scientist   

    From New Scientist: “First farm to grow veg in a desert using only sun and seawater” 

    New Scientist

    6 October 2016
    Alice Klein

    Sundrop farm: no fossil fuels required to grow 180,000 tomato plants. Sundrop

    Sunshine and seawater. That’s all a new, futuristic-looking greenhouse needs to produce 17,000 tonnes of tomatoes per year in the South Australian desert.

    It’s the first agricultural system of its kind in the world and uses no soil, pesticides, fossil fuels or groundwater. As the demand for fresh water and energy continues to rise, this might be the face of farming in the future.

    An international team of scientists have spent the last six years fine-tuning the design – first with a pilot greenhouse built in 2010; then with a commercial-scale facility that began construction in 2014 and was officially launched today.

    How it works

    Seawater is piped 2 kilometres from the Spencer Gulf to Sundrop Farm – the 20-hectare site in the arid Port Augusta region. A solar-powered desalination plant removes the salt, creating enough fresh water to irrigate 180,000 tomato plants inside the greenhouse.
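
    Combining the figures quoted so far gives a sense of scale: 17,000 tonnes a year from 180,000 plants works out to roughly 94 kilograms of tomatoes per plant per year.

    # Rough per-plant yield from the numbers quoted in the article
    tonnes_per_year = 17_000
    plants = 180_000

    kg_per_plant = tonnes_per_year * 1000 / plants
    print(f"{kg_per_plant:.0f} kg of tomatoes per plant per year")  # ~94 kg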

    Scorching summer temperatures and dry conditions make the region unsuitable for conventional farming, but the greenhouse is lined with seawater-soaked cardboard to keep the plants cool enough to stay healthy. In winter, solar heating keeps the greenhouse warm.

    There is no need for pesticides as seawater cleans and sterilises the air, and plants grow in coconut husks instead of soil.

    The farm’s solar power is generated by 23,000 mirrors that reflect sunlight towards a 115-metre-high receiver tower. On a sunny day, up to 39 megawatts can be produced – enough to power the desalination plant and supply the greenhouse’s electricity needs.

    Tomatoes produced by the greenhouse have already started being sold in Australian supermarkets.

    Future outlook

    Possible solar energy shortages in winter mean that the greenhouse still needs to be hooked up to the grid for back-up, but gradual improvements to the design will eliminate any reliance on fossil fuels, says Sundrop Farm CEO Philipp Saumweber.

    The $200 million infrastructure makes the seawater greenhouse more expensive to set up than traditional greenhouses, but the cost will pay off long-term, says Saumweber. Conventional greenhouses are more expensive to run on an annual basis because of the cost of fossil fuels, he says.

    Sundrop is now planning to launch similar sustainable greenhouses in Portugal and the US, and another in Australia. Other companies are also testing pilot seawater greenhouses in desert areas of Oman, Qatar and the United Arab Emirates.

    “These closed production systems are very clever,” says Robert Park at the University of Sydney, Australia. “I believe that systems using renewable energy sources will become better and better and increase in the future, contributing even more of some of our foods.”

    However, Paul Kristiansen at the University of New England, Australia, questions the need for energy-intensive tomato farming in a desert, when there are ideal growing conditions in other parts of Australia.

    “It’s a bit like crushing a garlic clove with a sledgehammer,” he says. “We don’t have problems growing tomatoes in Australia.”

    Nevertheless, the technology may become useful in the future if climate change causes drought in once-fertile regions, Kristiansen says. “Then it will be good to have back-up plans.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 9:17 am on September 29, 2016 Permalink | Reply
    Tags: , , New Scientist, , Where is the Milky Way?   

    From New Scientist: “Our home spiral arm in the Milky Way is less wimpy than thought” 

    New Scientist

    28 September 2016

    It’s tricky to map an entire galaxy when you live in one of its arms. But astronomers have made the clearest map yet of the Milky Way – and it turns out that the arm that hosts our solar system is even bigger than previously thought.

    The idea that the Milky Way is a spiral was first proposed more than 150 years ago, but we only started identifying its limbs in the 1950s. Details about the galaxy’s exact structure are still hotly debated, such as the number of arms, their length and the size of the bar of hot gas and dust that stretches across its middle.

    The star-filled arms are densely packed with gas and dust, where new stars are born. That dust can obscure stars we use to measure distances, complicating the mapping process.

    Two of the arms, called Perseus and Scutum-Centaurus, are larger and filled with more stars, while the Sagittarius and Outer arms have fewer stars but just as much gas. The solar system has been thought to lie in a structure called the Orion Spur, or Local Arm, which is smaller than the nearby Perseus Arm.

    Artist’s conception of the Milky Way galaxy as seen from far Galactic North (in Coma Berenices) by NASA/JPL-Caltech/R. Hurt annotated with arms (colour-coded according to Milky Way article) as well as distances from the Solar System and galactic longitude with corresponding constellation.

    Just as grand

    Now, Ye Xu and colleagues from the Purple Mountain Observatory in Nanjing, China, say the Local Arm is just as grand as the others.

    Purple Mountain Observatory in Nanjing, China

    The team used the Very Long Baseline Array in New Mexico to make extremely accurate measurements of high-mass gas clouds in the arms, and used a star-measuring trigonometry trick called parallax to measure their distances.
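
    Parallax converts a measured angular shift directly into a distance: the distance in parsecs is 1 divided by the parallax angle in arcseconds. A quick illustrative calculation (the 163-microarcsecond value is made up for this example, not one of the team’s measurements):

    # Parallax to distance: d[parsec] = 1 / p[arcsec]
    PARSEC_IN_LIGHT_YEARS = 3.2616

    parallax_arcsec = 163e-6   # an illustrative 163 microarcseconds, not a real measurement
    distance_pc = 1 / parallax_arcsec
    distance_ly = distance_pc * PARSEC_IN_LIGHT_YEARS

    print(f"{distance_pc:.0f} pc, about {distance_ly:.0f} light years")  # ~6100 pc, ~20,000 ly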

    NRAO VLBA

    “Radio telescopes can ‘see’ through the galactic plane to massive star forming regions that trace spiral structure, while optical wavelengths will be hidden by dust,” Xu says. “Achieving a highly accurate parallax is not easy.”

    The new measurements suggest the Milky Way is not a grand design spiral with well-defined arms, but a spiral with many branches and subtle spurs.

    However, Xu and colleagues say the Orion Spur is not a spur at all, but more in line with the galaxy’s other spectacular arms. The team also discovered a spur connecting the Local and Sagittarius arms.

    “This lane has received little attention in the past because it does not correspond with any of the major spiral arm features of the inner galaxy,” the authors of the study write.

    Future measurements with other radio telescopes will shed more light on the galaxy’s shape. The European Space Agency’s Gaia spacecraft is in the midst of a mission to make a three-dimensional map of our galaxy, too.

    ESA/GAIA satellite

    More measurements of the high-mass gas regions will help astronomers determine what our galaxy looks like, from the inside out.

    Journal reference: Science Advances, DOI: 10.1126/sciadv.1600878

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 2:19 pm on September 26, 2016 Permalink | Reply
    Tags: , , , New Scientist   

    From New Scientist: “Puffed-up exoplanets inflate with heat from their stars alone” 

    New Scientist

    26 September 2016
    Shannon Hall

    Would float in a bathtub. NASA/ESA/G. Bacon (STScI)

    A star’s heat goes deep. For the first time, we have spotted a “hot Jupiter” that has expanded thanks to its swelling host star – an observation that could settle a 15-year-old debate.

    Hot Jupiters – gas giant exoplanets that orbit scorchingly close to their stars – are inexplicably puffy. “We see these planets that are the sizes of stars without being anywhere near the mass of those stars,” says Sam Grunblatt at the University of Hawaii.

    HAT-P-1b, for example, contains half the mass of Jupiter yet is 20 per cent larger in radius. This gives it such a low density that it would float in a bathtub.
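
    The bathtub claim is just a density calculation: half of Jupiter’s mass spread through a 20 per cent larger radius gives a mean density well below that of water. A quick check using standard values for Jupiter’s mass and radius:

    from math import pi

    M_JUPITER = 1.898e27    # kg
    R_JUPITER = 6.991e7     # m, mean radius
    WATER_DENSITY = 1000.0  # kg per cubic metre

    mass = 0.5 * M_JUPITER     # about half of Jupiter's mass (from the article)
    radius = 1.2 * R_JUPITER   # about 20 per cent larger than Jupiter

    density = mass / ((4 / 3) * pi * radius ** 3)
    print(f"{density:.0f} kg/m^3")  # ~380 kg/m^3, well under water's 1000 kg/m^3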

    For more than a decade, we have been trying to explain how these planetary puffballs grow so large. The dozen or so different scenarios we have come up with all fall into two broad categories: either the star’s heat keeps the planet from cooling and therefore contracting, or it somehow penetrates the planet’s deep interior, causing it to expand.

    Heat is gone

    One way to test this would be to nudge an older planet – one whose initial heat is long gone – towards its host star. If all that were needed for expansion was heat from the star, the planet would inflate like a balloon. But if planetary heat were also needed, then the nudged planet would remain the same size.

    We can’t play puppeteer with other planetary systems, but we can search for Jupiter-size planets orbiting red giant stars. These stars, which are in a later stage in their evolution, are bigger and brighter than when they were youngsters. That means a Jupiter-size planet orbiting a red giant is a good stand-in for a planet that has been pushed close to its host star.

    To search for such systems, Grunblatt and his colleagues examined data from the Kepler space telescope. They discovered that one planet, dubbed EPIC 211351816.01, is 1.3 times Jupiter’s size and far enough away from the red giant that it could only have inflated after the star had swelled outward.

    “We’re catching it in a phase where its radius is being expanded because the star’s brightness is increasing a lot, just in the past few hundred million years,” says Jonathan Fortney at the University of California, Santa Cruz.

    Adam Burrows at Princeton University says we can’t reach strong conclusions from studying just one object. But he says that another recent study shows that older planets, which orbit brighter stars, tend to be slightly larger than younger planets, which orbit fainter stars. Taken together, both studies strongly suggest that stellar radiation alone can inflate planets, he says.

    But we are still hunting for further examples. “The TESS mission, which will launch at the end of next year, should observe maybe 100,000 red giants and really help put this question to bed,” says Thomas Barclay at the NASA Ames Research Center.

    Journal reference: arXiv:1606.05818v2

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 10:57 am on September 19, 2016 Permalink | Reply
    Tags: , , New Scientist,   

    From New Scientist: “Revealed: Google’s plan for quantum computer supremacy” 

    New Scientist

    31 August 2016 [This just now appeared in social media.]
    Jacob Aron

    Superconducting qubits are tops. UCSB

    The field of quantum computing is undergoing a rapid shake-up, and engineers at Google have quietly set out a plan to dominate.

    SOMEWHERE in California, Google is building a device that will usher in a new era for computing. It’s a quantum computer, the largest ever made, designed to prove once and for all that machines exploiting exotic physics can outperform the world’s top supercomputers.

    And New Scientist has learned it could be ready sooner than anyone expected – perhaps even by the end of next year.

    The quantum computing revolution has been a long time coming. In the 1980s, theorists realised that a computer based on quantum mechanics had the potential to vastly outperform ordinary, or classical, computers at certain tasks. But building one was another matter. Only recently has a quantum computer that can beat a classical one gone from a lab curiosity to something that could actually happen. Google wants to create the first.

    The firm’s plans are secretive, and Google declined to comment for this article. But researchers contacted by New Scientist all believe it is on the cusp of a breakthrough, following presentations at conferences and private meetings.

    “They are definitely the world leaders now, there is no doubt about it,” says Simon Devitt at the RIKEN Center for Emergent Matter Science in Japan. “It’s Google’s to lose. If Google’s not the group that does it, then something has gone wrong.”

    We have had a glimpse of Google’s intentions. Last month, its engineers quietly published a paper detailing their plans (arxiv.org/abs/1608.00263). Their goal, audaciously named quantum supremacy, is to build the first quantum computer capable of performing a task no classical computer can.

    “It’s a blueprint for what they’re planning to do in the next couple of years,” says Scott Aaronson at the University of Texas at Austin, who has discussed the plans with the team.

    So how will they do it? Quantum computers process data as quantum bits, or qubits. Unlike classical bits, these can store a mixture of both 0 and 1 at the same time, thanks to the principle of quantum superposition. It’s this potential that gives quantum computers the edge at certain problems, like factoring large numbers. But ordinary computers are also pretty good at such tasks. Showing quantum computers are better would require thousands of qubits, which is far beyond our current technical ability.

    Instead, Google wants to claim the prize with just 50 qubits. That’s still an ambitious goal – publicly, they have only announced a 9-qubit computer – but one within reach.

    To help it succeed, Google has brought the fight to quantum’s home turf. It is focusing on a problem that is fiendishly difficult for ordinary computers but that a quantum computer will do naturally: simulating the behaviour of a random arrangement of quantum circuits.

    Any small variation in the input into those quantum circuits can produce a massively different output, so it’s difficult for the classical computer to cheat with approximations to simplify the problem. “They’re doing a quantum version of chaos,” says Devitt. “The output is essentially random, so you have to compute everything.”

    To push classical computing to the limit, Google turned to Edison, one of the most advanced supercomputers in the world, housed at the US National Energy Research Scientific Computing Center. Google had it simulate the behaviour of quantum circuits on increasingly larger grids of qubits, up to a 6 × 7 grid of 42 qubits.

    This computation is difficult because as the grid size increases, the amount of memory needed to store everything balloons rapidly. A 6 × 4 grid needed just 268 megabytes, less than found in your average smartphone. The 6 × 7 grid demanded 70 terabytes, roughly 10,000 times that of a high-end PC.

    Google stopped there because going to the next size up is currently impossible: a 48-qubit grid would require 2.252 petabytes of memory, almost double that of the top supercomputer in the world. If Google can solve the problem with a 50-qubit quantum computer, it will have beaten every other computer in existence.
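
    Those memory figures follow from simple counting: simulating n qubits exactly means storing 2^n complex amplitudes. At 16 bytes per double-precision complex number this reproduces the 268-megabyte and 70-terabyte figures above, while the 2.252-petabyte estimate for 48 qubits matches 8 bytes per amplitude, i.e. single precision.

    # Memory needed to store the full state vector of an n-qubit system
    def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
        # 2**n complex amplitudes; 16 bytes each at double precision
        return 2 ** n_qubits * bytes_per_amplitude

    print(state_vector_bytes(24) / 1e6)      # ~268 megabytes (6 x 4 grid)
    print(state_vector_bytes(42) / 1e12)     # ~70 terabytes  (6 x 7 grid)
    print(state_vector_bytes(48, 8) / 1e15)  # ~2.25 petabytes (48 qubits, single precision)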

    Eyes on the prize

    By setting out this clear test, Google hopes to avoid the problems that have plagued previous claims of quantum computers outperforming ordinary ones – including some made by Google.

    Last year, the firm announced it had solved certain problems 100 million times faster than a classical computer by using a D-Wave quantum computer, a commercially available device with a controversial history. Experts immediately dismissed the results, saying they weren’t a fair comparison.

    Google purchased its D-Wave computer in 2013 to figure out whether it could be used to improve search results and artificial intelligence. The following year, the firm hired John Martinis at the University of California, Santa Barbara, to design its own superconducting qubits. “His qubits are way higher quality,” says Aaronson.

    It’s Martinis and colleagues who are now attempting to achieve quantum supremacy with 50 qubits, and many believe they will get there soon. “I think this is achievable within two or three years,” says Matthias Troyer at the Swiss Federal Institute of Technology in Zurich. “They’ve showed concrete steps on how they will do it.”

    Martinis and colleagues have discussed a number of timelines for reaching this milestone, says Devitt. The earliest is by the end of this year, but that is unlikely. “I’m going to be optimistic and say maybe at the end of next year,” he says. “If they get it done even within the next five years, that will be a tremendous leap forward.”

    The first successful quantum supremacy experiment won’t give us computers capable of solving any problem imaginable – based on current theory, those will need to be much larger machines. But having a working, small computer could drive innovation, or augment existing computers, making it the start of a new era.

    Aaronson compares it to the first self-sustaining nuclear reaction, achieved by the Manhattan project in Chicago in 1942. “It might be a thing that causes people to say, if we want a full-scalable quantum computer, let’s talk numbers: how many billions of dollars?” he says.

    Solving the challenges of building a 50-qubit device will prepare Google to construct something bigger. “It’s absolutely progress to building a fully scalable machine,” says Ian Walmsley at the University of Oxford.

    For quantum computers to be truly useful in the long run, we will also need robust quantum error correction, a technique to mitigate the fragility of quantum states. Martinis and others are already working on this, but it will take longer than achieving quantum supremacy.

    Still, achieving supremacy won’t be dismissed.

    “Once a system hits quantum supremacy and is showing clear scale-up behaviour, it will be a flare in the sky to the private sector,” says Devitt. “It’s ready to move out of the labs.”

    “The field is moving much faster than expected,” says Troyer. “It’s time to move quantum computing from science to engineering and really build devices.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 4:34 pm on September 13, 2016 Permalink | Reply
    Tags: , , , , New Scientist, Star arrangement that hid for a decade spotted at galaxy’s heart   

    From New Scientist: “Star arrangement that hid for a decade spotted at galaxy’s heart” 

    New Scientist

    13 September 2016
    Adam Mann

    Part of our galaxy’s centre, as seen in near-infrared wavelengths. ESO/S. Gillessen et al.

    There’s a party in the galactic centre. We may have found the first solid evidence of a dense conference of stars around the Milky Way’s heart, which may one day help us observe the supermassive black hole living there.

    Sag A* NASA Chandra X-Ray Observatory 23 July 2014, the supermassive black hole at the center of the Milky Way

    The structure is known as a stellar cusp, and it has played hide-and-seek with astronomers for more than a decade. It was first proposed in the 1970s, when models predicted that stars orbiting a supermassive black hole would jostle around every time one was devoured. Over the course of a galaxy’s lifetime, this should leave an arrangement with many stars near the black hole and exponentially fewer as you move farther away.

    But it has been hard to prove this happens. Other galaxies are too far away for us to see their centres as anything more than fuzzy blobs. Observations in the early 2000s seemed to support a cusp in the Milky Way, but better data showed that we had been tricked by obscuring dust.

    Now, Rainer Schödel at the Institute of Astrophysics of Andalusia in Granada, Spain, and his colleagues have combined images of the galactic centre to map faint old stars, which have been around long enough to settle into a cusp. They also studied the total light emitted by all stars at varying distances from our galaxy’s central black hole, and compared the results with simulations.

    Perfect probes

    These methods point to the same conclusion: the cusp exists. Around our galaxy’s central black hole, the density of stars is 10 million times that in our local area, says Schödel, who presented the work on 7 September at the LISA Symposium in Zurich, Switzerland.
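
    For a rough sense of what that factor means, the stellar density in the sun’s neighbourhood is of order 0.1 stars per cubic parsec (an approximate, assumed figure), and the classic theoretical cusp is a Bahcall–Wolf power law with density falling as r^(-7/4). The sketch below uses that assumed profile, not the team’s measured one, to show how steeply star counts climb toward the black hole:

    import numpy as np

    LOCAL_DENSITY = 0.1   # stars per cubic parsec near the sun (approximate, assumed)
    print(f"~{1e7 * LOCAL_DENSITY:.0e} stars per cubic parsec near the galactic centre")

    # Illustrative Bahcall-Wolf cusp, n(r) proportional to r**(-7/4) (assumed profile)
    radii_pc = np.array([0.01, 0.1, 1.0])
    relative = radii_pc ** (-7 / 4)
    for r, n in zip(radii_pc, relative / relative[-1]):
        print(f"r = {r:5.2f} pc -> density {n:,.0f}x the value at 1 pc")
    # stars pile up steeply toward the centre: ~3,200x denser at 0.01 pc than at 1 pc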

    Many of those stars will eventually explode as supernovae, leaving behind black holes with masses comparable to that of our sun. If one of these merges with the black hole in the galactic centre, it will emit telltale gravitational waves that can be picked up by future observatories, like the proposed Laser Interferometer Space Antenna (LISA).

    ESA/eLISA

    Those waves will help figure out the mass, rotation rate and other properties of the black hole with extreme precision.

    “These stellar mass black holes would be absolutely perfect probes of spacetime around the supermassive black hole,” Schödel says.

    If the Milky Way has a cusp, then it’s likely that other galaxies do as well. That’s good news for an observatory like LISA, which may be able to pick up waves from dozens or even hundreds of interactions between stellar mass and supermassive black holes each year.

    The work is a significant advance over previous methods and seems to support the existence of a cusp, says Tuan Do at the University of California, Los Angeles. “The galactic centre is always surprising us though, so I think it would be great to take more observations to verify that there is a cusp of faint old stars,” he says.

    The next generation of enormous observatories, like the Thirty Meter Telescope and Giant Magellan Telescope, will see an order of magnitude more stars than current observatories can.

    TMT-Thirty Meter Telescope, proposed for Mauna Kea, Hawaii, USA

    Giant Magellan Telescope, Las Campanas Observatory, to be built some 115 km (71 mi) north-northeast of La Serena, Chile

    They will almost certainly observe the cusp if it’s there, Schödel says.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     