Recent Updates

  • richardmitnick 7:57 am on April 18, 2018
    Tags: RV Investigator

    From CSIROscope: “100,000 nautical miles of science” 

    CSIROscope

    18 April 2018
    Matt Marrison

    0 NM
    Shipyard, Singapore – June 2014

    Investigator in dry dock in Singapore. Image: Mike Watson.

    ____________________________________________________
    For those converting at home…

    1 nautical mile (NM) = 1.85 kilometres (km) or 1.15 miles (mi)
    ____________________________________________________

    The ship is our research vessel Investigator. Newly built, it sits in dry dock at Sembawang Shipyard in Singapore. Freshly painted in blue, green and white, Investigator waits patiently for water to flood the dock to lift it from its supports and float free for the first time.

    It’s a big day.

    The ship is a game-changer for marine research in Australia. With capabilities far beyond those of previous Australian research vessels, Investigator will voyage far and do big science. It gives the nation a world-leading scientific edge to help answer big questions about the marine environment and resources, climate and food security.

    Installation complete, the ship log is switched on for the first time. The display flickers into life and shows 0 nautical miles (NM). Our journey begins.

    10,000 NM
    Derwent River, Tasmania – December 2014

    Investigator arrives in its home port of Hobart, Tasmania.

    Science takes you places. Investigator has travelled from Singapore to Hobart, and is undergoing sea trials ahead of commissioning. The trials test the limit of the ship’s endurance, taking it past 60°S to the edge of the Antarctic sea ice.

    On return, and nearly three years after construction began, Investigator is commissioned at the CSIRO Marine Laboratories in Hobart on 12 December 2014.

    20,000 NM
    East Australian Current, Tasman Sea – June 2015

    Discovering undersea volcanoes off the NSW coast.

    While on a voyage to study the East Australian Current, seafloor surveys pick up some unusual features off the coast of Sydney. They look like a row of egg cups. The egg cups are ancient marine volcanoes, never seen before, but now appearing in bright colour on monitors across the ship.

    Scientists watch the story unfold in the evening news as they continue their work on board. This ship runs on a 24/7 mix of high-octane science.

    30,000 NM
    Heard Island, Southern Ocean – January 2016

    Investigator approaches the remote Heard Island. Image: Pete Harmsen.

    Investigator has journeyed to a remote corner of our vast ocean estate to study volcanoes on the sea floor. While at Heard Island, steam rising from Big Ben signals to those on board that they have arrived in time to witness a rare eruption from one of Australia’s few active volcanoes.

    Big Ben expresses himself, giving researchers a bang for their investigatory buck. Image: Pete Harmsen.

    40,000 NM
    Somewhere in the Southern Ocean – April 2016

    Collaboration leads to some deep discussions about data. Image: Gloria Salgado Gispert.

    40 scientists walk onto a ship…

    Investigator’s great capacity for work has allowed three separate projects to be combined on this voyage to study the Southern Ocean, from the deep ocean high into the atmosphere above.

    The mixing pot of scientists, gathered on board from both near and far, leads to the sharing of ideas and knowledge among researchers across multiple disciplines. Importantly, it also gives students on board the chance to learn from world-renowned experts in marine and climate science.

    50,000 NM
    West of Fiji, Pacific Ocean – July 2016

    Investigator in tropical waters off Lautoka, Fiji.

    We have better maps of the moon than we do of our sea floor. The advanced mapping technology on Investigator is slowly chipping away at the edges of the unknown on each and every voyage.

    A transit voyage back from Fiji provides scientists with the opportunity to collect seafloor samples and map previously unseen underwater landscapes formed during the break-up of Gondwana.

    Image source: https://www.zmescience.com/science/geology/fossilized-scorpion-gondwana-02092013/

    60,000 NM
    East Australian Current, Tasman Sea – November 2016

    A deep-water mooring anchor stack is deployed from the back deck.

    Anchors away! Another mooring is lowered into the ocean. These form part of an important network of monitoring stations in oceans across the planet which feed data into global datasets. These data allow us to better understand ocean and climate change.

    Before Investigator, these deployments took smaller ships many voyages back and forth. Now, the ship is loaded up and heads out to complete the job in one go.

    70,000 NM
    Totten Glacier, Antarctica – March 2017

    Investigator gets up close to the ice edge in Antarctica.

    It takes a lot of patience to study glaciers, especially those at the ends of the Earth! It’s a long way down and a long way back. Luckily, they aren’t going anywhere fast.

    Or are they? The science we’re enabling on this voyage will help us find out.

    Since arriving, Investigator has now completed 15 research and transit voyages totalling over 400 days of science at sea.

    80,000 NM
    The Abyss, Coral Sea – June 2017

    Scientists look for signs of life in sediments from the abyss. Image: Asher Flatt.

    Marine life can be found in some hard-to-reach places. Investigator is on a voyage to study life in Australia’s deep ocean abyss off the east coast. Seven Commonwealth Marine Reserves are being mapped and studied. Many of the denizens of the deep discovered here are soon to become worldwide science sensations.

    90,000 NM
    North West Shelf, Indian Ocean – November 2017

    Investigator enables unique studies of the biodiversity in our oceans.

    In the warm waters off the coast of Western Australia, Investigator is studying the long-term recovery of trawled marine communities. It is part of a circumnavigation of the continent completed during 2017 that saw the ship conduct research in all offshore waters.

    It’s our first big lap but it won’t be our last.

    100,000 NM
    Somewhere in the Southern Ocean – February 2018

    Clocking up the big science miles!

    Deep in the Southern Ocean, returning from its first voyage for 2018, Investigator passes a significant milestone on the ship log.

    100,000 nautical miles!

    That’s about 4.5 laps of the globe (at the equator).
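
    For readers who like to check the arithmetic, here is a minimal sketch (not part of the original article) converting the milestone into kilometres and equatorial laps, using the conversion factor quoted earlier in the post and an equatorial circumference of roughly 40,075 km; it lands close to the article's "about 4.5 laps".

    ```python
    # Back-of-envelope check of the 100,000 NM milestone (all values approximate).
    NM_TO_KM = 1.852       # one international nautical mile in kilometres
    EQUATOR_KM = 40_075    # Earth's equatorial circumference in kilometres

    distance_nm = 100_000
    distance_km = distance_nm * NM_TO_KM   # ~185,200 km
    laps = distance_km / EQUATOR_KM        # ~4.6 laps of the globe

    print(f"{distance_nm:,} NM = {distance_km:,.0f} km = about {laps:.1f} equatorial laps")
    ```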

    Across the journey, over 800 scientists, researchers and support staff (including over 100 students) from Australia and over 15 other countries have stepped on board as part of voyage science teams.

    For the 40 members of the science team on board today, it’s business as usual. The science doesn’t stop to celebrate. This is what the ship does. Big science over the big journey to answer the big questions.

    100,001+ NM

    The journey is only just beginning for this ship. It’s still a somewhat precocious teenager. With an operational life stretching out at least 25 years, much more science lies ahead for RV Investigator and our future heroes of science on board.

    Stay tuned for the sequel!

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    SKA/ASKAP radio telescope at the Murchison Radio-astronomy Observatory (MRO) in Mid West region of Western Australia

    So what can we expect these new radio projects to discover? We have no idea, but history tells us that they are almost certain to deliver some major surprises.

    Making these new discoveries may not be so simple. Gone are the days when astronomers could just notice something odd as they browse their tables and graphs.

    Nowadays, astronomers are more likely to be distilling their answers from carefully-posed queries to databases containing petabytes of data. Human brains are just not up to the job of making unexpected discoveries in these circumstances, and instead we will need to develop “learning machines” to help us discover the unexpected.

    With the right tools and careful insight, who knows what we might find.

    CSIRO campus

    CSIRO, the Commonwealth Scientific and Industrial Research Organisation, is Australia’s national science agency and one of the largest and most diverse research agencies in the world.

     
  • richardmitnick 9:13 pm on April 17, 2018
    Tags: And yet - here we are, Dark Energy and Dark Matter, What Astronomers Wish Everyone Knew About Dark Matter And Dark Energy

    From Ethan Siegel: “What Astronomers Wish Everyone Knew About Dark Matter And Dark Energy” 

    From Ethan Siegel
    Apr 17, 2018

    One way of measuring the Universe’s expansion history involves going all the way back to the first light we can see, when the Universe was just 380,000 years old. The other ways don’t go backwards nearly as far, but also have a lesser potential to be contaminated by systematic errors. (European Southern Observatory).

    Among the general public, people compare it to the aether, phlogiston, or epicycles. Yet almost all astronomers are certain: dark matter and dark energy exist. Here’s why.

    If you go by what’s often reported in the news, you’d be under the impression that dark matter and dark energy are houses of cards just waiting to be blown down. Theorists are constantly exploring other options; individual galaxies and their satellites arguably favor some modification of gravity to dark matter; there are big controversies over just how fast the Universe is expanding, and the conclusions we’ve drawn from supernova data may need to be altered. Given that we’ve made mistaken assumptions in the past by presuming that the unseen Universe contained substances that simply weren’t there, from the aether to phlogiston, isn’t it a greater leap-of-faith to assume that 95% of the Universe is some invisible, unseen form of energy than it is to assume there’s just a flaw in the law of gravity?

    The answer is a resounding, absolute no, according to almost all astronomers, astrophysicists, and cosmologists who study the Universe. Here’s why.

    Cosmology is the science of what the Universe is, how it came to be this way, what its fate is, and what it’s made up of. Originally, these questions were in the realms of poets, philosophers and theologians, but the 20th century brought these questions firmly into the realm of science. When Einstein put forth his theory of General Relativity, one of the first things that was realized is if you fill the space that makes up the Universe with any form of matter or energy, it immediately becomes unstable. If space contains matter and energy, it can expand or contract, but all static solutions are unstable. Once we measured the Hubble expansion of the Universe and discovered the leftover glow from the Big Bang in the form of the Cosmic Microwave Background, cosmology became a quest to measure two numbers: the expansion rate itself and how that rate changed over time. Measure those, and General Relativity tells you everything you could want to know about the Universe.

    Cosmic Microwave Background, NASA/COBE, 1989 to 1993

    Cosmic Microwave Background, NASA/WMAP, 2001 to 2010

    Cosmic Microwave Background, ESA/Planck, 2009 to 2013

    A plot of the apparent expansion rate (y-axis) vs. distance (x-axis) is consistent with a Universe that expanded faster in the past, but is still expanding today. This is a modern version of, extending thousands of times farther than, Hubble’s original work. Note the fact that the points do not form a straight line, indicating the expansion rate’s change over time. (Ned Wright, based on the latest data from Betoule et al. (2014))

    These two numbers, known as H_0 and q_0, are called the Hubble parameter and the deceleration parameter, respectively. If you take a Universe that’s filled with stuff, and start it off expanding at a particular rate, you’d fully expect it to have those two major physical phenomena — gravitational attraction and the initial expansion — fight against each other. Depending on how it all turned out, the Universe ought to follow one of three paths:

    The Universe expands fast enough that even with all the matter and energy in the Universe, it can slow the expansion down but never reverse it. In this case, the Universe expands forever.
    The Universe begins expanding quickly, but there’s too much matter and energy. The expansion slows, comes to a halt, reverses, and the Universe eventually recollapses.
    Or, perhaps, the Universe — like the third bowl of porridge in Goldilocks — is just right. Perhaps the expansion rate and the amount of stuff in the Universe are perfectly balanced, with the expansion rate asymptoting to zero.

    That last case can only occur if the energy density of the Universe equals some perfectly balanced value: the critical density.
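
    For reference, the critical density follows from the Friedmann equations as ρ_c = 3H_0^2 / (8πG). Below is a minimal sketch of the number this implies, with a Hubble constant of 70 km/s/Mpc assumed purely for illustration (it is not a value quoted in the article):

    ```python
    import math

    G = 6.674e-11                      # gravitational constant, m^3 kg^-1 s^-2
    MPC_M = 3.0857e22                  # one megaparsec in metres

    H0_km_s_Mpc = 70.0                 # assumed Hubble constant, for illustration only
    H0_si = H0_km_s_Mpc * 1e3 / MPC_M  # convert to s^-1

    rho_crit = 3 * H0_si**2 / (8 * math.pi * G)   # critical density, kg/m^3
    protons_per_m3 = rho_crit / 1.673e-27         # the same density in proton masses

    print(f"critical density ~ {rho_crit:.2e} kg/m^3 (~{protons_per_m3:.1f} protons per cubic metre)")
    ```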

    The expected fates of the Universe (top three illustrations) all correspond to a Universe where matter and energy fight against the initial expansion rate. In our observed Universe, a cosmic acceleration is caused by some type of dark energy, which is hitherto unexplained. (E. Siegel / Beyond the Galaxy)

    This is actually a beautiful setup, because the equations you derive from General Relativity are completely deterministic here. Measure how the Universe is expanding today and how it was expanding in the past, and you know exactly what the Universe must be made out of. You can derive how old the Universe has to be, how much matter and radiation (and curvature, and any other stuff) has to be in it, and all sorts of other interesting information. If we could know those two numbers exactly, H_0 and q_0, we would immediately know both the Universe’s age and also what the Universe is made out of.

    Three different types of measurements (distant stars and galaxies, the large-scale structure of the Universe, and the fluctuations in the CMB) tell us the expansion history of the Universe. (NASA/ESA Hubble, Sloan Digital Sky Survey, ESA and the Planck Collaboration; ESA/Planck pictured above)

    NASA/ESA Hubble Telescope


    SDSS Telescope at Apache Point Observatory, near Sunspot NM, USA, Altitude 2,788 meters (9,147 ft)

    Now, we had some preconceptions when we started down this path. For aesthetic or mathematically prejudicial reasons, some people preferred the recollapsing Universe, while others preferred the critical Universe and still others preferred the open one. In reality, all you can do, if you want to understand the Universe, is examine it and ask it what it’s made of. Our laws of physics tell us what rules the Universe plays by; the rest is determined by measurement. For a long time, measurements of the Hubble constant were highly uncertain, but one thing became clear: if the Universe were made of 100% normal matter, it would have to be very young.

    Measuring back in time and distance (to the left of “today”) can inform how the Universe will evolve and accelerate/decelerate far into the future. We can learn that acceleration turned on about 7.8 billion years ago with the current data, but also learn that the models of the Universe without dark energy have either Hubble constants that are too low or ages that are too young to match with observations. (Saul Perlmutter, Nobel Laureate, of Berkeley)

    If the expansion rate, H_0, was fast, like 100 km/s/Mpc, the Universe would only be 6.5 billion years old. Given that the ages of stars in globular clusters — admittedly, some of the oldest stars in the Universe — were at least 12 billion years old (and many cited numbers closer to 14–16 billion), the Universe couldn’t be this young. While some measurements of H_0 were significantly lower, like 55 km/s/Mpc, that still gave a Universe that was 11-and-change billion years old: still younger than the stars we found within it. Moreover, as more and more measurements came in during the 1970s, 1980s and beyond, it became clear that an abnormally low Hubble constant in the 40s or 50s simply didn’t line up with the data.
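
    Those ages come straight out of the matter-only (Einstein-de Sitter) relation t = 2/(3H_0). A minimal sketch reproducing the figures quoted above:

    ```python
    MPC_KM = 3.0857e19    # one megaparsec in kilometres
    GYR_S = 3.156e16      # one gigayear in seconds

    def matter_only_age_gyr(H0_km_s_Mpc):
        """Age of a flat, 100%-matter universe: t = 2 / (3 * H0)."""
        H0_per_s = H0_km_s_Mpc / MPC_KM        # convert km/s/Mpc to s^-1
        return (2.0 / (3.0 * H0_per_s)) / GYR_S

    for H0 in (100, 55):
        print(f"H0 = {H0} km/s/Mpc -> age ~ {matter_only_age_gyr(H0):.1f} Gyr")
    # H0 = 100 gives ~6.5 Gyr and H0 = 55 gives ~11.8 Gyr, matching the text.
    ```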

    The globular cluster Messier 75, showing a huge central concentration, is over 13 billion years old. Many globular clusters have stellar populations that are in excess of 12 or even 13 billion years, a challenge for ‘matter-only’ models of the Universe. (HST / Fabian RRRR, with data from the Hubble Legacy Archive)

    At the same time, we were beginning to measure to good precision how abundant the light elements in the Universe were. Big Bang Nucleosynthesis is the science of how much hydrogen, helium-4, helium-3, deuterium, and lithium-7, relative to one another, ought to be left over from the Big Bang. The only parameter that isn’t derivable from physical constants in these calculations is the baryon-to-photon ratio, which tells you the density of normal matter in the Universe. (This is relative to the number density of photons, but that is easily measurable from the Cosmic Microwave Background.) While there was some uncertainty at the time, it became clear very quickly that 100% of the matter couldn’t be “normal,” but only about 10% at most. There is no way the laws of physics could be correct and give you a Universe with 100% normal matter.

    The predicted abundances of helium-4, deuterium, helium-3 and lithium-7 as predicted by Big Bang Nucleosynthesis, with observations shown in the red circles. This corresponds to a Universe where the baryon density (normal matter density) is only 5% of the critical value. (NASA / WMAP Science Team)

    By the early 1990s, this began to line up with a slew of observations that all pointed to pieces of this cosmic puzzle:

    The oldest stars had to be at least 13 billion years old,
    If the Universe were made of 100% matter, the value of H_0 could be no bigger than 50 km/s/Mpc to get a Universe that old,
    Galaxies and clusters of galaxies showed strong evidence that there was lots of dark matter,
    X-ray observations from clusters showed that only 10–20% of the matter could be normal matter,
    The large-scale structure of the Universe (correlations between galaxies on hundreds-of-millions of light year scales) showed you need more mass than normal matter could provide,
    but the deep source counts, which depend on the Universe’s volume and how that changes over time, showed that 100% matter was far too much,
    Gravitational lensing was starting to “weigh” these galaxy clusters, and found that only about 30% of the critical density was total matter,
    and Big Bang Nucleosynthesis really seemed to favor a Universe where just ~1/6th of the matter density was normal matter.

    So what was the solution?

    The mass distribution of cluster Abell 370, reconstructed through gravitational lensing, shows two large, diffuse halos of mass, consistent with dark matter in two merging clusters creating what we see here. Around and through every galaxy, cluster, and massive collection of normal matter exists 5 times as much dark matter, overall. This still isn’t enough to reach the critical density, or anywhere close to it, on its own. (NASA, ESA, D. Harvey (École Polytechnique Fédérale de Lausanne, Switzerland), R. Massey (Durham University, UK), the Hubble SM4 ERO Team and ST-ECF)

    Gravitational Lensing NASA/ESA

    Most astronomers had accepted dark matter by this time, but even a Universe that was made exclusively of dark and normal matter would still be problematic. It simply wasn’t old enough for the stars in it! Two pieces of evidence that came together in the late 1990s gave us the way forward. One was the cosmic microwave background, which showed us that the Universe was spatially flat, and therefore the total amount of stuff in there added up to 100%. Yet it couldn’t all be matter, even a mix of normal and dark matter! The other piece of evidence was supernova data, which showed that there was a component in the Universe causing it to accelerate: this must be dark energy. Looking at the multiple lines of evidence even today, they all point to that exact picture.

    Constraints on dark energy from three independent sources: supernovae, the CMB, and BAO (which are a feature in the Universe’s large-scale structure). Note that even without supernovae, we’d need dark energy, and that only 1/6th of the matter found can be normal matter; the rest must be dark matter. (Supernova Cosmology Project, Amanullah, et al., Ap.J. (2010))

    So you have all these independent lines of evidence, all pointing towards the same picture: General Relativity is our theory of gravity, and our Universe is 13.8 billion years old, with ~70% dark energy, ~30% total matter, where about 5% is normal matter and 25% is dark matter. There are photons and neutrinos which were important in the past, but they’re just a small fraction of a percent today. As even greater evidence has come in — small-scale fluctuations in the cosmic microwave background, the baryon oscillations in the large-scale structure of the Universe, high-redshift quasars and gamma-ray bursts — this picture remains unchanged. Everything we observe on all scales points to it.

    The farther away we look, the closer in time we’re seeing towards the Big Bang. The newest record-holder for quasars comes from a time when the Universe was just 690 million years old. These ultra-distant cosmological probes also show us a Universe that contains dark matter and dark energy. (Jinyi Yang, University of Arizona; Reidar Hahn, Fermilab; M. Newhouse NOAO/AURA/NSF)

    It wasn’t always apparent that this would be the solution, but this one solution works for literally all the observations. When someone puts forth the hypothesis that “dark matter and/or dark energy doesn’t exist,” the onus is on them to answer the implicit question, “okay, then what replaces General Relativity as your theory of gravity to explain the entire Universe?” As gravitational wave astronomy has further confirmed Einstein’s greatest theory even more spectacularly, even many of the fringe alternatives to General Relativity have fallen away. As it stands now, no theories exist that successfully do away with dark matter and dark energy and still explain everything that we see. Until there are, there are no real alternatives to the modern picture that deserve to be taken seriously.

    A detailed look at the Universe reveals that it’s made of matter and not antimatter, that dark matter and dark energy are required, and that we don’t know the origin of any of these mysteries. However, the fluctuations in the CMB, the formation and correlations between large-scale structure, and modern observations of gravitational lensing, among many others, all point towards the same picture. (Chris Blake and Sam Moorfield)

    It might not feel right to you, in your gut, that 95% of the Universe would be dark. It might not seem like it’s a reasonable possibility when all you’d need to do, in principle, is to replace your underlying laws with new ones. But until those laws are found, and it hasn’t even been shown that they could mathematically exist, you absolutely have to go with the description of the Universe that all the evidence points to. Anything else is simply an unscientific conclusion.

    And, here we are:

    Universe map: Sloan Digital Sky Survey (SDSS) and 2dF Galaxy Redshift Survey

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

     
  • richardmitnick 4:47 pm on April 17, 2018
    Tags: NIF-National Ignition Facility, Ramp compression, Superearths

    From Lawrence Livermore National Laboratory: “Ramp compression of iron provides insight into core conditions of large rocky exoplanets” 

    Lawrence Livermore National Laboratory

    April 16, 2018
    Breanna Bishop
    bishop33@llnl.gov
    925-423-9802

    High-power lasers at the National Ignition Facility are focused onto a multi-stepped iron sample at the center of the 10-meter-diameter target chamber. These experiments measure the equation of state of iron under core conditions of large rocky exoplanets.

    In a paper published today in Nature Astronomy, a team of researchers from Lawrence Livermore National Laboratory (LLNL), Princeton University, Johns Hopkins University and the University of Rochester has provided the first experimentally based mass-radius relationship for a hypothetical pure iron planet at super-Earth core conditions.

    This discovery can be used to evaluate plausible compositional space for large, rocky exoplanets, forming the basis of future planetary interior models, which in turn can be used to more accurately interpret observational data from the Kepler space mission and aid in identifying potentially habitable planets.

    “The discovery of large numbers of planets outside our solar system has been one of the most exciting scientific discoveries of this generation,” said Ray Smith, a physicist at LLNL and lead author of the research. “These discoveries raise fundamental questions. What are the different types of extrasolar planets and how do they form and evolve? Which of these objects can potentially sustain surface conditions suitable for life? To address such questions, it is necessary to understand the composition and interior structure of these objects.”

    Of the more than 4,000 confirmed and candidate extrasolar planets, those that are one to four times the radius of the Earth are now known to be the most abundant. This size range, which spans between Earth and Neptune, is not represented in our own solar system, indicating that planets form over a wider range of physical conditions than previously thought.

    “Determining the interior structure and composition of these super-Earth planets is challenging but is crucial to understanding the diversity and evolution of planetary systems within our galaxy,” Smith said.

    As core pressures for even a 5×-Earth-mass planet can reach as high as 2 million atmospheres, a fundamental requirement for constraining exoplanetary composition and interior structure is an accurate determination of the material properties at extreme pressures. Iron (Fe) is a cosmochemically abundant element and, as the dominant constituent of terrestrial planetary cores, is a key material for studying super-Earth interiors. A detailed understanding of the properties of iron at super-Earth conditions is an essential component of the team’s experiments.

    The researchers describe a new generation of high-power laser experiments, which use ramp compression techniques to provide the first absolute equation of state measurements of Fe at the extreme pressure and density conditions found within super-Earth cores. Such shock-free dynamic compression is uniquely suited for compressing matter with minimal heating to TPa pressures (1 TPa = 10 million atmospheres).

    The experiments were conducted at the LLNL’s National Ignition Facility (NIF).

    NIF, the world’s largest and most energetic laser, can deliver up to 2 megajoules of laser energy over 30 nanoseconds and provides the necessary laser power and control to ramp compress materials to TPa pressures. The team’s experiments reached peak pressures of 1.4 TPa, four times higher than previous static results, representing the core conditions of a 3-4x Earth-mass planet.
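
    Two of the numbers above are easy to sanity-check. The sketch below (my own back-of-envelope, not from the article) computes the average laser power over the quoted pulse length, which is not the same as NIF's instantaneous peak power, and converts the peak pressure into atmospheres:

    ```python
    # Rough unit checks for the NIF figures quoted above.
    energy_J = 2e6              # up to 2 MJ of laser energy
    pulse_s = 30e-9             # delivered over ~30 ns

    avg_power_W = energy_J / pulse_s
    print(f"average power over the pulse ~ {avg_power_W:.1e} W (~{avg_power_W / 1e12:.0f} TW)")

    ATM_PA = 101_325            # one standard atmosphere in pascals
    peak_pressure_Pa = 1.4e12   # 1.4 TPa reached in the experiments
    print(f"1.4 TPa ~ {peak_pressure_Pa / ATM_PA:.1e} atmospheres (about 14 million atm)")
    ```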

    “Planetary interior models, which rely on a description of constituent materials under extreme pressures, are commonly based on extrapolations of low-pressure data and produce a wide range of predicted material states. Our experimental data provide a firmer basis for establishing the properties of a pure iron planet at super-Earth core conditions,” Smith said. “Furthermore, our study demonstrates the capability for determination of equations of state and other key thermodynamic properties of planetary core materials at pressures well beyond those of conventional static techniques. Such information is crucial for advancing our understanding of the structure and dynamics of large rocky exoplanets and their evolution.”

    Future experiments on NIF will extend the study of planetary materials to several TPa while combining nanosecond X-ray diffraction techniques to determine the crystal structure evolution with pressure.

    Co-authors include Dayne Fratanduono, David Braun, Peter Celliers, Suzanne Ali, Amalia Fernandez-Pañella, Richard Kraus, Damian Swift and Jon Eggert from LLNL; Thomas Duffy from Princeton University; June Wicks from Johns Hopkins University; and Gilbert Collins from the University of Rochester.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition
    LLNL Campus

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration

    LLNL/NIF


    DOE Seal
    NNSA

     
  • richardmitnick 3:00 pm on April 17, 2018
    Tags: Application, Complexity, Fidelity, Hello DARKNESS, The most advanced camera in the world

    From UCSB: Science + Technology 

    UC Santa Barbara

    Complexity, Fidelity, Application
    UCSB/Google researchers in quantum computing professor John Martinis’ group outline their plan for quantum supremacy

    By Sonia Fernandez
    (805) 893-4765
    sonia.fernandez@ucsb.edu
    Thursday, April 12, 2018

    The dilution refrigerator, a cryogenic device where the quantum happens. Photo Credit: Eric Lucero/Google, Inc.

    This superconducting chip, with a total area of one square centimeter, consists of nine qubits in a 1D array. Microwave pulses are applied to control their states and their interactions, and consequently control the dynamics of the system. Such Josephson-junction-based superconducting systems are a leading physical implementation for quantum computation and simulation processing. Photo Credit: Eric Lucero/Google, Inc.

    Things are getting real for researchers in the UC Santa Barbara John Martinis/Google group. They are making good on their intentions to claim supremacy in a tight global race to build the first quantum machine to outperform the world’s best classical supercomputers.

    But what is quantum supremacy in a field where horizons are being widened on a regular basis, in which teams of the brightest quantum computing minds in the world routinely up the ante on the number and type of quantum bits (“qubits”) they can build, each with their own range of qualities?

    “Let’s define that, because it’s kind of vague,” said Google researcher Charles Neill. Simply put, he continued, “we would like to perform an algorithm or computation that couldn’t be done otherwise. That’s what we actually mean.”

    Neill is lead author of the group’s new paper, “A blueprint for demonstrating quantum supremacy with superconducting qubits,” now published in the journal Science.

    Fortunately, nature offers up many such complex situations, in which the variables are so numerous and interdependent that classical computers can’t hold all the values and perform the operations. Think chemical reactions, fluid interactions, even quantum phase changes in solids and a host of other problems that have daunted researchers in the past. Something on the order of at least 49 qubits — roughly equivalent to a petabyte (one million gigabytes) of classical random access memory — could put a quantum computer on equal footing with the world’s supercomputers. Just recently, Neill’s Google/Martinis colleagues announced an effort toward quantum supremacy with a 72-qubit chip possessing a “bristlecone” architecture that has yet to be put through its paces.
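
    The petabyte comparison comes from the exponential growth of the quantum state vector: describing n qubits classically takes 2^n complex amplitudes. A minimal sketch, assuming 8 bytes per amplitude (single-precision complex numbers, my assumption rather than a figure from the article):

    ```python
    def state_vector_bytes(n_qubits, bytes_per_amplitude=8):
        """Classical memory needed to store a full n-qubit state vector.

        8 bytes per amplitude corresponds to single-precision complex numbers.
        """
        return (2 ** n_qubits) * bytes_per_amplitude

    for n in (9, 49, 72):
        petabytes = state_vector_bytes(n) / 1e15
        print(f"{n:2d} qubits -> {petabytes:.3g} PB")
    # 9 qubits is trivial (a few kB), 49 qubits is ~4.5 PB (petabyte scale, as the
    # article says), and 72 qubits would need tens of millions of petabytes.
    ```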

    But according to Neill, it’s more than the number of qubits on hand.

    “You have to generate some sort of evolution in the system which leads you to use every state that has a name associated with it,” he said. The power of quantum computing lies in, among other things, the superpositioning of states. In classical computers, each bit can exist in one of two states — zero or one, off or on, true or false — but qubits can exist in a third state that is a superposition of both zero and one, raising exponentially the number of possible states a quantum system can explore.

    Additionally, say the researchers, fidelity is important, because massive processing power is not worth much if it’s not accurate. Decoherence is a major challenge for anyone building a quantum computer — perturb the system, the information changes. Wait a few hundredths of a second too long, the information changes again.

    “People might build 50 qubit systems, but you have to ask how well it computed what you wanted it to compute,” Neill said. “That’s a critical question. It’s the hardest part of the field.” Experiments with their superconducting qubits have demonstrated an error rate of one percent per qubit with three- and nine-qubit systems, which, they say, can be reduced as they scale up, via improvements in hardware, calibration, materials, architecture and machine learning.

    Building a qubit system complete with error correction components — the researchers estimate a range of 100,000 to a million qubits — is doable and part of the plan. And still years away. But that doesn’t mean their system isn’t already capable of doing some heavy lifting. Just recently it was deployed, with spectroscopy, on the issue of many-body localization in a quantum phase change — a quantum computer solving a quantum statistical mechanics problem. In that experiment, the nine-qubit system became a quantum simulator, using photons bouncing around in their array to map the evolution of electrons in a system of increasing, yet highly controlled, disorder.

    “A good reason why our fidelity was so high is because we’re able to reach complex states in very little time,” Neill explained. The more quickly a system can explore all possible states, the better the prediction of how a system will evolve, he said.

    If all goes smoothly, the world should be seeing a practicable UCSB/Google quantum computer soon. The researchers are eager to put it through its paces, gaining answers to questions that were once accessible only through theory, extrapolation and highly educated guessing — and opening up a whole new level of experiments and research.

    “It’s definitely very exciting,” said Google researcher Pedram Roushan, who led the many-body quantum simulation work published in Science in 2017. They expect their early work to stay close to home, such as research in condensed matter physics and quantum statistical mechanics, but they plan to branch out to other areas, including chemistry and materials, as the technology becomes more refined and accessible.

    “For instance, knowing whether or not a molecule would form a bond or react in some other way with another molecule for some new technology… there are some important problems that you can’t roughly estimate; they really depend on details and very strong computational power,” Roushan said, hinting that a few years down the line they may be able to provide wider access to this computing power. “So you can get an account, log in and explore the quantum world.”

    See the full article here.

    Hello DARKNESS

    UCSB physicists team up with Caltech astronomers to commission the most advanced camera in the world.

    April 16, 2018
    Julie Cohen
    (805) 893-7220
    julie.cohen@ucsb.edu

    The world’s most advanced camera can detect planets around the nearest stars. Photo Credit: Courtesy photo.

    Somewhere in the vastness of the universe another habitable planet likely exists. And it may not be that far — astronomically speaking — from our own solar system.

    Distinguishing that planet’s light from its star, however, can be problematic. But an international team led by UC Santa Barbara physicist Benjamin Mazin has developed a new instrument to detect planets around the nearest stars. It is the world’s largest and most advanced superconducting camera. The team’s work appears in the journal Publications of the Astronomical Society of the Pacific.

    The group, which includes Dimitri Mawet of the California Institute of Technology and Eugene Serabyn of the Jet Propulsion Laboratory in Pasadena, California, created a device named DARKNESS (the DARK-speckle Near-infrared Energy-resolved Superconducting Spectrophotometer), the first 10,000-pixel integral field spectrograph designed to overcome the limitations of traditional semiconductor detectors. It employs Microwave Kinetic Inductance Detectors that, in conjunction with a large telescope and an adaptive optics system, enable direct imaging of planets around nearby stars.

    “Taking a picture of an exoplanet is extremely challenging because the star is much brighter than the planet, and the planet is very close to the star,” said Mazin, who holds the Worster Chair in Experimental Physics at UCSB.

    Funded by the National Science Foundation, DARKNESS is an attempt to overcome some of the technical barriers to detecting planets. It can take the equivalent of thousands of frames per second without any read noise or dark current, which are among the primary sources of error in other instruments. It also has the ability to determine the wavelength and arrival time of every photon. This time domain information is important for distinguishing a planet from scattered or refracted light called speckles.

    “This technology will lower the contrast floor so that we can detect fainter planets,” Mazin explained. “We hope to approach the photon noise limit, which will give us contrast ratios close to 10^-8, allowing us to see planets 100 million times fainter than the star. At those contrast levels, we can see some planets in reflected light, which opens up a whole new domain of planets to explore. The really exciting thing is that this is a technology pathfinder for the next generation of telescopes.”
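
    To put that 10^-8 figure in astronomers' units, a flux ratio maps onto a magnitude difference via Δm = -2.5 log10(contrast). A small illustrative sketch (my addition, not part of the article):

    ```python
    import math

    def contrast_to_delta_mag(contrast):
        """Magnitude difference corresponding to a planet-to-star flux ratio."""
        return -2.5 * math.log10(contrast)

    for contrast in (1e-6, 1e-8):
        dm = contrast_to_delta_mag(contrast)
        print(f"contrast {contrast:.0e} -> planet {dm:.0f} magnitudes fainter than its star")
    # A 1e-8 contrast is 20 magnitudes, i.e. the "100 million times fainter" quoted above.
    ```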

    Designed for the 200-inch Hale telescope at the Palomar Observatory near San Diego, California, DARKNESS acts as both the science camera and a focal-plane wave-front sensor, quickly measuring the light and then sending a signal back to a rubber mirror that can form into a new shape 2,000 times a second. This process cleans up the atmospheric distortion that causes stars to twinkle by suppressing the starlight and enabling higher contrast ratios between the star and the planet.

    During the past year and a half, the team has employed DARKNESS on four runs at Palomar to work out bugs. The researchers will return in May to take more data on certain planets and to demonstrate their progress in improving the contrast ratio.

    “Our hope is that one day we will be able to build an instrument for the Thirty Meter Telescope planned for Mauna Kea on the island of Hawaii or La Palma,” Mazin said. “With that, we’ll be able to take pictures of planets in the habitable zones of nearby low mass stars and look for life in their atmospheres. That’s the long-term goal and this is an important step toward that.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition
    UC Santa Barbara Seal
    The University of California, Santa Barbara (commonly referred to as UC Santa Barbara or UCSB) is a public research university and one of the 10 general campuses of the University of California system. Founded in 1891 as an independent teachers’ college, UCSB joined the University of California system in 1944 and is the third-oldest general-education campus in the system. The university is a comprehensive doctoral university and is organized into five colleges offering 87 undergraduate degrees and 55 graduate degrees. In 2012, UCSB was ranked 41st among “National Universities” and 10th among public universities by U.S. News & World Report. UCSB houses twelve national research centers, including the renowned Kavli Institute for Theoretical Physics.

     
  • richardmitnick 12:53 pm on April 17, 2018

    From Symmetry: “The world’s largest astronomical movie” 

    Symmetry

    04/17/18
    Manuel Gnida

    Artwork by Sandbox Studio, Chicago with Ana Kova

    When the Large Synoptic Survey Telescope begins to survey the night sky in the early 2020s, it’ll collect a treasure trove of data.

    LSST Camera, built at SLAC

    LSST telescope, currently under construction on the El Peñón peak at Cerro Pachón, a 2,682-meter-high mountain in the Coquimbo Region of northern Chile, alongside the existing Gemini South and Southern Astrophysical Research Telescopes.

    The information will benefit a wide range of groundbreaking astronomical and astrophysical research, addressing topics such as dark matter, dark energy, the formation of galaxies and detailed studies of objects in our very own cosmic neighborhood, the Milky Way.

    LSST’s centerpiece will be its 3.2-gigapixel camera, which is being assembled at the US Department of Energy’s SLAC National Accelerator Laboratory. Every few days, the largest digital camera ever built for astronomy will compile a complete image of the Southern sky. Moreover, it’ll do so over and over again for a period of 10 years. It’ll track the motions and changes of tens of billions of stars, galaxies and other objects in what will be the world’s largest stop-motion movie of the universe.

    Fulfilling this extraordinary task requires extraordinary technology. The camera will be the size of a small SUV, weigh in at a whopping 3 tons, and use state-of-the-art optics, imaging technology and data management tools. But how exactly will it work?

    Artwork by Sandbox Studio, Chicago with Ana Kova

    Collecting ancient light

    It all starts with choosing the right location for the telescope. Astronomers want the sharpest images of the dimmest objects for their analyses, and they also want to maximize their observation time. They need the nights to be dark and the air to be dry and stable.

    It turns out that the Atacama Desert, a plateau in the foothills of the Andes Mountains, scores very high for these criteria. That’s where LSST will be located—at nearly 8700 feet altitude on the Cerro Pachón ridge in Chile, 60 miles from the coastal town of La Serena.

    The next challenge is that most objects LSST researchers want to study are so far away that their light has been traveling through space for millions to billions of years. It arrives on Earth merely as a faint glow, and astronomers need to collect as much of that glow as possible. For this purpose, LSST will have a large primary mirror with a diameter close to 28 feet.

    The mirror will be part of a sophisticated three-mirror system that will reflect and focus the cosmic light into the camera.

    The unique optical design is crucial for the telescope’s extraordinary field of view—a measure of the area of sky captured with every snapshot. At 9.6 square degrees, corresponding to 40 times the area of the full moon, the large field of view will allow astronomers to put together a complete map of the Southern night sky every few days.

    After bouncing off the mirrors, the ancient cosmic light will enter the camera through a set of three large lenses. The largest one will have a diameter of more than 5 feet.

    Together with the mirrors, the lenses’ job is to focus the light as sharply as possible onto the focal plane—a grid of light-sensitive sensors at the back of the camera where the light from the sky will be detected.

    A filter changer will insert filters in front of the third lens, allowing astronomers to take images with different kinds of cosmic light that range from the ultraviolet to the near-infrared. This flexibility enhances the range of possible observations with LSST. For example, with an infrared filter researchers can look right through dust and get a better view of objects obscured by it. By comparing how bright an object is when seen through different filters, astronomers also learn how its emitted light varies with the wavelength, which reveals details about how the light is produced.

    Artwork by Sandbox Studio, Chicago with Ana Kova

    An Extraordinary Imaging Device

    The heart of LSST’s camera is its 25-inch-wide focal plane. That’s where the light of stars and galaxies will be turned into electrical signals, which will then be used to reconstruct images of the sky. The focal plane will hold 189 imaging sensors, called charge-coupled devices (CCDs), that perform this transformation.

    Each CCD is 4096 pixels wide and long, and together they’ll add up to the camera’s 3.2 gigapixels. A “good” star will be the size of only a handful of pixels, whereas distant galaxies might appear as somewhat larger fuzzballs.

    The focal plane will consist of 21 smaller square arrays, called rafts, with nine CCDs each. This modular structure will make it easier and less costly to replace imaging sensors if needed in the future.
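
    The gigapixel figure follows directly from the sensor counts above; a quick check (a sketch using only the numbers quoted in the article):

    ```python
    # Focal-plane arithmetic for the LSST camera.
    rafts = 21
    ccds_per_raft = 9
    pixels_per_side = 4096

    ccds = rafts * ccds_per_raft                 # 189 science CCDs
    total_pixels = ccds * pixels_per_side ** 2   # ~3.17 billion pixels

    print(f"{ccds} CCDs x {pixels_per_side}^2 pixels = {total_pixels / 1e9:.2f} gigapixels")
    # Rounds to the 3.2 gigapixels quoted in the text.
    ```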

    To the delight of astronomers interested in extremely dim objects, the camera will have a large aperture (f/1.2, for the photographers among us), meaning that it’ll let a lot of light onto the imaging sensors. However, the large aperture will also make the depth of field very shallow, which means that objects will become blurry very quickly if they are not precisely projected onto the focal plane. That’s why the focal plane will need to be extremely flat, demanding that individual CCDs don’t stick out or recess by more than 0.0004 inches.

    To eliminate unwanted background signals, known as dark currents, the sensors will also need to be cooled to minus 150 degrees Fahrenheit. The temperature will need to be kept stable to half a degree. Because water vapor inside the camera housing would form ice on the sensors at this chilly temperature, the focal plane must also be kept in a vacuum.

    In addition to the 189 “science” sensors that will capture images of the sky, the focal plane will also have three specialty sensors in each of the four corners of the focal plane. Two so-called guiders will frequently monitor the position of a reference star and help LSST stay in sync with the Earth’s rotation. The third sensor, called a wavefront sensor, will be split into two halves that will be positioned six-hundredths of an inch above and below the focal plane. It’ll see objects as blurry “donuts” and provide information that will be used to adjust the telescope’s focus.

    Cinematography of astronomical dimension

    Once the camera has taken enough data from a patch in the sky, about every 36 seconds, the telescope will be repositioned to look at the next spot. A computer algorithm will determine the patches in the sky that will be surveyed by LSST on any given night.

    While the telescope is moving, a shutter between the filter and the camera’s third lens will close to prevent more light from falling onto the imaging sensors. At the same time, the CCDs will be read out and their information digitized.

    The data will be sent into the processing and analysis pipeline that will handle LSST’s enormous flood of information (about 20 terabytes of data every single night). There, it will be turned into useable images. The system will also flag potentially interesting events and send out alerts to astronomers within a minute.
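
    As a rough plausibility check of that 20-terabyte figure, here is a back-of-envelope sketch; the bytes-per-pixel, exposures-per-visit and hours-per-night values are my assumptions, not LSST specifications:

    ```python
    # Back-of-envelope estimate of the nightly raw data volume.
    PIXELS = 3.2e9            # camera pixel count, from the text
    BYTES_PER_PIXEL = 2       # assume 16-bit raw pixels
    image_bytes = PIXELS * BYTES_PER_PIXEL        # ~6.4 GB per exposure

    VISIT_S = 36              # one visit roughly every 36 seconds, from the text
    EXPOSURES_PER_VISIT = 2   # assume two back-to-back exposures per visit
    NIGHT_HOURS = 10          # assume ~10 hours of observing per night

    visits = NIGHT_HOURS * 3600 / VISIT_S
    nightly_tb = visits * EXPOSURES_PER_VISIT * image_bytes / 1e12

    print(f"~{nightly_tb:.0f} TB of raw pixels per night (same order as the quoted ~20 TB)")
    ```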

    This way—patch by patch—a complete image of the entire Southern sky will be stitched together every few days. Then the imaging process will start over and repeat for the 10-year duration of the survey, ultimately creating the largest time-lapse movie of the universe ever made and providing researchers with unprecedented research opportunities.

    For more information on LSST, visit LSST’s website or SLAC’s LSST camera website.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.


     
  • richardmitnick 11:48 am on April 17, 2018
    Tags: Marriage of a 20keV superconducting XFEL with a 100PW laser

    From SPIE: “Marriage of a 20keV superconducting XFEL with a 100PW laser” 

    SPIE

    16 April 2018
    Toshiki Tajima, University of California, Irvine
    Ruxin Li, Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences

    A new regime of science at exawatts and zeptoseconds.

    The Chinese national science and technology major infrastructure, Shanghai Coherent Light Facility (SCLF), organized an international review meeting for the Station of Extreme Light (SEL) in Shanghai on July 10, 2017.

    The Shanghai Institute of Applied Physics is building a Soft X-ray Free Electron Laser that is set to open to users in 2019. Credit: Michael Banks.

    The reviewing committee members included experts in strong-field laser physics, high-energy-density physics, and theoretical physics from Germany, the USA, the UK, France, Japan, Canada and China, chaired by R. Sauerbrey and N. Wang. The working group, led by Ruxin Li of the Shanghai Institute of Optics and Fine Mechanics (SIOM), Chinese Academy of Sciences (CAS), has made a series of breakthroughs in high-energy, high-power, and high-repetition-rate laser system development.

    Reflecting on this, the Review Committee Report [1] stated: “The architecture of the laser system of the Optical Parametric Chirped Pulse Amplification (OPCPA) and its interaction with the XFEL are well thought out. The proposed 10^23 W/cm^2 peak laser power is feasible. The working group has made a series of breakthroughs on high-power laser technologies in the past decades. Their constant effort has resulted in valuable experience, outstanding engineering skills, and international recognition for the group. Their strong track record has laid a strong foundation, which will provide the basis for successful construction of the 100 PW laser system.”

    Based on this, the Committee applauded the work, stating: “The Station of Extreme Light at Shanghai Coherent Light Facility is dedicated to cutting-edge research in strong field science and applications. This includes, for example, astrophysics, nuclear physics, cosmology, and matter under extreme conditions. The combination of the hard XFEL and the world-leading 100PW laser in SEL will initiate exploration of effects such as vacuum birefringence, one of the most prominent strong-field QED effects, acceleration mechanisms leading to ultra-high energy cosmic rays, simulation of black hole physics, and generation of new forms of matter.”

    The developments proposed are based on solid research carried out at SIOM (and other scientific organizations), in particular the research and development of the OPCPA laser amplifier at the highest power level at SIOM. Shown in Figure 1 are SIOM’s 10PW CPA device and the 10PW laser system. The 10PW laser system, the Shanghai Superintense-Ultrafast Lasers Facility (SULF), is based on CPA technology, and the diameter of the Ti:sapphire crystal used in the main amplifier is 235mm, the largest such crystal manufactured for the laser by the scientists at SIOM.

    Based on these developments, SIOM has launched a 100PW laser system, the Station for Extreme Light (SEL). This system has two salient features. First, its power will be an order of magnitude beyond that of the planned highest-powered laser, the Extreme Light Infrastructure (ELI). Second, its design combines the 100PW laser with the SCLF’s XFEL as part of a single system. This project received strong endorsement from the International Review Meeting convened at the SCLF of SIOM on July 10, 2017, and was approved by the Government of China. The overall funding level is approximately US$1.3 billion.

    Figure 1: The 10PW laser system in Shanghai, based on CPA.

    II. Extreme field regime
    The parameters of the SEL are well beyond what has so far been available. Table 1 shows the typical principal physical parameters. The coherent x-ray energy from the SCLF ranges from 3 to 15 keV (hard x-rays), produced by the superconducting x-ray free electron laser (XFEL). The photon number per pulse of this XFEL is 10^12. The pulse can be focused to 200nm, with an energy resolution of 0.6eV. The x-ray intensity at focus is as high as 10^21 W/cm^2.

    The parameters of the 100PW laser for optical photons are as follows: its peak power is 100PW, while its focal intensity is as high as 10^23 W/cm^2. (If we can manage to focus better than this, it could go toward 10^25 W/cm^2.) While this is single-shot performance, the optical laser could deliver a repetition rate of 1Hz at powers of 0.1 to 1PW.

    These parameters by themselves are exciting. However, their coexistence and marriage as a combined unit offer a remarkable capability for future scientific exploration. The combination of a synchrotron light source and an intense laser was first suggested and conducted in the 1990s: Toshiki Tajima suggested that Professor Mamoru Fujiwara at Osaka University make use of the high-energy (8GeV) electrons of SPring-8 combined with an intense laser to make extremely high-energy gamma photons, which he did in his lab [2]. Since then, the combination of accelerator-based synchrotron light sources (and even more advanced XFELs) with intense lasers has come a long way. The present SCLF’s marriage of these two will uncover a new regime of science and greatly impact various technologies and applications, such as nuclear photonics and nonlinear interferometry.

    Figure 2 shows the schematic layout of the SEL. The interaction of the XFEL and the 100PW laser takes place in a chamber in the experimental area. Figure 3 shows the 100PW laser system, which is based on OPCPA technology.

    Figure 2: Schematic layout of the SEL, which couples the 100PW laser with the XFEL.

    Figure 3: Details of the amplification stages of the 100PW laser based on OPCPA.

    The scheme of this marriage is embodied in the SEL concept, in which coherent high-energy x-ray photons are delivered in the configuration shown in Figure 2. In this way we will be able to observe the interaction of high-energy x-ray photons with the most intense lasers and with the matter they create, which will greatly extend the experimental probing of intense laser-matter interactions. The XFEL will provide an ultra-short, MHz-repetition-rate x-ray beam with an energy range of 3-15keV and a significantly large photon number of 10^12 per pulse. A specific x-ray energy of 12.914keV will be used for QED experiments, with a very low energy spread of 0.6eV. The x-ray beam will collide head-on with the 100PW laser pulse in the experimental chamber. The 100PW laser system contains four beams, each reaching a peak power of 25 PW.

    Figure 2 shows that the main laser system will occupy two floors, with its power supply and control system located on separate floors. After the four beams are combined, the laser pulse will be sent to the experimental area on the bottom floor. There, a large vacuum chamber houses the interaction point, where the 100PW laser pulse will be focused to 5μm and collide with the x-ray beam.
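
    A simple estimate shows why focusing 100 PW into a 5 μm spot reaches such extreme intensities. The uniform-spot assumption below makes this an idealized upper bound rather than a design value:

    ```python
    import math

    # Idealized focal intensity for the SEL parameters quoted above.
    power_W = 100e15          # 100 PW
    spot_diameter_cm = 5e-4   # 5 micrometres, expressed in centimetres

    spot_area_cm2 = math.pi * (spot_diameter_cm / 2) ** 2
    intensity_W_cm2 = power_W / spot_area_cm2   # assumes all power lands in a uniform spot

    print(f"ideal focal intensity ~ {intensity_W_cm2:.1e} W/cm^2")
    # ~5e23 W/cm^2 for a perfect 5 um spot; realistic focal-spot quality and losses
    # bring this down toward the quoted 10^23 W/cm^2.
    ```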

    Details of the 100PW laser system are shown in Figure 3. At its core is the OPCPA system. The 100PW laser pulse starts from a laser source whose temporal synchronization signal comes from the XFEL beam. This source will generate high-quality seed pulses, which will go into the PW-level repetition-rate OPCPA front-end. At the front-end, the laser energy will reach 25J, with a spectral width that supports 15fs pulses.

    The main amplifier is based on OPCPA technology and provides 99% of the energy gain of the whole laser system, which requires sufficient pump energy from a Nd:glass pump laser. The final optics assembly will compress the high-energy 2,500 J, 4 ns laser pulse to 15 fs. After compression, the laser pulse will be sent into the experimental chamber with a peak intensity of 10^23 W/cm^2. As shown in Figure 1, we have developed and tested the performance of a high-intensity laser with CPA up to the 10 PW level.
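
    The compressed-pulse numbers quoted above lead directly to the ~100 PW figure. The sketch below is a rough illustration; the compressor and transport efficiency is not stated in the text, so the ~60% throughput used here is an assumption.

```python
# Peak power implied by compressing a 2,500 J pulse to 15 fs.
# The compressor/transport throughput is NOT quoted in the text; the 60% value
# below is an assumption, used only to show how the design lands near 100 PW.
energy_J           = 2500.0
duration_s         = 15e-15
assumed_throughput = 0.6

ideal_peak_power_W     = energy_J / duration_s                 # ~1.7e17 W (~170 PW)
delivered_peak_power_W = assumed_throughput * ideal_peak_power_W

print(f"ideal peak power        ~ {ideal_peak_power_W/1e15:.0f} PW")
print(f"with assumed throughput ~ {delivered_peak_power_W/1e15:.0f} PW")
```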

    III. High Field Science
    The proposed SEL aims to achieve the ultimate in high field science [3],[4],[5]. Here, we describe a simple way to reach that goal.

    The radiation-dominance regime (10^23 W/cm^2) described in Ref. 2 may be accessible and experimentally explored for the first time in sufficient detail with the help of the coherent x-ray probe. As discussed in Sec. 1, if one can focus somewhat more tightly, we may be able to enter the so-called QED quantum regime (~10^24 W/cm^2) [4],[5].
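
    A useful way to gauge how extreme these intensities are is the dimensionless laser strength a0, via the standard relation a0 ≈ 0.855 sqrt(I / 10^18 W/cm^2) λ[μm]. The sketch below evaluates it at the two intensities just mentioned; the 0.8 μm wavelength is an assumed near-infrared value, since the text does not specify the 100 PW laser’s central wavelength.

```python
# Dimensionless laser strength a0 at the intensities discussed above, using the
# standard relation a0 ~ 0.855 * sqrt(I / 1e18 W/cm^2) * lambda[um] for linear
# polarization.  The 0.8 um wavelength is an assumed near-infrared value.
import math

def a0(intensity_w_cm2: float, wavelength_um: float = 0.8) -> float:
    """Normalized vector potential of a linearly polarized laser pulse."""
    return 0.855 * math.sqrt(intensity_w_cm2 / 1e18) * wavelength_um

for I in (1e23, 1e24):
    print(f"I = {I:.0e} W/cm^2  ->  a0 ~ {a0(I):.0f}")
# a0 >> 1 marks strongly relativistic electron motion; the values here (~200-700)
# sit deep in the radiation-dominated and near-QED regimes described above.
```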

    Particle acceleration by laser will enter a new regime. Wakefield generation [6] becomes so nonlinear that it enters what is sometimes called the bow-wake regime [7]. This may be relevant to the astrophysical genesis of extreme high-energy cosmic rays by AGN (active galactic nuclei) jets [8]. In this regime, the physics of wakefield acceleration and that of radiation-pressure acceleration begin to merge (10^23 W/cm^2) [9],[10]. Thus, the laser pulse should be able to pick up and accelerate ions as well as electrons. Sooner or later, the energy of the ions begins to exceed that of the electrons, and their acceleration should become as coherent as the electron acceleration in this regime. Such acceleration will allow ion accelerators to be smaller. (A broader survey of this regime, and of slightly higher intensities, is given in the review of Ref. [9].)

    However, it could go much further than that, thanks to the invention of a new compression technique called “thin film compression” [11]. With this technique, a laser may be compressed to even higher power and intensity, such as the exawatt (EW) level, and further, by relativistic compression, into the shortest pulses ever produced, in the zeptosecond (zs) range [12]. We will thus see the continued manifestation of the Intensity-Pulse Duration Theorem in the extension toward EW powers and zs durations [13]. This will not only explore strong-field QED physics [14],[15], but we will also see the emergence of new phenomena at play in a wider variety of fundamental physics, including: (1) a possible search for the proposed “fifth force” [16],[17]; (2) a dark matter search by four-wave mixing [18]; (3) x-ray wakefields in solid-state matter [19] and related x-ray and optical solid-state plasmonics [20]; (4) a possible test of the energy dependence of gamma photon propagation speed in vacuum, probing a foundational assumption of the Special Theory of Relativity [21]; and (5) zeptosecond streaking of QED processes [22].

    Chen and Mourou [23] suggested exploring general relativity by using the acceleration-gravity equivalence principle to test the Hawking-Unruh process.

    IV. Gamma-ray diagnosis and the marriage of XFEL and HFS
    In the issues of high field science, we often encounter physical processes at higher energies and shorter timescales that may not be easily resolvable by optical diagnostics. Here, the powerful XFEL’s resolution in time and space comes in [24]. X-rays can also serve as signatures in high-intensity experiments such as laser-driven acceleration experiments [25]. A typical display of such interplay may be seen in the diagnostics of the physical processes involved in x-ray wakefield acceleration in solid-state matter. In this case, nanoscopic materials with a nanohole structure [20] need to be observed and controlled. The surfaces of the nanotubes may exhibit surface plasmons and polaritons with nanometer-scale spatial and zeptosecond-scale temporal dynamics, best diagnosed by the XFEL. This is but one example of the marriage of a 20 keV superconducting XFEL and a 100 PW laser. In addition, this technology will enhance studies in photon-induced nuclear physics [26] and the treatment of nuclear materials [27] (including nuclear waste), nuclear pharmacology, nuclear biochemistry, and medicine [28],[29].

    Another example is to use gamma photons to mediate the vacuum nonlinearity caused by an intense laser pulse, exploiting zeptosecond streaking via this gamma photon mediation [22]. In this scheme, the presence of both the intense laser pulse and the x-ray photons plays a crucial role. If this example marks the beginning of zeptosecond photometry and zeptosecond optics, it would be an achievement comparable to the opening of femtosecond optics, followed by attosecond optics [30].

    One more example is the recently proposed exploration of the Fifth Force [17]. In the Hungarian nuclear experiment, a mysterious photon at an energy of 17 MeV was observed. The paper of Ref. [17] suggested that this emission of a gamma photon may be due to an unknown force (the Fifth Force). It may be helpful to inject a large number of monoenergetic photons at this energy to see whether the reversal of the photon-emission process (i.e. photon injection) can probe it more quantitatively. We can check for the fifth force (17 MeV gamma) [16],[17],[31] through the energy-specific, laser-induced gamma photon interaction e + 17 MeV gamma → e + X and its outcome.

    Finally, there is a recent suggestion by Day and Fairbairn [32] that XFEL pulses at 3.5 keV may be used to investigate the astrophysically observed x-ray excess attributed to fluorescent dark matter. Such an avenue may open up with this device, and such an effort, together with the astrophysical observations, may become an important interdisciplinary development.

    To maximize the prospects of success for these applications, we recommend the formation of a broad international collaboration with the organizations and institutions engaged in related fields. Learning from these laboratories’ technologies, practices, and collaborative engagements should reduce risk and duplication and enhance learning and the scope of experience. Collaborations with a variety of technology sectors are important both for the execution of the experiments and for their applications.

    The authors are grateful for close discussions with all the committee members (Naiyan Wang, Roland Sauerbrey, Pisin Chen, See Leang Chin, Thomas Edward Cowan, Thomas Heinzl, Yongfeng Lu, Gerard Mourou, Edmond Turcu, Hitoki Yoneda, Lu Yu) of SEL. The discussions with Profs. T. Tait, K. Abazajian, T. Ebisuzaki, and K. Homma were also very useful. Prof. X. M. Zhang helped with our manuscript.

    References:
    1. Report of the International Review Meeting for Station of Extreme Light (2017).

    2. G. A. Mourou, T. Tajima and S. V. Bulanov, Optics in the relativistic regime, Rev. Mod. Phys. 78, p. 309, 2006.

    3. T. Tajima, K. Mima and H. Baldis, Eds., High-Field Science, Kluwer Academic/Plenum Publishers, New York, NY, 2000.

    4. T. Tajima and G. Mourou, Zettawatt-exawatt lasers and their applications in ultrastrong-field physics, Phys. Rev. ST AB 5, p. 031301, 2002.

    5. G. Mourou and T. Tajima, Summary of the IZEST science and aspiration, Eur. Phys. J. ST 223, pp. 979-984, 2014.

    6. T. Tajima and J. M. Dawson, Laser electron accelerator, Phys. Rev. Lett. 43, p. 267, 1979.

    7. C. K. Lau, P. C. Yeh, O. Luk, J. McClenaghan, T. Ebisuzaki and T. Tajima, Ponderomotive acceleration by relativistic waves, Phys. Rev. ST AB 18, p. 024401, 2015; T. Tajima, Laser acceleration in novel media, Eur. Phys. J. ST 223, pp. 1037-1044, 2014.

    8. T. Ebisuzaki and T. Tajima, Astrophysical ZeV acceleration in the relativistic jet from an accreting supermassive blackhole, Astropart. Phys. 56, pp. 9-15, 2014.

    9. T. Tajima, B. C. Barish, C. P. Barty, S. Bulanov, P. Chen, J. Feldhaus, et al., Science of extreme light infrastructure, AIP Conf. Proc. 1228, pp. 11-35, 2010.

    10. T. Esirkepov, M. Borghesi, S. V. Bulanov, G. Mourou and T. Tajima, Highly efficient relativistic-ion generation in the laser-piston regime, Phys. Rev. Lett. 92, p. 175003, 2004.

    11. G. Mourou, S. Mironov, E. Khazanov and A. Sergeev, Single cycle thin film compressor opening the door to Zeptosecond-Exawatt physics, Eur. Phys. J. ST 223, pp. 1181-1188, 2014.

    12. N. Naumova, J. Nees, I. Sokolov, and G. Mourou, Relativistic generation of isolated attosecond pulses in a λ3 focal volume, Phys. Rev. Lett. 92, p. 063902, 2004.

    13. G. Mourou and T. Tajima, More intense, shorter pulses, Science 331, pp. 41-42, 2011.

    14. M. Marklund and P. K. Shukla, Nonlinear collective effects in photon-photon and photon-plasma interactions, Rev. Mod. Phys. 78, p. 591, 2006.

    15. A. Di Piazza, C. Müller, K. Z. Hatsagortsyan and C. H. Keitel, Extremely high-intensity laser interactions with fundamental quantum systems, Rev. Mod. Phys. 84, p. 1177, 2012.

    16. A. J. Krasznahorkay, M. Csatlós, L. Csige, Z. Gácsi, J. Gulyás, M. Hunyadi, et al., Observation of anomalous internal pair creation in Be 8: a possible indication of a light, neutral boson, Phys. Rev. Lett. 116, p. 042501, 2016.

    17. J. L. Feng, B. Fornal, I. Galon, S. Gardner, J. Smolinsky, T. M. Tait and P. Tanedo, Protophobic fifth-force interpretation of the observed anomaly in Be-8 nuclear transitions, Phys. Rev. Lett. 117, p. 071803, 2016.

    18. K. Homma, D. Habs and T. Tajima, Probing the semi-macroscopic vacuum by higher-harmonic generation under focused intense laser fields, Appl. Phys. B 106, pp. 229-240, 2012.

    19. T. Tajima, Laser acceleration in novel media, Eur. Phys. J. ST 223, pp. 1037-1044, 2014.

    20. X. Zhang, T. Tajima, D. Farinella, Y. Shin, G. Mourou, J. Wheeler and B. Shen, Particle-in-cell simulation of x-ray wakefield acceleration and betatron radiation in nanotubes, Phys. Rev. AB 19, p. 101004, 2016.

    21. T. Tajima, M. Kando and M. Teshima, Feeling the texture of vacuum: laser acceleration toward PeV, Progr. Theor. Phys. 125, pp. 617-631, 2011.

    22. T. Tajima, G. Mourou and K. Nakajima, Laser acceleration, Riv. Nuovo Cim. 40, p. 1, 2017.

    23. P. Chen and G. Mourou, Accelerating plasma mirrors to investigate the black hole information loss paradox, Phys. Rev. Lett. 118, p. 045001, 2017.

    24. C. Pellegrini, A. Marinelli and S. Reiche, The physics of x-ray free-electron lasers, Rev. Mod. Phys. 88, p. 015006, 2016.

    25. S. Corde, K. T. Phuoc, G. Lambert, R. Fitour, V. Malka, A. Rousse and E. Lefebvre, Femtosecond x rays from laser-plasma accelerators, Rev. Mod. Phys. 85, p. 1, 2013.

    26. S. V. Bulanov, T. Z. Esirkepov, M. Kando, H. Kiriyama and K. Kondo, Relativistically strong electromagnetic radiation in a plasma, J. Exp. Theor. Phys. 122, pp. 426-433, 2016.

    27. S. Gales, IZEST meeting presentation, ELI-EP, French Embassy in Tokyo, 2013. https://gargantua.polytechnique.fr/siatel-web/linkto/mICYYYSI7yY6. Accessed 10 November 2017.

    28. D. Habs and U. Köster, Production of medical radioisotopes with high specific activity in photonuclear reactions with γ-beams of high intensity and large brilliance, Appl. Phys. B 103, pp. 501-519, 2011; Ö. Özdemir, Eds., Current Cancer Treatment – Novel Beyond Conventional Approaches, INTECH Open Access Publisher, 2011.

    29. A. Bracco and G. Köerner, Eds., Nuclear Physics for Medicine, Nuclear Physics European Collaboration Committee, 2014.

    30. F. Krausz and M. Ivanov, Attosecond physics, Rev. Mod. Phys. 81, p. 163, 2009.

    31. T. Tajima, T. Tait, and J. Feng, private comment, 2017.

    32. F. Day and M. Fairbairn, submitted to J. High Energy Phys., 2017.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 10:51 am on April 17, 2018 Permalink | Reply
    Tags: , , , CAS A supernova remnant, , Crab Nebula supernova remnant, ,   

    From Isaac Newton Group of Telescopes: “A Key Element to Life is Lacking in the Crab Nebula” 

    Isaac Newton Group of Telescopes Logo
    Isaac Newton Group of Telescopes

    16 April, 2018
    No writer credit.

    Work by Jane Greaves and Phil Cigan from Cardiff University, UK suggests there may be a cosmic paucity of a chemical element essential to life. Greaves has been searching for phosphorus in the universe, because of its link to life on Earth. If this element is lacking in other parts of the cosmos, then it could be difficult for extra-terrestrial life to exist.

    She explains “Phosphorus is one of just six chemical elements on which Earth organisms depend, and it is crucial to the compound adenosine triphosphate (ATP), which cells use to store and transfer energy. Astronomers have just started to pay attention to the cosmic origins of phosphorus, and found quite a few surprises. In particular, phosphorus is created in supernovae – the explosions of massive stars – but the amounts seen so far don’t match our computer models. I wondered what the implications were for life on other planets if unpredictable amounts of phosphorus are spat out into space, and later used in the construction of new planets.”

    The team used LIRIS on the William Herschel Telescope (WHT) to observe infrared light from phosphorus and iron in the Crab Nebula, a supernova remnant around 6,500 light-years away in the constellation of Taurus.

    2
    LIRIS on the William Herschel Telescope (WHT)


    ING 4 meter William Herschel Telescope at Roque de los Muchachos Observatory on La Palma in the Canary Islands, 2,396 m (7,861 ft)

    3
    A composite of infrared (shown as red), visible (green) and ultraviolet (violet) images of the Crab Nebula, with IR enhanced and visible/UV balanced to yield neutral star colours. Composite image made with the Cosmic Coloring Compositor. Credit: NRAO

    Supernova remnant Crab nebula. NASA/ESA Hubble


    X-ray picture of Crab pulsar, taken by Chandra

    Cigan, an expert on these stellar remnants, says: “This is only the second such study of phosphorus that has been made. The first looked at the Cassiopeia A (Cas A) supernova remnant, and so we are able to compare two different stellar explosions and see if they ejected different proportions of phosphorus and iron. The first element supports life, while the second is a major part of our planet’s core. We are eager for a chance to come back and use LIRIS again to complete our study of how phosphorus abundance changes across the Crab Nebula”.

    Cassiopeia A false color image using Hubble and Spitzer telescopes and Chandra X-ray Observatory. Credit NASA JPL-Caltech

    NASA/Spitzer Infrared Telescope

    NASA/Chandra X-ray Telescope

    3
    Spectrum of one position near the centre of the Crab Nebula, taken with LIRIS at the WHT. The overlaid dotted line is a synthetic representation of how the phosphorus line would appear if the Crab Nebula had the same ratio of phosphorus to iron as the median value in Cas A, the only other supernova remnant where phosphorus was studied previously. Credit: Jane Greaves and Phil Cigan.

    These preliminary results suggest that material blown out into space could vary dramatically in chemical composition. Greaves remarks “The route to carrying phosphorus into new-born planets looks rather precarious. We already think that only a few phosphorus-bearing minerals that came to the Earth – probably in meteorites – were reactive enough to get involved in making proto-biomolecules.”

    She adds: “If phosphorus is sourced from supernovae, and then travels across space in meteoritic rocks, I’m wondering if a young planet could find itself lacking in reactive phosphorus because of where it was born? That is, it started off near the wrong kind of supernova? In that case, life might really struggle to get started out of phosphorus-poor chemistry on another world otherwise similar to our own.”

    Press Release:

    30 March 2018
    Written by Robert Massey

    Media contacts

    Dr Robert Massey
    Royal Astronomical Society
    Mob: +44 (0)7802 877 699
    ewass-press@ras.ac.uk

    Ms Anita Heward
    Royal Astronomical Society
    Mob: +44 (0)7756 034 243
    ewass-press@ras.ac.uk

    Dr Morgan Hollis
    Royal Astronomical Society
    Mob: +44 (0)7802 877 700
    ewass-press@ras.ac.uk

    Dr Helen Klus
    Royal Astronomical Society
    ewass-press@ras.ac.uk

    Ms Marieke Baan
    European Astronomical Society
    Mob: +31 6 14 32 26 27
    ewass-press@ras.ac.uk

    Science contacts

    Dr Jane Greaves
    University of Cardiff
    Mob: +44 (0)7599 628268
    GreavesJ1@cardiff.ac.uk

    Dr Phil Cigan
    University of Cardiff
    ciganp@cardiff.ac.uk

    Paucity of phosphorus hints at precarious path for extraterrestrial life
    European Week of Astronomy and Space Science press release
    RAS PR 18/17 (EWASS 13)
    3 April 2018

    See the full press release here .

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition
    Isaac Newton Group telescopes


    ING Isaac Newton 2.5m telescope at Roque de los Muchachos Observatory on La Palma in the Canary Islands, Spain, Altitude 2,344 m (7,690 ft)

     
  • richardmitnick 7:49 am on April 17, 2018 Permalink | Reply
    Tags: , , , , , From NASA Spaceflight: TESS,   

    From NASA Spaceflight: TESS 

    NASA Spaceflight


    April 16, 2018
    Chris Gebhardt

    TESS background/overview:

    NASA/TESS

    The original idea for TESS goes back to 2005, when Dr. George Ricker was the Principal Investigator of the High Energy Transient Explorer (HETE) – the first satellite mission dedicated to the study of gamma-ray bursts. The idea slowly evolved in 2008 and 2009, with Dr. Ricker, now TESS’s Principal Investigator at MIT (Massachusetts Institute of Technology), saying “We wanted to initially try to do this as a privately funded system, and MIT was very helpful for us. We had support from Google for some of the studies that were originally going to be done.”

    That led to a collaboration with NASA Ames to create a proposal for a small-class explorer exoplanet mission that was ultimately not selected for flight. That then led to a partnership with Orbital ATK and the Goddard Space Flight Center in Greenbelt, Maryland, for a revised mission proposal over 2011 and 2012.

    TESS was officially selected for inclusion in NASA’s Medium Explorer mission program on 5 April 2013, and with just over five years of design and build operations, now stands ready to launch. “It’s been a long time coming. It’s been 13 years, but for the last five years, basically, pretty much [everything with the mission has been] the same,” said Dr. Ricker.

    1
    TESS undergoes final pre-launch processing at the Kennedy Space Center. Credit: Chris Gebhardt for NSF/L2

    While TESS is generally perceived as a follow-on to NASA’s Kepler planet hunting satellite, it will perform a very different kind of mission. Where Kepler was a prolonged, deep, and narrow field observatory that looked continuously at specific stars in one quarter of 1% of the sky at an optimal range of 2,000 to 3,000 light years distance, TESS will perform a wide- and shallow-field survey covering 85% of the sky with an optimal distance stretching to 300 light years.
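
    The “one quarter of 1%” figure for Kepler can be recovered from its field of view. A minimal sketch, assuming Kepler’s ~115-square-degree field (a standard value not quoted in the article):

```python
# Fraction of the sky covered by Kepler's field of view.
# Kepler's ~115 deg^2 field is an assumed standard value; the whole sky spans
# about 41,253 square degrees.
kepler_fov_deg2 = 115.0
full_sky_deg2   = 41_253.0

print(f"Kepler sky fraction ~ {100 * kepler_fov_deg2 / full_sky_deg2:.2f}%")  # ~0.28%
```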

    TESS will accomplish its observations by using the sole science instrument onboard: a package of four wide-field-of-view CCD cameras, each with a low-noise, low-power 16.8-megapixel CCD detector. Each camera has a 24° x 24° field of view, a 100 mm (4 in) pupil diameter, a lens assembly with seven optical elements, and a bandpass range of 600 to 1,000 nm.
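
    Those camera specs imply an on-sky pixel scale of roughly 21 arcseconds. A minimal sketch, assuming the 16.8-megapixel detector is a square 4096 x 4096 array (4096 squared is about 16.8 million; the article does not state the array geometry):

```python
# Approximate on-sky pixel scale implied by the camera specifications above.
# Assumes a square 4096 x 4096 pixel array, which the article does not state.
fov_deg         = 24.0
pixels_per_side = 4096

pixel_scale_arcsec = fov_deg * 3600.0 / pixels_per_side
print(f"pixel scale ~ {pixel_scale_arcsec:.1f} arcsec/pixel")   # ~21 arcsec/pixel
```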

    When functioning together – as designed – the four cameras have a 24° x 96° field of view.

    The overall spacecraft is built on a LEOStar-2 satellite bus by Orbital ATK. The spacecraft bus is capable of three-axis stabilization via four hydrazine thrusters as well as four reaction wheels. This provides TESS’s cameras with fine pointing control at the three-arc-second level – necessary for the sensitive light observations TESS will perform once in its science orbit.

    The data collected during TESS’s observational campaigns – as well as general spacecraft communications – will route through a Ka-band antenna with a 100 Mbit/s downlink capability. The entire craft is powered by two solar arrays capable of generating 400 watts.

    “There’s more than 100 scientists and other personnel cooperating on the mission,” said Dr. Ricker, “and as far as the mission itself is concerned, all the work that was involved in designing, developing, and building the hardware, we’ve estimated that there’s more than a million person-hours that have gone into that over the past five years.”

    Launch and Orbit:

    The launch phase of the mission will see a Falcon 9 deliver TESS into a lunar transfer orbit, sending the craft to a precise point where the Moon’s gravity will grab TESS and fling it out into a farther orbit than the one it was initially launched into.

    At 350 kg (772 lb), TESS is the lightest-known payload to have ever launched on a Falcon 9. After lifting off from SLC-40 at the Cape Canaveral Air Force Station, FL, the Falcon 9 will fly due east from the pad. The first stage, after 2 minutes 29 seconds of powered flight, will separate from the second stage and perform a landing on the Of Course I Still Love You drone ship in the Atlantic.

    SpaceX will also attempt to recover the payload fairing, but as there is no fairing catching boat – yet – on the east coast, the fairing will parachute into the ocean for intact recovery, serving primarily as a test of the new recovery systems.

    For the launch, after stage separation, the second stage will continue to fire its single MVac (vacuum-optimized Merlin engine) until SECO-1 (Second stage Engine Cut Off 1) at 8 minutes 22 seconds into flight. This will be followed by a 32 minute 33 second coast of the stage and TESS before the second stage engine restarts for a burn to send TESS into a lunar transfer orbit.

    Shortly after SECO-2, TESS will separate from the top of the Falcon 9 second stage at 48 minutes 42 seconds after launch having been placed into a super synchronous transfer orbit of 200 x 270,000 km (124 x 167,770 mi). The second stage will then perform a third burn to inject itself into a disposal hyperbolic (Earth-escape) orbit.

    Over the first five days, TESS’s control teams will check out the overall health of the spacecraft before activating TESS’s science instruments 7-8 days after launch. TESS will then perform a final lunar flyby on 16 May – one month after launch – a lunar gravity assist that will change the craft’s orbital inclination and send it into its 13.7 day, 108,000 x 373,000 km (67,000 x 232,000 mi) science orbit of Earth – an orbit that is in perfect 2:1 resonance with the Moon.
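
    A quick Kepler’s-third-law check shows why this orbit is described as a 2:1 lunar resonance. The sketch below treats the 108,000 x 373,000 km figures as geocentric perigee and apogee distances (an assumption; if they were altitudes, Earth’s ~6,371 km radius would be added):

```python
# Period of the quoted 108,000 x 373,000 km science orbit via Kepler's third law,
# compared with half a lunar sidereal month.  The two distances are treated as
# geocentric (an assumption); if they were altitudes, add Earth's ~6,371 km radius.
import math

MU_EARTH  = 3.986e5        # Earth's gravitational parameter [km^3/s^2]
r_perigee = 108_000.0      # km
r_apogee  = 373_000.0      # km

a = 0.5 * (r_perigee + r_apogee)                       # semi-major axis [km]
period_days = 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 86_400.0

print(f"orbital period              ~ {period_days:.1f} days")   # ~13.6 days
print(f"half a lunar sidereal month ~ {27.32 / 2:.2f} days")     # ~13.66 days
```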

    2
    The maneuvers and encounters leading to the final TESS orbit. PLEA and PLEP are the post-lunar-encounter apogee and perigee, respectively. Credit: Ricker et al. 2015.

    The specific orbit, referred to as the P/2 lunar resonant orbit, will place TESS completely outside the Van Allen Radiation belts, with TESS’s apogee (farthest point in orbit from the Earth) approximately 90 degrees away from the position of the Moon. This will minimize the Moon’s potential destabilizing effect on TESS and maintain a stable orbit for decades while also providing a consistent, good camera temperature range for the observatory’s operations.

    Moreover, this orbit will provide TESS with unobstructed views of both the Northern and Southern Hemispheres. For almost all of its orbit, TESS will be in data gathering mode, only transmitting its stored data to Earth once per orbit during the three hours of its closest approach to Earth, or perigee. Assuming an on-time launch, TESS will enter operations on 12 June.
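
    Combining the 100 Mbit/s Ka-band rate quoted earlier with the roughly three-hour perigee pass gives a feel for the data volume per orbit. The sketch below is an idealized upper bound that assumes the full rate is sustained for the entire pass:

```python
# Idealized data volume TESS can return during one ~3-hour perigee downlink,
# assuming the 100 Mbit/s Ka-band rate is sustained for the whole pass.
downlink_rate_bps = 100e6        # 100 Mbit/s
pass_duration_s   = 3 * 3600     # ~3 hours near perigee

gigabytes = downlink_rate_bps * pass_duration_s / 8 / 1e9
print(f"~{gigabytes:.0f} GB per orbit (upper bound)")   # ~135 GB
```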

    Overall, TESS has daily launch opportunities from 16-21 April, no launch opportunity on the 22nd (per NASA documentation), and then daily opportunities again from 23-26 April. There is no opportunity on 22 April because the amount of time between the consecutive daily opportunities on 21 and 23 April is just slightly longer than 24 hours, thus barely skipping over all times on the 22nd.

    However, if for some reason TESS is not off the ground by 26 April, the exoplanet hunter must stand down launch operations so that NASA’s Launch Services Provider (LSP) group can shift gears to support the agency’s InSight mission launch to Mars from Vandenberg Air Force Base, California.

    The LSP does not have a large enough staff to support two missions from both coasts, and since InSight has a short interplanetary launch window it must launch within, InSight would get priority over TESS. After InSight, TESS has additional launch opportunities in both May and June.

    Mission:

    Once its checkout phase is complete, TESS will begin its 26 observational campaigns (13 for each hemisphere) to survey 85% of the sky for transiting exoplanets near Earth. Observations will start with the Southern Hemisphere, and those 13 campaigns will last approximately one year.

    According to Dr. Ricker, choosing to survey the Southern Hemisphere first was “a function of the follow-up resources that are currently available. Many of the most powerful telescopes that ground-based astronomers use are located in the Southern Hemisphere.”

    TESS will then be re-aimed to perform the 13 observational campaigns needed to cover the Northern Hemisphere. During all 26 campaigns, the entire southern and northern polar sky regions will receive near-continuous, year-long assessments from TESS’s cameras – as the observation campaigns for the Southern and Northern Hemispheres overlap completely at their respective poles.

    3
    Dr. Ricker shows the number of exoplanets TESS is predicted to find within 100 parsecs (326 lightyears) of Earth. Credit Ricker et al. for NSF/L2

    Every 13.7 days, when TESS swings closest to Earth, the craft will downlink its observation data to scientists at MIT, who will process it and make it available to other scientists and the public. Specifically, TESS’s team will focus on the 1,000 red dwarf stars closest to Earth, as well as nearby G, K, and M type stars brighter than apparent magnitude 12.

    Over its primary 2 year mission, TESS will observe about half a million stars in an area 400 times larger than the Kepler mission and is expected to find 20,000 exoplanets – including 500-1,000 Earth-sized planets and Super-Earths.

    These planets will be added to the growing number of known exoplanets. According to NASA’s Exoplanet Archive hosted by CalTech, as of 12 April 2018, there are 3,717 known exoplanets with 2,652 of those found by the Kepler Space Telescope.

    TESS’s primary mission duration is two years, during which all of its science objectives are scheduled to be completed. While a mission extension is never a guarantee, TESS can be extended for additional observations based on its design and orbit. “We can extend, because the orbit will be operating and aligned for more than two decades,” said Dr. Ricker. “Now, as is the case for many Explorer missions, we fully expect that there will be an extended mission for TESS, so we pre-designed the satellite and the operation so that it can go on for a much longer time.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NASASpaceFlight.com, now in its eighth year of operations, is already the leading online news resource for everyone interested in space flight specific news, supplying our readership with the latest news, around the clock, with editors covering all the leading space faring nations.

    Breaking more exclusive space flight related news stories than any other site in its field, NASASpaceFlight.com is dedicated to expanding the public’s awareness and respect for the space flight industry, which in turn is reflected in the many thousands of space industry visitors to the site, ranging from NASA to Lockheed Martin, Boeing, United Space Alliance and commercial space flight arena.

    With a monthly readership of 500,000 visitors and growing, the site’s expansion has already seen articles referenced and linked by major news networks such as MSNBC, CBS, The New York Times, and Popular Science, to name but a few.

     
  • richardmitnick 6:40 am on April 17, 2018 Permalink | Reply
    Tags: , , , , Hubble Uncovers Evidence for Extrasolar Planet Under Construction Jun 13 2013,   

    From Hubble via Manu: “Hubble Uncovers Evidence for Extrasolar Planet Under Construction Jun 13, 2013” 


    Manu Garcia, a friend from IAC.

    The universe around us.
    Astronomy, everything you wanted to know about our local universe and never dared to ask.

    NASA Hubble Banner

    NASA/ESA Hubble Telescope


    Jun 13, 2013

    Donna Weaver
    Space Telescope Science Institute, Baltimore, Md.
    410-338-4493
    dweaver@stsci.edu

    Ray Villard
    Space Telescope Science Institute, Baltimore, Md.
    410-338-4514
    villard@stsci.edu

    John Debes
    Space Telescope Science Institute, Baltimore, Md.
    410-338-4782
    debes@stsci.edu

    1
    Nearly 900 extrasolar planets have been confirmed to date, but now for the first time astronomers think they are seeing compelling evidence for a planet under construction in an unlikely place, at a great distance from its diminutive red dwarf star.
    The keen vision of NASA’s Hubble Space Telescope has detected a mysterious gap in a vast protoplanetary disk of gas and dust swirling around the nearby star TW Hydrae, located 176 light-years away in the constellation Hydra (the Sea Serpent). The gap’s presence is best explained as due to the effects of a growing, unseen planet that is gravitationally sweeping up material and carving out a lane in the disk, like a snow plow.

    2
    TW Hydrae Protoplanetary Disk
    This graphic shows a gap in a protoplanetary disk of dust and gas whirling around the nearby red dwarf star TW Hydrae.
    The gap’s presence is best explained as due to the effects of a growing, unseen planet that is gravitationally sweeping up material and carving out a lane in the disk, like a snow plow. In the NASA Hubble Space Telescope image at left, a gap can be seen about 7.5 billion miles away from the star in the center of the disk. If the putative planet orbited in our solar system, it would be roughly twice Pluto’s distance from our Sun. The image was taken in near-infrared light by the Near Infrared Camera and Multi-Object Spectrometer (NICMOS).
    Astronomers used a masking device on NICMOS to block out the star’s bright light so that the disk’s structure could be seen. The Hubble observations reveal that the gap, which is 1.9 billion miles wide, is not completely cleared out. The graphic at right shows the gap relative to the star. TW Hydrae resides 176 light-years away in the constellation Hydra (the Sea Serpent). The Hubble observations were taken on June 17, 2005.

    3
    TW Hydrae Disk – Hubble
    Credits: NASA, ESA, J. Debes (STScI), H. Jang-Condell (University of Wyoming), A. Weinberger (Carnegie Institution of Washington), A. Roberge (Goddard Space Flight Center), and G. Schneider (University of Arizona/Steward Observatory)

    4
    Compass and Scale Image of TW Hydrae Disk
    Credits: NASA, ESA, and Z. Levay (STScI/AURA)

    5
    Comparison of TW Hydrae System and Solar System
    This illustration shows that the TW Hydrae protoplanetary disk is much wider than the size of our solar system. In fact, the gap in the TW Hydrae disk produced by a suspected planet resides 7.5 billion miles from the star. At this distance, the putative planet would orbit far beyond our Kuiper Belt, a reservoir of icy, leftover material from the formation of our solar system.


    Researchers, led by John Debes of the Space Telescope Science Institute in Baltimore, Md., found the gap about 7.5 billion miles from the red dwarf star. If the putative planet orbited in our solar system, it would be roughly twice Pluto’s distance from the Sun.
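
    The “roughly twice Pluto’s distance” comparison is easy to verify. A minimal sketch, using Pluto’s mean distance of about 39.5 AU (a standard value, not taken from the article):

```python
# Unit check for the gap distance: 7.5 billion miles in AU, compared with Pluto.
# Pluto's ~39.5 AU mean distance is a standard value, not taken from the article.
MILES_PER_AU  = 92.956e6
gap_miles     = 7.5e9
pluto_dist_au = 39.5

gap_au = gap_miles / MILES_PER_AU
print(f"gap distance ~ {gap_au:.0f} AU")                             # ~81 AU
print(f"ratio to Pluto's distance ~ {gap_au / pluto_dist_au:.1f}x")  # ~2.0x
```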

    The suspected planet’s wide orbit means that it is moving slowly around its host star. Finding the suspected planet in this orbit challenges current planet formation theories. The conventional planet-making recipe proposes that planets form over tens of millions of years from the slow but persistent buildup of dust, rocks, and gas as a budding planet picks up material from the surrounding disk. TW Hydrae, however, is only 8 million years old. There has not been enough time for a planet to grow through the slow accumulation of smaller debris. In fact, a planet at 7.5 billion miles from its star would take more than 200 times longer to form than Jupiter did at its distance from the Sun because of its much slower orbital speed and a deficiency of material in the disk.

    An alternative planet-formation theory suggests that a piece of the disk becomes gravitationally unstable and collapses on itself. In this scenario, a planet could form more quickly, in just a few thousand years.

    “If we can actually confirm that there’s a planet there, we can connect its characteristics to measurements of the gap properties,” Debes says. “That might add to planet formation theories as to how you can actually form a planet very far out. There’s definitely a gap structure. We think it’s probably a planet given the fact that the gap is sharp and circular.”

    What complicates the story is that the red dwarf star is only 55 percent the mass of our Sun. “It’s so intriguing to see a system like this,” Debes says. “This is the lowest-mass star for which we’ve observed a gap so far out.”

    The disk also lacks large dust grains in its outer regions. Observations from ALMA (the Atacama Large Millimeter Array) show that millimeter-sized (about four-hundredths of an inch) dust, roughly the size of a grain of sand, cuts off sharply at about 5.5 billion miles from the star, just short of the gap.

    ESO/NRAO/NAOJ ALMA Array in Chile in the Atacama at Chajnantor plateau, at 5,000 metres

    The disk is 41 billion miles across.

    “Typically, you need pebbles before you can have a planet. So, if there is a planet and there is no dust larger than a grain of sand farther out, that would be a huge challenge to traditional planet-formation models,” Debes says.

    The Hubble observations reveal that the gap, which is 1.9 billion miles wide, is not completely cleared out. The team suggests that if a planet exists, it is in the process of forming and not very massive. Based on the evidence, team member Hannah Jang-Condell at the University of Wyoming in Laramie estimates that the putative planet is 6 to 28 times more massive than Earth. Within this range lies a class of planets called super-Earths and ice giants. Such a small planet mass is also a challenge to direct-collapse planet-formation theories, which predict that clumps of material one to two times more massive than Jupiter can collapse to form a planet.

    TW Hydrae has been a popular target with astronomers. The system is one of the closest examples of a face-on disk, giving astronomers an overhead view of the star’s environment. Debes’s team used Hubble’s Near Infrared Camera and Multi-Object Spectrometer (NICMOS) to observe the star in near-infrared light.

    NASA/Hubble NICMOS

    The team then re-analyzed archival Hubble data, using more NICMOS images as well as optical and spectroscopic observations from the Space Telescope Imaging Spectrograph (STIS).

    NASA Hubble STIS

    Armed with these observations, they composed the most comprehensive view of the system in scattered light over many wavelengths.

    When Debes accounted for the rate at which the disk dims from reflected starlight, the gap was highlighted. It was a feature that two previous Hubble studies had suspected but could not definitively confirm. These earlier observations noted an uneven brightness in the disk but did not identify it as a gap.

    “When I first saw the gap structure, it just popped out like that,” Debes says. “The fact that we see the gap at every wavelength tells you that it’s a structural feature rather than an instrumental artifact or a feature of how the dust scatters light.”

    The team plans to use ALMA and NASA’s upcoming James Webb Space Telescope, an infrared observatory set to launch in 2018, to study the system in more detail.

    The team’s paper was published online on June 14 in The Astrophysical Journal.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency. NASA’s Goddard Space Flight Center manages the telescope. The Space Telescope Science Institute (STScI), is a free-standing science center, located on the campus of The Johns Hopkins University and operated by the Association of Universities for Research in Astronomy (AURA) for NASA, conducts Hubble science operations.

    ESA50 Logo large

    AURA Icon

    NASA image

     
  • richardmitnick 4:48 pm on April 16, 2018 Permalink | Reply
    Tags: , , , , , Featured Image: Stars from Broken Clouds and Disks   

    From AAS NOVA: “Featured Image: Stars from Broken Clouds and Disks” 

    AASNOVA

    AAS NOVA

    16 April 2018
    Susanna Kohler

    1
    This still from a simulation captures binary star formation in action. Researchers have long speculated on the processes that lead to clouds of gas and dust breaking up into smaller pieces to form multiple-star systems — but these take place over a large range of scales, making them difficult to simulate. In a new study led by Leonardo Sigalotti (UAM Azcapotzalco, Mexico), researchers have used a smoothed-particle hydrodynamics code to model binary star formation on scales of thousands of AU down to scales as small as ~0.1 AU. In the scene shown above, a collapsing cloud of gas and dust has recently fragmented into two pieces, forming a pair of disks separated by around 200 AU. In addition, we can see that smaller-scale fragmentation is just starting in one of these disks, Disk B. Here, one of the disk’s spiral arms has become unstable and is beginning to condense; it will eventually form another star, producing a hierarchical system: a close binary within the larger-scale binary. Check out the broader process in the four panels below (which show the system as it evolves over time), or visit the paper linked below for more information about what the authors learned.

    2
    Evolution of a collapsed cloud after large-scale fragmentation into a binary protostar: (a) 44.14 kyr, (b) 44.39 kyr, (c) 44.43 kyr, and (d) 44.68 kyr. The insets show magnifications of the binary cores. [Adapted from Sigalotti et al. 2018]

    Citation

    Leonardo Di G. Sigalotti et al 2018 ApJ 857 40. http://iopscience.iop.org/article/10.3847/1538-4357/aab619/meta

    Related Journal Articles

    Consistent SPH Simulations of Protostellar Collapse and Fragmentation doi: 10.3847/1538-4357/aa5655
    Rotationally Induced Fragmentation in the Prestellar Core L1544 doi: 10.1088/0004-637X/780/2/188
    Signatures of Gravitational Instability in Resolved Images of Protostellar Disks doi: 10.3847/0004-637X/823/2/141
    The Burst Mode of Accretion in Primordial Protostars doi: 10.1088/0004-637X/768/2/131
    Gravitational Collapse and Fragmentation of Molecular Cloud Cores with GADGET-2 doi: 10.1086/520492
    The Formation of Low-mass Binary Star Systems Via Turbulent Fragmentation doi: 10.1088/0004-637X/725/2/1485

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    1

    AAS Mission and Vision Statement

    The mission of the American Astronomical Society is to enhance and share humanity’s scientific understanding of the Universe.

    The Society, through its publications, disseminates and archives the results of astronomical research. The Society also communicates and explains our understanding of the universe to the public.
    The Society facilitates and strengthens the interactions among members through professional meetings and other means. The Society supports member divisions representing specialized research and astronomical interests.
    The Society represents the goals of its community of members to the nation and the world. The Society also works with other scientific and educational societies to promote the advancement of science.
    The Society, through its members, trains, mentors and supports the next generation of astronomers. The Society supports and promotes increased participation of historically underrepresented groups in astronomy.
    The Society assists its members to develop their skills in the fields of education and public outreach at all levels. The Society promotes broad interest in astronomy, which enhances science literacy and leads many to careers in science and engineering.

    Adopted June 7, 2009

     