Updates from richardmitnick

  • richardmitnick 4:07 pm on November 24, 2015

    From ORNL: “New supercomputer simulations enhance understanding of protein motion and function” 


    Oak Ridge National Laboratory

    November 23, 2015
    Morgan McCorkle, Communications
    mccorkleml@ornl.gov, 865.574.7308

    Miki Nolin

    Illustration of the structure of a phosphoglycerate kinase protein that was subjected to molecular dynamics simulations. The relative motions of the red and blue domains of the proteins are highly complex, and can be described in terms of motion of a configurational point on a rough energy landscape (illustrated). The transitions of the structure between energy minima on the landscape can be described in terms of a network (illustrated), which is found to be fractal (self-similar) on every timescale. Image credit: Thomas Splettstoesser; http://www.scistyle.com

    Supercomputing simulations at the Department of Energy’s Oak Ridge National Laboratory could change how researchers understand the internal motions of proteins that play functional, structural and regulatory roles in all living organisms. The team’s results are featured in Nature Physics.

    “Proteins have never been seen this way before,” said coauthor Jeremy Smith, director of ORNL’s Center for Molecular Biophysics and a Governor’s Chair at the University of Tennessee (UT). “We used considerable computer power to provide a unified conceptual picture of the motions in proteins over a huge range of timescales, from the very shortest lengths of time at which atoms move (picoseconds) right up to the lifetimes of proteins in cells (roughly 1000 seconds). It changes what we think a protein fundamentally is.”

    Studying proteins—their structure and function—is essential to advancing understanding of biological systems relevant to different energy and medical sciences, from bioenergy research and subsurface biogeochemistry to drug design.

    Results obtained by Smith’s UT graduate student, Xiaohu Hu, revealed that the dynamics of single protein molecules are “self-similar” and out of equilibrium over an enormous range of timescales.

    With the help of Titan—the fastest supercomputer in the U.S., located at the DOE Office of Science’s Oak Ridge Leadership Computing Facility—Smith’s team developed a complete picture of protein dynamics, revealing that the structural fluctuations within any two identical protein molecules, even if coded by the same gene, turn out to be different.


    “A gene is a code for a protein, producing different copies of the protein that should be the same, but the internal fluctuations of these individual protein molecules may never reach equilibrium, or converge,” Smith said. “This is because the fluctuations themselves are continually aging and don’t have enough time to settle down before the protein molecules are eaten up in the cell and replaced.”

    Understanding the out-of-equilibrium phenomenon has biological implications because the function of a protein depends on its motions. Two individual protein molecules, even though they come from the same gene, will not function precisely the same way within the cell.

    “You may have, for example, two identical enzyme molecules that catalyze the same reaction,” said Smith. “But due to the absence of equilibrium, the rate at which the catalysis happens will be slightly different for the two proteins. This affects the biological function of the protein.”

    The team also discovered that the dynamics of single protein molecules are self-similar, or fractal, over the whole range of timescales. In other words, the motions in a single protein molecule look the same no matter how long you observe them, from picoseconds to hundreds of seconds.

    “The motions in a protein, how the bits of the protein wiggle and jiggle relative to each other, resemble one another on all these timescales,” Smith said. “We represent the shape of a protein as a point. If it changes its shape due to motions, it goes to a different point, and so on. We joined these points, drawing pictures, and we found that these pictures are the same when you look at them on whatever timescale, whether it’s nanoseconds, microseconds, or milliseconds.”
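This kind of statistical self-similarity can be illustrated with a toy model (a sketch only, not the team's molecular dynamics analysis): a simple Gaussian random walk looks alike on all timescales once time and displacement are rescaled together, because its mean squared displacement grows linearly with the lag.

```python
import random

# Toy model (not the paper's analysis): a Gaussian random walk is
# statistically self-similar. Stretching the lag by a factor of 10
# stretches the mean squared displacement by roughly the same factor,
# so rescaled trajectories are statistically indistinguishable.
def mean_squared_displacement(path, lag):
    """Average squared displacement over all windows of length `lag`."""
    diffs = [(path[i + lag] - path[i]) ** 2 for i in range(len(path) - lag)]
    return sum(diffs) / len(diffs)

rng = random.Random(42)
path = [0.0]
for _ in range(100_000):
    path.append(path[-1] + rng.gauss(0.0, 1.0))

ratio = mean_squared_displacement(path, 1000) / mean_squared_displacement(path, 100)
print(round(ratio, 1))  # roughly 10, mirroring the 10x change in timescale
```

A protein's configurational motion is far more complex than this walk, but the same rescaling test underlies the fractal picture described in the article.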

    By building a more complete picture of protein dynamics, the team’s research reveals that motions of a single protein molecule on very fast timescales resemble those that govern the protein’s function.

    To complete all of the simulations, the team combined the power of Titan with two other supercomputers—Anton, a specialty parallel computer built by D.E. Shaw Research, and Hopper, the National Energy Research Scientific Computing Center’s Cray XE6 supercomputer located at Lawrence Berkeley National Laboratory.



    “Titan was especially useful for us to get accurate statistics,” Smith said. “It allowed us to do a lot of simulations in order to reduce the errors and get more confident results.”

    The title of the Nature Physics paper is “The Dynamics of Single Protein Molecules is Non-Equilibrium and Self-Similar Over Thirteen Decades in Time”: http://www.nature.com/nphys/journal/vaop/ncurrent/full/nphys3553.html

    This research was supported by the DOE Office of Science through an Advanced Scientific Computing Research (ASCR) Leadership Computing Challenge (ALCC) allocation and funded in part by a DOE Experimental Program to Stimulate Competitive Research (EPSCoR) award. The Oak Ridge Leadership Computing Facility and National Energy Research Scientific Computing Center are DOE Office of Science User Facilities.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


  • richardmitnick 3:44 pm on November 24, 2015

    From NOAO: “Oodles of Faint Dwarf Galaxies in Fornax Shed Light on a Cosmological Mystery” 

    NOAO Banner

    November 23, 2015
    Dr. Joan Najita
    National Optical Astronomy Observatory
    950 N Cherry Ave
    Tucson AZ 85719 USA
    +1 520-318-8416
    E-mail: najita@noao.edu

    Image of the inner 3 square degrees of the NGFS survey footprint compared with the size of the Moon. Low surface brightness dwarf galaxies are marked by red circles. Gray circles indicate previously known dwarf galaxies. The dwarf galaxies, which vastly outnumber the bright galaxies, may be the “missing satellites” predicted by cosmological simulations.

    An astonishing number of faint low surface brightness dwarf galaxies recently discovered in the Fornax cluster of galaxies may help to solve the long-standing cosmological mystery of “The Missing Satellites”. The discovery, made by an international team of astronomers led by Roberto Muñoz and Thomas Puzia of Pontificia Universidad Católica de Chile, was carried out using the Dark Energy Camera (DECam) on the 4-m Blanco telescope at Cerro Tololo Inter-American Observatory (CTIO). CTIO is operated by the National Optical Astronomy Observatory (NOAO).

    DECam (built at FNAL) mounted on the CTIO Victor M. Blanco 4-m telescope in Chile.

    Computer simulations of the evolution of the matter distribution in the Universe predict that dwarf galaxies should vastly outnumber galaxies like the Milky Way, with hundreds of low mass dwarf galaxies predicted for every Milky Way-like galaxy. The apparent shortage of dwarf galaxies relative to these predictions, “the missing satellites problem,” could imply that the cosmological simulations are wrong or that the predicted dwarf galaxies have simply not yet been discovered. The discovery of numerous faint dwarf galaxies in Fornax suggests that the “missing satellites” are now being found.

    The discovery, recently published in the Astrophysical Journal, comes as one of the first results from the Next Generation Fornax Survey (NGFS), a study of the central 30 square degree region of the Fornax galaxy cluster using optical imaging with DECam and near-infrared imaging with ESO’s VISTA/VIRCam. The Fornax cluster, located at a distance of 62 million light-years, is the second richest galaxy cluster within 100 million light-years after the much richer Virgo cluster.

    The deep, high-quality images of the Fornax cluster core obtained with DECam were critical to the recovery of the missing dwarf galaxies. “With the combination of DECam’s huge field of view (3 square degrees) and our novel observing strategy and data reduction algorithms, we were able to detect extremely diffuse low-surface brightness galaxies,” explained Roberto Muñoz, the lead author of the study.

    Because the low surface brightness dwarf galaxies are extremely diffuse, stargazers residing in one of these galaxies would see a night sky very different from that seen from Earth. The stellar density of the faint dwarf galaxies (one star per million cubic parsecs) is about a million times lower than that in the neighborhood of the Sun, or almost a billion times lower than in the bulge of the Milky Way.

    As a result, “inhabitants of worlds in one of our NGFS ultra-faint dwarfs would find their sky sparsely populated with visible objects and extremely boring. They would perhaps not even realize that they live in a galaxy!” mused coauthor Thomas Puzia.

    The large number of dwarf galaxies discovered in the Fornax cluster echoes the emerging census of satellites of our own Galaxy, the Milky Way. More than 20 dwarf galaxy companions have been discovered in the past year, many of which were also discovered with DECam.

    Reference: “Unveiling a Rich System of Faint Dwarf Galaxies in the Next Generation Fornax Survey,” Roberto P. Muñoz et al., 2015 November 1, Astrophysical Journal Letters [http://iopscience.iop.org/article/10.1088/2041-8205/813/1/L15, preprint: http://arxiv.org/abs/1510.02475].

    Cerro Tololo Inter-American Observatory is managed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy Inc. (AURA) under a cooperative agreement with the National Science Foundation.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Education Coalition

    NOAO News
    NOAO is the US national research & development center for ground-based night time astronomy. In particular, NOAO is enabling the development of the US optical-infrared (O/IR) System, an alliance of public and private observatories allied for excellence in scientific research, education and public outreach.

    Our core mission is to provide qualified professional researchers with peer-reviewed access to forefront scientific capabilities on telescopes operated by NOAO, as well as on other telescopes throughout the O/IR System. Today, these telescopes range in aperture size from 2-m to 10-m. NOAO is participating in the development of telescopes with aperture sizes of 20-m and larger, as well as a unique 8-m telescope that will make a 10-year movie of the Southern sky.

    In support of this mission, NOAO is engaged in programs to develop the next generation of telescopes, instruments, and software tools necessary to enable exploration and investigation of the observable Universe, from planets orbiting other stars to the most distant galaxies.

    To communicate the excitement of such world-class scientific research and technology development, NOAO has developed a nationally recognized Education and Public Outreach program. The main goals of the NOAO EPO program are to inspire young people to become explorers in science and research-based technology, and to reach out to groups and individuals who have been historically under-represented in the physics and astronomy science enterprise.

    The National Optical Astronomy Observatory is proud to be a US National Node in the International Year of Astronomy, 2009.

    About Our Observatories:
    Kitt Peak National Observatory (KPNO)

    Kitt Peak

    Kitt Peak National Observatory (KPNO) has its headquarters in Tucson and operates the Mayall 4-meter, the 3.5-meter WIYN, the 2.1-meter and Coudé Feed, and the 0.9-meter telescopes on Kitt Peak Mountain, about 55 miles southwest of the city.

    Cerro Tololo Inter-American Observatory (CTIO)

    NOAO Cerro Tololo

    The Cerro Tololo Inter-American Observatory (CTIO) is located in northern Chile. CTIO operates the 4-meter, 1.5-meter, 0.9-meter, and Curtis Schmidt telescopes at this site.

    The NOAO System Science Center (NSSC)

    Gemini North

    Gemini South

    The NOAO System Science Center (NSSC) at NOAO is the gateway for the U.S. astronomical community to the International Gemini Project: twin 8.1-meter telescopes in Hawaii and Chile that provide unprecedented coverage (northern and southern skies) and details of our universe.

    NOAO is managed by the Association of Universities for Research in Astronomy under a Cooperative Agreement with the National Science Foundation.

  • richardmitnick 2:17 pm on November 24, 2015
    Tags: Lyman-alpha emissions

    From CANDELS: “Coming Out of the Dark Ages” 

    Hubble CANDELS

    November 24, 2015

    Until about 400,000 years after the Big Bang, the Universe was mostly full of electrons and protons zipping about in random directions. Only when the Universe had cooled enough, because of expansion, could electrons and protons combine to form neutral hydrogen (the lightest element in the Universe) for the first time. This epoch is known as the epoch of recombination. The Universe then entered and remained in what we call the Dark Ages until the formation of the first luminous sources: the first stars, first galaxies, quasars, and so on.

    During this period, the Universe was full of neutral hydrogen and thus completely opaque to any ultraviolet (UV) radiation, because neutral hydrogen is very efficient at absorbing UV photons. Intense ionizing UV photons from the first stars and galaxies then began to ionize their surroundings, forming ionized bubbles. These bubbles grew with time, and eventually the entire Universe was filled with them. The epoch during which this phase transition occurred, i.e., the ionization of most of the neutral hydrogen, is called the epoch of reionization (see the figure below). This was the last major transition in the history of the Universe and had a significant impact on its large-scale structure, which makes it one of the frontier research areas in modern observational cosmology.

    Timeline history of the Universe from the Big Bang (left) to the present day (right). Before the process of reionization, the Universe was completely filled with neutral hydrogen. Only after the formation of the first sources, including the first stars and first galaxies, did the neutral hydrogen in the Universe start to ionize; by about one billion years after the Big Bang, most of the neutral hydrogen in the Universe had been ionized, marking the end of the epoch of reionization. (Image credit: NASA, ESA, A. Fields (STScI).)

    Probing the Epoch of Reionization

    One of the most powerful and practical tools for probing the epoch of reionization is the Lyman-alpha emission test. Lyman-alpha photons are emitted in the n=2 to n=1 transition of neutral hydrogen and have a wavelength of lambda=1215.67 Angstroms. In the presence of neutral hydrogen, Lyman-alpha photons are scattered again and again, and eventually many of them are scattered away from our line of sight. As a result, we expect to see fewer and fewer galaxies with Lyman-alpha emission as we probe higher and higher redshifts (closer to the Big Bang).
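That wavelength follows directly from the Rydberg formula for hydrogen; the short check below uses the standard value of the Rydberg constant (not a number taken from the article):

```python
# 1/lambda = R_H * (1/n1^2 - 1/n2^2) for a hydrogen n2 -> n1 transition
R_H = 1.0967758e7  # Rydberg constant for hydrogen, in m^-1

def hydrogen_line_wavelength_angstrom(n1, n2):
    """Wavelength (in Angstroms) of the photon emitted in an n2 -> n1 transition."""
    inv_lambda_m = R_H * (1.0 / n1**2 - 1.0 / n2**2)
    return 1e10 / inv_lambda_m  # convert metres to Angstroms

lyman_alpha = hydrogen_line_wavelength_angstrom(1, 2)
print(round(lyman_alpha, 1))  # ~1215.7 Angstroms
```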

    To study the epoch of reionization, we did exactly this using a large sample of very distant (high-redshift) galaxy candidates selected from the Hubble Space Telescope (HST) CANDELS survey — the largest galaxy survey ever undertaken using HST. To know the exact distance of a galaxy, it is critical to obtain spectroscopic observations, which we did using a near-infrared spectrograph, MOSFIRE, on the Keck Telescope, located at 13,000 ft atop Mauna Kea, a dormant volcano in Hawaii.

    NASA/ESA Hubble

    Keck MOSFIRE

    Keck Observatory

    To our surprise, we discovered that most of the galaxies we observed did not show Lyman-alpha emission. The figure below shows our results combined with previous studies. It plots the Lyman-alpha equivalent width, the ratio of the strength of a galaxy's Lyman-alpha emission to its underlying blue stellar continuum (non-Lyman-alpha light), as a function of redshift (or age of the Universe, on the top axis), as we probe closer and closer to the Big Bang. As can be seen, there are fewer galaxies, and at the same time the strength of Lyman-alpha emission decreases, as we go to higher redshifts. While a few different effects could produce this, upon careful inspection we think it is likely because the Universe becomes more neutral beyond redshift ~7, and we are witnessing the epoch of reionization in progress.

    This figure shows the evolution of the strength of Lyman-alpha emission in galaxies as we get closer and closer to the Big Bang. As can be seen, the strength of Lyman-alpha emission appears to be decreasing; in other words, we are missing very strong Lyman-alpha-emitting galaxies as we go towards higher redshifts. This is likely a consequence of increasing neutral hydrogen, as expected from theoretical studies (Image credit: Tilvi et al 2014).
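The equivalent width plotted in that figure can be sketched numerically. The toy example below (with made-up flux values, not survey data) integrates the continuum-normalized line excess over wavelength, which is the usual definition of equivalent width:

```python
# Equivalent width: EW = integral of (F(lambda) - F_cont) / F_cont d(lambda).
# A larger EW means a stronger line relative to the underlying continuum.
def equivalent_width(wavelengths, fluxes, continuum):
    """Trapezoidal estimate of equivalent width, in the units of `wavelengths`."""
    ew = 0.0
    for i in range(len(wavelengths) - 1):
        dlam = wavelengths[i + 1] - wavelengths[i]
        excess = (fluxes[i] + fluxes[i + 1] - 2.0 * continuum) / (2.0 * continuum)
        ew += excess * dlam
    return ew

# Toy spectrum: flat continuum of 1.0 with a triangular emission line on top
lam = [1213.0, 1214.0, 1215.67, 1217.0, 1218.0]   # Angstroms (rest frame)
flux = [1.0, 1.0, 6.0, 1.0, 1.0]                  # arbitrary flux units
ew = equivalent_width(lam, flux, continuum=1.0)
print(round(ew, 2))  # 7.5 Angstroms for this toy line
```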

    Currently, Lyman-alpha emission provides the best tool for discovering and confirming very distant galaxies. While a few other emission lines could be used to confirm the distance to a galaxy, they are much weaker than Lyman-alpha. Despite this, we have made quite significant progress in understanding the first billion years of the Universe.

    The figure below summarizes the progress astronomers have made over the past few years in understanding the transition of the Universe from a completely neutral to an ionized phase. Below a redshift of about 6, that is, about 1 billion years after the Big Bang, the Universe is almost completely ionized: only one part in 10,000 of the hydrogen is neutral. At redshifts greater than 6, the Universe becomes more and more neutral. The James Webb Space Telescope (JWST) will be instrumental in discovering galaxies within the first 600 million years and will help us gain even more insight into the details of this crucial epoch.

    NASA James Webb Telescope

    This figure shows the evolution of the neutral hydrogen fraction as a function of redshift (or age of the Universe, shown on the top axis). Only one part in 10,000 is neutral below a redshift of about 6, which implies that the Universe is mostly ionized there; the process of reionization occurred at redshifts greater than six, where the Universe becomes increasingly neutral (Image credit: V. Tilvi).

    See the full article here.

    Please help promote STEM in your local schools.


    About the CANDELS blog

    In late 2009, the Hubble Space Telescope began an ambitious program to map five carefully selected areas of the sky with its sensitive near-infrared camera, the Wide-Field Camera 3. The observations are important for addressing a wide variety of questions, from testing theories for the birth and evolution of galaxies, to refining our understanding of the geometry of the universe.

    This is a research blog written by people involved in the project. We aim to share some of the excitement of working at the scientific frontier, using one of the greatest telescopes ever built. We will also share some of the trials and tribulations of making the project work, from the complications of planning and scheduling the observations to the challenges of trying to understand the data. Along the way, we may comment on trends in astronomy or other such topics.

    CANDELS stands for the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey. It builds on the legacy of the Hubble Deep Field, as well as the wider-area surveys called GOODS, AEGIS, COSMOS, and UKIDSS UDS. The CANDELS observations are designed to search for galaxies within about a billion years of the big bang, study galaxies at cosmic high-noon about 3 billion years after the big bang – when star-formation and black hole growth were at their peak intensity – and discover distant supernovae for refining our understanding of cosmic acceleration. You can find more details, and download the CANDELS data, from the CANDELS website.

    You can also use the Hubble Legacy Archive to view the CANDELS images.

  • richardmitnick 1:57 pm on November 24, 2015

    From phys.org: “Irregular heartbeat of the Sun driven by double dynamo” 


    July 9, 2015
    Dr Robert Massey

    Montage of images of solar activity between August 1991 and September 2001 taken by the Yohkoh Soft X-ray Telecope, showing variation in solar activity during a sunspot cycle. Credit: Yohkoh/ISAS/Lockheed-Martin/NAOJ/U. Tokyo/NASA

    A new model of the Sun’s solar cycle is producing unprecedentedly accurate predictions of irregularities within the Sun’s 11-year heartbeat. The model draws on dynamo effects in two layers of the Sun, one close to the surface and one deep within its convection zone. Predictions from the model suggest that solar activity will fall by 60 per cent during the 2030s to conditions last seen during the ‘mini ice age’ that began in 1645. Results will be presented today by Prof Valentina Zharkova at the National Astronomy Meeting in Llandudno.

    It is 172 years since a scientist first spotted that the Sun’s activity varies over a cycle lasting around 10 to 12 years. But every cycle is a little different, and no model to date has fully explained the fluctuations. Many solar physicists attribute the solar cycle to a dynamo caused by convecting fluid deep within the Sun. Now, Zharkova and her colleagues have found that adding a second dynamo, close to the surface, completes the picture with surprising accuracy.

    “We found magnetic wave components appearing in pairs, originating in two different layers in the Sun’s interior. They both have a frequency of approximately 11 years, although this frequency is slightly different, and they are offset in time. Over the cycle, the waves fluctuate between the northern and southern hemispheres of the Sun. Combining both waves together and comparing to real data for the current solar cycle, we found that our predictions showed an accuracy of 97%,” said Zharkova.
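A minimal sketch of the idea, assuming two pure sinusoids rather than the study's actual dynamo model: two waves whose periods differ slightly around 11 years drift out of phase, and their sum nearly vanishes once the phase difference reaches half a cycle. The periods below are illustrative, not Zharkova's fitted values.

```python
import math

def combined_activity(t_years, p1=10.8, p2=11.2):
    """Sum of two equal-amplitude waves with slightly different periods (years)."""
    return math.sin(2 * math.pi * t_years / p1) + math.sin(2 * math.pi * t_years / p2)

# The waves cancel when their phase difference reaches pi, i.e. after
# t = 0.5 / (1/p1 - 1/p2) years.
t_cancel = 0.5 / (1 / 10.8 - 1 / 11.2)
print(round(t_cancel, 1))                          # 151.2 years for these toy periods
print(round(abs(combined_activity(t_cancel)), 6))  # ~0: the two waves cancel
```

When the waves are in phase their sum doubles, mimicking strong cycles; the "beat" between the two slightly different frequencies is what produces the long-term modulation described in the article.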

    Zharkova and her colleagues derived their model using a technique called principal component analysis of the magnetic field observations from the Wilcox Solar Observatory in California.
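For readers unfamiliar with the technique, principal component analysis extracts the dominant uncorrelated modes of variation from multivariate data. The sketch below (toy data standing in for the Wilcox magnetograms; the study's actual pipeline is far more involved) diagonalizes the covariance matrix:

```python
import numpy as np

def principal_components(data):
    """data: (n_samples, n_features). Returns eigenvalues (descending) and components."""
    centered = data - data.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
    order = np.argsort(eigvals)[::-1]        # reorder to descending
    return eigvals[order], eigvecs[:, order]

# Toy "magnetic field" series dominated by a single ~11-year oscillating mode
t = np.linspace(0.0, 33.0, 400)              # about three solar cycles, in years
mode = np.sin(2 * np.pi * t / 11.0)
noise = 0.01 * np.random.default_rng(0).standard_normal(t.size)
data = np.column_stack([mode, 0.5 * mode + noise])

eigvals, components = principal_components(data)
explained = eigvals[0] / eigvals.sum()
print(explained > 0.99)  # the first component captures nearly all the variance
```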

    Wilcox Solar Observatory

    They examined three solar cycles’ worth of magnetic field activity, covering the period 1976–2008. In addition, they compared their predictions to average sunspot numbers, another strong marker of solar activity. The predictions and observations matched closely.

    Looking ahead to the next solar cycles, the model predicts that the pair of waves will become increasingly offset during Cycle 25, which peaks in 2022. During Cycle 26, which covers the decade from 2030 to 2040, the two waves will fall exactly out of sync, causing a significant reduction in solar activity.

    Comparison of three images taken more than four years apart illustrates how the level of solar activity rose from near minimum to near maximum over the Sun’s 11-year solar cycle. Credit: SOHO/ESA/NASA

    “In cycle 26, the two waves exactly mirror each other – peaking at the same time but in opposite hemispheres of the Sun. Their interaction will be disruptive, or they will nearly cancel each other. We predict that this will lead to the properties of a ‘Maunder minimum’,” said Zharkova. “Effectively, when the waves are approximately in phase, they can show strong interaction, or resonance, and we have strong solar activity. When they are out of phase, we have solar minimums. When there is full phase separation, we have the conditions last seen during the Maunder minimum, 370 years ago.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Education Coalition

    About Phys.org in 100 Words

    Phys.org™ (formerly Physorg.com) is a leading web-based science, research and technology news service which covers a full range of topics. These include physics, earth science, medicine, nanotechnology, electronics, space, biology, chemistry, computer sciences, engineering, mathematics and other sciences and technologies. Launched in 2004, Phys.org’s readership has grown steadily to include 1.75 million scientists, researchers, and engineers every month. Phys.org publishes approximately 100 quality articles every day, offering some of the most comprehensive coverage of sci-tech developments world-wide. Quantcast 2009 includes Phys.org in its list of the Global Top 2,000 Websites. Phys.org community members enjoy access to many personalized features such as social networking, a personal home page set-up, RSS/XML feeds, article comments and ranking, the ability to save favorite articles, a daily newsletter, and other options.

  • richardmitnick 1:43 pm on November 24, 2015

    From Sky and Telescope: “Mystery Signal from a Black Hole-Powered Jet” 

    Sky & Telescope

    November 23, 2015
    Monica Young

    This artist’s concept shows a supermassive black hole shooting out a jet of plasma headed almost straight for Earth. In the telescope, though, this object would appear as a (usually) randomly flickering point of light. NASA / JPL-Caltech

    Observing a blazar is a little like standing beneath a relativistic waterfall. Look up: that flickering point of light is a head-on view of the powerful plasma jet shooting out from a supermassive black hole.

    The free-flying electrons within that mess of plasma twirl at almost light speed around magnetic fields, and they radiate across the electromagnetic spectrum, often drowning out any other forms of emission. We might even see a sudden outburst when turbulence, a sudden influx of plasma, or some other force roils the jet.

    But when Markus Ackermann (DESY, Germany) and colleagues pored over almost seven years of data collected with the Fermi Gamma-Ray Space Telescope, they saw something completely unexpected: a regular signal coming from a blazar. Gamma rays from PG 1553+113 seem to brighten roughly every 2.2 years, with three complete cycles captured so far.

    NASA Fermi Telescope

    Moreover, other wavelengths seem to echo this cycle. Inspired by the gamma-ray find, Ackermann’s team sought out radio and optical measurements from blazar-monitoring campaigns — and both wavelengths show hints of the same periodic signal. The team also looked at X-ray data collected over the years by the Swift and Rossi X-ray Timing Explorer spacecraft, but there weren’t enough data points for a proper analysis.

    NASA SWIFT Telescope


    The results are published in the November 10th Astrophysical Journal Letters.

    This light curve shows how the brightness of blazar PG 1553+113 varies for gamma rays with more than 100 million electron volts of energy. The plot, which includes data from August 4, 2008, to July 19, 2015, displays three complete cycles of an apparently regular, roughly 2-year cycle. M. Ackermann & others / Astrophysical Journal Letters

    If this signal is real, it has to come from the black hole-powered jet, and the authors explore a number of explanations.

    For example, the jet might be precessing or rotating, sweeping its beam past Earth every 2 years or so. Or perhaps a strong magnetic field chokes the flow of gas toward the black hole, creating instabilities that then regularly flood the jet with material. The most intriguing prospect is another supermassive black hole in the system, its presence affecting gas flow and jet alignment.

    At this point, though, the authors admit they don’t have enough data to distinguish between these possibilities. Further monitoring might remedy that.
    Keep Watching

    “I am always skeptical about claims of periodicity based on only 2 to 3 cycles,” says Alan Marscher (Boston University), a blazar expert not involved in the study. Even completely random processes, he adds, can create apparently regular signals over short periods of time.

    These light curves compare how the blazar varies in X-rays (top panel), optical (middle), and radio waves (bottom). Though there aren’t enough X-ray data to track the regular variation seen in gamma rays, the optical and radio data seem to echo the gamma-ray cycle, which is shown as a dotted line in the middle panel. M. Ackermann & others / Astrophysical Journal Letters

    And Ackermann’s team is frank about the data’s limits. After all, blazars are known to flare randomly and, due to the length of the suspected cycle, only three complete periods have been captured so far. The authors estimate a few percent probability that this signal is indeed a chance alignment of random flares.

    Still, the fact that the signal is observed across radio, optical, and gamma rays strengthens the case. “Seeing such well-correlated oscillations across the different wavebands isn’t as common as simple models would expect,” Marscher notes.

    “It’s worth keeping an eye on this object.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Education Coalition

    Sky & Telescope magazine, founded in 1941 by Charles A. Federer Jr. and Helen Spence Federer, has the largest, most experienced staff of any astronomy magazine in the world. Its editors are virtually all amateur or professional astronomers, and every one has built a telescope, written a book, done original research, developed a new product, or otherwise distinguished him or herself.

    Sky & Telescope magazine, now in its eighth decade, came about because of some happy accidents. Its earliest known ancestor was a four-page bulletin called The Amateur Astronomer, which was begun in 1929 by the Amateur Astronomers Association in New York City. Then, in 1935, the American Museum of Natural History opened its Hayden Planetarium and began to issue a monthly bulletin that became a full-size magazine called The Sky within a year. Under the editorship of Hans Christian Adamson, The Sky featured large illustrations and articles from astronomers all over the globe. It immediately absorbed The Amateur Astronomer.

    Despite initial success, by 1939 the planetarium found itself unable to continue financial support of The Sky. Charles A. Federer, who would become the dominant force behind Sky & Telescope, was then working as a lecturer at the planetarium. He was asked to take over publishing The Sky. Federer agreed and started an independent publishing corporation in New York.

    “Our first issue came out in January 1940,” he noted. “We dropped from 32 to 24 pages, used cheaper quality paper…but editorially we further defined the departments and tried to squeeze as much information as possible between the covers.” Federer was The Sky’s editor, and his wife, Helen, served as managing editor. In that January 1940 issue, they stated their goal: “We shall try to make the magazine meet the needs of amateur astronomy, so that amateur astronomers will come to regard it as essential to their pursuit, and professionals to consider it a worthwhile medium in which to bring their work before the public.”

  • richardmitnick 1:20 pm on November 24, 2015 Permalink | Reply
    Tags: , ,   

    From MIT: “A new way to make X-rays” 

    MIT News

    November 23, 2015
    David L. Chandler

    MIT researchers have found a phenomenon that might lead to more compact, tunable X-ray devices made of graphene.

    By using plasmons to “wiggle” a free electron in a sheet of graphene, researchers have developed a new method for generating X-rays. In this image of one of their simulations, the color and height represent the intensity of radiation (with blue the lowest intensity and red the highest), at a moment in time just after an electron (grey sphere) moving close to the surface generates a pulse. Courtesy of the researchers

    The most widely used technology for producing X-rays – used in everything from medical and dental imaging to testing for cracks in industrial materials – has remained essentially the same for more than a century. But a new analysis by researchers at MIT suggests that could change in the next few years.

    The finding, based on a new theory backed by exact simulations, shows that a sheet of graphene – a two-dimensional form of pure carbon – could be used to generate surface waves called plasmons when the sheet is struck by photons from a laser beam. These plasmons in turn could be triggered to generate a sharp pulse of radiation, tuned to wavelengths anywhere from infrared light to X-rays.

    What’s more, the radiation produced by the system would be of a uniform wavelength and tightly aligned, similar to that from a laser beam. The team says this could potentially enable lower-dose X-ray systems in the future, making them safer. The new work is reported this week in the journal Nature Photonics, in a paper by MIT professors Marin Soljačić and John Joannopoulos and postdocs Ido Kaminer, Liang Jie Wong (now at the Singapore Institute of Manufacturing Technology), and Ognjen Ilic.

    Soljačić says that there is growing interest in finding new ways of generating sources of light, especially at scales that could be incorporated into microchips or that could reduce the size and cost of the high-intensity beams used for basic scientific and biomedical research. Of all the wavelengths of electromagnetic radiation commonly used for applications, he says, “coherent X-rays are particularly hard to create.” They also have the highest energy. The new system could, in principle, create ultraviolet light sources on a chip and table-top X-ray devices that could produce the sorts of beams that now require huge, multimillion-dollar particle accelerators.

    To make focused, high-power X-ray beams, “the usual approach is to create high-energy charged particles [using an accelerator] and ‘wiggle’ them,” says Kaminer. “The oscillations will produce X-rays. But that approach is very expensive,” and the few facilities available nationwide that can produce such beams are highly oversubscribed. “The dream of the community is to make them small and inexpensive,” he says.

    Most sources of X-rays rely on extremely high-energy electrons, which are hard to produce. But the new method gets around that, using the tightly confined power of the wave-like plasmons that are produced when a specially patterned sheet of graphene is hit by photons from a laser beam. These plasmons can then release their energy in a tight beam of X-rays when triggered by a pulse from a conventional electron gun similar to those found in electron microscopes.

    “The reason this is unique is that we’re substantially bypassing the problem of accelerating the electrons,” he says. “Every other approach involves accelerating the electrons. This is unique in producing X-rays from low-energy electrons.”

    In addition, the system would be unique in its tunability, able to deliver beams of single-wavelength light all the way from infrared, through visible light and ultraviolet, on into X-rays. And there are three different inputs that can be used to control the tuning of the output, Kaminer explains – the frequency of the laser beam to initiate the plasmons, the energy of the triggering electron beam, and the “doping” of the graphene sheet.

    Such beams could have applications in crystallography, the team says, which is used in many scientific fields to determine the precise atomic structure of molecules. Because of its tight, narrow beam, the system might also allow more precise pinpointing of medical and dental X-rays, thus potentially reducing the radiation dose received by a patient, they say.

    So far, the work is theoretical, based on precise simulations, but the group’s simulations in the past have tended to match quite well with experimental results, Soljačić says. “We have the ability in our field to model these phenomena very exactly.”

    They are now in the process of building a device to test the system in the lab, starting initially with producing ultraviolet sources and working up to the higher-energy X-rays. “We hope to have solid confirmation of the principles within a year, and X-rays, if that goes well, optimistically within three years,” Soljačić says.

    But as with any drastically new technology, he acknowledges, the devil is in the details, and unexpected issues could crop up. So his estimate of when a practical X-ray device could emerge from this, he says with a smile, is “from three years, to never.”

    Hrvoje Buljan, a professor of physics at the University of Zagreb in Croatia, who was not involved in this study, says the work provides “a significant new approach to produce X-ray radiation.” He adds, “The experimental implementation still needs to be performed. Based on the proposal, all of the ingredients for the proof of principle experiments are there, and such experiments will be feasible.”

    The work was supported by the U.S. Army Research Laboratory and the U.S. Army Research Office, through the Institute for Soldier Nanotechnologies, by the Science and Engineering Research Council, A*STAR, Singapore, and by the European Research Council Marie Curie IOF grant.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

  • richardmitnick 1:07 pm on November 24, 2015 Permalink | Reply
    Tags: , , ,   

    From Symmetry: “Charge-parity violation” 


    Photo by Reidar Hahn, Fermilab with Sandbox Studio, Chicago

    Matter and antimatter behave differently. Scientists hope that investigating how might someday explain why we exist.

    One of the great puzzles for scientists is why there is more matter than antimatter in the universe—the reason we exist.

    It turns out that the answer to this question is deeply connected to the breaking of fundamental conservation laws of particle physics. The discovery of these violations has a rich history, dating back to 1956.

    Parity violation

    It all began with a study led by scientist Chien-Shiung Wu of Columbia University. She and her team were studying the decay of cobalt-60, an unstable isotope of the element cobalt. Cobalt-60 decays into another isotope, nickel-60, and in the process, it emits an electron and an electron antineutrino. The nickel-60 isotope then decays into a pair of photons.

    The conservation law being tested was parity conservation, which states that the laws of physics shouldn’t change when all the signs of a particle’s spatial coordinates are flipped. The experiment observed the decay of cobalt-60 in two arrangements that mirrored one another.
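The parity operation the experiment probed can be sketched numerically. The following toy illustration (not part of the original experiment; the position and momentum values are arbitrary) shows why comparing electron emission directions to the nuclear spin axis is a parity test: momenta flip in the mirror, but an angular momentum such as spin does not.

```python
import numpy as np

# Toy illustration: under parity every spatial coordinate flips sign,
# so position r -> -r and momentum p -> -p. An angular momentum
# L = r x p picks up two sign flips and is therefore UNCHANGED -- it is
# an axial vector, like the cobalt-60 nuclear spin in Wu's experiment.
r = np.array([1.0, 2.0, 3.0])      # hypothetical position (arbitrary units)
p = np.array([-0.5, 0.25, 1.0])    # hypothetical momentum

L = np.cross(r, p)                 # angular momentum before the "mirror"
L_mirror = np.cross(-r, -p)        # ...and after the parity flip

emission = p / np.linalg.norm(p)   # electron emission direction
emission_mirror = -emission        # parity reverses emission directions

# The spin axis survives the mirror while emission directions flip, so a
# preferred emission direction relative to spin -- what Wu observed --
# cannot look identical in both arrangements.
print(np.allclose(L, L_mirror))    # True: L is unchanged under parity
```
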

    The release of photons in the decay is an electromagnetic process, and electromagnetic processes had been shown to conserve parity. But the release of the electron and electron antineutrino is a radioactive decay process, mediated by the weak force. Such processes had not been tested in this way before.

    Parity conservation dictated that, in this experiment, the electrons should be emitted in the same direction and in the same proportion as the photons.

    But Wu and her team found just the opposite to be true. This meant that nature was playing favorites. Parity, or P symmetry, had been violated.

    Two theorists, Tsung Dao Lee and Chen Ning Yang, who had suggested testing parity in this way, shared the 1957 Nobel Prize in physics for the discovery.

    Charge-parity violation

    Many scientists were flummoxed by the discovery of parity violation, says Ulrich Nierste, a theoretical physicist at the Karlsruhe Institute of Technology in Germany.

    “Physicists then began to think that they may have been looking at the wrong symmetry all along,” he says.

    The finding had ripple effects. For one, scientists learned that another symmetry they thought was fundamental—charge conjugation, or C symmetry—must be violated as well.

    Charge conjugation is a symmetry between particles and their antiparticles. When applied to particles with a property called spin, like quarks and electrons, the C and P transformations are in conflict with each other.

    Physicists then began to think that they may have been looking at the wrong symmetry all along.

    This means that neither can be a good symmetry if one of them is violated. But, scientists thought, the combination of the two—called CP symmetry—might still be conserved. If that were the case, there would at least be a symmetry between the behavior of particles and their oppositely charged antimatter partners.

    Alas, this also was not meant to be. In 1964, a research group led by James Cronin and Val Fitch discovered in an experiment at Brookhaven National Laboratory that CP is violated, too.

    The team studied the decay of neutral kaons into pions; both are composite particles made of a quark and antiquark. Neutral kaons come in two versions that have different lifetimes: a short-lived one that primarily decays into two pions and a long-lived relative that prefers to leave three pions behind.

    However, Cronin, Fitch and their colleagues found that, rarely, long-lived kaons also decayed into two instead of three pions, which required CP symmetry to be broken.

    The discovery of CP violation was recognized with the 1980 Nobel Prize in physics. And it led to even more discoveries.

    It prompted theorists Makoto Kobayashi and Toshihide Maskawa to predict in 1973 the existence of a new generation of elementary particles. At the time, only two generations were known. Within a few years, experiments at SLAC National Accelerator Laboratory found the tau particle—the third generation of a group including electrons and muons. Scientists at Fermi National Accelerator Laboratory later discovered a third generation of quarks—bottom and top quarks.

    Digging further into CP violation

    In the late 1990s, scientists at Fermilab and the European laboratory CERN found more evidence of CP violation in decays of neutral kaons. And starting in 1999, the BaBar experiment at SLAC and the Belle experiment at KEK in Japan began to look into CP violation in decays of composite particles called B mesons.

    By analyzing dozens of different types of B meson decays, scientists on BaBar and Belle revealed small differences in the way B mesons and their antiparticles fall apart. The results matched the predictions of Kobayashi and Maskawa, and in 2008 their work was recognized with one half of the physics Nobel Prize.

    “But checking if the experimental data agree with the theory was only one of our goals,” says BaBar spokesperson Michael Roney of the University of Victoria in Canada. “We also wanted to find out if there is more to CP violation than we know.”

    This is because these experiments are seeking to answer a big question: Why are we here?

    When the universe formed in the big bang 14 billion years ago, it should have generated matter and antimatter in equal amounts. If nature treated both exactly the same way, matter and antimatter would have annihilated each other, leaving nothing behind but energy.

    And yet, our matter-dominated universe exists.

    CP violation is essential to explain this imbalance. However, the amount of CP violation observed in particle physics experiments so far is a million to a billion times too small.

    Current and future studies

    Recently, BaBar and Belle combined their data treasure troves in a joint analysis (1). It revealed for the first time CP violation in a class of B meson decays that each experiment couldn’t have analyzed alone due to limited statistics.

    This and all other studies to date are in full agreement with the standard theory. But researchers are far from giving up hope on finding unexpected behaviors in processes governed by CP violation.

    The future Belle II, currently under construction at KEK, will produce B mesons at a much higher rate than its predecessor, enabling future CP violation studies with higher precision.

    And the LHCb experiment at CERN’s Large Hadron Collider is continuing studies of B mesons, including heavier ones that were only rarely produced in the BaBar and Belle experiments. The experiment will be upgraded in the future to collect data at 10 times the current rate.

    To date, CP violation has been observed only in particles made of quarks, such as the kaons and B mesons described above.

    “We know that the types of CP violation already seen using some quark decays cannot explain matter’s dominance in the universe,” says LHCb collaboration member Sheldon Stone of Syracuse University. “So the question is: Where else could we possibly find CP violation?”

    One place for it to hide could be in the decay of the Higgs boson. Another place to look for CP violation is in the behavior of elementary leptons—electrons, muons, taus and their associated neutrinos. It could also appear in different kinds of quark decays.

    “To explain the evolution of the universe, we would need a large amount of extra CP violation,” Nierste says. “It’s possible that this mechanism involves unknown particles so heavy that we’ll never be able to create them on Earth.”

    Such heavyweights would have been produced last in the very early universe and could be related to the lack of antimatter in the universe today. Researchers search for CP violation in much lighter neutrinos, which could give us a glimpse of a possible large violation at high masses.

    The search continues.

    1. First observation of CP violation in B0 → D(*)CP h0 decays by a combined time-dependent analysis of BaBar and Belle data.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.

  • richardmitnick 12:32 pm on November 24, 2015 Permalink | Reply
    Tags: , , , ,   

    From ESA: “Tracking new missions from down under” 

    European Space Agency

    A new 4.5 m-diameter ‘acquisition aid’ dish antenna is being added to ESA’s existing New Norcia, Western Australia, tracking station, ready to catch the first signals from newly launched missions. The new antenna will allow acquisition and tracking during the critical initial orbits of new missions (see Liftoff: ESOC assumes control), up to roughly 100 000 km range. It can also ‘slave’ the much larger 35m dish, which can then be used to retrieve ranging data and telemetry signals – on-board status information – from the newly launched spacecraft.

    24 November 2015

    For beachgoers, Australia’s pristine west coast is an ideal location to catch some rays. It is also ideal for catching signals from newly launched rockets and satellites, which is one reason why ESA is redeveloping its tracking capabilities down under.

    When rockets and their satellites leap into the sky from Europe’s Spaceport in Kourou, French Guiana, they typically head east across the Atlantic, rising higher and faster with every second.

    Some 50 minutes after launch, the new mission can be seen from Western Australia, rising up from the Indian Ocean horizon and then arcing high in the sky, already in space.

    By the time the satellite, travelling at some 28 000 km/h, separates to start its life in orbit, it will already be in radio range of the land down under.
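The quoted speed is roughly the circular orbital velocity for a low-Earth orbit, which can be verified with a back-of-envelope calculation (the 500 km altitude below is an assumption for illustration, not a figure from the article):

```python
import math

# Sanity check of the "some 28 000 km/h" figure: circular orbital speed
# is v = sqrt(G * M_earth / r), with r measured from Earth's center.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of Earth, kg
R_EARTH = 6.371e6    # mean radius of Earth, m
ALTITUDE = 500e3     # assumed low-Earth-orbit altitude, m

v_ms = math.sqrt(G * M_EARTH / (R_EARTH + ALTITUDE))
v_kmh = v_ms * 3.6
print(f"circular orbital speed: {v_kmh:,.0f} km/h")  # roughly 27 000-28 000
```
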

    ESA’s New Norcia station, DSA 1 (Deep Space Antenna 1), hosts a 35-metre deep-space antenna (NNO-1) together with a new 4.5-metre ‘acquisition aid’ antenna (NNO-2). It is located 140 kilometres north of Perth, Western Australia, close to the town of New Norcia. The large dish is designed for communicating with deep-space missions and provides support to spacecraft such as Mars Express, Rosetta and Gaia for routine operations. The small dish allows acquisition and tracking during the critical initial orbits of new missions, up to roughly 100 000 km range.

    By early next year, a new radio dish will be working at ESA’s existing tracking station at New Norcia, Western Australia, ready to catch the first signals from new missions.

    New Norcia currently has a large, 35 m-diameter dish for tracking deep-space missions such as Rosetta, Mars Express and Gaia, typically voyaging in the Solar System several hundred million km away.

    ESA Rosetta spacecraft

    ESA Mars Express Orbiter
    ESA Gaia satellite

    Its size and technology are not ideal, however, for initial signalling to new satellites in low-Earth orbit.

    In contrast, the new dish, just 4.5 m across, will lock onto and track new satellites during the critical initial orbits (see Liftoff: ESOC assumes control), up to roughly 100 000 km out.

    It can also ‘slave’ the much larger dish, which can then receive ranging data and telemetry – onboard status information – from the new spacecraft.

    Mission control team watch liftoff from the Main Control Room at ESOC, ESA’s European Space Operations Centre, Darmstadt, 15 July 2015

    “For satellite signals, the new dish has a wider field of view than the 35 m antenna,” says Gunther Sessler, ESA’s project manager, “and can grab the signal even when the new satellite’s position is not precisely known.

    “It also offers rapid sky searches in case the satellite’s position after separation is completely unknown, which can happen if the rocket over- or under-performs.”

    In addition to satellites, the new antenna can also track rockets, including Ariane 5, Vega and Soyuz.

    The upgrade was prompted by the need to move the capability that, so far, has been provided by the ESA tracking station at Perth, some 140 km south of New Norcia.

    ESA’s Perth station hosts a 15-metre antenna with transmission and reception in both S- and X-band and provides routine support for XMM-Newton and Cluster, as well as other missions during their Launch and Early Orbit Phase (LEOP). It is located 20 km north of Perth on the campus of the Perth International Telecommunications Centre (PITC).

    That station’s location has become increasingly untenable through urban sprawl and radio interference from TV broadcast vans.

    The upgrade ensures that ESA’s Estrack tracking network can continue providing crucial satellite services along the most-used trajectories.

    “With the closing of Perth station, ESA would have lost its capability in Western Australia, which is a critical location for most European missions,” says Manfred Lugert, ground facilities manager at ESA’s operations centre in Darmstadt, Germany.

    The antenna was designed for low maintenance and operating costs and can go into hibernation when it is not needed between launches.

    Perth station will remain in operation until the end of 2015, when it will be dismantled and many of its components reused at other ESA stations.

    Once testing is completed, the dish will enter service in early 2016 in time for Galileo navsat launches and the first ExoMars mission, in March.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The European Space Agency (ESA), established in 1975, is an intergovernmental organization dedicated to the exploration of space, currently with 19 member states. Headquartered in Paris, ESA has a staff of more than 2,000. ESA’s space flight program includes human spaceflight, mainly through the participation in the International Space Station program, the launch and operations of unmanned exploration missions to other planets and the Moon, Earth observation, science, telecommunication as well as maintaining a major spaceport, the Guiana Space Centre at Kourou, French Guiana, and designing launch vehicles. ESA science missions are based at ESTEC in Noordwijk, Netherlands, Earth Observation missions at ESRIN in Frascati, Italy, ESA Mission Control (ESOC) is in Darmstadt, Germany, the European Astronaut Centre (EAC) that trains astronauts for future missions is situated in Cologne, Germany, and the European Space Astronomy Centre is located in Villanueva de la Cañada, Spain.

    ESA50 Logo large

  • richardmitnick 2:02 am on November 24, 2015 Permalink | Reply
    Tags: , , , Our expanding universe   

    From CAASTRO: “Large scale galaxy motions match expectations for dark matter” 

    CAASTRO bloc

    CAASTRO ARC Centre of Excellence for All Sky Astrophysics

    For nearly a century, astronomers have known that the universe is expanding – most galaxies are moving away from each other. When we measure the motion of a distant galaxy, the overall expansion of the universe is, in most cases, by far the dominant contributor to that galaxy’s movement. However, astronomers have long been fascinated by a secondary contributor to galaxy motions: the gravitational attraction of nearby matter. By studying the motions of galaxies, we can measure the distribution of all matter in the nearby universe, including dark matter.

    One important statistic that can be used to understand the large scale motions of galaxies is the “bulk flow”. It is the average motion of all galaxies within a large region of the universe. The faster the bulk flow, the stronger the gravitational attraction of nearby matter on large scales. Two studies of galaxy motions presented this month by researchers in the CAASTRO Dark Universe research theme show that this bulk flow is consistent with our expectations.
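The statistic itself is simple to illustrate. The sketch below is a toy example, not the CAASTRO analysis (real measurements weight galaxies by survey geometry and distance errors): the bulk flow is the vector average of galaxy peculiar velocities over a volume, and the velocities here are randomly generated with an assumed common streaming motion.

```python
import numpy as np

# Toy bulk-flow estimate: hypothetical peculiar velocities (km/s) for
# 1000 galaxies -- random scatter plus a common 250 km/s streaming
# motion along the x-axis, standing in for a large-scale attractor.
rng = np.random.default_rng(42)
peculiar_v = rng.normal(0.0, 300.0, size=(1000, 3))
peculiar_v[:, 0] += 250.0

bulk_flow = peculiar_v.mean(axis=0)    # unweighted bulk-flow vector
amplitude = np.linalg.norm(bulk_flow)  # its magnitude in km/s
print(f"bulk flow amplitude: {amplitude:.0f} km/s")
```

The recovered amplitude lands near the injected 250 km/s, comparable in scale to the 243 +/- 58 km/s measured by the 6dF team.
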

    Dr Morag Scrimgeour, a former CAASTRO PhD student at ICRAR-UWA who was awarded the Charlene Heisler Prize of the Astronomical Society of Australia (ASA) for her thesis, and the 6dF Galaxy Survey team measured the bulk flow of galaxies in the 6dF Galaxy Survey. They found that the bulk flow of galaxies in the southern sky out to a depth of 300 million light-years is 243 +/- 58 km/s. This is within the range of theoretical predictions from the standard model of the universe, albeit at the high end of that range. The analysis concluded that the galaxies in this large volume are collectively moving in a direction roughly towards the Shapley Supercluster, an extremely massive supercluster of galaxies about 600 million light-years away.


    This study shows that the bulk flow is consistent with theoretical expectations drawn from the “standard model” of the universe, and another study from the CAASTRO Dark Universe research theme now suggests that it is also consistent with the distribution of the galaxies we see in the nearby universe. Dr Christopher Springob (ICRAR-UWA) and collaborators have taken the bulk flow as measured for the 2MASS Tully-Fisher Survey and compared it to what we would expect if dark matter were distributed across the nearby universe in the same pattern we observe for the galaxies themselves.

    2MASS telescope

    Again, the model and our measurements are largely in agreement. Assuming that dark matter is more heavily concentrated wherever galaxies are more heavily concentrated (such as the Shapley Supercluster) gives a prediction for the bulk flow that is consistent with what we actually observe.

    Shapley Supercluster. Image: Richard Powell

    Publication details:
    Christopher Springob, Tao Hong, Lister Staveley-Smith et al., “2MTF V: Cosmography, Beta, and the residual bulk flow”, MNRAS (2015)

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Astronomy is entering a golden age, in which we seek to understand the complete evolution of the Universe and its constituents. But the key unsolved questions in astronomy demand entirely new approaches that require enormous data sets covering the entire sky.

    In the last few years, Australia has invested more than $400 million both in innovative wide-field telescopes and in the powerful computers needed to process the resulting torrents of data. Using these new tools, Australia now has the chance to establish itself at the vanguard of the upcoming information revolution centred on all-sky astrophysics.

    CAASTRO has assembled the world-class team who will now lead the flagship scientific experiments on these new wide-field facilities. We will deliver transformational new science by bringing together unique expertise in radio astronomy, optical astronomy, theoretical astrophysics and computation and by coupling all these capabilities to the powerful technology in which Australia has recently invested.


    The University of Sydney
    The University of Western Australia
    The University of Melbourne
    Swinburne University of Technology
    The Australian National University
    Curtin University
    University of Queensland

  • richardmitnick 1:46 am on November 24, 2015 Permalink | Reply
    Tags: , , ,   

    From JPL-Caltech: “NEOWISE Identifies Greenhouse Gases in Comets” 


    November 23, 2015
    DC Agle
    Jet Propulsion Laboratory, Pasadena, Calif.

    An infrared view from NASA’s NEOWISE mission of the Oort cloud comet C/2006 W3 (Christensen). The spacecraft observed the comet on April 20, 2010, as it traveled through the constellation Sagittarius; Comet Christensen was nearly 370 million miles (600 million kilometers) from Earth at the time. The image spans half a degree of sky on each side. Infrared light in the 3.4, 12 and 22 micron channels is mapped to blue, green and red, respectively. The signal at these wavelengths is dominated by thermal emission from the comet’s dust, giving it a golden hue.

    After its launch in 2009, NASA’s NEOWISE spacecraft observed 163 comets during the WISE/NEOWISE prime mission. This sample from the space telescope represents the largest infrared survey of comets to date. Data from the survey are giving new insights into the dust, comet nucleus sizes, and production rates for difficult-to-observe gases like carbon dioxide and carbon monoxide. Results of the NEOWISE census of comets were recently published in the Astrophysical Journal.

    Carbon monoxide (CO) and carbon dioxide (CO2) are common molecules in comets and in the environment of the early solar system. In most circumstances, the sublimation of water ice likely drives a comet’s activity when it comes nearest the sun, but at larger distances and colder temperatures other common molecules like CO and CO2 may be the main drivers. Carbon dioxide and carbon monoxide in space are difficult to detect directly from the ground because their abundance in Earth’s own atmosphere obscures the signal. The NEOWISE spacecraft soars high above Earth’s atmosphere, making these measurements of a comet’s gas emissions possible.

    “This is the first time we’ve seen such large statistical evidence of carbon monoxide taking over as a comet’s gas of choice when they are farther out from the sun,” said James Bauer, deputy principal investigator of the NEOWISE mission from NASA’s Jet Propulsion Laboratory in Pasadena, California, and author of a paper on the subject. “By emitting what is likely mostly carbon monoxide beyond four astronomical units (4 times the Earth-Sun distance; about 370 million miles, 600 million kilometers) it shows us that comets may have stored most of the gases when they formed, and secured them over billions of years. Most of the comets that we observed as active beyond 4 AU are long-period comets, comets with orbital periods greater than 200 years that spend most of their time beyond Neptune’s orbit.”
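The distance conversion quoted in the parentheses is easy to sanity-check using the standard value of the astronomical unit:

```python
AU_KM = 149_597_870.7      # one astronomical unit in kilometers (IAU 2012)
MILES_PER_KM = 0.621_371

d_km = 4 * AU_KM           # 4 AU in kilometers
d_miles = d_km * MILES_PER_KM

print(f"{d_km/1e6:.0f} million km, {d_miles/1e6:.0f} million miles")
# about 598 million km and 372 million miles, matching the quoted
# "about 370 million miles, 600 million kilometers"
```
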

    While the amount of carbon monoxide and dioxide increases relative to ejected dust as a comet gets closer to the sun, the percentage of these two gases, when compared to other volatile gases, decreases.

    “As they get closer to the sun, these comets seem to produce a prodigious amount of carbon dioxide,” said Bauer. “Your average comet sampled by NEOWISE would expel enough carbon dioxide to provide the bubble power for thousands of cans of soda per second.”

    The pre-print version of this paper is available at: http://arxiv.org/abs/1509.08446

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NASA JPL Campus

    Jet Propulsion Laboratory (JPL) is a federally funded research and development center and NASA field center located in the San Gabriel Valley area of Los Angeles County, California, United States. Although the facility has a Pasadena postal address, it is actually headquartered in the city of La Cañada Flintridge, on the northwest border of Pasadena. JPL is managed by the nearby California Institute of Technology (Caltech) for the National Aeronautics and Space Administration. The Laboratory’s primary function is the construction and operation of robotic planetary spacecraft, though it also conducts Earth-orbit and astronomy missions. It is also responsible for operating NASA’s Deep Space Network.

    Caltech Logo
