Tagged: Scientific American

  • richardmitnick 4:47 pm on April 30, 2019 Permalink | Reply
    Tags: "Cosmology Has Some Big Problems", New measurements of the Hubble constant (the rate of universal expansion) suggested major differences between two independent methods of calculation, Scientific American

    From Scientific American: “Cosmology Has Some Big Problems” 


    April 30, 2019
    Bjørn Ekeberg

    The field relies on a conceptual framework that has trouble accounting for new observations.

    Credit: Thanapol Sisrang, Getty Images

    What do we really know about our universe?

    Born out of a cosmic explosion 13.8 billion years ago, the universe rapidly inflated and then cooled. It is still expanding at an increasing rate, and it is mostly made up of unknown dark matter and dark energy … right?

    This well-known story is usually taken as a self-evident scientific fact, despite the relative lack of empirical evidence—and despite a steady crop of discrepancies arising with observations of the distant universe.

    In recent months, new measurements of the Hubble constant, the rate of universal expansion, suggested major differences between two independent methods of calculation. Discrepancies in the expansion rate have huge implications not simply for calculation but for the validity of cosmology’s current standard model at the extreme scales of the cosmos.
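    To make the scale of that disagreement concrete, here is a rough arithmetic sketch. The two values below are assumed, approximate 2019 figures (the Planck CMB fit versus the SH0ES distance-ladder measurement); they are illustrative and not quoted in the article itself:

    ```python
    # Illustrative Hubble-tension arithmetic with assumed, approximate 2019 values.
    h0_early = 67.4  # km/s/Mpc, inferred from the early universe (Planck CMB fit)
    h0_late = 74.0   # km/s/Mpc, measured in the late universe (SH0ES distance ladder)

    discrepancy_pct = (h0_late - h0_early) / h0_early * 100
    print(f"disagreement between the two methods: {discrepancy_pct:.1f}%")
    ```

    A gap of roughly 10 percent, larger than the stated uncertainty of either method, which is why it cannot simply be waved away as measurement noise.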

    Another recent probe found galaxies inconsistent with the theory of dark matter, which posits this hypothetical substance to be everywhere. But according to the latest measurements, it is not, suggesting the theory needs to be reexamined.

    It’s perhaps worth stopping to ask why astrophysicists hypothesize dark matter to be everywhere in the universe. The answer lies in a peculiar feature of cosmological physics that is not often remarked upon. A crucial function of theories such as dark matter, dark energy and inflation, each of which is tied in its own way to the big bang paradigm, is not to describe known empirical phenomena but rather to maintain the mathematical coherence of the framework itself while accounting for discrepant observations. Fundamentally, they are names for something that must exist insofar as the framework is assumed to be universally valid.

    Each new discrepancy between observation and theory can of course in and of itself be considered an exciting promise of more research, a progressive refinement toward the truth. But when it adds up, it could also suggest a more confounding problem that is not resolved by tweaking parameters or adding new variables.

    Consider the context of the problem and its history. As a mathematically driven science, cosmological physics is usually thought to be extremely precise. But the cosmos is unlike any scientific subject matter on earth. A theory of the entire universe, based on our own tiny neighborhood as the only known sample of it, requires a lot of simplifying assumptions. When these assumptions are multiplied and stretched across vast distances, the potential for error increases, and this is further compounded by our very limited means of testing.

    Historically, Newton’s physical laws made up a theoretical framework that worked for our own solar system with remarkable precision. Both Uranus and Neptune, for example, were discovered through predictions based on Newton’s model. But as the scales grew larger, its validity proved limited. Einstein’s general relativity framework provided an extended and more precise reach beyond the furthest reaches of our own galaxy. But just how far could it go?

    The big bang paradigm that emerged in the mid-20th century effectively stretches the model’s validity to a kind of infinity, defined either as the boundary of the radius of the universe (calculated at 46 billion light-years) or in terms of the beginning of time. This giant stretch is based on a few concrete discoveries, such as Edwin Hubble’s observation that the universe appears to be expanding (in 1929) and the detection of the microwave background radiation (in 1964).

    The 15 meter Holmdel horn antenna at Bell Telephone Laboratories in Holmdel, New Jersey was built in 1959 for pioneering work in communication satellites for the NASA ECHO I. The antenna was 50 feet in length and the entire structure weighed about 18 tons. It was composed of aluminum with a steel base. It was used to detect radio waves that bounced off Project ECHO balloon satellites. The horn was later modified to work with the Telstar Communication Satellite frequencies as a receiver for broadcast signals from the satellite. In 1964, radio astronomers Robert Wilson and Arno Penzias discovered the cosmic microwave background radiation with it, for which they were awarded the 1978 Nobel prize in physics. In 1990 the horn was dedicated to the National Park Service as a National Historic Landmark.

    But considering the scale involved, these limited observations have had an outsized influence on cosmological theory.

    Edwin Hubble at the 100-inch Hooker telescope at Mount Wilson in Southern California, where in 1929 he discovered that the universe is expanding

    NASA Cosmic Background Explorer (COBE), 1989 to 1993

    Cosmic microwave background, per NASA/WMAP (2001 to 2010)

    Cosmic microwave background, per ESA/Planck (2009 to 2013)

    It is of course entirely plausible that the validity of general relativity breaks down much closer to our own home than at the edge of the hypothetical end of the universe. And if that were the case, today’s multilayered theoretical edifice of the big bang paradigm would turn out to be a confusing mix of fictional beasts invented to uphold the model along with empirically valid variables, mutually reliant on each other to the point of making it impossible to sort science from fiction.

    Compounding this problem, most observations of the universe occur experimentally and indirectly. Today’s space telescopes provide no direct view of anything—they produce measurements through an interplay of theoretical predictions and pliable parameters, in which the model is involved every step of the way. The framework literally frames the problem; it determines where and how to observe. And so, despite the advanced technologies and methods involved, the profound limitations to the endeavor also increase the risk of being led astray by the kind of assumptions that cannot be calculated.

    After spending many years researching the foundations of cosmological physics from a philosophy of science perspective, I have not been surprised to hear some scientists openly talking about a crisis in cosmology. In the big “inflation debate” in Scientific American a few years ago, a key piece of the big bang paradigm was criticized by one of the theory’s original proponents for having become indefensible as a scientific theory.

    Why? Because inflation theory relies on ad hoc contrivances to accommodate almost any data, and because its proposed physical field is not based on anything with empirical justification. This is probably because a crucial function of inflation is to bridge the transition from an unknowable big bang to a physics we can recognize today. So, is it science or a convenient invention?

    Inflation

    Alan Guth, from Highland Park High School and M.I.T., who first proposed cosmic inflation

    HPHS Owls

    Lambda-Cold Dark Matter, Accelerated Expansion of the Universe, Big Bang-Inflation (timeline of the universe). Date: 2010. Credit: Alex Mittelmann, Coldcreation

    Alan Guth’s notes

    A few astrophysicists, such as Michael J. Disney, have criticized the big bang paradigm for its lack of demonstrated certainties. In his analysis, the theoretical framework has far fewer certain observations than free parameters to tweak them—a so-called “negative significance” that would be an alarming sign for any science. As Disney writes in American Scientist: “A skeptic is entitled to feel that a negative significance, after so much time, effort and trimming, is nothing more than one would expect of a folktale constantly re-edited to fit inconvenient new observations.”

    As I discuss in my new book, Metaphysical Experiments, there is a deeper history behind the current problems. The big bang hypothesis itself originally emerged as an indirect consequence of general relativity undergoing remodeling. Einstein had made a fundamental assumption about the universe, that it was static in both space and time, and to make his equations add up, he added a “cosmological constant,” for which he freely admitted there was no physical justification.

    But when Hubble observed that the universe was expanding and Einstein’s solution no longer seemed to make sense, some mathematical physicists tried to change a fundamental assumption of the model: that the universe was the same in all spatial directions but variant in time. Not insignificantly, this theory came with a very promising upside: a possible merger between cosmology and nuclear physics. Could the brave new model of the atom also explain our universe?

    From the outset, the theory only spoke to the immediate aftermath of an explicitly hypothetical event, whose principal function was as a limit condition, the point at which the theory breaks down. Big bang theory says nothing about the big bang; it is rather a possible hypothetical premise for resolving general relativity.

    On top of this undemonstrable but very productive hypothesis, floor upon floor has been added intact, with vastly extended scales and new discrepancies. To explain observations of galaxies inconsistent with general relativity, the existence of dark matter was posited as an unknown and invisible form of matter calculated to make up more than a quarter of all mass-energy content in the universe—assuming, of course, the framework is universally valid.

    Fritz Zwicky discovered dark matter while observing the movement of the Coma Cluster. Vera Rubin, a woman in STEM who was denied the Nobel Prize, did most of the work on dark matter.

    Fritz Zwicky, from http://palomarskies.blogspot.com

    Astronomer Vera Rubin at the Lowell Observatory in 1965, worked on Dark Matter (The Carnegie Institution for Science)

    Coma cluster via NASA/ESA Hubble

    In 1998, when a set of supernova measurements of accelerating galaxies seemed at odds with the framework, a new theory emerged of a mysterious force called dark energy, calculated to fill circa 70 percent of the mass-energy of the universe.

    [The Supernova Cosmology Project is one of two research teams that determined the likelihood of an accelerating universe and therefore a positive cosmological constant, using data from the redshift of Type Ia supernovae. The project was headed by Saul Perlmutter at Lawrence Berkeley National Laboratory, with members from Australia, Chile, France, Portugal, Spain, Sweden, the United Kingdom, and the United States.

    This discovery was named “Breakthrough of the Year for 1998” by Science magazine and, along with the High-z Supernova Search Team, the project team won the 2007 Gruber Prize in Cosmology and the 2015 Breakthrough Prize in Fundamental Physics. In 2011, Perlmutter was awarded the Nobel Prize in Physics for this work, alongside Adam Riess and Brian P. Schmidt from the High-z team.]

    The crux of today’s cosmological paradigm is that in order to maintain a mathematically unified theory valid for the entire universe, we must accept that 95 percent of our cosmos is furnished by completely unknown elements and forces for which we have no empirical evidence whatsoever. For a scientist to be confident of this picture requires an exceptional faith in the power of mathematical unification.
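    The arithmetic behind that 95 percent figure is simple enough to spell out. The rounded percentages below follow the article’s own numbers (circa 70 percent dark energy, just over a quarter dark matter) and are approximations, not precise survey values:

    ```python
    # Approximate mass-energy budget under the standard Lambda-CDM model,
    # using the rounded percentages cited in the article (not exact survey values).
    budget = {
        "dark energy": 70.0,     # inferred from the accelerating expansion
        "dark matter": 25.0,     # inferred from galactic dynamics and lensing
        "ordinary matter": 5.0,  # everything ever directly observed
    }

    dark_sector = budget["dark energy"] + budget["dark matter"]
    print(f"empirically unobserved share of the cosmos: {dark_sector:.0f}%")
    ```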

    In the end, the conundrum for cosmology is its reliance on the framework as a necessary presupposition for conducting research. For lack of a clear alternative, as astrophysicist Disney also notes, it is in a sense stuck with the paradigm. It seems more pragmatic to add new theoretical floors than to rethink the fundamentals.

    Contrary to the scientific ideal of getting progressively closer to the truth, it looks rather like cosmology, to borrow a term from technology studies, has become path-dependent: overdetermined by the implications of its past inventions.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Scientific American, the oldest continuously published magazine in the U.S., has been bringing its readers unique insights about developments in science and technology for more than 160 years.

     
  • richardmitnick 5:44 pm on April 17, 2019 Permalink | Reply
    Tags: Einstein’s Unfinished Revolution: The Search for What Lies Beyond the Quantum, Scientific American

    From Scientific American: “Cosmologist Lee Smolin says that at certain key points, the scientific worldview is based on fallacious reasoning” 


    April 17, 2019
    Jim Daley

    Lee Smolin, author of six books about the philosophical issues raised by contemporary physics, says every time he writes a new one, the experience completely changes the direction his own research is taking. In his latest book, Einstein’s Unfinished Revolution: The Search for What Lies Beyond the Quantum, Smolin, a cosmologist and quantum theorist at the Perimeter Institute for Theoretical Physics in Ontario, tackles what he sees as the limitations in quantum theory.

    Credit: Perimeter Institute

    “I want to say the scientific worldview is based on fallacious reasoning at certain key points,” Smolin says. In Einstein’s Unfinished Revolution, he argues one of those key points was the assumption that quantum physics is a complete theory. This incompleteness, Smolin argues, is the reason quantum physics has not been able to solve certain questions about the universe.

    “Most of what we do [in science] is take the laws that have been discovered by experiments to apply to parts of the universe, and just assume that they can be scaled up to apply to the whole universe,” Smolin says. “I’m going to be suggesting that’s wrong.”

    Join Smolin at the Perimeter Institute as he discusses his book and takes the audience on a journey through the basics of quantum physics and the experiments and scientists who have changed our understanding of the universe. The discussion, “Einstein’s Unfinished Revolution,” is part of Perimeter’s public lecture series and will take place on Wednesday, April 17, at 7 P.M. Eastern time. Online viewers can participate in the discussion by tweeting to @Perimeter using the #piLIVE hashtag.

    See the full article here.



     
  • richardmitnick 8:17 am on April 5, 2019 Permalink | Reply
    Tags: In string theory a “solution” implies a vacuum of spacetime that is governed by Einstein’s theory of gravity coupled to a quantum field theory, In the past two decades a new branch of string theory called F-theory has allowed physicists to work with strongly interacting or strongly coupled strings, Scientific American, String theorists can use algebraic geometry to analyze the various ways of compactifying extra dimensions in F-theory and to find solutions

    From Scientific American: “Found: A Quadrillion Ways for String Theory to Make Our Universe” 


    Mar 29, 2019
    Anil Ananthaswamy

    Stemming from the “F-theory” branch of string theory, each solution replicates key features of the standard model of particle physics.

    Photo: dianaarturovna/Getty Images

    Physicists who have been roaming the “landscape” of string theory — the space of zillions and zillions of mathematical solutions of the theory, where each solution provides the kinds of equations physicists need to describe reality — have stumbled upon a subset of such equations that have the same set of matter particles as exists in our universe.

    String theory depiction: cross section of the quintic Calabi–Yau manifold. Credit: Jbourjai (using Mathematica output)

    Standard Model of Supersymmetry via DESY

    But this is no small subset: there are at least a quadrillion such solutions, making it the largest such set ever found in string theory.

    According to string theory, all particles and fundamental forces arise from the vibrational states of tiny strings. For mathematical consistency, these strings vibrate in 10-dimensional spacetime. And for consistency with our familiar everyday experience of the universe, with three spatial dimensions and the dimension of time, the additional six dimensions are “compactified” so as to be undetectable.

    Different compactifications lead to different solutions. In string theory, a “solution” implies a vacuum of spacetime that is governed by Einstein’s theory of gravity coupled to a quantum field theory. Each solution describes a unique universe, with its own set of particles, fundamental forces and other such defining properties.

    Some string theorists have focused their efforts on trying to find ways to connect string theory to properties of our known, observable universe — particularly the standard model of particle physics, which describes all known particles and all their mutual forces except gravity.

    Much of this effort has involved a version of string theory in which the strings interact weakly. However, in the past two decades, a new branch of string theory called F-theory has allowed physicists to work with strongly interacting, or strongly coupled, strings.

    ____________________________________________________
    F-theory is a branch of string theory developed by Cumrun Vafa. The new vacua described by F-theory were discovered by Vafa and allowed string theorists to construct new realistic vacua — in the form of F-theory compactified on elliptically fibered Calabi–Yau four-folds. The letter “F” supposedly stands for “Father”.

    F-theory is formally a 12-dimensional theory, but the only way to obtain an acceptable background is to compactify this theory on a two-torus. By doing so, one obtains type IIB superstring theory in 10 dimensions. The SL(2,Z) S-duality symmetry of the resulting type IIB string theory is manifest because it arises as the group of large diffeomorphisms of the two-dimensional torus.

    More generally, one can compactify F-theory on an elliptically fibered manifold (elliptic fibration), i.e. a fiber bundle whose fiber is a two-dimensional torus (also called an elliptic curve). For example, a subclass of the K3 manifolds is elliptically fibered, and F-theory on a K3 manifold is dual to heterotic string theory on a two-torus. Also, the moduli spaces of those theories should be isomorphic.

    The large number of semirealistic solutions to string theory, referred to as the string theory landscape, with 10^272,000 elements or so, is dominated by F-theory compactifications on Calabi–Yau four-folds. There are about 10^15 of those solutions consistent with the Standard Model of particle physics.

    -Wikipedia

    ____________________________________________________

    “An intriguing, surprising result is that when the coupling is large, we can start describing the theory very geometrically,” says Mirjam Cvetic of the University of Pennsylvania in Philadelphia.

    This means that string theorists can use algebraic geometry — which uses algebraic techniques to tackle geometric problems — to analyze the various ways of compactifying extra dimensions in F-theory and to find solutions. Mathematicians have been independently studying some of the geometric forms that appear in F-theory. “They provide us physicists a vast toolkit”, says Ling Lin, also of the University of Pennsylvania. “The geometry is really the key… it is the ‘language’ that makes F-theory such a powerful framework.”

    Now, Cvetic, Lin, James Halverson of Northeastern University in Boston, and their colleagues have used such techniques to identify a class of solutions with string vibrational modes that lead to a similar spectrum of fermions (or, particles of matter) as is described by the standard model — including the property that all fermions come in three generations (for example, the electron, muon and tau are the three generations of one type of fermion).

    Standard Model of Particle Physics. Credit: Latham Boyle and Mardus, Wikimedia Commons

    The F-theory solutions found by Cvetic and colleagues have particles that also exhibit the handedness, or chirality, of the standard model particles. In particle physics lingo, the solutions reproduce the exact “chiral spectrum” of standard model particles. For example, the quarks and leptons in these solutions come in left and right-handed versions, as they do in our universe.

    The new work shows that there are at least a quadrillion solutions in which particles have the same chiral spectrum as the standard model, which is 10 orders of magnitude more solutions than had been found within string theory until now. “This is by far the largest domain of standard model solutions,” Cvetic says. “It’s somehow surprising and actually also rewarding that it turns out to be in the strongly coupled string theory regime, where geometry helped us.”

    A quadrillion — while it’s much, much smaller than the size of the landscape of solutions in F-theory (which at last count was shown to be of the order of 10^272,000) — is a tremendously large number. “And because it’s a tremendously large number, and it gets something nontrivial in real world particle physics correct, we should take it seriously and study it further,” Halverson says.
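    As a quick sanity check on those magnitudes, the numbers quoted above can be compared directly. The “previous best” count of about 10^5 solutions is inferred here from the stated 10-orders-of-magnitude improvement; it is an assumption, not a figure from the article:

    ```python
    import math

    quadrillion = 10**15          # new chiral-spectrum solutions ("at least a quadrillion")
    previous_best = 10**5         # assumed: 10 orders of magnitude fewer than 10^15
    landscape_exponent = 272_000  # full F-theory landscape ~ 10^272,000 solutions

    # Improvement over earlier searches, in orders of magnitude
    print(math.log10(quadrillion // previous_best))  # -> 10.0

    # Fraction of the landscape the quadrillion represents: 10^(15 - 272,000)
    print(f"10^{15 - landscape_exponent} of all F-theory solutions")
    ```

    Even the largest set of standard-model-like solutions ever found is a vanishing sliver of the full landscape.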

    Further study would involve uncovering stronger connections with the particle physics of the real world. The researchers still have to work out the couplings or interactions between particles in the F-theory solutions — which again depend on the geometric details of the compactifications of the extra dimensions.

    It could be that within the space of the quadrillion solutions, there are some with couplings that could cause the proton to decay within observable timescales. This would clearly be at odds with the real world, as experiments have yet to see any sign of protons decaying. Alternatively, physicists could search for solutions that realize the spectrum of standard model particles that preserve a mathematical symmetry called R-parity. “This symmetry forbids certain proton decay processes and would be very attractive from a particle physics point of view, but is missing in our current models,” Lin says.

    Also, the work assumes supersymmetry, which means that all the standard model particles have partner particles. String theory needs this symmetry in order to ensure the mathematical consistency of solutions.

    But in order for any supersymmetric theory to tally with the observable universe, the symmetry has to be broken (much like how a diner’s selection of cutlery and drinking glass on her left or right side will “break” the symmetry of the table setting at a round dinner table). Otherwise, the partner particles would have the same mass as standard model particles—and that is clearly not the case, since we don’t observe any such partner particles in our experiments.

    Crucially, experiments at the Large Hadron Collider (LHC) have also shown that supersymmetry — if it is the correct description of nature — is broken at energy scales beyond those probed by the LHC, given that the LHC has yet to find any supersymmetric particles.

    String theorists think that supersymmetry might be broken only at extremely high energies that are not within experimental reach anytime soon. “The expectation in string theory is that high-scale [supersymmetry] breaking, which is fully consistent with LHC data, is completely possible,” Halverson says. “It requires further analysis to determine whether or not it happens in our case.”

    Despite these caveats, other string theorists are approving of the new work. “This is definitely a step forward in demonstrating that string theory gives rise to many solutions with features of the standard model,” says string theorist Washington Taylor of MIT.

    “It’s very nice work,” says Cumrun Vafa, one of the developers of F-theory, at Harvard University. “The fact you can arrange the geometry and topology to fit with not only Einstein’s equations, but also with the [particle] spectrum that we want, is not trivial. It works out nicely here.”

    But Vafa and Taylor both caution that these solutions are far from matching perfectly with the standard model. Getting solutions to match exactly with the particle physics of our world is one of the ultimate goals of string theory. Vafa is among those who think that, despite the immensity of the landscape of solutions, there exists a unique solution that matches our universe. “I bet there is exactly one,” he says. But, “to pinpoint this is not going to be easy.”

    See the full article here.



     
  • richardmitnick 1:28 pm on February 8, 2019 Permalink | Reply
    Tags: ESA Galileo navigation system, , ESA's answer to USA GPS, Scientific American   

    From Scientific American: “Wayward Satellites Test Einstein’s Theory of General Relativity” 


    February 8, 2019
    Megan Gannon

    The botched launch of two Galileo navigation probes made for an unexpected experiment.

    Galileo satellite. Credit: P. Carril and ESA

    ESA Galileo navigation constellation

    In August 2014 a rocket launched the fifth and sixth satellites of the Galileo global navigation system, the European Union’s $11-billion answer to the U.S.’s GPS. But celebration turned to disappointment when it became clear that the satellites had been dropped off at the wrong cosmic “bus stops.” Instead of being placed in circular orbits at stable altitudes, they were stranded in elliptical orbits useless for navigation.

    The mishap, however, offered a rare opportunity for a fundamental physics experiment. Two independent research teams—one led by Pacôme Delva of the Paris Observatory in France, the other by Sven Herrmann of the University of Bremen in Germany—monitored the wayward satellites to look for holes in Einstein’s general theory of relativity.

    “General relativity continues to be the most accurate description of gravity, and so far it has withstood a huge number of experimental and observational tests,” says Eric Poisson, a physicist at the University of Guelph in Ontario, who was not involved in the new research. Nevertheless, physicists have not been able to merge general relativity with the laws of quantum mechanics, which explain the behavior of energy and matter at a very small scale. “That’s one reason to suspect that gravity is not what Einstein gave us,” Poisson says. “It’s probably a good approximation, but there’s more to the story.”

    Einstein’s theory predicts time will pass more slowly close to a massive object, which means that a clock on Earth’s surface should tick at a more sluggish rate relative to one on a satellite in orbit. This time dilation is known as gravitational redshift. Any subtle deviation from this pattern might give physicists clues for a new theory that unifies gravity and quantum physics.

    Even after the Galileo satellites were nudged closer to circular orbits, they were still climbing and falling about 8,500 kilometers twice a day. Over the course of three years, Delva’s and Herrmann’s teams watched how the resulting shifts in gravity altered the frequency of the satellites’ superaccurate atomic clocks. In a previous gravitational redshift test, conducted in 1976 when the Gravity Probe-A suborbital rocket was launched into space with an atomic clock onboard, researchers verified general relativity’s prediction of the clock’s frequency shift to an uncertainty of 1.4 × 10⁻⁴.

    The new studies, published last December in Physical Review Letters, again verified Einstein’s prediction—and increased that precision by a factor of 5.6. So, for now, the century-old theory still reigns.
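    The size of the effect the teams measured can be sketched with the weak-field redshift formula, Δf/f ≈ (GM/c²)(1/r_low − 1/r_high). The orbital altitudes below are illustrative round numbers for the partially corrected Galileo orbits, not the missions’ exact elements:

    ```python
    # Back-of-envelope gravitational redshift for an eccentric Galileo-like orbit.
    # Altitudes are assumed round numbers, not the actual orbital elements.
    GM_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
    C = 299_792_458.0          # speed of light, m/s
    R_EARTH = 6.371e6          # mean Earth radius, m

    r_low = R_EARTH + 17_200e3   # m, assumed perigee altitude
    r_high = R_EARTH + 26_000e3  # m, assumed apogee altitude

    # Fractional clock-frequency modulation between the orbit's low and high points
    shift = (GM_EARTH / C**2) * (1.0 / r_low - 1.0 / r_high)
    print(f"fractional frequency modulation: {shift:.2e}")  # a few parts in 10^11
    ```

    A periodic modulation of a few parts in 10^11, repeating twice a day, is exactly the kind of signal an onboard atomic clock can resolve over three years of data.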

    See the full article here.



     
  • richardmitnick 10:49 am on January 30, 2019 Permalink | Reply
    Tags: Finding Alien Life May Require Giant Telescopes Built in Orbit, How big such a telescope must be to offer a reasonable chance of success in that interstellar quest depends on life’s still-unknown cosmic prevalence, “in-Space Assembled Telescope” (iSAT) study, NASA’s still-in-development Space Launch System (SLS), Scientific American, The forces demanding supersize space telescopes are straightforward: the larger a scope’s light-collecting mirror is, the deeper and finer its cosmic gaze, Two of NASA’s pinnacle projects—the International Space Station (ISS) and the Hubble Space Telescope—owe their existence to orbital construction work

    From Scientific American: “Finding Alien Life May Require Giant Telescopes Built in Orbit” 


    December 12, 2018 [Just presented in social media.]
    Lee Billings

    Scientific American reports on new efforts from NASA and other federal agencies seeking to service and assemble large structures—such as life-finding telescopes—in space.

    Astronauts repair and upgrade the Hubble Space Telescope during the first servicing mission to that orbital observatory, in 1993. NASA is now studying how telescopes far larger than Hubble might someday be assembled and serviced in space by astronauts or robots. Credit: NASA.

    After snapping the final piece into place with a satisfying “click” she feels through her spacesuit gloves, the astronaut pauses to appreciate the view. Her reflection swims before her in a silvery disk the size of three tennis courts; for a moment she feels like a bug floating on a darkened pond. Composed of hundreds of interlocking metallic hexagons like the one she has just installed, the disk is a colossal mirror 30 meters wide, the starlight-gathering eye of the largest space telescope ever built. From her perch on the robotic arm of a small space station, Earth is a tiny blue and white orb she could cover with an outstretched thumb, dwarfed by the bright and silent moon spinning thousands of kilometers below her feet.

    Although this scene remains the stuff of science fiction, an ad hoc assemblage of scientists, engineers and technocrats now say it is well on its way to becoming reality. Under the auspices of a modest NASA-sponsored initiative, this diverse group is gauging how the space agency might build bigger, better space telescopes than previously thought possible—by constructing and servicing them in space. The effort, formally known as the “in-Space Assembled Telescope” study (iSAT), is part of a long trend in which science advances by piggybacking on technologies created for more practical concerns.

    For example, the development of surveillance satellites and warhead-carrying rockets during the 20th-century cold war also catalyzed the creation of robotic interplanetary probes and even NASA’s crewed Apollo lunar missions. Similarly, in the 21st century a soaring military and industrial demand for building and servicing satellites in orbit could lead to dramatically enhanced space telescopes capable of definitively answering some of science’s biggest questions—such as whether or not we are alone. “The iSAT is a program that can be NASA’s next Apollo,” says study member Matt Greenhouse, an astrophysicist at the space agency’s Goddard Space Flight Center. “And the science enabled by the iSAT would likely include discovery of extraterrestrial life—an achievement that would eclipse Apollo in terms of impact on humanity.”


    NASA Goddard Space Flight Center campus


    Ready for Prime Time

    In some respects, building and repairing spacecraft in space is a revolution that has already arrived, merely kept under the radar by a near-flawless track record that makes it seem deceptively routine. Two of NASA’s pinnacle projects—the International Space Station (ISS) and the Hubble Space Telescope—owe their existence to orbital construction work.

    ISS

    NASA/ESA Hubble Telescope

    Assembled and resupplied in orbit over two decades, the ISS is now roughly as big as a football field and has more living space than a standard six-bedroom house. And only space-based repairs allowed Hubble to become the world’s most iconic and successful telescope, after a space shuttle crew on a first-of-its-kind servicing mission in 1993 fixed a crippling defect in the observatory’s primary mirror.

    NASA COSTAR

    NASA COSTAR installation

    Astronauts have since conducted four more Hubble servicing missions, replacing equipment and upgrading instruments to leave behind an observatory reborn.

    COSTAR was removed from HST in 2009 during the fifth servicing mission and replaced by the Cosmic Origins Spectrograph. It is now on exhibit in the Smithsonian’s National Air and Space Museum.

    NASA Hubble Cosmic Origins Spectrograph

    An artist’s rendition of the upcoming Dragonfly mission, a collaboration between NASA and Space Systems Loral to demonstrate technologies required for orbital construction. Dragonfly’s robotic arm (inset) will assemble and deploy reflectors to create a large radio antenna when the mission launches sometime in the 2020s. Credit: NASA and SSL.

    Today multiple projects are carrying the momentum forward from those pioneering efforts, cultivating powerful new capabilities. Already NASA and the Pentagon’s Defense Advanced Research Projects Agency (DARPA) as well as private-sector companies such as Northrop Grumman and Space Systems Loral (SSL) are building robotic spacecraft for launch in the next few years on lengthy missions to refuel, repair, re-position and upgrade governmental and commercial satellites. Those spacecraft—or at least the technologies they demonstrate—could also be used to assemble telescopes and other large structures in space such as those associated with NASA’s perennial planning for human missions to the moon and Mars. Last year—under the auspices of a “partnership forum” between NASA, the U.S. Air Force and National Reconnaissance Office—the space agency took the lead on crafting a national strategy for further public and private development of in-space assembly in the 2020s and beyond.

    These trends could end what some experts see as a “dark age” in space science and exploration. “Imagine a world where once your car runs low on fuel, instead of driving to the gas station you take it to the junkyard and abandon it. Imagine a world where once you’ve moved into your house for the first time you have no way of ever getting more groceries inside, having a plumber come to fix a leaky pipe or any way to bring in and install a new TV. Imagine a world where we all live in tents that we can carry on our backs and no one thinks to build anything larger or more permanent. That seems crazy, doesn’t it?” says iSAT study member Joe Parrish, a program manager for DARPA’s Tactical Technology Office who helms its Robotic Servicing of Geosynchronous Satellites (RSGS) mission. “But that’s exactly the world we live in right now with our $1-billion–class assets in space. … I think we will look back on the era before on-orbit servicing and assembly the way we now look back on the era when leeches were used to treat diseases.”

    Bigger Is Better

    The fundamental reality behind the push for in-space assembly is easy to understand: Anything going to space must fit within the rocket taking it there. Even the very biggest—the mammoth 10-meter rocket fairing of NASA’s still-in-development Space Launch System (SLS)—would be unable to hold something like the ISS or even the space agency’s smaller “Gateway,” a moon-orbiting space station proposed for the 2020s.

    NASA Space Launch System depiction

    Launching such megaprojects piece by piece, for orbital assembly by astronauts or robots, is literally the only way to get them off the ground. And coincidentally, even though massive “heavy lift” rockets such as the SLS remain ruinously expensive, the midsize rockets that could support orbital assembly with multiple launches are getting cheaper all the time.

    The forces demanding supersize space telescopes are straightforward, too: The larger a scope’s light-collecting mirror is, the deeper and finer its cosmic gaze. Simply put, bigger is better when it comes to telescopes—especially ones with transformative objectives such as tracking the coalescence of galaxies, stars and planets throughout the universe’s 13.8-billion-year history, learning the nature of dark matter and dark energy, and seeking out signs of life on habitable worlds orbiting other stars. Most of today’s designs for space telescopes pursuing such alluring quarry cap out with mirrors as wide as 15 meters—but only because that is the approximate limit of what could be folded to fit within a heavy-lift rocket like the SLS.
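The "bigger is better" rule has a simple quantitative core: a telescope's sharpest achievable angular resolution is set by the diffraction limit, roughly 1.22 λ/D, so resolution improves linearly with mirror diameter (and light-gathering power with its square). A minimal sketch, with mirror sizes chosen for illustration:

```python
ARCSEC_PER_RAD = 206_264.8  # arcseconds in one radian

def diffraction_limit_arcsec(wavelength_m: float, aperture_m: float) -> float:
    """Rayleigh-criterion angular resolution, theta ~= 1.22 * lambda / D."""
    return 1.22 * wavelength_m / aperture_m * ARCSEC_PER_RAD

# Visible light (550 nm) for three mirror diameters:
# Hubble-class, the folded-launch limit, and an assembled-in-space giant.
for d_m in (2.4, 15.0, 30.0):
    print(f"{d_m:5.1f} m mirror -> {diffraction_limit_arcsec(550e-9, d_m):.4f} arcsec")
```

Doubling the aperture halves the smallest resolvable angle, which is why the jump from a 15-meter folded design to a 30-meter assembled one is scientifically significant rather than incremental.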

    Astronomers have long fantasized about building space observatories even bigger, with mirrors 30 meters wide or more—rivaling the sizes of ground-based telescopes already under construction for the 2020s. Assembled far above our planet’s starlight-scattering atmosphere, these behemoths could perform feats the likes of which ground-based observers can only dream, such as taking pictures of potentially Earth-like worlds around a huge sample of other stars to determine whether those worlds are actually habitable—or even inhabited. If our own Earth is any example to go by, life is a planetary phenomenon that can transform the atmosphere and surface of its home world in clearly recognizable ways; provided, that is, one has a telescope big enough to see such details across interstellar distances.

    A recent “Exoplanet Science Strategy” report from the National Academies of Sciences, Engineering and Medicine said NASA should take the lead on a major new space telescope that begins to approach that grand vision—something capable of surveying hundreds (or at least dozens) of nearby stars for snapshots of potential exo-Earths. That recommendation (itself an echo from several previous prestigious studies) is reinforced by the core conclusion of another new Academies report which calls for the agency to make the search for alien life a more fundamental part of its future space exploration activities. These reports build on the growing consensus that our galaxy likely holds billions of potentially habitable worlds, courtesy of statistics from NASA’s recently deceased Kepler space telescope and the space agency’s newly launched Transiting Exoplanet Survey Satellite.

    NASA/Kepler Telescope

    NASA/MIT TESS

    Whether viewed through the lens of scientific progress, technological capability or public interest, the case for building a life-finding space telescope is stronger than ever before—and steadily strengthening. Sooner or later it seems NASA will find itself tasked with making this longed-for giant leap in the search for life among the stars.

    How big such a telescope must be to offer a reasonable chance of success in that interstellar quest depends on life’s still-unknown cosmic prevalence. With a bit of luck, one with a four-meter mirror might suffice to hit the jackpot, locating an inhabited exo-Earth around one of our sun’s nearest neighboring stars. But if the cosmos is less kind and the closest life-bearing worlds are much farther away, something in excess of the 15-meter limit imposed by near-future rockets could be necessary to sniff out any living planets within our solar system’s corner of the galaxy. In short, in-space assembly may offer the only viable path to completing the millennia-long effort to end humanity’s cosmic loneliness.
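The mirror sizes quoted here can be roughed out from geometry: a planet 1 AU from its star subtends 1/d arcseconds at a distance of d parsecs, and a coronagraphic telescope typically cannot probe separations much inside a few λ/D. A back-of-the-envelope sketch, assuming an inner working angle of 3 λ/D (a typical design value, not a figure from the article):

```python
ARCSEC_PER_RAD = 206_264.8  # arcseconds in one radian

def min_aperture_m(dist_pc: float, sep_au: float = 1.0,
                   wavelength_m: float = 550e-9,
                   iwa_lambda_over_d: float = 3.0) -> float:
    """Smallest mirror that places a planet `sep_au` from its star
    outside an assumed coronagraph inner working angle of n * lambda / D."""
    sep_rad = (sep_au / dist_pc) / ARCSEC_PER_RAD  # 1 AU at d pc = 1/d arcsec
    return iwa_lambda_over_d * wavelength_m / sep_rad

print(f"Earth twin at 10 pc: {min_aperture_m(10):.1f} m mirror")
print(f"Earth twin at 30 pc: {min_aperture_m(30):.1f} m mirror")
```

These bare geometric minima come out near 3 and 10 meters; real mission studies add margin for contrast and wavelength coverage, which pushes the answers toward the 4-meter and 15-plus-meter figures discussed above.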

See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Scientific American, the oldest continuously published magazine in the U.S., has been bringing its readers unique insights about developments in science and technology for more than 160 years.

     
  • richardmitnick 11:37 am on December 10, 2018 Permalink | Reply
    Tags: Advances like those made by Hubble are possible only through sustained publicly-funded research, Arthur “Art” Code, Lyman Spitzer, OAO-2, Scientific American, Space Astronomy Laboratory at UW–Madison

    From Scientific American: “The World’s First Space Telescope” 

    Scientific American

    From Scientific American

    December 7, 2018
    James Lattis

    50 years ago, astronomers launched the Orbiting Astronomical Observatory, whose descendants include the Hubble, Spitzer and James Webb telescopes.

    In July 1958, an astronomer at the University of Wisconsin–Madison named Arthur “Art” Code received a telegram from the fledgling Space Science Board of the National Academy of Sciences. The agency wanted to know what he and his colleagues would do if given the opportunity to launch into Earth’s orbit an instrument weighing up to 100 pounds.

    Code, newly-minted director of the University’s Washburn Observatory, had something in mind. His department was already well known for pioneering a technique for measuring the light emitted by celestial objects, called photoelectric photometry, and Code had joined the university with the intent of adapting it to the burgeoning field of space astronomy.

    He founded the Space Astronomy Laboratory at UW–Madison and, with his colleagues, proposed to launch a small telescope equipped with a photoelectric photometer, designed to measure the ultraviolet (UV) energy output of stars—a task impossible from Earth’s surface. Fifty years ago, on December 7, 1968, that idea culminated in NASA’s launch of the first successful space-based observatory: the Orbiting Astronomical Observatory, or OAO-2.

    NASA U Wisconsin Orbiting Astronomical Observatory OAO-2

    With it was born the era of America’s Great Observatories, bearing the Hubble, Spitzer, Chandra and Compton space telescopes, a time during which our understanding of the universe repeatedly deepened and transformed.

    NASA/ESA Hubble Telescope

    NASA/Spitzer Infrared Telescope

    NASA/Chandra X-ray Telescope

    NASA Compton Gamma Ray Observatory

    Today, dwindling political appetite and lean funding threaten our progress. Contemporary projects like the James Webb Space Telescope flounder, and federal budgets omit promising projects like the Wide Field Infrared Survey Telescope (WFIRST).

    NASA/ESA/CSA Webb Telescope annotated

    NASA WFIRST

    In celebrating the half century since OAO-2’s launch, we are reminded that major scientific achievements like it become part of the public trust, and to make good on the public trust, we must repay our debt to history by investing in our future. Advances like those made by Hubble are possible only through sustained, publicly-funded research.

    These first investments originated in the late 1950s, during the space race between the U.S. and the USSR. They led to economic gains in the private sector, technological and scientific innovations, and the birth of new fields of exploration.

    Astronomer Lyman Spitzer, considered the father of the Hubble Space Telescope, first seriously proposed space-based observing in a 1946 RAND Corporation study. By leaving Earth’s atmosphere, he argued, astronomers could point telescopes at and follow nearly anything in the sky, from comets to galaxy clusters, and measure light in a broader range of the electromagnetic spectrum.

    When Code pitched Wisconsin’s idea to the Space Board, the result was NASA funding to create part of the scientific payload for OAO. The agency went to work planning a spacecraft that could support these astronomical instruments. The Cook Electric Company in Chicago and Grumman Aircraft Engineering Corporation in New York won contracts to help pull it off.

    The payload, named the Wisconsin Experiment Package (WEP), bundled five telescopes equipped with photoelectric photometers and two scanning spectrophotometers, all with UV capabilities. The Massachusetts Institute of Technology created a package of X-ray and gamma detectors.

    Scientists and engineers had to make the instruments on OAO both programmable and capable of operating autonomously between ground contacts. Because repairs were impossible once in orbit, they designed redundant systems and operating modes. Scientists also had to innovate systems for handling complex observations, transmitting data to Earth digitally (still a novelty in those days), and for processing data before they landed in the hands of astronomers.

    The first effort, OAO-1, suffered a fatal power failure after launch in 1966, and the scientific instruments were never turned on. But NASA reinvested, and OAO-2 launched with a new WEP from Wisconsin, and this time a complementary instrument from the Smithsonian Astrophysical Observatory, called Celescope, that used television camera technology to produce images of celestial objects emitting UV light. Expected to operate just one year, OAO-2 continued to make observations for four years.

    Numerous “guest” astronomers received access to the instruments during the extended mission. Such collaborations ultimately led to the creation of the Space Telescope Science Institute, which Code helped organize as acting director in 1981.

    And the data yielded many scientific firsts, including a modern understanding of stellar physics, surprise insights into stellar explosions called novae, and exploration of a comet that had far-reaching implications for theories of planet formation and evolution.

    To be responsible beneficiaries of such insights, we must remember that just as we are yesterday’s future, the firsts of tomorrow depend on today. We honor that public trust only by continuing to fund James Webb, WFIRST, and other projects not yet conceived.

    In the foreword of a 1971 volume publishing OAO-2’s scientific results, NASA’s Chief of Astronomy Nancy G. Roman wrote: “The performance of this satellite has completely vindicated the early planners and has rewarded … the entire astronomical community with many exciting new discoveries and much important data to aid in the unravelling of the secrets of the stars.”

    Let’s keep unraveling these stellar secrets.

    See the full article here.



     
  • richardmitnick 10:52 am on October 15, 2018 Permalink | Reply
    Tags: NASA Viking 2 Lander, Scientific American, Search for Alien Life Should Be a Fundamental Part of NASA New Report Urges, The Viking missions to Mars were the last time the space agency performed a direct explicit search for life on another world

    From Scientific American: “Search for Alien Life Should Be a Fundamental Part of NASA, New Report Urges” 

    Scientific American

    From Scientific American

    October 15, 2018
    Adam Mann

    An image taken by the Viking 2 lander from Utopia Planitia on the surface of Mars in 1976. The Viking missions to Mars were the last time the space agency performed a direct, explicit search for life on another world. Credit: NASA

    NASA Viking 2 Lander

    For decades many researchers have tended to view astrobiology as the underdog of space science. The field—which focuses on the investigation of life beyond Earth—has often been criticized as more philosophical than scientific, because it lacks tangible samples to study.

    Now that is all changing. Whereas astronomers once knew of no planets outside our solar system, today they have thousands of examples. And although organisms were previously thought to need the relatively mild surface conditions of our world to survive, new findings about life’s ability to persist in the face of extreme darkness, heat, salinity and cold have expanded researchers’ acceptance that it might be found anywhere from Martian deserts to the ice-covered oceans of Saturn’s moon Enceladus.

    Highlighting astrobiology’s increasing maturity and clout, a new Congressionally mandated report from the National Academy of Sciences (NAS) [National Academies Press] urges NASA to make the search for life on other worlds an integral, central part of its exploration efforts. The field is now well set to be a major motivator for the agency’s future portfolio of missions, which could one day let humanity know whether or not we are alone in the universe. “The opportunity to really address this question is at a critically important juncture,” says Barbara Sherwood Lollar, a geologist at the University of Toronto and chair of the committee that wrote the report.

    The astronomy and planetary science communities are currently gearing up to each perform their decadal surveys—once-every-10-year efforts that identify a field’s most significant open questions—and present a wish list of projects to help answer them. Congress and government agencies such as NASA look to the decadal surveys to plan research strategies; the decadals, in turn, look to documents such as the new NAS report for authoritative recommendations on which to base their findings. Astrobiology’s reception of such full-throated encouragement now may boost its odds of becoming a decadal priority.

    Another NAS study released last month could be considered a second vote in astrobiology’s favor. This “Exoplanet Science Strategy” report recommended NASA lead the effort on a new space telescope that could directly gather light from Earth-like planets around other stars. Two concepts, the Large Ultraviolet/Optical/Infrared (LUVOIR) telescope and the Habitable Exoplanet Observatory (HabEx), are current contenders for a multibillion-dollar NASA flagship mission that would fly as early as the 2030s.

    NASA Large UV Optical Infrared Surveyor (LUVOIR)

    NASA Habitable Exoplanet Imaging Mission (HabEx) The Planet Hunter

    Either observatory could use a coronagraph or a “starshade,” devices that selectively block starlight while letting planetary light through, to search for signs of habitability and of life in distant atmospheres.

    NASA JPL Starshade

    NASA/WFIRST


    JPL-Caltech is developing coronagraph technology to enable direct imaging and spectroscopy of exoplanets using the Astrophysics Focused Telescope Assets (AFTA) on the NASA Wide-Field Infrared Survey Telescope (WFIRST).

    But either would need massive and sustained support from outside astrobiology to succeed in the decadal process and beyond.

    There have been previous efforts to back large, astrobiologically focused missions, such as NASA’s Terrestrial Planet Finder concepts: ambitious space telescope proposals in the mid-2000s that would have spotted Earth-size exoplanets and characterized their atmospheres, had they ever made it off the drawing board. Instead, they suffered ignominious cancellations that taught astrobiologists several hard lessons. There was still too little information at the time about the number of planets around other stars, says Caleb Scharf, an astrobiologist at Columbia University, meaning advocates could not properly estimate such a mission’s odds of success. His community had yet to realize that in order to do large projects it needed to band together and show how its goals aligned with those of astronomers less professionally interested in finding alien life, he adds. “If we want big toys,” he says, “we need to play better with others.”

    There has also been tension in the past between the astrobiological goals of solar system exploration and the more geophysics-steeped goals that traditionally underpin such efforts, says Jonathan Lunine, a planetary scientist at Cornell University. Missions to other planets or moons have limited capacity for instruments, and those specialized for different tasks often end up in ferocious competitions for a slot onboard. Historically, because the search for life was so open-ended and difficult to define, associated instrumentation lost out to hardware with clearer, more constrained geophysical research priorities. Now, Lunine says, a growing understanding of all the ways biological and geologic evolution are interlinked is helping to show that such objectives do not have to be at odds. “I hope that astrobiology will be embedded as a part of the overall scientific exploration of the solar system,” he says. “Not as an add-on, but as one of the essential disciplines.”

    Above and beyond the recent NAS reports, NASA is arguably already demonstrating more interest in looking for life in our cosmic backyard than it has for decades. This year the agency released a request for experiments that could be carried to another world in our solar system to directly hunt for evidence of living organisms—the first such solicitation since the 1976 Viking missions that looked for life on Mars. “The Ladder of Life Detection,” a paper written by NASA scientists and published in Astrobiology in June, outlined ways to clearly determine if a sample contains extraterrestrial creatures—a goal mentioned in the NAS report. The document also suggests NASA partner with other agencies and organizations working on astrobiological projects, as the space agency did last month when it hosted a workshop with the nonprofit SETI Institute on the search for “techno-signatures,” potential indicators of intelligent aliens.



    “I think astrobiology has gone from being something that seemed fringy or distracting to something that seems to be embraced at NASA as a major touchstone for why we’re doing space exploration and why the public cares,” says Ariel Anbar, a geochemist at Arizona State University in Tempe.

    All of this means astrobiology’s growing influence is helping bring what once were considered outlandish ideas into reality. Anbar recalls attending a conference in the early 1990s, when then–NASA Administrator Dan Goldin displayed an Apollo-era image of Earth from space and suggested the agency try to do the same thing for a planet around another star.

    “That was pretty out there 25 years ago,” he says. “Now it’s not out there at all.”

    See the full article here.



     
  • richardmitnick 11:11 am on August 17, 2018 Permalink | Reply
    Tags: Is Gravity Quantum?, Scientific American

    From Scientific American: “Is Gravity Quantum?” 

    Scientific American

    From Scientific American

    August 14, 2018
    Charles Q. Choi

    Artist’s rendition of gravitational waves generated by merging neutron stars. The primordial universe is another source of gravitational waves, which, if detected, could help physicists devise a quantum theory of gravity. Credit: R. Hurt, Caltech-JPL

    All the fundamental forces of the universe are known to follow the laws of quantum mechanics, save one: gravity. Finding a way to fit gravity into quantum mechanics would bring scientists a giant leap closer to a “theory of everything” that could entirely explain the workings of the cosmos from first principles. A crucial first step in this quest to know whether gravity is quantum is to detect the long-postulated elementary particle of gravity, the graviton. In search of the graviton, physicists are now turning to experiments involving microscopic superconductors, free-falling crystals and the afterglow of the big bang.

    Quantum mechanics suggests everything is made of quanta, or packets of energy, that can behave like both a particle and a wave—for instance, quanta of light are called photons. Detecting gravitons, the hypothetical quanta of gravity, would prove gravity is quantum. The problem is that gravity is extraordinarily weak. To directly observe the minuscule effects a graviton would have on matter, physicist Freeman Dyson famously noted, a graviton detector would have to be so massive that it collapses on itself to form a black hole.

    “One of the issues with theories of quantum gravity is that their predictions are usually nearly impossible to experimentally test,” says quantum physicist Richard Norte of Delft University of Technology in the Netherlands. “This is the main reason why there exist so many competing theories and why we haven’t been successful in understanding how it actually works.”

    In 2015 [Physical Review Letters], however, theoretical physicist James Quach, now at the University of Adelaide in Australia, suggested a way to detect gravitons by taking advantage of their quantum nature. Quantum mechanics suggests the universe is inherently fuzzy—for instance, one can never absolutely know a particle’s position and momentum at the same time. One consequence of this uncertainty is that a vacuum is never completely empty, but instead buzzes with a “quantum foam” of so-called virtual particles that constantly pop in and out of existence. These ghostly entities may be any kind of quanta, including gravitons.

    Decades ago, scientists found that virtual particles can generate detectable forces. For example, the Casimir effect is the attraction or repulsion seen between two mirrors placed close together in vacuum. These reflective surfaces move due to the force generated by virtual photons winking in and out of existence. Previous research suggested that superconductors might reflect gravitons more strongly than normal matter, so Quach calculated that looking for interactions between two thin superconducting sheets in vacuum could reveal a gravitational Casimir effect. The resulting force could be roughly 10 times stronger than that expected from the standard virtual-photon-based Casimir effect.
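For scale, the standard photon Casimir pressure between ideal parallel plates is P = π²ħc / (240 d⁴); per Quach’s estimate quoted above, the gravitational analogue between superconductors could be roughly ten times this. A sketch of the textbook photon formula (the gap sizes are illustrative):

```python
import math

HBAR = 1.054_571_8e-34   # reduced Planck constant, J*s
C = 2.997_924_58e8       # speed of light, m/s

def casimir_pressure_pa(gap_m: float) -> float:
    """Ideal-plate Casimir pressure, P = pi^2 * hbar * c / (240 * d^4)."""
    return math.pi ** 2 * HBAR * C / (240 * gap_m ** 4)

print(f"{casimir_pressure_pa(100e-9):.1f} Pa")  # at a 100 nm gap
print(f"{casimir_pressure_pa(10e-9):.2e} Pa")   # at 10 nm: roughly an atmosphere
```

The steep 1/d⁴ dependence is why such experiments demand plates microns or less apart, and why microchip-scale devices like Norte’s are the natural testbed.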

    Recently, Norte and his colleagues developed a microchip to perform this experiment. This chip held two microscopic aluminum-coated plates that were cooled almost to absolute zero so that they became superconducting. One plate was attached to a movable mirror, and a laser was fired at that mirror. If the plates moved because of a gravitational Casimir effect, the frequency of light reflecting off the mirror would measurably shift. As detailed online July 20 in Physical Review Letters, the scientists failed to see any gravitational Casimir effect. This null result does not necessarily rule out the existence of gravitons—and thus gravity’s quantum nature. Rather, it may simply mean that gravitons do not interact with superconductors as strongly as prior work estimated, says quantum physicist and Nobel laureate Frank Wilczek of the Massachusetts Institute of Technology, who did not participate in this study and was unsurprised by its null results. Even so, Quach says, this “was a courageous attempt to detect gravitons.”

    Although Norte’s microchip did not discover whether gravity is quantum, other scientists are pursuing a variety of approaches to find gravitational quantum effects. For example, in 2017 two independent studies suggested that if gravity is quantum it could generate a link known as “entanglement” between particles, so that one particle instantaneously influences another no matter where either is located in the cosmos. A tabletop experiment using laser beams and microscopic diamonds might help search for such gravity-based entanglement. The crystals would be kept in a vacuum to avoid collisions with atoms, so they would interact with one another through gravity alone. Scientists would let these diamonds fall at the same time, and if gravity is quantum the gravitational pull each crystal exerts on the other could entangle them together.

    The researchers would seek out entanglement by shining lasers into each diamond’s heart after the drop. If particles in the crystals’ centers spin one way, they would fluoresce, but they would not if they spin the other way. If the spins in both crystals are in sync more often than chance would predict, this would suggest entanglement. “Experimentalists all over the world are curious to take the challenge up,” says quantum gravity researcher Anupam Mazumdar of the University of Groningen in the Netherlands, co-author of one of the entanglement studies.

    Another strategy to find evidence for quantum gravity is to look at the cosmic microwave background [CMB] radiation, the faint afterglow of the big bang, says cosmologist Alan Guth of M.I.T.

    Cosmic Background Radiation per ESA/Planck

    ESA/Planck 2009 to 2013

    Quanta such as gravitons fluctuate like waves, and the shortest wavelengths would have the most intense fluctuations. When the cosmos expanded staggeringly in size within a sliver of a second after the big bang, according to Guth’s widely supported cosmological model known as inflation, these short wavelengths would have stretched to longer scales across the universe.

    Inflation

    Alan Guth, from Highland Park High School and M.I.T., who first proposed cosmic inflation

    Lambda-Cold Dark Matter, Accelerated Expansion of the Universe, Big Bang-Inflation (timeline of the universe), 2010. Credit: Alex Mittelmann, Coldcreation

    Alan Guth’s notes:

    This evidence of quantum gravity could be visible as swirls in the polarization, or alignment, of photons from the cosmic microwave background radiation.

    However, the intensity of these patterns of swirls, known as B-modes, depends very much on the exact energy and timing of inflation. “Some versions of inflation predict that these B-modes should be found soon, while other versions predict that the B-modes are so weak that there will never be any hope of detecting them,” Guth says. “But if they are found, and the properties match the expectations from inflation, it would be very strong evidence that gravity is quantized.”
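The dependence Guth describes is often summarized by the tensor-to-scalar ratio r, which sets both the B-mode amplitude and inflation’s energy scale through the standard relation V^(1/4) ≈ 1.06 × 10^16 GeV × (r/0.01)^(1/4). A sketch evaluating it (the sample r values are assumptions for illustration):

```python
def inflation_energy_scale_gev(r: float) -> float:
    """Inflation energy scale V^(1/4) implied by a tensor-to-scalar ratio r,
    via the standard relation V^(1/4) ~ 1.06e16 GeV * (r / 0.01)**0.25."""
    return 1.06e16 * (r / 0.01) ** 0.25

# Soon-detectable versus hopelessly weak B-modes span a modest range in scale,
# because the fourth root flattens the dependence on r:
for r in (0.06, 0.01, 1e-4):
    print(f"r = {r:g} -> V^(1/4) ~ {inflation_energy_scale_gev(r):.2e} GeV")
```

A null result thus squeezes the energy scale only slowly, which is why, as Guth notes, some versions of inflation may forever evade B-mode searches.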

    One more way to find out whether gravity is quantum is to look directly for quantum fluctuations in gravitational waves, which are thought to be made up of gravitons that were generated shortly after the big bang. The Laser Interferometer Gravitational-Wave Observatory (LIGO) first detected gravitational waves in 2016, but it is not sensitive enough to detect the fluctuating gravitational waves in the early universe that inflation stretched to cosmic scales, Guth says.


    VIRGO Gravitational Wave interferometer, near Pisa, Italy

    Caltech/MIT Advanced LIGO installation, Hanford, WA, USA


    Caltech/MIT Advanced LIGO detector installation, Livingston, LA, USA

    Cornell SXS, the Simulating eXtreme Spacetimes (SXS) project

    Gravitational waves. Credit: MPI for Gravitational Physics/W.Benger-Zib

    ESA/eLISA the future of gravitational wave research

    Sky map showing how adding Virgo to LIGO helps reduce the size of the likely source region on the sky. Credit: Giuseppe Greco (Virgo Urbino group)

    A gravitational-wave observatory in space, such as the Laser Interferometer Space Antenna (eLISA, just above), could potentially detect these waves, Wilczek adds.

    In a paper recently accepted by the journal Classical and Quantum Gravity, however, astrophysicist Richard Lieu of the University of Alabama, Huntsville, argues that LIGO should already have detected gravitons if they carry as much energy as some current models of particle physics suggest. It might be that the graviton just packs less energy than expected, but Lieu suggests it might also mean the graviton does not exist. “If the graviton does not exist at all, it will be good news to most physicists, since we have been having such a horrid time in developing a theory of quantum gravity,” Lieu says.

    Still, devising theories that eliminate the graviton may be no easier than devising theories that keep it. “From a theoretical point of view, it is very hard to imagine how gravity could avoid being quantized,” Guth says. “I am not aware of any sensible theory of how classical gravity could interact with quantum matter, and I can’t imagine how such a theory might work.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Scientific American, the oldest continuously published magazine in the U.S., has been bringing its readers unique insights about developments in science and technology for more than 160 years.

     
  • richardmitnick 11:28 am on July 23, 2018 Permalink | Reply
    Tags: Did a Stellar Intruder Deform Our Outer Solar System?, Scientific American, Sedna

    From Scientific American: “Did a Stellar Intruder Deform Our Outer Solar System?” 

    Scientific American

    From Scientific American

    July 23, 2018
    Shannon Hall

    New results suggest a massive star once swung dangerously close to our sun—helping to shape the mysterious features we see today.

    The odd orbit of the dwarf planet Sedna (shown here in an artist’s conceptualization) and other outer solar system objects suggests a visiting star may have swerved too close to the sun long ago. Credit: NASA and JPL-Caltech

    There is a mystery brewing in the far reaches of our solar system.

    Astronomers have long thought the eight planets orbit in nearly perfect circles because they once formed within the swirling disk of dust and gas that surrounded the young sun. But in 2003 scientists discovered something strange: a dwarf planet known as Sedna whose elongated orbit takes it from twice Pluto's distance to more than 20 times its distance from the sun. And it is not alone. In the years since, astronomers have uncovered nearly two dozen distant icy objects whose orbits are oblong and strangely tilted relative to the plane of the solar system. To explain such oddities, scientists speculated that maybe these worlds are scars from a violent past, a sign something—perhaps a passing star—knocked them off course in our solar system's infancy. Or maybe there is a distant ninth planet whose gravity sculpts their peculiar orbits.
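Those distances translate directly into how stretched Sedna's orbit is. A minimal sketch, using assumed perihelion and aphelion values roughly matching "twice Pluto's distance" and "more than 20 times" it (Pluto orbits at about 39.5 AU):

```python
# Assumed, approximate figures for Sedna's orbit (for illustration only):
perihelion_au = 76.0    # closest approach, ~2x Pluto's ~39.5 AU
aphelion_au = 936.0     # farthest point, ~24x Pluto's distance

# Standard orbital relations: a = (q + Q) / 2,  e = (Q - q) / (Q + q)
semi_major_au = (perihelion_au + aphelion_au) / 2
eccentricity = (aphelion_au - perihelion_au) / (aphelion_au + perihelion_au)

print(f"semi-major axis ~ {semi_major_au:.0f} AU")
print(f"eccentricity   ~ {eccentricity:.2f}")   # 0 would be a perfect circle
```

An eccentricity near 0.85 is what makes Sedna so hard to explain: the planets' near-circular orbits sit close to zero, and the gas disk they formed in offers no obvious way to pump an orbit up that far.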

    The latter hypothesis has gained traction over the past several years, leaving the first in the dust, says Susanne Pfalzner, an astronomer at the Max Planck Institute for Radio Astronomy in Germany. Anomalies in the orbits of some small outer solar system objects have built a case for a "Planet Nine" roughly 10 times Earth's mass. Meanwhile a stellar interloper has been considered too unlikely—until now. Pfalzner and her colleagues recently posted a paper to the preprint server arXiv, since accepted by The Astrophysical Journal, showing stars might buzz our solar system far more often than previously thought. Not only do the results lend credibility to a stellar flyby, but they might also explain how the elusive Planet Nine landed in its odd orbit in the first place.

    Effect of a prograde, parabolic flyby of a star with a) M2 = 0.5 M⊙, b) M2 = 1 M⊙ and c) M2 = 5 M⊙, inclined by 60 degrees with an angle of periastron equal to zero. The perihelion distance is always chosen so as to leave a 30-35 AU disc. The top row shows the eccentricity distribution of the matter, with a central area of mostly circular orbits and more eccentric orbits at larger distances from the sun; the eccentricities are indicated by the colour bar. The origin of the different eccentricity populations in the original disc is shown in the bottom row, where matter marked in grey becomes unbound from the sun. Note that in c) the path of the perturber lies outside the shown frame. Credit: arXiv:1807.02960 [astro-ph.GA]

    Astronomers know the sun has not always been so solitary. It was born within a cluster of hundreds to perhaps tens of thousands of stars that dispersed only 10 million years later. So while the sun was still entombed within that cluster, stars would have rocked to and fro in a dizzying dance that easily could have brought one waltzing into our nascent solar system. But after the cluster broke apart, the likelihood of such an encounter dropped nearly to zero, or so the thinking went. Pfalzner and her colleagues now argue, however, that the odds of an encounter remained quite high even after the cluster had started to disperse. After many long computer simulations they found a 20 to 30 percent chance that a star, perhaps as massive as the sun, once swung to within 50 to 150 astronomical units, only slightly farther out than Pluto. (One AU is the mean distance from Earth to the sun, or 93 million miles.) Such a close approach would surely have shaken our young solar system.
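Encounter odds like these are often first estimated with a simple rate argument, probability ~ n x sigma x v x t, before turning to full simulations. A minimal sketch with assumed birth-cluster parameters (all values below are illustrative, not taken from Pfalzner and her colleagues, whose N-body simulations, which also track the cluster's long dispersal phase, yield the higher 20 to 30 percent figure):

```python
import math

# Naive flyby-probability estimate P ~ n * sigma * v * t, including the
# gravitational-focusing boost to the cross-section. All parameter values
# are assumptions for illustration only.
PC_M = 3.086e16            # meters per parsec
AU_M = 1.496e11            # meters per astronomical unit
G_MSUN = 1.327e20          # G * (1 solar mass), in m^3/s^2

n_stars = 100 / PC_M**3    # assumed stellar density: 100 stars per cubic parsec
v = 1_000.0                # assumed relative velocity: 1 km/s, in m/s
b = 150 * AU_M             # closest-approach distance of interest: 150 AU
t = 10e6 * 3.156e7         # 10 million years, in seconds

focusing = 1 + 2 * G_MSUN / (b * v**2)   # slow encounters get pulled inward
sigma = math.pi * b**2 * focusing        # effective cross-section, m^2
probability = n_stars * sigma * v * t

print(f"focusing factor ~ {focusing:.1f}")
print(f"flyby probability over 10 Myr ~ {probability:.1%}")
```

The back-of-envelope answer comes out at the few-percent level; the point of the new work is that the detailed dynamics of a dispersing cluster push the cumulative odds well above this naive estimate.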

    Although the large planets would remain unbothered (much like the sun is only slightly jostled by the minor gravities of the eight planets), the encounter would perturb the solar system's smaller objects—tossing them around and placing them in odd orbits in the distant reaches of the solar system. What is more, the simulations also re-created a second trend astronomers have observed in the solar system: outer objects tend to cluster together in space. They travel together in tight-knit groups that all cross the plane of the solar system at roughly the same spot before swinging outward to the same distant point. In short, simulations including a stellar interloper can perfectly re-create the observations to date. "But whether they'll last for 4.5 billion years" or over the solar system's entire life span, "is the million-dollar question," says Scott Kenyon, an astronomer at the Harvard–Smithsonian Center for Astrophysics who was not involved in the research. And Pfalzner agrees. She would like to model the long-term behavior next to see whether those changes will hold over the solar system's entire lifetime. It could be that a flyby clusters objects for a cosmic moment before they randomize again. If that is the case, then a planet is the best explanation for the observations.

    Scientists are eagerly tracking down more data with a number of different observing campaigns. A handful of teams, for example, are already scouring large chunks of the heavens in search of more oddities in the outer solar system. Scott Sheppard, an astronomer at the Carnegie Institution for Science who was not involved in the study, cannot contain his excitement over the upcoming Large Synoptic Survey Telescope—an 8.4-meter-wide scope that will likely uncover hundreds of new solar system rocks.

    LSST


    LSST Camera, built at SLAC



    LSST telescope, currently under construction on the El Peñón peak of Cerro Pachón, a 2,682-meter-high mountain in Chile's Coquimbo Region, alongside the existing Gemini South and Southern Astrophysical Research telescopes.

    “That’s really going to open up the floodgates for trying to discover these distant objects,” he says.

    Meanwhile Kenyon is hopeful the Gaia spacecraft, which is in the process of charting one billion stars to unprecedented accuracy, will help find our sun’s long-lost siblings.

    ESA/GAIA satellite

    That will allow scientists to better understand the stellar cluster in which our young solar system formed, along with the likelihood another star zoomed too close. “Gaia is the new savior on the block,” he says. A recent Gaia study even traced the paths of nearby stars into the past and projected those paths into the future, only to find that 25 stars speed dangerously close to home over a 10-million-year time period. That tally is seven times as much nearby stellar traffic as previously thought. Then, of course, there are a number of surveys searching for the elusive Planet Nine itself.

    But Pfalzner argues the discovery of another major member of the solar system will not rule out a stellar flyby. “It’s not an either–or scenario,” she says. “If Planet Nine exists, this would not be in any way a contradiction to the flyby model, but possibly even a point in favor for it.” Her team argues Planet Nine’s predicted orbit, which is also both eccentric (stretched out) and inclined (tilted from the solar system’s plane), was likely shaped by the stellar interloper itself. So she and others will continue to hunt for both Planet Nine and further oddities.

    And although astronomers might disagree over the specifics of our solar system’s origin story, they are all certain the treasure trove of objects already discovered in the outer solar system is only the beginning. Sedna was the tip of the iceberg, Sheppard says. “There’s just so much sky we haven’t covered to date that it’s more likely than not there’s something pretty big out there.”

    See the full article here.


     
  • richardmitnick 8:15 pm on March 22, 2018 Permalink | Reply
    Tags: New England Is Sitting on a Bed of Hot Rocks, Scientific American

    From Rutgers via Scientific American: “New England Is Sitting on a Bed of Hot Rocks” 


    Rutgers University

    Scientific American

    April 2018
    Shannon Hall

    Colorful forests fill the landscape in the Berkshires of western Massachusetts.
    Photograph by Berthold Steinhilber, laif, Redux, courtesy of natgeo.com, which also provided the link to the article in Geology that Scientific American was too lazy to include. Looking for the link is how I found the photo.

    Credit: Thomas Fuchs

    For the past 200 million years New England has been a place without intense geologic change. With few exceptions, there have been no rumbling volcanoes or major earthquakes. But it might be on the verge of awakening.

    Findings published this January in Geology show a bubble of hot rock rising underneath the northern Appalachian Mountains. The feature was first detected in 2016 by EarthScope, a collection of thousands of seismic instruments sprinkled throughout the U.S. Vadim Levin, a geophysicist at Rutgers University, says this wealth of sensors lets earth scientists peer under the North American continent, just as the Hubble Space Telescope has enabled astronomers to gaze deep into the night sky. Should the broiling rock breach the surface—which could happen, though not until tens of millions of years from now—it would transform New England into a burbling volcanic landscape.

    The finding has sparked many questions, given that New England is not located along an active plate margin (where one tectonic plate rubs against another) but sits squarely in the middle of the North American plate. The exact source of the hot rock bubble, for example, is unclear. Because the edge of the North American continent is colder than a plate near an active margin, Levin suspects this edge is cooling the mantle—the layer just below the crust that extends toward the earth’s core. As cold chunks of mantle sink, they may displace hotter segments, which would rise toward the surface. Scientists believe they have now imaged such an ascending piece. Although it sounds simple, this scenario “is a story that at present does not have a place in a textbook,” Levin says.

    Or perhaps pieces of the North American continent are breaking off and sinking into the mantle (which would also push the warmer mantle upward), observes William Menke, a geophysicist at Columbia University, who was not part of the study.

    Scientists do not yet know which model is correct or if an entirely different one may be involved. Levin and his colleagues are eager to collect more data to bring this unusual hotspot into sharper focus and, in doing so, flesh out the theory of plate tectonics. “We know little about the interior of our planet, and every time we look with a new light … we find things we did not expect,” Levin says. “When we do, we need to rethink our understanding of how the planet functions.”

    See the full article here.


    Rutgers, The State University of New Jersey, is a leading national research university and the state’s preeminent, comprehensive public institution of higher education. Rutgers is dedicated to teaching that meets the highest standards of excellence; to conducting research that breaks new ground; and to providing services, solutions, and clinical care that help individuals and the local, national, and global communities where they live.

    Founded in 1766, Rutgers teaches across the full educational spectrum: preschool to precollege; undergraduate to graduate; postdoctoral fellowships to residencies; and continuing education for professional and personal advancement.

    As a ’67 graduate of University College, second in my class, I am proud to be a member of Alpha Sigma Lambda, the national honor society of nontraditional students.

     