Tagged: Spacetime

  • richardmitnick 2:54 pm on February 24, 2019 Permalink | Reply
    Tags: "Ask Ethan: How Can We Measure The Curvature Of Spacetime?", A difference in the height of two atomic clocks of even ~1 foot (33 cm) can lead to a measurable difference in the speed at which those clocks run, A team of physicists working in Europe were able to conjugate three atom interferometers simultaneously, At every point you can infer the force of gravity or the amount of spacetime curvature, , Decades before Newton put forth his law of universal gravitation Italian scientists Francesco Grimaldi and Giovanni Riccioli made the first calculations of the gravitational constant G, , , , In the future it may be possible to extend this technique to measure the curvature of spacetime not just on Earth but on any worlds we can put a lander on. This includes other planets moons asteroids , It’s been over 100 years since Einstein and over 300 since Newton. We’ve still got a long way to go, Making multiple measurements of the field gradient simultaneously allows you to measure G between multiple locations that eliminates a source of error: the error induced when you move the apparatus. B, Pound-Rebka experiment, Spacetime, The same law of gravity governs the entire Universe, We can do even better than the Pound-Rebka experiment today by using the technology of atomic clocks, You can even infer G the gravitational constant of the Universe.   

    From Ethan Siegel: “Ask Ethan: How Can We Measure The Curvature Of Spacetime?” 

    From Ethan Siegel
    Feb 23, 2019

    Instead of an empty, blank, 3D grid, putting a mass down causes what would have been ‘straight’ lines to instead become curved by a specific amount. In General Relativity, we treat space and time as continuous, but all forms of energy, including but not limited to mass, contribute to spacetime curvature. For the first time, we can measure the curvature at Earth’s surface, as well as how that curvature changes with altitude. (CHRISTOPHER VITALE OF NETWORKOLOGIES AND THE PRATT INSTITUTE)

    It’s been over 100 years since Einstein, and over 300 since Newton. We’ve still got a long way to go.

    From measuring how objects fall on Earth to observing the motion of the Moon and planets, the same law of gravity governs the entire Universe. From Galileo to Newton to Einstein, our understanding of the most universal force of all still has some major holes in it. It’s the only force without a quantum description. The fundamental constant governing gravitation, G, is so poorly known that many find it embarrassing. And the curvature of the fabric of spacetime itself went unmeasured for a century after Einstein put forth the theory of General Relativity. But much of that has the potential to change dramatically, as our Patreon supporter Nick Delroy realized, asking:

    Can you please explain to us how awesome this is, and what you hope the future holds for gravity measurement. The instrument is obviously localized but my imagination can’t stop coming up with applications for this.

    The big news he’s excited about, of course, is a new experimental technique that measured the curvature of spacetime due to gravity for the first time [Physical Review Letters].

    The identical behavior of a ball falling to the floor in an accelerated rocket (left) and on Earth (right) is a demonstration of Einstein’s equivalence principle. Although you cannot tell whether an acceleration is due to gravity or any other acceleration from a single measurement, measuring differing accelerations at different points can show whether there’s a gravitational gradient along the direction of acceleration. (WIKIMEDIA COMMONS USER MARKUS POESSEL, RETOUCHED BY PBROKS13)

    Think about how you might design an experiment to measure the strength of the gravitational force at any location in space. Your first instinct might be something simple and straightforward: take an object at rest, release it so it’s in free-fall, and observe how it accelerates.

    By measuring the change in position over time, you can reconstruct what the acceleration at this location must be. If you know the rules governing the gravitational force — i.e., you have the correct law of physics, like Newton’s or Einstein’s theories — you can use this information to determine even more information. At every point, you can infer the force of gravity or the amount of spacetime curvature. Beyond that, if you know additional information (like the relevant matter distribution), you can even infer G, the gravitational constant of the Universe.
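    To make that chain of inference concrete, here is a minimal sketch in Python, using idealized, noise-free numbers rather than the procedure of any particular experiment: fit free-fall data to recover the local acceleration, then, assuming a known mass distribution, turn that into an estimate of G.

```python
import numpy as np

# Idealized free-fall data: in the model x(t) = (1/2) g t^2, a quadratic
# fit to position-vs-time samples recovers the local acceleration g.
t = np.linspace(0.0, 1.0, 50)          # s
g_true = 9.81                          # m/s^2, value used to fake the data
x = 0.5 * g_true * t**2                # m

a, b, c0 = np.polyfit(t, x, 2)         # fit x = a*t^2 + b*t + c0
g_inferred = 2.0 * a                   # a = g/2, so g = 2a
print(f"inferred g = {g_inferred:.3f} m/s^2")

# Knowing the relevant mass distribution (here: Earth as a uniform sphere
# of mass M, measured at radius r), Newton's g = G M / r^2 turns the same
# measurement into an estimate of the gravitational constant.
M = 5.972e24                           # kg, Earth's mass
r = 6.371e6                            # m, Earth's mean radius
G_inferred = g_inferred * r**2 / M
print(f"inferred G = {G_inferred:.4e} N·m²/kg²")   # ~6.67e-11
```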

    Newton’s law of Universal Gravitation relied on the concept of an instantaneous action (force) at a distance, and is incredibly straightforward. The gravitational constant in this equation, G, along with the values of the two masses and the distance between them, are the only factors in determining a gravitational force. Although Newton’s theory has since been superseded by Einstein’s General Relativity, G also appears in Einstein’s theory. (WIKIMEDIA COMMONS USER DENNIS NILSSON)

    This simple approach was the first one taken to investigate the nature of gravity. Building on the work of others, Galileo determined the gravitational acceleration at Earth’s surface. Decades before Newton put forth his law of universal gravitation, Italian scientists Francesco Grimaldi and Giovanni Riccioli made the first calculations of the gravitational constant, G.

    But experiments like this, as valuable as they are, are limited. They can only give you information about gravitation along one dimension: towards the center of the Earth. Acceleration is determined by either the net force acting on an object (Newton) or the net curvature of spacetime (Einstein) at one particular location in the Universe. Since you’re observing an object in free-fall, you’re only getting a simplistic, one-dimensional picture.

    According to legend, the first experiment to show that all objects fell at the same rate, irrespective of mass, was performed by Galileo Galilei atop the Leaning Tower of Pisa. Any two objects dropped in a gravitational field, in the absence of (or neglecting) air resistance, will accelerate down to the ground at the same rate. This was later codified as part of Newton’s investigations into the matter. (GETTY IMAGES)

    Thankfully, there’s a way to get a multidimensional picture as well: perform an experiment that’s sensitive to changes in the gravitational field/potential as an object changes its position. This was first accomplished, experimentally, in 1959 by the Pound-Rebka experiment [ Explanation of the Pound-Rebka experiment http://vixra.org/pdf/1212.0035v1.pdf ].

    The experiment caused a nuclear emission at a low elevation, and noted that the corresponding nuclear absorption didn’t occur at a higher elevation, presumably because of the gravitational redshift predicted by Einstein. Yet if you gave the low-elevation emitter a small boost to its speed, by attaching it to a speaker cone, that extra kinetic energy could compensate for the energy the photon lost in traveling upwards through the gravitational field. As a result, the arriving photon has the right energy, and absorption occurs. This was one of the classical tests of General Relativity, confirming Einstein precisely where his theory’s predictions departed from Newton’s.
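    As a rough check on the numbers involved (a back-of-the-envelope sketch, assuming the weak-field redshift formula and the roughly 22.5-meter height of the Harvard tower), the effect Pound and Rebka had to detect is astonishingly small:

```python
# Weak-field gravitational redshift: a photon climbing a height h loses a
# fraction Δf/f ≈ g h / c² of its frequency; a source moving upward at
# v ≈ g h / c Doppler-compensates the loss, to first order.
g = 9.81        # m/s², surface gravity
h = 22.5        # m, approximate height of the tower used
c = 2.998e8     # m/s

print(f"fractional redshift Δf/f ≈ {g * h / c**2:.2e}")    # ~2.5e-15
print(f"compensating source speed ≈ {g * h / c:.1e} m/s")  # ~7e-7 m/s
```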

    Physicist Glen Rebka, at the lower end of the Jefferson Towers, Harvard University, calling Professor Pound on the phone during setup of the famed Pound-Rebka experiment. (CORBIS MEDIA / HARVARD UNIVERSITY)

    We can do even better than the Pound-Rebka experiment today, by using the technology of atomic clocks. These clocks are the best timekeepers in the Universe, having surpassed the best natural clocks — pulsars — decades ago. Now capable of monitoring time differences to some 18 significant figures, atomic clocks let a team led by Nobel Laureate David Wineland demonstrate that raising one clock by barely a foot (about 33 cm in the experiment) above another caused a measurable shift in the frequency the clock registered as a second.
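    The size of that shift follows from the same weak-field formula; a quick estimate with round numbers (a sketch, not the published analysis) shows why clocks good to ~1 part in 10^18 can see a one-foot height difference:

```python
# Two clocks separated vertically by Δh run at rates differing by
# Δν/ν ≈ g Δh / c² in the weak-field limit.
g, c = 9.81, 2.998e8
dh = 0.33                                  # m, about one foot
shift = g * dh / c**2
print(f"Δν/ν ≈ {shift:.1e}")               # ~3.6e-17
print(f"accumulated offset ≈ {shift * 86400:.1e} s per day")  # ~3e-12 s
```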

    If we were to take these two clocks to any location on Earth, and adjust the heights as we saw fit, we could understand how the gravitational field changes as a function of elevation. Not only can we measure the gravitational acceleration, but also how that acceleration changes as we move away from Earth’s surface.

    A difference in the height of two atomic clocks of even ~1 foot (33 cm) can lead to a measurable difference in the speed at which those clocks run. This allows us to measure not only the strength of the gravitational field, but the gradient of the field as a function of altitude/elevation. (DAVID WINELAND AT PERIMETER INSTITUTE, 2015)



    But even these achievements cannot map out the true curvature of space. That next step wouldn’t be achieved until 2015: exactly 100 years after Einstein first put forth his theory of General Relativity. In addition, another problem had cropped up in the interim: various methods of measuring the gravitational constant, G, appear to give different answers.

    Three different experimental techniques have been used to determine G: torsion balances, torsion pendulums, and atom interferometry experiments. Over the past 15 years, measured values of the gravitational constant have ranged from as high as 6.6757 × 10⁻¹¹ N·m²/kg² to as low as 6.6719 × 10⁻¹¹ N·m²/kg². This spread of roughly 0.05% makes G, for a fundamental constant, one of the most poorly determined constants in all of nature.
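    The quoted spread is easy to verify from the two extreme values (a one-liner, using only the numbers in the text):

```python
g_high, g_low = 6.6757e-11, 6.6719e-11      # N·m²/kg²
spread = (g_high - g_low) / ((g_high + g_low) / 2)
print(f"fractional spread ≈ {spread:.3%}")  # ≈ 0.057%, the ~0.05% quoted
```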

    In 1997, the team of Bagley and Luther performed a torsion balance experiment that yielded a result of 6.674 × 10⁻¹¹ N·m²/kg², which was taken seriously enough to cast doubt on the precision previously claimed for determinations of G. Note the relatively large variations in the measured values, even since the year 2000. (DBACHMANN / WIKIMEDIA COMMONS)

    But that’s where the new study, first published in 2015 but refined many times over the past four years, comes in. A team of physicists working in Europe was able to conjugate three atom interferometers simultaneously. Instead of using just two locations at different heights, they obtained the mutual differences between three different heights at a single location on the surface, which lets you extract not simply a single difference, or even the gradient of the gravitational field, but the change in that gradient as a function of distance.

    When you explore how the gravitational field changes as a function of distance, you can understand the shape of the change in spacetime curvature. When you measure the gravitational acceleration in a single location, you’re sensitive to everything around you, including what’s underground and how it’s moving. Measuring the gradient of the field is more informative than just a single value; measuring how that gradient changes gives you even more information.
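    A toy version of that hierarchy of information (a sketch that stands in a simple Newtonian point-mass model of Earth for the real interferometer data):

```python
import itertools
import numpy as np

# Stand-in for three simultaneous readings at three heights: model the
# local field as g(z) = G M / (R + z)^2 above a spherical Earth.
G, M, R = 6.674e-11, 5.972e24, 6.371e6
z = np.array([0.0, 0.5, 1.0])              # m, the three heights
g = G * M / (R + z)**2                     # m/s²

# Three simultaneous readings yield three pairwise differences
# (1-2, 2-3, 1-3), where two readings would yield only one.
for i, j in itertools.combinations(range(3), 2):
    print(f"g[{i}] - g[{j}] = {g[i] - g[j]:+.3e} m/s²")

# First differences: the vertical gradient of the field (~ -3.1e-6 s⁻²).
grad = np.diff(g) / np.diff(z)
# Difference of the gradients: how the gradient itself changes with height
# (~ +1.5e-12 m⁻¹·s⁻²), the curvature-related quantity the experiment probes.
dz_mid = 0.5                               # m, spacing between interval midpoints
second = (grad[1] - grad[0]) / dz_mid
print(f"gradient ≈ {grad.mean():.3e} s⁻², its change ≈ {second:.1e} m⁻¹·s⁻²")
```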

    The scheme of the experiment: three clouds of atoms are launched in rapid sequence and then excited by lasers, measuring not only the gravitational acceleration but also, for the first time, the effects of changes in the curvature of spacetime. (G. ROSI ET AL., PHYS. REV. LETT. 114, 013001, 2015)

    That’s what makes this new technique so powerful. We’re not simply going to a single location and finding out what the gravitational force is. Nor are we going to a location and finding out what the force is and how that force is changing with elevation. Instead, we’re determining the gravitational force, how it changes with elevation, and how the change in the force is changing with elevation.

    “Big deal,” you might say, “we already know the laws of physics. We know what those laws predict. Why should I care that we’re measuring something that confirms to slightly better accuracy what we’ve known should be true all along?”

    Well, there are multiple reasons. One is that making multiple measurements of the field gradient simultaneously, at multiple locations, eliminates a source of error: the error induced when you move the apparatus. By making three measurements simultaneously, rather than two, you get three differences (between 1 and 2, 2 and 3, and 1 and 3) rather than just one (between 1 and 2).

    The top of the Makkah royal clock tower runs a few quadrillionths of a second faster than the same clock would at the base, due to differences in the gravitational field. Measuring the changes in the gradient of the gravitational field provides even more information, enabling us to finally measure the curvature of space directly. (AL JAZEERA ENGLISH C/O: FADI EL BENNI)

    But another reason, perhaps even more important, is to better understand the gravitational pull of the objects we’re measuring. We do know the rules governing gravity, but we only know what the gravitational force should be if we also know the magnitude and distribution of all the masses relevant to our measurement. The Earth, for example, is not a uniform structure at all. There are fluctuations in the gravitational strength we experience everywhere we go, dependent on factors like:

    the density of the crust beneath your feet,
    the location of the crust-mantle boundary,
    the extent of isostatic compensation that takes place at that boundary,
    the presence or absence of oil reservoirs or other density-varying deposits underground,

    and so on. If we can implement this technique of three-atom interferometry wherever we like on Earth, we can better understand our planet’s interior simply by making measurements at the surface.

    Various geologic zones in the Earth’s mantle create and move magma chambers, leading to a variety of geological phenomena. It’s possible that external intervention could trigger a catastrophic event. Improvements in geodesy could improve our understanding of what’s happening, existing, and changing beneath Earth’s surface. (KDS4444 / WIKIMEDIA COMMONS)

    In the future, it may be possible to extend this technique to measure the curvature of spacetime not just on Earth, but on any worlds we can put a lander on. This includes other planets, moons, asteroids and more. If we want to do asteroid mining, this could be the ultimate prospecting tool. We could improve our geodesy experiments significantly, and improve our ability to monitor the planet. We could better track internal changes in magma chambers, as just one example. If we applied this technology to upcoming spacecraft, it could even help correct for Newtonian noise in next-generation gravitational wave observatories like LISA or beyond.


    ESA/NASA eLISA: a space-based observatory, and the future of gravitational wave research.

    The gold-platinum alloy cubes, of central importance to the upcoming LISA mission, have already been built and tested in the proof-of-concept LISA Pathfinder mission.

    ESA/LISA Pathfinder


    This image shows the assembly of one of the Inertial Sensor Heads for the LISA Technology Package (LTP). Improved techniques for accounting for Newtonian noise in the experiment might improve LISA’s sensitivity significantly. (CGS SPA)

    The Universe is not simply made of point masses, but of complex, intricate objects. If we ever hope to tease out the most sensitive signals of all and learn the details that elude us today, we need to become more precise than ever. Thanks to three-atom interferometry, we can, for the first time, directly measure the curvature of space.

    Understanding the Earth’s interior better than ever is the first thing we’re going to gain, but that’s just the beginning. Scientific discovery isn’t the end of the game; it’s the starting point for new applications and novel technologies. Come back in a few years; you might be surprised at what becomes possible based on what we’re learning for the first time today.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan.

     
  • richardmitnick 8:22 pm on March 18, 2018 Permalink | Reply
    Tags: Mathematics vs Physics, Shake a Black Hole, Spacetime, The black hole stability conjecture

    From Quanta: “To Test Einstein’s Equations, Poke a Black Hole” 

    Quanta Magazine

    Related: From Ethan Siegel, “Where Is The Line Between Mathematics And Physics?”
    https://sciencesprings.wordpress.com/2018/03/17/from-ethan-siegel-where-is-the-line-between-mathematics-and-physics/

    March 8, 2018
    Kevin Hartnett

    Fantastic animation. Olena Shmahalo/Quanta Magazine

    In November 1915, in a lecture before the Prussian Academy of Sciences, Albert Einstein described an idea that upended humanity’s view of the universe. Rather than accepting the geometry of space and time as fixed, Einstein explained that we actually inhabit a four-dimensional reality called space-time whose form fluctuates in response to matter and energy.

    Einstein elaborated this dramatic insight in several equations, referred to as his “field equations,” that form the core of his theory of general relativity. That theory has been vindicated by every experimental test thrown at it in the century since.

    Yet even as Einstein’s theory seems to describe the world we observe, the mathematics underpinning it remain largely mysterious. Mathematicians have been able to prove very little about the equations themselves. We know they work, but we can’t say exactly why. Even Einstein had to fall back on approximations, rather than exact solutions, to see the universe through the lens he’d created.

    Over the last year, however, mathematicians have brought the mathematics of general relativity into sharper focus. Two groups have come up with proofs related to an important problem in general relativity called the black hole stability conjecture. Their work proves that Einstein’s equations match a physical intuition for how space-time should behave: If you jolt it, it shakes like Jell-O, then settles down into a stable form like the one it began with.

    “If these solutions were unstable, that would imply they’re not physical. They’d be a mathematical ghost that exists mathematically and has no significance from a physical point of view,” said Sergiu Klainerman, a mathematician at Princeton University and co-author, with Jérémie Szeftel, of one of the two new results [https://arxiv.org/abs/1711.07597].

    To complete the proofs, the mathematicians had to resolve a central difficulty with Einstein’s equations. To describe how the shape of space-time evolves, you need a coordinate system — like lines of latitude and longitude — that tells you which points are where. And in space-time, as on Earth, it’s hard to find a coordinate system that works everywhere.

    Shake a Black Hole

    General relativity famously describes space-time as something like a rubber sheet. Absent any matter, the sheet is flat. But start dropping balls onto it — stars and planets — and the sheet deforms. The balls roll toward one another. And as the objects move around, the shape of the rubber sheet changes in response.

    Einstein’s field equations describe the evolution of the shape of space-time. You give the equations information about curvature and energy at each point, and the equations tell you the shape of space-time in the future. In this way, Einstein’s equations are like equations that model any physical phenomenon: This is where the ball is at time zero, this is where it is five seconds later.
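    For reference, those field equations can be written in one compact line (the standard textbook form, with the cosmological-constant term included):

```latex
% Curvature of spacetime (left side) is sourced by energy and momentum
% (right side); G is the same constant that appears in Newton's law.
G_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}
```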

    “They’re a mathematically precise quantitative version of the statement that space-time curves in the presence of matter,” said Peter Hintz, a Clay research fellow at the University of California, Berkeley, and co-author, with András Vasy, of the other recent result [https://arxiv.org/abs/1606.04014].

    In 1916, almost immediately after Einstein released his theory of general relativity, the German physicist Karl Schwarzschild found an exact solution to the equations that describes what we now know as a black hole (the term wouldn’t be invented for another five decades). Later, physicists found exact solutions that describe a rotating black hole and one with an electrical charge.
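    Schwarzschild’s solution has a famously simple closed form (standard form, in units with G = c = 1); its coefficients misbehave at r = 2M, the event horizon, and at r = 0, the singularity itself:

```latex
% The geometry outside a nonrotating mass M. The metric coefficient
% blows up at r = 2M (a removable, coordinate effect: the horizon)
% and at r = 0 (the genuine physical singularity).
ds^{2} = -\left(1 - \frac{2M}{r}\right) dt^{2}
       + \left(1 - \frac{2M}{r}\right)^{-1} dr^{2}
       + r^{2}\left(d\theta^{2} + \sin^{2}\theta\, d\varphi^{2}\right)
```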

    These remain the only exact solutions that describe a black hole. If you add even a second black hole, the interplay of forces becomes too complicated for present-day mathematical techniques to handle in all but the most special situations.

    Yet you can still ask important questions about this limited group of solutions. One such question developed out of work in 1952 by the French mathematician Yvonne Choquet-Bruhat. It asks, in effect: What happens when you shake a black hole?

    Lucy Reading-Ikkanda/Quanta Magazine

    This problem is now known as the black hole stability conjecture. The conjecture predicts that solutions to Einstein’s equations will be “stable under perturbation.” Informally, this means that if you wiggle a black hole, space-time will shake at first, before eventually settling down into a form that looks a lot like the form you started with. “Roughly, stability means if I take special solutions and perturb them a little bit, change data a little bit, then the resulting dynamics will be very close to the original solution,” Klainerman said.

    So-called “stability” results are an important test of any physical theory. To understand why, it’s useful to consider an example that’s more familiar than a black hole.

    Imagine a pond. Now imagine that you perturb the pond by tossing in a stone. The pond will slosh around for a bit and then become still again. Mathematically, the solutions to whatever equations you use to describe the pond (in this case, the Navier-Stokes equations) should describe that basic physical picture. If the initial and long-term solutions don’t match, you might question the validity of your equations.

    “This equation might have whatever properties, it might be perfectly fine mathematically, but if it goes against what you expect physically, it can’t be the right equation,” Vasy said.

    For mathematicians working on Einstein’s equations, stability proofs have been even harder to find than solutions to the equations themselves. Consider the case of flat, empty Minkowski space — the simplest of all space-time configurations. This solution to Einstein’s equations was found in 1908 in the context of Einstein’s earlier theory of special relativity. Yet it wasn’t until 1993 that mathematicians managed to prove that if you wiggle flat, empty space-time, you eventually get back flat, empty space-time. That result, by Klainerman and Demetrios Christodoulou, is a celebrated work in the field.

    One of the main difficulties with stability proofs has to do with keeping track of what is going on in four-dimensional space-time as the solution evolves. You need a coordinate system that allows you to measure distances and identify points in space-time, just as lines of latitude and longitude allow us to define locations on Earth. But it’s not easy to find a coordinate system that works at every point in space-time and then continues to work as the shape of space-time evolves.

    “We don’t know of a one-size-fits-all way to do this,” Hintz wrote in an email. “After all, the universe does not hand you a preferred coordinate system.”

    The Measurement Problem

    The first thing to recognize about coordinate systems is that they’re a human invention. The second is that not every coordinate system works to identify every point in a space.

    Take lines of latitude and longitude: They’re arbitrary. Cartographers could have anointed any number of imaginary lines to be 0 degrees longitude.


    And while latitude and longitude work to identify just about every location on Earth, they stop making sense at the North and South poles. If you knew nothing about Earth itself, and only had access to latitude and longitude readings, you might wrongly conclude there’s something topologically strange going on at those points.
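    The sphere makes this precise (a standard observation, not specific to the new proofs): in latitude-longitude-style coordinates the metric degenerates at the poles, even though the surface there is perfectly smooth.

```latex
% Line element of a sphere of radius R, with polar angle theta and
% azimuth phi. The phi-direction coefficient vanishes where
% sin(theta) = 0, i.e. at the poles: a defect of the coordinates,
% not of the geometry.
ds^{2} = R^{2}\left(d\theta^{2} + \sin^{2}\theta\, d\varphi^{2}\right)
```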

    This possibility — of drawing wrong conclusions about the properties of physical space because the coordinate system used to describe it is inadequate — is at the heart of why it’s hard to prove the stability of space-time.

    “It could be the case that stability is true, but you’re using coordinates that are not stable and thus you miss the fact that stability is true,” said Mihalis Dafermos, a mathematician at the University of Cambridge and a leading figure in the study of Einstein’s equations.

    In the context of the black hole stability conjecture, whatever coordinate system you’re using has to evolve as the shape of space-time evolves — like a snugly fitting glove adjusting as the hand it encloses changes shape. The fit between the coordinate system and space-time has to be good at the start and remain good throughout. If it doesn’t, there are two things that can happen that would defeat efforts to prove stability.

    First, your coordinate system might change shape in a way that makes it break down at certain points, just as latitude and longitude fail at the poles. Such points are called “coordinate singularities” (to distinguish them from physical singularities, like an actual black hole). They are undefined points in your coordinate system that make it impossible to follow an evolving solution all the way through.

    Second, a poorly fitting coordinate system might disguise the underlying physical phenomena it’s meant to measure. To prove that solutions to Einstein’s equations settle down into a stable state after being perturbed, mathematicians must keep careful track of the ripples in space-time that are set in motion by the perturbation. To see why, it’s worth considering the pond again. A rock thrown into a pond generates waves. The long-term stability of the pond results from the fact that those waves decay over time — they grow smaller and smaller until there’s no sign they were ever there.

    The situation is similar for space-time. A perturbation will set off a cascade of gravitational waves, and proving stability requires proving that those gravitational waves decay. And proving decay requires a coordinate system — referred to as a “gauge” — that allows you to measure the size of the waves. The right gauge allows mathematicians to see the waves flatten and eventually disappear altogether.

    “The decay has to be measured relative to something, and it’s here where the gauge issue shows up,” Klainerman said. “If I’m not in the right gauge, even though in principle I have stability, I can’t prove it because the gauge will just not allow me to see that decay. If I don’t have decay rates of waves, I can’t prove stability.”

    The trouble is, while the coordinate system is crucial, it’s not obvious which one to choose. “You have a lot of freedom about what this gauge condition can be,” Hintz said. “Most of these choices are going to be bad.”

    Partway There

    A full proof of the black hole stability conjecture requires proving that all known black hole solutions to Einstein’s equations (with the spin of the black hole below a certain threshold) are stable after being perturbed. These known solutions include the Schwarzschild solution, which describes space-time with a nonrotating black hole, and the Kerr family of solutions, which describe configurations of space-time empty of everything save a single rotating black hole (where the properties of that rotating black hole — its mass and angular momentum — vary within the family of solutions).

    Both of the new results make partial progress toward a proof of the full conjecture.

    Hintz and Vasy, in a paper posted to the scientific preprint site arxiv.org in 2016 [see above 1606.04014], proved that slowly rotating black holes are stable. But their work did not cover black holes rotating above a certain threshold.

    Their proof also makes some assumptions about the nature of space-time. The original conjecture is in Minkowski space, which is not just flat and empty but also fixed in size. Hintz and Vasy’s proof takes place in what’s called de Sitter space, where space-time is accelerating outward, just like in the actual universe. This change of setting makes the problem simpler from a technical point of view, which is easy enough to appreciate at a conceptual level: If you drop a rock into an expanding pond, the expansion is going to stretch the waves and cause them to decay faster than they would have if the pond were not expanding.

    “You’re looking at a universe undergoing an accelerated expansion,” Hintz said. “This makes the problem a little easier as it appears to dilute the gravitational waves.”

    Klainerman and Szeftel’s work has a slightly different flavor. Their proof, the first part of which was posted online last November [see above 1711.07597], takes place in Schwarzschild space-time — closer to the original, more difficult setting for the problem. They prove the stability of a nonrotating black hole, but they do not address solutions in which the black hole is spinning. Moreover, they only prove the stability of black hole solutions for a narrow class of perturbations — where the gravitational waves generated by those perturbations are symmetric in a certain way.

    Both results involve new techniques for finding the right coordinate system for the problem. Hintz and Vasy start with an approximate solution to the equations, based on an approximate coordinate system, and gradually increase the precision of their answer until they arrive at exact solutions and well-behaved coordinates. Klainerman and Szeftel take a more geometric approach to the challenge.

    The two teams are now trying to build on their respective methods to find a proof of the full conjecture. Some expert observers think the day might not be far off.

    “I really think things are now at the stage that the remaining difficulties are just technical,” Dafermos said. “Somehow one doesn’t need new ideas to solve this problem.” He emphasized that a final proof could come from any one of the large number of mathematicians currently working on the problem.

    For 100 years Einstein’s equations have served as a reliable experimental guide to the universe. Now mathematicians may be getting closer to demonstrating exactly why they work so well.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 12:03 pm on November 9, 2017 Permalink | Reply
    Tags: Cosmologists have come to realize that our universe may be only one component of the multiverse, Fred Adams, Mordehai Milgrom and MOND theory, Spacetime, The forces are not nearly as finely tuned as many scientists think, The parameters of our universe could have varied by large factors and still allowed for working stars and potentially habitable planets, The strong interaction, the weak interaction, electromagnetism, gravity

    From Nautilus: “The Not-So-Fine Tuning of the Universe” 

    Nautilus

    January 19, 2017 [Just found this referenced in another article.]
    Fred Adams
    Illustrations by Jackie Ferrentino

    Before there is life, there must be structure. Our universe synthesized atomic nuclei early in its history. Those nuclei ensnared electrons to form atoms. Those atoms agglomerated into galaxies, stars, and planets. At last, living things had places to call home. We take it for granted that the laws of physics allow for the formation of such structures, but that needn’t have been the case.

    Over the past several decades, many scientists have argued that, had the laws of physics been even slightly different, the cosmos would have been devoid of complex structures. In parallel, cosmologists have come to realize that our universe may be only one component of the multiverse, a vast collection of universes that makes up a much larger region of spacetime. The existence of other universes provides an appealing explanation for the apparent fine-tuning of the laws of physics. These laws vary from universe to universe, and we live in a universe that allows for observers because we couldn’t live anywhere else.

    Setting The Parameters: The universe would have been habitable even if the forces of electromagnetism and gravity had been stronger or weaker. The crosshatched area shows the range of values consistent with life. The asterisk shows the actual values in our universe; the axes are scaled to these values. The constraints are that stars must be able to undergo nuclear fusion (below black curve), live long enough for complex life to evolve (below red curve), be hot enough to support biospheres (left of blue curve), and not outgrow their host galaxies (right of the cyan curve). Fred C. Adams.

    Astrophysicists have discussed fine-tuning so much that many people take it as a given that our universe is preternaturally fit for complex structures. Even skeptics of the multiverse accept fine-tuning; they simply think it must have some other explanation. But in fact the fine-tuning has never been rigorously demonstrated. We do not really know what laws of physics are necessary for the development of astrophysical structures, which are in turn necessary for the development of life. Recent work on stellar evolution, nuclear astrophysics, and structure formation suggest that the case for fine-tuning is less compelling than previously thought. A wide variety of possible universes could support life. Our universe is not as special as it might seem.

    The first type of fine-tuning involves the strengths of the fundamental forces of nature in working stars. If the electromagnetic force had been too strong, the electrical repulsion of protons would shut down nuclear fusion in stellar cores, and stars would fail to shine. If electromagnetism had been too weak, nuclear reactions would run out of control, and stars would blow up in spectacular explosions. If gravity had been too strong, stars would either collapse into black holes or never ignite.

    On closer examination, though, stars are remarkably robust. The strength of the electric force could vary by a factor of nearly 100 in either direction before stellar operations would be compromised. The force of gravity would have to be 100,000 times stronger. Going in the other direction, gravity could be a billion times weaker and still allow for working stars. The allowed strengths for the gravitational and electromagnetic forces depend on the nuclear reaction rate, which in turn depends on the strengths of the nuclear forces. If the reaction rate were faster, stars could function over an even wider range of strengths for gravitation and electromagnetism. Slower nuclear reactions would narrow the range.

    In addition to these minimal operational requirements, stars must meet a number of other constraints that further restrict the allowed strength of the forces. They must be hot. The surface temperature of a star must be high enough to drive the chemical reactions necessary for life. In our universe, there are ample regions around most stars where planets are warm enough, about 300 kelvins, to support biology. In universes where the electromagnetic force is stronger, stars are cooler, making them less hospitable.

    Stars must also have long lives. The evolution of complex life forms takes place over enormous spans of time. Since life is driven by a complex ensemble of chemical reactions, the basic clock for biological evolution is set by the time scales of atoms. In other universes, these atomic clocks will tick at different rates, depending on the strength of electromagnetism, and this variation must be taken into account. When the force is weaker, stars burn their nuclear fuel faster, and their lifetimes decrease.

    Also in Physics: “The Physicist Who Denies Dark Matter,” by Oded Carmeli, a profile of Mordehai Milgrom of the Weizmann Institute of Science.

    Finally, stars must be able to form in the first place. In order for galaxies and, later, stars to condense out of primordial gas, the gas must be able to lose energy and cool down. The cooling rate depends (yet again) on the strength of electromagnetism. If this force is too weak, gas cannot cool down fast enough and would remain diffuse instead of condensing into galaxies. Stars must also be smaller than their host galaxies—otherwise star formation would be problematic. These effects put another lower limit on the strength of electromagnetism.

    Putting it all together, the strengths of the fundamental forces can vary by several orders of magnitude and still allow planets and stars to satisfy all the constraints (as illustrated in the figure below). The forces are not nearly as finely tuned as many scientists think.

    A second example of possible fine-tuning arises in the context of carbon production. After moderately large stars have fused the hydrogen in their central cores into helium, helium itself becomes the fuel. Through a complicated set of reactions, helium is burned into carbon and oxygen. Because of their important role in nuclear physics, helium nuclei are given a special name: alpha particles. The most common nuclei are composed of one, three, four, and five alpha particles. The nucleus with two alpha particles, beryllium-8, is conspicuously absent, and for a good reason: It is unstable in our universe.

    The instability of beryllium creates a serious bottleneck for the creation of carbon. As stars fuse helium nuclei together to become beryllium, the beryllium nuclei almost immediately decay back into their constituent parts. At any given time, the stellar core maintains a small but transient population of beryllium. These rare beryllium nuclei can interact with helium to produce carbon. Because the process ultimately involves three helium nuclei, it is called the triple-alpha reaction. But the reaction is too slow to produce the amount of carbon observed in our universe.

    To resolve this discrepancy, physicist Fred Hoyle predicted in 1953 that the carbon nucleus has to have a resonant state at a specific energy, as if it were a little bell that rang with a certain tone. Because of this resonance, the reaction rates for carbon production are much larger than they would be otherwise—large enough to explain the abundance of carbon found in our universe. The resonance was later measured in the laboratory at the predicted energy level.


    The worry is that, in other universes, with alternate strengths of the forces, the energy of this resonance could be different, and stars would not produce enough carbon. Carbon production is compromised if the energy level is changed by more than about 4 percent. This issue is sometimes called the triple-alpha fine-tuning problem.

    Fortunately, this problem has a simple solution. What nuclear physics takes away, it also gives. Suppose nuclear physics did change by enough to neutralize the carbon resonance. Among the possible changes of this magnitude, about half would have the side effect of making beryllium stable, so the loss of the resonance would become irrelevant. In such alternate universes, carbon would be produced in the more logical manner of adding together alpha particles one at a time. Helium could fuse into beryllium, which could then react with additional alpha particles to make carbon. There is no fine-tuning problem after all.

    A third instance of potential fine-tuning concerns the simplest nuclei composed of two particles: deuterium nuclei, which contain one proton and one neutron; diprotons, consisting of two protons; and dineutrons, consisting of two neutrons. In our universe, only deuterium is stable. The production of helium takes place by first combining two protons into deuterium.

    If the strong nuclear force had been even stronger, diprotons could have been stable. In this case, stars could have generated energy through the simplest and fastest of nuclear reactions, where protons combine to become diprotons and eventually other helium isotopes. It is sometimes claimed that stars would then burn through their nuclear fuel at catastrophic rates, resulting in lifetimes that are too short to support biospheres. Conversely, if the strong force had been weaker, then deuterium would be unstable, and the usual stepping stone on the pathway to heavy elements would not be available. Many scientists have speculated that the absence of stable deuterium would lead to a universe with no heavy elements at all and that such a universe would be devoid of complexity and life.

    As it turns out, stars are remarkably stable entities. Their structure adjusts automatically to burn nuclear fuel at exactly the right rate required to support themselves against the crush of their own gravity. If the nuclear reaction rates are higher, stars will burn their nuclear fuel at a lower central temperature, but otherwise they will not be so different. In fact, our universe has an example of this type of behavior. Deuterium nuclei can combine with protons to form helium nuclei through the action of the strong force. The cross section for this reaction, which quantifies the probability of its occurrence, is quadrillions of times larger than for ordinary hydrogen fusion. Nonetheless, stars in our universe burn their deuterium in a relatively uneventful manner. The stellar core has an operating temperature of 1 million kelvins, compared to the 15 million kelvins required to burn hydrogen under ordinary conditions. These deuterium-burning stars have cooler centers and are somewhat larger than the sun, but are otherwise unremarkable.

    Similarly, if the strong nuclear force were lower, stars could continue to operate in the absence of stable deuterium. A number of different processes provide paths by which stars can generate energy and synthesize heavy elements. During the first part of their lives, stars slowly contract, their central cores grow hotter and denser, and they glow with the power output of the sun. Stars in our universe eventually become hot and dense enough to ignite nuclear fusion, but in alternative universes they could continue this contraction phase and generate power by losing gravitational potential energy. The longest-lived stars could shine with a power output roughly comparable to the sun for up to 1 billion years, perhaps long enough for biological evolution to take place.

    For sufficiently massive stars, the contraction would accelerate and become a catastrophic collapse. These stellar bodies would basically go supernova. Their central temperatures and densities would increase to such large values that nuclear reactions would ignite. Many types of nuclear reactions would take place in the death throes of these stars. This process of explosive nucleosynthesis could supply the universe with heavy nuclei, in spite of the lack of deuterium.

    Once such a universe produces trace amounts of heavy elements, later generations of stars have yet another option for nuclear burning. This process, called the carbon-nitrogen-oxygen cycle, does not require deuterium as an intermediate state. Instead, carbon acts as a catalyst to instigate the production of helium. This cycle operates in the interior of the sun and provides a small fraction of its total power. In the absence of stable deuterium, the carbon-nitrogen-oxygen cycle would dominate the energy generation. And this does not exhaust the options for nuclear power generation. Stars could also produce helium through a triple-nucleon process that is roughly analogous to the triple-alpha process for carbon production. Stars thus have many channels for providing both energy and complex nuclei in alternate universes.

    A fourth example of fine-tuning concerns the formation of galaxies and other large-scale structures. They were seeded by small density fluctuations produced in the earliest moments of cosmic time. After the universe had cooled down enough, these fluctuations started to grow stronger under the force of gravity, and denser regions eventually become galaxies and galaxy clusters. The fluctuations started with a small amplitude, denoted Q, equal to 0.00001. The primeval universe was thus incredibly smooth: The density, temperature, and pressure of the densest regions and of the most rarefied regions were the same to within a few parts per 100,000. The value of Q represents another possible instance of fine-tuning in the universe.

    If Q had been lower, it would have taken longer for fluctuations to grow strong enough to become cosmic structures, and galaxies would have had lower densities. If the density of a galaxy is too low, the gas in the galaxy is unable to cool. It might not ever condense into galactic disks or coalesce into stars. Low-density galaxies are not viable habitats for life. Worse, a long enough delay might have prevented galaxies from forming at all. Beginning about 4 billion years ago, the expansion of the universe began to accelerate and pull matter apart faster than it could agglomerate—a change of pace that is usually attributed to a mysterious dark energy. If Q had been too small, it could have taken so long for galaxies to collapse that the acceleration would have started before structure formation was complete, and further growth would have been suppressed. The universe could have ended up devoid of complexity, and lifeless. In order to avoid this fate, the value of Q cannot be smaller by more than a factor of 10.
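    The factor of 10 follows from a standard piece of cosmology (a rough sketch of the arithmetic, not a calculation from the article): during the matter-dominated era, density contrasts grow in proportion to the cosmic scale factor, so the universe must expand by roughly a factor of 1/Q before fluctuations of initial amplitude Q collapse.

```latex
% Linear growth in the matter era: delta grows like the scale factor a.
% Collapse requires delta ~ 1, i.e. expansion by a factor ~ 1/Q.
\delta(a) \sim Q\,\frac{a}{a_{i}}, \qquad
\delta \sim 1 \;\Longrightarrow\; \frac{a}{a_{i}} \sim \frac{1}{Q}
% With Q ~ 10^{-5}, collapse happens well before dark energy takes over;
% make Q much more than ~10 times smaller and it no longer does.
```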

    What if Q had been larger? Galaxies would have formed earlier and ended up denser. That, too, would have posed a danger for the prospects of habitability. Stars would have been much closer to one another and interacted more often. In so doing, they could have stripped planets out of their orbits and sent them hurtling into deep space. Furthermore, because stars would be closer together, the night sky would be brighter—perhaps as bright as day. If the stellar background were too dense, the combined starlight could boil the oceans of any otherwise suitable planets.

    Galactic What-If: A galaxy that formed in a hypothetical universe with large initial density fluctuations might be even more hospitable than our Milky Way. The central region is too bright and hot for life, and planetary orbits are unstable. But the outer region is similar to the solar neighborhood. In between, the background starlight from the galaxy is comparable in brightness to the sunlight received by Earth, so all planets, no matter their orbits, are potentially habitable. Fred C. Adams.

    In this case, the fine-tuning argument is not very constraining. The central regions of galaxies could indeed produce such intense background radiation that all planets would be rendered uninhabitable. But the outskirts of galaxies would always have a low enough density for habitable planets to survive. An appreciable fraction of galactic real estate remains viable even when Q is thousands of times larger than in our universe. In some cases, a galaxy might be even more hospitable. Throughout much of the galaxy, the night sky could have the same brightness as the sunshine we see during the day on Earth. Planets would receive their life-giving energy from the entire ensemble of background stars rather than from just their own sun. They could reside in almost any orbit. In an alternate universe with larger density fluctuations than our own, even Pluto would get as much daylight as Miami. As a result, a moderately dense galaxy could have more habitable planets than the Milky Way.

    In short, the parameters of our universe could have varied by large factors and still allowed for working stars and potentially habitable planets. The force of gravity could have been 1,000 times stronger or 1 billion times weaker, and stars would still function as long-lived nuclear burning engines. The electromagnetic force could have been stronger or weaker by factors of 100. Nuclear reaction rates could have varied over many orders of magnitude. Alternative stellar physics could have produced the heavy elements that make up the basic raw material for planets and people. Clearly, the parameters that determine stellar structure and evolution are not overly fine-tuned.

    Given that our universe does not seem to be particularly fine-tuned, can we still say that our universe is the best one for life to develop? Our current understanding suggests that the answer is no. One can readily envision a universe that is friendlier to life and perhaps more logical. A universe with stronger initial density fluctuations would make denser galaxies, which could support more habitable planets than our own. A universe with stable beryllium would have straightforward channels available for carbon production and would not need the complication of the triple-alpha process. Although these issues are still being explored, we can already say that universes have many pathways for the development of complexity and biology, and some could be even more favorable for life than our own. In light of these generalizations, astrophysicists need to reexamine the possible implications of the multiverse, including the degree of fine-tuning in our universe.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

     
    • stewarthoughblog 1:13 am on November 10, 2017 Permalink | Reply

      The proposition that long-lived stars could last 1 billion years and possibly be sufficient for life to evolve is not consistent with what science has observed with our solar system, planet and the origin of life. It is estimated that the first life did not appear until almost 1 billion years after formation, making this wide speculation.

      The idea that an increased density of stars in the galaxy could support increased habitability of planets is inconsistent with astrophysical understanding of the criticality of solar radiation to not destroy all life and all biochemicals required.

      It is also widely speculative to propose that any of the fundamental constants and force tolerances can be virtually arbitrarily reassigned with minimal affect without much more serious scientific analysis. In light of the fundamental fact that the understanding of the origin of life naturalistically is a chaotic mess, it is widely speculative to conjecture the fine-tuning of the universe is not critical.


    • richardmitnick 10:13 am on November 10, 2017 Permalink | Reply

      Thanks for reading and commenting. I appreciate it.


  • richardmitnick 7:43 am on May 16, 2017 Permalink | Reply
    Tags: anti-de Sitter space, Hypothetical universes, Naked singularity might evade cosmic censor, Spacetime

    From ScienceNews: “Naked singularity might evade cosmic censor” 

    ScienceNews

    May 15, 2017
    Emily Conover

    Spacetime singularities might exist unhidden in strangely curved universes

    LAID BARE Inside a black hole, the extreme curvature of space (shown) means that the standard rules of physics don’t apply. Such regions, called singularities, are thought to be shrouded by event horizons, but scientists showed that a singularity could be observable under certain conditions in a hypothetical curved spacetime. Henning Dalhoff/Science Source

    Certain stealthy spacetime curiosities might be less hidden than thought, potentially exposing themselves to observers in some curved universes.

    These oddities, known as singularities, are points in space where the standard laws of physics break down. Found at the centers of black holes, singularities are generally expected to be hidden from view, shielding the universe from their problematic properties. Now, scientists report in the May 5 Physical Review Letters that a singularity could be revealed in a hypothetical, saddle-shaped universe.

    Previously, scientists found that singularities might not be concealed in hypothetical universes with more than three spatial dimensions. The new result marks the first time the possibility of such a “naked” singularity has been demonstrated in a three-dimensional universe. “That’s extremely important,” says physicist Gary Horowitz of the University of California, Santa Barbara. Horowitz, who was not involved with the new study, has conducted previous research that implied that a naked singularity could probably appear in such saddle-shaped universes.

    In Einstein’s theory of gravity, the general theory of relativity, spacetime itself can be curved (SN: 10/17/15, p. 16). Massive objects such as stars bend the fabric of space, causing planets to orbit around them. A singularity occurs when the warping is so extreme that the equations of general relativity become nonsensical — as occurs in the center of a black hole. But black holes’ singularities are hidden by an event horizon, which encompasses a region around the singularity from which light can’t escape. The cosmic censorship conjecture, put forth in 1969 by mathematician and physicist Roger Penrose, proposes that all singularities will be similarly cloaked.

    According to general relativity, hypothetical universes can take on various shapes. The known universe is nearly flat on large scales, meaning that the rules of standard textbook geometry apply and light travels in a straight line. But in universes that are curved, those rules go out the window. To demonstrate the violation of cosmic censorship, the researchers started with a curved geometry known as anti-de Sitter space, which is warped such that a light beam sent out into space will eventually return to the spot it came from. The researchers deformed the boundaries of this curved spacetime and observed that a region formed in which the curvature increased over time to arbitrarily large values, producing a naked singularity.

    “I was very surprised,” says physicist Jorge Santos of the University of Cambridge, a coauthor of the study. “I always thought that gravity would somehow find a way” to maintain cosmic censorship.

    Scientists have previously shown that cosmic censorship could be violated if a universe’s conditions were precisely arranged to conspire to produce a naked singularity. But the researchers’ new result is more general. “There’s nothing finely tuned or unnatural about their starting point,” says physicist Ruth Gregory of Durham University in England. That, she says, is “really interesting.”

    But, Horowitz notes, there is a caveat. Because the violation occurs in a curved universe, not a flat one, the result “is not yet a completely convincing counterexample to the original idea.”

    Despite the reliance on a curved universe, the result does have broader implications. That’s because gravity in anti-de Sitter space is thought to have connections to other theories. The physics of gravity in anti-de Sitter space seems to parallel that of some types of particle physics theories, set in fewer dimensions. So cosmic censorship violation in this realm could have consequences for seemingly unrelated ideas.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 7:15 am on December 4, 2016 Permalink | Reply
    Tags: , , , Spacetime, , Why must time be a dimension?   

    From Starts With a Bang: “Why must time be a dimension?” 

    From Ethan Siegel
    12.3.16

    1
    A time-lapse photo like this composition reminds us that photographs are normally snapshots of locations at particular moments, with each moment distinct and unique from the last. Image credit: flickr user Anthony Pucci.

    Sure, we move through it just like space, but it was the aftermath of Einstein that led to us truly understanding it.

    “It is old age, rather than death, that is to be contrasted with life. Old age is life’s parody, whereas death transforms life into a destiny: in a way it preserves it by giving it the absolute dimension. Death does away with time.” -Simone de Beauvoir

    When we think about how we can move through the Universe, we immediately think of three different directions. Left-or-right, forwards-or-backwards, and upwards-or-downwards: the three independent directions of a Cartesian grid. All three of those count as dimensions, and specifically, as spatial dimensions. But we commonly talk about a fourth dimension of a very different type: time. So what makes time a dimension at all? That’s this week’s Ask Ethan question from Thomas Anderson, who wants to know:

    “I have always been a little perplexed about the continuum of 3+1 dimensional Space-time. Why is it always 3 [spatial] dimensions plus Time?”

    Let’s start by looking at the three dimensions of space you’re familiar with.

    2
    On the surface of a world like the Earth, two coordinates, like latitude and longitude, are sufficient to define a location. Image credit: Wikimedia Commons user Hellerick.

    Here on the surface of the Earth, we normally only need two coordinates to pinpoint our location: latitude and longitude, or where you are along the north-south and east-west axes of Earth. If you’re willing to go underground or above the Earth’s surface, you need a third coordinate — altitude/depth, or where you are along the up-down axis — to describe your location. After all, someone at your exact two-dimensional, latitude-and-longitude location but in a tunnel beneath your feet or in a helicopter overhead isn’t truly at the same location as you. It takes three independent pieces of information to describe your location in space.

    3
    Your location in this Universe isn’t just described by spatial coordinates (where), but also by a time coordinate (when). Image credit: Pixabay user rmathews100.

    But spacetime is even more complicated than space, and it’s easy to see why. The chair you’re sitting in right now can have its location described by those three coordinates: x, y and z. But it’s also occupied by you right now, as opposed to an hour ago, yesterday or ten years from now. In order to describe an event, knowing where it occurs isn’t enough; you also need to know when, which means you need to know the time coordinate, t. This first became a big deal in relativity, when physicists were wrestling with the issue of simultaneity. Start by thinking of two separate locations connected by a path, with two people, one walking from each location toward the other.

    4
    Two points connected by a 1-dimensional (linear) path. Image credit: Wikimedia Commons user Simeon87.

    You can visualize their paths by putting two fingers, one from each hand, at the two starting locations and “walking” them towards their destinations. At some point, they’re going to need to pass by one another, meaning your two fingers are going to have to be in the same spot at the same time. In relativity, this is what’s known as a simultaneous event, and it can only occur when all the space components and all the time components of two different physical objects align.

    This is supremely non-controversial, and explains why time needs to be considered as a dimension that we “move” through, the same as any of the spatial dimensions. But it was Einstein’s special theory of relativity that led his former professor, Hermann Minkowski, to devise a formulation that put the three space dimensions and the one time dimension together.

    5
    NASA

    We all realize that to move through space requires motion through time; if you’re here, now, you cannot be somewhere else now as well, you can only get there later. In 1905, Einstein’s special relativity taught us that the speed of light is a universal speed limit, and that as you approach it you experience the strange phenomena of time dilation and length contraction. But perhaps the biggest breakthrough came in 1907, when Minkowski realized that Einstein’s relativity had an extraordinary implication: mathematically, time behaves exactly the same as space does, except with a factor of c, the speed of light in vacuum, and a factor of i, the imaginary number √(-1).
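
    A minimal sketch of what Minkowski noticed (the code is an illustration added here, not Ethan’s): the interval s² = (ct)² − x² comes out the same for every boosted observer, and the factor of i turns it into a Euclidean-looking sum of squares, since x² + (ict)² = −s².

    ```python
    import math

    # The quantity all inertial observers agree on is the spacetime
    # interval s^2 = (c*t)^2 - x^2; a Lorentz boost leaves it unchanged.
    c = 1.0  # work in units where c = 1

    def interval(t, x):
        return (c * t)**2 - x**2

    def boost(t, x, v):
        """Lorentz boost along x with velocity v (|v| < c)."""
        gamma = 1.0 / math.sqrt(1 - (v / c)**2)
        return gamma * (t - v * x / c**2), gamma * (x - v * t)

    t, x = 3.0, 2.0
    for v in (0.0, 0.5, 0.9):
        tp, xp = boost(t, x, v)
        print(f"v = {v}: interval = {interval(tp, xp):.6f}")  # same value every time
    ```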

    6
    An example of a light cone, the three-dimensional surface of all possible light rays arriving at and departing from a point in spacetime. Image credit: Wikimedia Commons user MissMJ.

    Putting all of these revelations together yielded a new picture of the Universe, particularly as respects how we move through it.

    If you’re completely stationary, remaining in the same spatial location, you move through time at its maximal rate.
    The faster you move through space, the slower you move through time, and the shorter the spatial distances in your direction-of-motion appear to be.
    And if you were completely massless, you would move at the speed of light, where you would traverse your direction-of-motion instantaneously, and no time would pass for you.

    7
    A stationary observer sees time pass normally, but an observer moving rapidly through space will have their clock run slower relative to the stationary observer. Image credit: Michael Schmid of Wikimedia Commons.

    From a physics point of view, the implications are astounding. It means that all massless particles are intrinsically stable, since no time can ever pass for them. It means that an unstable particle, like a muon created in the upper atmosphere, can reach the Earth’s surface, despite the fact that multiplying its lifetime (2.2 µs) by the speed of light yields a distance (660 meters) that’s far less than the distance it must travel. And it means that if you had a pair of identical twins and you left one on Earth while the other took a relativistic journey into space, the journeying twin would be much younger upon return, having experienced the passage of less time.
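
    Those muon numbers are easy to check. A back-of-the-envelope sketch, assuming an illustrative speed of 0.9997c (the speed is my choice; the 2.2 µs lifetime and 660-metre figure are from the text):

    ```python
    import math

    c = 2.998e8          # speed of light, m/s
    lifetime = 2.2e-6    # muon rest-frame lifetime, s

    naive_range = c * lifetime             # ~660 m, as quoted above
    v = 0.9997 * c                         # illustrative cosmic-ray muon speed
    gamma = 1.0 / math.sqrt(1 - (v / c)**2)
    dilated_range = gamma * lifetime * v   # range as seen from the ground

    print(f"without relativity: {naive_range:.0f} m")          # ~660 m
    print(f"gamma = {gamma:.1f}")                              # ~41
    print(f"with time dilation: {dilated_range / 1e3:.1f} km") # tens of km,
    # comfortably enough to cross the atmosphere and reach the surface
    ```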

    8
    Mark and Scott Kelly at the Johnson Space Center, Houston Texas; one spent a year in space (and aged slightly less) while the other remained on the ground. Image credit: NASA.

    As Minkowski said in 1908,

    “The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth, space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.”

    Today, the formulation of spacetime is even more generic, and encompasses the curvature inherent to space itself, which is how special relativity got generalized. But the reason time is just as good a dimension as space is because we’re always moving through it, and the reason it’s sometimes written as a “1” in “3+1” (instead of just treated as another “1” of the “4”) is because increasing your motion through space decreases your motion through time, and vice versa. (Mathematically, this is where the i comes in.)


    Having your camera anticipate the motion of objects through time is just one practical application of the idea of time-as-a-dimension.

    The remarkable thing is that anyone, regardless of their motion through space relative to anyone else, will see these same rules, these same effects and these same consequences. If time weren’t a dimension in this exact way, the laws of relativity would be invalid, and there might yet be a valid concept such as absolute space. We need the dimensionality of time for physics to work the way it does, and yet our Universe provides for it oh so well. Be proud to give it a “+1” in all you do.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

     
  • richardmitnick 3:30 pm on January 1, 2016 Permalink | Reply
    Tags: , , , , Spacetime   

    From space.com: “Time Warps and Black Holes: The Past, Present & Future of Space-Time” 

    space-dot-com logo

    SPACE.com

    December 31, 2015
    Nola Taylor Redd

    Temp 1
    A massive object like the Earth will bend space-time, and cause objects to fall toward it. Credit: Science@NASA

    When giving the coordinates for a location, most people provide the latitude, longitude and perhaps altitude. But there is a fourth dimension often neglected: time. The combination of the physical coordinates with the temporal element creates a concept known as space-time, a background for all events in the universe.

    “In physics, space-time is the mathematical model that combines space and time into a single interwoven continuum throughout the universe,” Eric Davis, a physicist who works at the Institute for Advanced Studies at Austin and with the Tau Zero Foundation, told Space.com by email. Davis specializes in faster-than-light space-time and anti-gravity physics, both of which use Albert Einstein’s general relativity theory field equations and quantum field theory, as well as quantum optics, to conduct lab experiments.

    “Einstein’s special theory of relativity, published in 1905, adapted [German mathematician] Hermann Minkowski’s unified space-and-time model of the universe to show that time should be treated as a physical dimension on par with the three physical dimensions of space — height, width and length — that we experience in our lives,” Davis said.

    “Space-time is the landscape over which phenomena take place,” added Luca Amendola, a member of the Euclid Theory Working Group (a team of theoretical scientists working with the European Space Agency’s Euclid satellite) and a professor at Heidelberg University in Germany.

    ESA Euclid
    ESA/Euclid

    “Just as any landscape is not set in stone, fixed forever, it changes just because things happen — planets move, particles interact, cells reproduce,” he told Space.com via email.

    The history of space-time

    The idea that time and space are united is a fairly recent development in the history of science.

    “The concepts of space remained practically the same from the early Greek philosophers until the beginning of the 20th century — an immutable stage over which matter moves,” Amendola said. “Time was supposed to be even more immutable because, while you can move in space the way you like, you cannot travel in time freely, since it runs the same for everybody.”

    In the early 1900s, Minkowski built upon the earlier works of Dutch physicist Hendrik Lorentz and French mathematician and theoretical physicist Henri Poincare to create a unified model of space-time. Einstein, a former student of Minkowski, had published his special theory of relativity in 1905; Minkowski’s four-dimensional framework, presented a few years later, recast that theory in geometric terms.

    “Einstein had brought together Poincare’s, Lorentz’s and Minkowski’s separate theoretical works into his overarching special relativity theory, which was much more comprehensive and thorough in its treatment of electromagnetic forces and motion, except that it left out the force of gravity, which Einstein later tackled in his magnum opus general theory of relativity,” Davis said.

    Space-time breakthroughs

    In special relativity, the geometry of space-time is fixed, but observers measure different distances or time intervals according to their own relative velocity. In general relativity, the geometry of space-time itself changes depending on how matter moves and is distributed.

    “Einstein’s general theory of relativity is the first major theoretical breakthrough that resulted from the unified space-time model,” Davis said.

    General relativity led to the science of cosmology, the next major breakthrough that came thanks to the concept of unified space-time.

    “It is because of the unified space-time model that we can have a theory for the creation and existence of our universe, and be able to study all the consequences that result thereof,” Davis said.

    He explained that general relativity predicted phenomena such as black holes and white holes. It also predicts that they have an event horizon, the boundary that marks where nothing can escape, and a singularity at their center, a point where density and gravity become infinite. General relativity could also explain rotating astronomical bodies that drag space-time with them, the Big Bang and the inflationary expansion of the universe, gravitational waves, the time and space dilation associated with curved space-time, gravitational lensing caused by massive galaxies, and the shifting orbit of Mercury and other planetary bodies, all of which observations have confirmed. Its equations also permit more speculative solutions, such as warp-drive propulsion, traversable wormholes and time machines.
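
    One of those confirmed predictions makes a tidy numerical check: the deflection of starlight grazing the Sun, θ = 4GM/(c²R), the effect famously measured during the 1919 eclipse. (A sketch added here for illustration.)

    ```python
    import math

    # Light deflection at the Sun's limb, theta = 4*G*M / (c^2 * R),
    # one of the classic confirmed predictions of general relativity.
    G = 6.674e-11       # m^3 kg^-1 s^-2
    c = 2.998e8         # m/s
    M_sun = 1.989e30    # kg
    R_sun = 6.957e8     # m, solar radius

    theta = 4 * G * M_sun / (c**2 * R_sun)          # radians
    print(f"{math.degrees(theta) * 3600:.2f} arcseconds")  # ~1.75
    ```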

    “All of these phenomena rely on the unified space-time model,” he said, “and most of them have been observed.”

    An improved understanding of space-time also led to quantum field theory. When quantum mechanics, the branch of physics concerned with the behavior of atoms and photons, was first formulated in 1925, it was based on the idea that space and time were separate and independent. After World War II, theoretical physicists found a way to mathematically incorporate Einstein’s special theory of relativity into quantum mechanics, giving birth to quantum field theory.

    “The breakthroughs that resulted from quantum field theory are tremendous,” Davis said.

    The theory gave rise to a quantum theory of electromagnetic radiation and electrically charged elementary particles — called quantum electrodynamics theory (QED theory) — in about 1950. In the 1970s, QED theory was unified with the weak nuclear force theory to produce the electroweak theory, which describes them both as different aspects of the same force. In 1973, scientists derived the quantum chromodynamics theory (QCD theory), the nuclear strong force theory of quarks and gluons, which are elementary particles.

    In the 1980s and the 1990s, physicists united the QED theory, the QCD theory and the electroweak theory to formulate the Standard Model of Particle Physics, the megatheory that describes all of the known elementary particles of nature and the fundamental forces of their interactions.

    6
    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    Later on, Peter Higgs’ 1960s prediction of a particle now known as the Higgs boson, which was discovered in 2012 by the Large Hadron Collider at CERN, was added to the mix.

    7
    Peter Higgs

    CERN CMS Higgs Event
    Higgs event in CMS at the CERN/LHC

    CERN CMS Detector
    CMS

    CERN LHC Map
    CERN LHC Grand Tunnel
    CERN LHC particles
    LHC at CERN with map

    Experimental breakthroughs include the discovery of many of the elementary particles and their interaction forces known today, Davis said. They also include the advancement of condensed matter theory to predict two new states of matter beyond those taught in most textbooks. More states of matter are being discovered using condensed matter theory, which uses the quantum field theory as its mathematical machinery.

    “Condensed matter has to do with the exotic states of matter, such as those found in metallic glass, photonic crystals, metamaterials, nanomaterials, semiconductors, crystals, liquid crystals, insulators, conductors, superconductors, superconducting fluids, etc.,” Davis said. “All of this is based on the unified space-time model.”

    The future of space-time

    Scientists are continuing to improve their understanding of space-time by using missions and experiments that observe many of the phenomena that interact with it. The Hubble Space Telescope, which measured the accelerating expansion of the universe, is one instrument doing so.

    NASA Hubble Telescope
    NASA/ESA Hubble

    NASA’s Gravity Probe B mission, which launched in 2004, studied the twisting of space-time by a rotating body — the Earth.

    NASA Gravity Probe B

    NASA’s NuSTAR mission, launched in 2012, studies black holes. Many other telescopes and missions have also helped to study these phenomena.

    NASA NuSTAR
    NASA/NuSTAR

    On the ground, particle accelerators have studied fast-moving particles for decades.

    “One of the best confirmations of special relativity is the observations that particles, which should decay after a given time, take in fact much longer when traveling very fast, as, for instance, in particle accelerators,” Amendola said. “This is because time intervals are longer when the relative velocity is very large.”
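
    Amendola’s point reduces to one line of arithmetic once the particle’s energy is known, since γ = E/(mc²) and the lab-frame lifetime is stretched by that same factor. A sketch with an illustrative beam energy (the 3.1 GeV figure is an assumption, not from the article):

    ```python
    # Illustrative numbers: a muon circulating in a storage ring. For a
    # relativistic particle, gamma = E / (m * c^2), and the lab-frame
    # lifetime is stretched by that same factor.
    rest_energy_MeV = 105.7    # muon rest energy
    beam_energy_MeV = 3100.0   # illustrative beam energy, ~3.1 GeV
    lifetime_us = 2.2          # rest-frame lifetime, microseconds

    gamma = beam_energy_MeV / rest_energy_MeV
    print(f"gamma = {gamma:.1f}")                                         # ~29
    print(f"lab-frame lifetime = {gamma * lifetime_us:.0f} microseconds") # ~64
    ```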

    Future missions and experiments will continue to probe space-time as well. The European Space Agency-NASA satellite Euclid, set to launch in 2020, will continue to test the ideas at astronomical scales as it maps the geometry of dark energy and dark matter, the mysterious substances that make up the bulk of the universe. On the ground, the LIGO and VIRGO observatories continue to study gravitational waves, ripples in the curvature of space-time.

    Caltech Ligo
    MIT/Caltech Advanced LIGO

    VIRGO interferometer EGO Campus
    VIRGO interferometer

    “If we could handle black holes the same way we handle particles in accelerators, we would learn much more about space-time,” Amendola said.

    Merging Black Holes

    3
    Merging black holes create ripples in space-time in this artist’s concept. Experiments are searching for these ripples, known as gravitational waves, but none have been detected. Credit: Swinburne Astronomy Productions

    Understanding space-time

    Will scientists ever get a handle on the complex issue of space-time? That depends on precisely what you mean.

    “Physicists have an excellent grasp of the concept of space-time at the classical levels provided by Einstein’s two theories of relativity, with his general relativity theory being the magnum opus of space-time theory,” Davis said. “However, physicists do not yet have a grasp on the quantum nature of space-time and gravity.”

    Amendola agreed, noting that although scientists understand space-time across larger distances, the microscopic world of elementary particles remains less clear.

    “It might be that space-time at very short distances takes yet another form and perhaps is not continuous,” Amendola said. “However, we are still far from that frontier.”

    Today’s physicists cannot experiment with black holes or reach the high energies at which new phenomena are expected to occur. Even astronomical observations of black holes remain unsatisfactory due to the difficulty of studying something that absorbs all light, Amendola said. Scientists must instead use indirect probes.

    “To understand the quantum nature of space-time is the holy grail of 21st century physics,” Davis said. “We are stuck in a quagmire of multiple proposed new theories that don’t seem to work to solve this problem.”

    Amendola remained optimistic. “Nothing is holding us back,” he said. “It’s just that it takes time to understand space-time.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 3:47 pm on November 25, 2015 Permalink | Reply
    Tags: , , Spacetime,   

    From Nature: “Theoretical physics: The origins of space and time” 2013 but Very Informative 

    Nature Mag
    Nature

    28 August 2013
    Zeeya Merali

    1

    “Imagine waking up one day and realizing that you actually live inside a computer game,” says Mark Van Raamsdonk, describing what sounds like a pitch for a science-fiction film. But for Van Raamsdonk, a physicist at the University of British Columbia in Vancouver, Canada, this scenario is a way to think about reality. If it is true, he says, “everything around us — the whole three-dimensional physical world — is an illusion born from information encoded elsewhere, on a two-dimensional chip”. That would make our Universe, with its three spatial dimensions, a kind of hologram, projected from a substrate that exists only in lower dimensions.

    This ‘holographic principle’ is strange even by the usual standards of theoretical physics. But Van Raamsdonk is one of a small band of researchers who think that the usual ideas are not yet strange enough. If nothing else, they say, neither of the two great pillars of modern physics — general relativity, which describes gravity as a curvature of space and time, and quantum mechanics, which governs the atomic realm — gives any account of the existence of space and time. Neither does string theory, which describes elementary threads of energy.

    Van Raamsdonk and his colleagues are convinced that physics will not be complete until it can explain how space and time emerge from something more fundamental — a project that will require concepts at least as audacious as holography. They argue that such a radical reconceptualization of reality is the only way to explain what happens when the infinitely dense ‘singularity‘ at the core of a black hole distorts the fabric of space-time beyond all recognition, or how researchers can unify atomic-level quantum theory and planet-level general relativity — a project that has resisted theorists’ efforts for generations.

    “All our experiences tell us we shouldn’t have two dramatically different conceptions of reality — there must be one huge overarching theory,” says Abhay Ashtekar, a physicist at Pennsylvania State University in University Park.

    Finding that one huge theory is a daunting challenge. Here, Nature explores some promising lines of attack — as well as some of the emerging ideas about how to test these concepts.

    2

    Gravity as thermodynamics

    One of the most obvious questions to ask is whether this endeavour is a fool’s errand. Where is the evidence that there actually is anything more fundamental than space and time?

    A provocative hint comes from a series of startling discoveries made in the early 1970s, when it became clear that quantum mechanics and gravity were intimately intertwined with thermodynamics, the science of heat.

    In 1974, most famously, Stephen Hawking of the University of Cambridge, UK, showed that quantum effects in the space around a black hole will cause it to spew out radiation as if it were hot. Other physicists quickly determined that this phenomenon was quite general. Even in completely empty space, they found, an astronaut undergoing acceleration would perceive that he or she was surrounded by a heat bath. The effect would be too small to be perceptible for any acceleration achievable by rockets, but it seemed to be fundamental. If quantum theory and general relativity are correct — and both have been abundantly corroborated by experiment — then the existence of Hawking radiation seemed inescapable.
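
    The size of that heat bath is set by the Unruh temperature, T = ħa/(2πck_B). A quick sketch (added here) of why no rocket will ever feel it:

    ```python
    import math

    # Unruh temperature seen by a uniformly accelerating observer:
    # T = hbar * a / (2 * pi * c * k_B).
    hbar = 1.0546e-34   # J s
    c = 2.998e8         # m/s
    k_B = 1.3807e-23    # J/K

    def unruh_temperature(a):
        return hbar * a / (2 * math.pi * c * k_B)

    print(f"T at 1 g: {unruh_temperature(9.8):.2e} K")  # ~4e-20 K: imperceptible
    ```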

    A second key discovery was closely related. In standard thermodynamics, an object can radiate heat only by decreasing its entropy, a measure of the number of quantum states inside it. And so it is with black holes: even before Hawking’s 1974 paper, Jacob Bekenstein, now at the Hebrew University of Jerusalem, had shown that black holes possess entropy. But there was a difference. In most objects, the entropy is proportional to the number of atoms the object contains, and thus to its volume. But a black hole’s entropy turned out to be proportional to the surface area of its event horizon — the boundary out of which not even light can escape. It was as if that surface somehow encoded information about what was inside, just as a two-dimensional hologram encodes a three-dimensional image.
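
    The area scaling is worth seeing in numbers. A minimal sketch (added for illustration) of the Bekenstein–Hawking entropy, S = k_B·A/(4·l_p²), for a solar-mass black hole:

    ```python
    import math

    # Bekenstein-Hawking entropy scales with horizon *area*:
    # S = k_B * A / (4 * l_p^2), where l_p is the Planck length.
    G, c, hbar = 6.674e-11, 2.998e8, 1.0546e-34
    M_sun = 1.989e30

    l_p2 = hbar * G / c**3            # Planck length squared, m^2
    r_s = 2 * G * M_sun / c**2        # horizon radius of a solar-mass hole
    A = 4 * math.pi * r_s**2          # horizon area, m^2

    print(f"S / k_B ~ {A / (4 * l_p2):.1e}")  # ~1e77 -- vastly more entropy
    # than the same mass holds in any ordinary form of matter
    ```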

    In 1995, Ted Jacobson, a physicist at the University of Maryland in College Park, combined these two findings, and postulated that every point in space lies on a tiny black-hole horizon that also obeys the entropy–area relationship. From that, he found, the mathematics yielded [Albert] Einstein’s equations of general relativity — but using only thermodynamic concepts, not the idea of bending space-time (1).

    “This seemed to say something deep about the origins of gravity,” says Jacobson. In particular, the laws of thermodynamics are statistical in nature — a macroscopic average over the motions of myriad atoms and molecules — so his result suggested that gravity is also statistical, a macroscopic approximation to the unseen constituents of space and time.

    In 2010, this idea was taken a step further by Erik Verlinde, a string theorist at the University of Amsterdam, who showed (2) that the statistical thermodynamics of the space-time constituents — whatever they turned out to be — could automatically generate Newton’s law of gravitational attraction.

    And in separate work, Thanu Padmanabhan, a cosmologist at the Inter-University Centre for Astronomy and Astrophysics in Pune, India, showed (3) that Einstein’s equations can be rewritten in a form that makes them identical to the laws of thermodynamics — as can many alternative theories of gravity. Padmanabhan is currently extending the thermodynamic approach in an effort to explain the origin and magnitude of dark energy: a mysterious cosmic force that is accelerating the Universe’s expansion.

    Testing such ideas empirically will be extremely difficult. In the same way that water looks perfectly smooth and fluid until it is observed on the scale of its molecules — a fraction of a nanometre — estimates suggest that space-time will look continuous all the way down to the Planck scale: roughly 10^−35 metres, or some 20 orders of magnitude smaller than a proton.
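
    That Planck-scale figure follows directly from the fundamental constants, l_p = √(ħG/c³), as a quick check shows:

    ```python
    import math

    # Planck length from the constants it is built from: l_p = sqrt(hbar*G/c^3).
    G, c, hbar = 6.674e-11, 2.998e8, 1.0546e-34
    l_p = math.sqrt(hbar * G / c**3)
    proton_radius = 0.84e-15  # m, approximate proton charge radius

    print(f"Planck length: {l_p:.2e} m")            # ~1.6e-35 m
    print(f"orders of magnitude below a proton: "
          f"{math.log10(proton_radius / l_p):.0f}")  # ~20
    ```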

    But it may not be impossible. One often-mentioned way to test whether space-time is made of discrete constituents is to look for delays as high-energy photons travel to Earth from distant cosmic events such as supernovae and γ-ray bursts. In effect, the shortest-wavelength photons would sense the discreteness as a subtle bumpiness in the road they had to travel, which would slow them down ever so slightly. Giovanni Amelino-Camelia, a quantum-gravity researcher at the University of Rome, and his colleagues have found (4) hints of just such delays in the photons from a γ-ray burst recorded in April. The results are not definitive, says Amelino-Camelia, but the group plans to expand its search to look at the travel times of high-energy neutrinos produced by cosmic events. He says that if theories cannot be tested, “then to me, they are not science. They are just religious beliefs, and they hold no interest for me.”
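
    The logic of such a search can be captured in a toy model. Assuming, purely for illustration, that the delay grows linearly with photon energy, Δt ≈ (E/E_Planck) × (D/c), the accumulated lag comes out at a scale that burst timing could in principle resolve:

    ```python
    # Toy model (an assumption for illustration, not the published analysis):
    # a linear energy dependence of the photon speed gives an accumulated
    # delay of roughly delta_t ~ (E / E_planck) * (D / c).
    E_photon_GeV = 10.0      # illustrative high-energy gamma-ray photon
    E_planck_GeV = 1.22e19   # Planck energy
    seconds_per_year = 3.156e7
    travel_time_s = 1e9 * seconds_per_year  # source ~1 billion light-years away

    delay = (E_photon_GeV / E_planck_GeV) * travel_time_s
    print(f"delay ~ {delay * 1e3:.0f} ms over a billion light-years")  # ~26 ms
    ```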

    Other physicists are looking at laboratory tests. In 2012, for example, researchers from the University of Vienna and Imperial College London proposed (5) a tabletop experiment in which a microscopic mirror would be moved around with lasers. They argued that Planck-scale granularities in space-time would produce detectable changes in the light reflected from the mirror (see Nature http://doi.org/njf; 2012).

    Loop quantum gravity

    Even if it is correct, the thermodynamic approach says nothing about what the fundamental constituents of space and time might be. If space-time is a fabric, so to speak, then what are its threads?

    One possible answer is quite literal. The theory of loop quantum gravity, which has been under development since the mid-1980s by Ashtekar and others, describes the fabric of space-time as an evolving spider’s web of strands that carry information about the quantized areas and volumes of the regions they pass through (6). The individual strands of the web must eventually join their ends to form loops — hence the theory’s name — but have nothing to do with the much better-known strings of string theory. The latter move around in space-time, whereas strands actually are space-time: the information they carry defines the shape of the space-time fabric in their vicinity.

    Because the loops are quantum objects, however, they also define a minimum unit of area in much the same way that ordinary quantum mechanics defines a minimum ground-state energy for an electron in a hydrogen atom. This quantum of area is a patch roughly one Planck scale on a side. Try to insert an extra strand that carries less area, and it will simply disconnect from the rest of the web. It will not be able to link to anything else, and will effectively drop out of space-time.

    One welcome consequence of a minimum area is that loop quantum gravity cannot squeeze an infinite amount of curvature onto an infinitesimal point. This means that it cannot produce the kind of singularities that cause Einstein’s equations of general relativity to break down at the instant of the Big Bang and at the centres of black holes.

    In 2006, Ashtekar and his colleagues reported (7) a series of simulations that took advantage of that fact, using the loop quantum gravity version of Einstein’s equations to run the clock backwards and visualize what happened before the Big Bang. The reversed cosmos contracted towards the Big Bang, as expected. But as it approached the fundamental size limit dictated by loop quantum gravity, a repulsive force kicked in and kept the singularity open, turning it into a tunnel to a cosmos that preceded our own.
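
    The repulsive force appears in what practitioners call the effective equations of loop quantum cosmology. A minimal sketch of the widely quoted modified Friedmann equation (a summary of its published form, not the group’s simulation code):

    ```python
    import math

    # Effective loop-quantum-cosmology Friedmann equation:
    #   H^2 = (8*pi*G/3) * rho * (1 - rho/rho_c)
    # In ordinary general relativity the (1 - rho/rho_c) factor is absent and
    # the density can grow without bound; here H -> 0 as rho -> rho_c, so a
    # contracting universe halts and re-expands instead of hitting a singularity.
    G = 1.0
    rho_c = 1.0  # critical (Planck-scale) density, in convenient units

    def hubble_squared(rho):
        return (8 * math.pi * G / 3) * rho * (1 - rho / rho_c)

    for rho in (0.01, 0.5, 0.99, 1.0):
        print(f"rho = {rho}: H^2 = {hubble_squared(rho):.4f}")
    # H^2 reaches zero exactly at rho = rho_c: the bounce.
    ```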

    This year, physicists Rodolfo Gambini at the Uruguayan University of the Republic in Montevideo and Jorge Pullin at Louisiana State University in Baton Rouge reported (8) a similar simulation for a black hole. They found that an observer travelling deep into the heart of a black hole would encounter not a singularity, but a thin space-time tunnel leading to another part of space. “Getting rid of the singularity problem is a significant achievement,” says Ashtekar, who is working with other researchers to identify signatures that would have been left by a bounce, rather than a bang, on the cosmic microwave background — the radiation left over from the Universe’s massive expansion in its infant moments.

    Loop quantum gravity is not a complete unified theory, because it does not include any other forces. Furthermore, physicists have yet to show how ordinary space-time would emerge from such a web of information. But Daniele Oriti, a physicist at the Max Planck Institute for Gravitational Physics in Golm, Germany, is hoping to find inspiration in the work of condensed-matter physicists, who have produced exotic phases of matter that undergo transitions described by quantum field theory. Oriti and his colleagues are searching for formulae to describe how the Universe might similarly change phase, transitioning from a set of discrete loops to a smooth and continuous space-time. “It is early days and our job is hard because we are fishes swimming in the fluid at the same time as trying to understand it,” says Oriti.

    Causal sets

    Such frustrations have led some investigators to pursue a minimalist programme known as causal set theory. Pioneered by Rafael Sorkin, a physicist at the Perimeter Institute in Waterloo, Canada, the theory postulates that the building blocks of space-time are simple mathematical points that are connected by links, with each link pointing from past to future. Such a link is a bare-bones representation of causality, meaning that an earlier point can affect a later one, but not vice versa. The resulting network is like a growing tree that gradually builds up into space-time. “You can think of space emerging from points in a similar way to temperature emerging from atoms,” says Sorkin. “It doesn’t make sense to ask, ‘What’s the temperature of a single atom?’ You need a collection for the concept to have meaning.”

    In the late 1980s, Sorkin used this framework to estimate (9) the number of points that the observable Universe should contain, and reasoned that they should give rise to a small intrinsic energy that causes the Universe to accelerate its expansion. A few years later, the discovery of dark energy confirmed his guess. “People often think that quantum gravity cannot make testable predictions, but here’s a case where it did,” says Joe Henson, a quantum-gravity researcher at Imperial College London. “If the value of dark energy had been larger, or zero, causal set theory would have been ruled out.”
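
    Sorkin’s estimate can be paraphrased as an order-of-magnitude exercise: take the number of causal-set elements N to be the space-time volume of the observable Universe in Planck units, and let the intrinsic energy fluctuate as 1/√N. A heavily simplified sketch (my paraphrase, with a crude guess at the four-volume):

    ```python
    import math

    # Crude order-of-magnitude paraphrase of the causal-set argument:
    # N ~ (age of the Universe / Planck time)^4, Lambda ~ 1/sqrt(N).
    t_planck = 5.39e-44      # s
    age_universe = 4.35e17   # s, ~13.8 billion years

    N = (age_universe / t_planck)**4
    print(f"predicted Lambda ~ {1 / math.sqrt(N):.0e} (Planck units)")
    # Within an order of magnitude of the observed ~1e-122 in the same
    # units, which is the point of the argument.
    ```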

    Causal dynamical triangulations

    That hardly constituted proof, however, and causal set theory has offered few other predictions that could be tested. Some physicists have found it much more fruitful to use computer simulations. The idea, which dates back to the early 1990s, is to approximate the unknown fundamental constituents with tiny chunks of ordinary space-time caught up in a roiling sea of quantum fluctuations, and to follow how these chunks spontaneously glue themselves together into larger structures.

    The earliest efforts were disappointing, says Renate Loll, a physicist now at Radboud University in Nijmegen, the Netherlands. The space-time building blocks were simple hyper-pyramids — four-dimensional counterparts to three-dimensional tetrahedrons — and the simulation’s gluing rules allowed them to combine freely. The result was a series of bizarre ‘universes’ that had far too many dimensions (or too few), and that folded back on themselves or broke into pieces. “It was a free-for-all that gave back nothing that resembles what we see around us,” says Loll.

    But, like Sorkin, Loll and her colleagues found that adding causality changed everything. After all, says Loll, the dimension of time is not quite like the three dimensions of space. “We cannot travel back and forth in time,” she says. So the team changed its simulations to ensure that effects could not come before their cause — and found that the space-time chunks started consistently assembling themselves into smooth four-dimensional universes with properties similar to our own (10).

    Intriguingly, the simulations also hint that soon after the Big Bang, the Universe went through an infant phase with only two dimensions — one of space and one of time. This prediction has also been made independently by others attempting to derive equations of quantum gravity, and even some who suggest that the appearance of dark energy is a sign that our Universe is now growing a fourth spatial dimension. Others have shown that a two-dimensional phase in the early Universe would create patterns similar to those already seen in the cosmic microwave background.

    Holography

    Meanwhile, Van Raamsdonk has proposed a very different idea about the emergence of space-time, based on the holographic principle. Inspired by the hologram-like way that black holes store all their entropy at the surface, this principle was first given an explicit mathematical form by Juan Maldacena, a string theorist at the Institute for Advanced Study in Princeton, New Jersey, who published (11) his influential model of a holographic universe in 1998. In that model, the three-dimensional interior of the universe contains strings and black holes governed only by gravity, whereas its two-dimensional boundary contains elementary particles and fields that obey ordinary quantum laws without gravity.

    Hypothetical residents of the three-dimensional space would never see this boundary, because it would be infinitely far away. But that does not affect the mathematics: anything happening in the three-dimensional universe can be described equally well by equations in the two-dimensional boundary, and vice versa.

    In 2010, Van Raamsdonk studied what that means when quantum particles on the boundary are ‘entangled’ — meaning that measurements made on one inevitably affect the other (12). He discovered that if every particle entanglement between two separate regions of the boundary is steadily reduced to zero, so that the quantum links between the two disappear, the three-dimensional space responds by gradually dividing itself like a splitting cell, until the last, thin connection between the two halves snaps. Repeating that process will subdivide the three-dimensional space again and again, while the two-dimensional boundary stays connected. So, in effect, Van Raamsdonk concluded, the three-dimensional universe is being held together by quantum entanglement on the boundary — which means that in some sense, quantum entanglement and space-time are the same thing.
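
    The “steadily reduced to zero” step has a simple quantum-mechanical analogue. A toy sketch (not Van Raamsdonk’s holographic calculation) using a single pair of qubits in the state cos θ|00⟩ + sin θ|11⟩:

    ```python
    import math

    # Toy illustration: two qubits stand in for two boundary regions. As
    # theta -> 0 the entanglement entropy drops to zero and the "quantum
    # link" between the regions disappears.
    def entanglement_entropy(theta):
        probs = [math.cos(theta)**2, math.sin(theta)**2]
        return -sum(p * math.log(p) for p in probs if p > 0)

    for theta in (math.pi / 4, math.pi / 8, math.pi / 16, 0.0):
        print(f"theta = {theta:.3f}: S = {entanglement_entropy(theta):.4f}")
    # Maximal at pi/4 (S = ln 2 ~ 0.693); zero for the unentangled state.
    ```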

    Or, as Maldacena puts it: “This suggests that quantum is the most fundamental, and space-time emerges from it.”

    [For references, please see the full article.]

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Nature is a weekly international journal publishing the finest peer-reviewed research in all fields of science and technology on the basis of its originality, importance, interdisciplinary interest, timeliness, accessibility, elegance and surprising conclusions. Nature also provides rapid, authoritative, insightful and arresting news and interpretation of topical and coming trends affecting science, scientists and the wider public.

     