Tagged: Ethan Siegel

  • richardmitnick 1:09 pm on May 8, 2019 Permalink | Reply
    Tags: "What Was It Like When Life’s Complexity Exploded?", , As creatures grew in complexity they accumulated large numbers of genes that encoded for specific structures that performed a variety of functions., , , Ethan Siegel, Evolution- in many ways- is like an arms race. The different organisms that exist are continuously competing for limited resources., If an organism develops the ability to perform a specific function then it will have a genetic sequence that encode the information for forming a structure that performs it., In biology structure and function is arguably the most basic relationship of all., Once the first living organisms arose our planet was filled with organisms harvesting energy and resources from the environment, The second major evolutionary step involves the development of specialized components within a single organism, What we do know is that life existed on Earth for nearly four billion years before the Cambrian explosion which heralds the rise of complex animals.   

    From Ethan Siegel: “What Was It Like When Life’s Complexity Exploded?” 

    From Ethan Siegel
    May 8, 2019

    1
    During the Cambrian era in Earth’s history, some 550–600 million years ago, many examples of multicellular, sexually-reproducing, complex and differentiated life forms emerged for the first time. This period is known as the Cambrian explosion, and heralds an enormous leap in the complexity of organisms found on Earth. (GETTY)

    We’re a long way from the beginnings of life on Earth. Here’s the key to how we got there.

    The Universe was already two-thirds of its present age by the time the Earth formed, with life emerging on its surface shortly thereafter. But for billions of years, life remained in a relatively primitive state. It took nearly a full four billion years before the Cambrian explosion came, when macroscopic, multicellular, complex organisms — including animals, plants, and fungi — became the dominant lifeforms on Earth.

    As surprising as it may seem, there were really only a handful of critical developments that were necessary in order to go from single-celled, simple life to the extraordinarily diverse sets of creatures we’d recognize today. We do not know if this path is one that’s easy or hard among planets where life arises. We do not know whether complex life is common or rare. But we do know that it happened on Earth. Here’s how.

    2
    This coastline consists of Precambrian quartzite rocks, many of which may have once contained evidence of the fossilized lifeforms that gave rise to modern plants, animals, fungi, and other multicellular, sexually-reproducing creatures. These rocks have undergone intensive folding over their long and ancient history, and do not display the rich evidence for complex life that later, Cambrian-era rocks do. (GETTY)

    Once the first living organisms arose, our planet was filled with organisms harvesting energy and resources from the environment, metabolizing them to grow, adapt, reproduce, and respond to external stimuli. As the environment changed due to resource scarcity, competition, climate change and many other factors, certain traits increased the odds of survival, while other traits decreased them. Owing to the phenomenon of natural selection, the organisms most adaptable to change survived and thrived.

    Relying on random mutations alone, and passing those traits onto offspring, is extremely limiting as far as evolution goes. If mutating your genetic material and passing it onto your offspring is the only mechanism you have for evolution, you might not ever achieve complexity.

    3
    Acidobacteria, like the example shown here, are likely some of the first photosynthetic organisms of all. They have no internal structures or membranes, their DNA floats loose within the cell, and they are anoxygenic: they do not produce oxygen from photosynthesis. These are prokaryotic organisms that are very similar to the primitive life found on Earth some ~2.5–3 billion years ago. (US DEPARTMENT OF ENERGY / PUBLIC DOMAIN)

    But many billions of years ago, life developed the ability to engage in horizontal gene transfer, where genetic material can move from one organism to another via mechanisms other than vertical transmission from parent to offspring. Transformation, transduction, and conjugation are all mechanisms for horizontal gene transfer, but they all have something in common: single-celled, primitive organisms that develop a genetic sequence that’s useful for a particular purpose can transfer that sequence into other organisms, granting the recipients abilities that they never had to evolve for themselves.

    This is the primary mechanism by which modern-day bacteria develop antibiotic resistance. If one primitive organism can develop a useful adaptation, other organisms can develop that same adaptation without having to evolve it from scratch.

    4
    The three mechanisms by which a bacterium can acquire genetic information horizontally, rather than vertically (through reproduction), are transformation, transduction, and conjugation. (NATURE, FURUYA AND LOWY (2006) / UNIVERSITY OF LEICESTER)

    The second major evolutionary step involves the development of specialized components within a single organism. The most primitive creatures have freely-floating bits of genetic material enclosed with some protoplasm inside a cell membrane, with nothing more specialized than that. These are the prokaryotic organisms of the world: the first forms of life thought to exist.

    But more evolved creatures contain within them the ability to create miniature factories, capable of specialized functions. These mini-organs, known as organelles, herald the rise of the eukaryotes. Eukaryotes are larger than prokaryotes and have longer DNA sequences, but they also have specialized components that perform their own unique functions, largely independently of the rest of the cell they inhabit.

    5
    Unlike their more primitive prokaryotic counterparts, eukaryotic cells have differentiated cell organelles, with their own specialized structures and functions that allow them to perform many of the cell’s life processes relatively independently from the rest of the cell’s functioning. (CNX OPENSTAX)

    These organelles include the cell nucleus, lysosomes, chloroplasts, Golgi bodies, the endoplasmic reticulum, and the mitochondria. Mitochondria themselves are incredibly interesting, because they provide a window into life’s evolutionary past.

    If you take an individual mitochondrion out of a cell, it can survive on its own. Mitochondria have their own DNA and can metabolize nutrients: they meet all of the definitions of life on their own. But they are also produced by practically all eukaryotic cells. Contained within the more complicated, more highly-evolved cells are the genetic sequences that enable them to create components of themselves that appear identical to earlier, more primitive organisms. Contained within the DNA of complex creatures is the ability to create their own versions of simpler creatures.

    6
    Scanning electron microscope image at the sub-cellular level. While DNA is an incredibly complex, long molecule, it is made of the same building blocks (atoms) as everything else. To the best of our knowledge, the DNA structure that life is based on predates the fossil record. The longer and more complex a DNA molecule is, the more potential structures, functions, and proteins it can encode. (PUBLIC DOMAIN IMAGE BY DR. ERSKINE PALMER, USCDCP)

    In biology, the relationship between structure and function is arguably the most basic one of all. If an organism develops the ability to perform a specific function, then it will have a genetic sequence that encodes the information for forming a structure that performs it. If you gain that genetic code in your own DNA, then you, too, can create a structure that performs the specific function in question.

    As creatures grew in complexity, they accumulated large numbers of genes that encoded for specific structures that performed a variety of functions. When you form those novel structures yourself, you gain the abilities to perform those functions that couldn’t be performed without those structures. While simpler, single-celled organisms may reproduce faster, organisms capable of performing more functions are often more adaptable, and more resilient to change.

    7
    Mitochondria, which are some of the specialized organelles found inside eukaryotic cells, are themselves reminiscent of prokaryotic organisms. They even have their own DNA (the black dots), which clusters together at discrete focal points. With many independent components, a eukaryotic cell can thrive under a variety of conditions that its simpler, prokaryotic counterparts cannot. But there are drawbacks to increased complexity, too. (FRANCISCO J IBORRA, HIROSHI KIMURA AND PETER R COOK (BIOMED CENTRAL LTD))

    By the time the Huronian glaciation ended and Earth was once again a warm, wet world with continents and oceans, eukaryotic life was common. Prokaryotes still existed (and still do), but were no longer the most complex creatures on our world. For life’s complexity to explode, however, there were two more steps that needed to not only occur, but to occur in tandem: multicellularity and sexual reproduction.

    Multicellularity, according to the biological record left behind on planet Earth, is something that evolved numerous independent times. Early on, single-celled organisms gained the ability to make colonies, with many stitching themselves together to form microbial mats. This type of cellular cooperation enables a group of organisms, working together, to achieve a greater level of success than any of them could individually.

    8
    Green algae, shown here, are an example of true multicellular organisms, where a single specimen is composed of multiple individual cells that all work together for the good of the organism as a whole. (FRANK FOX / MIKRO-FOTO.DE)

    Multicellularity offers an even greater advantage: the ability to have “freeloader” cells, or cells that can reap the benefits of living in a colony without having to do any of the work. In the context of unicellular organisms, freeloader cells are inherently limited, as producing too many of them will destroy the colony. But in the context of multicellularity, not only can the production of freeloader cells be turned on or off, but those cells can develop specialized structures and functions that assist the organism as a whole. The big advantage that multicellularity confers is the possibility of differentiation: having multiple types of cells working together for the optimal benefit of the entire biological system.

    Rather than having individual cells within a colony competing for the genetic edge, multicellularity enables an organism to harm or destroy various parts of itself to benefit the whole. According to mathematical biologist Eric Libby:

    “[A] cell living in a group can experience a fundamentally different environment than a cell living on its own. The environment can be so different that traits disastrous for a solitary organism, like increased rates of death, can become advantageous for cells in a group.”

    9
    Shown are representatives of all major lineages of eukaryotic organisms, color coded for occurrence of multicellularity. Solid black circles indicate major lineages composed entirely of unicellular species. Other groups shown contain only multicellular species (solid red), some multicellular and some unicellular species (red and black circles), or some unicellular and some colonial species (yellow and black circles). Colonial species are defined as those that possess multiple cells of the same type. There is ample evidence that multicellularity evolved independently in all the lineages shown separately here. (2006 NATURE EDUCATION MODIFIED FROM KING ET AL. (2004))

    There are multiple lineages of eukaryotic organisms, with multicellularity evolving from many independent origins. Plasmodial slime molds, land plants, red algae, brown algae, animals, and many other classifications of living creatures have all evolved multicellularity at different times throughout Earth’s history. The very first multicellular organism, in fact, may have arisen as early as 2 billion years ago, with some evidence supporting the idea that an early aquatic fungus came about even earlier.

    But it wasn’t through multicellularity alone that modern animal life became possible. Eukaryotes require more time and resources to develop to maturity than prokaryotes do, and multicellular eukaryotes have an even greater timespan from generation to generation. Complexity faces an enormous barrier: the simpler organisms that complex creatures compete with can change and adapt more quickly.

    10
    A fascinating class of organisms known as siphonophores is itself a collection of small animals working together to form a larger colonial organism. These lifeforms straddle the boundary between a multicellular organism and a colonial organism. (KEVIN RASKOFF, CAL STATE MONTEREY / CRISCO 1492 FROM WIKIMEDIA COMMONS)

    Evolution, in many ways, is like an arms race. The different organisms that exist are continuously competing for limited resources: space, sunlight, nutrients and more. They also attempt to destroy their competitors through direct means, such as predation. A prokaryotic bacterium with a single critical mutation can have millions of generations of chances to take down a large, long-lived complex creature.

    There’s a critical mechanism that modern plants and animals have for competing with their rapidly-reproducing single-celled counterparts: sexual reproduction. If a competitor has millions of generations to figure out how to destroy a larger, slower organism for every generation the latter has, the more rapidly-adapting organism will win. But sexual reproduction allows for offspring to be significantly different from the parent in a way that asexual reproduction cannot match.

    11
    Sexually-reproducing organisms only deliver 50% of their DNA apiece to their children, with many random elements determining which particular 50% gets passed on. This is why offspring only have 50% of their DNA in common with their parents and with their siblings, unlike asexually-reproducing lifeforms. (PETE SOUZA / PUBLIC DOMAIN)

    To survive, an organism must correctly encode all of the proteins responsible for its functioning. A single mutation in the wrong spot can send that awry, which emphasizes how important it is to copy every nucleotide in your DNA correctly. But imperfections are inevitable, and even with the mechanisms organisms have developed for checking and error-correcting, somewhere between 1-in-10,000,000 and 1-in-10,000,000,000 of the copied base pairs will have an error.

    For an asexually-reproducing organism, this is the only source of genetic variation from parent to child. But for sexually-reproducing organisms, 50% of each parent’s DNA will compose the child, with some ~0.1% of the total DNA varying from specimen to specimen. This randomization means that even a single-celled organism which is well-adapted to outcompeting a parent will be poorly-adapted when faced with the challenges of the child.
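
    To put the copying-error rate quoted above in perspective, here is a small, illustrative Python calculation. The genome sizes used are rough, commonly cited figures (they are my assumptions, not numbers from the article), and the error rates are simply the two ends of the range mentioned above.

        # Rough expected number of copying errors per genome replication, for
        # per-base-pair error rates between 1-in-10^7 and 1-in-10^10.
        error_rates = (1e-7, 1e-10)                # errors per base pair copied
        genomes = {
            "bacterium (~4.6 million bp)": 4.6e6,  # approximate E. coli genome size
            "human (~3.2 billion bp)": 3.2e9,      # approximate human genome size
        }

        for name, size in genomes.items():
            worst, best = (size * rate for rate in error_rates)
            print(f"{name}: roughly {best:.2g} to {worst:.2g} copying errors per replication")

    Even at the most faithful end of that range, a large genome picks up a fraction of an error every time it is copied, which is why copying errors alone provide so little variation per generation compared with reshuffling whole sets of chromosomes.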

    12
    In sexual reproduction, organisms carry two copies of each chromosome, with each parent contributing 50% of the child’s DNA (one full set of chromosomes). Which 50% gets passed on is a random process, allowing for enormous genetic variation from sibling to sibling, with each child significantly different from either parent. (MAREK KULTYS / WIKIMEDIA COMMONS)

    Sexual reproduction also means that organisms will have an opportunity to adapt to a changing environment in far fewer generations than their asexual counterparts. Mutations are only one mechanism for change from the prior generation to the next; the other is variability in which traits get passed down from parent to offspring.

    If there is a wider variety among offspring, there is a greater chance of surviving when many members of a species will be selected against. The survivors can reproduce, passing on the traits that are preferential at that moment in time. This is why plants and animals can live decades, centuries, or millennia, and can still survive the continuous onslaught of organisms that reproduce hundreds of thousands of generations per year.

    It is no doubt an oversimplification to state that horizontal gene transfer, the development of eukaryotes, multicellularity, and sexual reproduction are all it takes to go from primitive life to complex, differentiated life dominating a world. We know that this happened here on Earth, but we do not know what its likelihood was, or whether the billions of years it needed on Earth are typical or far more rapid than average.

    What we do know is that life existed on Earth for nearly four billion years before the Cambrian explosion, which heralds the rise of complex animals. The story of early life on Earth is the story of most life on Earth, with only the last 550–600 million years showcasing the world as we’re familiar with it. After a 13.2 billion year cosmic journey, we were finally ready to enter the era of complex, differentiated, and possibly intelligent life.

    13
    The Burgess Shale fossil deposit, dating to the mid-Cambrian, is arguably the most famous and well-preserved fossil deposit on Earth dating back to such early times. At least 280 species of complex, differentiated plants and animals have been identified, signifying one of the most important epochs in Earth’s evolutionary history: the Cambrian explosion. This diorama shows a model-based reconstruction of what the living organisms of the time might have looked like in true color. (JAMES ST. JOHN / FLICKR)

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

     
  • richardmitnick 5:07 pm on May 7, 2019 Permalink | Reply
    Tags: Ethan Siegel, Our sun’s future: not good

    From Ethan Siegel: “This Is What Our Sun’s Death Will Look Like, With Pictures From NASA’s Hubble” 

    From Ethan Siegel
    May 6, 2019

    1
    The planetary nebula NGC 6369’s blue-green ring marks the location where energetic ultraviolet light has stripped electrons from oxygen atoms in the gas. Our Sun, being a single star that rotates relatively slowly, is very likely going to wind up looking akin to this nebula after perhaps another 6 or 7 billion years. (NASA AND THE HUBBLE HERITAGE TEAM (STSCI/AURA))

    NASA/ESA Hubble Telescope

    Our Sun will someday run out of fuel. Here’s what it will look like when that happens.

    The fate of our Sun is unambiguous, determined solely by its mass.

    2
    If all else fails, we can be certain that the evolution of the Sun will be the death of all life on Earth. Long before we reach the red giant stage, stellar evolution will cause the Sun’s luminosity to increase significantly enough to boil Earth’s oceans, which will surely eradicate humanity, if not all life on Earth. (OLIVERBEATSON OF WIKIMEDIA COMMONS / PUBLIC DOMAIN)

    Too small to go supernova, it’s still massive enough to become a red giant when its core’s hydrogen is exhausted.

    3
    As the Sun becomes a true red giant, the Earth itself may be swallowed or engulfed, but will definitely be roasted as never before. The Sun’s outer layers will swell to more than 100 times their present diameter.(WIKIMEDIA COMMONS/FSGREGS)

    As the inner regions contract and heat up, the outer portions expand, becoming tenuous and rarefied.

    4
    Near the end of a Sun-like star’s life, it begins to blow off its outer layers into the depths of space, forming a protoplanetary nebula like the Egg Nebula, seen here. Its outer layers have not yet been heated to sufficient temperatures by the central, contracting star to create a true planetary nebula just yet. (NASA AND THE HUBBLE HERITAGE TEAM (STSCI / AURA), HUBBLE SPACE TELESCOPE / ACS)

    NASA Hubble Advanced Camera for Surveys

    The interior fusion reactions generate intense stellar winds, which gently expel the star’s outer layers.

    5
    The Eight Burst Nebula, NGC 3132, is not well-understood in terms of its shape or formation. The different colors in this image represent gas that radiates at different temperatures. It appears to have just a single star inside, which can be seen contracting down to form a white dwarf near the center of the nebula. (THE HUBBLE HERITAGE TEAM (STSCI/AURA/NASA))

    Single stars often shed their outer layers spherically, as roughly 20% of planetary nebulae do.

    6
    The spiral structure around the old, giant star R Sculptoris is due to winds blowing off outer layers of the star as it undergoes its AGB phase, where copious amounts of neutrons (from carbon-13 + helium-4 fusion) are produced and captured. The spiral structure is likely due to the presence of another large mass that periodically orbits the dying star: a binary companion. (ALMA (ESO/NAOJ/NRAO)/M. MAERCKER ET AL.)

    ESO/NRAO/NAOJ ALMA Array in Chile in the Atacama at Chajnantor plateau, at 5,000 metres

    Stars with binary companions frequently produce spirals or other asymmetrical configurations.

    7
    When our Sun runs out of fuel, it will become a red giant, followed by a planetary nebula with a white dwarf at the center. The Cat’s Eye nebula is a visually spectacular example of this potential fate, with the intricate, layered, asymmetrical shape of this particular one suggesting a binary companion. (NASA, ESA, HEIC, AND THE HUBBLE HERITAGE TEAM (STSCI/AURA); ACKNOWLEDGMENT: R. CORRADI (ISAAC NEWTON GROUP OF TELESCOPES, SPAIN) AND Z. TSVETANOV (NASA))


    Isaac Newton Group of Telescopes located at Roque de los Muchachos Observatory on La Palma in the Canary Islands

    8
    The Twin Jet nebula, shown here, is a stunning example of a bipolar nebula, which is thought to originate from either a rapidly rotating star, or a star that’s part of a binary system when it dies. We’re still working to understand exactly how our Sun will appear when it becomes a planetary nebula in the distant future. (ESA, HUBBLE & NASA, ACKNOWLEDGEMENT: JUDY SCHMIDT)

    The leading explanation is that many stars rotate rapidly, which generates large-scale magnetic fields.

    9
    Known as the Rotten Egg Nebula owing to the large presence of sulfur found inside, this is a planetary nebula in the earliest stages, where it is expected to grow significantly over the coming centuries. The gas being expelled is moving at an incredible speed of about 1,000,000 km/hr, or about 0.1% the speed of light. (ESA/HUBBLE & NASA, ACKNOWLEDGEMENT: JUDY SCHMIDT)

    Those fields accelerate the loosely-held particles populating the outer stellar regions along the dying star’s poles.

    10
    The Ant Nebula, also known as Menzel 3, is showcased in this image. The leading candidate explanation for its appearance is that the dying, central star is spinning, which winds its strong magnetic fields up into shapes that get entangled, like spaghetti twirled too long with a giant fork. The charged particles interact with those field lines, heating up, emitting radiation, and then get ejected, where they’ll disappear off into interstellar space. (NASA, ESA & THE HUBBLE HERITAGE TEAM (STSCI/AURA); ACKNOWLEDGMENT: R. SAHAI (JET PROPULSION LAB), B. BALICK (UNIVERSITY OF WASHINGTON))


    NASA’s Hubble Space Telescope delivers the most spectacular images of this natural phenomenon.

    11
    Nitrogen, hydrogen and oxygen are highlighted in the planetary nebula above, known as the Hourglass Nebula for its distinctive shape. The assigned colors distinctly show the locations of the various elements, which are segregated from one another. (NASA/HST/WFPC2; R SAHAI AND J TRAUGER (JPL))

    NASA/Hubble WFPC2. No longer in service.

    By assigning colors to specific elemental and spectral data, scientists create spectacular visualizations of these signatures.

    12
    The nebula, officially known as Hen 2–104, appears to have two nested hourglass-shaped structures that were sculpted by a whirling pair of stars in a binary system. The duo consists of an aging red giant star and a burned-out star, a white dwarf. This image is a composite of observations taken in various colors of light that correspond to the glowing gases in the nebula, where red is sulfur, green is hydrogen, orange is nitrogen, and blue is oxygen. (NASA, ESA, AND STSCI)

    The cold, neutral gas will be boiled off by the central white dwarf in just ~10,000 years.

    13
    The Helix Nebula may appear to be spherical in nature, but a detailed analysis has revealed a far more complex structure. By mapping out its 3D structure, we learn that its ring-like appearance is merely an artifact of the particular orientation and time at which we view it. Nebulae such as these are short-lived, lasting for only about 10,000 years until they fade away. (NASA, ESA, C.R. O’DELL (VANDERBILT UNIVERSITY), AND M. MEIXNER, P. MCCULLOUGH, AND G. BACON ( SPACE TELESCOPE SCIENCE INSTITUTE))

    In approximately 7 billion years, our Sun’s anticipated death should proceed in exactly this manner.

    14
    This planetary nebula may be known as the ‘Butterfly Nebula’, but in reality it’s hot, ionized luminous gas blown off in the death throes of a dying star. The outer portions are illuminated by the hot, white dwarf this dying star leaves behind. Our Sun is likely in for a similar fate at the end of its red giant, helium-burning phase. (STSCI / NASA, ESA, AND THE HUBBLE SM4 ERO TEAM)

    See the full article here.



     
  • richardmitnick 11:26 am on May 5, 2019 Permalink | Reply
    Tags: 'Where Does A Proton’s Mass Come From?', 99.8% of the proton’s mass comes from gluons, Antiquarks, Asymptotic freedom: the particles that mediate this force are known as gluons, Ethan Siegel, The production of Higgs bosons is dominated by gluon-gluon collisions at the LHC, The strong interaction is the most powerful interaction in the entire known Universe

    From Ethan Siegel: “Ask Ethan: ‘Where Does A Proton’s Mass Come From?'” 

    From Ethan Siegel
    May 4, 2019

    1
    The three valence quarks of a proton contribute to its spin, but so do the gluons, sea quarks and antiquarks, and orbital angular momentum as well. The electrostatic repulsion and the attractive strong nuclear force, in tandem, are what give the proton its size, and the properties of quark mixing are required to explain the suite of free and composite particles in our Universe. (APS/ALAN STONEBRAKER)

    The whole should equal the sum of its parts, but doesn’t. Here’s why.

    The whole is equal to the sum of its constituent parts. That’s how everything works, from galaxies to planets to cities to molecules to atoms. If you take all the components of any system and look at them individually, you can clearly see how they all fit together to add up to the entire system, with nothing missing and nothing left over. The total amount you have is equal to the amounts of all the different parts of it added together.

    So why isn’t that the case for the proton? It’s made of three quarks, but if you add up the quark masses, they not only don’t equal the proton’s mass, they don’t come close. This is the puzzle that Barry Duffey wants us to address, asking:

    “What’s happening inside protons? Why does [its] mass so greatly exceed the combined masses of its constituent quarks and gluons?”

    In order to find out, we have to take a deep look inside.

    2
    The composition of the human body, by atomic number and by mass. The whole of our bodies is equal to the sum of its parts, until you get down to an extremely fundamental level. At that point, we can see that we’re actually more than the sum of our constituent components. (ED UTHMAN, M.D., VIA WEB2.AIRMAIL.NET/UTHMAN (L); WIKIMEDIA COMMONS USER ZHAOCAROL (R))

    There’s a hint that comes just from looking at your own body. If you were to divide yourself up into smaller and smaller bits, you’d find — in terms of mass — the whole was equal to the sum of its parts. Your body’s bones, fat, muscles and organs sum up to an entire human being. Breaking those down further, into cells, still allows you to add them up and recover the same mass you have today.

    Cells can be divided into organelles, organelles are composed of individual molecules, molecules are made of atoms; at each stage, the mass of the whole is no different than that of its parts. But when you break atoms into protons, neutrons and electrons, something interesting happens. At that level, there’s a tiny but noticeable discrepancy: the individual protons, neutrons and electrons are off by right around 1% from an entire human. The difference is real.

    3
    From macroscopic scales down to subatomic ones, the sizes of the fundamental particles play only a small role in determining the sizes of composite structures. Whether the building blocks are truly fundamental and/or point-like particles is still not known. (MAGDALENA KOWALSKA / CERN / ISOLDE TEAM)

    CERN ISOLDE

    Like all known organisms, human beings are carbon-based life forms. Carbon atoms are made up of six protons and six neutrons, but if you look at the mass of a carbon atom, it’s approximately 0.8% lighter than the sum of the individual component particles that make it up. The culprit here is nuclear binding energy; when you have atomic nuclei bound together, their total mass is smaller than the mass of the protons and neutrons that comprise them.

    The way carbon is formed is through the nuclear fusion of hydrogen into helium and then helium into carbon; the energy released is what powers most types of stars in both their normal and red giant phases. That “lost mass” is where the energy powering stars comes from, thanks to Einstein’s E = mc². As stars burn through their fuel, they produce more tightly-bound nuclei, releasing the energy difference as radiation.
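
    As a concrete check on that ~0.8% figure, here is a minimal Python sketch using standard textbook particle masses (the specific mass values are my inputs for illustration, not quantities given in the article):

        # Mass defect of carbon-12: compare six free protons, six free neutrons,
        # and six electrons against one bound carbon-12 atom. Masses are in
        # atomic mass units (u); carbon-12 is exactly 12 u by definition.
        m_proton, m_neutron, m_electron = 1.007276, 1.008665, 0.000549
        m_carbon12 = 12.0

        free_parts = 6 * (m_proton + m_neutron + m_electron)
        defect = free_parts - m_carbon12
        print(f"Sum of free constituents: {free_parts:.5f} u")
        print(f"Mass defect: {defect:.5f} u, or {100 * defect / free_parts:.2f}% of the parts")
        # The 'missing' ~0.8% was carried away as energy (E = mc^2) when the
        # nucleus was assembled by fusion.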

    4
    In between the 2nd and 3rd brightest stars of the constellation Lyra, the blue giant stars Sheliak and Sulafat, the Ring Nebula shines prominently in the night skies. Throughout all phases of a star’s life, including the giant phase, nuclear fusion powers them, with the nuclei becoming more tightly bound and the energy emitted as radiation coming from the transformation of mass into energy via E = mc². (NASA, ESA, DIGITIZED SKY SURVEY 2)

    NASA/ESA Hubble Telescope

    ESO Online Digitized Sky Survey Telescopes

    Caltech Palomar Samuel Oschin 48 inch Telescope, located in San Diego County, California, United States, altitude 1,712 m (5,617 ft)


    Australian Astronomical Observatory, Siding Spring Observatory, near Coonabarabran, New South Wales, Australia, 1.2m UK Schmidt Telescope, Altitude 1,165 m (3,822 ft)


    From http://archive.eso.org/dss/dss

    This is how most types of binding energy work: the reason it’s harder to pull apart multiple things that are bound together is because they released energy when they were joined, and you have to put energy in to free them again. That’s why it’s such a puzzling fact that when you take a look at the particles that make up the proton — the up, up, and down quarks at the heart of them — their combined masses are only 0.2% of the mass of the proton as a whole. But the puzzle has a solution that’s rooted in the nature of the strong force itself.

    The way quarks bind into protons is fundamentally different from all the other forces and interactions we know of. Instead of the force getting stronger when objects get closer, like the gravitational, electric, or magnetic forces, the attractive force goes down to zero when quarks get arbitrarily close. And instead of the force getting weaker when objects get farther away, the force pulling quarks back together gets stronger the farther away they get.

    5
    The internal structure of a proton, with quarks, gluons, and quark spin shown. The nuclear force acts like a spring, with negligible force when unstretched but large, attractive forces when stretched to large distances. (BROOKHAVEN NATIONAL LABORATORY)

    This property of the strong nuclear force is known as asymptotic freedom, and the particles that mediate this force are known as gluons. Somehow, the energy binding the proton together, responsible for the other 99.8% of the proton’s mass, comes from these gluons. The whole of matter, somehow, weighs much, much more than the sum of its parts.

    This might sound like an impossibility at first, as the gluons themselves are massless particles. But you can think of the forces they give rise to as springs: asymptoting to zero when the springs are unstretched, but becoming very large the greater the amount of stretching. In fact, the amount of energy between two quarks whose distance gets too large can become so great that it’s as though additional quark/antiquark pairs exist inside the proton: sea quarks.

    6
    When two protons collide, it isn’t just the quarks making them up that can collide, but the sea quarks, gluons, and beyond that, field interactions. All can provide insights into the spin of the individual components, and allow us to create potentially new particles if high enough energies and luminosities are reached. (CERN / CMS COLLABORATION)

    Those of you familiar with quantum field theory might have the urge to dismiss the gluons and the sea quarks as just being virtual particles: calculational tools used to arrive at the right result. But that’s not true at all, and we’ve demonstrated that with high-energy collisions between either two protons or a proton and another particle, like an electron or photon.

    The collisions performed at the Large Hadron Collider at CERN are perhaps the greatest test of all for the internal structure of the proton. When two protons collide at these ultra-high energies, most of them simply pass by one another, failing to interact. But when two internal, point-like particles collide, we can reconstruct exactly what it was that smashed together by looking at the debris that comes out.

    7
    A Higgs boson event as seen in the Compact Muon Solenoid detector at the Large Hadron Collider. This spectacular collision is 15 orders of magnitude below the Planck energy, but it’s the precision measurements of the detector that allow us to reconstruct what happened back at (and near) the collision point. Theoretically, the Higgs gives mass to the fundamental particles; however, the proton’s mass is not due to the mass of the quarks and gluons that compose it. (CERN / CMS COLLABORATION)

    Under 10% of the collisions occur between two quarks; the overwhelming majority are gluon-gluon collisions, with quark-gluon collisions making up the remainder. Moreover, not every quark-quark collision in protons occurs between either up or down quarks; sometimes a heavier quark is involved.

    Although it might make us uncomfortable, these experiments teach us an important lesson: the particles that we use to model the internal structure of protons are real. In fact, the discovery of the Higgs boson itself was only possible because of this, as the production of Higgs bosons is dominated by gluon-gluon collisions at the LHC. If all we had were the three valence quarks to rely on, we would have seen different rates of production of the Higgs than we did.

    8
    Before the mass of the Higgs boson was known, we could still calculate the expected production rates of Higgs bosons from proton-proton collisions at the LHC. The top channel is clearly production by gluon-gluon collisions. I (E. Siegel) have added the yellow highlighted region to indicate where the Higgs boson was discovered. (CMS COLLABORATION (DORIGO, TOMMASO FOR THE COLLABORATION) ARXIV:0910.3489)

    As always, though, there’s still plenty more to learn. We presently have a solid model of the average gluon density inside a proton, but if we want to know where the gluons are actually more likely to be located, that requires more experimental data, as well as better models to compare the data against. Recent advances by theorists Björn Schenke and Heikki Mäntysaari may be able to provide those much needed models. As Mäntysaari detailed:

    “It is very accurately known how large the average gluon density is inside a proton. What is not known is exactly where the gluons are located inside the proton. We model the gluons as located around the three [valence] quarks. Then we control the amount of fluctuations represented in the model by setting how large the gluon clouds are, and how far apart they are from each other. […] The more fluctuations we have, the more likely this process [producing a J/ψ meson] is to happen.”

    9
    A schematic of the world’s first electron-ion collider (EIC). Adding an electron ring (red) to the Relativistic Heavy Ion Collider (RHIC) at Brookhaven would create the eRHIC: a proposed deep inelastic scattering experiment that could improve our knowledge of the internal structure of the proton significantly. (BROOKHAVEN NATIONAL LABORATORY-CAD ERHIC GROUP)

    The combination of this new theoretical model and the ever-improving LHC data will better enable scientists to understand the internal, fundamental structure of protons, neutrons and nuclei in general, and hence to understand where the mass of the known objects in the Universe comes from. From an experimental point of view, the greatest boon would be a next-generation electron-ion collider, which would enable us to perform deep inelastic scattering experiments to reveal the internal makeup of these particles as never before.

    But there’s another theoretical approach that can take us even farther into the realm of understanding where the proton’s mass comes from: Lattice QCD.

    10
    A better understanding of the internal structure of a proton, including how the “sea” quarks and gluons are distributed, has been achieved through both experimental improvements and new theoretical developments in tandem. (BROOKHAVEN NATIONAL LABORATORY)

    The difficult part with the quantum field theory that describes the strong force — quantum chromodynamics (QCD) — is that the standard approach we take to doing calculations is no good. Typically, we’d look at the effects of particle couplings: the charged quarks exchange a gluon and that mediates the force. They could exchange gluons in a way that creates a particle-antiparticle pair or an additional gluon, and that should be a correction to a simple one-gluon exchange. They could create additional pairs or gluons, which would be higher-order corrections.

    We call this approach taking a perturbative expansion in quantum field theory, with the idea that calculating higher and higher-order contributions will give us a more accurate result.
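
    Schematically (this is a hedged sketch of the idea, not an equation written out in the article), a perturbative prediction for some observable is a power series in the strong coupling constant:

        % An observable expanded order-by-order in the strong coupling alpha_s:
        \mathcal{O} \;=\; c_0 + c_1\,\alpha_s + c_2\,\alpha_s^{2} + c_3\,\alpha_s^{3} + \dots
        % The series is only useful if successive terms shrink; for the strong
        % interaction at low energies, \alpha_s is of order 1, so they do not.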

    11
    Today, Feynman diagrams are used in calculating every fundamental interaction spanning the strong, weak, and electromagnetic forces, including in high-energy and low-temperature/condensed conditions. But this approach, which relies on a perturbative expansion, is only of limited utility for the strong interactions, as this approach diverges, rather than converges, when you add more and more loops for QCD.(DE CARVALHO, VANUILDO S. ET AL. NUCL.PHYS. B875 (2013) 738–756)

    Richard Feynman © Open University

    But this approach, which works so well for quantum electrodynamics (QED), fails spectacularly for QCD. The strong force works differently, and so these corrections get very large very quickly. Adding more terms, instead of converging towards the correct answer, diverges and takes you away from it. Fortunately, there is another way to approach the problem: non-perturbatively, using a technique called Lattice QCD.

    By treating space and time as a grid (or lattice of points) rather than a continuum, where the lattice is arbitrarily large and the spacing is arbitrarily small, you overcome this problem in a clever way. Whereas in standard, perturbative QCD, the continuous nature of space means that you lose the ability to calculate interaction strengths at small distances, the lattice approach means there’s a cutoff at the size of the lattice spacing. Quarks exist at the intersections of grid lines; gluons exist along the links connecting grid points.
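
    As a very rough illustration of "quarks at the intersections, gluons along the links," here is a toy Python sketch that simply counts the degrees of freedom on a small periodic lattice. It is bookkeeping only, not a real Lattice QCD calculation, and the lattice sizes are arbitrary examples; it shows why shrinking the lattice spacing (more points in the same volume) quickly demands more computing power.

        # Toy bookkeeping for a lattice discretization of spacetime: quark
        # degrees of freedom sit on lattice sites, while gluon "link variables"
        # sit on the links joining neighbouring sites.
        import itertools

        def sites_and_links(n_per_side, dims=4):
            """Enumerate the sites and nearest-neighbour links of a periodic lattice."""
            sites = list(itertools.product(range(n_per_side), repeat=dims))
            links = []
            for site in sites:
                for direction in range(dims):       # one forward link per direction
                    neighbour = list(site)
                    neighbour[direction] = (neighbour[direction] + 1) % n_per_side
                    links.append((site, tuple(neighbour)))
            return sites, links

        for n in (4, 8):   # halving the spacing at fixed volume doubles n per side
            sites, links = sites_and_links(n)
            print(f"{n}^4 lattice: {len(sites):6d} quark sites, {len(links):6d} gluon links")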

    As your computing power increases, you can make the lattice spacing smaller, which improves your calculational accuracy. Over the past three decades, this technique has led to an explosion of solid predictions, including the masses of light nuclei and the reaction rates of fusion under specific temperature and energy conditions. The mass of the proton, from first principles, can now be theoretically predicted to within 2%.

    12
    As computational power and Lattice QCD techniques have improved over time, so has the accuracy to which various quantities about the proton, such as its component spin contributions, can be computed. By reducing the lattice spacing size, which can be done simply by raising the computational power employed, we can better predict the mass of not only the proton, but of all the baryons and mesons. (LABORATOIRE DE PHYSIQUE DE CLERMONT / ETM COLLABORATION)

    It’s true that the individual quarks, whose masses are determined by their coupling to the Higgs boson, cannot even account for 1% of the mass of the proton. Rather, it’s the strong force, described by the interactions between quarks and the gluons that mediate them, that is responsible for practically all of it.
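
    To put a number on that "cannot even account for 1%" statement, here is a minimal Python sketch. The quark masses used are commonly quoted approximate values (my assumptions, not figures from the article), and the exact percentage depends on which mass definitions you adopt:

        # Sum of the three valence-quark masses versus the proton mass, using
        # commonly quoted present-day values in MeV/c^2 (illustrative inputs).
        m_up, m_down = 2.2, 4.7      # approximate current-quark masses, MeV/c^2
        m_proton = 938.27            # proton mass, MeV/c^2

        valence = 2 * m_up + m_down  # a proton is two up quarks and one down quark
        print(f"Valence quark masses: {valence:.1f} MeV/c^2")
        print(f"Fraction of the proton's mass: {100 * valence / m_proton:.2f}%")
        # Roughly 1%: essentially everything else is binding energy of the strong force.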

    The strong nuclear force is the most powerful interaction in the entire known Universe. When you go inside a particle like the proton, it’s so powerful that it — not the mass of the proton’s constituent particles — is primarily responsible for the total energy (and therefore mass) of the normal matter in our Universe. Quarks may be point-like, but the proton is huge by comparison: roughly 8.4 × 10^-16 m in radius. Confining its component particles, which the binding energy of the strong force does, is what’s responsible for 99.8% of the proton’s mass.

    See the full article here.



     
  • richardmitnick 12:52 pm on May 4, 2019 Permalink | Reply
    Tags: "At Last, , , , , Ethan Siegel, Scientists Have Found The Galaxy’s Missing Exoplanets: Cold Gas Giants"   

    From Ethan Siegel: “At Last, Scientists Have Found The Galaxy’s Missing Exoplanets: Cold Gas Giants” 

    From Ethan Siegel
    Apr 30, 2019

    1
    There are four known exoplanets orbiting the star HR 8799, all of which are more massive than the planet Jupiter. These planets were all detected by direct imaging taken over a period of seven years, with the periods of these worlds ranging from decades to centuries. (JASON WANG / CHRISTIAN MAROIS)

    Our outer Solar System, from Jupiter to Neptune, isn’t unique after all.

    In the early 1990s, scientists began detecting the first planets orbiting stars other than the Sun: exoplanets. The easiest ones to see had the largest masses and the shortest orbits, as those are the planets with the greatest observable effects on their parent stars. The second type of planet to be found was at the other extreme: massive enough to emit its own infrared light, but so distant from its star that it could be independently resolved by a powerful enough telescope.

    Today, there are over 4,000 known exoplanets, but the overwhelming majority either orbit very close to or very far from their parent star. At long last, however, a team of scientists has discovered a bevy of those missing worlds [Astronomy and Astrophysics]: at the same distance our own Solar System’s gas giants orbit. Here’s how they did it.

    2
    In our own Solar System, the planets Jupiter and Saturn produce the greatest gravitational influence on the Sun, which will lead to our parent star moving relative to the Solar System’s center-of-mass by a substantial amount over the timescales it takes those giant planets to orbit. This motion results in a periodic redshift and blueshift that should be detectable over long enough observational timescales. (NASA’S THE SPACE PLACE)

    When you look at a star, you’re not simply seeing the light it emits from one constant, point-like surface. Instead, there’s a lot of physics going on inside that contributes to what you see.

    the star itself isn’t a solid surface, but emits the light you see from many layers extending down hundreds or even thousands of kilometers,
    the star itself rotates, meaning one side moves towards you and the other away from you,
    the star has planets that move around it, occasionally blocking a portion of its light,
    the orbiting planets also gravitationally tug on the star, causing it to periodically “wobble” in time with the planet orbiting it,
    and the star moves throughout the galaxy, changing its motion relative to us.

    All of these, in some way, matter for detecting planets around a star.

    3
    At the photosphere, we can observe the properties, elements, and spectral features present at the outermost layers of the Sun. The top of the photosphere is about 4400 K, while the bottom, 500 km down, is more like 6000 K. The solar spectrum is a sum of all of these blackbodies, and every star we know of has a photosphere with similar properties. (NASA’S SOLAR DYNAMICS OBSERVATORY / GSFC)

    NASA/SDO

    That first point, which might seem the least important, is actually vital to the way we detect and confirm exoplanets. Our Sun, like all stars, is hotter towards the core and cooler towards the limb. At the hottest temperatures, all the atoms inside the star are fully ionized, but as you move to the outer, cooler portions, electrons remain in bound states.

    With the energy relentlessly coming from its environment, these electrons can move to different orbitals, absorbing a portion of the star’s energy. When they do, they leave a characteristic signature in the star’s light spectrum: an absorption feature. When we look at the absorption lines of stars, they can tell us what elements they’re made of, what temperature they’re emitting at, and how quickly they’re moving, both rotationally and with respect to our motion.

    4
    The solar spectrum shows a significant number of features, each corresponding to absorption properties of a unique element in the periodic table or a molecule or ion with electrons bound to it. Absorption features are redshifted or blueshifted if the object moves towards or away from us. (NIGEL A. SHARP, NOAO/NSO/KITT PEAK FTS/AURA/NSF)

    Kitt Peak National Observatory, in the Quinlan Mountains of the Arizona-Sonoran Desert on the Tohono O’odham Nation, 88 kilometers (55 mi) west-southwest of Tucson, Arizona, altitude 2,096 m (6,877 ft)

    The more accurately you can measure the wavelength of a particular absorption feature, the more accurately you can determine the star’s velocity relative to your line-of-sight. If the star you’re observing moves towards you, that light gets shifted towards shorter wavelengths: a blueshift. Similarly, if the star you’re monitoring is moving away from you, that light will be shifted towards longer wavelengths: a redshift.

    This is simply the Doppler shift, which occurs for all waves. Whenever there’s relative motion between the source and the observer, the waves received will either be stretched towards longer or shorter wavelengths compared to what was emitted. This is true for sound waves when the ice cream truck goes by, and it’s equally true for light waves when we observe another star.
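
    The relationship is simple enough to write down directly. Below is a minimal Python sketch of the non-relativistic Doppler formula; the 550 nm line and the example velocities (a typical hot-Jupiter-induced wobble, and the ~12.5 m/s wobble Jupiter induces on the Sun) are illustrative assumptions rather than numbers from this article.

        # Non-relativistic Doppler shift: delta_lambda / lambda_0 = v / c.
        # How far does an absorption line at 550 nm move for a given
        # line-of-sight velocity of the star?
        C = 299_792_458.0        # speed of light, m/s
        LAMBDA_0 = 550e-9        # rest wavelength of some absorption line, m

        def doppler_shift(v_radial):
            """Wavelength shift, in metres, for a star moving at v_radial (m/s) along the line of sight."""
            return LAMBDA_0 * v_radial / C

        for label, v in [("hot-Jupiter-sized wobble (~100 m/s)", 100.0),
                         ("Jupiter's tug on the Sun (~12.5 m/s)", 12.5)]:
            shift = doppler_shift(v)
            print(f"{label}: shift = {shift * 1e12:.3f} pm "
                  f"({100 * shift / LAMBDA_0:.1e}% of the wavelength)")

    Even the larger of those shifts is a fraction of a picometre, which is why exquisitely stable spectrographs and precise wavelength measurements matter so much for this technique.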

    5
    A light-emitting object moving relative to an observer will have the light that it emits appear shifted dependent on the location of an observer. Someone on the left will see the source moving away from it, and hence the light will be redshifted; someone to the right of the source will see it blueshifted, or shifted to higher frequencies, as the source moves towards it. (WIKIMEDIA COMMONS USER TXALIEN)

    When the first detection of exoplanets around stars was announced, it came from an extraordinary application of this property of matter and light. If you had an isolated star that moved through space, the wavelength of these absorption lines would only change over long periods of time: as the star we were watching moved relative to our Sun in the galaxy.

    But if the star weren’t isolated, but rather had planets orbiting it, those planets would cause the star to wobble in its orbit. As the planet moved in an ellipse around the star, the star would similarly move in a (much smaller) ellipse in time with the planet: keeping their mutual center-of-mass in the same place.

    6
    The radial velocity (or stellar wobble) method for finding exoplanets relies on measuring the motion of the parent star, as caused by the gravitational influence of its orbiting planets. Even though the planet itself may not be visible directly, their unmistakable influence on the star leaves a measurable signal behind in the periodic relative redshift and blueshift of the photons coming from it. (ESO)

    In a system with multiple planets, these patterns would simply superimpose themselves atop one another; there would be a separate signal for every planet you could identify. The strongest signals would come from the most massive planets, and the fastest signals — from the planets orbiting most closely to their stars — would be the easiest to identify.

    These are the properties that the very first exoplanets had: the so-called “hot Jupiters” of the galaxy. They were the easiest to find because, with very large masses, they could change the motion of their stars by hundreds or even thousands of meters-per-second. Similarly, with short periods and close orbital distances, many cycles of sinusoidal motion could be revealed with only a few weeks or months of observations. Massive, inner worlds are the easiest to find.
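
    A hedged sketch of how those superimposed signals look in practice: the Python below simply adds two sinusoidal radial-velocity contributions together. The amplitudes and periods are made-up illustrative values, chosen only to contrast a close-in giant with a distant one.

        # Toy radial-velocity curve for a star with two planets on circular
        # orbits: each planet contributes (approximately) a sinusoid, and the
        # observed signal is just the sum of the individual contributions.
        import math

        def rv_signal(t_days, amplitude, period_days, phase=0.0):
            """Radial-velocity contribution (m/s) of one planet on a circular orbit."""
            return amplitude * math.sin(2 * math.pi * t_days / period_days + phase)

        # Illustrative numbers only: a 'hot Jupiter' (large, fast signal) plus
        # a colder, long-period giant (smaller, much slower signal).
        hot_jupiter = dict(amplitude=200.0, period_days=4.0)
        cold_giant = dict(amplitude=30.0, period_days=4000.0)

        for t in range(0, 21, 5):   # sample the star's motion over three weeks
            total = rv_signal(t, **hot_jupiter) + rv_signal(t, **cold_giant)
            print(f"day {t:2d}: radial velocity = {total:+7.1f} m/s")

    A few weeks of data like this easily reveal the fast, large-amplitude planet, while the slow drift from the long-period companion only emerges after years or decades of monitoring.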

    7
    A composite image of the first exoplanet ever directly imaged (red) and its brown dwarf parent star, as seen in the infrared. A true star would be much physically larger and higher in mass than the brown dwarf shown here, but the large physical separation, which corresponds to a large angular separation at distances of under a few hundred light years, means that the world’s greatest current observatories make imaging like this possible. (EUROPEAN SOUTHERN OBSERVATORY (ESO))

    On the complete opposite end of the spectrum, some planets that are equal to or greater than Jupiter’s mass are extremely well-separated from their star: more distant than even Neptune is from the Sun. When you encounter a system such as this, the massive planet is so hot in its core that it can emit more infrared radiation than it reflects from the star it orbits.

    With a large enough separation, telescopes like Hubble can resolve both the main star and its large planetary companion. These two locations — the inner solar system and the extreme outer solar system — were the only places where we had found planets up until the explosion of exoplanets brought about by NASA’s Kepler spacecraft.

    NASA/Kepler Telescope, and K2 March 7, 2009 until November 15, 2018

    Until then, it was only high-mass planets, and only in the places where they aren’t found in our own Solar System.

    8
    Today, we know of over 4,000 confirmed exoplanets, with more than 2,500 of those found in the Kepler data. These planets range in size from larger than Jupiter to smaller than Earth. Yet because of the limitations on the size of Kepler and the duration of the mission, the majority of planets are very hot and close to their star, at small angular separations. TESS has the same issue with the first planets it’s discovering: they’re preferentially hot and in close orbits. Only through dedicated, long-period observations (or direct imaging) will we be able to detect planets with longer period (i.e., multi-year) orbits. (NASA/AMES RESEARCH CENTER/JESSIE DOTSON AND WENDY STENZEL; MISSING EARTH-LIKE WORLDS BY E. SIEGEL)

    NASA/MIT TESS replaced Kepler in search for exoplanets

    Kepler brought about a revolution because it used an entirely different method: the transit method.

    Planet transit. NASA/Ames

    When a planet passes in front of its parent star, relative to our line-of-sight, it blocks a tiny portion of the star’s light, revealing its presence to us. When the same planet transits its star multiple times, we can learn properties like its radius, orbital period, and the orbital distance from its star.
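
    The size of that dip follows from simple geometry: to a first approximation, the blocked fraction of light is the ratio of the planet's disc area to the star's. Here is a minimal Python sketch; the planetary and solar radii are standard textbook values supplied for illustration, not taken from the article.

        # Transit method: the fraction of starlight blocked is roughly
        # (R_planet / R_star)^2.
        R_SUN_KM = 696_000.0

        def transit_depth(planet_radius_km, star_radius_km=R_SUN_KM):
            """Fractional dip in brightness while the planet crosses the star."""
            return (planet_radius_km / star_radius_km) ** 2

        for name, radius_km in [("Jupiter", 69_911.0), ("Earth", 6_371.0)]:
            print(f"{name} transiting a Sun-like star: ~{transit_depth(radius_km):.2%} dip")

    Those two numbers (about 1% for a Jupiter-sized world versus roughly 0.01% for an Earth-sized one) help explain why giant planets are so much easier to pick out of transit data than small, rocky ones.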

    But this was limited, too. While it was capable of revealing very low-mass planets compared to the earlier (stellar wobble/radial velocity) method, the primary mission only lasted for three years. This meant that any planet that took longer than about a year to orbit its star couldn’t be seen by Kepler. Ditto for any planet that didn’t happen to block its star’s light from our perspective, which you’re less likely to get the farther away from the star you look.

    The intermediate distance planets, at the distance of Jupiter and beyond, were still elusive.

    9
    The planets of the Solar System are difficult to detect using present technology. Inner planets that are aligned with the observer’s line-of-sight must be large and massive enough to produce an observable effect, while outer worlds require long-period monitoring to reveal their presence. Even then, they need enough mass so that the stellar wobble technique is effective enough to reveal them. (SPACE TELESCOPE SCIENCE INSTITUTE, GRAPHICS DEPT.)

    That’s where a dedicated, long-period study of stars can come in to fill in that gap. A large team of scientists, led by Emily Rickman, conducted an enormous survey using the CORALIE spectrograph at La Silla observatory.

    ESO Swiss 1.2 meter Leonhard Euler Telescope at La Silla, using the CORALIE spectrograph

    They measured the light coming from a large number of stars within about 170 light-years on a nearly continuous basis, beginning in 1998.

    By using the same instrument and leaving virtually no long-term gaps in the data, long-term, precise Doppler measurements finally became possible. A total of five brand new planets, one confirmation of a suggested planet, and three updated planets were announced in this latest study, bringing the total number of Jupiter-or-larger planets beyond the Jupiter-Sun distance up to 26. It shows us what we’d always hoped for: that our Solar System isn’t so unusual in the Universe; it’s just difficult to observe and detect planets like the ones we have.

    10
    While close-in planets are typically discoverable with stellar wobble or transit method observations, and extreme outer planets can be found with direct imaging, these in-between worlds require long-period monitoring that’s just beginning now. These newly-discovered worlds, down the line, may become excellent candidates for direct imaging as well. (E. L. RICKMAN ET AL., A&A ACCEPTED (2019), ARXIV:1904.01573)

    Even with these latest results, however, we still aren’t sensitive to the worlds we actually have in our Solar System. While the periods of these new worlds range from 15 to 40 years, even the smallest one is nearly three times as massive as Jupiter. Until we develop more sensitive measurement capabilities and make those observations over decadal timescales, real-life Jupiters, Saturns, Uranuses and Neptunes will remain undetected.

    Our view of the Universe will always be incomplete, as the techniques we develop will always be inherently biased to favor detections in one type of system. But the irreplaceable asset that will open up more of the Universe to us isn’t technique-based at all; it’s simply an increase in observing time. With longer and more sensitive observations of stars, closely tracking their motions, we can reveal lower-mass planets and worlds at greater distances.

    This is true of both the stellar wobble/radial velocity method and also the transit method, which hopefully will reveal even smaller-mass worlds with longer periods. There is still so much to learn about the Universe, but every step we take brings us closer to understanding the ultimate truths about reality. Although we might have worried that our Solar System was in some way unusual, we now know one more way we’re not. Having gas giant worlds in the outer solar system may pose a challenge for detections, but those worlds are out there and relatively common. Perhaps, then, so are solar systems like our own.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

     
  • richardmitnick 2:04 pm on April 25, 2019 Permalink | Reply
    Tags: "Quarks Don’t Actually Have Colors", , Ethan Siegel,   

    From Ethan Siegel: “Quarks Don’t Actually Have Colors” 

    From Ethan Siegel
    Apr 25, 2019

    1
    A visualization of QCD illustrates how particle/antiparticle pairs pop out of the quantum vacuum for very small amounts of time as a consequence of Heisenberg uncertainty. Note that the quarks and antiquarks themselves come with specific color assignments that are always on opposite sides of the color wheel from one another. In the rules of the strong interaction, only colorless combinations are permitted in nature. (DEREK B. LEINWEBER)

    Red, green, and blue? What we call ‘color charge’ is far more interesting than that.

    At a fundamental level, reality is determined by only two properties of our Universe: the quanta that make up everything that exists and the interactions that take place between them. While the rules that govern all of this might appear complicated, the concept is extremely straightforward. The Universe is made up of discrete bits of energy that are bound up into quantum particles with specific properties, and those particles interact with one another according to the laws of physics that underlie our reality.

    Some of these quantum properties govern whether and how a particle will interact under a certain force. Everything has energy, and therefore everything experiences gravity. Only the particles with the right kinds of charges experience the other forces, however, as those charges are necessary for couplings to occur. In the case of the strong nuclear force, particles need a color charge to interact. Only, quarks don’t actually have colors. Here’s what’s going on instead.

    2
    The particles and antiparticles of the Standard Model are predicted to exist as a consequence of the laws of physics. Although we depict quarks, antiquarks and gluons as having colors or anticolors, this is only an analogy. The actual science is even more fascinating. (E. SIEGEL / BEYOND THE GALAXY)

    While we might not understand everything about this reality, we have uncovered all the particles of the Standard Model and the nature of the four fundamental forces — gravity, electromagnetism, the weak nuclear force, and the strong nuclear force — that govern their interactions. But not every particle experiences every interaction; you need the right type of charge for that.

    Of the four fundamental forces, gravitation is the most universal: every particle has an energy inherent to it, even massless particles like photons. So long as you have energy, you experience the gravitational force. Moreover, there's only one type of gravitational charge: positive energy (or mass). For this reason, the gravitational force is always attractive, and occurs between everything that exists in the Universe.

    3
    An animated look at how spacetime responds as a mass moves through it helps showcase exactly how, qualitatively, it isn’t merely a sheet of fabric. Instead, all of space itself gets curved by the presence and properties of the matter and energy within the Universe. Note that the gravitational force is always attractive, as there is only one (positive) type of mass/energy. (LUCASVB)

    Electromagnetism is a little more complicated. Instead of one type of fundamental charge, there are two: positive and negative electric charges. When like charges (positive and positive or negative and negative) interact, they repel, while when opposite charges (positive and negative) interact, they attract.

    This offers an exciting possibility that gravity doesn’t: the ability to have a bound state that doesn’t exert a net force on an external, separately-charged object. When equal amounts of positive and negative charges bind together into a single system, you get a neutral object: one with no net charge to it. Free charges exert attractive and/or repulsive forces, but uncharged systems do not. That’s the biggest difference between gravitation and electromagnetism: the ability to have neutral systems composed of non-zero electric charges.

    4
    Newton’s law of universal gravitation (L) and Coulomb’s law for electrostatics (R) have almost identical forms, but the fundamental difference of one type vs. two types of charge open up a world of new possibilities for electromagnetism. (DENNIS NILSSON / RJB1 / E. SIEGEL)

    If we were to envision these two forces side-by-side, you might think of electromagnetism as having two directions, while gravitation only has a single direction. Electric charges can be positive or negative, and the various combinations of positive-positive, positive-negative, negative-positive, and negative-negative allow for both attraction and repulsion. Gravitation, on the other hand, only has one type of charge, and therefore only one type of force: attraction.
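
    A quick worked comparison shows just how lopsided the two forces are for a single proton and electron (using standard values; the ratio is independent of their separation):

    \[
    \frac{F_{\rm Coulomb}}{F_{\rm gravity}} = \frac{e^2/(4\pi\epsilon_0)}{G\,m_e\,m_p} \approx \frac{2.3\times10^{-28}\ \mathrm{N\,m^2}}{(6.67\times10^{-11})(9.11\times10^{-31})(1.67\times10^{-27})\ \mathrm{N\,m^2}} \approx 2\times10^{39}.
    \]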

    Even though there are two types of electric charge, it only takes one particle to take care of the attractive and repulsive action of electromagnetism: the photon. The electromagnetic force has a relatively simple structure — two charges, where like ones repel and opposites attract — and a single particle, the photon, can account for both electric and magnetic effects. In theory, a single particle, the graviton, could do the same thing for gravitation.

    5
    Today, Feynman diagrams are used in calculating every fundamental interaction spanning the strong, weak, and electromagnetic forces, including in high-energy and low-temperature/condensed conditions. The electromagnetic interactions, shown here, are all governed by a single force-carrying particle: the photon. (DE CARVALHO, VANUILDO S. ET AL. NUCL.PHYS. B875 (2013) 738–756)

    But then, on an entirely different footing, there’s the strong force. It’s similar to both gravity and electromagnetism, in the sense that there is a new type of charge and new possibilities for a force associated with it.

    If you think about an atomic nucleus, you must immediately recognize that there must be an additional force that's stronger than the electric force; otherwise the nucleus, made of protons and neutrons, would fly apart due to electric repulsion. The creatively-named strong nuclear force is the responsible party, as quarks, the constituents of protons and neutrons, have both electric charges and a new type of charge: color charge.

    6
    The red-green-blue color analogy, similar to the dynamics of QCD, is how certain phenomena within and beyond the Standard Model are often conceptualized. The analogy is often taken even further than the concept of color charge, such as via the extension known as technicolor. (WIKIPEDIA USER BB3CXV)

    Contrary to what you might expect, though, there’s no color involved at all. The reason we call it color charge is because instead of one fundamental, attractive type of charge (like gravity), or two opposite types of fundamental charge (positive and negative, like electromagnetism), the strong force is governed by three fundamental types of charge, and they obey very different rules than the other, more familiar forces.

    For electric charges, a positive charge can be cancelled out by an equal and opposite charge — a negative charge — of the same magnitude. But for color charges, you have three fundamental types of charge. In order to cancel out a single color charge of one type, you need one of each of the second and third types. The combination of equal numbers of all three types results in a combination that we call “colorless,” and only colorless combinations form stable composite particles.

    7
    Quarks and antiquarks, which interact with the strong nuclear force, have color charges that correspond to red, green and blue (for the quarks) and cyan, magenta and yellow (for the antiquarks). Any colorless combination, of either red + green + blue, cyan + yellow + magenta, or the appropriate color/anticolor combination, is permitted under the rules of the strong force. (ATHABASCA UNIVERSITY / WIKIMEDIA COMMONS)

    This works independently for quarks, which have a positive color charge, and antiquarks, which have a negative color charge. If you picture a color wheel, you might put red, green and blue at three equidistant locations, like an equilateral triangle. But between red and green would be yellow; between green and blue would be cyan; between red and blue would be magenta.

    These in-between color charges correspond to the colors of the antiparticles: the anticolors. Cyan is the same as anti-red; magenta is the same as anti-green; yellow is the same as anti-blue. Just as you could add up three quarks with red, green and blue colors to make a colorless combination (like a proton), you could add up three antiquarks with cyan, magenta and yellow colors to make a colorless combination (like an antiproton).

    8
    Combinations of three quarks (RGB) or three antiquarks (CMY) are colorless, as are appropriate combinations of quarks and antiquarks. The gluon exchanges that keep these entities stable are quite complicated. (MASCHEN / WIKIMEDIA COMMONS)

    If you know anything about color, you might start thinking of other ways to generate a colorless combination. If three different colors or three different anticolors could work, maybe the right color-anticolor combination could get you there?

    In fact, it can. You could mix together the right combination of a quark and an antiquark to produce a colorless composite particle, known as a meson. This works, because:

    red and cyan,
    green and magenta,
    and blue and yellow

    are all colorless combinations. So long as you add up to a colorless net charge, the rules of the strong force permit you to exist.

    9
    The combination of a quark (RGB) and a corresponding antiquark (CMY) always ensure that the meson is colorless. (ARMY1987 / TIMOTHYRIAS OF WIKIMEDIA COMMONS)

    This might start your mind down some interesting paths. If red + green + blue is a colorless combination, but red + cyan is colorless too, does that mean that green + blue is the same as cyan?

    That’s absolutely right. It means that you can have a single (colored) quark paired with any of the following:

    two additional quarks,
    one antiquark,
    three additional quarks and one antiquark,
    one additional quark and two antiquarks,
    five additional quarks,

    or any other combination that leads to a colorless total. When you hear about exotic particles like tetraquarks (two quarks and two antiquarks) or pentaquarks (four quarks and one antiquark), know that they obey these rules.
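
    As a toy sketch of that bookkeeping (just the counting rule described above, not the full SU(3) group theory), you can tally the net amount of each color and call a combination colorless when the three tallies come out equal:

    from collections import Counter

    COLORS = ("red", "green", "blue")
    ANTI   = {"cyan": "red", "magenta": "green", "yellow": "blue"}  # each anticolor cancels one color

    def is_colorless(charges):
        """Colorless when the net red, green, and blue tallies are all equal."""
        net = Counter()
        for c in charges:
            if c in COLORS:
                net[c] += 1
            else:
                net[ANTI[c]] -= 1
        return len({net[c] for c in COLORS}) == 1

    print(is_colorless(["red", "green", "blue"]))                   # baryon: True
    print(is_colorless(["red", "cyan"]))                            # meson: True
    print(is_colorless(["red", "green", "blue", "red", "cyan"]))    # pentaquark-like: True
    print(is_colorless(["red", "green"]))                           # two lone quarks: False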

    10
    With six quarks and six antiquarks to choose from, where their spins can sum to 1/2, 3/2 or 5/2, there are expected to be more pentaquark possibilities than all baryon and meson possibilities combined. The only rule, under the strong force, is that all such combinations must be colorless. (CERN / LHC / LHCb COLLABORATION)

    CERN/LHCb detector

    But color is only an analogy, and that analogy will actually break down pretty quickly if you start looking at it in too much detail. For example, the way the strong force works is by exchanging gluons, which carry a color-anticolor combination with them. If you are a blue quark and you emit a gluon, you might transform into a red quark, which means the gluon you emitted contained a cyan (anti-red) and a blue color charge, enabling you to conserve color.

    You might think, then, with three colors and three anticolors, that there would be nine possible types of gluon that you could have. After all, if you matched each of red, green and blue with each of cyan, magenta and yellow, there are nine possible combinations. This is a good first guess, and it’s almost right.

    11
    The strong force, operating as it does because of the existence of ‘color charge’ and the exchange of gluons, is responsible for the force that holds atomic nuclei together. A gluon must consist of a color/anticolor combination in order for the strong force to behave as it must, and does. (WIKIMEDIA COMMONS USER QASHQAIILOVE)

    As it turns out, though, there are only eight gluons that exist. Imagine you’re a red quark, and you emit a red/magenta gluon. You’re going to turn the red quark into a green quark, because that’s how you conserve color. That gluon will then find a green quark, where the magenta will annihilate with the green and leave the red color behind. In this fashion, colors get exchanged between interacting colored particles.

    This line of thinking is only good for six of the gluons, though:

    red/magenta,
    red/yellow,
    green/cyan,
    green/yellow,
    blue/cyan, and
    blue/magenta.

    When you run into the other three possibilities — red/cyan, green/magenta, and blue/yellow — there’s a problem: they’re all colorless.

    12

    When you have three color/anticolor combinations that are possible and colorless, they will mix together, producing two ‘real’ gluons that are asymmetric between the various color/anticolor combinations, and one that’s completely symmetric. Only the two antisymmetric combinations result in real particles. (E. SIEGEL)

    In physics, whenever you have particles that have the same quantum numbers, they mix together. These three types of gluons, all being colorless, absolutely do mix together. The details of how they mix are quite deep and go beyond the scope of a non-technical article, but you wind up with two combinations that are an unequal mix of the three different colors and anticolors, along with one combination that’s a mix of all the colors/anticolor pairs equally.
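
    In one conventional (textbook, not unique) choice of basis, the two surviving combinations and the excluded, fully symmetric singlet can be written as:

    \[
    \frac{1}{\sqrt{2}}\left(r\bar{r} - g\bar{g}\right), \qquad
    \frac{1}{\sqrt{6}}\left(r\bar{r} + g\bar{g} - 2\,b\bar{b}\right), \qquad
    \text{excluded:}\quad \frac{1}{\sqrt{3}}\left(r\bar{r} + g\bar{g} + b\bar{b}\right).
    \]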

    That last one is truly colorless, and cannot physically interact with any of the particles or antiparticles with color charges. Therefore, there are only eight physical gluons. The exchange of gluons between quarks (and/or antiquarks), and of colorless particles between other colorless particles, is literally what binds atomic nuclei together.

    13

    Individual protons and neutrons may be colorless entities, but there is still a residual strong force between them. All the known matter in the Universe can be divided into atoms, which can be divided into nuclei and electrons, where nuclei can be divided even farther. We may not have even yet reached the limit of division, or the ability to cut a particle into multiple components, but what we call color charge, or charge under the strong interactions, appears to be a fundamental property of quarks, antiquarks and gluons. (WIKIMEDIA COMMONS USER MANISHEARTH)

    We may call it color charge, but the strong nuclear force obeys rules that are unique among all the phenomena in the Universe. While we ascribe colors to quarks, anticolors to antiquarks, and color-anticolor combinations to gluons, it’s only a limited analogy. In truth, none of the particles or antiparticles have a color at all, but merely obey the rules of an interaction that has three fundamental types of charge, and only combinations that have no net charge under this system are allowed to exist in nature.

    This intricate interaction is the only known force that can overcome the electromagnetic force and keep two particles of like electric charge bound together into a single, stable structure: the atomic nucleus. Quarks don’t actually have colors, but they do have charges as governed by the strong interaction. Only with these unique properties can the building blocks of matter combine to produce the Universe we inhabit today.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

     
  • richardmitnick 8:33 am on April 25, 2019 Permalink | Reply
    Tags: "Could An Incompleteness In Quantum Mechanics Lead To Our Next Scientific Revolution?", , Ethan Siegel, , ,   

    From Ethan Siegel: “Could An Incompleteness In Quantum Mechanics Lead To Our Next Scientific Revolution?” 

    From Ethan Siegel
    Apr 24, 2019

    1
    The proton’s structure, modeled along with its attendant fields, show how even though it’s made out of point-like quarks and gluons, it has a finite, substantial size which arises from the interplay of the quantum forces and fields inside it. The proton, itself, is a composite, not fundamental, quantum particle. (BROOKHAVEN NATIONAL LABORATORY)

    A single thought experiment reveals a paradox. Could quantum gravity be the solution?

    Sometimes, if you want to understand how nature truly works, you need to break things down to the simplest levels imaginable. The macroscopic world is composed of particles that are — if you divide them until they can be divided no more — fundamental. They experience forces that are determined by the exchange of additional particles (or the curvature of spacetime, for gravity), and react to the presence of objects around them.

    At least, that’s how it seems. The closer two objects are, the greater the forces they exert on one another. If they’re too far away, the forces drop off to zero, just like your intuition tells you they should. This is called the principle of locality, and it holds true in almost every instance. But in quantum mechanics, it’s violated all the time. Locality may be nothing but a persistent illusion, and seeing through that facade may be just what physics needs.

    2
    Quantum gravity tries to combine Einstein’s general theory of relativity with quantum mechanics. Quantum corrections to classical gravity are visualized as loop diagrams, as the one shown here in white. We typically view objects that are close to one another as capable of exerting forces on one another, but that might be an illusion, too. (SLAC NATIONAL ACCELERATOR LAB)

    Imagine that you had two objects located in close proximity to one another. They would attract or repel one another based on their charges and the distance between them. You might visualize this as one object generating a field that affects the other, or as two objects exchanging particles that impart either a push or a pull to one or both of them.

    You’d expect, of course, that there would be a speed limit to this interaction: the speed of light. Relativity gives you no other way out, since the speed at which the particles responsible for forces propagate is limited by the speed they can travel, which can never exceed the speed of light for any particle in the Universe. It seems so straightforward, and yet the Universe is full of surprises.

    3
    An example of a light cone, the three-dimensional surface of all possible light rays arriving at and departing from a point in spacetime. The more you move through space, the less you move through time, and vice versa. Only things contained within your past light-cone can affect you today; only things contained within your future light-cone can be perceived by you in the future. (WIKIMEDIA COMMONS USER MISSMJ)

    We have this notion of cause-and-effect that’s been hard-wired into us by our experience with reality. Physicists call this causality, and it’s one of the rare physics ideas that actually conforms to our intuition. Every observer in the Universe, from its own perspective, has a set of events that exist in its past and in its future.

    In relativity, these are events contained within either your past light-cone (for events that can causally affect you) or your future light-cone (for events that you can causally effect). Events that can be seen, perceived, or can otherwise have an effect on an observer are known as causally-connected. Signals and physical effects, both from the past and into the future, can propagate at the speed of light, but no faster. At least, that’s what your intuitive notions about reality tell you.

    4
    Schrödinger’s cat. Inside the box, the cat will be either alive or dead, depending on whether a radioactive particle decayed or not. If the cat were a true quantum system, the cat would be neither alive nor dead, but in a superposition of both states until observed. (WIKIMEDIA COMMONS USER DHATFIELD)

    But in the quantum Universe, this notion of relativistic causality isn’t as straightforward or universal as it would seem. There are many properties that a particle can have — such as its spin or polarization — that are fundamentally indeterminate until you make a measurement. Prior to observing the particle, or interacting with it in such a way that it’s forced to be in either one state or the other, it’s actually in a superposition of all possible outcomes.

    Well, you can also take two quantum particles and entangle them, so that these very same quantum properties are linked between the two entangled particles. Whenever you interact with one member of the entangled pair, you not only gain information about which particular state it’s in, but also information about its entangled partner.

    5
    By creating two entangled photons from a pre-existing system and separating them by great distances, we can ‘teleport’ information about the state of one by measuring the state of the other, even from extraordinarily different locations. (MELISSA MEISTER, OF LASER PHOTONS THROUGH A BEAM SPLITTER)

    This wouldn’t be so bad, except for the fact that you can set up an experiment as follows.

    You can create your pair of entangled particles at a particular location in space and time.
    You can transport them an arbitrarily large distance apart from one another, all while maintaining that quantum entanglement.
    Finally, you can make those measurements (or force those interactions) as close to simultaneously as possible.

    In every instance where you do this, you’ll find the member you measure in a particular state, and instantly “know” some information about the other entangled member.
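
    Here's a minimal toy simulation of that same-axis case (it only illustrates the perfect anticorrelation and the 50/50 randomness each observer sees locally; it does not capture the different-axis correlations that violate Bell's inequalities):

    import random

    def measure_entangled_pair():
        """Toy model: both members of a spin-singlet pair measured along the same axis.
        Each local outcome is random (50/50), but the two are always opposite."""
        alice = random.choice(("+", "-"))
        bob = "-" if alice == "+" else "+"
        return alice, bob

    results = [measure_entangled_pair() for _ in range(100_000)]
    alice_up_fraction = sum(1 for a, _ in results if a == "+") / len(results)
    always_opposite = all(a != b for a, b in results)

    print(f"Alice measures '+' about {alice_up_fraction:.2f} of the time (no usable signal locally)")
    print(f"Outcomes always anticorrelated along the same axis: {always_opposite}")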

    6
    A photon can have two types of circular polarizations, arbitrarily defined so that one is + and one is -. By devising an experiment to test correlations between the directional polarization of entangled particles, one can attempt to distinguish between certain formulations of quantum mechanics that lead to different experimental results.(DAVE3457 / WIKIMEDIA COMMONS)

    What’s puzzling is that you cannot check whether this information is true or not until much later, because it takes a finite amount of time for a light signal to arrive from the other member. When the signal does arrive, it always confirms what you’d known just by measuring your member of the entangled pair: your expectation for the state of the distant particle agreed 100% with what its measurement indicated.

    Only, there seems to be a problem. You “knew” information about a measurement that was taking place non-locally, which is to say that the measurement that occurred is outside of your light cone. Yet somehow, you weren’t entirely ignorant about what was going on over there. Even though no information was transmitted faster than the speed of light, this measurement describes a troubling truth about quantum physics: it is fundamentally a non-local theory.

    7
    Schematic of the third Aspect experiment testing quantum non-locality. Entangled photons from the source are sent to two fast switches that direct them to polarizing detectors. The switches change settings very rapidly, effectively changing the detector settings for the experiment while the photons are in flight. (CHAD ORZEL)

    There are limits to this, of course.

    It isn’t as clean as you’d want: measuring the state of your particle doesn’t tell you the exact state of its entangled partner, just probabilistic information about it.

    There is still no way to send a signal faster than light; you can only use this non-locality to predict a statistical average of entangled particle properties.

    And even though it has been the dream of many, from Einstein to Schrödinger to de Broglie, no one has ever come up with an improved version of quantum mechanics that tells you anything more than its original formulation.

    But there are many who still dream that dream.

    8
    If two particles are entangled, they have complementary wavefunction properties, and measuring one places meaningful constraints on the properties of the other. (WIKIMEDIA COMMONS USER DAVID KORYAGIN)

    One of them is Lee Smolin, who cowrote a paper [Physical Review D] way back in 2003 that showed an intriguing link between general ideas in quantum gravity and the fundamental non-locality of quantum physics. Although we don’t have a successful quantum theory of gravity, we have established a number of important properties concerning how a quantum theory of gravity will behave and still be consistent with the known Universe.

    9
    A variety of quantum interpretations and their differing assignments of a variety of properties. Despite their differences, there are no experiments known that can tell these various interpretations apart from one another, although certain interpretations, like those with local, real, deterministic hidden variables, can be ruled out. (ENGLISH WIKIPEDIA PAGE ON INTERPRETATIONS OF QUANTUM MECHANICS)

    There are many reasons to be skeptical that this conjecture will hold up to further scrutiny. For one, we don’t truly understand quantum gravity at all, and anything we can say about it is extraordinarily provisional. For another, replacing the non-local behavior of quantum mechanics with the non-local behavior of quantum gravity is arguably making the problem worse, not better. And, as a third reason, there is nothing thought to be observable or testable about these non-local variables that Markopoulou and Smolin claim could explain this bizarre property of the quantum Universe.

    Fortunately, we’ll have the opportunity to hear the story direct from Smolin himself and evaluate it on our own. You see, at 7 PM ET (4 PM PT) on April 17, Lee Smolin is giving a public lecture on exactly this topic at Perimeter Institute, and you can watch it right here.



    I’ll be watching along with you, curious about what Smolin is calling Einstein’s Unfinished Revolution, which is the ultimate quest to supersede our two current (but mutually incompatible) descriptions of reality: General Relativity and quantum mechanics.

    10

    Best of all, I’ll be giving you my thoughts and commentary below in the form of a live-blog, beginning 10 minutes before the start of the talk. [See the full article.]

    Find out where we are in the quest for quantum gravity, and what promises it may (or may not) have for revolutionizing one of the greatest counterintuitive mysteries about the quantum nature of reality!

    Thanks for joining me for an interesting lecture and discussions on science, and just maybe, someday, we’ll have some interesting progress to report on this topic. Until then, you don’t have to shut up, but you still do have to calculate!

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

     
  • richardmitnick 12:30 pm on April 13, 2019 Permalink | Reply
    Tags: "Ask Ethan: What Is An Electron?", , Electrons are leptons and thus fermions, Electrons were the first fundamental particles discovered, Ethan Siegel, , , Sometimes the simplest questions of all are the most difficult to meaningfully answer.,   

    From Ethan Siegel: “Ask Ethan: What Is An Electron?” 

    From Ethan Siegel
    Apr 13, 2019

    1
    This artist’s illustration shows an electron orbiting an atomic nucleus, where the electron is a fundamental particle but the nucleus can be broken up into still smaller, more fundamental constituents. (NICOLLE RAGER FULLER, NSF)

    Sometimes, the simplest questions of all are the most difficult to meaningfully answer.

    If you were to take any tiny piece of matter in our known Universe and break it up into smaller and smaller constituents, you’d eventually reach a stage where what you were left with was indivisible. Everything on Earth is composed of atoms, which can further be divided into protons, neutrons, and electrons. While protons and neutrons can still be divided farther, electrons cannot. They were the first fundamental particles discovered, and over 100 years later, we still know of no way to split electrons apart. But what, exactly, are they? That’s what Patreon supporter John Duffield wants to know, asking:

    “Please will you describe the electron… explaining what it is, and why it moves the way it does when it interacts with a positron. If you’d also like to explain why it moves the way that it does in an electric field, a magnetic field, and a gravitational field, that would be nice. An explanation of charge would be nice too, and an explanation of why the electron has mass.”

    Here’s what we know, at the deepest level, about one of the most common fundamental particles around.

    2
    The hydrogen atom, one of the most important building blocks of matter, exists in an excited quantum state with a particular magnetic quantum number. Even though its properties are well-defined, certain questions, like ‘where is the electron in this atom,’ only have probabilistically-determined answers. (WIKIMEDIA COMMONS USER BERNDTHALLER)

    In order to understand the electron, you have to first understand what it means to be a particle. In the quantum Universe, everything is both a particle and a wave simultaneously, where many of its exact properties cannot be perfectly known. The more you try to pin down a particle’s position, the more you destroy information about its momentum, and vice versa. If the particle is unstable, the duration of its lifetime will affect how well you’re able to know its mass or intrinsic energy. And if the particle has an intrinsic spin to it, measuring its spin in one direction destroys all the information you could know about how it’s spinning in the other directions.

    3
    Electrons, like all spin-1/2 fermions, have two possible spin orientations when placed in a magnetic field. Performing an experiment like this determines their spin orientation in one dimension, but destroys any information about their spin orientation in the other two dimensions as a result. This is a frustrating property inherent to quantum mechanics.(CK-12 FOUNDATION / WIKIMEDIA COMMONS)

    If you measure it at one particular moment in time, information about its future properties cannot be known to arbitrary accuracy, even if the laws governing it are completely understood. In the quantum Universe, many physical properties have a fundamental, inherent uncertainty to them.

    But that’s not true of everything. The quantum rules that govern the Universe are more complex than just the counterintuitive parts, like Heisenberg uncertainty.

    4
    An illustration between the inherent uncertainty between position and momentum at the quantum level. There is a limit to how well you can measure these two quantities simultaneously, and uncertainty shows up in places where people often least expect it. (E. SIEGEL / WIKIMEDIA COMMONS USER MASCHEN)

    The Universe is made up of quanta, which are those components of reality that cannot be further divided into smaller components. The most successful model of those smallest, fundamental components that compose our reality comes to us in the form of the creatively-named Standard Model.

    In the Standard Model, there are two separate classes of quanta:

    the particles that make up the matter and antimatter in our material Universe, and
    the particles responsible for the forces that govern their interactions.

    The former class of particles are known as fermions, while the latter class are known as bosons.

    5
    The particles of the standard model, with masses (in MeV) in the upper right. The fermions make up the three leftmost columns and possess half-integer spins; the bosons populate the two columns on the right and have integer spins. While all particles have a corresponding antiparticle, only the fermions can be matter or antimatter. (WIKIMEDIA COMMONS USER MISSMJ, PBS NOVA, FERMILAB, OFFICE OF SCIENCE, UNITED STATES DEPARTMENT OF ENERGY, PARTICLE DATA GROUP)

    Even though, in the quantum Universe, many properties have an intrinsic uncertainty to them, there are some properties that we can know exactly. We call these quantum numbers, which are conserved quantities not only in individual particles, but in the Universe as a whole. In particular, these include properties like:

    electric charge,
    color charge,
    magnetic charge,
    angular momentum,
    baryon number,
    lepton number,
    and lepton family number.

    These are properties that are always conserved, as far as we can tell.
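
    As a minimal bookkeeping sketch of what “conserved” means in practice, you can tally a few of these quantum numbers before and after a reaction, for instance electron-positron annihilation into two photons:

    # Toy bookkeeping of a few conserved quantum numbers (charge, lepton number, baryon number)
    particles = {
        "electron": {"charge": -1, "lepton": +1, "baryon": 0},
        "positron": {"charge": +1, "lepton": -1, "baryon": 0},
        "photon":   {"charge":  0, "lepton":  0, "baryon": 0},
    }

    def totals(names):
        return {q: sum(particles[n][q] for n in names) for q in ("charge", "lepton", "baryon")}

    initial = totals(["electron", "positron"])
    final   = totals(["photon", "photon"])
    print("Conserved in e- + e+ -> 2 photons:", initial == final)   # True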

    6
    The quarks, antiquarks, and gluons of the standard model have a color charge, in addition to all the other properties like mass and electric charge that other particles and antiparticles possess. All of these particles, to the best we can tell, are truly point-like, and come in three generations. At higher energies, it is possible that still additional types of particles will exist, but they would go beyond the Standard Model’s description. (E. SIEGEL / BEYOND THE GALAXY)

    In addition, there are a few other properties that are conserved in the strong and electromagnetic interactions, but whose conservation can be violated by the weak interactions. These include

    weak hypercharge,
    weak isospin,
    and quark flavor numbers (like strangeness, charm, bottomness, or topness).

    Every quantum particle that exists has specific values for these quantum numbers that are allowed. Some of them, like electric charge, never change, as an electron will always have an electric charge of -1 and an up quark will always have an electric charge of +⅔. But others, like angular momentum, can take on various values, which can be either +½ or -½ for an electron, or -1, 0, or +1 for a W-boson.

    7
    The pattern of weak isospin, T3, and weak hypercharge, Y_W, and color charge of all known elementary particles, rotated by the weak mixing angle to show electric charge, Q, roughly along the vertical. The neutral Higgs field (gray square) breaks the electroweak symmetry and interacts with other particles to give them mass. (CJEAN42 OF WIKIMEDIA COMMONS)

    The particles that make up matter, known as the fermions, all have antimatter counterparts: the anti-fermions. The bosons, which are responsible for the forces and interactions between the particles, are neither matter nor antimatter, but can interact with either one, as well as themselves.

    The way we view these interactions is by exchanges of bosons between fermions and/or anti-fermions. You can have a fermion interact with a boson and give rise to another fermion; you can have a fermion and an anti-fermion interact and give rise to a boson; you can have an anti-fermion interact with a boson and give rise to another anti-fermion. As long as you conserve all the total quantum numbers you are required to conserve and obey the rules set forth by the Standard Model’s particles and interactions, anything that is not forbidden will inevitably occur with some finite probability.

    8
    The characteristic signals of positron/electron annihilation at low energies, a 511 keV photon line, has been thoroughly measured by the ESA’s INTEGRAL satellite. (J. KNÖDLSEDER (CESR) AND SPI TEAM; THE ESA’S INTEGRAL OBSERVATORY)

    ESA/Integral

    It’s important, before we enumerate what all the properties of the electron are, to note that this is merely the best understanding we have today of what the Universe is made of at a fundamental level. We do not know if there is a more fundamental description; we do not know if the Standard Model will someday be superseded by a more complete theory; we do not know if there are additional quantum numbers and when they might be (or might not be) conserved; we do not know how to incorporate gravity into the Standard Model.

    Although it should always go without saying, it warrants being stated explicitly here: these properties provide the best description of the electron as we know it today. In the future, they may turn out to be an incomplete description, or only an approximate description of what an electron (or a more fundamental entity that makes up our reality) truly is.

    9
    This diagram displays the structure of the standard model (in a way that displays the key relationships and patterns more completely, and less misleadingly, than in the more familiar image based on a 4×4 square of particles). In particular, this diagram depicts all of the particles in the Standard Model (including their letter names, masses, spins, handedness, charges, and interactions with the gauge bosons: i.e., with the strong and electroweak forces). (LATHAM BOYLE AND MARDUS OF WIKIMEDIA COMMONS)

    With that said, an electron is:

    a fermion (and not an antifermion),
    with an electric charge of -1 (in units of fundamental electric charge),
    with zero magnetic charge
    and zero color charge,
    with a fundamental intrinsic angular momentum (or spin) of ½, meaning it can take on values of +½ or -½,
    with a baryon number of 0,
    with a lepton number of +1,
    with a lepton family number of +1 in the electron family, 0 in the muon family and 0 in the tau family,
    with a weak isospin of -½,
    and with a weak hypercharge of -1.

    Those are the quantum numbers of the electron. It does couple to the weak interaction (and hence, the W and Z bosons) and the electromagnetic interaction (and hence, the photon), and also the Higgs boson (and hence, it has a non-zero rest mass). It does not couple to the strong force, and therefore cannot interact with the gluons.
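
    Collected into a simple record (a sketch for bookkeeping only; the weak-isospin and weak-hypercharge values quoted apply to the left-handed electron, and sign conventions vary between textbooks), those numbers look like this:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Electron:
        """The electron's quantum numbers as listed above (Standard Model values)."""
        electric_charge: float = -1.0     # in units of the fundamental electric charge
        magnetic_charge: float = 0.0
        color_charge: str = "none"
        spin: float = 0.5                 # measured along any axis: +1/2 or -1/2
        baryon_number: float = 0.0
        lepton_number: float = +1.0
        lepton_family: str = "electron"
        weak_isospin: float = -0.5        # for the left-handed electron
        weak_hypercharge: float = -1.0    # convention-dependent; -1 for the left-handed electron

    print(Electron())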

    10
    The Positronium Beam experiment at University College London, shown here, combines electrons and positrons to create the quasi-atom known as positronium, which decays with a mean lifetime of approximately 1 microsecond. The decay products are well-predicted by the Standard Model, and usually proceed into 2 or 3 photons, depending on the relative spins of the electron and positron composing positronium. (UCL)

    If an electron and a positron (which has some of the same quantum numbers and some quantum numbers which are opposites) interact, there are finite probabilities that they will interact through either the electromagnetic or the weak force.

    Most interactions will be dominated by the possibility that electrons and positrons will attract one another, owing to their opposite electric charges. They can form an unstable atom-like entity known as positronium, where they become bound together similar to how protons and electrons bind together, except the electron and positron are of equal mass.

    However, because the electron is matter and the positron is antimatter, they can also annihilate. Depending on a number of factors, such as their relative spins, there are finite probabilities for how they will decay: into 2, 3, 4, 5, or greater numbers of photons. (But 2 or 3 are most common.)
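
    The 511 keV figure quoted in the image caption above follows directly from the electron's rest mass: for the two-photon decay of positronium at rest, each photon carries away one electron rest-mass worth of energy,

    \[
    E_\gamma = m_e c^2 = (9.11\times10^{-31}\,\mathrm{kg})\,(3.00\times10^{8}\,\mathrm{m/s})^2 \approx 8.2\times10^{-14}\,\mathrm{J} \approx 511\ \mathrm{keV}.
    \]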

    11
    The rest masses of the fundamental particles in the Universe determine when and under what conditions they can be created, and also describe how they will curve spacetime in General Relativity. The properties of particles, fields, and spacetime are all required to describe the Universe we inhabit. (FIG. 15–04A FROM UNIVERSE-REVIEW.CA)

    When you subject an electron to an electric or magnetic field, photons interact with it to change its momentum; in simple terms, that means they cause an acceleration. Because an electron also has a rest mass associated with it, courtesy of its interactions with the Higgs boson, it also accelerates in a gravitational field. However, the Standard Model cannot account for this, nor can any quantum theory we know of.
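
    In the classical limit, the electromagnetic part of that statement reduces to the familiar relation between field, charge, and mass; for example, in a modest electric field of 1 volt per meter,

    \[
    a = \frac{eE}{m_e} = \frac{(1.602\times10^{-19}\,\mathrm{C})(1\ \mathrm{V/m})}{9.11\times10^{-31}\,\mathrm{kg}} \approx 1.8\times10^{11}\ \mathrm{m/s^2}.
    \]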

    Until we have a quantum theory of gravity, we have to take the mass and energy of an electron and put it into General Relativity: our non-quantum theory of gravitation. This is sufficient to give us the correct answer for every experiment we’ve been able to design, but it’s going to break down at some fundamental level. For example, if you ask what happens to the gravitational field of a single electron as it passes through a double slit, General Relativity has no answer.

    12
    The wave pattern for electrons passing through a double slit, one-at-a-time. If you measure “which slit” the electron goes through, you destroy the quantum interference pattern shown here. The rules of the Standard Model and of General Relativity do not tell us what happens to the gravitational field of an electron as it passes through a double slit; this would require something that goes beyond our current understanding, like quantum gravity. (DR. TONOMURA AND BELSAZAR OF WIKIMEDIA COMMONS)

    Electrons are incredibly important components of our Universe, as there are approximately 10^80 of them contained within our observable Universe. They are required for the assembly of atoms, which form molecules, humans, planets and more, and are used in our world for everything from magnets to computers to the macroscopic sensation of touch.

    But the reason they have the properties they do is because of the fundamental quantum rules that govern the Universe. The Standard Model is the best description we have of those rules today, and it also provides the best description of the ways that electrons can and do interact, as well as describing which interactions they cannot undergo.

    Why electrons have these particular properties is beyond the scope of the Standard Model, though. For all that we know, we can only describe how the Universe works. Why it works the way it does is still an open question that we have no satisfactory answer for. All we can do is continue to investigate, and work towards a more fundamental answer.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

     
  • richardmitnick 8:52 am on April 8, 2019 Permalink | Reply
    Tags: "Ask Ethan: Can We Find Exoplanets With Exomoons Like Ours?", , , , , Ethan Siegel   

    From Ethan Siegel: “Ask Ethan: Can We Find Exoplanets With Exomoons Like Ours?” 

    From Ethan Siegel
    Apr 6, 2019

    1
    Illustration of an exoplanetary system, potentially with an exomoon orbiting it. While we have yet to find a true ‘Earth-twin’ system, with an Earth-sized planet with a Moon-sized moon in the habitable zone of a Sun-like star, it may be possible in the not-too-distant future. (NASA/DAVID HARDY, VIA ASTROART.ORG)

    In all the Universe, there’s only one Earth. But can we find the other worlds that are like ours?

    Even though the ingredients for life have been confirmed to be practically everywhere we look, the only world where we’ve definitively confirmed its existence is Earth. Exoplanet science has exploded over the past 30 years, and we’ve learned of many worlds that are not only potentially habitable, but quite different from our own. We’ve found super-Earths, which may yet be rocky with thin, life-supporting atmospheres. We’ve found Earth-sized and smaller worlds around dwarf stars at the right temperatures for liquid water. And we’ve found giant planets whose moons, yet undiscovered, might have the capacity to support life.

    But do Earth-like worlds need a large moon to make life possible? Could large moons around giant planets support life? And what are our detection capabilities for exomoons today? That’s what Patreon supporter Tim Graham wants to know, asking:

    “[A]re we capable of finding exoplanets in [their] habitable zone with a large moon?”

    Let’s look at the limits of our modern scientific capabilities, and see what it’ll take to get there.

    2
    Kepler-90 is a Sun-like star, but all of its eight planets are scrunched into the equivalent distance of Earth to the Sun. The inner planets have extremely tight orbits with a “year” on Kepler-90i lasting only 14.4 days. In comparison, Mercury’s orbit is 88 days. There is much left to discover, still, about this system, including whether any of these worlds possess exomoons. (NASA/AMES RESEARCH CENTER/WENDY STENZEL)

    Right now, there are a few successful ways we have of detecting and characterizing exoplanets around stars. The three most common, powerful, and prolific, though, are as follows:

    -direct imaging — where we can receive light identifiable as coming from an exoplanet directly, and distinct from any light originating from the star it orbits.

    Direct imaging: This false-color composite image traces the motion of the planet Fomalhaut b, a world captured by direct imaging. Credit: NASA, ESA, and P. Kalas (University of California, Berkeley and SETI Institute)


    To directly image an exoplanet, the big challenge is to filter out the light from its parent star. This typically only happens for large planets that both emit their own (infrared) radiation and are sufficiently far from their parent star that the much-brighter star doesn’t overwhelm the planet’s intrinsic brightness. In other words, this helps us find large-mass exoplanets at large orbital radii from their stars.

    But if an exoplanet also contains a moon around it, the challenges of direct imaging are even more problematic. The moon-planet separation distance will be smaller than for the planet-star system; the absolute irradiance of the moon will be very small; the planet itself is not resolvable as more than a single pixel. But if the exomoon is tidally heated, like Jupiter’s moon Io is, it may shine very brightly. It cannot reveal an Earth-like planet with a Moon-like moon, but direct imaging may some day reveal exomoons after all.
    -radial velocity — where the gravitational pull of a planet on its parent star reveals not only the presence of an exoplanet, but its orbital period and information about its mass, too.

    Radial Velocity Method-Las Cumbres Observatory


    Radial velocity image via SuperWASP: http://www.superwasp.org-exoplanets.htm

    The radial velocity (also known as the stellar wobble) method was, early on, the most successful way we had of discovering exoplanets. By measuring the light coming from a star over long stretches of time, we could identify long-term, periodic redshifts and blueshifts layered atop one another. When you have a star gravitationally pulling on an orbiting planet, the planet also pulls back on the star. If the planet is massive enough, and/or orbits the star enough times to build up an identifiable, periodic signal, we can unambiguously announce a detection.

    The problem with using this technique to search for exomoons is that a planet-moon system would have the same exact effect as a planet located in the center-of-mass of that system with a slightly larger (planet + moon) mass. For that reason, the radial velocity method won’t reveal exomoons.

    -transits across its parent star — where an exoplanet periodically passes in front of its parent star, blocking a portion of its light in a repeatable fashion.

    Planet transit. NASA/Ames

    But the last major current method — the transit method — offers some enticing possibilities. When an exoplanet is aligned just right with our line-of-sight, we can observe it appear to pass in front of the star it orbits, blocking a tiny fraction of its light. Since exoplanets simply orbit their stars in an ellipse, we should be able to find a transiting exoplanet as a periodic dimming variation of a specific duration each time it passes by.

    The Kepler mission, which has been our most successful planet-finder to date, relied exclusively on this method.

    NASA/Kepler Telescope, and K2 March 7, 2009 until November 15, 2018

    Its success over the past decade has brought thousands of new exoplanets to our attention, with over half of them later being confirmed by other methods, providing us with both a radius and a mass for the planet in question. Compared to all the other ways we have of finding and detecting exoplanets, the transit method stands out as the most successful.

    Each one of these methods has implications for exomoon detection, too.

    But the transit method also has the potential to reveal exomoons. If you had only a single planet orbiting its parent star, you’d expect periodic transits that you could predict to occur at exactly the same time with every orbit. But if you had a planet-moon system, and it was aligned with your line-of-sight, the planet would appear to move forward as the moon orbited to the trailing side, or backwards as the moon orbited to the leading side.

    This would mean that the transits we observed wouldn’t necessarily occur with the exact same periods as you’d naively expect, but with a period that was perturbed by a small, significant amount every orbit. The presence of an exomoon could be detected with this additional transit timing variation superimposed atop it.

    Additionally, an exomoon would change the duration of a transit. If an exoplanet moves at the same, constant speed every time it transits across the face of its parent star, each transit would exhibit the same duration. There would be no variations in the amount of time measured for each dimming event.

    But if you had a moon orbiting the planet, there would be variations in the duration. When the moon was moving in the same direction that the planet orbited its parent star, the planet would be moving slightly backwards relative to normal, increasing the duration. Conversely, when the moon moves in the opposite direction of the planetary orbit, the planet moves forward at an increased speed, decreasing the transit duration.

    The transit duration variations, when combined with the transit timing variations, would reveal an unambiguous signal of an exomoon, along with many of its properties.
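
    As a rough, back-of-the-envelope sketch (every input below is an assumed, illustrative value, not a measurement from any particular system), the size of the timing variation is roughly the planet's displacement around the planet-moon barycenter divided by its orbital speed around the star:

    import math

    G = 6.674e-11
    M_sun = 1.99e30
    M_jup = 1.90e27
    M_nep = 1.02e26
    AU = 1.496e11

    # Assumed, illustrative system: a Jupiter-mass planet at ~0.9 AU with a Neptune-mass moon
    a_planet = 0.9 * AU      # planet's orbital distance from its star
    a_moon   = 3.0e9         # moon's orbital distance from the planet (~0.02 AU), assumed

    v_planet = math.sqrt(G * M_sun / a_planet)               # planet's orbital speed around the star
    a_barycenter = a_moon * M_nep / (M_jup + M_nep)          # planet's wobble about the planet-moon barycenter
    timing_variation_s = a_barycenter / v_planet

    print(f"Transit timing variation ~ {timing_variation_s / 3600:.1f} hours")   # on the order of an hour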

    4
    When a properly-aligned planet passes in front of a star relative to our line-of-sight, the overall brightness dips. When we see the same dip multiple times with a regular period, we can infer the existence of a potential planet. (WILLIAM BORUCKI, KEPLER MISSION PRINCIPAL INVESTIGATOR, NASA / 2010)

    But, by far, the best possibility we have today is through direct measurement of a transiting exomoon. If the planet that’s orbiting the star can make a viable transiting signal, then all it will take is the same serendipitous alignment to have its moon transit the star, and sufficiently good data to tease that signal out of the noise.

    This is not a pipe dream, but something that has already occurred once. Based on data taken by NASA’s Kepler mission, the stellar system Kepler-1625 is of particular interest, with a transiting light curve that not only displayed definitive evidence of a massive planet orbiting it, but showed that the planet wasn’t transiting with the exact same frequency you’d expect orbit after orbit. Instead, it was exhibiting the transit timing variation effect we discussed earlier.

    5
    Based on the Kepler lightcurve of the transiting exoplanet Kepler-1625b, we were able to infer the existence of a potential exomoon. The fact that the transits didn’t occur with the exact same periodicity, but that there were timing variations, was our major clue that led researchers in that direction. (NASA’S GODDARD SPACE FLIGHT CENTER/SVS/KATRINA JACKSON)

    So what could we do to go a step further? We could image it with an even more powerful telescope than Kepler: something like Hubble. We went ahead and did exactly that, and discovered that, lo and behold, we didn’t get something consistent with a single planet. Three things happened all in a row:

    The transit began, but an hour earlier than the average timing measurements would predict, displaying a timing variation.
    The planet moved off of the star, but was followed shortly thereafter by a second dip in brightness.
    This second dip was much lower in magnitude than the first dip, but didn’t begin until many hours after the first dip ended.

    All of this was consistent with exactly what you’d expect for an exomoon.

    Now, this doesn’t definitively prove that we’ve detected an exomoon, but it is far and away the best exomoon candidate we have today. These observations have enabled us to reconstruct a potential mass and size for both the exoplanet and the exomoon: the planet itself is approximately Jupiter’s mass, while the moon is roughly Neptune’s mass. Although it would take a second observed Hubble transit to confirm it, this candidate has already caused us to rethink what exoplanet and exomoon habitability might look like.

    6
    When Hubble pointed at the system Kepler-1625, it found the initial transit of the main planet began an hour earlier than anticipated, and was followed by a second, smaller transit. These observations were absolutely consistent with what you’d expect for an exomoon present in the system. (NASA’S GODDARD SPACE FLIGHT CENTER/SVS/KATRINA JACKSON)

    It’s possible that the Neptune-like exomoon we’ve found has its own moon: a moonmoon, as scientists have dubbed them. It’s possible that an Earth-sized world might be orbiting a giant world below our detection limits. And, of course, it’s possible that there are Earth-sized worlds with Moon-sized moons around them, but the technology isn’t there yet.

    But that capability should arrive in short order. Right now, NASA’s TESS satellite is scouring the stars closest to Earth for transiting exoplanets.

    NASA/MIT TESS replaced Kepler in the search for exoplanets

    This won’t reveal the exomoons we’re looking for, but it will reveal the locations where the best tool we’ll have for finding them — the James Webb Space Telescope — should point. While Webb might not be able to get a clean signal for an Earth-sized exomoon, it should be able to use the three methods together of transit timing variation, transit duration variation, and direct transits (measured many times and stacked atop one another) to find the smallest, closest exomoons that are out there.
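    To see why stacking many transits helps, here is a minimal, self-contained sketch. The transit depth and noise level are invented purely for illustration; the point is only that averaging N noisy light curves suppresses random noise by roughly the square root of N, which is how a shallow exomoon dip could emerge from data too noisy to show it in any single transit.

```python
# Minimal sketch: stacking (averaging) many noisy transits to reveal a
# shallow dip. Depths and noise levels are made-up illustrative numbers.
import numpy as np

rng = np.random.default_rng(42)
n_points   = 500
moon_depth = 2e-4      # fractional dip from a hypothetical small moon
noise      = 1e-3      # per-point photometric scatter (fractional)

# Idealized single-transit template: flat baseline with a shallow box dip.
template = np.ones(n_points)
template[200:300] -= moon_depth

def stacked_curve(n_transits):
    """Average n_transits noisy copies of the same transit."""
    curves = template + rng.normal(0.0, noise, size=(n_transits, n_points))
    return curves.mean(axis=0)

for n in (1, 25, 400):
    avg = stacked_curve(n)
    in_dip  = avg[200:300].mean()
    out_dip = np.concatenate([avg[:200], avg[300:]]).mean()
    depth   = out_dip - in_dip
    rough_snr = depth / (noise / np.sqrt(n * 100))   # 100 in-dip points per transit
    print(f"{n:4d} stacked transits: measured depth = {depth:.1e}, "
          f"rough S/N ~ {rough_snr:.1f}")
```

    A single transit leaves the dip buried in the scatter, while a few hundred stacked transits make it unmistakable; the same logic applies whether the photometry comes from Kepler, TESS, or James Webb.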

    7
    This is an illustration of the different elements in NASA’s exoplanet program, including ground-based observatories, like the W. M. Keck Observatory, and space-based observatories, like Hubble, Spitzer, Kepler [above], the Transiting Exoplanet Survey Satellite [above], the James Webb Space Telescope, the Wide Field Infrared Survey Telescope, and future missions. The power of TESS and James Webb combined will reveal the most Moon-like exomoons to date, possibly even in their star’s habitable zone. (NASA)

    NASA/ESA Hubble Telescope

    NASA/Spitzer Infrared Telescope

    Keck Observatory, operated by Caltech and the University of California, Maunakea, Hawaii, USA, 4,207 m (13,802 ft)

    NASA/ESA/CSA Webb Telescope annotated

    NASA/WFIRST

    The most likely scenario is that we’ll find them around red dwarf stars, far closer in than Mercury is to the Sun, because that’s where detections are most favorable. But the longer we observe, the farther out we push that radius. Within the next decade, no one would be surprised if we had an exomoon around an exoplanet located in its star’s habitable zone.

    The Universe awaits. The time to look is now.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

     
  • richardmitnick 1:33 pm on March 30, 2019 Permalink | Reply
    Tags: Ethan Siegel

    From Ethan Siegel: “Ask Ethan: Why Haven’t We Found Gravitational Waves In Our Own Galaxy?” 

    From Ethan Siegel
    Mar 30, 2019

    Artist’s iconic conception of two merging black holes, similar to those detected by LIGO. Credit: LIGO-Caltech/MIT/Sonoma State (Aurore Simonnet)

    LIGO and Virgo have now detected a total of 11 binary merger events.


    VIRGO Gravitational Wave interferometer, near Pisa, Italy


    Caltech/MIT Advanced aLigo Hanford, WA, USA installation


    Caltech/MIT Advanced aLigo detector installation Livingston, LA, USA

    Cornell SXS, the Simulating eXtreme Spacetimes (SXS) project

    Gravitational waves. Credit: MPI for Gravitational Physics/W.Benger

    Gravity is talking. Lisa will listen. Dialogos of Eide

    ESA/eLISA the future of gravitational wave research

    Localizations of gravitational-wave signals detected by LIGO alone (GW150914, LVT151012, GW151226, GW170104) and, more recently, by the LIGO-Virgo network after Virgo came online in August 2017 (GW170814, GW170817).


    Skymap showing how adding Virgo to LIGO helps in reducing the size of the likely source region in the sky. (Credit: Giuseppe Greco, Virgo Urbino group)

    But exactly 0 were in the Milky Way. Here’s why.

    One of the most spectacular recent advances in all of science has been our ability to directly detect gravitational waves. With the unprecedented power and sensitivity of the LIGO and Virgo gravitational-wave observatories at our disposal, these powerful ripples in the fabric of spacetime no longer pass by undetected. Instead, for the first time, we’re able not only to observe them, but to pinpoint the location of the sources that generate them and learn about their properties. As of today, 11 separate sources have been detected.

    But they’re all so far away! Why is that? That’s the question of Amitava Datta and Chayan Chatterjee, who ask:

    Why are all the known gravitational wave sources (coalescing binaries) in the distant universe? Why none has been detected in our neighborhood? […] My guess (which is most probably wrong) is that the detectors need to be precisely aligned for any detection. Hence all the detection until now are serendipitous.

    Let’s find out.

    The way observatories like LIGO and Virgo work is that they have two long, perpendicular arms with the world’s most perfect vacuum inside of them. Laser light from a single source is split to travel down these two independent paths, reflected back and forth a number of times, and recombined together at the end.

    Light is just an electromagnetic wave, and when you combine multiple waves together, they generate an interference pattern. If the interference is constructive, you see one type of pattern; if it’s destructive, you see a different type. When LIGO and Virgo just hang out, normally, with no gravitational waves going through them, what you see is a relatively steady pattern, with only the random noise (mostly generated by the Earth itself) of the instruments to contend with.

    2
    When the two arms are of exactly equal length and there is no gravitational wave passing through, the signal is null and the interference pattern is constant. As the arm lengths change, the signal is real and oscillatory, and the interference pattern changes with time in a predictable fashion. (NASA’S SPACE PLACE)

    But if you were to change the length of one of these arms relative to the other, the amount of time the light spent traveling down that arm would also change. Because light is a wave, a small change in the time light travels means you’re at a different point in the wave’s crest/trough pattern, and therefore the interference pattern that gets created by combining it with another light wave will change.

    There could be many causes for a single arm’s length to change: seismic noise, a jackhammer across the street, or even a passing truck miles away. But there’s an astrophysical source that could cause that change, too: a passing gravitational wave.
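    For a sense of scale, here is a small back-of-the-envelope calculation of how big that change actually is. The strain value, arm length, and number of reflections are typical textbook numbers for a detector like LIGO, assumed here for illustration rather than quoted from this article.

```python
# Rough scale of the effect a gravitational wave has on an interferometer.
# A passing wave of strain h changes an arm of length L by about h*L, and
# the many reflections multiply the accumulated optical path difference.
import math

h       = 1e-21        # typical peak strain of a detected event (assumed)
L       = 4.0e3        # LIGO-like arm length [m]
bounces = 300          # assumed effective number of round trips in the arm
lam     = 1.064e-6     # infrared laser wavelength [m]

delta_L     = h * L                              # single-pass length change [m]
path_change = bounces * delta_L                  # accumulated path change [m]
phase_shift = 2 * math.pi * path_change / lam    # phase shift at recombination [rad]

print(f"arm-length change:       {delta_L:.1e} m (far smaller than a proton)")
print(f"accumulated path change: {path_change:.1e} m")
print(f"phase shift:             {phase_shift:.1e} rad")
```

    The answer, a phase shift of a few billionths of a radian, is why the interference pattern only budges for the most violent events in the Universe, and why every terrestrial source of noise has to be so carefully controlled.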

    3
    When a gravitational wave passes through a location in space, it causes an expansion and a compression at alternate times in alternate directions, causing laser arm-lengths to change in mutually perpendicular orientations. Exploiting this physical change is how we developed successful gravitational wave detectors such as LIGO and Virgo. (ESA–C.CARREAU)

    There are two keys that enable us to distinguish a genuine gravitational wave from mere terrestrial noise.

    Gravitational waves, when they pass through a detector, will cause the two arms to change their lengths in opposite senses (one stretching as the other compresses) by a characteristic, oscillating amount. When you see a periodic pattern of arm lengths oscillating, you can place meaningful constraints on whether your signal was likely to be a gravitational wave or just an Earth-based source of noise.
    We build multiple detectors at different points on Earth. While each one will experience its own noise due to its local environment, a passing gravitational wave will have very similar effects on each of the detectors, separated by at most milliseconds in time.

    As you can see from the very first robust detection of these waves, dating back to observations taken on September 14, 2015, both effects are present.

    3
    The inspiral and merger of the first pair of black holes ever directly observed. The total signal, along with the noise (top) clearly matches the gravitational wave template from merging and inspiraling black holes of a particular mass (middle). Note how the frequency and amplitude change at the very end-stage of the merger. (B. P. ABBOTT ET AL. (LIGO SCIENTIFIC COLLABORATION AND VIRGO COLLABORATION))

    If we come forward to the present day, we’ve actually detected a large number of mergers: 11 separate ones thus far. Events seem to come in at random, as it’s only the very final stages of inspiral and merger — the final seconds or even milliseconds before two black holes or neutron stars collide — that have the right properties to be picked up by even our most sensitive detectors.

    If we look at the distances to these objects, though, we find something that might trouble us a little bit. Even though our gravitational wave detectors are more sensitive to objects the closer they are to us, the majority of objects we’ve found are many hundreds of millions or even billions of light-years away.

    4
    The 11 gravitational wave events detected by LIGO and Virgo, with their names, mass parameters, and other essential information encoded in table form. Note how many events came in the last month of the second run, when LIGO and Virgo were operating simultaneously. The parameter dL is the luminosity distance; the closest object is the neutron star-neutron star merger of 2017, which corresponds to a distance of ~130 million light-years. (THE LIGO SCIENTIFIC COLLABORATION, THE VIRGO COLLABORATION; ARXIV:1811.12907)

    Why is this? If gravitational wave detectors are more sensitive to closer objects, shouldn’t nearby detections be more frequent than distant ones, in contrast with what we’ve actually observed?

    There are a lot of potential explanations that could account for this mismatch between expectation and observation. As our questioners proposed, perhaps it’s due to orientation? After all, there are many phenomena in this Universe, such as pulsars or blazars, that only appear visible to us when the correct electromagnetic signal gets “beamed” directly along our line-of-sight.

    5
    Artist’s impression of an active galactic nucleus. The supermassive black hole at the center of the accretion disk sends a narrow high-energy jet of matter into space, perpendicular to the disc. A blazar about 4 billion light years away is the origin of many of the highest-energy cosmic rays and neutrinos. Only matter from outside the event horizon can leave the black hole’s vicinity; matter from inside the event horizon can never escape. (DESY, SCIENCE COMMUNICATION LAB)

    It’s a clever idea, but it misses a fundamental difference between the gravitational and electromagnetic forces. In electromagnetism, electromagnetic radiation gets generated by the acceleration of charged particles; in General Relativity, gravitational radiation (gravitational waves) is generated by the acceleration of masses. So far, so good.

    But there are both electric and magnetic fields in electromagnetism, and electrically charged particles in motion generate magnetic fields. This allows you to create and accelerate particles and radiation in a collimated fashion; it doesn’t have to spread out in a spherical pattern. In gravitation, though, there are only gravitational sources (masses and energetic quanta) and the curvature of spacetime that results.

    6
    When you have two gravitational sources (i.e., masses) inspiraling and eventually merging, this motion causes the emission of gravitational waves. Although it might not be intuitive, a gravitational wave detector will be sensitive to these waves as a function of 1/r, not as 1/r², and will see those waves in all directions, regardless of whether they’re face-on or edge-on, or anywhere in between. (NASA, ESA, AND A. FEILD (STSCI))

    As it turns out, it doesn’t really matter whether we see an inspiraling and merging gravitational wave source face-on, edge-on, or at an angle; they still emit gravitational waves of a measurable and observable frequency and amplitude. There may be subtle differences in the magnitude and other properties of the signal that arrives at our eyes that are orientation-dependent, but gravitational waves propagate spherically outward from a source that generates them, and can literally be seen from anywhere in the Universe so long as your detector is sensitive enough.

    So why is it, then, that there aren’t gravitational waves from binary sources detected in our own galaxy?

    It might surprise you to learn that there are binary sources of mass, like black holes and neutron stars, orbiting and inspiraling within our own galaxy right now.

    7
    From the very first binary neutron star system ever discovered, we knew that gravitational radiation was carrying energy away. It was only a matter of time before we found a system in the final stages of inspiral and merger. (NASA (L), MAX PLANCK INSTITUTE FOR RADIO ASTRONOMY / MICHAEL KRAMER)

    Long before gravitational waves were directly detected, we spotted what we thought was an ultra-rare configuration: a pulsar in a tight orbit with another neutron star. We watched its pulse arrival times vary in a way that showcased the orbit’s decay due to gravitational radiation. Many pulsars, including multiple binary pulsars, have since been observed. In every case where we’ve been able to measure them accurately enough, we see the orbital decay that shows that yes, they are emitting gravitational waves.

    Women in STEM – Dame Susan Jocelyn Bell Burnell

    Dame Susan Jocelyn Bell Burnell, who discovered pulsars with radio astronomy. Jocelyn Bell at the Mullard Radio Astronomy Observatory, Cambridge University, taken for the Daily Herald newspaper in 1968. Denied the Nobel.

    Dame Susan Jocelyn Bell Burnell at work on the first pulsar chart, pictured working at the Four Acre Array in 1967. Image courtesy of Mullard Radio Astronomy Observatory.

    Dame Susan Jocelyn Bell Burnell 2009

    Dame Susan Jocelyn Bell Burnell (1943 – ), still working. From http://www.famousirishscientists.weebly.com

    Similarly, we’ve observed X-ray emissions from systems that indicate there must be a black hole at the center. While binary black holes have only been discovered in two instances from electromagnetic observations, the stellar-mass black holes we know of have been discovered as they accrete or siphon matter from a companion star: the X-ray binary scenario.

    8
    LIGO and Virgo have discovered a new population of black holes with masses that are larger than what had been seen before with X-ray studies alone (purple). This plot shows the masses of all ten confident binary black hole mergers detected by LIGO/Virgo (blue), along with the one neutron star-neutron star merger seen (orange). LIGO/Virgo, with the upgrade in sensitivity, should detect multiple mergers every week beginning this April. (LIGO/VIRGO/NORTHWESTERN UNIV./FRANK ELAVSKY)

    These systems are:

    abundant within the Milky Way,
    inspiraling as gravitational waves carry their orbital energy away,
    which means there are gravitational waves of specific frequencies and amplitudes passing through our detectors,
    with the sources generating those signals destined to someday merge and complete their coalescence.

    But again, we have not observed them in our ground-based gravitational wave detectors. And there’s a simple, straightforward reason for that: our detectors are in the wrong frequency range!

    8
    The sensitivities of a variety of gravitational wave detectors, old, new, and proposed. Note, in particular, Advanced LIGO (in orange), LISA (in dark blue), and BBO (in light blue). LIGO can only detect low-mass and short-period events; longer-baseline, lower-noise observatories are needed for either more massive black holes or for systems that are in an earlier stage of gravitational inspiral. (MINGLEI TONG, CLASS.QUANT.GRAV. 29 (2012) 155006)

    It’s only in the very, very last seconds of coalescence that gravitational waves from merging binaries fall into the LIGO/Virgo sensitivity range. For all the millions or even billions of years that neutron stars or black holes orbit one another and see their orbits decay, they do so at larger radial separations, which means they take longer to orbit each other, which means lower frequency gravitational waves.
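    You can see this directly from Kepler’s third law: for a circular binary, the gravitational-wave frequency is simply twice the orbital frequency. The sketch below assumes a pair of 1.4-solar-mass neutron stars purely for illustration.

```python
# Gravitational-wave frequency of a circular binary as its separation shrinks.
# f_GW = 2 x orbital frequency, with the orbital frequency from Kepler's law.
# The masses are an assumed example (two 1.4 solar-mass neutron stars).
import math

G     = 6.674e-11       # [m^3 kg^-1 s^-2]
M_sun = 1.989e30        # [kg]
M_tot = 2.8 * M_sun     # assumed total mass of the binary

def gw_frequency(separation_m):
    """Gravitational-wave frequency (Hz) for a circular binary."""
    f_orb = math.sqrt(G * M_tot / separation_m**3) / (2 * math.pi)
    return 2 * f_orb

for sep_km in (1.0e6, 1.0e5, 1.0e4, 1.0e2, 30.0):
    f = gw_frequency(sep_km * 1e3)
    print(f"separation {sep_km:>9,.0f} km  ->  f_GW ~ {f:.3e} Hz")
```

    At separations of hundreds of thousands or millions of kilometers, where galactic binaries spend almost all of their lives, the emitted frequencies sit in the millihertz range that a space-based detector like LISA targets; only in the final moments, at separations of tens to hundreds of kilometers, does the signal sweep up through the tens-to-thousands of hertz where LIGO and Virgo can hear it.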

    The reason we don’t see the binaries orbiting in our galaxy today is because LIGO’s and Virgo’s arms are too short! If they were millions of kilometers long instead of 3–4 km with many reflections, we’d have already seen them. As it stands right now, this will be a significant advantage of LISA [above]: it can show us these binaries that are destined to merge in the future, even enabling us to predict where and when it will happen!

    It’s true: during the time that LIGO and Virgo have been operating, we haven’t seen any mergers of black holes or neutron stars in our own galaxy. This is no surprise; the results from our gravitational wave observations have taught us that there are somewhere around 800,000 merging black hole binaries throughout the Universe in any year. But there are two trillion galaxies in the Universe, meaning that we need to watch millions of galaxies just to expect a single event per year!

    This is why our gravitational wave observatories need to be sensitive to distances that go out billions of light-years in all directions; there simply won’t be enough statistics otherwise.
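    The arithmetic behind that statement takes only a couple of lines, using the round numbers quoted above.

```python
# Rough arithmetic: how many galaxies must you watch to expect one merger
# per year? Uses the round numbers quoted in the text.
mergers_per_year_universe = 8.0e5     # merging black hole binaries per year
galaxies_in_universe      = 2.0e12    # galaxies in the observable Universe

galaxies_per_yearly_merger = galaxies_in_universe / mergers_per_year_universe
print(f"Watch ~{galaxies_per_yearly_merger:,.0f} galaxies to expect "
      "one merger per year.")
```

    That works out to roughly 2.5 million galaxies per yearly event, which is why a detector whose reach stopped at the Milky Way, or even the Local Group, would essentially never catch a merger in the act.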

    8
    The range of Advanced LIGO and its capability of detecting merging black holes. Note that even though the amplitude of the waves will fall off as 1/r, the number of galaxies increases with volume: as r³. (LIGO COLLABORATION / AMBER STUVER / RICHARD POWELL / ATLAS OF THE UNIVERSE)

    There are plenty of neutron stars and black holes orbiting one another all throughout the Universe, including right here in our own Milky Way galaxy. When we look for these systems, with either radio pulses (for the neutron stars) or X-rays (for the black holes), we find them in great abundances. We can even see the evidence for the gravitational waves they emit, although the evidence we see is indirect.

    If we had more sensitive, lower-frequency gravitational wave observatories, we could potentially detect the waves generated by sources within our own galaxy directly. But if we want to get a true merger event, those are rare. They might be aeons in the making, but the actual events themselves take just a fraction of a second. It’s only by casting a very wide net that we can see them at all. Incredibly, the technology to do so is already here.

    See the full article here.



     
  • richardmitnick 11:19 am on March 26, 2019 Permalink | Reply
    Tags: Ethan Siegel, Galactic Motion

    From Ethan Siegel: “Ask Ethan: Could ‘Cosmic Redshift’ Be Caused By Galactic Motion, Rather Than Expanding Space?” 

    From Ethan Siegel
    Mar 23, 2019

    Both effects could be responsible for a redshift. But only one makes sense for our Universe.

    1
    The impressively huge galaxy cluster MACS J1149.5+223, whose light took over 5 billion years to reach us, was the target of one of the Hubble Frontier Fields programs. This massive object gravitationally lenses the objects behind it, stretching and magnifying them, and enabling us to see more distant recesses of the depths of space than in a relatively empty region. The lensed galaxies are among the most distant of all, and can be used to test the nature of redshift in our Universe. (NASA, ESA, S. RODNEY (JOHNS HOPKINS UNIVERSITY, USA) AND THE FRONTIERSN TEAM; T. TREU (UNIVERSITY OF CALIFORNIA LOS ANGELES, USA), P. KELLY (UNIVERSITY OF CALIFORNIA BERKELEY, USA) AND THE GLASS TEAM; J. LOTZ (STSCI) AND THE FRONTIER FIELDS TEAM; M. POSTMAN (STSCI) AND THE CLASH TEAM; AND Z. LEVAY (STSCI))

    NASA/ESA Hubble Telescope

    Keck Observatory, Maunakea, Hawaii, USA, 4,207 m (13,802 ft) above sea level

    In physics, like in life, there are often multiple solutions to a problem that will give you the same result. In our actual Universe, however, there’s only one way that reality actually unfolds. The great challenge that presents itself to scientists is to figure out which one of the possibilities that nature allows is the one that describes the reality we inhabit. How do we do this with the expanding Universe? That’s what Vijay Kumar wants to know, asking:

    “When we observe a distant galaxy, the light coming from the galaxy is redshifted either due to expansion of space or actually the galaxy is moving away from us. How do we differentiate between the cosmological redshift and Doppler redshift? I have searched the internet for answers but could not get any reasonable answer.”

    The stakes are among the highest there are, and if we get it right, we can understand the nature of the Universe itself. But we must ensure we aren’t fooling ourselves.

    2
    An ultra-distant view of the Universe shows galaxies moving away from us at extreme speeds. At those distances, galaxies appear more numerous, smaller, less evolved, and to recede at great redshifts compared to the ones nearby. (NASA, ESA, R. WINDHORST AND H. YAN)

    When you look out at a distant object in the sky, you can learn a lot about it by observing its light. Stars will emit light based on their temperature and the rate at which they fuse elements in their core, radiating based on the physical properties of their photospheres. It takes millions, billions, or even trillions of stars to make up the light we see when we examine a distant galaxy, and from our perspective here on Earth, we receive that light all at once.

    But there’s an enormous amount of information encoded in that light, and astronomers have figured out how to extract it. By breaking up the light that arrives into its individual wavelengths — through the optical technique of spectroscopy — we can find specific emission and absorption features amidst the background continuum of light. Wherever an atom or molecule exists with the right energy levels, it absorbs or emits light of explicit, characteristic frequencies.

    3
    The visible light spectrum of the Sun, which helps us understand not only its temperature and ionization, but the abundances of the elements present. The long, thick lines are hydrogen and helium, but every other line is from a heavy element that must have been created in a previous-generation star, rather than the hot Big Bang. These elements all have specific signatures corresponding to explicit wavelengths. (NIGEL SHARP, NOAO / NATIONAL SOLAR OBSERVATORY AT KITT PEAK / AURA / NSF)

    National Solar Observatory at Kitt Peak in Arizona, elevation 6,886 ft (2,099 m)


    Kitt Peak National Observatory of the Quinlan Mountains in the Arizona-Sonoran Desert on the Tohono O’odham Nation, 88 kilometers (55 mi) west-southwest of Tucson, Arizona. Altitude 2,096 m (6,877 ft)

    Whether an atom is neutral, ionized one, two, or three times, or is bound together in a molecule will determine what specific wavelengths it emits or absorbs. Whenever we find multiple lines emitted or absorbed by the same atom or molecule, we uniquely determine its presence in the system we’re looking at. The ratios of the different wavelengths emitted and absorbed by the same type of atom, ion, or molecule never change throughout the entire Universe.

    But even though atoms, ions, molecules, and the quantum rules governing their transitions remain constant everywhere in space and at all times, what we observe isn’t constant. That’s because the different objects we observe can have their light systematically shifted, keeping the wavelength ratios the same while shifting every wavelength by an overall multiplicative factor.
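    A tiny sketch makes this concrete: multiply every line from a given atom by the same factor of (1 + z) and the ratios between the lines are untouched, which is exactly how we recognize the same element in a shifted spectrum. The rest wavelengths below are the standard hydrogen Balmer lines; the redshift is an arbitrary example value.

```python
# Every spectral line from a given atom shifts by the same factor (1 + z),
# so the ratios between lines are preserved. Rest wavelengths are the
# hydrogen Balmer lines; the redshift is an arbitrary example.
rest_lines = {            # rest wavelengths [nm]
    "H-alpha": 656.28,
    "H-beta":  486.13,
    "H-gamma": 434.05,
}
z_true = 0.10             # example redshift of the "observed" galaxy

observed = {name: lam * (1 + z_true) for name, lam in rest_lines.items()}

for name, lam_rest in rest_lines.items():
    lam_obs = observed[name]
    z = lam_obs / lam_rest - 1     # redshift inferred from this single line
    print(f"{name:8s}: rest {lam_rest:7.2f} nm -> observed {lam_obs:7.2f} nm, "
          f"z = {z:.3f}")

ratio_rest = rest_lines["H-alpha"] / rest_lines["H-beta"]
ratio_obs  = observed["H-alpha"] / observed["H-beta"]
print(f"H-alpha/H-beta ratio: rest {ratio_rest:.4f}, observed {ratio_obs:.4f}")
```

    Every line returns the same inferred redshift, and the line ratios match the laboratory values exactly, which is why a whole spectrum shifted all at once is such an unambiguous measurement.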

    4
    First noted by Vesto Slipher back in 1917, some of the objects we observe show the spectral signatures of absorption or emission of particular atoms, ions, or molecules, but with a systematic shift towards either the red or blue end of the light spectrum. (VESTO SLIPHER, (1917): PROC. AMER. PHIL. SOC., 56, 403)

    The question we want a scientific answer to, of course, is “why is this occurring?” Why does the light we observe from distant objects appear to shift all at once, by the same ratio for all lines in every individual object we observe?

    The first possibility is one we encounter all the time: a Doppler shift. When a wave-emitting object moves towards you, there’s less space between the wave crests you receive, and therefore the frequencies you observe are shifted towards higher values than the emitted frequencies from the source. Similarly, when an emitter moves away from you, there’s more space between the crests, and therefore your observed frequencies are shifted towards lower values (longer wavelengths). You’re familiar with this from the sounds emitted by moving vehicles (police sirens, ambulances, ice cream trucks), but it happens for light sources as well.

    5
    An object moving close to the speed of light that emits light will have the light that it emits appear shifted dependent on the location of an observer. Someone on the left will see the source moving away from it, and hence the light will be redshifted; someone to the right of the source will see it blueshifted, or shifted to higher frequencies, as the source moves towards it. (WIKIMEDIA COMMONS USER TXALIEN)

    There’s a second plausible possibility, however: this could be a cosmological shift. In General Relativity (our theory of gravity), it is physically impossible to have a static Universe that’s filled with matter and radiation throughout it. If we have a Universe that is, on the largest scales, filled with equal amounts of energy everywhere, that Universe is compelled to either expand or contract.

    If the Universe expands, the light emitted from a distant source will have its wavelength stretched as the very fabric of space itself expands, leading to a redshift. Similarly, if the Universe contracts, the light emitted will have its wavelength compressed, leading to a blueshift.
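    Here is a short sketch contrasting those two readings of the same measured redshift: the special-relativistic Doppler formula tells you how fast a source would have to recede, while the cosmological reading tells you how much space has stretched since the light was emitted. The formulas are standard; the example redshifts are arbitrary.

```python
# Two readings of the same measured redshift z:
#  - pure Doppler shift: 1 + z = sqrt((1 + beta) / (1 - beta)), solved for beta
#  - cosmological shift: space has stretched by a factor of (1 + z)
def doppler_beta(z):
    """Recession speed (in units of c) if z were purely a Doppler shift."""
    return ((1 + z)**2 - 1) / ((1 + z)**2 + 1)

for z in (0.1, 1.0, 3.0, 7.0):
    beta = doppler_beta(z)
    print(f"z = {z:>4}: Doppler reading needs v = {beta:.3f} c; "
          f"cosmological reading: space has stretched by {1 + z:.1f}x")
```

    Both readings are internally consistent for any single object; which one actually describes our Universe has to be settled by the additional tests described below.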

    6
    An illustration of how redshifts work in the expanding Universe. As a galaxy gets more and more distant, it must travel a greater distance and for a greater time through the expanding Universe. If the Universe were contracting, the light would appear blueshifted instead. (LARRY MCNISH OF RASC CALGARY CENTER, VIA CALGARY.RASC.CA/REDSHIFT.HTM)

    When we look out at the galaxies we actually have in the Universe, the overwhelming majority of them aren’t just redshifted, they’re redshifted by an amount proportional to their distance from us. The farther away a galaxy is, the greater its redshift, and the law is so good that these two properties increase in direct proportion to one another.

    First put forth in the late 1920s by scientists like Georges Lemaitre, Howard Robertson, and Edwin Hubble, this was taken even in those early days as overwhelming evidence in favor of the expanding Universe. In other words, nearly a century ago, people were already accepting the explanation that it was expanding space and not a Doppler shift that was responsible for the observed redshift-distance relation.

    Over time, of course, the data has gotten even better in support of this law.

    7
    The original 1929 observations of the Hubble expansion of the Universe, followed by subsequently more detailed, but also uncertain, observations. Hubble’s graph clearly shows the redshift-distance relation with superior data to his predecessors and competitors; the modern equivalents go much farther. (ROBERT P. KIRSHNER (R), EDWIN HUBBLE (L))

    As it turns out, there are actually a total of four possible explanations for the redshift-distance relation we observe. They are as follows:

    The light from these distant galaxies getting “tired,” losing energy as it travels through space.
    The galaxies having been flung outward by an initial explosion, which pushes some of them farther away from us by the present day.
    The galaxies simply moving rapidly, with the faster-moving, higher-redshift galaxies winding up farther away over time.
    Or the fabric of space itself expanding.

    Fortunately, there are observational ways to discern each of these alternatives from one another. The results of our observational tests yield a clear winner.

    8
    According to the tired light hypothesis, the number of photons-per-second we receive from each object drops proportional to the square of its distance, while the number of objects we see increases as the square of the distance. Objects should be redder, but should emit a constant number of photons-per-second as a function of distance. In an expanding universe, however, we receive fewer photons-per-second as time goes on because they have to travel greater distances as the Universe expands, and the energy is also reduced by the redshift. Even factoring in galaxy evolution results in a changing surface brightness that’s fainter at great distances, consistent with what we see. (WIKIMEDIA COMMONS USER STIGMATELLA AURANTIACA)

    The first is to look at the surface brightness of distant galaxies. If the Universe weren’t expanding, a more distant galaxy would appear fainter, but a uniform density of galaxies would ensure we were encountering more of them the farther away we look. In a Universe where the light merely got tired, we would receive a constant number of photons-per-second from the surfaces of progressively more distant galaxies; the only difference is that the light would appear redder the farther away the galaxies are.

    This is known as the Tolman Surface Brightness test, and the results show us that the surface brightness of distant galaxies decreases as a function of redshift, rather than remaining constant. The tired-light hypothesis is no good.
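    The quantitative version of that test is compact. In an expanding universe, the bolometric surface brightness of identical galaxies falls off as (1 + z)^-4, while a static, tired-light universe would only dim them by the single factor of (1 + z) coming from the energy loss itself. A short sketch of the comparison, with arbitrary example redshifts:

```python
# Tolman surface-brightness test: expanding-Universe prediction (1+z)^-4
# versus the tired-light prediction (1+z)^-1, at a few example redshifts.
for z in (0.1, 0.5, 1.0, 2.0):
    expanding   = (1 + z) ** -4     # expansion: energy loss, time dilation, angular effects
    tired_light = (1 + z) ** -1     # tired light: energy loss only
    print(f"z = {z:>3}: expanding -> {expanding:.3f}x, "
          f"tired light -> {tired_light:.3f}x of the rest-frame surface brightness")
```

    Even allowing for the fact that real galaxies evolve and were intrinsically different in the past, the observed dimming follows the steep, expansion-like behavior rather than the nearly flat tired-light one.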

    9
    The 3D reconstruction of 120,000 galaxies and their clustering properties, inferred from their redshift and large-scale structure formation. The data from these surveys allows us to perform deep galaxy counts, and we find that the data is consistent with an expansion scenario, not an initial explosion. (JEREMY TINKER AND THE SDSS-III COLLABORATION)

    Universe map Sloan Digital Sky Survey (SDSS) 2dF Galaxy Redshift Survey

    SDSS-III The Milky Way, showing available SDSS-III APOGEE spectra

    SDSS Telescope at Apache Point Observatory, near Sunspot, NM, USA. Altitude 2,788 meters (9,147 ft)

    The explosion hypothesis is interesting, because if we see galaxies moving away from us in all directions, we might be tempted to conclude there was an explosion long ago, with the galaxies we see behaving like outward-moving shrapnel. This should be easy to detect if so, however, since there should be smaller numbers of galaxies per unit volume at the greatest distances.

    On the other hand, if the Universe were expanding, we should actually expect greater numbers of galaxies per unit volume at the greatest distances, and those galaxies should be younger, less evolved, and smaller in mass and size. This is a question that can be settled observationally, and quite definitively: deep galaxy counts show an expanding Universe, not one where galaxies were flung to great distances from an explosion.

    10
    The differences between a motion-only based explanation for redshift/distances (dotted line) and General Relativity’s (solid) predictions for distances in the expanding Universe. Definitively, only General Relativity’s predictions match what we observe. (WIKIMEDIA COMMONS USER REDSHIFTIMPROVE)

    Finally, there’s a direct redshift-distance test we can perform to determine whether the redshift is due to a Doppler motion or to an expanding Universe. There are different ways to measure distance to an object, but the two most common are as follows:

    angular diameter distance, where you know an object’s physical size and infer its distance based on how large it appears,
    or luminosity distance, where you know how bright an object intrinsically is and infer its distance based on how bright it appears.

    When you look out at the distant Universe, the light has to travel through the Universe from the emitting object to your eyes. When you do the calculations to reconstruct the proper distance to the object based on your observations, there’s no doubt: the data agrees with the expanding Universe’s predictions, not with the Doppler explanation.
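    For completeness, here is a minimal flat-ΛCDM distance calculator showing how those reconstructed distances behave. The cosmological parameters (H0 of about 70 km/s/Mpc, matter density 0.3, dark-energy density 0.7) are standard approximate values assumed for illustration, not numbers taken from this article.

```python
# Minimal flat-LambdaCDM distance calculator. Parameters are assumed,
# approximate values; radiation is neglected for simplicity.
import math

H0         = 70.0          # Hubble constant [km/s/Mpc] (assumed)
Omega_m    = 0.3           # matter density (assumed)
Omega_L    = 0.7           # dark-energy density (assumed)
c_km_s     = 299792.458    # speed of light [km/s]
MPC_TO_GLY = 3.2616e-3     # 1 Mpc in billions of light-years

def comoving_distance_mpc(z, steps=10000):
    """Integrate c dz' / H(z') from 0 to z for a flat universe (midpoint rule)."""
    dz, total = z / steps, 0.0
    for i in range(steps):
        zp = (i + 0.5) * dz
        Hz = H0 * math.sqrt(Omega_m * (1 + zp) ** 3 + Omega_L)
        total += c_km_s / Hz * dz
    return total

# z ~ 6.3 is roughly the regime of the earliest bright quasars; z ~ 1100 is the CMB.
for z in (1.0, 6.3, 1100.0):
    d_c = comoving_distance_mpc(z)
    d_L = d_c * (1 + z)        # luminosity distance
    d_A = d_c / (1 + z)        # angular diameter distance
    print(f"z = {z:>7}: comoving {d_c * MPC_TO_GLY:6.1f} Gly, "
          f"luminosity {d_L * MPC_TO_GLY:9.1f} Gly, "
          f"angular-diameter {d_A * MPC_TO_GLY:6.2f} Gly")
```

    With these assumed inputs, an object at a quasar-like redshift of about 6 sits at a comoving distance of roughly 27–28 billion light-years, and the surface of last scattering lands around 45 billion light-years, in line with the figures quoted just below (the exact values shift by a few percent with the chosen parameters and with radiation included). No Doppler-only interpretation can reproduce distances like these.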

    11
    This image shows SDSS J0100+2802 (center), the brightest quasar in the early Universe. Its light comes to us from when the Universe was only 0.9 billion years old, versus the 13.8 billion year age we have today. Based on its properties, we can infer a distance to this quasar of ~28 billion light-years. We have thousands of quasars and galaxies with similar measurements, establishing beyond a reasonable doubt that redshift is due to the expansion of space, not to a Doppler shift. (SLOAN DIGITAL SKY SURVEY)

    If we lived in a Universe where the distant galaxies were so redshifted because they were moving away from us so quickly, we’d never infer that an object was more than 13.8 billion light-years away, since the Universe is only 13.8 billion years old (since the Big Bang). But we routinely find galaxies that are 20 or even 30 billion light-years distant, with the most distant light of all, from the Cosmic Microwave Background, coming to us from 46 billion light-years away.

    It’s important to consider all the possibilities that are out there, as we must ensure that we’re not fooling ourselves by drawing the type of conclusion we want to draw. Instead, we have to devise observational tests that can discern between alternative explanations for a phenomenon. In the case of the redshift of distant galaxies, all the alternative explanations have fallen away. The expanding Universe, however unintuitive it may be, is the only one that fits the full suite of data.

    See the full article here.



     