Tagged: Dark Energy

  • richardmitnick 9:13 am on September 24, 2022 Permalink | Reply
    Tags: "Star Light Star Bright … But Exactly How Bright?", , , , , Dark Energy, , , Type 1A supernovae   

    From The National Institute of Standards and Technology: “Star Light Star Bright … But Exactly How Bright?” 

    From The National Institute of Standards and Technology

    9.22.22

    Technical Contacts

    Susana Deustua
    susana.deustua@nist.gov
    (301) 975-3763

    John T. Woodward IV
    john.woodward@nist.gov
    (301) 975-5495

    NIST researcher John Woodward with the four-inch telescope used to calibrate the luminosity of nearby stars.
    Credit: C. Suplee/NIST.

    Astronomers use the brightness of a type of exploding star known as a Type Ia supernova (seen here as a bright blue dot to the left of a remote spiral galaxy) to determine the age and expansion rate of the universe. New calibrations of the luminosity of nearby stars, observed by NIST researchers, could help astronomers refine their measurements.
    Credit: J. DePasquale (STScI), M. Kornmesser and M. Zamani (ESA/Hubble), A. Riess (STScI/JHU), NASA, ESA, the SH0ES team, and the Digitized Sky Survey.

    The four-inch telescope on Mt. Hopkins in Arizona. Credit: J. Woodward/NIST.

    Side view of the telescope undergoing testing in the laboratory. Credit: C. Suplee/NIST.

    A picture may be worth a thousand words, but for astronomers, simply recording images of stars and galaxies isn’t enough. To measure the true size and absolute brightness (luminosity) of heavenly bodies, astronomers need to accurately gauge the distance to these objects. To do so, the researchers rely on “standard candles” – stars whose luminosities are so well known that they act like light bulbs of known wattage.

    One way to determine a star’s distance from Earth is to compare how bright the star appears in the sky to its luminosity.
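
    As a back-of-the-envelope illustration (not from the article): if a standard candle's luminosity L is known and its flux F is measured at Earth, the inverse-square law F = L/(4πd²) gives the distance directly. A minimal Python sketch with invented numbers:

```python
import math

def distance_from_standard_candle(luminosity_watts, flux_w_per_m2):
    """Distance (meters) from the inverse-square law: F = L / (4 pi d^2)."""
    return math.sqrt(luminosity_watts / (4.0 * math.pi * flux_w_per_m2))

# Illustrative values only: a star with roughly the Sun's luminosity (~3.8e26 W)
# whose measured flux at Earth is 1e-12 W/m^2.
d_m = distance_from_standard_candle(3.8e26, 1e-12)
LY_M = 9.46e15  # meters in one light-year
print(f"distance ≈ {d_m / LY_M:.0f} light-years")
```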

    But even standard candles need to be calibrated. For more than a decade, scientists at the National Institute of Standards and Technology (NIST) have been working to improve the methods for calibrating standard stars. They observed two nearby bright stars, Vega and Sirius, in order to calibrate their luminosity over a range of visible-light wavelengths. The researchers are now completing their analysis and plan to release the calibration data to astronomers within the next 12 months.

    The calibration data could aid astronomers who use more distant standard candles – exploded stars known as Type Ia supernovas – to determine the age and expansion rate of the universe. (Comparing the brightness of remote Type Ia supernovas to nearby ones led to the Nobel Prize-winning discovery that the expansion of the universe is not slowing down, as expected, but is actually speeding up.)

    ______________________________________________________________________________

    4 October 2011

    The Royal Swedish Academy of Sciences has decided to award the Nobel Prize in Physics for 2011

    with one half to

    Saul Perlmutter
    The Supernova Cosmology Project
    The DOE’s Lawrence Berkeley National Laboratory and The University of California-Berkeley,

    and the other half jointly to

    Brian P. Schmidt

    The High-z Supernova Search Team, The Australian National University, Weston Creek, Australia.

    and

    Adam G. Riess

    The High-z Supernova Search Team, The Johns Hopkins University and The Space Telescope Science Institute, Baltimore, MD.

    Written in the stars

    “Some say the world will end in fire, some say in ice…” *

    What will be the final destiny of the Universe? Probably it will end in ice, if we are to believe this year’s Nobel Laureates in Physics. They have studied several dozen exploding stars, called supernovae, and discovered that the Universe is expanding at an ever-accelerating rate. The discovery came as a complete surprise even to the Laureates themselves.

    In 1998, cosmology was shaken at its foundations as two research teams presented their findings. Headed by Saul Perlmutter, one of the teams had set to work in 1988. Brian Schmidt headed another team, launched at the end of 1994, where Adam Riess was to play a crucial role.

    The research teams raced to map the Universe by locating the most distant supernovae. More sophisticated telescopes on the ground and in space, as well as more powerful computers and new digital imaging sensors (CCD, Nobel Prize in Physics in 2009), opened the possibility in the 1990s to add more pieces to the cosmological puzzle.

    The teams used a particular kind of supernova, called a Type Ia supernova. It is an explosion of an old compact star that is as heavy as the Sun but as small as the Earth. A single such supernova can emit as much light as a whole galaxy. All in all, the two research teams found over 50 distant supernovae whose light was weaker than expected – this was a sign that the expansion of the Universe was accelerating. The potential pitfalls had been numerous, and the scientists found reassurance in the fact that both groups had reached the same astonishing conclusion.

    For almost a century, the Universe has been known to be expanding as a consequence of the Big Bang about 14 billion years ago. However, the discovery that this expansion is accelerating is astounding. If the expansion continues to speed up, the Universe will end in ice.

    The acceleration is thought to be driven by dark energy, but what that dark energy is remains an enigma – perhaps the greatest in physics today. What is known is that dark energy constitutes about three quarters of the Universe. Therefore the findings of the 2011 Nobel Laureates in Physics have helped to unveil a Universe that to a large extent is unknown to science. And everything is possible again.

    *Robert Frost, Fire and Ice, 1920
    ______________________________________________________________________________

    Astronomers may be able to use the NIST calibrations of Vega and Sirius to better compare the brightness of nearby and faraway type Ia supernovas, leading to more accurate measurements of the expansion of the universe and its age.

    In the ongoing NIST study, scientists observe the two nearby stars with a four-inch telescope they designed and placed atop Mount Hopkins in the desert of southern Arizona.

    John Woodward, Susana Deustua, and their colleagues have repeatedly observed the spectra, or colors, of light emitted by Vega (25 light-years away) and Sirius (8.6 light-years away). One light-year, the distance that light travels through a vacuum in one year, is about 9.46 trillion kilometers.
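
    The quoted figure is easy to check: one light-year is the speed of light multiplied by the number of seconds in a (Julian) year.

```python
c_km_s = 299_792.458                    # speed of light, km/s (exact by definition)
seconds_per_year = 365.25 * 24 * 3600   # Julian year
print(f"1 light-year ≈ {c_km_s * seconds_per_year:.3e} km")  # ≈ 9.461e12 km
```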

    At the beginning and end of each observing night, the researchers tilt the telescope downwards so that they can compare the stellar spectra to that of an artificial star – a quartz lamp of precisely measured luminosity placed 100 meters from the telescope.

    Before the scientists can directly make the comparisons, they must account for the effect of Earth’s atmosphere, which scatters and absorbs some of the starlight before it can reach the telescope. Although light from the ground-based lamp does not travel through the full depth of the atmosphere, some of it is scattered by air during its short, horizontal journey to the telescope.

    To assess how much of the ground-based light is scattered from the lamp, the NIST team measures the ratio of the power of a helium-neon laser beam at its output to the power received 100 m away, at the site of the lamp.

    To determine how much starlight is lost to the Earth’s atmosphere, the researchers record the amount of starlight reaching the telescope as it points in different directions, peering through different thicknesses of the atmosphere during the night. Changes in the amount of light recorded by the telescope as the night progresses allow astronomers to correct for the atmospheric absorption.
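
    What the last two paragraphs describe is, in essence, the classical Langley (Bouguer) extrapolation: the star's instrumental magnitude grows linearly with airmass X ≈ sec(zenith angle), and the intercept at X = 0 is the magnitude above the atmosphere. A minimal sketch with invented numbers (an illustration of the standard technique, not NIST's actual pipeline):

```python
import numpy as np

# Horizontal-path correction for the quartz lamp (cf. the laser measurement):
# the transmittance over 100 m is just the ratio of received to emitted power.
laser_out, laser_at_lamp = 1.000, 0.985      # arbitrary power units (invented)
t_horizontal = laser_at_lamp / laser_out

# A star observed at several zenith angles through the night (invented values):
zenith_deg = np.array([10.0, 30.0, 45.0, 60.0, 70.0])
flux = np.array([0.980, 0.955, 0.912, 0.830, 0.720])   # relative fluxes

X = 1.0 / np.cos(np.radians(zenith_deg))   # plane-parallel airmass ~ sec(z)
m = -2.5 * np.log10(flux)                  # instrumental magnitudes

# Langley law: m(X) = m0 + k*X; the intercept m0 is the magnitude above
# the atmosphere, and the slope k is the extinction per unit airmass.
k, m0 = np.polyfit(X, m, 1)
print(f"horizontal lamp transmittance ≈ {t_horizontal:.3f}")
print(f"extinction k ≈ {k:.3f} mag/airmass, magnitude above atmosphere ≈ {m0:.3f}")
```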

    Once Vega and Sirius are calibrated, astronomers can use those stars as steppingstones to calibrate the light from other stars. For instance, by using the same telescope, researchers can observe a set of slightly fainter stars—call them Set 2. The luminosity of those fainter stars can then be calibrated using Vega and Sirius as reference standards.

    Switching to a telescope large enough to observe both the newly calibrated Set 2 and a group of even fainter stars (call them Set 3), astronomers can calibrate the light from Set 3 in terms of Set 2. Astronomers can repeat the process as needed to calibrate light from extremely remote stars. In this way, astronomers will be able to transfer the NIST calibration of Vega and Sirius to stars that lie thousands to millions of light-years away.

    Next year, Deustua and Woodward will move their small telescope, now back at NIST, to the European Southern Observatory’s (ESO’s) Paranal Observatory in the high-altitude desert of northern Chile.

    With a drier climate than Mt. Hopkins, the Chilean site promises more clear nights to observe Sirius and Vega and less moisture to absorb or scatter their light. The telescope will reside on a mountaintop away from ESO’s Very Large Telescope, a suite of four 8.2-m telescopes and four 1.8-m auxiliary telescopes, so that the light from NIST’s quartz lamp won’t interfere with observations of distant galaxies.

    The team also plans to expand its repertoire of bright nearby stars to include Arcturus (37 light-years), Gamma Crucis (89 light-years), and Gamma Trianguli Australis (184 light-years) and to observe stars at longer, infrared wavelengths. The recently launched James Webb Space Telescope and the Roman Space Telescope, set for launch by the end of the decade, are designed to examine the universe at these wavelengths.

    The NIST researchers recently received seed money to build a larger telescope which could observe and calibrate fainter, more distant stars. That would allow astronomers to transfer the NIST calibration to remote standard candles more directly. Reducing the number of steppingstones between the stars observed by NIST and the stars astronomers are studying reduces calibration errors.
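
    The stepping-stone point can be made quantitative: if each calibration transfer adds an independent fractional uncertainty, the total grows as the square root of the number of steps, so fewer steps means a smaller error. A hedged sketch (the 0.5% per step is invented for illustration):

```python
import math

def chained_uncertainty(per_step_fraction, n_steps):
    """Total fractional uncertainty after n independent calibration transfers,
    with the per-step uncertainties added in quadrature."""
    return math.sqrt(n_steps) * per_step_fraction

for n in (1, 2, 4):
    print(f"{n} step(s): {chained_uncertainty(0.005, n) * 100:.2f}% total")
# Halving the number of steps shrinks the total error by a factor of sqrt(2).
```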

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    NIST Campus, Gaithersburg, MD.

    The National Institute of Standards and Technology‘s Mission, Vision, Core Competencies, and Core Values

    Mission

    To promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life.

    NIST’s vision

    NIST will be the world’s leader in creating critical measurement solutions and promoting equitable standards. Our efforts stimulate innovation, foster industrial competitiveness, and improve the quality of life.

    NIST’s core competencies

    Measurement science
    Rigorous traceability
    Development and use of standards

    NIST’s core values

    NIST is an organization with strong values, reflected both in our history and our current work. NIST leadership and staff will uphold these values to ensure a high performing environment that is safe and respectful of all.

    Perseverance: We take the long view, planning the future with scientific knowledge and imagination to ensure continued impact and relevance for our stakeholders.
    Integrity: We are ethical, honest, independent, and provide an objective perspective.
    Inclusivity: We work collaboratively to harness the diversity of people and ideas, both inside and outside of NIST, to attain the best solutions to multidisciplinary challenges.
    Excellence: We apply rigor and critical thinking to achieve world-class results and continuous improvement in everything we do.

    Background

    The Articles of Confederation, ratified by the colonies in 1781, contained the clause, “The United States in Congress assembled shall also have the sole and exclusive right and power of regulating the alloy and value of coin struck by their own authority, or by that of the respective states—fixing the standards of weights and measures throughout the United States”. Article 1, section 8, of the Constitution of the United States (1789), transferred this power to Congress; “The Congress shall have power…To coin money, regulate the value thereof, and of foreign coin, and fix the standard of weights and measures”.

    In January 1790, President George Washington, in his first annual message to Congress stated that, “Uniformity in the currency, weights, and measures of the United States is an object of great importance, and will, I am persuaded, be duly attended to”, and ordered Secretary of State Thomas Jefferson to prepare a plan for Establishing Uniformity in the Coinage, Weights, and Measures of the United States, afterwards referred to as the Jefferson report. On October 25, 1791, Washington appealed a third time to Congress, “A uniformity of the weights and measures of the country is among the important objects submitted to you by the Constitution and if it can be derived from a standard at once invariable and universal, must be no less honorable to the public council than conducive to the public convenience”, but it was not until 1838, that a uniform set of standards was worked out. In 1821, John Quincy Adams had declared “Weights and measures may be ranked among the necessities of life to every individual of human society”.

    From 1830 until 1901, the role of overseeing weights and measures was carried out by the Office of Standard Weights and Measures, which was part of the U.S. Coast and Geodetic Survey in the Department of the Treasury.

    Bureau of Standards

    In 1901, in response to a bill proposed by Congressman James H. Southard (R-Ohio), the National Bureau of Standards was founded with the mandate to provide standard weights and measures and to serve as the national physical laboratory for the United States. (Southard had previously sponsored a bill for metric conversion of the United States.)

    President Theodore Roosevelt appointed Samuel W. Stratton as the first director. The budget for the first year of operation was $40,000. The Bureau took custody of the copies of the kilogram and meter bars that were the standards for US measures, and set up a program to provide metrology services for United States scientific and commercial users. A laboratory site was constructed in Washington DC (US) and instruments were acquired from the national physical laboratories of Europe. In addition to weights and measures the Bureau developed instruments for electrical units and for measurement of light. In 1905 a meeting was called that would be the first National Conference on Weights and Measures.

    Initially conceived as purely a metrology agency, the Bureau of Standards was directed by Herbert Hoover to set up divisions to develop commercial standards for materials and products. Some of these standards were for products intended for government use, but product standards also affected private-sector consumption. Quality standards were developed for products including some types of clothing; automobile brake systems and headlamps; antifreeze; and electrical safety. During World War I, the Bureau worked on multiple problems related to war production, even operating its own facility to produce optical glass when European supplies were cut off. Between the wars, Harry Diamond of the Bureau developed a blind-approach radio aircraft landing system. During World War II, military research and development was carried out, including development of radio propagation forecast methods; the proximity fuze and the standardized airframe used originally for Project Pigeon; and, shortly afterwards, the autonomously radar-guided Bat anti-ship guided bomb and the Kingfisher family of torpedo-carrying missiles.

    In 1948, financed by the United States Air Force, the Bureau began design and construction of SEAC: the Standards Eastern Automatic Computer. The computer went into operation in May 1950 using a combination of vacuum tubes and solid-state diode logic. About the same time, the Standards Western Automatic Computer was built at the Los Angeles office of the NBS by Harry Huskey and used for research there. A mobile version, DYSEAC, was built for the Signal Corps in 1954.

    Due to a changing mission, the “National Bureau of Standards” became “The National Institute of Standards and Technology” in 1988.

    Following September 11, 2001, NIST conducted the official investigation into the collapse of the World Trade Center buildings.

    Organization

    NIST is headquartered in Gaithersburg, Maryland, and operates a facility in Boulder, Colorado, which was dedicated by President Eisenhower in 1954. NIST’s activities are organized into laboratory programs and extramural programs. Effective October 1, 2010, NIST was realigned by reducing the number of NIST laboratory units from ten to six. NIST Laboratories include:

    Communications Technology Laboratory (CTL)
    Engineering Laboratory (EL)
    Information Technology Laboratory (ITL)
    Center for Neutron Research (NCNR)
    Material Measurement Laboratory (MML)
    Physical Measurement Laboratory (PML)

    Extramural programs include:

    Hollings Manufacturing Extension Partnership (MEP), a nationwide network of centers to assist small and mid-sized manufacturers to create and retain jobs, improve efficiencies, and minimize waste through process improvements and to increase market penetration with innovation and growth strategies;
    Technology Innovation Program (TIP), a grant program where NIST and industry partners cost share the early-stage development of innovative but high-risk technologies;
    Baldrige Performance Excellence Program, which administers the Malcolm Baldrige National Quality Award, the nation’s highest award for performance and business excellence.

    NIST’s Boulder laboratories are best known for housing NIST‑F1, a cesium fountain atomic clock.

    NIST‑F1 serves as the source of the nation’s official time. From its measurement of the natural resonance frequency of cesium—which defines the second—NIST broadcasts time signals via longwave radio station WWVB near Fort Collins in Colorado, and shortwave radio stations WWV and WWVH, located near Fort Collins and Kekaha in Hawai’i, respectively.

    NIST also operates a neutron science user facility: the NIST Center for Neutron Research (NCNR).

    The NCNR provides scientists access to a variety of neutron scattering instruments which they use in many research fields (materials science; fuel cells; biotechnology etc.).

    The SURF III Synchrotron Ultraviolet Radiation Facility is a source of synchrotron radiation in continuous operation since 1961.

    SURF III now serves as the US national standard for source-based radiometry throughout the generalized optical spectrum. All NASA-borne extreme-ultraviolet observation instruments have been calibrated at SURF since the 1970s, and SURF is used for measurement and characterization of systems for extreme ultraviolet lithography.

    The Center for Nanoscale Science and Technology performs research in nanotechnology, both through internal research efforts and by running a user-accessible cleanroom nanomanufacturing facility.

    This “NanoFab” is equipped with tools for lithographic patterning and imaging (e.g., electron microscopes and atomic force microscopes).
    Committees

    NIST has seven standing committees:

    Technical Guidelines Development Committee (TGDC)
    Advisory Committee on Earthquake Hazards Reduction (ACEHR)
    National Construction Safety Team Advisory Committee (NCST Advisory Committee)
    Information Security and Privacy Advisory Board (ISPAB)
    Visiting Committee on Advanced Technology (VCAT)
    Board of Overseers for the Malcolm Baldrige National Quality Award (MBNQA Board of Overseers)
    Manufacturing Extension Partnership National Advisory Board (MEPNAB)

    Measurements and standards

    As part of its mission, NIST supplies industry, academia, government, and other users with over 1,300 Standard Reference Materials (SRMs). These artifacts are certified as having specific characteristics or component content, used as calibration standards for measuring equipment and procedures, quality control benchmarks for industrial processes, and experimental control samples.

    Handbook 44

    NIST publishes the Handbook 44 each year after the annual meeting of the National Conference on Weights and Measures (NCWM). Each edition is developed through cooperation of the Committee on Specifications and Tolerances of the NCWM and the Weights and Measures Division (WMD) of the NIST. The purpose of the book is a partial fulfillment of the statutory responsibility for “cooperation with the states in securing uniformity of weights and measures laws and methods of inspection”.

    NIST has been publishing various forms of what is now the Handbook 44 since 1918 and began publication under the current name in 1949. The 2010 edition conforms to the concept of the primary use of the SI (metric) measurements recommended by the Omnibus Foreign Trade and Competitiveness Act of 1988.

     
  • richardmitnick 10:28 am on September 7, 2022 Permalink | Reply
    Tags: "'Lopsided' Universe could mean revision of standard cosmological model", , , , , Dark Energy, , ,   

    From The University of Oxford (UK): “‘Lopsided’ Universe could mean revision of Standard Cosmological Model – ΛCDM Model of Cosmology” 


    From The University of Oxford (UK)

    9.7.22


    Dr Sebastian von Hausegger and Professor Subir Sarkar from the Rudolf Peierls Centre for Theoretical Physics at Oxford, together with their collaborators Dr Nathan Secrest (US Naval Observatory, Washington), Dr Roya Mohayaee (Institut d’Astrophysique, Paris) and Dr Mohamed Rameez (Tata Institute of Fundamental Research, Mumbai), have made a surprising discovery about the Universe. Their paper is in press in The Astrophysical Journal Letters [below].

    The researchers used observations of over a million quasars and half a million radio sources to test the ‘cosmological principle’ which underlies modern cosmology. It says that when averaged on large scales the Universe is isotropic and homogeneous. This allows a simple mathematical description of space-time – the Friedmann-Lemaître-Robertson-Walker (FLRW) metric – which enormously simplifies the application of Albert Einstein’s General Theory of Relativity to the Universe as a whole, thus yielding the “standard cosmological model”. Interpretation of observational data in the framework of this model has however led to the astounding conclusion that about 70% of the Universe is in the form of a mysterious “dark energy” which is causing its expansion rate to accelerate.

    ___________________________________________________________________
    The Dark Energy Survey

    Dark Energy Camera [DECam] built at The DOE’s Fermi National Accelerator Laboratory.

    The Víctor M. Blanco 4-meter Telescope at NSF’s NOIRLab Cerro Tololo Inter-American Observatory in Chile, which houses the Dark Energy Camera (DECam), at an altitude of 2200 meters (7200 feet), approximately 80 km east of La Serena.

    The Dark Energy Survey is an international, collaborative effort to map hundreds of millions of galaxies, detect thousands of supernovae, and find patterns of cosmic structure that will reveal the nature of the mysterious dark energy that is accelerating the expansion of our Universe. The Dark Energy Survey began searching the Southern skies on August 31, 2013.

    According to Albert Einstein’s Theory of General Relativity, gravity should lead to a slowing of the cosmic expansion. Yet, in 1998, two teams of astronomers studying distant supernovae made the remarkable discovery that the expansion of the universe is speeding up.
    Saul Perlmutter (center) [The Supernova Cosmology Project] shared the 2006 Shaw Prize in Astronomy, the 2011 Nobel Prize in Physics, and the 2015 Breakthrough Prize in Fundamental Physics with Brian P. Schmidt (right) and Adam Riess (left) [The High-z Supernova Search Team] for providing evidence that the expansion of the universe is accelerating.

    To explain cosmic acceleration, cosmologists are faced with two possibilities: either 70% of the universe exists in an exotic form, now called Dark Energy, that exhibits a gravitational force opposite to the attractive gravity of ordinary matter, or General Relativity must be replaced by a new theory of gravity on cosmic scales.

    The Dark Energy Survey is designed to probe the origin of the accelerating universe and help uncover the nature of Dark Energy by measuring the 14-billion-year history of cosmic expansion with high precision. More than 400 scientists from over 25 institutions in the United States, Spain, the United Kingdom, Brazil, Germany, Switzerland, and Australia are working on the project. The collaboration built and is using an extremely sensitive 570-Megapixel digital camera, DECam, mounted on the Blanco 4-meter telescope at Cerro Tololo Inter-American Observatory, high in the Chilean Andes, to carry out the project.

    Over six years (2013-2019), the Dark Energy Survey collaboration used 758 nights of observation to carry out a deep, wide-area survey to record information from 300 million galaxies that are billions of light-years from Earth. The survey imaged 5000 square degrees of the southern sky in five optical filters to obtain detailed information about each galaxy. A fraction of the survey time was used to observe smaller patches of sky roughly once a week to discover and study thousands of supernovae and other astrophysical transients.
    ___________________________________________________________________
    This has been interpreted as arising from the zero-point fluctuations of the quantum vacuum, with the associated energy scale set by H₀, the present rate of expansion of the universe. However, this is quite inexplicable in the successful Standard Model (quantum field theory) of fundamental interactions, the characteristic energy scale of which is higher by a factor of 10⁴⁴. So, while the standard cosmological model (called ΛCDM) describes the observational data well, its main component, dark energy, has no physical basis.
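
    The factor of 10⁴⁴ can be checked on the back of an envelope: the energy scale associated with the expansion rate is ħH₀, while a characteristic Standard Model scale is the electroweak scale of roughly 246 GeV. A quick sanity check (H₀ ≈ 70 km/s/Mpc is an assumed reference value):

```python
hbar_eV_s = 6.582e-16      # reduced Planck constant, eV*s
H0 = 70.0 / 3.086e19       # Hubble constant, converted from km/s/Mpc to 1/s
E_H0 = hbar_eV_s * H0      # energy scale set by the expansion rate, in eV
E_EW = 246e9               # electroweak scale, in eV
print(f"hbar*H0 ≈ {E_H0:.2e} eV")                      # ~1.5e-33 eV
print(f"ratio E_EW / (hbar*H0) ≈ {E_EW / E_H0:.1e}")   # ~1e44
```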

    Testing foundational assumptions

    This is what motivated the researchers to re-examine its underlying assumptions. Professor Sarkar says: “When the foundations of today’s standard cosmological model were laid a hundred years ago, there was no data. We didn’t even know then that we live in a galaxy – just one among a hundred billion others. Now that we do have data, we can, and should, test these foundational assumptions since a lot rests on them – in particular the inference that dark energy dominates the Universe.”

    In fact, the Universe today is manifestly not homogeneous and isotropic. Astronomical surveys reveal a filamentary structure of galaxies, clusters of galaxies, and superclusters of clusters … and this ‘cosmic web’ extends up to the largest scales currently probed, about 2 billion light-years.

    The conventional wisdom is that, while clumpy on small scales, the distribution of matter becomes homogeneous when averaged on scales larger than about 300 million light-years. The Hubble expansion is smooth and isotropic on large scales, while on small scales the gravitational effect of inhomogeneities gives rise to ‘peculiar’ velocities; e.g., our nearest neighbor, the Andromeda galaxy, is not receding in the Hubble flow – rather, it is falling towards us.
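
    The Andromeda example is easy to quantify. At roughly 0.78 Mpc, a pure Hubble flow (taking H₀ ≈ 70 km/s/Mpc) would carry the galaxy away at about 55 km/s, yet it approaches us at roughly 110 km/s, so its peculiar velocity dominates the local expansion. A quick check with these approximate values:

```python
H0 = 70.0            # km/s/Mpc (assumed value)
d_andromeda = 0.78   # Mpc (approximate distance)
v_hubble = H0 * d_andromeda
v_observed = -110.0  # km/s; negative = approaching (approximate literature value)
v_peculiar = v_observed - v_hubble
print(f"Hubble-flow velocity: {v_hubble:.0f} km/s (receding)")
print(f"implied peculiar velocity: {v_peculiar:.0f} km/s (toward us)")
```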

    Back in 1966, the cosmologist Dennis Sciama noted that because of this, the cosmic microwave background (CMB) radiation from the Big Bang could not be uniform on the sky.

    It must exhibit a ‘dipole anisotropy’, i.e., appear hotter in the direction of our local motion and colder in the opposite direction. This was indeed found soon afterwards and is attributed to our motion at about 370 km/s towards a particular direction (in the constellation of Crater). Accordingly, a special relativistic ‘boost’ is applied to all cosmological data (redshifts, apparent magnitudes, etc.) to transform them to the reference frame in which the universe is isotropic, since it is in this ‘cosmic rest frame’ that the Friedmann-Lemaître equations of the standard cosmological model hold. Application of these equations to the corrected data then indicates that the Hubble expansion rate is accelerating, as if driven by Einstein’s Cosmological Constant “Λ”, aka dark energy.
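
    To lowest order in v/c, the kinematic dipole amplitude is ΔT = (v/c)·T₀. A quick check with the quoted 370 km/s and the mean CMB temperature of 2.725 K reproduces the observed dipole of about 3.4 mK:

```python
v = 370.0         # km/s, our motion with respect to the CMB
c = 299_792.458   # km/s, speed of light
T0 = 2.725        # K, mean CMB temperature
dT = (v / c) * T0
print(f"expected dipole amplitude ≈ {dT * 1000:.2f} mK")  # ≈ 3.36 mK
```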

    The cosmological principle

    How can we check if this is true? If the dipole anisotropy in the CMB is due to our motion, then there must be a similar dipole in the sky distribution of all cosmologically distant sources. This is due to ‘aberration’ because of the finite speed of light – as was recognized by Oxford astronomer James Bradley in 1727, long before Albert Einstein’s formulation of the Special Theory of Relativity which predicts this effect. Such sources were first identified with radio telescopes; the relativist George Ellis and radio astronomer John Baldwin noted in 1984 that with a uniform sky map of at least a few hundred thousand such sources, this dipole could be measured and compared with the standard expectation. It was not however until this millennium that the first such data became available – the NRAO VLA Sky Survey (NVSS) catalogue of radio sources.

    The dipole amplitude turned out to be higher than expected, although its direction was consistent with that of the CMB. However, the uncertainties were large, so the significance of the discrepancy was not compelling. Two years ago, the present team of researchers upped the stakes by analyzing a bigger catalogue of 1.4 million quasars mapped by NASA’s Wide-field Infrared Survey Explorer (WISE).
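
    The ‘standard expectation’ here is the Ellis & Baldwin (1984) formula: for sources with integral counts N(>S) ∝ S⁻ˣ and spectra S_ν ∝ ν⁻ᵅ, motion at β = v/c produces a dipole of amplitude D = [2 + x(1 + α)]β. A sketch (the values x ≈ 1 and α ≈ 0.75 are typical assumptions for radio catalogues, not numbers taken from this paper):

```python
def ellis_baldwin_dipole(beta, x=1.0, alpha=0.75):
    """Expected kinematic dipole amplitude of a flux-limited source catalogue
    (Ellis & Baldwin 1984): D = [2 + x*(1 + alpha)] * beta."""
    return (2.0 + x * (1.0 + alpha)) * beta

beta_cmb = 370.0 / 299_792.458   # speed inferred from the CMB dipole
print(f"expected dipole D ≈ {ellis_baldwin_dipole(beta_cmb):.2e}")  # ~4.6e-3
```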

    They found a similar discrepancy but at much higher significance. Dr von Hausegger comments: “If distant sources are not isotropic in the rest frame in which the CMB is isotropic, it implies a violation of the cosmological principle … which means going back to square one! So, we must now seek corroborating evidence to understand what causes this unexpected result.”

    In their recent paper, the researchers have addressed this by carrying out a joint analysis of the NVSS and WISE catalogues after performing various detailed checks to demonstrate their suitability for the purpose. These catalogues are systematically independent and have almost no shared objects, so this is equivalent to performing two independent experiments. The dipoles in the two catalogues, made at widely different wavelengths, are found to be consistent with each other. The consistency of the two dipoles improves upon boosting to the frame in which the CMB is isotropic (assuming its dipole to be kinematic in origin), which suggests that cosmologically distant radio galaxies and quasars may have an intrinsic anisotropy in this frame. The joint significance of the discrepancy between the rest frames of radiation and matter now exceeds 5σ (i.e., a probability of less than 1 in 3.5 million of being a fluke). “This issue can no longer be ignored,” comments Professor Sarkar. “The validity of the FLRW metric itself is now in question!”
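
    The quoted odds are just the Gaussian tail probability: a one-sided 5σ fluctuation has probability ≈ 2.9 × 10⁻⁷, about 1 in 3.5 million. A one-liner confirms it:

```python
import math

# One-sided Gaussian tail probability beyond 5 sigma:
p = 0.5 * math.erfc(5.0 / math.sqrt(2.0))
print(f"p ≈ {p:.2e}, i.e., about 1 in {1 / p:,.0f}")  # ≈ 2.87e-07, ~1 in 3.5 million
```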

    Potential paradigm-changing finding

    New data with which to check this potentially paradigm-changing finding will soon come from the Legacy Survey of Space and Time (LSST) to be carried out at the Vera C Rubin Observatory in Chile.

    Oxford Physics is closely involved in this project, along with many other institutions in the UK and all over the world. Professor Ian Shipsey who has been a member of LSST since 2008, is excited about the prospect of carrying out fundamental cosmological tests. ‘As a particle physicist, I am acutely aware that the foundations of the Standard Model of particle physics are constantly under scrutiny.

    One of the reasons I joined LSST, and have worked for so long on it, is precisely to enable powerful tests of the foundations of the standard cosmological model,’ he says. To this end, Dr von Hausegger and Professor Sarkar are leading projects in the LSST Dark Energy Science Collaboration to use the forthcoming data to test the homogeneity and isotropy of the Universe. ‘We will soon know if the standard cosmological model and the inference of dark energy are indeed valid,’ concludes Professor Sarkar.

    Science paper:
    The Astrophysical Journal Letters

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition


    The University of Oxford

    Universitas Oxoniensis

    The University of Oxford [a.k.a. The Chancellor, Masters and Scholars of the University of Oxford] is a collegiate research university in Oxford, England. There is evidence of teaching as early as 1096, making it the oldest university in the English-speaking world and the world’s second-oldest university in continuous operation. It grew rapidly from 1167 when Henry II banned English students from attending the University of Paris [Université de Paris] (FR). After disputes between students and Oxford townsfolk in 1209, some academics fled north-east to Cambridge where they established what became the University of Cambridge (UK). The two English ancient universities share many common features and are jointly referred to as Oxbridge.

    The university is made up of thirty-nine semi-autonomous constituent colleges, six permanent private halls, and a range of academic departments which are organized into four divisions. All the colleges are self-governing institutions within the university, each controlling its own membership and with its own internal structure and activities. All students are members of a college. It does not have a main campus, and its buildings and facilities are scattered throughout the city centre. Undergraduate teaching at Oxford consists of lectures, small-group tutorials at the colleges and halls, seminars, laboratory work and occasionally further tutorials provided by the central university faculties and departments. Postgraduate teaching is provided predominantly centrally.

    Oxford operates the world’s oldest university museum, as well as the largest university press in the world and the largest academic library system nationwide. In the fiscal year ending 31 July 2019, the university had a total income of £2.45 billion, of which £624.8 million was from research grants and contracts.

    Oxford has educated a wide range of notable alumni, including 28 prime ministers of the United Kingdom and many heads of state and government around the world. As of October 2020, 72 Nobel Prize laureates, 3 Fields Medalists, and 6 Turing Award winners have studied, worked, or held visiting fellowships at the University of Oxford, while its alumni have won 160 Olympic medals. Oxford is the home of numerous scholarships, including the Rhodes Scholarship, one of the oldest international graduate scholarship programmes.

    The University of Oxford’s foundation date is unknown. It is known that teaching at Oxford existed in some form as early as 1096, but it is unclear when a university came into being.

    It grew quickly from 1167 when English students returned from The University of Paris-Sorbonne [Université de Paris-Sorbonne](FR). The historian Gerald of Wales lectured to such scholars in 1188, and the first known foreign scholar, Emo of Friesland, arrived in 1190. The head of the university had the title of chancellor from at least 1201, and the masters were recognized as a universitas or corporation in 1231. The university was granted a royal charter in 1248 during the reign of King Henry III.

    The students associated together on the basis of geographical origins, into two ‘nations’, representing the North (northerners or Boreales, who included the English people from north of the River Trent and the Scots) and the South (southerners or Australes, who included English people from south of the Trent, the Irish and the Welsh). In later centuries, geographical origins continued to influence many students’ affiliations when membership of a college or hall became customary in Oxford. In addition, members of many religious orders, including Dominicans, Franciscans, Carmelites and Augustinians, settled in Oxford in the mid-13th century, gained influence and maintained houses or halls for students. At about the same time, private benefactors established colleges as self-contained scholarly communities. Among the earliest such founders were William of Durham, who in 1249 endowed University College, and John Balliol, father of a future King of Scots; Balliol College bears his name. Another founder, Walter de Merton, a Lord Chancellor of England and afterwards Bishop of Rochester, devised a series of regulations for college life. Merton College thereby became the model for such establishments at Oxford, as well as at the University of Cambridge. Thereafter, an increasing number of students lived in colleges rather than in halls and religious houses.

    In 1333–1334, an attempt by some dissatisfied Oxford scholars to found a new university at Stamford, Lincolnshire, was blocked by the universities of Oxford and Cambridge petitioning King Edward III. Thereafter, until the 1820s, no new universities were allowed to be founded in England, even in London; thus, Oxford and Cambridge had a duopoly, which was unusual in large western European countries.

    The new learning of the Renaissance greatly influenced Oxford from the late 15th century onwards. Among university scholars of the period were William Grocyn, who contributed to the revival of Greek language studies, and John Colet, the noted biblical scholar.

    With the English Reformation and the breaking of communion with the Roman Catholic Church, recusant scholars from Oxford fled to continental Europe, settling especially at the University of Douai. The method of teaching at Oxford was transformed from the medieval scholastic method to Renaissance education, although institutions associated with the university suffered losses of land and revenues. As a centre of learning and scholarship, Oxford’s reputation declined in the Age of Enlightenment; enrollments fell and teaching was neglected.

    In 1636, William Laud, the chancellor and Archbishop of Canterbury, codified the university’s statutes. These, to a large extent, remained its governing regulations until the mid-19th century. Laud was also responsible for the granting of a charter securing privileges for The University Press, and he made significant contributions to the Bodleian Library, the main library of the university. From the beginnings of the Church of England as the established church until 1866, membership of the church was a requirement to receive the BA degree from the university and “dissenters” were only permitted to receive the MA in 1871.

    The university was a centre of the Royalist party during the English Civil War (1642–1649), while the town favored the opposing Parliamentarian cause. From the mid-18th century onwards, however, the university took little part in political conflicts.

    Wadham College, founded in 1610, was the undergraduate college of Sir Christopher Wren. Wren was part of a brilliant group of experimental scientists at Oxford in the 1650s, the Oxford Philosophical Club, which included Robert Boyle and Robert Hooke. This group held regular meetings at Wadham under the guidance of the college’s Warden, John Wilkins, and the group formed the nucleus that went on to found the Royal Society.

    Before reforms in the early 19th century, the curriculum at Oxford was notoriously narrow and impractical. Sir Spencer Walpole, a historian of contemporary Britain and a senior government official, had not attended any university. He said, “Few medical men, few solicitors, few persons intended for commerce or trade, ever dreamed of passing through a university career.” He quoted the Oxford University Commissioners in 1852 stating: “The education imparted at Oxford was not such as to conduce to the advancement in life of many persons, except those intended for the ministry.” Nevertheless, Walpole argued:

    “Among the many deficiencies attending a university education there was, however, one good thing about it, and that was the education which the undergraduates gave themselves. It was impossible to collect some thousand or twelve hundred of the best young men in England, to give them the opportunity of making acquaintance with one another, and full liberty to live their lives in their own way, without evolving in the best among them, some admirable qualities of loyalty, independence, and self-control. If the average undergraduate carried from university little or no learning, which was of any service to him, he carried from it a knowledge of men and respect for his fellows and himself, a reverence for the past, a code of honor for the present, which could not but be serviceable. He had enjoyed opportunities… of intercourse with men, some of whom were certain to rise to the highest places in the Senate, in the Church, or at the Bar. He might have mixed with them in his sports, in his studies, and perhaps in his debating society; and any associations which he had thus formed had been useful to him at the time, and might be a source of satisfaction to him in after life.”

    Out of the students who matriculated in 1840, 65% were sons of professionals (34% were Anglican ministers). After graduation, 87% became professionals (59% as Anglican clergy). Out of the students who matriculated in 1870, 59% were sons of professionals (25% were Anglican ministers). After graduation, 87% became professionals (42% as Anglican clergy).

    M. C. Curthoys and H. S. Jones argue that the rise of organized sport was one of the most remarkable and distinctive features of the history of the universities of Oxford and Cambridge in the late 19th and early 20th centuries. It was carried over from the athleticism prevalent at the public schools such as Eton, Winchester, Shrewsbury, and Harrow.

    All students, regardless of their chosen area of study, were required to spend (at least) their first year preparing for a first-year examination that was heavily focused on classical languages. Science students found this particularly burdensome and supported a separate science degree with Greek language study removed from their required courses. This concept of a Bachelor of Science had been adopted at other European universities (The University of London (UK) had implemented it in 1860) but an 1880 proposal at Oxford to replace the classical requirement with a modern language (like German or French) was unsuccessful. After considerable internal wrangling over the structure of the arts curriculum, in 1886 the “natural science preliminary” was recognized as a qualifying part of the first-year examination.

    At the start of 1914, the university housed about 3,000 undergraduates and about 100 postgraduate students. During the First World War, many undergraduates and fellows joined the armed forces. By 1918 virtually all fellows were in uniform, and the student population in residence was reduced to 12 per cent of the pre-war total. The University Roll of Service records that, in total, 14,792 members of the university served in the war, with 2,716 (18.36%) killed. Not all the members of the university who served in the Great War were on the Allied side; there is a remarkable memorial to members of New College who served in the German armed forces, bearing the inscription, ‘In memory of the men of this college who coming from a foreign land entered into the inheritance of this place and returning fought and died for their country in the war 1914–1918’. During the war years the university buildings became hospitals, cadet schools and military training camps.

    Reforms

    Two parliamentary commissions in 1852 issued recommendations for Oxford and Cambridge. Archibald Campbell Tait, former headmaster of Rugby School, was a key member of the Oxford Commission; he wanted Oxford to follow the German and Scottish model in which the professorship was paramount. The commission’s report envisioned a centralized university run predominantly by professors and faculties, with a much stronger emphasis on research. The professional staff should be strengthened and better paid. For students, restrictions on entry should be dropped, and more opportunities given to poorer families. It called for an enlargement of the curriculum, with honors to be awarded in many new fields. Undergraduate scholarships should be open to all Britons. Graduate fellowships should be opened up to all members of the university. It recommended that fellows be released from an obligation for ordination. Students were to be allowed to save money by boarding in the city, instead of in a college.

    The system of separate honor schools for different subjects began in 1802, with Mathematics and Literae Humaniores. Schools of “Natural Sciences” and “Law, and Modern History” were added in 1853. By 1872, the last of these had split into “Jurisprudence” and “Modern History”. Theology became the sixth honor school. In addition to these B.A. Honors degrees, the postgraduate Bachelor of Civil Law (B.C.L.) was, and still is, offered.

    The mid-19th century saw the impact of the Oxford Movement (1833–1845), led among others by the future Cardinal John Henry Newman. The influence of the reformed model of German universities reached Oxford via key scholars such as Edward Bouverie Pusey, Benjamin Jowett and Max Müller.

    Administrative reforms during the 19th century included the replacement of oral examinations with written entrance tests, greater tolerance for religious dissent, and the establishment of four women’s colleges. Privy Council decisions in the 20th century (e.g. the abolition of compulsory daily worship, dissociation of the Regius Professorship of Hebrew from clerical status, diversion of colleges’ theological bequests to other purposes) loosened the link with traditional belief and practice. Furthermore, although the university’s emphasis had historically been on classical knowledge, its curriculum expanded during the 19th century to include scientific and medical studies. Knowledge of Ancient Greek was required for admission until 1920, and Latin until 1960.

    The University of Oxford began to award doctorates for research in the first third of the 20th century. The first Oxford D.Phil. in mathematics was awarded in 1921.

    The mid-20th century saw many distinguished continental scholars, displaced by Nazism and communism, relocating to Oxford.

    The list of distinguished scholars at the University of Oxford is long and includes many who have made major contributions to politics, the sciences, medicine, and literature. As of October 2020, 72 Nobel laureates and more than 50 world leaders have been affiliated with the University of Oxford.

    To be a member of the university, all students, and most academic staff, must also be a member of a college or hall. There are thirty-nine colleges of the University of Oxford (including Reuben College, planned to admit students in 2021) and six permanent private halls (PPHs), each controlling its membership and with its own internal structure and activities. Not all colleges offer all courses, but they generally cover a broad range of subjects.

    The colleges are:

    All Souls College
    Balliol College
    Brasenose College
    Christ Church
    Corpus Christi College
    Exeter College
    Green Templeton College
    Harris Manchester College
    Hertford College
    Jesus College
    Keble College
    Kellogg College
    Lady Margaret Hall
    Linacre College
    Lincoln College
    Magdalen College
    Mansfield College
    Merton College
    New College
    Nuffield College
    Oriel College
    Pembroke College
    The Queen’s College
    Reuben College
    St Anne’s College
    St Antony’s College
    St Catherine’s College
    St Cross College
    St Edmund Hall
    St Hilda’s College
    St Hugh’s College
    St John’s College
    St Peter’s College
    Somerville College
    Trinity College
    University College
    Wadham College
    Wolfson College
    Worcester College

    The permanent private halls were founded by different Christian denominations. One difference between a college and a PPH is that whereas colleges are governed by the fellows of the college, the governance of a PPH resides, at least in part, with the corresponding Christian denomination. The six current PPHs are:

    Blackfriars
    Campion Hall
    Regent’s Park College
    St Benet’s Hall
    St Stephen’s House
    Wycliffe Hall

    The PPHs and colleges join as the Conference of Colleges, which represents the common concerns of the several colleges of the university, to discuss matters of shared interest and to act collectively when necessary, such as in dealings with the central university. The Conference of Colleges was established as a recommendation of the Franks Commission in 1965.

    Teaching members of the colleges (i.e., fellows and tutors) are collectively and familiarly known as dons, although the term is rarely used by the university itself. In addition to residential and dining facilities, the colleges provide social, cultural, and recreational activities for their members. Colleges have responsibility for admitting undergraduates and organizing their tuition; for graduates, this responsibility falls upon the departments. There is no common title for the heads of colleges: the titles used include Warden, Provost, Principal, President, Rector, Master and Dean.

    Oxford is regularly ranked within the top 5 universities in the world and is currently ranked first in the world in the Times Higher Education World University Rankings, as well as in Forbes’s World University Rankings. It held the number one position in The Times Good University Guide for eleven consecutive years, and the medical school has also maintained first place in the “Clinical, Pre-Clinical & Health” table of The Times Higher Education World University Rankings for the past seven consecutive years. In 2021, it ranked sixth among the universities around the world by SCImago Institutions Rankings. The Times Higher Education has also recognised Oxford as one of the world’s “six super brands” on its World Reputation Rankings, along with The University of California-Berkeley, The University of Cambridge (UK), Harvard University, The Massachusetts Institute of Technology, and Stanford University. The university is fifth worldwide on the US News ranking. Its Saïd Business School came 13th in the world in The Financial Times Global MBA Ranking.
    Oxford was ranked ninth in the world in 2015 by The Nature Index, which measures the largest contributors to papers published in 82 leading journals. It is ranked fifth best university worldwide and first in Britain for forming CEOs according to The Professional Ranking World Universities, and first in the UK for the quality of its graduates as chosen by the recruiters of the UK’s major companies.

    In the 2018 Complete University Guide, all 38 subjects offered by Oxford ranked within the top 10 nationally, meaning Oxford was one of only two multi-faculty universities (along with Cambridge) in the UK to have 100% of their subjects in the top 10. Computer Science, Medicine, Philosophy, Politics and Psychology were ranked first in the UK by the guide.

    According to The QS World University Rankings by Subject, the University of Oxford also ranks as number one in the world for four Humanities disciplines: English Language and Literature, Modern Languages, Geography, and History. It also ranks second globally for Anthropology, Archaeology, Law, Medicine, Politics & International Studies, and Psychology.

     
  • richardmitnick 11:22 am on July 10, 2022 Permalink | Reply
    Tags: , "Do you see new physics in my CMB?", "ΛCDM": Lamda Cold Dark Matter Accerated Expansion of The universe, , , Can You See Dark Matter and Dark Energy?, cosmic birefringence, , Dark Energy, , , ,   

    From astrobites : “Do you see new physics in my CMB?” 


    From astrobites

    Jul 9, 2022
    Kayla Kornoelje

    Title: New physics from the polarised light of the cosmic microwave background
    Authors: Eiichiro Komatsu
    First Author’s Institution: Max-Planck-Institut für Astrophysik, Karl-Schwarzschild Str. 1, 85741 Garching, Germany
    Status: Submitted to arXiv [28 Feb 2022]

    Astronomers have painted an extraordinary picture of our Universe with the standard cosmological model, ΛCDM.

    The only problem is that astronomers don’t exactly know what ΛCDM really is. What is Dark Energy and Dark Matter? What is the physics behind Inflation? The answers to these fundamental questions in cosmology could be hidden right inside your T.V.

    ___________________________________________________________________

    Dark Matter Background
    Fritz Zwicky discovered Dark Matter in the 1930s while observing the motions of galaxies in the Coma Cluster. Vera Rubin, a woman in STEM denied the Nobel Prize, did most of the work on Dark Matter some 30 years later.

    Fritz Zwicky.
    Coma cluster via NASA/ESA Hubble, the original example of Dark Matter discovered during observations by Fritz Zwicky and confirmed 30 years later by Vera Rubin.
In modern times, it was astronomer Fritz Zwicky, in the 1930s, who made the first observations of what we now call dark matter. His 1933 observations of the Coma Cluster of galaxies seemed to indicate it had a mass 500 times greater than that previously calculated by Edwin Hubble. Furthermore, this extra mass seemed to be completely invisible. Although Zwicky’s observations were initially met with much skepticism, they were later confirmed by other groups of astronomers.

Thirty years later, astronomer Vera Rubin provided a huge piece of evidence for the existence of dark matter. She discovered that the outskirts of galaxies rotate just as fast as their inner regions, whereas, if only the visible matter were present, the outer stars should orbit more slowly, the way distant planets orbit the Sun more slowly than nearby ones. The only way to explain these flat rotation curves is if the visible galaxy is only the center of some much larger structure, like the label at the center of a vinyl LP, whose unseen mass keeps the rotation speed consistent from center to edge.

    Vera Rubin, following Zwicky, postulated that the missing structure in galaxies is dark matter. Her ideas were met with much resistance from the astronomical community, but her observations have been confirmed and are seen today as pivotal proof of the existence of dark matter.
    Astronomer Vera Rubin at the Lowell Observatory in 1965, worked on Dark Matter (The Carnegie Institution for Science).

    Vera Rubin, with Department of Terrestrial Magnetism (DTM) image tube spectrograph attached to the Kitt Peak 84-inch telescope, 1970.

Vera Rubin measuring spectra, worked on Dark Matter (Emilio Segre Visual Archives AIP SPL).
    Dark Matter Research

    Super Cryogenic Dark Matter Search from DOE’s SLAC National Accelerator Laboratory (US) at Stanford University (US) at SNOLAB (Vale Inco Mine, Sudbury, Canada).

    LBNL LZ Dark Matter Experiment (US) xenon detector at Sanford Underground Research Facility(US) Credit: Matt Kapust.

Lambda Cold Dark Matter Accelerated Expansion of the Universe (scinotions.com: the-cosmic-inflation-suggests-the-existence-of-parallel-universes). Credit: Alex Mittelmann.

    DAMA at Gran Sasso uses sodium iodide housed in copper to hunt for dark matter LNGS-INFN.

    Yale HAYSTAC axion dark matter experiment at Yale’s Wright Lab.

    DEAP Dark Matter detector, The DEAP-3600, suspended in the SNOLAB (CA) deep in Sudbury’s Creighton Mine.

    The LBNL LZ Dark Matter Experiment (US) Dark Matter project at SURF, Lead, SD, USA.

    DAMA-LIBRA Dark Matter experiment at the Italian National Institute for Nuclear Physics’ (INFN’s) Gran Sasso National Laboratories (LNGS) located in the Abruzzo region of central Italy.

    DARWIN Dark Matter experiment. A design study for a next-generation, multi-ton dark matter detector in Europe at The University of Zurich [Universität Zürich](CH).

    PandaX II Dark Matter experiment at Jin-ping Underground Laboratory (CJPL) in Sichuan, China.

Inside the Axion Dark Matter eXperiment, U Washington (US). Credit: Mark Stone, U. of Washington.
    ___________________________________________________________________
    Cosmic Inflation Theory

In physical cosmology, cosmic inflation, or cosmological inflation, is a theory of exponential expansion of space in the early universe. The inflationary epoch lasted from 10^−36 seconds after the conjectured Big Bang singularity to some time between 10^−33 and 10^−32 seconds after the singularity. Following the inflationary period, the universe continued to expand, but at a slower rate. The acceleration of this expansion due to dark energy began after the universe was already over 7.7 billion years old (5.4 billion years ago).

Inflation theory was developed in the late 1970s and early 1980s, with notable contributions by several theoretical physicists, including Alexei Starobinsky at the Landau Institute for Theoretical Physics, Alan Guth at Cornell University, and Andrei Linde at the Lebedev Physical Institute. Alexei Starobinsky, Alan Guth, and Andrei Linde won the 2014 Kavli Prize “for pioneering the theory of cosmic inflation.” The theory explains the origin of the large-scale structure of the cosmos: quantum fluctuations in the microscopic inflationary region, magnified to cosmic size, become the seeds for the growth of structure in the Universe. Many physicists also believe that inflation explains why the universe appears to be the same in all directions (isotropic), why the cosmic microwave background radiation is distributed evenly, why the universe is flat, and why no magnetic monopoles have been observed.

The detailed particle physics mechanism responsible for inflation is unknown. The basic inflationary paradigm is accepted by most physicists, as a number of inflation model predictions have been confirmed by observation; however, a substantial minority of scientists dissent from this position. The hypothetical field thought to be responsible for inflation is called the inflaton.

    In 2002 three of the original architects of the theory were recognized for their major contributions; physicists Alan Guth of M.I.T., Andrei Linde of Stanford, and Paul Steinhardt of Princeton shared the prestigious Dirac Prize “for development of the concept of inflation in cosmology”. In 2012 Guth and Linde were awarded the Breakthrough Prize in Fundamental Physics for their invention and development of inflationary cosmology.

    4
    Alan Guth, from M.I.T., who first proposed Cosmic Inflation.

Lambda Cold Dark Matter Accelerated Expansion of the Universe (scinotions.com: the-cosmic-inflation-suggests-the-existence-of-parallel-universes). Credit: Alex Mittelmann.

    Alan Guth’s notes:
    Alan Guth’s original notes on inflation.
    ___________________________________________________________________

    The cosmic microwave background (CMB) is leftover radiation from the Big Bang.

    It’s some of the oldest light in the Universe, and yes, you can see that light in T.V. static!

    The CMB is rich with data that carries profound information about cosmology just waiting to be understood, but most important for our discussion today are the properties of the CMB’s polarization.

    Can You See Dark Matter and Dark Energy?

    3
Figure 1: An illustration of cosmic birefringence. The left and right images are representations of the CMB before (left) and after (right) photons begin to travel towards us. Notice that the CMB photon’s polarization is rotated by an angle β, which represents the rotation due to cosmic birefringence. This changes the polarization pattern (black lines in the image) of the CMB. Figure 3 in the paper.

Let’s start with the first question: what is the nature of dark matter and dark energy? When the CMB was formed, around 380,000 years after the Big Bang, the Universe was hot, dense, and filled with electrons. As photons from the CMB made their long journey towards us, they scattered off of these electrons. From these scattering interactions at the appropriately named surface of last scattering, CMB photons became linearly polarized at some specific angle, and some astronomers are on the hunt for a rotation of this initial polarization angle, called cosmic birefringence. This is much like the birefringence of a crystal, which rotates the polarization of light passing through it relative to its initial orientation. The biggest difference is that the CMB’s photons would be rotated by an energy field rather than a crystal. Some astronomers theorize that this energy field could be related to dark matter and dark energy, so a detection of this cosmic birefringence could tell us a lot about the ‘dark side’ of cosmology. Not only would a detection rule out Einstein’s cosmological constant as the origin of dark energy, but it would also tell us about the physics behind it. And since cosmic birefringence isn’t predicted by the standard ΛCDM cosmological model, it would also provide evidence for entirely new physics!
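    To make the rotation concrete, here is a minimal sketch (standard Stokes-parameter algebra, not the paper’s analysis code; sign conventions vary between references) of what a birefringence rotation by an angle β does to linearly polarized light:

```python
import numpy as np

# A rotation of the polarization plane by beta mixes the Stokes
# parameters Q and U through twice the angle, because polarization
# is a spin-2 quantity. Sign conventions differ between references.
def rotate_polarization(Q, U, beta):
    Q_rot = Q * np.cos(2 * beta) - U * np.sin(2 * beta)
    U_rot = Q * np.sin(2 * beta) + U * np.cos(2 * beta)
    return Q_rot, U_rot

beta = np.deg2rad(0.30)                 # the size of the hint discussed below
Q, U = 1.0, 0.0                         # a purely Q-polarized signal
print(rotate_polarization(Q, U, beta))  # a tiny U component leaks in
```

    Because Q and U mix through 2β, even a fraction-of-a-degree rotation leaks a small U-like component into an initially pure-Q pattern, and that leakage is the kind of statistical signature birefringence searches look for.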

Through the analysis of Planck polarization data, the author of today’s paper has found a tantalizing hint of cosmic birefringence. Using the latest reprocessing of Planck data, the author found a weak signal of cosmic birefringence corresponding to an angle of β = 0.30°± 0.11°. However, while this is an exciting result, it is not conclusive enough to call this a true detection of cosmic birefringence just yet, owing to limitations in the precision of the measurements of the initial rotation angle, along with other possible systematic effects.

    Can You See Inflation?

So, we haven’t detected cosmic birefringence, and we still don’t fully understand the nature of dark matter and dark energy. But what about inflation? While data from the CMB already provides support for inflation, astronomers are still on the lookout for a key piece of evidence in support of inflation: B-modes. Polarization patterns in the CMB can be decomposed into two types of modes: E-modes, whose polarization is oriented parallel or perpendicular to the direction the pattern varies in, and B-modes, whose polarization is oriented at 45° to it. B-modes matter for inflation because the gravitational waves produced by inflation are expected to be the dominant contributor to primordial B-modes. A detection of these B-modes would not only provide strong evidence for inflation, but, through analysis of their shape and properties, also provide information about the physics behind it. Although these modes haven’t been detected yet either, today’s author has shown, using one potential model of inflation, that their detection may be possible (see Figure 2; a minimal sketch of the E/B decomposition follows the figure).

    3
    Figure 2: Plot of the B-mode power spectrum, which describes the power and properties of B-modes, as a function of multipole, which loosely describes angular size. The main takeaway is that at low multipoles (around 2 – 10), the energy from gravitational waves (blue) and the total contribution of new physics (green) is higher than the background energy (gray). So, with access to low-multipole data from missions such as the upcoming LiteBird satellite mission, detection of the B-modes from inflationary gravitational waves should be possible. Figure 5 in the paper.
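    For intuition about how the E- and B-modes described above are separated in practice, here is a minimal flat-sky sketch (the standard Fourier-space relations; real pipelines work on the curved sky and handle masks and noise with far more care):

```python
import numpy as np

# Flat-sky E/B decomposition: rotate the Fourier modes of the Stokes
# Q and U maps by twice the angle of each wavevector.
def eb_decompose(Q, U, pixel_size=1.0):
    ny, nx = Q.shape
    lx = 2 * np.pi * np.fft.fftfreq(nx, d=pixel_size)
    ly = 2 * np.pi * np.fft.fftfreq(ny, d=pixel_size)
    phi = np.arctan2(ly[:, None], lx[None, :])  # wavevector angle, shape (ny, nx)

    Qk, Uk = np.fft.fft2(Q), np.fft.fft2(U)
    Ek = Qk * np.cos(2 * phi) + Uk * np.sin(2 * phi)
    Bk = -Qk * np.sin(2 * phi) + Uk * np.cos(2 * phi)
    return np.real(np.fft.ifft2(Ek)), np.real(np.fft.ifft2(Bk))

# Sanity check: a map built from pure E-modes should return a B map
# consistent with zero (up to numerical noise).
```

    A pure-E input returning a near-zero B map is exactly the kind of null test used to make sure an apparent B-mode signal is not leakage from the much larger E-mode signal.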

    The CMB in the Future

So, have we seen new physics in the CMB yet? Unfortunately, not quite: detecting cosmic birefringence or B-modes, as you have seen, is no easy task. Even small errors due to contamination, miscalibration, and systematic uncertainties can render these signals undetectable. However, the future looks bright. The noise level of CMB experiments has dropped nearly exponentially with time, and new CMB experiments such as SPT-4, CMB Stage-4, the Simons Observatory, and JAXA’s LiteBIRD satellite are set to come online in the next decade. With new high-precision data on the horizon, and a little innovation, we may start to find the answers to these ambitious questions, so keep an eye out for these new results. Who knows, maybe we’ll find new physics along the way too!

    See the full article here .



    Please help promote STEM in your local schools.


    Stem Education Coalition

    What do we do?

    Astrobites is a daily astrophysical literature journal written by graduate students in astronomy. Our goal is to present one interesting paper per day in a brief format that is accessible to undergraduate students in the physical sciences who are interested in active research.
    Why read Astrobites?

    Reading a technical paper from an unfamiliar subfield is intimidating. It may not be obvious how the techniques used by the researchers really work or what role the new research plays in answering the bigger questions motivating that field, not to mention the obscure jargon! For most people, it takes years for scientific papers to become meaningful.

    Our goal is to solve this problem, one paper at a time. In 5 minutes a day reading Astrobites, you should not only learn about one interesting piece of current work, but also get a peek at the broader picture of research in a new area of astronomy.

     
  • richardmitnick 11:07 am on July 7, 2022 Permalink | Reply
    Tags: "BBN": Big Bang nucleosynthesis, , "Cosmic Web": the large scale structure of the universe., , "Predicting the composition of dark matter", A new analysis by a team of physicists offers an innovative means to predict "cosmological signatures" for models of "dark matter"., , , Dark Energy, , , Dark matter detected only by its gravitational pull on ordinary matter., In this study the normal matter and dark matter and dark energy in a region of the universe are followed through to the present day using the equations of gravity and hydrodynamics and cosmology., , , , This research establishes new ways to find these cosmological signatures in more complex models.   

    From New York University via “phys.org” : “Predicting the composition of dark matter” 

    NYU BLOC

    From New York University

    Via

    “phys.org”

    July 6, 2022

    1
    An artist’s rendition of big bang nucleosynthesis, the early universe period in which protons “p” and neutrons “n” combine to form light elements. The presence of dark matter “χ” changes how much of each element will form. Credit: Cara Giovanetti/New York University.

    A new analysis by a team of physicists offers an innovative means to predict “cosmological signatures” for models of “dark matter”.

    A team of physicists has developed a method for predicting the composition of dark matter—invisible matter detected only by its gravitational pull on ordinary matter and whose discovery has been long sought by scientists.

Its work, which appears in the journal Physical Review Letters, centers on predicting “cosmological signatures” for models of dark matter with a mass between that of the electron and the proton. Previous methods had predicted similar signatures for simpler models of dark matter. This research establishes new ways to find these signatures in the more complex models that experiments continue to search for, the paper’s authors note.

    “Experiments that search for dark matter are not the only way to learn more about this mysterious type of matter,” says Cara Giovanetti, a Ph.D. student in New York University’s Department of Physics and the lead author of the paper.


    Predicting the composition of dark matter.
This visualization of a computer simulation showcases the ‘cosmic web’: the large-scale structure of the universe. Each bright knot is an entire galaxy, while the purple filaments show where material exists between the galaxies. To the human eye, only the galaxies would be visible; this visualization lets us see the strands of material connecting the galaxies and forming the cosmic web. It is based on a scientific simulation of the growth of structure in the universe: the normal matter, dark matter, and dark energy in a region of the universe are followed from very early times through to the present day using the equations of gravity, hydrodynamics, and cosmology. The normal matter has been clipped to show only the densest regions, which are the galaxies, and is shown in white. The dark matter is shown in purple. The simulation volume is a cube with a side length of 134 megaparsecs (437 million light-years). Credit: Hubblesite; Visualization: Frank Summers, Space Telescope Science Institute; Simulation: Martin White and Lars Hernquist, Harvard University.

    “Precision measurements of different parameters of the universe—for example, the amount of helium in the universe, or the temperatures of different particles in the early universe—can also teach us a lot about dark matter,” adds Giovanetti, outlining the method described in the Physical Review Letters paper.

In the research, conducted with Hongwan Liu, an NYU postdoctoral fellow, Joshua Ruderman, an associate professor in NYU’s Department of Physics, and Princeton physicist Mariangela Lisanti, Giovanetti and her co-authors focused on big bang nucleosynthesis (BBN), the process by which light elements, such as helium, hydrogen, and lithium, were created. The presence of invisible dark matter affects how much of each of these elements will form. Also vital to these phenomena is the cosmic microwave background (CMB): the electromagnetic radiation, released when electrons and protons first combined into neutral atoms, that remains from the universe’s infancy.

    The team sought a means to spot the presence of a specific category of dark matter—that with a mass between that of the electron and the proton—by creating models that took into account both BBN and CMB.

    “Such dark matter can modify the abundances of certain elements produced in the early universe and leave an imprint in the cosmic microwave background by modifying how quickly the universe expands,” Giovanetti explains.
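    To see how an extra component speeds up the expansion during BBN, here is a minimal Friedmann-equation sketch (illustrative only, with a stand-in extra energy density; this is not the authors’ BBN code, and the numbers are round textbook values):

```python
import numpy as np

# Radiation-dominated expansion rate: H^2 = (8*pi*G/3) * rho, with
# rho_rad = (pi^2/30) * g_star * T^4 in natural units (hbar = c = k_B = 1).
G = 6.708e-39  # Newton's constant in GeV^-2

def hubble_rate(T_MeV, g_star=10.75, extra_frac=0.0):
    """Expansion rate H in GeV at plasma temperature T (in MeV).

    g_star     : effective relativistic degrees of freedom
                 (10.75 is the Standard Model value near T ~ 1 MeV)
    extra_frac : energy density of a hypothetical light dark-matter
                 species, as a fraction of the radiation density
    """
    T = T_MeV * 1e-3  # MeV -> GeV
    rho = (np.pi**2 / 30.0) * g_star * T**4 * (1.0 + extra_frac)
    return np.sqrt(8.0 * np.pi * G / 3.0 * rho)

# A 10% extra energy density raises H by ~5% (the square root of 1.1),
# leaving less time for nuclear reactions and shifting the abundances.
print(hubble_rate(1.0), hubble_rate(1.0, extra_frac=0.1))
```

    Nothing here computes element abundances, but it shows the lever arm: any new species that contributes energy density at MeV temperatures changes the expansion rate, and BBN yields are exquisitely sensitive to that rate.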

    In its research, the team made predictions of cosmological signatures linked to the presence of certain forms of dark matter. These signatures are the result of dark matter changing the temperatures of different particles or altering how fast the universe expands.

    Their results showed that dark matter that is too light will lead to different amounts of light elements than what astrophysical observations see.

    “Lighter forms of dark matter might make the universe expand so fast that these elements don’t have a chance to form,” says Giovanetti, outlining one scenario.

    “We learn from our analysis that some models of dark matter can’t have a mass that’s too small, otherwise the universe would look different from the one we observe,” she adds.
    __________________________________
    Dark Matter Background
Fritz Zwicky discovered Dark Matter in the 1930s while observing the motion of the Coma Cluster. Vera Rubin, a woman in STEM who was denied the Nobel Prize, did most of the work on Dark Matter some 30 years later.

    Fritz Zwicky.
    Coma cluster via NASA/ESA Hubble, the original example of Dark Matter discovered during observations by Fritz Zwicky and confirmed 30 years later by Vera Rubin.
In modern times, it was astronomer Fritz Zwicky, in the 1930s, who made the first observations of what we now call dark matter. His 1933 observations of the Coma Cluster of galaxies seemed to indicate it had a mass 500 times greater than that previously calculated by Edwin Hubble. Furthermore, this extra mass seemed to be completely invisible. Although Zwicky’s observations were initially met with much skepticism, they were later confirmed by other groups of astronomers.

Thirty years later, astronomer Vera Rubin provided a huge piece of evidence for the existence of dark matter. She discovered that the outskirts of galaxies rotate just as fast as their inner regions, whereas, if only the visible matter were present, the outer stars should orbit more slowly, the way distant planets orbit the Sun more slowly than nearby ones. The only way to explain these flat rotation curves is if the visible galaxy is only the center of some much larger structure, like the label at the center of a vinyl LP, whose unseen mass keeps the rotation speed consistent from center to edge.

    Vera Rubin, following Zwicky, postulated that the missing structure in galaxies is dark matter. Her ideas were met with much resistance from the astronomical community, but her observations have been confirmed and are seen today as pivotal proof of the existence of dark matter.
    Astronomer Vera Rubin at the Lowell Observatory in 1965, worked on Dark Matter (The Carnegie Institution for Science).

    Vera Rubin, with Department of Terrestrial Magnetism (DTM) image tube spectrograph attached to the Kitt Peak 84-inch telescope, 1970.

Vera Rubin measuring spectra, worked on Dark Matter (Emilio Segre Visual Archives AIP SPL).
    Dark Matter Research

    Super Cryogenic Dark Matter Search from DOE’s SLAC National Accelerator Laboratory (US) at Stanford University (US) at SNOLAB (Vale Inco Mine, Sudbury, Canada).

    LBNL LZ Dark Matter Experiment (US) xenon detector at Sanford Underground Research Facility(US) Credit: Matt Kapust.

Lambda Cold Dark Matter Accelerated Expansion of the Universe (scinotions.com: the-cosmic-inflation-suggests-the-existence-of-parallel-universes). Credit: Alex Mittelmann.

    DAMA at Gran Sasso uses sodium iodide housed in copper to hunt for dark matter LNGS-INFN.

    Yale HAYSTAC axion dark matter experiment at Yale’s Wright Lab.

    DEAP Dark Matter detector, The DEAP-3600, suspended in the SNOLAB (CA) deep in Sudbury’s Creighton Mine.

    The LBNL LZ Dark Matter Experiment (US) Dark Matter project at SURF, Lead, SD, USA.

    DAMA-LIBRA Dark Matter experiment at the Italian National Institute for Nuclear Physics’ (INFN’s) Gran Sasso National Laboratories (LNGS) located in the Abruzzo region of central Italy.

    DARWIN Dark Matter experiment. A design study for a next-generation, multi-ton dark matter detector in Europe at The University of Zurich [Universität Zürich](CH).

    PandaX II Dark Matter experiment at Jin-ping Underground Laboratory (CJPL) in Sichuan, China.

Inside the Axion Dark Matter eXperiment, U Washington (US). Credit: Mark Stone, U. of Washington.
    __________________________________

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    NYU Campus

    More than 175 years ago, Albert Gallatin, the distinguished statesman who served as secretary of the treasury under Presidents Thomas Jefferson and James Madison, declared his intention to establish “in this immense and fast-growing city … a system of rational and practical education fitting for all and graciously opened to all.” Founded in 1831, New York University is now one of the largest private universities in the United States. Of the more than 3,000 colleges and universities in America, New York University is one of only 60 member institutions of the distinguished Association of American Universities.

    New York University is a private research university in New York City. Chartered in 1831 by the New York State Legislature, NYU was founded by a group of New Yorkers led by then Secretary of the Treasury Albert Gallatin.

In 1832, the initial non-denominational all-male institution began its first classes near City Hall based on a curriculum focused on a secular education. The university, in 1833, then moved and has maintained its main campus in Greenwich Village surrounding Washington Square Park. Since then, the university has added an engineering school in Brooklyn’s MetroTech Center and graduate schools throughout Manhattan. NYU has become the largest private university in the United States by enrollment, with a total of 51,848 enrolled students, including 26,733 undergraduate students and 25,115 graduate students, in 2019. NYU also receives the most applications of any private institution in the United States, and admission is considered highly selective.

NYU is organized into 10 undergraduate schools, including the College of Arts & Science, the Gallatin School, the Steinhardt School, the Stern School of Business, the Tandon School of Engineering, and the Tisch School of the Arts. NYU’s 15 graduate schools include the Grossman School of Medicine, the School of Law, the Wagner Graduate School of Public Service, the School of Professional Studies, the Rory Meyers School of Nursing, and the Silver School of Social Work. The university’s internal academic centers include the Courant Institute of Mathematical Sciences, the Center for Data Science, the Center for Neural Science, the Clive Davis Institute, the Institute for the Study of the Ancient World, the Institute of Fine Arts, and the NYU Langone Health System. NYU is a global university with degree-granting campuses at NYU Abu Dhabi and NYU Shanghai, and academic centers in Accra, Berlin, Buenos Aires, Florence, London, Los Angeles, Madrid, Paris, Prague, Sydney, Tel Aviv, and Washington, D.C.

    Past and present faculty and alumni include 38 Nobel Laureates, 8 Turing Award winners, 5 Fields Medalists, 31 MacArthur Fellows, 26 Pulitzer Prize winners, 3 heads of state, a U.S. Supreme Court justice, 5 U.S. governors, 4 mayors of New York City, 12 U.S. Senators, 58 members of the U.S. House of Representatives, two Federal Reserve Chairmen, 38 Academy Award winners, 30 Emmy Award winners, 25 Tony Award winners, 12 Grammy Award winners, 17 billionaires, and seven Olympic medalists. The university has also produced six Rhodes Scholars, three Marshall Scholars, 29 Schwarzman Scholars, and one Mitchell Scholar.

    Research

    NYU is classified among “R1: Doctoral Universities – Very high research activity” and research expenditures totaled $917.7 million in 2017. The university was the founding institution of the American Chemical Society. The NYU Grossman School of Medicine received $305 million in external research funding from the National Institutes of Health in 2014. NYU was granted 90 patents in 2014, the 19th most of any institution in the world. NYU owns the fastest supercomputer in New York City. As of 2016, NYU hardware researchers and their collaborators enjoy the largest outside funding level for hardware security of any institution in the United States, including grants from the National Science Foundation, the Office of Naval Research, the Defense Advanced Research Projects Agency, the United States Army Research Laboratory, the Air Force Research Laboratory, the Semiconductor Research Corporation, and companies including Twitter, Boeing, Microsoft, and Google.

    In 2019, four NYU Arts & Science departments ranked in Top 10 of Shanghai Academic Rankings of World Universities by Academic Subjects (Economics, Politics, Psychology, and Sociology).

     
  • richardmitnick 3:23 pm on March 24, 2022 Permalink | Reply
    Tags: "What Can We Learn About the Universe from Just One Galaxy?", , , , CAMELS: Cosmology and Astrophysics with MachinE Learning Simulations, , Dark Energy, , , Omega matter: a cosmological parameter that describes how much dark matter is in the universe, ,   

    From The New Yorker: “What Can We Learn About the Universe from Just One Galaxy?” 


    Rea Irvin

    From The New Yorker

    March 23, 2022
    Rivka Galchen

    1
    Illustration by Nicholas Konrad /The New Yorker

    In new research, begun by an undergraduate, William Blake’s phrase “to see a world in a grain of sand” is suddenly relevant to astrophysics.

Imagine if you could look at a snowflake at the South Pole and determine the size and the climate of all of Antarctica. Or study a randomly selected tree in the Amazon rain forest and, from that one tree—be it rare or common, narrow or wide, young or old—deduce characteristics of the forest as a whole. Or, what if, by looking at one galaxy among the hundred billion or so in the observable universe, one could say something substantial about the universe as a whole? A recent paper, whose lead authors include a cosmologist, a galaxy-formation expert, and an undergraduate named Jupiter (who did the initial work), suggests that this may be the case. The result at first seemed “crazy” to the paper’s authors. Now that they have discussed their work with other astrophysicists and run various “sanity checks,” trying to find errors in their methods, the results are beginning to seem pretty clear. Francisco Villaescusa-Navarro, one of the lead authors of the work, said, “It does look like galaxies somehow retain a memory of the entire universe.”

    The research began as a sort of homework exercise. Jupiter Ding, while a freshman at Princeton University, wrote to the department of astrophysics, hoping to get involved in research. He mentioned that he had some experience with machine learning, a form of artificial intelligence that is adept at picking out patterns in very large data sets. Villaescusa-Navarro, an astrophysicist focused on cosmology, had an idea for what the student might work on. Villaescusa-Navarro had long wanted to look into whether machine learning could be used to help find relationships between galaxies and the universe. “I was thinking, What if you could look at only a thousand galaxies and from that learn properties about the entire universe? I wondered, What is the smallest number we could look at? What if you looked at only one hundred? I thought, O.K., we’ll start with one galaxy.”

    He had no expectation that one galaxy would provide much. But he thought that it would be a good way for Ding to practice using machine learning on a database known as CAMELS (Cosmology and Astrophysics with MachinE Learning Simulations). Shy Genel, an astrophysicist focussed on galaxy formation, who is another lead author on the paper, explained CAMELS this way: “We start with a description of reality shortly after the Big Bang. At that point, the universe is mostly hydrogen gas, and some helium and dark matter. And then, using what we know of the laws of physics, our best guess, we then run the cosmic history for roughly fourteen billion years.” Cosmological simulations have been around for about forty years, but they are increasingly sophisticated—and fast. CAMELS contains some four thousand simulated universes. Working with simulated universes, as opposed to our own, lets researchers ask questions that the gaps in our observational data preclude us from answering. They also let researchers play with different parameters, like the proportions of dark matter and hydrogen gas, to test their impact.

    Ding did the work on CAMELS from his dorm room, on his laptop. He wrote programs to work with the CAMELS data, then sent them to one of the university’s computing clusters, a collection of computers with far more power than his MacBook Air. That computing cluster contained the CAMELS data. Ding’s model trained itself by taking a set of simulated universes and looking at the galaxies within them. Once trained, the model would then be shown a sample galaxy and asked to predict features of the universe from which it was sampled.
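    The workflow can be sketched in a few lines (a toy stand-in: the synthetic data, feature set, and network size below are hypothetical, not the actual CAMELS galaxy properties or the paper’s architecture):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 10_000

# Label: the Omega-matter value of the (toy) universe each galaxy lives in.
omega_m = rng.uniform(0.1, 0.5, n)

# Features: stand-ins for per-galaxy properties; one is weakly
# correlated with Omega_m, the other is pure noise.
features = np.column_stack([
    omega_m + 0.05 * rng.normal(size=n),
    rng.normal(size=n),
])

X_tr, X_te, y_tr, y_te = train_test_split(features, omega_m, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X_tr, y_tr)

rel_err = np.mean(np.abs(model.predict(X_te) - y_te) / y_te)
print(f"mean relative error: {rel_err:.2%}")
```

    The surprising part of the real result is not the workflow, which is routine, but that genuine simulated galaxies carry enough information for it to succeed from a single example.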

    Ding is very humble about his contribution to the research, but he knows far more about astrophysics than even an exceptional first-year student typically does. Ding, a middle child with two sisters, grew up in State College, Pennsylvania. In high school, he took a series of college-level astronomy courses at Penn State and worked on a couple of research projects that involved machine learning. “My dad was really interested in astronomy as a high schooler,” Ding told me. “He went another direction, though.” His father is a professor of marketing at Penn State’s business school.

    Artificial intelligence is an umbrella concept for various disciplines, including machine learning. A famous early machine-learning task was to get a computer to recognize an image of a cat. This is something that a human can do easily, but, for a computer, there are no simple parameters that define the visual concept of a cat. Machine learning is now used for detecting patterns or relationships that are nearly impossible for humans to see, in part because the data is often in many dimensions. The programmer remains the captain, telling the computer what to learn, and deciding what input it’s trained on. But the computer adapts, iteratively, as it learns, and in that way becomes the author of its own algorithms. It was machine learning, for example, that discovered, through analyzing language patterns, the alleged main authors of the posts by “Q” (the supposed high-ranking government official who sparked the QAnon conspiracy theory). It was also able to identify which of Q’s posts appeared to be written by Paul Furber, a South African software developer, and by Ron Watkins, the son of the former owner of 8chan. Machine-learning programs have also been applied in health care, using data to predict which patients are most at risk of falling. Compared with the intuition of doctors, the machine-learning-based assessments reduced falls by about forty per cent, an enormous margin of improvement for a medical intervention.

    Machine learning has catapulted astrophysics research forward, too. Villaescusa-Navarro said, “As a community, we have been dealing with super-hard problems for many, many years. Problems that the smartest people in the field have been working on for decades. And from one day to the next, these problems are getting solved with machine learning.” Even generating a single simulated universe used to take a very long time. You gave a computer some initial conditions and then had to wait while it worked out what those conditions would produce some fourteen billion years down the line. It took less than fourteen billion years, of course, but there was no way to build up a large database of simulated universes in a timely way. Machine-learning advances have sped up these simulations, making a project like CAMELS possible. An even more ambitious project, Learning the Universe, will use machine learning to create simulated universes millions of times faster than CAMELS can; it will then use what’s called simulation-based inference—along with real observational data from telescopes—to determine which starting parameters lead to a universe that most closely resembles our own.
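    Simulation-based inference can be illustrated with its simplest version, rejection sampling (the forward model and numbers here are invented for illustration; projects like Learning the Universe use neural density estimators rather than anything this crude):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_summary(omega_m):
    """Stand-in for a full universe simulation reduced to one summary statistic."""
    return 2.0 * omega_m + 0.05 * rng.normal()

observed = 0.63                              # pretend telescope-derived summary
candidates = rng.uniform(0.1, 0.5, 100_000)  # proposed Omega_m values
summaries = np.array([simulate_summary(om) for om in candidates])

# Keep only the parameter values whose simulated universes look like ours.
accepted = candidates[np.abs(summaries - observed) < 0.01]
print("posterior mean Omega_m:", accepted.mean())  # ~0.315 in this toy setup
```

    The principle is the same at scale: simulate many universes, compare each to observations through summary statistics, and keep the starting parameters that reproduce what we see.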

    Ding told me that one of the reasons he chose astronomy has been the proximity he feels to breakthroughs in the field, even as an undergraduate. “For example, I’m in a cosmology class right now, and when my professor talks about dark matter, she talks about it as something ‘a good friend of mine, Vera Rubin, put on the map,’ ” he said. “And dark energy was discovered by a team at Harvard University about twenty years ago, and I did a summer program there. So here I am, learning about this stuff pretty much in the places where these things were happening.” Ding’s research produced something profoundly unexpected. His model used a single galaxy in a simulated universe to pretty accurately say something about that universe. The specific characteristic it was able to predict is called Omega matter, which relates to the density of a universe. Its value was accurately predicted to within ten per cent.

    Ding was initially unsure how meaningful his results were and was curious to hear Villaescusa-Navarro’s perspective. He was more than skeptical. “My first thought was, This is completely crazy, I don’t believe it, this is the work of an undergraduate, there must be a mistake,” Villaescusa-Navarro said. “I asked him to run the program in a few other ways to see if he would still come up with similar results.” The results held.

    Villaescusa-Navarro began to do his own calculations. His doubt focussed foremost on the way that the machine learning itself worked. “One thing about neural networks is that they are amazing at finding correlations, but they also can pick up on numerical artifacts,” he said. Was a parameter wrong? Was there a bug in the code? Villaescusa-Navarro wrote his own program, to ask the same sort of question that he had assigned to Ding: What could information about one galaxy say about the universe in which it resided? Even when asked by a different program, written from scratch, the answer was still coming out the same. This suggested that the result was catching something real.

    “But we couldn’t just publish that,” Villaescusa-Navarro said. “We needed to try and understand why this might be working.” It was working for small galaxies, and for large galaxies, and for galaxies with very different features; only for a small handful of eccentric galaxies did the work not hold. Why?

    The recipe for making a universe is to start with a lot of hydrogen, a little helium, some dark matter, and some dark energy. Dark matter has mass, like the matter we’re familiar with, but it doesn’t reflect or emit light, so we can’t see it. We also can’t see dark energy, but we can think of it as working in the opposite direction of gravity. The universe’s matter, via gravity, pushes it to contract; the universe’s dark energy pushes it to expand.

    Omega matter is a cosmological parameter that describes how much dark matter is in the universe. Along with other parameters, it controls how much the universe is expanding. The higher its value, the slower the universe would grow. One of the research group’s hypotheses to explain their results is, roughly, that the amount of dark matter in a universe has a very strong effect on a galaxy’s properties—a stronger effect than other characteristics. For this reason, even one galaxy could have something to say about the Omega matter of its parent universe, since Omega matter is correlated to what can be pictured as the density of matter that makes a galaxy clump together.
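    In textbook notation (this is the standard definition, not anything taken from the paper), Omega matter measures the matter density against the critical density, and the Friedmann equation ties it to the expansion history:

```latex
\[
  \Omega_m \equiv \frac{\rho_m}{\rho_{\mathrm{crit}}},
  \qquad
  \rho_{\mathrm{crit}} = \frac{3H_0^2}{8\pi G},
\]
% and, for a flat universe, as the scale factor a grows:
\[
  \left(\frac{H(a)}{H_0}\right)^2 = \Omega_m\, a^{-3} + \Omega_\Lambda .
\]
```

    A larger Ω_m boosts the decelerating a⁻³ matter term relative to dark energy’s Ω_Λ, which is the sense in which a higher value makes the universe grow more slowly.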

    In December, Genel, an expert on galaxy formation, presented the preliminary results of the paper to the galaxy-formation group he belongs to at The Flatiron Institute Center for Computational Astrophysics. “This was really one of the most fun things that happened to me,” he said. He told me that any galaxy-formation expert could have no other first reaction than to think, This is impossible. A galaxy is, on the scale of a universe, about as substantial as a grain of sand is, relative to the size of the Earth. To think that all by itself it can say something so substantial is, to the majority of the astrophysics community, extremely surprising, in a way analogous to the discovery that each of our cells—from a fingernail cell to a liver cell—contains coding describing our entire body. (Though maybe to the poetic way of thinking—to see the world in a grain of sand—the surprise is that this is surprising.)

    Rachel Somerville, an astrophysicist who was at the talk, recalled the initial reaction as “skepticism, but respectful skepticism, since we knew these were serious researchers.” She remembers being surprised that the approach had even been tried, since it seemed so tremendously unlikely that it would work. Since that time, the researchers have shared their coding and results with experts in the field; the results are taken to be credible and compelling, though the hesitations that the authors themselves have about the results remain.

    The results are not “robust”—for now, the computer can make valid predictions only on the type of universe that it has been trained on. Even within CAMELS, there are two varieties of simulations, and, if the machine is trained on one variety, it cannot be used to make predictions for galaxies in the other variety. That also means that the results cannot be used to make predictions about the universe we live in—at least not yet.

    Villaescusa-Navarro told me, “It is a very beautiful result—I know I shouldn’t say that about my own work.” But what is beauty to an astrophysicist? “It’s about an unexpected connection between two things that seemed not to be related. In this case, cosmology and galaxy formation. It’s about something hidden being revealed.”

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 2:10 pm on March 12, 2022 Permalink | Reply
    Tags: "Ask Ethan-Did our Universe really arise from nothing?", , , , , Dark Energy, , , ,   

    From Ethan Siegel “Ask Ethan-Did our Universe really arise from nothing?”

    Mar 11, 2022

    The Big Bang was hot, dense, uniform, and filled with matter and energy. Before that? There was nothing. Here’s how that’s possible.

    The more curious we get about the great cosmic unknowns, the more unanswered questions our investigations of the Universe will reveal. Inquiring about the nature of anything — where it is, where it came from, and how it came to be — will inevitably lead you to the same great mysteries: about the ultimate nature and origin of the Universe and everything in it. Yet, no matter how far back we go, those same lingering questions always seem to remain: at some point, the entities that are our “starting point” didn’t necessarily exist, so how did they come to be? Eventually, you wind up at the ultimate question: how did something arise from nothing? As many recent questioners, including Luke Martin, Buzz Morse, Russell Blalack, John Heiss and many others have written:

    “Okay, you surely receive this question endlessly, but I shall ask nonetheless: How did something (the universe/big bang) come from nothing?”

    This is maybe one of the biggest questions of all, because it’s basically asking not only where did everything come from, but how did all of it arise in the first place. Here’s as far as science has gotten us, at least, so far.

    2
    A detailed look at the Universe reveals that it’s made of matter and not antimatter, that dark matter and dark energy are required, and that we don’t know the origin of any of these mysteries. However, the fluctuations in the CMB, the formation and correlations between large-scale structure, and modern observations of gravitational lensing all point towards the same picture. (Credit: Chris Blake and Sam Moorfield)

    Today, when we look out at the Universe, the full suite of observations we’ve collected, even with the known uncertainties taken into account, all point towards a remarkably consistent picture. Our Universe is made of matter (rather than antimatter), obeys the same laws of physics everywhere and at all times, and began — at least, as we know it — with a hot Big Bang some 13.8 billion years ago. It’s governed by General Relativity, it’s expanding and cooling and gravitating, and it’s dominated by dark energy (68%) and dark matter (27%), with normal matter, neutrinos, and radiation making up the rest.

    Today, of course, it’s full of galaxies, stars, planets, heavy elements, and in at least one location, intelligent and technologically advanced life. These structures weren’t always there, but rather arose as a result of cosmic evolution. In a remarkable scientific leap, 20th century scientists were able to reconstruct the timeline for how our Universe went from a mostly uniform Universe, devoid of complex structure and consisting exclusively of hydrogen and helium, to the structure-rich Universe we observe today.

    5
    Supernova remnants (L) and planetary nebulae (R) are both ways for stars to recycle their burned, heavy elements back into the interstellar medium and the next generation of stars and planets. These processes are two ways that the heavy elements necessary for chemical-based life to arise are generated, and it’s difficult (but not impossible) to imagine a Universe without them still giving rise to intelligent observers. (Credits: ESO/VLT/FORS Instrument & Team (L); NASA/ESA/C.R. O’Dell (Vanderbilt) and D. Thompson (LBT) (R))

    If we start from today, we can step backwards in time, and ask where any individual structure or component of that structure came from. For each answer we get, we can then ask, “ok, but where did that come from and how did that arise,” going back until we’re forced to answer, “we don’t know, at least not yet.” Then, at last, we can contemplate what we have, and ask, “how did that arise, and is there a way that it could have arisen from nothing?”

    So, let’s get started.

    The life we have today comes from complex molecules, which must have arisen from the atoms of the periodic table: the raw ingredients that make up all the normal matter we have in the Universe today. The Universe wasn’t born with these atoms; instead, they required multiple generations of stars living-and-dying, with the products of their nuclear reactions recycled into future generations of stars. Without this, planets and complex chemistry would be an impossibility.

    In order to form modern stars and galaxies, we need:

    gravitation to pull small galaxies and star clusters into one another, creating large galaxies and triggering new waves of star formation,
    which required pre-existing collections of mass, created from gravitational growth,
    which require dark matter haloes to form early on, preventing star forming episodes from ejecting that matter back into the intergalactic medium,
    which require the right balance of normal matter, dark matter, and radiation to give rise to the cosmic microwave background, the light elements formed in the hot Big Bang, and the abundances/patterns we see in them,
    which required initial seed fluctuations — density imperfections — to gravitationally grow into these structures,
    which require some way of creating these imperfections, along with some way of creating dark matter and creating the initial amounts of normal matter.

Three key ingredients, then, are required in the early stages of the hot Big Bang to give rise to the Universe as we observe it today: the initial seed fluctuations, the dark matter, and the initial amounts of normal matter. Assuming that we also require the laws of physics, spacetime itself, and matter/energy itself to exist, we probably want to include those as necessary ingredients that must somehow arise.

    So, in short, when we ask whether we can get a Universe from nothing or not, these are the novel, hitherto unexplained entities that we need to somehow arise.

    5
    An equally-symmetric collection of matter and antimatter (of X and Y, and anti-X and anti-Y) bosons could, with the right GUT properties, give rise to the matter/antimatter asymmetry we find in our Universe today. However, we assume that there is a physical, rather than a divine, explanation for the matter-antimatter asymmetry we observe today, but we do not yet know for certain. (Credit: E. Siegel/Beyond the Galaxy.)

    To get more matter than antimatter, we have to extrapolate back into the very early Universe, to a time when our physics is very much uncertain. The laws of physics as we know them are in some sense symmetric between matter and antimatter: every reaction we’ve ever created or observed can only create-or-destroy matter and antimatter in equal amounts. But the Universe we had, despite beginning in an incredibly hot and dense state where matter and antimatter could both be created in abundant, copious amounts, must have had some way to create a matter/antimatter asymmetry where none existed initially.

    There are many ways to accomplish this. Although we don’t know which scenario actually took place in our young Universe, all ways of doing so involve the following three elements:

    an out-of-equilibrium set of conditions, which naturally arise in an expanding, cooling Universe,
    a way to generate baryon-number-violating interactions, which the Standard Model allows through sphaleron interactions (and beyond-the-Standard-Model scenarios allow in additional ways),
and a way to generate enough C and CP violation to create a matter/antimatter asymmetry in great enough amounts.

The Standard Model contains all three of these ingredients, but not in sufficient amounts to explain the asymmetry we observe.

    If you consider a matter/antimatter symmetric Universe as “a Universe with nothing,” then it’s almost guaranteed that the Universe generated something from nothing, even though we aren’t quite certain exactly how it happened.

    6
    The overdense regions from the early Universe grow and grow over time, but are limited in their growth by both the initial small sizes of the overdensities and also by the presence of radiation that’s still energetic, which prevents structure from growing any faster. It takes tens-to-hundreds of millions of years to form the first stars; clumps of matter exist long before that, however. (Credit: Aaron Smith/TACC/UT-Austin)

    Similarly, there are lots of viable ways to generate dark matter. We know — from extensive testing and searching — that whatever dark matter is, it can’t be composed of any particles that are present in the Standard Model. Whatever its true nature is, it requires new physics beyond what’s presently known. But there are many ways it could have been created, including:

    from being thermally created in the hot, early Universe, and then failing to completely annihilate away, remaining stable thereafter (like the lightest supersymmetric or Kaluza-Klein particle),
    or from a phase transition that spontaneously occurred as the Universe expanded and cooled, ripping massive particles out of the quantum vacuum (e.g., the axion),
    as a new form of a neutrino, which itself can either mix with the known neutrinos (i.e., a sterile neutrino), or as a heavy right-handed neutrino that exists in addition to the conventional neutrinos,
    or as a purely gravitational phenomenon that gives rise to an ultramassive particle (e.g., a WIMPzilla).

    Why is there dark matter, today, when the remainder of the Universe appears to work just fine early on without it? There must have been some way to generate this “thing” where there wasn’t such a thing beforehand, but all of these scenarios require energy. So, then, where did all that energy come from?

    6
    The Universe as we observe it today began with the hot Big Bang: an early hot, dense, uniform, expanding state with specific initial conditions. But if we want to understand where the Big Bang comes from, we must not assume it’s the absolute beginning, and we must not assume that anything we can’t predict doesn’t have a mechanism to explain it. (Credit: C.-A. Faucher-Giguere, A. Lidz, and L. Hernquist, Science, 2008)

    Perhaps, according to cosmic inflation — our leading theory of the Universe’s pre-Big Bang origins — it really did come from nothing. This requires a little bit of an explanation, and is what is most frequently meant by “a Universe from nothing.” (Including, by the way, as it was used in the title of the book of the same name.)

    When you imagine the earliest stages of the hot Big Bang, you have to think of something incredibly hot, dense, high-energy, and almost perfectly uniform. When we ask, “how did this arise,” we typically have two options.

We can go the Lady Gaga route, and just claim it must’ve been “born this way.” The Universe was born with these properties, which we call initial conditions, and there’s no further explanation. As theoretical physicists, we call this approach “giving up.”
    Or we can do what theoretical physicists do best: try and concoct a theoretical mechanism that could explain the initial conditions, teasing out concrete predictions that differ from the standard, prevailing theory’s predictions and then going out seeking to measure the critical parameters.

    Cosmic inflation came about as a result of taking that second approach, and it literally changed our conception of how our Universe came to be.

    ___________________________________________________________________
    Inflation

    4
    Alan Guth, from M.I.T., who first proposed cosmic inflation

Lambda Cold Dark Matter Accelerated Expansion of the Universe (scinotions.com: the-cosmic-inflation-suggests-the-existence-of-parallel-universes). Credit: Alex Mittelmann.

    Alan Guth’s notes:
    Alan Guth’s original notes on inflation
    ________________________________________________________________
    7
Exponential expansion, which takes place during inflation, is so powerful because it is relentless. With every ~10^-35 seconds (or so) that passes, any particular region of space doubles in each direction (an eightfold increase in volume), causing any particles or radiation to dilute and causing any curvature to quickly become indistinguishable from flat. (Credit: E. Siegel (L); Ned Wright’s Cosmology Tutorial (R))

    Instead of extrapolating “hot and dense” back to an infinitely hot, infinitely dense singularity, inflation basically says, “perhaps the hot Big Bang was preceded by a period where an extremely large energy density was present in the fabric of space itself, causing the Universe to expand at a relentless (inflationary) rate, and then when inflation ended, that energy got transferred into matter-and-antimatter-and-radiation, creating what we see as the hot Big Bang: the aftermath of inflation.”
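    The arithmetic behind “relentless” is worth seeing once (the doubling time comes from the illustrative figure caption above, and the duration is a commonly assumed order of magnitude, not a measurement):

```python
# If every ~1e-35 s interval doubles each linear dimension of space,
# then after a time t the linear stretch factor is 2 ** (t / 1e-35).
doubling_time = 1e-35   # seconds (illustrative, from the caption above)
duration = 1e-33        # seconds of inflation (assumed order of magnitude)

n_doublings = duration / doubling_time   # 100 doublings
stretch = 2.0 ** n_doublings             # ~1.3e30 per dimension
print(f"{n_doublings:.0f} doublings -> stretch ~ {stretch:.1e} per dimension")
```

    A stretch factor of roughly 10^30 in about 10^-33 seconds is why inflation so efficiently dilutes relics and flattens curvature.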

    In gory detail, this not only creates a Universe with the same temperature everywhere, spatial flatness, and no leftover relics from a hypothetical grand unified epoch, but also predicts a particular type and spectrum of seed (density) fluctuations, which we then went out and saw. From just empty space itself — although it is empty space filled with a large amount of field energy — a natural process has created the entire observable Universe, rich in structure, as we see it today.

    That’s the big idea of getting a Universe from nothing, but it isn’t satisfying to everyone.

    8
    Even in empty space, the quantum fluctuations inherent to the field nature of the fundamental interactions cannot be removed. As the Universe inflates in the earliest stages, those fluctuations get stretched across the Universe, giving rise to seed density and temperature fluctuations that can still be observed today. (Credit: E. Siegel/Beyond the Galaxy)

    To a large fraction of people, a Universe where space-and-time still exist, along with the laws of physics, the fundamental constants, and some non-zero field energy inherent to the fabric of space itself, is very much divorced from the idea of nothingness. We can imagine, after all, a location outside of space; a moment beyond the confines of time; a set of conditions that have no physical reality to constrain them. And those imaginings — if we define these physical realities as things we need to eliminate to obtain true nothingness — are certainly valid, at least philosophically.

    But that’s the difference between philosophical nothingness and a more physical definition of nothingness. As I wrote back in 2018, there are four scientific definitions of nothing, and they’re all valid, depending on your context:

    A time when your “thing” of interest didn’t exist,
    Empty, physical space,
    Empty spacetime in the lowest-energy state possible, and
    Whatever you’re left with when you take away the entire Universe and the laws governing it.

We can definitely say we obtained “a Universe from nothing” if we use the first two definitions; we cannot if we use the third; and quite unfortunately, we don’t know enough to say what happens if we use the fourth. Without a physical theory to describe what happens outside of the Universe and beyond the realm of physical laws, the concept of true nothingness is physically ill-defined.

    9
    Fluctuations in spacetime itself at the quantum scale get stretched across the Universe during inflation, giving rise to imperfections in both density and gravitational waves. While inflating space can rightfully be called ‘nothing’ in many regards, not everyone agrees. (Credit: E. Siegel; ESA/Planck and the DOE/NASA/NSF Interagency Task Force on CMB research)

    In the context of physics, it’s impossible to make sense of an idea of absolute nothingness. What does it mean to be outside of space and time, and how can space and time sensibly, predictably emerge from a state of non-existence? How can spacetime emerge at a particular location or time, when there’s no definition of location or time without it? Where do the rules governing quanta — the fields and particles both — arise from?

This line of thought even assumes that space, time, and the laws of physics themselves weren’t eternal, when in fact they may be. Any theorems or proofs to the contrary rely on assumptions whose validity is not soundly established under the conditions in which we’d seek to apply them. If you accept a physical definition of “nothing,” then yes, the Universe as we know it very much appears to have arisen from nothing. But if you leave physical constraints behind, then all certainty about our ultimate cosmic origins disappears.

    Unfortunately for us all, inflation, by its very nature, erases any information that might be imprinted from a pre-existing state on our observable Universe. Despite the limitless nature of our imaginations, we can only draw conclusions about matters for which tests involving our physical reality can be constructed. No matter how logically sound any other consideration may be, including a notion of absolute nothingness, it’s merely a construct of our minds.

See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

“Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan.

     
  • richardmitnick 4:45 pm on January 21, 2022 Permalink | Reply
    Tags: "Any Single Galaxy Reveals the Composition of an Entire Universe", A group of scientists may have stumbled upon a radical new way to do cosmology., , Cosmic density of matter, , Dark Energy, , , , , , The Cosmology and Astrophysics with Machine Learning Simulations (CAMELS) project, Theoretical Astrophysics   

    From Quanta Magazine (US): “Any Single Galaxy Reveals the Composition of an Entire Universe” 

    From Quanta Magazine (US)

    January 20, 2022
    Charlie Wood

    Credit: Kaze Wong / CAMELS collaboration.


    In the CAMELS project, coders simulated thousands of universes with diverse compositions, arrayed at the end of this video as cubes.

    A group of scientists may have stumbled upon a radical new way to do cosmology.

    Cosmologists usually determine the composition of the universe by observing as much of it as possible. But these researchers have found that a machine learning algorithm can scrutinize a single simulated galaxy and predict the overall makeup of the digital universe in which it exists — a feat analogous to analyzing a random grain of sand under a microscope and working out the mass of Eurasia. The machines appear to have found a pattern that might someday allow astronomers to draw sweeping conclusions about the real cosmos merely by studying its elemental building blocks.

    “This is a completely different idea,” said Francisco Villaescusa-Navarro, a theoretical astrophysicist at The Flatiron Institute Center for Computational Astrophysics (US) and lead author of the work. “Instead of measuring these millions of galaxies, you can just take one. It’s really amazing that this works.”

It wasn’t supposed to. The improbable find grew out of an exercise Villaescusa-Navarro gave to Jupiter Ding, a Princeton University (US) undergraduate: Build a neural network that, knowing a galaxy’s properties, can estimate a couple of cosmological attributes. The assignment was meant merely to familiarize Ding with machine learning. Then they noticed that the computer was nailing the overall density of matter.

    “I thought the student made a mistake,” Villaescusa-Navarro said. “It was a little bit hard for me to believe, to be honest.”

The results of the investigation that followed appeared on January 6 in a paper submitted for publication. The researchers analyzed 2,000 digital universes generated by The Cosmology and Astrophysics with Machine Learning Simulations (CAMELS) project [The Astrophysical Journal]. These universes had a range of compositions, containing between 10% and 50% matter, with the rest made up of Dark Energy, which drives the universe to expand faster and faster. (Our actual cosmos consists of roughly one-third Dark Matter and visible matter and two-thirds Dark Energy.) As the simulations ran, Dark Matter and visible matter swirled together into galaxies. The simulations also included rough treatments of complicated events like supernovas and jets that erupt from supermassive black holes.

    Ding’s neural network studied nearly 1 million simulated galaxies within these diverse digital universes. From its godlike perspective, it knew each galaxy’s size, composition, mass, and more than a dozen other characteristics. It sought to relate this list of numbers to the density of matter in the parent universe.
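
To make the setup concrete, here is a minimal sketch of this kind of regression in Python, assuming scikit-learn is available. The catalog, the 17 property columns, the network size, and the training settings are all illustrative stand-ins, not the CAMELS pipeline itself.

```python
# Minimal sketch: regress a universe's matter density from per-galaxy
# properties. Synthetic stand-in data; not the CAMELS catalogs.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# One row per simulated galaxy, ~17 properties (size, stellar mass,
# rotation speed, ...); the target is the matter density Omega_m of the
# parent universe, drawn from the 10%-50% range used in CAMELS.
n_galaxies, n_properties = 100_000, 17
X = rng.normal(size=(n_galaxies, n_properties))
omega_m = rng.uniform(0.10, 0.50, size=n_galaxies)

X_train, X_test, y_train, y_test = train_test_split(
    X, omega_m, test_size=0.2, random_state=0
)

net = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=50, random_state=0)
net.fit(X_train, y_train)

# On the real catalogs the reported per-galaxy accuracy is about 10%; on
# this featureless random stand-in there is nothing to learn.
pred = net.predict(X_test)
print("mean relative error:", np.mean(np.abs(pred - y_test) / y_test))
```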

    It succeeded. When tested on thousands of fresh galaxies from dozens of universes it hadn’t previously examined, the neural network was able to predict the cosmic density of matter to within 10%. “It doesn’t matter which galaxy you are considering,” Villaescusa-Navarro said. “No one imagined this would be possible.”

    “That one galaxy can get [the density to] 10% or so, that was very surprising to me,” said Volker Springel, an expert in simulating galaxy formation at The MPG Institute for Astrophysics [MPG Institut für Astrophysik](DE) who was not involved in the research.

    The algorithm’s performance astonished researchers because galaxies are inherently chaotic objects. Some form all in one go, and others grow by eating their neighbors. Giant galaxies tend to hold onto their matter, while supernovas and black holes in dwarf galaxies might eject most of their visible matter. Still, every galaxy had somehow managed to keep close tabs on the overall density of matter in its universe.

    One interpretation is “that the universe and/or galaxies are in some ways much simpler than we had imagined,” said Pauline Barmby, an astronomer at The Western University (CA). Another is that the simulations have unrecognized flaws.

    The team spent half a year trying to understand how the neural network had gotten so wise. They checked to make sure the algorithm hadn’t just found some way to infer the density from the coding of the simulation rather than the galaxies themselves. “Neural networks are very powerful, but they are super lazy,” Villaescusa-Navarro said.

    Through a series of experiments, the researchers got a sense of how the algorithm was divining the cosmic density. By repeatedly retraining the network while systematically obscuring different galactic properties, they zeroed in on the attributes that mattered most.
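
One way to read “systematically obscuring different galactic properties” in code is a leave-one-feature-out ablation: retrain with each property withheld and rank properties by how much the validation error grows. The sketch below uses the same kind of synthetic stand-in data as above; the study’s actual procedure may differ in detail.

```python
# Leave-one-out ablation sketch over synthetic stand-in data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(20_000, 17))          # stand-in galaxy properties
y = rng.uniform(0.10, 0.50, size=20_000)   # stand-in Omega_m targets

def error_without(column):
    """Validation error of a small network trained with one property removed."""
    X_drop = np.delete(X, column, axis=1)
    X_tr, X_va, y_tr, y_va = train_test_split(X_drop, y, random_state=0)
    net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=30, random_state=0)
    net.fit(X_tr, y_tr)
    return float(np.mean(np.abs(net.predict(X_va) - y_va) / y_va))

# Properties whose removal hurts most carry the most information; in the
# study, a rotation-speed-like property ranked near the top.
errors = {col: error_without(col) for col in range(X.shape[1])}
ranking = sorted(errors, key=errors.get, reverse=True)
print("most informative property columns first:", ranking[:5])
```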

    Near the top of the list was a property related to a galaxy’s rotation speed, which corresponds to how much matter (dark and otherwise) sits in the galaxy’s central zone. The finding matches physical intuition, according to Springel. In a universe overflowing with Dark Matter, you’d expect galaxies to grow heavier and spin faster. So you might guess that rotation speed would correlate with the cosmic matter density, although that relationship alone is too rough to have much predictive power.

    The neural network found a much more precise and complicated relationship between 17 or so galactic properties and the matter density. This relationship persists despite galactic mergers, stellar explosions and black hole eruptions. “Once you get to more than [two properties], you can’t plot it and squint at it by eye and see the trend, but a neural network can,” said Shaun Hotchkiss, a cosmologist at The University of Auckland (NZ).

    While the algorithm’s success raises the question of how many of the universe’s traits might be extracted from a thorough study of just one galaxy, cosmologists suspect that real-world applications will be limited. When Villaescusa-Navarro’s group tested their neural network on a different property — cosmic clumpiness — it found no pattern. And Springel expects that other cosmological attributes, such as the accelerating expansion of the universe due to Dark Energy, have little effect on individual galaxies.

The research does suggest that, in theory, an exhaustive study of the Milky Way and perhaps a few other nearby galaxies could enable an exquisitely precise measurement of our universe’s matter density. Such an experiment, Villaescusa-Navarro said, could give clues to other numbers of cosmic import, such as the sum of the unknown masses of the universe’s three types of neutrinos.

Neutrinos. Credit: Universe Today.

    But in practice, the technique would have to first overcome a major weakness. The CAMELS collaboration cooks up its universes using two different recipes. A neural network trained on one of the recipes makes bad density guesses when given galaxies that were baked according to the other. The cross-prediction failure indicates that the neural network is finding solutions unique to the rules of each recipe. It certainly wouldn’t know what to do with the Milky Way, a galaxy shaped by the real laws of physics. Before applying the technique to the real world, researchers will need to either make the simulations more realistic or adopt more general machine learning techniques — a tall order.
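
The cross-recipe check described above has a simple shape in code: fit on universes from one simulation suite, then evaluate on the other. A hedged sketch under the same stand-in assumptions follows; with random data the failure mode cannot be reproduced, only the protocol.

```python
# Cross-suite evaluation sketch: train on one "recipe," test on another.
import numpy as np
from sklearn.neural_network import MLPRegressor

def stand_in_suite(seed, n=20_000, d=17):
    """Fake galaxy catalog standing in for one simulation recipe."""
    r = np.random.default_rng(seed)
    return r.normal(size=(n, d)), r.uniform(0.10, 0.50, size=n)

X_a, y_a = stand_in_suite(0)   # recipe A
X_b, y_b = stand_in_suite(1)   # recipe B

net = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=50, random_state=0)
net.fit(X_a, y_a)

# A network that has only learned recipe-specific shortcuts will do far
# worse on the suite it never saw, which is the failure the article reports.
for name, (X, y) in [("same recipe", (X_a, y_a)), ("other recipe", (X_b, y_b))]:
    err = np.mean(np.abs(net.predict(X) - y) / y)
    print(f"{name}: mean relative error = {err:.2f}")
```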

    “I’m very impressed by the possibilities, but one needs to avoid being too carried away,” Springel said.

    But Villaescusa-Navarro takes heart that the neural network was able to find patterns in the messy galaxies of two independent simulations. The digital discovery raises the odds that the real cosmos may be hiding a similar link between the large and the small.

    “It’s a very beautiful thing,” he said. “It establishes a connection between the whole universe and a single galaxy.”

    _____________________________________________________________________________________
    The Dark Energy Survey

    Dark Energy Camera [DECam] built at DOE’s Fermi National Accelerator Laboratory(US).

The Víctor M. Blanco 4-meter Telescope at NOIRLab National Optical Astronomy Observatory (US) Cerro Tololo Inter-American Observatory (CL), which houses the Dark Energy Camera (DECam), at Cerro Tololo, Chile, at an altitude of 7,200 feet.

NSF NOIRLab NOAO (US) Cerro Tololo Inter-American Observatory (CL), approximately 80 km to the east of La Serena, Chile, at an altitude of 2,200 meters.

    Timeline of the Inflationary Universe WMAP.

The Dark Energy Survey is an international, collaborative effort to map hundreds of millions of galaxies, detect thousands of supernovae, and find patterns of cosmic structure that will reveal the nature of the mysterious dark energy that is accelerating the expansion of our Universe. The Dark Energy Survey began searching the Southern skies on August 31, 2013.

    According to Albert Einstein’s Theory of General Relativity, gravity should lead to a slowing of the cosmic expansion. Yet, in 1998, two teams of astronomers studying distant supernovae made the remarkable discovery that the expansion of the universe is speeding up.

    Saul Perlmutter (center) [The Supernova Cosmology Project] shared the 2006 Shaw Prize in Astronomy, the 2011 Nobel Prize in Physics, and the 2015 Breakthrough Prize in Fundamental Physics with Brian P. Schmidt (right) and Adam Riess (left) [The High-z Supernova Search Team] for providing evidence that the expansion of the universe is accelerating.

    To explain cosmic acceleration, cosmologists are faced with two possibilities: either 70% of the universe exists in an exotic form, now called Dark Energy, that exhibits a gravitational force opposite to the attractive gravity of ordinary matter, or General Relativity must be replaced by a new theory of gravity on cosmic scales.
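
For reference, the standard quantitative statement behind this dichotomy (textbook general relativity, nothing specific to the survey) is the acceleration equation for the cosmic scale factor a(t):

\[
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right),
\]

so a component with sufficiently negative pressure, with equation of state w \equiv p/(\rho c^{2}) < -1/3, makes \ddot{a} > 0; a cosmological constant has w = -1.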

    The Dark Energy Survey is designed to probe the origin of the accelerating universe and help uncover the nature of Dark Energy by measuring the 14-billion-year history of cosmic expansion with high precision. More than 400 scientists from over 25 institutions in the United States, Spain, the United Kingdom, Brazil, Germany, Switzerland, and Australia are working on the project. The collaboration built and is using an extremely sensitive 570-Megapixel digital camera, DECam, mounted on the Blanco 4-meter telescope at Cerro Tololo Inter-American Observatory, high in the Chilean Andes, to carry out the project.

Over six years (2013-2019), the Dark Energy Survey collaboration used 758 nights of observation to carry out a deep, wide-area survey to record information from 300 million galaxies that are billions of light-years from Earth. The survey imaged 5000 square degrees of the southern sky in five optical filters to obtain detailed information about each galaxy. A fraction of the survey time was used to observe smaller patches of sky roughly once a week to discover and study thousands of supernovae and other astrophysical transients.
    _____________________________________________________________________________________

Fritz Zwicky discovered Dark Matter in the 1930s when observing the movement of the Coma Cluster. Vera Rubin, a woman in STEM denied the Nobel Prize, did most of the work on Dark Matter some 30 years later.

    Fritz Zwicky.
    Coma cluster via NASA/ESA Hubble, the original example of Dark Matter discovered during observations by Fritz Zwicky and confirmed 30 years later by Vera Rubin.
In modern times, it was astronomer Fritz Zwicky, in the 1930s, who made the first observations of what we now call dark matter. His 1933 observations of the Coma Cluster of galaxies seemed to indicate that it had a mass 500 times greater than that previously calculated by Edwin Hubble. Furthermore, this extra mass seemed to be completely invisible. Although Zwicky’s observations were initially met with much skepticism, they were later confirmed by other groups of astronomers.

Thirty years later, astronomer Vera Rubin provided a huge piece of evidence for the existence of dark matter. She discovered that the outskirts of galaxies rotate just as fast as their inner regions, whereas, like planets orbiting the Sun, the outer stars should move more slowly. The only way to explain this is if each visible galaxy is merely the central part of some much larger structure, as if it were only the label on a much larger spinning disk, causing the galaxy to have a consistent rotation speed from center to edge.

    Vera Rubin, following Zwicky, postulated that the missing structure in galaxies is dark matter. Her ideas were met with much resistance from the astronomical community, but her observations have been confirmed and are seen today as pivotal proof of the existence of dark matter.
Astronomer Vera Rubin, who did pivotal work on Dark Matter, at the Lowell Observatory in 1965. Credit: The Carnegie Institution for Science.

    Vera Rubin, with Department of Terrestrial Magnetism (DTM) image tube spectrograph attached to the Kitt Peak 84-inch telescope, 1970.

Vera Rubin measuring spectra. Credit: Emilio Segrè Visual Archives/AIP/SPL.
    Dark Matter Research

    LBNL LZ Dark Matter Experiment (US) xenon detector at Sanford Underground Research Facility(US) Credit: Matt Kapust.

Lambda Cold Dark Matter: accelerated expansion of the universe. Credit: Alex Mittelmann.

    DAMA at Gran Sasso uses sodium iodide housed in copper to hunt for dark matter LNGS-INFN.

    Yale HAYSTAC axion dark matter experiment at Yale’s Wright Lab.

    DEAP Dark Matter detector, The DEAP-3600, suspended in the SNOLAB (CA) deep in Sudbury’s Creighton Mine.

The LBNL LZ Dark Matter Experiment (US) at SURF, Lead, SD, USA.

    DAMA-LIBRA Dark Matter experiment at the Italian National Institute for Nuclear Physics’ (INFN’s) Gran Sasso National Laboratories (LNGS) located in the Abruzzo region of central Italy.

    DARWIN Dark Matter experiment. A design study for a next-generation, multi-ton dark matter detector in Europe at The University of Zurich [Universität Zürich](CH).

    PandaX II Dark Matter experiment at Jin-ping Underground Laboratory (CJPL) in Sichuan, China.

Inside the Axion Dark Matter eXperiment (ADMX) at the University of Washington (US). Credit: Mark Stone/University of Washington.
    ______________________________________________________

See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine (US) is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 9:28 pm on December 6, 2021 Permalink | Reply
    Tags: "The uneven universe", An uneven distribution of the mass in the universe may have an effect on the speed of cosmic expansion., , , , , Dark Energy, , In reality the universe is not uniform: in some places there are stars and planets and in others there is just a void., It is almost always assumed in cosmological calculations that there is a even distribution of matter in the universe., One of the most important applications of the theory is in describing the cosmic expansion of the universe since the Big Bang., , The scientists starting point was the Mori-Zwanzig formalism-a method for describing systems consisting of a large number of particles with a small number of measurands., The speed of this expansion is determined by the amount of energy in the universe., The University of Münster [Westfälische Wilhelms-Universität Münster] (DE)   

    From The University of Münster [Westfälische Wilhelms-Universität Münster] (DE): “The uneven universe” 


    From The University of Münster [Westfälische Wilhelms-Universität Münster](DE)

3 December 2021

    Communication and Public Relations
    Schlossplatz 2
    48149 Münster
    Tel: +49 251 83-22232
    Fax: +49 251 83-22258
    communication@uni-muenster.de

    Timeline of the Inflationary Universe NASA WMAP (US)

    Researchers study cosmic expansion using methods from many-body physics / Article published in Physical Review Letters.

It is almost always assumed in cosmological calculations that there is an even distribution of matter in the universe. This is because the calculations would be much too complicated if the position of every single star were to be included. In reality the universe is not uniform: in some places there are stars and planets, and in others there is just a void. Physicists Michael te Vrugt and Prof. Raphael Wittkowski from the Institute of Theoretical Physics and the Center for Soft Nanoscience (SoN) at the University of Münster have, together with physicist Dr. Sabine Hossenfelder from The Frankfurt Institute for Advanced Studies (DE), developed a new model for this problem. Their starting point was the Mori-Zwanzig formalism, a method for describing systems consisting of a large number of particles with a small number of measurands. The results of the study have now been published in the journal Physical Review Letters.

Background: The theory of general relativity developed by Albert Einstein is one of the most successful theories in modern physics. Two of the last five Nobel Prizes for Physics had associations with it: in 2017 for the measurement of gravitational waves, and in 2020 for the discovery of a supermassive black hole at the centre of the Milky Way. One of the most important applications of the theory is in describing the cosmic expansion of the universe since the Big Bang. The speed of this expansion is determined by the amount of energy in the universe. In addition to the visible matter, it is above all the dark matter and dark energy which play a role here – at least, according to the Lambda-CDM model currently used in cosmology.
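
In the homogeneous idealization the article refers to, the statement that the expansion speed is set by the energy content is the first Friedmann equation (standard form, quoted here for reference, not from the paper):

\[
H^{2} = \left(\frac{\dot{a}}{a}\right)^{2} = \frac{8\pi G}{3}\rho - \frac{kc^{2}}{a^{2}} + \frac{\Lambda c^{2}}{3},
\]

where a is the scale factor, \rho the average density of matter and radiation, k the spatial curvature, and \Lambda the cosmological constant. Replacing \rho by its spatial mean in equations like this is exactly the step the researchers scrutinize.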

Lambda Cold Dark Matter: accelerated expansion of the universe. Credit: Alex Mittelmann.

    “Strictly speaking, it is mathematically wrong to include the mean value of the universe’s energy density in the equations of general relativity”, says Sabine Hossenfelder. The question is now how “bad” this mistake is. Some experts consider it to be irrelevant, others see in it the solution to the enigma of dark energy, whose physical nature is still unknown. An uneven distribution of the mass in the universe may have an effect on the speed of cosmic expansion.

    “The Mori-Zwanzig formalism is already being successfully used in many fields of research, from biophysics to particle physics,” says Raphael Wittkowski, “so it also offered a promising approach to this astrophysical problem.” The team generalised this formalism so that it could be applied to general relativity and, in doing so, derived a model for cosmic expansion while taking into consideration the uneven distribution of matter in the universe.

The model makes a concrete prediction for the effect of these so-called inhomogeneities on the speed of the expansion of the universe. This prediction deviates slightly from that given by the Lambda-CDM model and thus provides an opportunity to test the new model experimentally. “At present, the astronomical data are not precise enough to measure this deviation,” says Michael te Vrugt, “but the great progress made – for example, in the measurement of gravitational waves – gives us reason to hope that this will change. The new variant of the Mori-Zwanzig formalism can also be applied to other astrophysical problems – so the work is relevant not only to cosmology.”
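
For orientation, the Mori-Zwanzig formalism in its standard form projects the full many-body dynamics onto a few relevant variables A(t), at the price of a memory term and a noise-like remainder, yielding the generalized Langevin equation (the textbook structure; the paper's general-relativistic version is more involved):

\[
\frac{d}{dt}A(t) = i\Omega\,A(t) + \int_{0}^{t} K(t-s)\,A(s)\,ds + F(t),
\]

where i\Omega generates the instantaneous part of the evolution, the memory kernel K encodes the influence of the eliminated degrees of freedom, and F(t) is the residual “noise” from the projected-out dynamics.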

See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

The seat of the WWU. Photo: MünsterView/Tronquet.

The University of Münster [Westfälische Wilhelms-Universität Münster] (DE) is a public university located in the city of Münster, North Rhine-Westphalia, in Germany.

With more than 43,000 students and over 120 fields of study in 15 departments, it is Germany’s fifth largest university and one of the foremost centers of German intellectual life. The university offers a wide range of subjects across the sciences, social sciences and the humanities. Several courses are also taught in English, including PhD programmes as well as postgraduate courses in geoinformatics, geospatial technologies and information systems.

    Professors and former students have won ten Leibniz Prizes, the most prestigious as well as the best-funded prize in Europe, and one Fields Medal. The WWU has also been successful in the German government’s Excellence Initiative.

     
  • richardmitnick 1:47 pm on October 8, 2021 Permalink | Reply
    Tags: "Fermilab boasts new Theory Division", Astrophysics Theory, , , , Dark Energy, , , Fermilab experts on perturbative QCD use high-performance computing to tackle the complexity of simulations for experiments at the Large Hadron Collider., Muon g-2 Theory Initiative and the Muon g-2 experiment, , Particle Theory, , , Superconducting Systems,   

    From DOE’s Fermi National Accelerator Laboratory (US) : “Fermilab boasts new Theory Division” 

FNAL Art Image by Angela Gonzales.

From DOE’s Fermi National Accelerator Laboratory (US), an enduring source of strength for the US contribution to scientific research worldwide.

    October 8, 2021

Theoretical physics research at Fermi National Accelerator Laboratory has always sparked new ideas and scientific opportunities, while at the same time supporting the large experimental group that conducts research at Fermilab. In recent years, the Theoretical Physics Department has further strengthened its position worldwide as a hub for the high-energy physics theoretical community. The department has now become Fermilab’s newest division, the Theory Division, which officially launched early this year with strong support from HEP.

    This new division seeks to:

    support strategic theory leadership;
    promote new initiatives, as well as strengthen existing ones;
    and leverage U.S. Department of Energy support through partnerships with universities and more.

    “Creating the Theory Division increases the lab’s abilities to stimulate and develop new pathways to discovery,” said Fermilab Director Nigel Lockyer.

    Led by Marcela Carena and her deputy Patrick Fox, this new division features three departments: Particle Theory, Astrophysics Theory and Quantum Theory. “This structure will help us focus our scientific efforts in each area and will allow for impactful contributions to existing and developing programs for the theory community,” said Carena.

    Particle Theory Department

At the helm of the Particle Theory Department is Andreas Kronfeld. This department studies all aspects of theoretical particle physics, especially those areas inspired by the experimental program—at Fermilab and elsewhere. It coordinates leading national efforts, including the Neutrino Theory Network, and the migration of the lattice gauge theory program to Exascale computing platforms. Experts in lattice quantum chromodynamics, or QCD, support the Muon g-2 Theory Initiative, providing a solid theory foundation for the recently announced results of the Muon g-2 experiment.

    Fermilab particle theorists, working with DOE’s Argonne National Laboratory (US) nuclear theorists, are using machine learning for developing novel event generators to precisely model neutrino-nuclear interactions, and employ lattice QCD to model multi-nucleon interactions; both are important for achieving the science goals of DUNE.

    Fermilab experts on perturbative QCD use high-performance computing to tackle the complexity of simulations for experiments at the Large Hadron Collider. Fermilab theorists are strongly involved in the exploration of physics beyond the Standard Model, through model-building, particle physics phenomenology, and formal aspects of quantum field theory.

    Astrophysics Theory Department

    Astrophysics Theory, led by Dan Hooper, consists of researchers who work at the confluence of astrophysics, cosmology and particle physics. Fermilab’s scientists have played a key role in the development of this exciting field worldwide and continue to be deeply involved in supporting the Fermilab cosmic frontier program.

    Key areas of research include dark matter, dark energy, the cosmic microwave background, large-scale structure, neutrino astronomy and axion astrophysics. A large portion of the department’s research involves numerical cosmological simulations of galaxy formation, large-scale structures and gravitational lensing. The department is developing machine-learning tools to help solve these challenging problems.

    Quantum Theory Department

    Led by Roni Harnik, the Quantum Theory Department has researchers working at the interface of quantum information science and high-energy physics. Fermilab theorists are working to harness the developing power of unique quantum information capabilities to address important physics questions, such as the simulation of QCD processes, dynamics in the early universe, and more generally simulating quantum field theories. Quantum-enhanced capabilities also open new opportunities to explore the universe and test theories of new particles, dark matter, gravitational waves and other new physics.

Scientists in the Quantum Theory Department are developing new algorithms for quantum simulations, and they are proposing novel methods to search for new phenomena using quantum technology, including quantum optics, atomic physics, optomechanical sensors and superconducting systems. The department works in close collaboration with both the Fermilab Superconducting Quantum Materials and Systems Center and the Fermilab Quantum Institute, and also leads a national QuantISED theory consortium.

    Looking ahead

    The new Theory Division also intends to play a strong role in attracting and inspiring the next generation of theorists, training them in a data-rich environment, as well as promoting an inclusive culture that values diversity.

    “The best part about being a Fermilab theorist,” said Marcela Carena, “is working with brilliant junior scientists and sharing their excitement about exploring new ideas.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

Fermi National Accelerator Laboratory (US), located just outside Batavia, Illinois, near Chicago, is a United States Department of Energy national laboratory specializing in high-energy particle physics. Since 2007, Fermilab has been operated by the Fermi Research Alliance, a joint venture of the University of Chicago and the Universities Research Association (URA). Fermilab is a part of the Illinois Technology and Research Corridor.

Fermilab’s Tevatron was a landmark particle accelerator; until the startup in 2008 of the Large Hadron Collider (CH) near Geneva, Switzerland, it was the most powerful particle accelerator in the world, accelerating protons and antiprotons to energies of 980 GeV and producing proton-antiproton collisions with energies of up to 1.96 TeV; it was the first accelerator to reach one “tera-electron-volt” energy. At 3.9 miles (6.3 km), it was the world’s fourth-largest particle accelerator in circumference. One of its most important achievements was the 1995 discovery of the top quark, announced by research teams using the Tevatron’s CDF and DØ detectors. It was shut down in 2011.

    In addition to high-energy collider physics, Fermilab hosts fixed-target and neutrino experiments, such as MicroBooNE (Micro Booster Neutrino Experiment), NOνA (NuMI Off-Axis νe Appearance) and SeaQuest. Completed neutrino experiments include MINOS (Main Injector Neutrino Oscillation Search), MINOS+, MiniBooNE and SciBooNE (SciBar Booster Neutrino Experiment). The MiniBooNE detector was a 40-foot (12 m) diameter sphere containing 800 tons of mineral oil lined with 1,520 phototube detectors. An estimated 1 million neutrino events were recorded each year. SciBooNE sat in the same neutrino beam as MiniBooNE but had fine-grained tracking capabilities. The NOνA experiment uses, and the MINOS experiment used, Fermilab’s NuMI (Neutrinos at the Main Injector) beam, which is an intense beam of neutrinos that travels 455 miles (732 km) through the Earth to the Soudan Mine in Minnesota and the Ash River, Minnesota, site of the NOνA far detector. In 2017, the ICARUS neutrino experiment was moved from CERN to Fermilab.
    In the public realm, Fermilab is home to a native prairie ecosystem restoration project and hosts many cultural events: public science lectures and symposia, classical and contemporary music concerts, folk dancing and arts galleries. The site is open from dawn to dusk to visitors who present valid photo identification.
    Asteroid 11998 Fermilab is named in honor of the laboratory.
    Weston, Illinois, was a community next to Batavia voted out of existence by its village board in 1966 to provide a site for Fermilab.

    The laboratory was founded in 1969 as the National Accelerator Laboratory; it was renamed in honor of Enrico Fermi in 1974. The laboratory’s first director was Robert Rathbun Wilson, under whom the laboratory opened ahead of time and under budget. Many of the sculptures on the site are of his creation. He is the namesake of the site’s high-rise laboratory building, whose unique shape has become the symbol for Fermilab and which is the center of activity on the campus.
    After Wilson stepped down in 1978 to protest the lack of funding for the lab, Leon M. Lederman took on the job. It was under his guidance that the original accelerator was replaced with the Tevatron, an accelerator capable of colliding protons and antiprotons at a combined energy of 1.96 TeV. Lederman stepped down in 1989. The science education center at the site was named in his honor.
    The later directors include:

    John Peoples, 1989 to 1996
    Michael S. Witherell, July 1999 to June 2005
    Piermaria Oddone, July 2005 to July 2013
    Nigel Lockyer, September 2013 to the present

    Fermilab continues to participate in the work at the Large Hadron Collider (LHC); it serves as a Tier 1 site in the Worldwide LHC Computing Grid.

MINERvA at DOE’s Fermi National Accelerator Laboratory (US). Credit: Reidar Hahn.

FNAL’s Don Lincoln.


     
  • richardmitnick 8:25 pm on July 18, 2021 Permalink | Reply
    Tags: "Curiosity and technology drive quest to reveal fundamental secrets of the universe", A very specific particle called a J/psi might provide a clearer picture of what’s going on inside a proton’s gluonic field., , Argonne-driven technology is part of a broad initiative to answer fundamental questions about the birth of matter in the universe and the building blocks that hold it all together., , , , , , Computational Science, , Dark Energy, , , , Developing and fabricating detectors that search for signatures from the early universe or enhance our understanding of the most fundamental of particles., , , Exploring the hearts of protons and neutrons, , , Neutrinoless double beta decay can only happen if the neutrino is its own anti-particle., , , , , , , SLAC National Accelerator Laboratory(US), , ,   

    From DOE’s Argonne National Laboratory (US) : “Curiosity and technology drive quest to reveal fundamental secrets of the universe” 

    Argonne Lab

    From DOE’s Argonne National Laboratory (US)

    July 15, 2021
    John Spizzirri

    Argonne-driven technology is part of a broad initiative to answer fundamental questions about the birth of matter in the universe and the building blocks that hold it all together.

    Imagine the first of our species to lie beneath the glow of an evening sky. An enormous sense of awe, perhaps a little fear, fills them as they wonder at those seemingly infinite points of light and what they might mean. As humans, we evolved the capacity to ask big insightful questions about the world around us and worlds beyond us. We dare, even, to question our own origins.

    “The place of humans in the universe is important to understand,” said physicist and computational scientist Salman Habib. ​“Once you realize that there are billions of galaxies we can detect, each with many billions of stars, you understand the insignificance of being human in some sense. But at the same time, you appreciate being human a lot more.”

    The South Pole Telescope is part of a collaboration between Argonne and a number of national labs and universities to measure the CMB, considered the oldest light in the universe.

    The high altitude and extremely dry conditions of the South Pole keep water vapor from absorbing select light wavelengths.

    With no less a sense of wonder than most of us, Habib and colleagues at the U.S. Department of Energy’s (DOE) Argonne National Laboratory are actively researching these questions through an initiative that investigates the fundamental components of both particle physics and astrophysics.

    The breadth of Argonne’s research in these areas is mind-boggling. It takes us back to the very edge of time itself, to some infinitesimally small portion of a second after the Big Bang when random fluctuations in temperature and density arose, eventually forming the breeding grounds of galaxies and planets.

    It explores the heart of protons and neutrons to understand the most fundamental constructs of the visible universe, particles and energy once free in the early post-Big Bang universe, but later confined forever within a basic atomic structure as that universe began to cool.

    And it addresses slightly newer, more controversial questions about the nature of Dark Matter and Dark Energy, both of which play a dominant role in the makeup and dynamics of the universe but are little understood.
    _____________________________________________________________________________________
    Dark Energy Survey

    Dark Energy Camera [DECam] built at DOE’s Fermi National Accelerator Laboratory(US)

The Víctor M. Blanco 4-meter Telescope at NOIRLab National Optical Astronomy Observatory (US) Cerro Tololo Inter-American Observatory (CL), which houses the Dark Energy Camera (DECam), at Cerro Tololo, Chile, at an altitude of 7,200 feet.

NSF NOIRLab NOAO (US) Cerro Tololo Inter-American Observatory (CL), approximately 80 km to the east of La Serena, Chile, at an altitude of 2,200 meters.

    Timeline of the Inflationary Universe WMAP

    The Dark Energy Survey (DES) is an international, collaborative effort to map hundreds of millions of galaxies, detect thousands of supernovae, and find patterns of cosmic structure that will reveal the nature of the mysterious dark energy that is accelerating the expansion of our Universe. DES began searching the Southern skies on August 31, 2013.

    According to Einstein’s theory of General Relativity, gravity should lead to a slowing of the cosmic expansion. Yet, in 1998, two teams of astronomers studying distant supernovae made the remarkable discovery that the expansion of the universe is speeding up. To explain cosmic acceleration, cosmologists are faced with two possibilities: either 70% of the universe exists in an exotic form, now called dark energy, that exhibits a gravitational force opposite to the attractive gravity of ordinary matter, or General Relativity must be replaced by a new theory of gravity on cosmic scales.

    DES is designed to probe the origin of the accelerating universe and help uncover the nature of dark energy by measuring the 14-billion-year history of cosmic expansion with high precision. More than 400 scientists from over 25 institutions in the United States, Spain, the United Kingdom, Brazil, Germany, Switzerland, and Australia are working on the project. The collaboration built and is using an extremely sensitive 570-Megapixel digital camera, DECam, mounted on the Blanco 4-meter telescope at Cerro Tololo Inter-American Observatory, high in the Chilean Andes, to carry out the project.

Over six years (2013-2019), the DES collaboration used 758 nights of observation to carry out a deep, wide-area survey to record information from 300 million galaxies that are billions of light-years from Earth. The survey imaged 5000 square degrees of the southern sky in five optical filters to obtain detailed information about each galaxy. A fraction of the survey time was used to observe smaller patches of sky roughly once a week to discover and study thousands of supernovae and other astrophysical transients.
    _____________________________________________________________________________________

    “And this world-class research we’re doing could not happen without advances in technology,” said Argonne Associate Laboratory Director Kawtar Hafidi, who helped define and merge the different aspects of the initiative.

    “We are developing and fabricating detectors that search for signatures from the early universe or enhance our understanding of the most fundamental of particles,” she added. ​“And because all of these detectors create big data that have to be analyzed, we are developing, among other things, artificial intelligence techniques to do that as well.”

    Decoding messages from the universe

    Fleshing out a theory of the universe on cosmic or subatomic scales requires a combination of observations, experiments, theories, simulations and analyses, which in turn requires access to the world’s most sophisticated telescopes, particle colliders, detectors and supercomputers.

    Argonne is uniquely suited to this mission, equipped as it is with many of those tools, the ability to manufacture others and collaborative privileges with other federal laboratories and leading research institutions to access other capabilities and expertise.

    As lead of the initiative’s cosmology component, Habib uses many of these tools in his quest to understand the origins of the universe and what makes it tick.

    And what better way to do that than to observe it, he said.

    “If you look at the universe as a laboratory, then obviously we should study it and try to figure out what it is telling us about foundational science,” noted Habib. ​“So, one part of what we are trying to do is build ever more sensitive probes to decipher what the universe is trying to tell us.”

    To date, Argonne is involved in several significant sky surveys, which use an array of observational platforms, like telescopes and satellites, to map different corners of the universe and collect information that furthers or rejects a specific theory.

    For example, the South Pole Telescope survey, a collaboration between Argonne and a number of national labs and universities, is measuring the cosmic microwave background (CMB) [above], considered the oldest light in the universe. Variations in CMB properties, such as temperature, signal the original fluctuations in density that ultimately led to all the visible structure in the universe.

    Additionally, the Dark Energy Spectroscopic Instrument and the forthcoming Vera C. Rubin Observatory are specially outfitted, ground-based telescopes designed to shed light on dark energy and dark matter, as well as the formation of luminous structure in the universe.

DOE’s Lawrence Berkeley National Laboratory (US) DESI spectroscopic instrument on the National Optical Astronomy Observatory (US) Mayall 4-meter telescope at NSF NOIRLab NOAO Kitt Peak National Observatory (US), in the Quinlan Mountains of the Arizona-Sonoran Desert on the Tohono O’odham Nation, 88 kilometers (55 mi) west-southwest of Tucson, Arizona, at an altitude of 2,096 m (6,877 ft).

NSF (US) NOIRLab (US) NOAO (US) Vera C. Rubin Observatory [LSST] Telescope, currently under construction on the El Peñón peak at Cerro Pachón, a 2,682-meter-high mountain in Coquimbo Region, northern Chile, alongside the existing Gemini South Telescope and Southern Astrophysical Research Telescope.

    Darker matters

    All the data sets derived from these observations are connected to the second component of Argonne’s cosmology push, which revolves around theory and modeling. Cosmologists combine observations, measurements and the prevailing laws of physics to form theories that resolve some of the mysteries of the universe.

    But the universe is complex, and it has an annoying tendency to throw a curve ball just when we thought we had a theory cinched. Discoveries within the past 100 years have revealed that the universe is both expanding and accelerating its expansion — realizations that came as separate but equal surprises.

    Saul Perlmutter (center) [The Supernova Cosmology Project] shared the 2006 Shaw Prize in Astronomy, the 2011 Nobel Prize in Physics, and the 2015 Breakthrough Prize in Fundamental Physics with Brian P. Schmidt (right) and Adam Riess (left) [The High-z Supernova Search Team] for providing evidence that the expansion of the universe is accelerating.

    “To say that we understand the universe would be incorrect. To say that we sort of understand it is fine,” exclaimed Habib. ​“We have a theory that describes what the universe is doing, but each time the universe surprises us, we have to add a new ingredient to that theory.”

    Modeling helps scientists get a clearer picture of whether and how those new ingredients will fit a theory. They make predictions for observations that have not yet been made, telling observers what new measurements to take.

Habib’s group is applying this same sort of process to gain an ever-so-tentative grasp on the nature of dark energy and dark matter. While scientists can tell us that both exist, and that they comprise about 68% and 26% of the universe, respectively, beyond that not much else is known.

    ______________________________________________________________________________________________________________

    Dark Matter Background
Fritz Zwicky discovered Dark Matter in the 1930s when observing the movement of the Coma Cluster. Vera Rubin, a woman in STEM denied the Nobel Prize, did most of the work on Dark Matter some 30 years later.

Fritz Zwicky. Credit: palomarskies.blogspot.com.


    Coma cluster via NASA/ESA Hubble.


In modern times, it was astronomer Fritz Zwicky, in the 1930s, who made the first observations of what we now call dark matter. His 1933 observations of the Coma Cluster of galaxies seemed to indicate that it had a mass 500 times greater than that previously calculated by Edwin Hubble. Furthermore, this extra mass seemed to be completely invisible. Although Zwicky’s observations were initially met with much skepticism, they were later confirmed by other groups of astronomers.
Thirty years later, astronomer Vera Rubin provided a huge piece of evidence for the existence of dark matter. She discovered that the outskirts of galaxies rotate just as fast as their inner regions, whereas, like planets orbiting the Sun, the outer stars should move more slowly. The only way to explain this is if each visible galaxy is merely the central part of some much larger structure, as if it were only the label on a much larger spinning disk, causing the galaxy to have a consistent rotation speed from center to edge.
    Vera Rubin, following Zwicky, postulated that the missing structure in galaxies is dark matter. Her ideas were met with much resistance from the astronomical community, but her observations have been confirmed and are seen today as pivotal proof of the existence of dark matter.

Astronomer Vera Rubin, who did pivotal work on Dark Matter, at the Lowell Observatory in 1965. Credit: The Carnegie Institution for Science.


Vera Rubin measuring spectra. Credit: Emilio Segrè Visual Archives/AIP/SPL.


Vera Rubin, with the Department of Terrestrial Magnetism (DTM) image tube spectrograph attached to the Kitt Peak 84-inch telescope, 1970.

    Dark Matter Research

Inside the Axion Dark Matter eXperiment (ADMX) at the University of Washington (US). Credit: Mark Stone/University of Washington.
    _____________________________________________________________________________________

    Observations of cosmological structure — the distribution of galaxies and even of their shapes — provide clues about the nature of dark matter, which in turn feeds simple dark matter models and subsequent predictions. If observations, models and predictions aren’t in agreement, that tells scientists that there may be some missing ingredient in their description of dark matter.

    But there are also experiments that are looking for direct evidence of dark matter particles, which require highly sensitive detectors [above]. Argonne has initiated development of specialized superconducting detector technology for the detection of low-mass dark matter particles.

    This technology requires the ability to control properties of layered materials and adjust the temperature where the material transitions from finite to zero resistance, when it becomes a superconductor. And unlike other applications where scientists would like this temperature to be as high as possible — room temperature, for example — here, the transition needs to be very close to absolute zero.

    Habib refers to these dark matter detectors as traps, like those used for hunting — which, in essence, is what cosmologists are doing. Because it’s possible that dark matter doesn’t come in just one species, they need different types of traps.

    “It’s almost like you’re in a jungle in search of a certain animal, but you don’t quite know what it is — it could be a bird, a snake, a tiger — so you build different kinds of traps,” he said.

    Lab researchers are working on technologies to capture these elusive species through new classes of dark matter searches. Collaborating with other institutions, they are now designing and building a first set of pilot projects aimed at looking for dark matter candidates with low mass.

    Tuning in to the early universe

    Amy Bender is working on a different kind of detector — well, a lot of detectors — which are at the heart of a survey of the cosmic microwave background (CMB).

    “The CMB is radiation that has been around the universe for 13 billion years, and we’re directly measuring that,” said Bender, an assistant physicist at Argonne.

    The Argonne-developed detectors — all 16,000 of them — capture photons, or light particles, from that primordial sky through the aforementioned South Pole Telescope, to help answer questions about the early universe, fundamental physics and the formation of cosmic structures.

    Now, the CMB experimental effort is moving into a new phase, CMB-Stage 4 (CMB-S4).

CMB-S4 is the next-generation ground-based cosmic microwave background experiment. With 21 telescopes at the South Pole and in the Chilean Atacama desert surveying the sky with 550,000 cryogenically cooled superconducting detectors for 7 years, CMB-S4 will deliver transformative discoveries in fundamental physics, cosmology, astrophysics, and astronomy. CMB-S4 is supported by the Department of Energy Office of Science and the National Science Foundation.

    This larger project tackles even more complex topics like Inflationary Theory, which suggests that the universe expanded faster than the speed of light for a fraction of a second, shortly after the Big Bang.
    _____________________________________________________________________________________
    Inflation

Alan Guth, from Highland Park High School and M.I.T., who first proposed cosmic inflation. Credit: HPHS Owls.

Lambda Cold Dark Matter: accelerated expansion of the universe. Credit: Alex Mittelmann, Coldcreation.


Alan Guth’s original notes on inflation.


    _____________________________________________________________________________________

    A section of a detector array with architecture suitable for future CMB experiments, such as the upcoming CMB-S4 project. Fabricated at Argonne’s Center for Nanoscale Materials, 16,000 of these detectors currently drive measurements collected from the South Pole Telescope. (Image by Argonne National Laboratory.)

    While the science is amazing, the technology to get us there is just as fascinating.

    Technically called transition edge sensing (TES) bolometers, the detectors on the telescope are made from superconducting materials fabricated at Argonne’s Center for Nanoscale Materials, a DOE Office of Science User Facility.

Each of the 16,000 detectors acts as a combination of very sensitive thermometer and camera. As incoming radiation is absorbed on the surface of each detector, measurements are made by supercooling them to a fraction of a degree above absolute zero, far below Antarctica’s lowest recorded temperature of about −89 °C.

    Changes in heat are measured and recorded as changes in electrical resistance and will help inform a map of the CMB’s intensity across the sky.
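
As a toy illustration of that readout chain: near the superconducting transition, a tiny temperature rise from absorbed photons produces a measurable resistance change, which can be converted back into absorbed power. All numbers below are invented order-of-magnitude placeholders, not SPT-3G detector values.

```python
# Toy TES readout arithmetic: resistance change -> temperature change ->
# absorbed optical power. Placeholder numbers, not real detector values.

def absorbed_power(delta_R, dR_dT, G):
    """Infer absorbed power from a resistance change.

    delta_R : measured resistance change (ohm)
    dR_dT   : slope of the transition curve at the bias point (ohm/kelvin)
    G       : thermal conductance to the cold bath (watt/kelvin)
    """
    delta_T = delta_R / dR_dT   # resistance change -> temperature change
    return G * delta_T          # temperature change -> absorbed power

# A steep transition (about 1 ohm per millikelvin) and a weak thermal link
# put the inferred signal in the attowatt range, i.e. ~1e-17 W.
print(absorbed_power(delta_R=1e-4, dR_dT=1e3, G=1e-10))
```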

    CMB-S4 will focus on newer technology that will allow researchers to distinguish very specific patterns in light, or polarized light. In this case, they are looking for what Bender calls the Holy Grail of polarization, a pattern called B-modes.

    Capturing this signal from the early universe — one far fainter than the intensity signal — will help to either confirm or disprove a generic prediction of inflation.

    It will also require the addition of 500,000 detectors distributed among 21 telescopes in two distinct regions of the world, the South Pole and the Chilean desert. There, the high altitude and extremely dry conditions keep water vapor in the atmosphere from absorbing millimeter wavelength light, like that of the CMB.

    While previous experiments have touched on this polarization, the large number of new detectors will improve sensitivity to that polarization and grow our ability to capture it.

    “Literally, we have built these cameras completely from the ground up,” said Bender. ​“Our innovation is in how to make these stacks of superconducting materials work together within this detector, where you have to couple many complex factors and then actually read out the results with the TES. And that is where Argonne has contributed, hugely.”

    Down to the basics

    Argonne’s capabilities in detector technology don’t just stop at the edge of time, nor do the initiative’s investigations just look at the big picture.

Most of the visible universe, including galaxies, stars, planets and people, is made up of protons and neutrons. Understanding the most fundamental components of those building blocks, and how they interact to make atoms and molecules and just about everything else, is the realm of physicists like Zein-Eddine Meziani.

    “From the perspective of the future of my field, this initiative is extremely important,” said Meziani, who leads Argonne’s Medium Energy Physics group. ​“It has given us the ability to actually explore new concepts, develop better understanding of the science and a pathway to enter into bigger collaborations and take some leadership.”

    Taking the lead of the initiative’s nuclear physics component, Meziani is steering Argonne toward a significant role in the development of the Electron-Ion Collider, a new U.S. Nuclear Physics Program facility slated for construction at DOE’s Brookhaven National Laboratory (US).

    Argonne’s primary interest in the collider is to elucidate the role that quarks, anti-quarks and gluons play in giving mass and a quantum angular momentum, called spin, to protons and neutrons — nucleons — the particles that comprise the nucleus of an atom.


    EIC Electron Animation, Inner Proton Motion.
    Electrons colliding with ions will exchange virtual photons with the nuclear particles to help scientists ​“see” inside the nuclear particles; the collisions will produce precision 3D snapshots of the internal arrangement of quarks and gluons within ordinary nuclear matter; like a combination CT/MRI scanner for atoms. (Image by Brookhaven National Laboratory.)

    While we once thought nucleons were the finite fundamental particles of an atom, the emergence of powerful particle colliders, like the Stanford Linear Accelerator Center at Stanford University and the former Tevatron at DOE’s Fermilab, proved otherwise.

    It turns out that quarks and gluons were independent of nucleons in the extreme energy densities of the early universe; as the universe expanded and cooled, they transformed into ordinary matter.

    “There was a time when quarks and gluons were free in a big soup, if you will, but we have never seen them free,” explained Meziani. ​“So, we are trying to understand how the universe captured all of this energy that was there and put it into confined systems, like these droplets we call protons and neutrons.”

    Some of that energy is tied up in gluons, which, despite the fact that they have no mass, confer the majority of mass to a proton. So, Meziani is hoping that the Electron-Ion Collider will allow science to explore — among other properties — the origins of mass in the universe through a detailed exploration of gluons.

    And just as Amy Bender is looking for the B-modes polarization in the CMB, Meziani and other researchers are hoping to use a very specific particle called a J/psi to provide a clearer picture of what’s going on inside a proton’s gluonic field.

    But producing and detecting the J/psi particle within the collider — while ensuring that the proton target doesn’t break apart — is a tricky enterprise, which requires new technologies. Again, Argonne is positioning itself at the forefront of this endeavor.

    “We are working on the conceptual designs of technologies that will be extremely important for the detection of these types of particles, as well as for testing concepts for other science that will be conducted at the Electron-Ion Collider,” said Meziani.

    Argonne also is producing detector and related technologies in its quest for a phenomenon called neutrinoless double beta decay. A neutrino is one of the particles emitted when a neutron undergoes radioactive beta decay, and it serves as a small but mighty connection between particle physics and astrophysics.

    “Neutrinoless double beta decay can only happen if the neutrino is its own anti-particle,” said Hafidi. ​“If the existence of these very rare decays is confirmed, it would have important consequences in understanding why there is more matter than antimatter in the universe.”
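
    In ordinary double beta decay, two neutrons in a nucleus convert to protons and the nucleus emits two electrons and two antineutrinos. In the neutrinoless mode, the antineutrino emitted at one vertex is absorbed at the other, which can only happen if the neutrino is its own antiparticle. Schematically:

    (A, Z) \to (A, Z+2) + 2e^- + 2\bar{\nu}_e \quad (2\nu\beta\beta), \qquad (A, Z) \to (A, Z+2) + 2e^- \quad (0\nu\beta\beta)

    In the standard parametrization, an observed half-life would pin down an effective Majorana neutrino mass via [T_{1/2}^{0\nu}]^{-1} = G^{0\nu} |M^{0\nu}|^{2} (\langle m_{\beta\beta} \rangle / m_e)^{2}, where G^{0\nu} is a phase-space factor and M^{0\nu} a calculated nuclear matrix element.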

    Argonne scientists from different areas of the lab are working with the Neutrino Experiment with Xenon Time Projection Chamber (NEXT) collaboration to design and prototype key systems for the collaboration’s next big experiment. This includes developing a one-of-a-kind test facility and an R&D program for new, specialized detector systems.

    “We are really working on dramatic new ideas,” said Meziani. ​“We are investing in certain technologies to produce some proof of principle that they will be the ones to pursue later, that the technology breakthroughs that will take us to the highest sensitivity detection of this process will be driven by Argonne.”

    The tools of detection

    Ultimately, fundamental science is driven by human curiosity. And while we may not always see the reason for pursuing it, more often than not, fundamental science produces results that benefit all of us. Sometimes it’s a gratifying answer to an age-old question; other times it’s a technological breakthrough intended for one science that proves useful in a host of other applications.

    Through their various efforts, Argonne scientists are aiming for both outcomes. But it will take more than curiosity and brain power to solve the questions they are asking. It will take our skills at toolmaking, like the telescopes that peer deep into the heavens and the detectors that capture hints of the earliest light or the most elusive of particles.

    We will need to employ the ultrafast computing power of new supercomputers. Argonne’s forthcoming Aurora exascale machine will analyze mountains of data to help create massive models that simulate the dynamics of the universe or the subatomic world, which, in turn, might guide new experiments — or introduce new questions.

    Depiction of ANL ALCF Cray Intel SC18 Shasta Aurora exascale supercomputer, to be built at DOE’s Argonne National Laboratory.

    And we will apply artificial intelligence to recognize patterns in complex observations — on the subatomic and cosmic scales — far more quickly than the human eye can, or use it to optimize machinery and experiments for greater efficiency and faster results.

    “I think we have been given the flexibility to explore new technologies that will allow us to answer the big questions,” said Bender. ​“What we’re developing is so cutting edge, you never know where it will show up in everyday life.”

    Funding for research mentioned in this article was provided by Argonne Laboratory Directed Research and Development; Argonne program development; DOE Office of High Energy Physics: Cosmic Frontier, South Pole Telescope-3G project, Detector R&D; and DOE Office of Nuclear Physics.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    DOE’s Argonne National Laboratory (US) seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems. Argonne is a science and engineering research national laboratory operated by UChicago Argonne LLC for the United States Department of Energy. The facility is located in Lemont, Illinois, outside of Chicago, and is the largest national laboratory by size and scope in the Midwest.

    Argonne had its beginnings in the Metallurgical Laboratory of the University of Chicago, formed in part to carry out Enrico Fermi’s work on nuclear reactors for the Manhattan Project during World War II. After the war, it was designated as the first national laboratory in the United States on July 1, 1946. In the post-war era the lab focused primarily on non-weapon related nuclear physics, designing and building the first power-producing nuclear reactors, helping design the reactors used by the United States’ nuclear navy, and a wide variety of similar projects. In 1994, the lab’s nuclear mission ended, and today it maintains a broad portfolio in basic science research, energy storage and renewable energy, environmental sustainability, supercomputing, and national security.

    UChicago Argonne, LLC, the operator of the laboratory, “brings together the expertise of the University of Chicago (the sole member of the LLC) with Jacobs Engineering Group Inc.” Argonne is a part of the expanding Illinois Technology and Research Corridor. Argonne formerly ran a smaller facility called Argonne National Laboratory-West (or simply Argonne-West) in Idaho next to the Idaho National Engineering and Environmental Laboratory. In 2005, the two Idaho-based laboratories merged to become the DOE’s Idaho National Laboratory.

    What would become Argonne began in 1942 as the Metallurgical Laboratory at the University of Chicago, which had become part of the Manhattan Project. The Met Lab built Chicago Pile-1, the world’s first nuclear reactor, under the stands of the University of Chicago sports stadium. Because operating a reactor in the middle of a city was considered unsafe, CP-1 was reconstructed in 1943 as CP-2 in what is today known as Red Gate Woods, then the Argonne Forest of the Cook County Forest Preserve District near Palos Hills. The lab was named after the surrounding forest, which in turn was named after the Forest of Argonne in France, where U.S. troops fought in World War I. Fermi’s pile was originally going to be constructed in the Argonne forest, and construction plans were set in motion, but a labor dispute brought the project to a halt. Since speed was paramount, the project was moved to the squash court under Stagg Field, the football stadium on the campus of the University of Chicago. Fermi assured his colleagues that his calculations were sound and that the pile would not undergo a runaway reaction, which would have contaminated the city.

    Other activities were added to Argonne over the next five years. On July 1, 1946, the “Metallurgical Laboratory” was formally re-chartered as Argonne National Laboratory for “cooperative research in nucleonics.” At the request of the U.S. Atomic Energy Commission, it began developing nuclear reactors for the nation’s peaceful nuclear energy program. In the late 1940s and early 1950s, the laboratory moved to a larger location in unincorporated DuPage County, Illinois and established a remote location in Idaho, called “Argonne-West,” to conduct further nuclear research.

    In quick succession, the laboratory designed and built Chicago Pile 3 (1944), the world’s first heavy-water moderated reactor, and the Experimental Breeder Reactor I (Chicago Pile 4), built in Idaho, which lit a string of four light bulbs with the world’s first nuclear-generated electricity in 1951. A complete list of the reactors designed and, in most cases, built and operated by Argonne can be viewed on the Reactors Designed by Argonne page. The knowledge gained from the Argonne experiments conducted with these reactors 1) formed the foundation for the designs of most of the commercial reactors currently used throughout the world for electric power generation and 2) informs the current evolving designs of liquid-metal reactors for future commercial power stations.

    Conducting classified research, the laboratory was heavily secured; all employees and visitors needed badges to pass a checkpoint, many of the buildings were classified, and the laboratory itself was fenced and guarded. Such alluring secrecy drew visitors both authorized—including King Leopold III of Belgium and Queen Frederica of Greece—and unauthorized. Shortly past 1 a.m. on February 6, 1951, Argonne guards discovered reporter Paul Harvey near the 10-foot (3.0 m) perimeter fence, his coat tangled in the barbed wire. Searching his car, guards found a previously prepared four-page broadcast detailing the saga of his unauthorized entrance into a classified “hot zone”. He was brought before a federal grand jury on charges of conspiracy to obtain information on national security and transmit it to the public, but was not indicted.

    Not all nuclear technology went into developing reactors, however. While designing a scanner for reactor fuel elements in 1957, Argonne physicist William Nelson Beck put his own arm inside the scanner and obtained one of the first ultrasound images of the human body. Remote manipulators designed to handle radioactive materials laid the groundwork for more complex machines used to clean up contaminated areas, sealed laboratories or caves. In 1964, the “Janus” reactor opened to study the effects of neutron radiation on biological life, providing research for guidelines on safe exposure levels for workers at power plants, laboratories and hospitals. Scientists at Argonne pioneered a technique to analyze the moon’s surface using alpha radiation, which launched aboard the Surveyor 5 in 1967 and later analyzed lunar samples from the Apollo 11 mission.

    In addition to nuclear work, the laboratory maintained a strong presence in the basic research of physics and chemistry. In 1955, Argonne chemists co-discovered the elements einsteinium and fermium, elements 99 and 100 in the periodic table. In 1962, laboratory chemists produced the first compound of the inert noble gas xenon, opening up a new field of chemical bonding research. In 1963, they discovered the hydrated electron.

    High-energy physics made a leap forward when Argonne was chosen as the site of the 12.5 GeV Zero Gradient Synchrotron, a proton accelerator that opened in 1963. A bubble chamber allowed scientists to track the motions of subatomic particles as they zipped through the chamber; in 1970, scientists there observed a neutrino interacting with matter in a hydrogen bubble chamber for the first time.

    Meanwhile, the laboratory was also helping to design the reactor for the world’s first nuclear-powered submarine, the U.S.S. Nautilus, which steamed for more than 513,550 nautical miles (951,090 km). The next nuclear reactor model was the Experimental Boiling Water Reactor, the forerunner of many modern nuclear plants, and the Experimental Breeder Reactor II (EBR-II), which was sodium-cooled and included a fuel recycling facility. EBR-II was later modified to test other reactor designs, including a fast-neutron reactor and, in 1982, the Integral Fast Reactor concept—a revolutionary design that reprocessed its own fuel, reduced its atomic waste and withstood safety tests of the same failures that triggered the Chernobyl and Three Mile Island disasters. In 1994, however, the U.S. Congress terminated funding for the bulk of Argonne’s nuclear programs.

    Argonne moved to specialize in other areas, while capitalizing on its experience in physics, chemical sciences and metallurgy. In 1987, the laboratory was the first to successfully demonstrate a pioneering technique called plasma wakefield acceleration, which accelerates particles in much shorter distances than conventional accelerators. It also cultivated a strong battery research program.

    Following a major push by then-director Alan Schriesheim, the laboratory was chosen as the site of the Advanced Photon Source, a major X-ray facility which was completed in 1995 and produced the brightest X-rays in the world at the time of its construction.

    On 19 March 2019, the Chicago Tribune reported that the laboratory was constructing the world’s most powerful supercomputer. Costing $500 million, it will have a processing power of 1 quintillion floating-point operations per second (1 exaflops). Applications will include the analysis of stars and improvements in the power grid.

    With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About the Advanced Photon Source

    The U. S. Department of Energy Office of Science’s Advanced Photon Source (APS) at Argonne National Laboratory is one of the world’s most productive X-ray light source facilities. The APS provides high-brightness X-ray beams to a diverse community of researchers in materials science, chemistry, condensed matter physics, the life and environmental sciences, and applied research. These X-rays are ideally suited for explorations of materials and biological structures; elemental distribution; chemical, magnetic, electronic states; and a wide range of technologically important engineering systems from batteries to fuel injector sprays, all of which are the foundations of our nation’s economic, technological, and physical well-being. Each year, more than 5,000 researchers use the APS to produce over 2,000 publications detailing impactful discoveries, and solve more vital biological protein structures than users of any other X-ray light source research facility. APS scientists and engineers innovate technology that is at the heart of advancing accelerator and light-source operations. This includes the insertion devices that produce extreme-brightness X-rays prized by researchers, lenses that focus the X-rays down to a few nanometers, instrumentation that maximizes the way the X-rays interact with samples being studied, and software that gathers and manages the massive quantity of data resulting from discovery research at the APS.


    Argonne Lab Campus

     