Tagged: Medium

  • richardmitnick 12:27 pm on June 16, 2019 Permalink | Reply
    Tags: "Meteor magnets! Giant planets may act as a shield for life", Medium

    From Medium: “Meteor magnets! Giant planets may act as a shield for life” 

    From Medium

    Astronomers have uncovered further evidence that planets like Jupiter act as meteor magnets, shielding us from space objects that would otherwise slam into Earth. The question now is whether giant planets act as guardians of solar systems elsewhere in the galaxy.

    A team of astronomers has discovered two Jupiter-sized planets, 150 light-years from Earth, that could reveal whether life is likely on the smaller planets in other solar systems, thanks to the intervention of giant planets acting as meteor magnets.

    A mystery object impacts Jupiter in 2016 (NASA)

    Stephen Kane, lead study author and UCR associate professor of planetary astrophysics, explains the importance of gas giants acting as meteor magnets: “We believe planets like Jupiter have profoundly impacted the progression of life on Earth. Without them, humans might not be here to have this conversation.

    “Understanding how many other stars have planets like Jupiter could be very important for learning about the habitability of planets in those systems.”

    Along with liquid water oceans, Kane and other astronomers believe such planets have the ability to act as ‘slingshots,’ pulling objects like meteors, comets, and asteroids out of their trajectories and diverting them from impacts with small, rocky planets.

    Locating meteor magnets with a novel technique

    Many larger planets have been found close to their stars. However, those aren’t as useful for learning about the architecture of our own solar system, where the giant planets (Jupiter, Saturn, Uranus, and Neptune) are all farther from the sun. Big planets far from their stars have, until now, been harder to find.

    The ‘wobble,’ or radial velocity, technique for finding planets relies on the movement of stars created as they’re circled by their planets. The blue wave represents movement toward Earth, while the red wave occurs as the star heads away. (NASA/JPL-Caltech)

    Kane’s team used a novel approach combining traditional detection methods with the latest technologies. A study detailing their findings has been accepted for publication in The Astronomical Journal.

    One of the primary methods of searching for exoplanets involves monitoring the “wobble” produced in a star’s motion by the gravitational effects of the planets orbiting it. Thus, when a star wobbles, it’s a clue there may be an exoplanet nearby.

    When the planet is far from its star, the gravitational pull it exerts is weaker, thus making the wobble smaller and harder to detect. Another problem with using the wobble detection method, Kane points out, is that it just takes a long time.

    Consider an example from our solar system: Earth may only take a year to orbit the sun, but Jupiter takes 12, Saturn takes 30, and Neptune takes an astonishing 164 years! Larger exoplanets also take many years to circle their stars, meaning observing a complete orbit could take an astronomer’s entire career.
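    Those periods follow directly from Kepler’s third law: for a planet around a Sun-like star, the period in years is the semi-major axis in AU raised to the power 3/2. A quick sketch (the semi-major axes are standard textbook values, not figures from the article):

```python
def orbital_period_years(a_au: float) -> float:
    """Kepler's third law around a 1-solar-mass star:
    P^2 = a^3, with P in years and a in astronomical units."""
    return a_au ** 1.5

# Semi-major axes (AU) of Earth and the giant planets mentioned above
for name, a in [("Earth", 1.0), ("Jupiter", 5.20),
                ("Saturn", 9.58), ("Neptune", 30.07)]:
    print(f"{name}: {orbital_period_years(a):.0f} years")
# Earth: 1, Jupiter: ~12, Saturn: ~30, Neptune: ~165
```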

    To accelerate the process for the less patient astronomer, Kane and his team combined the wobble method with direct imaging. This way, if the team thought a planet might be causing wobble, they could confirm it by sight.

    Direct imaging: This false-color composite image traces the motion of the planet Fomalhaut b, a world captured by direct imaging. Credit: NASA, ESA, and P. Kalas (University of California, Berkeley and SETI Institute)

    Obtaining a direct image of a planet quadrillions of miles away is no simple task. It requires the largest possible telescope, one at least 32 feet across and highly sensitive. Even then, the light of the star can overexpose the image, obscuring the target planets.

    The team overcame this challenge by learning to recognize and eliminate the patterns in their images created by starlight. Removing the starlight allowed Kane’s team to see what remained.
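    The idea of modeling and subtracting the repeating starlight pattern can be sketched in a few lines. This is a toy illustration, not the team’s actual pipeline; it assumes a stack of frames of a planet-free reference star is available to build the model:

```python
import numpy as np

rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:64, 0:64]
# A bright, repeating stellar halo centered on the 64x64 frame
halo = 100.0 * np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 200.0)

# Reference frames: the star's halo pattern alone (no planet)
ref = halo + rng.normal(0, 1.0, size=(50, 64, 64))
# Science frames: the same halo, plus a faint planet at pixel (12, 40)
sci = halo + rng.normal(0, 1.0, size=(50, 64, 64))
sci[:, 12, 40] += 5.0

# Learn the repeating starlight pattern from the reference stack...
psf_model = np.median(ref, axis=0)
# ...subtract it, then average the residuals to beat down the noise
residual = (sci - psf_model).mean(axis=0)

print(residual[12, 40])  # ~5: the planet emerges from the glare
```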

    Kane adds: “Direct imaging has come a long way both in terms of understanding the patterns we find, and in terms of the instruments used to create the images, which are much higher resolution than they’ve ever been.

    “You see this every time a new smartphone is released — the camera detectors are always being improved and that’s true in astronomy as well.”

    In this project, the team applied the combined wobble-and-imaging method to 20 stars. In addition to finding two stars orbited by previously undiscovered giant Jupiter-like planets, the team also confirmed a third, previously observed star with a giant planet in its system.

    Going forward, the team will continue to monitor 10 of the stars where planetary companions could not be ruled out. In addition, Kane is planning a new project to measure how long these exoplanets take to complete their orbits toward and away from their stars, which cannot currently be measured.

    Kane concludes: “This discovery is an important piece of the puzzle because it helps us understand the factors that make a planet habitable and whether that’s common or not.

    “We are converging rapidly on answers to this question that the past 3,000 recorded years of history could only wish they had available to them.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    About Medium

    Medium is an online publishing platform developed by Evan Williams and launched in August 2012. It is owned by A Medium Corporation. The platform is an example of social journalism, hosting a hybrid collection of amateur and professional writers and publications, as well as blogs exclusive to Medium, and it is regularly regarded as a blog host.

    Williams developed Medium as a way to publish writings and documents longer than Twitter’s 140-character (now 280-character) maximum.

  • richardmitnick 8:45 am on June 4, 2019 Permalink | Reply
    Tags: "A Twisted Tale of Sunspots", Medium

    From Medium: “A Twisted Tale of Sunspots” 

    From Medium

    May 28, 2019
    James Maynard

    One of the greatest questions in solar astronomy may have an answer after more than 400 years, thanks to an inquisitive team of German researchers. Every eleven years, the population of sunspots seen on the surface of our local star reaches a maximum before dying out. Another population of sunspots then begins to appear (this time with their poles reversed from the previous cycle) before they, too, peak and fade away. The process may be well known, but the reason for these 11-year peaks has remained a mystery, until now.

    The magnetic field of the Sun may be affected by the gravitational forces of Venus, Earth, and Jupiter, driving the cyclical behavior of sunspots, a new study suggests. Researchers compared solar cycles to the positions of the planets, finding that the gravitational forces of these three worlds act like a cosmic clock, regulating the solar cycle.

    “There is an astonishingly high level of concordance: what we see is complete parallelism with the planets over the course of 90 cycles. Everything points to a clocked process,” explained Frank Stefani of the German-based research institute Helmholtz-Zentrum Dresden-Rossendorf (HZDR).

    The sunspot cycle can be easily seen in this graphic, produced by NASA in 2017. We are currently at a low point in the cycle. Image credit: NASA/ARC/Hathaway

    You Missed a Spot Right There

    Sunspots were first clearly seen between the years 1610 and 1611, in the years following the invention of the telescope. Although Galileo is often given credit for the discovery, several pioneering astronomers of the era reported finding the distinctive dark spots on the Sun around the same time.

    A sunspot, seen by the Solar Dynamics Observatory (SDO), shows off its powerful magnetic field. Image credit: NASA’s Goddard Space Flight Center/SDO


    The publication of the first paper recognizing these features, by Dutch astronomer Johannes Fabricius, shocked early 17th-century society, which had long held a belief in a perfect, unchanging, featureless Sun.

    De Maculis in Sole observatis et Apparente earum cum Sole Conversione Narratio (Narration on Spots Observed on the Sun and their Apparent Rotation with the Sun), published in June 1611, was the first scientific paper published describing sunspots. Public domain image.

    Everybody Line up!

    The greatest gravitational force of planets on the Sun occurs once every 11.07 years, when Venus, Earth, and Jupiter come into alignment. Gravitational pull from this arrangement results in tidal forces on the Sun, similar to the way our own Moon draws oceans upward, creating tides.
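    Why these three planets in particular? The tide a planet raises on the Sun scales as its mass divided by the cube of its distance. A back-of-the-envelope sketch with rounded textbook masses and distances (my values, not the study’s) shows Jupiter, Venus, and Earth are indeed the strongest tide-raisers:

```python
# Tidal acceleration raised on the Sun by a planet scales as M / d^3.
# Masses in Earth masses, mean orbital distances in AU (rounded).
planets = {
    "Mercury": (0.055, 0.387),
    "Venus":   (0.815, 0.723),
    "Earth":   (1.000, 1.000),
    "Mars":    (0.107, 1.524),
    "Jupiter": (317.8, 5.204),
    "Saturn":  (95.16, 9.58),
}

tide = {name: m / d ** 3 for name, (m, d) in planets.items()}
for name, t in sorted(tide.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {t:.2f}")  # relative tidal strength (Earth = 1)
```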

    This tidal effect is not strong enough to disturb the interior of our stellar companion, so the timing of this alignment was overlooked in earlier studies of sunspot cycles. However, a physical effect known as the Tayler instability is capable of altering the behavior of conductive liquids or plasmas.

    The Tayler instability alters the rate of flow of material (the flux) in an object like the Sun and can affect magnetic fields. The effect can be triggered by relatively small movements in materials like the plasma found at the surface of the Sun. As a result, even minor tidal forces can alter the twist of the plasma relative to its direction of travel. This quantity, known as the helicity of a region of plasma, feeds into the solar dynamo (the physical process that generates the magnetic field of our parent star).

    “Magnetic fields are a little like rubber bands. They consist of continuous loops of lines of force that have both tension and pressure. Like rubber bands, magnetic fields can be strengthened by stretching them, twisting them, and folding them back on themselves. This stretching, twisting, and folding is done by the fluid flows within the Sun,” The Marshall Space Flight Center explains.

    A video explaining the process resulting in the formation of sunspots. Credit: NASA Goddard

    Stefani initially doubted whether tidal forces from the planets could alter a process as powerful as the solar dynamo. However, once he realized the Tayler instability could provide the trigger, Stefani and his team began developing a computer simulation to model the process.

    “I asked myself: What would happen if the plasma was impacted on by a small, tidal-like perturbation? The result was phenomenal. The oscillation was really excited and became synchronized with the timing of the external perturbation,” Stefani explains.

    Sun, Spot, Sun!

    The motion of the Sun is complex, with multiple effects contributing to its intricate dance. As the Sun rotates, material at the equator moves faster than material near the poles. In a process known as the omega effect, lines of the Sun’s magnetic field are pulled and stretched along the direction of the solar equator.
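    The differential rotation behind the omega effect is well measured; a commonly quoted empirical fit gives the sidereal rotation rate as a function of latitude. A quick sketch (the coefficients are approximate values from the literature, not from this article):

```python
import math

def rotation_rate_deg_per_day(lat_deg: float) -> float:
    """Approximate sidereal surface rotation rate of the Sun vs. latitude,
    using a commonly quoted fit:
    omega = 14.71 - 2.39*sin^2(lat) - 1.79*sin^4(lat) deg/day."""
    s2 = math.sin(math.radians(lat_deg)) ** 2
    return 14.71 - 2.39 * s2 - 1.79 * s2 ** 2

for lat in (0, 30, 60, 75):
    omega = rotation_rate_deg_per_day(lat)
    print(f"lat {lat:2d} deg: {omega:5.2f} deg/day "
          f"-> {360 / omega:.1f}-day period")
# The equator completes a turn in roughly 24.5 days; high latitudes lag
```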

    A little-understood alpha effect then affects the magnetic lines, pushing them toward their original alignment, resulting in a twisting of the lines of force.

    Magnetic lines can be seen above sunspots in this image of charged particles, captured in extreme ultraviolet light. Image credit: NASA/GSFC/Solar Dynamics Observatory.

    These actions create the cool, dark areas we know as sunspots. While most of the surface of the Sun glows around 5,500 degrees Celsius (9,900 Fahrenheit), sunspots remain at a relatively cool 3,200 Celsius (5,800 Fahrenheit). Sunspots are still fairly bright, only appearing dark against the torrid backdrop of the solar surface.
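    The Stefan-Boltzmann law explains why those still-bright spots look dark: emitted power per unit area scales as the fourth power of temperature in kelvin. Converting the article’s Celsius figures:

```python
# Surface brightness scales as T^4 (T in kelvin), so a "cool" 3,200 C
# sunspot is far dimmer per unit area than the 5,500 C photosphere.
T_surface = 5500 + 273.15   # photosphere, kelvin
T_spot = 3200 + 273.15      # sunspot, kelvin

relative_brightness = (T_spot / T_surface) ** 4
print(f"{relative_brightness:.0%}")  # a sunspot emits only ~13% as much light per unit area
```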

    This new model, folding tidal forces into the complex processes of the solar dynamo, could explain several questions astronomers and physicists have about the solar dynamo, and how it affects our parent star.

    The Parker Solar Probe is currently in orbit around the Sun, in a mission to study our stellar companion up close.

    NASA Parker Solar Probe Plus named to honor Pioneering Physicist Eugene Parker

    This program could answer a multitude of mysteries concerning the Sun over the next few years.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition


  • richardmitnick 11:03 am on May 23, 2019 Permalink | Reply
    Tags: Medium

    From Webb via Medium: “Is the James Webb Space Telescope Worth the Wait?” 

    NASA Webb Header

    NASA Webb Telescope

    James Webb Space Telescope



    May 9, 2019
    James Maynard

    Billed as the successor to Hubble, the James Webb Space Telescope (JWST) promises to bring about a new era in astronomy. This mammoth orbiting observatory is designed to answer some of the greatest and deepest questions astronomers have about the Cosmos.

    However, this project, originally conceived in 1996 for launch around 2007, has faced a series of delays and setbacks, and the telescope remains on the ground. Some members of Congress, and even the general public, are starting to ask if Webb is worth the cost and the delays. Scientists, on the other hand, are eagerly awaiting the launch with bated breath.

    The golden mirror of the James Webb Space Telescope contains 18 segments, designed to capture light from the earliest ages of the Cosmos. Image credit: NASA/Desiree Stover

    “The James Webb Space Telescope will be the world’s premier space science observatory when it launches in 2021. Webb will solve mysteries of our solar system, look beyond to distant worlds around other stars, and probe the mysterious structures and origins of our universe and our place in it,” NASA officials explain.

    Dream Big or Stay Home

    Following an initial budget estimate of one billion dollars, costs have skyrocketed to $9.66 billion, while the launch date has slipped by more than a decade. Technical errors, equipment failures, and the government shutdown early in 2019 all combined to push back the launch of this next generation space telescope.

    For much of the time Webb was being developed, NASA was aiming for launch in October 2018. In September 2017, that date was pushed back to spring 2019. In March 2018, launch was again delayed, until May 2020. Then, in June 2018, NASA rescheduled launch for March 2021.

    One major hurdle with lifting Webb off the ground is the massive scope of the project. Engineers at NASA, faced with technical challenges that had never before been attempted, needed to develop 10 new technologies before construction could begin on the telescope. These included advanced shielding to protect the observatory from the heat of the Sun, as well as new software to keep Webb pointed at its target.

    “Among the new technologies are: near and mid-infrared detectors, sunshield materials, microshutters and wavefront sensing and control. All inventions, with the exception of wavefront sensing and control are ‘cryogenic,’ which means icy cold. It’s important for these pieces to be kept cold because the telescope will be reading heat and light from stars, and heat from instruments would get in the way of a good reading,” NASA officials explain.

    The JWST will come complete with a wide range of technologies, many of which are still being developed.

    The researchers, engineers, and contractors of NASA have a can-do attitude, which can be one of their greatest strengths. It was this zeitgeist which allowed the American space agency to put astronauts on the Moon less than a decade after the project was initiated. The mindset that anything is possible also led to saving the Hubble, as well as the astronauts aboard Apollo 13. It is challenging, when an agency is faced with the prospect of developing great science like Webb proposes, to take into account the inevitable failures and setbacks which are bound to come up over time.

    “The James Webb Space Telescope is the most ambitious and complex astronomical project ever built, and bringing it to life is a long, meticulous process. The wait will be a little longer now but the breakthrough science that it will enable is absolutely worth it,” said Günther Hasinger, Director of Science at the European Space Agency (ESA), following the most recent launch delay.

    It’s Harder to Hit a Moving Target

    One challenge facing NASA is the constantly shifting priorities of presidents and members of Congress. Unlike China’s, the American space program, in general, is beset by scientific targets that shift with each passing administration.

    As a prime example, NASA was recently directed to land human beings on the Moon once more by the year 2024. While many people within the agency are confident of making this goal, the challenges are quite extraordinary. Meanwhile, several other countries and private organizations are also planning their own human journeys to our planetary companion.

    A video showing the launch and deployment of the JWST. Credit: Northrop Grumman

    The JWST was first officially proposed to NASA in 2001, by the National Academy of Sciences, as the Next Generation Space Telescope. This massive undertaking was declared a top priority for the academy, and a one billion dollar budget was proposed for the program.

    Cost overruns are not new, or unexpected, at NASA. The Hubble Space Telescope, designed to cost $200 million, finally tallied out at $1.2 billion. Only after launch did researchers find it had reached orbit with a faulty mirror that needed correcting.

    “NASA project managers are often overly optimistic about the effort required to mature critical technologies and frequently underestimate the cost and schedule reserves needed to address known and unknown risks, optimistically assuming that most risks will not materialize. However, when they do they result in significant cost, schedule, and performance problems,” Paul Martin, NASA Inspector General, wrote in June 2018.

    A video comparing the Hubble and James Webb Space Telescopes. Credit: James Webb Space Telescope (JWST)

    Northrop Grumman, the main contractor for Webb, has been a frequent target for critics of the delays and cost overruns. Human errors have certainly contributed to problems getting Webb off the ground. One person selected an improper solvent to clean a fuel valve, while an incorrect set of wires pushed the wrong voltage into a system during a test. Just prior to another key test, the wrong fasteners were installed on the sunshield cover, resulting in another delay.

    Grumman has a large team of workers dedicated solely to building the JWST, and delays at any point in the project can result in large cost overruns. The Virginia-based contractor is constructing Webb on a cost-plus contract, meaning that cost overruns are charged to the government. Other organizations (such as SpaceX) ferrying supplies to the space station are paid on a fixed-price basis. However, Grumman officials have stated they would have been unable to make a profit on Webb if they were required to develop the telescope on a fixed-price contract.

    On Further Reflection…

    The Webb Telescope will, almost certainly, rise from Earth one day in the coming years. Assuming a successful launch, the observatory will head more than 1.5 million kilometers (one million miles) from Earth, to the L2 Lagrange point, on the nighttime side of our planet. There, it will join the orbital neighborhood once occupied by the Planck and Herschel space observatories.
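    That distance is not arbitrary: L2 sits where the combined pull of the Sun and Earth lets a spacecraft orbit the Sun in lockstep with Earth. Its distance from Earth can be estimated with the restricted three-body approximation (a back-of-the-envelope sketch, not mission documentation):

```python
# L2 lies roughly one Hill-sphere radius beyond Earth:
# r ~ R * (m / (3 M))^(1/3), with R the Earth-Sun distance,
# m the Earth's mass and M the Sun's mass.
R = 1.496e8          # Earth-Sun distance, km
m_over_M = 3.003e-6  # Earth/Sun mass ratio

r_L2 = R * (m_over_M / 3) ** (1 / 3)
print(f"{r_L2:.2e} km")  # ~1.5 million km, about one million miles
```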

    Sporting a mirror 6.5 meters (21 feet) in diameter, the Webb Telescope will be capable of collecting images and data from the first stars and galaxies that came into being. This instrument will unravel some of the deepest mysteries about the earliest eras of the Cosmos, utilizing the largest mirror ever sent into space.

    Once Webb opens its magnificent golden eye to space, the observatory will study the oldest, most distant objects in the Cosmos in infrared light, and assist in the search for exoplanets which could be home to extraterrestrial life.

    Like so much in life, as well as science, blame for setbacks cannot be placed on the shoulders of a single person or organization. We have seen, time and again, how NASA delivers science unparalleled by any other organization in the world.

    Once the James Webb Space Telescope launches aboard an Ariane 5 rocket, the science it delivers will revolutionize our knowledge of the Universe. Hopefully, we won’t have to wait too much longer for the most-advanced telescope in the history of the world to open our view of the deepest reaches of the Cosmos.

    One thing that is certain — when the science starts pouring in, it will be worth the wait.

    [Totally ignored in this article is the contribution of money and technology for Webb by ESA and CSA. We are far beyond the point of allowable failure to launch Webb.]

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The James Webb Space Telescope will be a large infrared telescope with a 6.5-meter primary mirror. Launch is planned for later in the decade.

    Webb telescope will be the premier observatory of the next decade, serving thousands of astronomers worldwide. It will study every phase in the history of our Universe, ranging from the first luminous glows after the Big Bang, to the formation of solar systems capable of supporting life on planets like Earth, to the evolution of our own Solar System.

    Webb telescope was formerly known as the “Next Generation Space Telescope” (NGST); it was renamed in Sept. 2002 after a former NASA administrator, James Webb.

    Webb is an international collaboration between NASA, the European Space Agency (ESA), and the Canadian Space Agency (CSA). The NASA Goddard Space Flight Center is managing the development effort. The main industrial partner is Northrop Grumman; the Space Telescope Science Institute will operate Webb after launch.

    Several innovative technologies have been developed for Webb. These include a folding, segmented primary mirror, adjusted to shape after launch; ultra-lightweight beryllium optics; detectors able to record extremely weak signals; microshutters that enable programmable object selection for the spectrograph; and a cryocooler for cooling the mid-IR detectors to 7 K.

    There will be four science instruments on Webb: the Near InfraRed Camera (NIRCam), the Near InfraRed Spectrograph (NIRspec), the Mid-InfraRed Instrument (MIRI), and the Fine Guidance Sensor/ Near InfraRed Imager and Slitless Spectrograph (FGS-NIRISS). Webb’s instruments will be designed to work primarily in the infrared range of the electromagnetic spectrum, with some capability in the visible range. It will be sensitive to light from 0.6 to 28 micrometers in wavelength.

    NASA Webb NIRCam

    NASA Webb NIRspec

    NASA Webb MIRI

    CSA Webb Fine Guidance Sensor-Near InfraRed Imager and Slitless Spectrograph FGS/NIRISS

    Webb has four main science themes: The End of the Dark Ages: First Light and Reionization, The Assembly of Galaxies, The Birth of Stars and Protoplanetary Systems, and Planetary Systems and the Origins of Life.

    Launch is scheduled for later in the decade on an Ariane 5 rocket. The launch will be from Arianespace’s ELA-3 launch complex at European Spaceport located near Kourou, French Guiana. Webb will be located at the second Lagrange point, about a million miles from the Earth.

    NASA image

    ESA50 Logo large

    Canadian Space Agency

  • richardmitnick 2:19 pm on May 20, 2019 Permalink | Reply
    Tags: "The Protein Folding Problem", Medium

    From Medium: “The Protein Folding Problem” 

    From Medium

    Apr 23, 2019
    Aryan Misra


    Recent advancements in the ultimate problem in biology.

    The key to finding cures for diseases such as Alzheimer’s and Parkinson’s lies in a fundamental biomolecule: the protein. Biologists and physicists alike have been trying to solve the protein folding problem for over half a century with no real progress until recently. This article will give insight into how proteins fold, why it’s so difficult to predict how they fold, and the solutions that could be designed around an accurate protein folding algorithm, as well as various other topics that may be of interest.

    What are proteins?

    Proteins are complex macromolecules made of strings of hundreds or thousands of amino acids. They perform essential biological functions in your body and are key to every organism. From fighting diseases to providing structure for cells, proteins play countless roles in your body and in every other living organism. Our DNA contains the information for creating all of these proteins, in the form of the nucleotides A, C, G, and T. DNA is first transcribed to mRNA, an intermediary molecule in the process of protein creation; during transcription, the T’s are replaced with U’s. Finally, mRNA is translated into chains built from the 20 different amino acids that make up proteins.
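    The transcription-and-translation pipeline described above can be sketched with a toy codon table. Only a handful of the 64 real codons are included here, purely for illustration:

```python
# A toy slice of the genetic code: translation reads mRNA three
# bases (one codon) at a time and appends the matching amino acid.
CODON_TABLE = {
    "AUG": "Met",  # also the start codon
    "UUU": "Phe", "GGC": "Gly", "GCU": "Ala",
    "UGA": "STOP", "UAA": "STOP", "UAG": "STOP",
}

def translate(mrna: str) -> list[str]:
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE[mrna[i:i + 3]]
        if aa == "STOP":
            break
        protein.append(aa)
    return protein

dna = "ATGTTTGGCTGA"          # DNA coding strand
mrna = dna.replace("T", "U")  # transcription: T -> U
print(translate(mrna))        # ['Met', 'Phe', 'Gly']
```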



    Protein Structure

    Proteins start off as a very long sequence of amino acids. In this state, the protein is unstable because it is not at its lowest energy state. To reach that state, the protein folds into a complex 3D shape determined by the sequence of amino acids it started off as, as well as the environment it’s in.

    This structure is really important because the way it’s folded completely defines how the protein functions in your body. For example, if the protein holds a T shaped structure, it is likely an antibody that binds to viruses and bacteria to help protect the body. If the structure of the protein is globular, it is likely used for transporting other small molecules throughout your body.

    By understanding this folding code, we could potentially eradicate neurological diseases such as Alzheimer’s and Parkinson’s, diseases linked to proteins misfolding in the brain and creating clumps of protein that disrupt brain activity. The protein folding problem can essentially be broken into three parts, outlined well by the following quote.

    The protein folding problem is the most important unsolved problem in structural biochemistry. The problem consists of three related puzzles: i) what is the physical folding code? ii) what is the folding mechanism? and iii) can we predict the 3D structure from the amino acid sequences of proteins?

    (Jankovic and Polovic 2017)

    Knowing this code could completely change the way we treat many different diseases and could potentially lead to completely artificial materials and prosthetics. Knowing how proteins fold also opens up new potential in targeted drug discovery, and it would let us create biomaterials compatible with living organisms, such as highly accurate prosthetics. Advances in biodegradable enzymes could help us reduce the effect of pollutants such as plastic or oil by helping break them down efficiently.

    However, the problem is that predicting the 3D structure of a protein is nigh on impossible because of the sheer number of structures a protein can adopt. According to Levinthal’s paradox, it would take longer than the age of the universe to iterate through every possible conformation of a typical protein.

    The way a protein folds is dependent on the amino acids it’s made of and the environment that it’s in. This folding is a result of amino acids coming together through attraction across disparate lengths of the protein and is driven by a process called energy minimization.

    Energy Minimization

    Picture going on a hiking expedition across fields of rolling hills. The tops of these hills represent protein fold combinations that are really unlikely to occur, and valleys represent states that proteins are drawn towards. For a protein of x amino acids, there are roughly 3^x states it can be in, and when the number of amino acids ranges from hundreds to thousands, this soon becomes far too many combinations for modern computers to search exhaustively. This is why we can’t just try every combination and see which has the least energy.
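    Levinthal’s argument is simple arithmetic. Assuming roughly three backbone conformations per amino acid and a wildly optimistic sampling rate, the numbers for even a modest 150-residue protein are hopeless:

```python
# Back-of-the-envelope Levinthal estimate (illustrative assumptions:
# 3 conformations per residue, 10^12 conformations sampled per second)
residues = 150
states = 3 ** residues
rate = 1e12                   # conformations tried per second
age_of_universe_s = 4.35e17   # ~13.8 billion years, in seconds

search_time_s = states / rate
print(f"{states:.2e} states")                 # ~3.7e71 conformations
print(search_time_s / age_of_universe_s)      # ~1e42 ages of the universe
```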

    Protein’s structure changing at different energy points. Source.

    With recent surges in the amount of genetic data we have access to as a result of cheaper genome sequencing, many biologists are turning to a data-driven solution to the protein folding problem. The cost of sequencing a genome has dropped from billions of dollars to a price that is far more practical.

    Graph illustrating the decreasing cost of genome sequencing.

    The Data-Driven Approach

    Previous methods for predicting protein structures include techniques with long names such as X-ray crystallography, cryo-electron microscopy, and nuclear magnetic resonance. These approaches are really expensive and take months, so more and more researchers are moving towards machine learning to predict protein structures. CASP (Critical Assessment of protein Structure Prediction) is a biennial experiment in which research groups test their prediction methodology in competition with dozens of the world’s leading teams.

    CASP saw a long stagnation in the quality of protein predictions in the early 2000s; however, this has changed recently, with each assessment bringing significant improvement in the quality of the results.

    In the most recent CASP, Google’s DeepMind took everyone by surprise, winning first place by a huge margin. Their algorithm, named AlphaFold, was designed to do de novo prediction, which is modelling target proteins from scratch, a daunting task.


    AlphaFold’s algorithm consists of a dual residual network architecture in which one network predicts the distances between pairs of amino acids, and the other predicts the angles of the chemical bonds that connect them. On top of this, they used a Generative Adversarial Network (GAN) to create new protein fragments, which were then fed into the network to continuously improve it. Their code and research paper have yet to be released; however, there have been some attempts to replicate the concept (e.g., MiniFold).
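    The distance-prediction half of that design can be illustrated with a toy sketch: embed each residue, build an L x L grid of pairwise features, and map each grid cell to a predicted distance. The weights below are random stand-ins for what a real model learns from known structures; this shows the data flow only, not DeepMind’s model:

```python
import numpy as np

rng = np.random.default_rng(0)

L, n_aa, n_feat = 10, 20, 8
seq = rng.integers(0, n_aa, size=L)       # toy amino acid sequence
embed = rng.normal(size=(n_aa, n_feat))   # per-residue embedding table
x = embed[seq]                            # (L, n_feat)

# Pairwise features: concatenate the embeddings of residues i and j
pair = np.concatenate(
    [np.repeat(x[:, None, :], L, axis=1),
     np.repeat(x[None, :, :], L, axis=0)], axis=-1)  # (L, L, 2*n_feat)

w = rng.normal(size=2 * n_feat)           # random stand-in weights
dist_pred = np.abs(pair @ w)              # (L, L) nonnegative "distances"
dist_pred = (dist_pred + dist_pred.T) / 2 # distance maps are symmetric
print(dist_pred.shape)                    # (10, 10)
```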

    Optimization of protein structure.

    Using this, they were able to create predictions for many different proteins, producing a state-of-the-art algorithm in this field. The main limitation holding AlphaFold and other machine learning algorithms back is the lack of available data. Currently, about 145,000 different proteins are catalogued in the Protein Data Bank, while the total number of different proteins found in nature is thought to be around 10¹². Yet the space of possible proteins is immense; a chain of just 200 amino acids has nearly 20²⁰⁰ possible sequences, leaving lots of room for scientific discovery.

    It is very likely that the integration of technologies and algorithms from the fields of deep learning, quantum computing, and bioinformatics will lead to the creation of an algorithm that can successfully solve this problem. Utilizing techniques such as deep Q-learning, quantum computing, and capsule networks, we can foresee a future in which we positively change the lives of billions of people through the solution of the protein folding problem.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    About Medium

    Medium is an online publishing platform developed by Evan Williams and launched in August 2012. It is owned by A Medium Corporation. The platform is an example of social journalism, hosting a hybrid collection of amateur and professional people and publications, as well as exclusive blogs and publishers, and is regularly regarded as a blog host.

    Williams developed Medium as a way to publish writings and documents longer than Twitter’s 140-character (now 280-character) maximum.

  • richardmitnick 9:34 am on April 18, 2019 Permalink | Reply
    Tags: "Gas: The Blueprint of Star Formation", ASTRON LOFAR Radio Antenna Bank Netherlands, , , , , , GMCs-Giant Molecular Clouds, Medium   

    From Medium: “Gas: The Blueprint of Star Formation” 

    From Medium

    Apr 2, 2019

    We have explored how individual gas clouds in the universe collapse gravitationally and crunch out new stars. But that is just an illustration of the way stars are formed. What is the distribution of stars and gas in the Milky Way as a whole?

    Detail of the birth of stars in the Carina nebula, focused on a pillar of gas and dust within which stars are forming. This is just part of a larger complex of star formation within a huge cloud of gas, a scenario played out in patches throughout the disc of our galaxy and other star-forming galaxies in the universe, wherever there is a reservoir of cold, dense gas and the conditions are right for the thermonuclear triggering of star formation. The pillar is quite opaque, even to the intense light emitted by the new stars within it, but jets emitted by some young, massive stars within the pillar can be seen blasting laterally out of the column, and the whole region glows with the light of ionized gases and scattered light. Star formation is an energetic process: radiation and winds from the most massive, young stars can dramatically alter and shape their immediate surroundings, and form part of the feedback energy responsible for regulating the growth of galaxies.

    As we know, the Milky Way can be divided up into its disc and central bulge: the white and the yolk, if you like. The disc of the galaxy is where most of the dense gas reservoirs responsible for forming new stars are located; these are the so-called Giant Molecular Clouds (GMCs). They are called ‘giant’ because they are large, spanning some hundreds of parsecs, and contain enough fuel to form potentially millions of stars. They are ‘molecular’ because the gas within them is primarily composed of molecular hydrogen, the simplest molecule: just two protons bound together by shared electrons forming a simple covalent bond. In order to form in the first place, molecular clouds must have ‘cooled’ from more tenuous gas in which the hydrogen atoms were not yet bound together. We say the gas has ‘cooled’ because, for the molecules to form, the atoms must get close enough together that they become bound via the electromagnetic force, rather than simply zipping by each other. This is not the situation in hot gas; the atoms must have lost energy if molecules, and subsequently stars, are to form.

    At first, it’s a bit confusing to think that stars, which are hot, form from gas that has cooled, but what we really mean is that the gas cloud as a whole has collapsed gravitationally, losing some of its internal energy, so that fusion, and with it star formation, can eventually take place in dynamically cold clumps.

    Once stars start forming within a cloud, the gas around the sites of new star formation starts getting blasted by the radiation and winds driven by those new stars. This backlash not only ionizes the surrounding gas, creating a glowing nebula like Orion; the combination of radiation and winds blown by the stars also starts to blow out bubbles and cavities within the GMC, affecting the distribution and chemistry of the gas. The astrophysics at the interface of star formation and the interstellar medium is thus incredibly complex, meriting a dedicated field of astrophysics research.

    There are many GMCs spread throughout the galactic disc. If we could view the Milky Way from above, we would see many patches of red-hued ionized hydrogen and clusters of blue, young stars punctuating the spiral arms of the galaxy. We cannot get to this vantage point for obvious reasons, but images of nearby spiral galaxies that present their faces to us give us an excellent idea of what the Milky Way looks like from the outside.

    We measure the star-formation rate, or SFR, of a galaxy in the convenient units of the equivalent mass in Suns formed per year. The Milky Way has a star formation rate of just a few solar masses per year, and it’s worth noting that even after billions of years of evolution the galaxy has not yet used up all of its gas. It remains an active place, albeit comparatively sedate compared to the most extreme galaxies in the universe, which I will come to. If we waited long enough and watched the evolution of the Milky Way, pretty much all the gas in the galaxy would be turned into stars, and the supply of gas from the surrounding intergalactic space, which gradually rains down via gravity, would turn into an insignificant trickle.

    A few tens to 100 million years after the last generation of stars forms, the massive stars would die, leaving behind their longer-lived but less massive cousins. The disc would eventually fade and turn from blue to red as the bluer spectral types die off progressively. Such galaxies do exist and are called ‘passive spirals’. They are thought to be typical spirals in which star formation has ceased, either because of some environmental influence that prevents gas from forming new stars or because they have run out of fuel.

    On the other hand, if the Milky Way collides with another galaxy, as it will probably do with M31 in the future, there will be a violent event that could significantly boost the star formation rate.

    Milkdromeda -Andromeda on the left-Earth’s night sky in 3.75 billion years-NASA

    The strong gravitational tidal force will distort and tear the two galactic discs, triggering a burst of star formation in disturbed clouds, which are impelled to collapse by the gravitational perturbation. No stars will physically collide: they are so small and far between that the chance of individual stellar collisions when galaxies collide is very low. We see these starbursts happening in other galaxies that have recently collided; stellar discs are ripped into long tails, and there are patches of intense ultraviolet and infrared emission, often towards the dense centre of the system. When things settle down, our galaxy will have changed chemically, dynamically and structurally. New generations of stars, and the new solar systems that form with them, will be enriched with elements that will literally have formed a long time ago in a galaxy far, far away.

    Galaxy collisions are events that stir things up: they deliver new material and promote new growth. As always, the dense gas is where all the action happens, but this gas is surprisingly difficult to detect. Most of the molecular hydrogen in galaxies cannot be observed directly because, for physical reasons related to the structure of the hydrogen molecule, under normal conditions it doesn’t emit radiation that we can detect. And yet molecular hydrogen is a fundamental component of galaxies, so how can we learn about the properties of the raw material for star formation?

    It’s easy to see the glowing, ionized gas around star-forming regions, but these are like brush fires in a more expansive savannah. The majority of the gas in any one GMC is not actively forming stars. So how do we measure and map the molecular gas? The answer comes from the contamination of that gas by previous generations of stars. One of the most common molecules in galaxies after hydrogen is carbon monoxide: the same stuff that is emitted by poorly burning gas fires, and which you can detect in your home.

    Carbon monoxide tends to be mixed in with the hydrogen gas, which is extremely useful because, unlike molecular hydrogen, it does emit radiation when excited into an energetic state. In this case, that energy is in the form of the simple rotation of the carbon monoxide molecule (a single carbon and a single oxygen atom bound together). This rotation can be set off when carbon monoxide molecules collide with hydrogen molecules. Changes in the energy of quantum systems (like molecules) result in the emission of precisely tuned radiation. At the molecular level, even the rotation of a molecule like carbon monoxide is regulated by quantum mechanics: only certain rates of rotation are allowed. This means that carbon monoxide, when rotationally excited, emits radiation at regular intervals in frequency. Different frequencies of emission correspond to different energy states: the highest frequencies are emitted by carbon monoxide molecules in the most energetic states, and vice versa. These energy states depend on the density and temperature of the gas.
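    The “regular intervals in frequency” follow directly from the rigid-rotor model, in which the transition J → J−1 is emitted at ν = 2BJ. Using the known rotational constant of carbon monoxide (B ≈ 57.636 GHz; the rigid-rotor formula is a simplification that ignores centrifugal stretching):

```python
B_CO = 57.636  # GHz, rotational constant of 12C16O

def line_freq_ghz(J_upper):
    """Frequency of the rotational transition J -> J-1 for a rigid rotor: nu = 2*B*J."""
    return 2.0 * B_CO * J_upper

# The evenly spaced "CO ladder": J=1-0 near 115 GHz (a wavelength of ~2.6 mm),
# J=2-1 near 230 GHz, J=3-2 near 346 GHz.
ladder = {J: line_freq_ghz(J) for J in (1, 2, 3)}
```

    The J=1-0 line at ~115 GHz is the workhorse transition for tracing the bulk cold molecular gas described in the next paragraph.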

    It takes gas densities of a few hundred particles per cubic centimetre and temperatures of a few tens of degrees above absolute zero to start emitting the lowest-energy carbon monoxide lines. In context, the gas that produces this emission is representative of the bulk molecular gas reservoir. Unlike the emission lines of ionized gas in the visible-light part of the spectrum, the carbon monoxide emission has wavelengths of the order of a millimetre, between the far-infrared and radio parts of the spectrum, so it cannot be observed with a normal optical telescope. Instead, we use radio telescopes equipped with receivers that can detect photons of this wavelength. Once we detect the carbon monoxide emission, we can measure the total amount of light and convert it to a carbon monoxide luminosity, assuming we have some estimate of how far away the emitting gas is. Since the carbon-monoxide-emitting gas is mixed in with the molecular hydrogen, such that the more hydrogen there is, the more carbon monoxide there is, we can convert the observed carbon monoxide luminosity into a molecular hydrogen mass. Thus we can tell how much gas is available for star formation in a GMC, or indeed in a whole galaxy.
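    That last conversion is usually done with a single multiplicative factor. As a sketch, assuming the commonly quoted Milky Way value of the CO-to-H2 conversion factor (a number not given in the article, and one that varies with metallicity and environment):

```python
# Conversion from CO(1-0) line luminosity to molecular hydrogen mass.
# alpha_CO ~ 4.3 solar masses per (K km/s pc^2) is a commonly quoted
# Milky Way value; treat it as illustrative, not universal.
ALPHA_CO = 4.3

def h2_mass_msun(L_co):
    """L_co in K km/s pc^2 -> molecular gas mass in solar masses."""
    return ALPHA_CO * L_co

# A Milky Way-like CO luminosity of ~1e9 K km/s pc^2 implies a few
# billion solar masses of molecular hydrogen.
mw_like = h2_mass_msun(1e9)
```

    The single biggest systematic in molecular gas surveys is the choice of this factor, which is why papers report CO luminosities alongside derived masses.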

    Traditionally this has been quite a challenging observation for galaxies much beyond our local volume: the technology has not been available to detect the faint carbon monoxide emission from very distant galaxies, apart from the most extreme, luminous galaxies like quasars. All this is changing right now with the development of a new telescope (or rather, an array of telescopes) called the Atacama Large Millimeter Array (ALMA).

    ESO/NRAO/NAOJ ALMA Array in Chile in the Atacama at Chajnantor plateau, at 5,000 metres

    ALMA is a collection of 66 radio antennas, most of them 12 metres in diameter, spread over a large area of the high Chilean Atacama on the Chajnantor plateau, at an altitude of about 5 kilometres. ALMA is an international project, with major contributions from the USA, Europe and Japan. The magical thing about an array of telescopes like ALMA is that they can be linked together electronically to act like one very large telescope, utilizing the light-collecting area of all the dishes and attaining very high spatial resolution. This technique is called interferometry. ALMA is incredibly sensitive in the sub-millimetre and millimetre bands and, once it reaches full operational power, will be able to detect the molecular gas in galaxies not dissimilar to the Milky Way, but seen close to the start of cosmic time. It’s an amazing leap forward in this area of astronomy and is ushering in a new era of exploration of galaxies that will yield fascinating discoveries for decades to come.
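    The payoff of interferometry can be seen from the diffraction limit, θ ≈ λ/B, where B is the separation between antennas rather than the diameter of a single dish. The 16 km baseline below is an assumed figure for an extended ALMA configuration, used purely for illustration:

```python
import math

def resolution_arcsec(wavelength_m, baseline_m):
    """Diffraction-limited angular resolution theta ~ lambda / B, in arcseconds."""
    theta_rad = wavelength_m / baseline_m
    return math.degrees(theta_rad) * 3600.0

# ALMA observing at 1.3 mm with a ~16 km maximum baseline (assumed):
alma = resolution_arcsec(1.3e-3, 16e3)          # ~0.017 arcsec
# A single 12 m dish at the same wavelength, for comparison:
single_dish = resolution_arcsec(1.3e-3, 12.0)   # ~22 arcsec
```

    Linking dishes kilometres apart thus sharpens the view by a factor of over a thousand compared with any one dish on its own.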

    You might have heard about the molecular gas, the building material of stars, but it’s important to also consider the other major gaseous component of galaxies: neutral atomic hydrogen, HI, which precedes the molecular phase. This HI gas comprises single atoms of hydrogen rather than molecules of hydrogen. Unlike molecular hydrogen, the atomic component is more diffuse and is not restricted to dense, compact clouds trapped in the disc. Atomic hydrogen is incredibly useful as a tracer of the outer edges of disc galaxies, and it is easy to spot because it is a strong emitter of radio waves. Not any old sort of radio waves, mind you: in the rest frame, the gas emits light at a frequency of precisely 1.4 gigahertz, or equivalently a wavelength of 21 centimetres. Like the precise carbon monoxide emission from GMCs discussed before, and like the ionized gas emission lines around star-forming regions we have talked about, the 21 cm emission from atomic hydrogen is also an emission line. This time the physics of the emission is slightly different again. I’ll explain it because it illustrates two important things: one, the ridiculous numbers involved in astrophysics, and two, another nice link between quantum mechanics and astrophysics.

    Hydrogen atoms are made from a proton and an electron. In quantum mechanics, these particles have a property called ‘spin’, which doesn’t really have an analogue in classical physics but is a bit like a quantum angular momentum. The spin of the proton and of the electron can each be thought of as oriented up or down, so in a bunch of hydrogen atoms there will be some where the proton and electron have their spins in the same direction (parallel) and some where the spins are in opposite directions (anti-parallel). It turns out that the quantum state in which the spins are parallel has a little more energy than the state in which they are anti-parallel. A quantum system is lazy, preferring the lowest possible energy state, so there is a mechanism by which an atom with parallel spins can have its electron flip so that its spin points in the opposite direction to the proton’s. This is called hyperfine splitting, because the difference in energy between the parallel and anti-parallel states is tiny compared with the overall ground-state energy of a hydrogen atom.

    The energy that the system loses in this transition has to go somewhere, so every spin flip releases a photon with a very specific energy, corresponding to the exact difference in energy between the parallel and anti-parallel states; this happens to correspond to electromagnetic radiation with a wavelength of precisely 21 centimetres. The corollary is that neutral atomic hydrogen can also absorb radiation with a wavelength of 21 centimetres, with the absorbed energy stored by aligning the spins of the electron and proton.

    Hyperfine splitting is called a ‘forbidden’ transition because, for any one atom, there is a very small chance of it occurring under normal conditions. In fact, the chance is so remote that if you observed a single hydrogen atom in the parallel state and waited for it to undergo the hyperfine transition, you would have to wait an average of 10 million years. If you observed 10 million atoms, you would expect to see just one photon released per year. That’s still not much of a signal. In astrophysical settings, however, we can exploit atomic crowdsourcing: there are so many neutral hydrogen atoms in an astrophysical cloud of gas that the radio emission is really quite bright, since at any one time a huge number of 21 cm photons are being emitted via the hyperfine transition. I find this amazing: a probabilistic quantum mechanical release of a photon from a single atom that simply doesn’t happen on Earth, but when put in an astrophysical theatre it gives rise to one of the most important observations we have of our own, and indeed other, galaxies.
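    The arithmetic behind “10 million years” and “one photon per year” can be checked with the Einstein A coefficient of the 21 cm transition, about 2.9 × 10⁻¹⁵ per second (the 10⁵⁷-atom cloud at the end is an invented example, roughly a thousand solar masses of hydrogen):

```python
# Spontaneous emission rate of the 21 cm hyperfine transition.
A_10 = 2.9e-15           # per second, Einstein A coefficient
SECONDS_PER_YEAR = 3.156e7

# Mean wait for one atom: 1/A ~ 11 million years.
mean_lifetime_yr = 1.0 / (A_10 * SECONDS_PER_YEAR)

# 10 million atoms in the parallel-spin state: roughly one photon a year.
photons_per_year = 1e7 * A_10 * SECONDS_PER_YEAR

# An invented example cloud of 1e57 hydrogen atoms (~1000 solar masses)
# emits an enormous number of 21 cm photons every second.
photons_per_second_cloud = 1e57 * A_10
```

    (In reality only about three-quarters of the atoms sit in the excited state at any time, but that factor doesn’t change the orders of magnitude.)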

    NASA/ESA Hubble Messier 83

    Again, like the carbon monoxide measurements, the detection of atomic hydrogen much beyond the local volume is difficult. Like all electromagnetic radiation emitted by a source moving away relative to us, the 21 cm line is subject to redshift, which stretches wavelengths longer and, equivalently, makes frequencies lower. The rest-frame frequency of 1.4 GHz is already quite low; make it lower still and it moves into a part of the radio-frequency range that is quite difficult to observe. For one thing, below 1 GHz we get into the radio bands used commercially for TV and radio, and for communication. This manmade radio frequency interference dwarfs astronomical signals, making astronomical observations near impossible at frequencies that coincide with these ranges. Radio telescopes that operate close to the frequencies used for communication must be put in locations remote from terrestrial radio sources in order to minimize RFI [Radio Frequency Interference].
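    Where the redshifted line lands is simple to compute, since the observed frequency is just the rest frequency divided by (1 + z). The example redshifts below are chosen for illustration:

```python
REST_FREQ_MHZ = 1420.406  # rest-frame frequency of the HI hyperfine line

def observed_freq_mhz(z):
    """Observed frequency of the 21 cm line from a source at redshift z."""
    return REST_FREQ_MHZ / (1.0 + z)

# Even a modest redshift pushes the line below 1 GHz, into crowded radio bands:
f_z05 = observed_freq_mhz(0.5)    # ~947 MHz
# Gas from near the epoch of the first galaxies (z ~ 10) lands far lower,
# within the band covered by low-frequency arrays:
f_z10 = observed_freq_mhz(10.0)   # ~129 MHz
```

    This is why pushing HI surveys to earlier cosmic times means building telescopes that work well below 1 GHz.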

    The Earth’s ionosphere also affects the passage of radio frequencies below 1 GHz, in a similar way to how optical light is bent and refracted by a glass of water, and correcting for this is hard. There are numerous other technical reasons why low-frequency radio astronomy is challenging, but many of these hurdles are now being overcome with the development of large antenna arrays coupled with extremely powerful computers that can handle the insane level of signal processing that must be performed in order to distil astronomical signals in the radio part of the spectrum.

    One recent example is LOFAR: the LOw-Frequency ARray for radio astronomy.

    ASTRON LOFAR Radio Antenna Bank, Netherlands

    LOFAR is an array of thousands of very cheap antennae (they actually just resemble black slabs, rather than the parabolic dishes you usually associate with a radio telescope), spread over a 100 kilometre region in the Netherlands, with stations up to 1,500 km away in various parts of Europe. The telescope is designed to detect radio frequencies of 10 to 250 MHz, suitable for exploring what has been dubbed the ‘low-frequency universe’. What makes LOFAR different from traditional telescopes is that the antennae are omnidirectional: they record the entire sky at once. To observe a particular spot in the sky, the signals from all the antennae are collected and the aperture is actually defined within software, using a supercomputer that cleverly processes the signal received by each antenna. Although it still requires antennae to do the receiving, LOFAR is basically a digital telescope that has only been made possible through modern computing, the power and sophistication of which will only improve over time.

    Like ALMA, LOFAR is a fantastically powerful and innovative telescope that is going to help revolutionize twenty-first-century astronomy. One of the goals of LOFAR is to detect the 21 cm line of neutral atomic hydrogen close to the epoch when the first stars and galaxies formed, where the HI emission has been redshifted to very low frequencies; this is the final frontier of galaxy evolution studies. LOFAR has a more practical application too: it is also being used as a sensor network for geophysics research and agriculture studies.

    See the full article here.



  • richardmitnick 3:58 pm on February 17, 2019 Permalink | Reply
    Tags: Asgardia, , , , , , , Medium, , See the full blog post for images of all of the spacecraft involved and the Heliopause and Heliosphere, Which Spacecraft Will Reach Interstellar Space Next?   

    From Asgardia via Medium: “Which Spacecraft Will Reach Interstellar Space Next?” 

    From Asgardia




    NASA’s Voyager 2 spacecraft reached interstellar space in December 2018, following in the footsteps of its sister, Voyager 1. Currently, only five spacecraft capable of making such a grand exit have been launched, including the Voyagers. The other three are Pioneers 10 and 11, and New Horizons. Which one will make a great escape next?

    NASA/Voyager 2

    NASA/Voyager 1

    NASA Pioneer 10

    NASA Pioneer 11

    NASA/New Horizons spacecraft

    Reaching interstellar space is a milestone often equated with leaving the solar system, by one specific definition. In 1990, the New York Times reported that Pioneer left the solar system when it flew past Neptune’s orbit. But that’s not the definition Voyager 2’s scientists used. Instead, by the more recent definition, crossing the sun’s heliopause, the theoretical boundary of its heliosphere, is the determining factor for entering interstellar space.

    The heliosphere is a bubble of charged particles created by, and flowing out from, the sun. Scientists use its edge to mark where interstellar space starts.

    NASA Heliosphere

    However, the heliosphere is tricky: it changes with the sun’s 22-year solar cycle, shrinks and grows with the solar wind, and stretches out behind the sun in the star’s direction of travel. It’s not something that can be measured easily from Earth, so NASA’s Interstellar Boundary Explorer (IBEX) mission is trying to define the edges of the bubble remotely.

    Observations from the Voyager probes indicate that they’ve pierced this bubble. However, since researchers think the Oort Cloud (an area of icy bodies estimated to stretch from 1,000 to 100,000 astronomical units, far beyond the heliopause) also surrounds the sun, the Voyager probes cannot be considered entirely outside the solar system. (One astronomical unit, or AU, is the distance between the Earth and the sun: 93 million miles, or 150 million kilometres.)

    Oort cloud Image by TypePad, http://goo.gl/NWlQz6

    Oort Cloud, The layout of the solar system, including the Oort Cloud, on a logarithmic scale. Credit: NASA, Universe Today

    When Voyager 1 and 2 crossed the heliopause, their still-working particle instruments revealed the historic events. The heliosphere functions as a shield, keeping out many of the higher-energy cosmic-ray particles generated by other stars.

    Magnetosphere of Earth, original bitmap from NASA. SVG rendering by Aaron Kaase

    By tracking both the low-energy particles found inside the solar system and the high-energy particles from outside it, the instruments could reveal a sudden surge of cosmic rays, alerting scientists that the spacecraft had left the solar system.

    The ever-changing nature of the heliosphere makes it impossible to tell when Pioneer 10 and 11 will enter interstellar space. It’s even possible that one of them may have already.

    As per NASA’s e-book Beyond Earth: A Chronicle of Deep Space Exploration, on Nov. 5, 2017, Pioneer 10 was approximately 118.824 AU from Earth, farther than any craft besides Voyager 1. Although Pioneer 11 and the Voyager twins are all heading in the direction of the sun’s apparent travel, Pioneer 10 is headed toward the trailing side. Research from 2017 showed that the tail of the heliosphere is around 220 AU from the sun. Since Pioneer 10 travels about 2.5 AU per year, it will take until roughly 2057, about 40 years from that epoch, to reach the changing boundary.

    Pioneer 11 was approximately 97.6 AU from Earth as of Nov. 5, 2017, according to the same e-book. Unlike its twin, the spacecraft is travelling in about the same direction as the Voyagers. Voyager 2 crossed into the interstellar medium at approximately 120 AU. Since Pioneer 11 is moving at 2.3 AU per year, it should reach interstellar space in about eight years, around 2027, assuming the boundary doesn’t change, which it probably will.
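    Both estimates are simple distance-over-speed arithmetic using the figures quoted above (epoch Nov. 5, 2017 ≈ 2017.8; the 220 AU and 120 AU boundary distances are the article’s assumptions, and the real heliopause moves):

```python
def crossing_year(epoch, dist_au, boundary_au, speed_au_per_yr):
    """Year a probe reaches a fixed boundary at constant speed (naive model)."""
    return epoch + (boundary_au - dist_au) / speed_au_per_yr

# Pioneer 10, heading toward the heliotail at ~220 AU: roughly 40 more years.
pioneer10 = crossing_year(2017.8, 118.8, 220.0, 2.5)   # ~2058
# Pioneer 11, aiming near Voyager 2's ~120 AU crossing: roughly a decade.
pioneer11 = crossing_year(2017.8, 97.6, 120.0, 2.3)    # ~2027
```

    The same one-liner applied to New Horizons (43 AU in 2019 at 3.1 AU per year) reproduces the roughly mid-2040s estimate in the next paragraph.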

    New Horizons was launched much later than the other four, and on Jan. 1, 2019, it made its most recent flyby of a solar system object. During this flyby, New Horizons was 43 AU from the sun. The mission’s principal investigator, Alan Stern, told Space.com that the spacecraft travels approximately 3.1 AU each year, or 31 AU per decade, so in another two decades or so it has a good chance of reaching interstellar space. If New Horizons crossed at the same boundary as Voyager 2 (it won’t, but consider it a baseline), it would make the trip in roughly 25 years, around 2043. But it’s possible the ISM line will move inward, allowing it to cross sooner.

    Although there won’t be a direct confirmation of crossing the heliopause from the Pioneer spacecraft, it’s possible that New Horizons will still be working then, and will give us a detailed study of interstellar space. Its particle detectors are much more potent than the ones on Voyager, Stern said. Moreover, New Horizons carries a dust detector that would offer insight into the region beyond the heliosphere.

    Whether the spacecraft will still be functioning remains to be seen. As per Stern, power is the limiting factor: New Horizons runs off decaying plutonium dioxide. Presently, the spacecraft has enough power to work until the late 2030s, said Stern, and it is currently in good working order.

    In the unlikely event that the ever-changing heliosphere remains static, Pioneer 11 will be the next to cross the heliopause, in 2027, followed by New Horizons in 2043. Pioneer 10, the first of the five spacecraft to launch, will be the last to leave the heliosphere, in 2057. Once again, this assumes the extremely unrealistic scenario of the heliopause remaining static for the next four decades.

    See the full article here.



  • richardmitnick 1:54 pm on February 15, 2019 Permalink | Reply
    Tags: , , , , Is J1420–0545 the largest galaxy ever discovered?, Medium,   

    From Medium: “Is J1420–0545 the largest galaxy ever discovered?” 

    From Medium

    Jan 27, 2019
    Graham Doskoch

    An unassuming galaxy hides a secret 15 million light-years long.

    If we could get high-quality optical images of J1420–0545, they might look like this photograph of its closer cousin, the giant radio galaxy 3C 236. This Hubble image only shows the galaxy’s core; radio telescopes reveal a much larger structure. Image credit: NASA/ESA.

    The Milky Way is about 50 to 60 kiloparsecs in diameter — a moderately sized spiral galaxy.

    Milky Way Galaxy Credits: NASA/JPL-Caltech/R. Hurt

    It’s a few orders of magnitude larger than the smallest galaxies, ultra-compact dwarfs like M60–UCD1 that have most of their stars clustered in a sphere less than 50 to 100 parsecs across. At the extreme opposite end of the spectrum lie supergiant ellipticals, more formally known as cD galaxies, whose diffuse halos can be up to 1–2 megaparsecs wide. To put this in perspective, the Andromeda galaxy is 0.78 Mpc away.

    Andromeda Galaxy Adam Evans

    Andromeda Nebula Clean by Rah2005 on DeviantArt

    This means that the 2-megaparsec-long stellar halo of IC 1101 — sometimes hailed as the largest known galaxy in the observable universe — could stretch from the Milky Way to Andromeda and then some.

    IC 1101, possibly the largest known galaxy in the universe. Its diffuse halo might not look like much, but it extends about one megaparsec in each direction. Image credit: NASA/ESA/Hubble Space Telescope

    Yet IC 1101 pales in comparison to another class of objects: radio galaxies. Radio galaxies are sources of strong synchrotron emission, radiation from particles being accelerated along curved paths by magnetic fields. Active galactic nuclei are the culprits, supermassive black holes accreting matter and sending out jets of energetic electrons. In most cases, these jets are hundreds of kiloparsecs in length, and some are even longer.

    This week’s blog post talks about J1420–0545, currently the largest-known radio galaxy. To be more specific, it has the largest radio “cocoon” ever observed. These cocoons are structures formed by shocked plasma from the jets, which expands outward into the intergalactic medium (IGM) and encases the jets and the lobes they form. The entire radio structure around J1420–0545 is enormous, stretching 4.69 Mpc — 15 million light-years — from end to end. Read on to find out just how extraordinary this galaxy is and how we know so much about its enormous cocoon, despite knowing so little about the host galaxy itself.

    Initial observations and slight surprise

    J1420–0545 was discovered, like many unusual galaxies, in a survey scanning the sky. In particular, it showed up as two large radio lobes spaced 17.4′ apart in the FIRST and NVSS surveys, which observe at 1.4 GHz using the Very Large Array (VLA).

    NRAO/Karl V Jansky Expanded Very Large Array, on the Plains of San Agustin fifty miles west of Socorro, NM, USA, at an elevation of 6970 ft (2124 m)

    Follow-up observations made at Effelsberg and the Giant Metrewave Radio Telescope (GMRT) (Machalski et al. 2008) then confirmed that there was a radio-loud core located midway between them, and that it corresponded to a previously-known dim galaxy.

    MPIFR/Effelsberg Radio Telescope, in the Ahrgebirge (part of the Eifel) in Bad Münstereifel, Germany

    Giant Metrewave Radio Telescope, an array of thirty telecopes, located near Pune in India

    Fig. 1, Machalski et al. 2008. VLA/Effelsberg observations of J1420–0545 showed that the main sources of 1.4 GHz emission were two large radio-loud lobes and a weaker central source. The galaxy itself is in the crosshairs in the image, a speck among specks.

    Redshift values for that galaxy were available (z~0.42–0.46), but had large uncertainties, so the team performed their own optical observations at the Mount Suhora Observatory. The spectra derived from these proved useful in two ways. First, the spectroscopy allowed the team to figure out what sort of galaxy they were looking at. Unlike the radio lobes, the optical emission from the center couldn’t be resolved, and it wasn’t possible to image the galaxy in the same way that we could take a picture of, say, our neighbor Andromeda. Fortunately, there was a solution: the 4000 Å discontinuity.

    Elliptical galaxies are typically old, having formed over time from mergers and collisions of smaller galaxies of varying types. Star formation levels are low, meaning that there are relatively few young, hot, blue stars compared to star-forming spiral and lenticular galaxies. Now, at wavelengths a bit shorter than 4000 Å, there is a drop-off in emission thanks to absorption by metals in stellar atmospheres. In most galaxies, hot stars, when present, fill in this gap. However, in elliptical galaxies there are few hot stars, and so there is a “discontinuity” in the spectrum around 4000 Å.

    The team found other spectral features corroborating the hypothesis that J1420–0545 is an elliptical galaxy. Now that they knew the sort of spectrum they expected to see, they could fit a model to it. Measurements of [O II] and Ca II lines yielded a new redshift of z~0.3067, placing the object closer than originally thought. Since the redshift (and therefore the distance) was known, as well as the angular size of the radio cocoon, its size could be estimated — assuming that the inclination angle was 90°, as suggested by the weak emission from the core. A simple calculation showed that the structure must span 4.69 Mpc from end to end.
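    That angular-to-linear conversion is easy to reproduce with a short numeric integration. A minimal sketch, assuming a flat ΛCDM cosmology (H0 = 70 km/s/Mpc, Ωm = 0.3) and a total angular extent of about 17.4 arcminutes for the cocoon (both are my assumptions, not values from the text):

    ```python
    import math

    # Assumed flat LambdaCDM cosmology (not stated in the post)
    H0 = 70.0            # km/s/Mpc
    Om, Ol = 0.3, 0.7
    c = 299792.458       # km/s

    def E(z):
        """Dimensionless Hubble parameter E(z)."""
        return math.sqrt(Om * (1 + z)**3 + Ol)

    def comoving_distance(z, n=10000):
        """Trapezoid-rule integral: D_C = (c/H0) * int_0^z dz'/E(z')."""
        dz = z / n
        total = 0.5 * (1 / E(0) + 1 / E(z))
        for i in range(1, n):
            total += 1 / E(i * dz)
        return (c / H0) * total * dz          # Mpc

    z = 0.3067                                # spectroscopic redshift of J1420-0545
    theta = math.radians(17.4 / 60)           # assumed angular size, ~17.4 arcmin
    d_A = comoving_distance(z) / (1 + z)      # angular-diameter distance, Mpc
    size = d_A * theta                        # projected linear size, Mpc
    print(f"D_A = {d_A:.0f} Mpc, size = {size:.2f} Mpc")
    ```

    With these assumed parameters the result lands within a few percent of the quoted 4.69 Mpc, which is about as close as a back-of-the-envelope cosmology can be expected to get.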

    How did it get so big?

    A radio structure of this size isn’t unprecedented. The giant radio galaxy 3C 236 had already been discovered, and found to have a radio cocoon 4.4 Mpc in length. However, what was surprising about J1420–0545 wasn’t just its size, but its age. Best-fit models of the jet and ambient medium found the structure to have an age of about 47 million years; 3C 236, on the other hand, is thought to have been active for 110 million years — more than double that. So why is J1420–0545, a relatively young radio galaxy, so large?
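    Those two numbers already imply a striking average growth rate. A quick back-of-the-envelope check (the unit conversions are mine; the 4.69 Mpc span and ~47 Myr age come from the text):

    ```python
    # Average lobe advance speed implied by the quoted numbers: a 4.69 Mpc
    # end-to-end structure (2.345 Mpc per jet arm) built in ~47 Myr.
    MPC_CM = 3.086e24        # centimeters per megaparsec
    YR_S   = 3.156e7         # seconds per year
    C      = 2.998e10        # speed of light, cm/s

    arm_length = 4.69 / 2 * MPC_CM       # one jet arm, cm
    age        = 47e6 * YR_S             # seconds
    v = arm_length / age                 # cm/s
    print(f"average advance speed ~ {v / C:.2f} c")
    ```

    The lobes must have advanced at a substantial fraction of the speed of light on average, far faster than typical radio galaxies manage, which is what makes the low-density explanation below so compelling.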

    Fig. 1, Carilli & Barthel 1995. A radio galaxy’s narrow jets are surrounded by a bow shock at the boundary with the intergalactic medium, as well as a radio cocoon.

    The answer turned out to be the intergalactic medium (IGM) itself, the hot plasma that fills the spaces between galaxies. The IGM density around J1420–0545 is lower than that around 3C 236 by about a factor of 20, meaning that the gas pressure opposing the jets’ expansion was correspondingly lower. The power of the AGN in J1420–0545 is also 50% greater than that of the AGN in 3C 236; this, combined with the substantially lower ambient IGM density, meant that the jets experienced much less resistance as they plowed into intergalactic space, and could therefore expand faster and farther in a shorter amount of time.

    This, of course, raises the question: Why is the local IGM so rarefied on so large a scale? Originally, the group thought that it was simply a naturally under-dense region of space, similar to a void — an underdensity dozens of megaparsecs across that formed shortly after the Big Bang. However, after additional VLA and GMRT measurements (Machalski et al. 2011), they considered an alternative possibility: that the jets were the result of more than one round of AGN activity.

    Double, double, radio bubbles

    The team suggested classifying J1420–0545 as a double-double radio galaxy (DDRG). DDRGs exhibit two pairs of lobes that are aligned to within a few degrees, indicating that the central AGN underwent a period of activity, shut down, and then restarted. The key piece of information from the old VLA and GMRT data that suggested that J1420–0545 might be an extreme DDRG was the shape of its jets. The narrow jets are characteristic of double-double radio galaxies undergoing their second period of activity.

    If the DDRG hypothesis is true, there should be a second faint outer radio cocoon surrounding the structure. After the first period of AGN activity, once the jets ceased, the cocoon should have quickly cooled through energy losses by synchrotron radiation and inverse-Compton scattering; with a suitable choice of parameters, it would be quite possible for it to be below the sensitivity of the VLA and GMRT. However, the team is hopeful that higher-sensitivity measurements in the future might be able to discover it.
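    The plausibility of that fading argument can be checked with a rough cooling-time estimate. A sketch assuming a relic field of ~1 microgauss and electrons radiating at 1.4 GHz, with inverse-Compton losses against the CMB at the galaxy's redshift (the field strength and emission frequency are assumed values, not from the text):

    ```python
    import math

    # Rough electron cooling timescale for a relic radio cocoon. CGS units.
    # Assumed inputs: B ~ 1 microgauss, observation at 1.4 GHz, z ~ 0.31.
    m_e, c_light = 9.109e-28, 2.998e10      # electron mass (g), c (cm/s)
    sigma_T      = 6.652e-25                # Thomson cross-section, cm^2
    B = 1e-6                                # magnetic field, gauss (assumed)
    z = 0.3067

    # Lorentz factor of electrons synchrotron-radiating at 1.4 GHz:
    # nu ~ gamma^2 * (2.8 MHz per gauss) * B
    gamma = math.sqrt(1.4e9 / (2.8e6 * B))

    U_B   = B**2 / (8 * math.pi)            # magnetic energy density, erg/cm^3
    U_cmb = 4.17e-13 * (1 + z)**4           # CMB energy density at z, erg/cm^3

    # Synchrotron + inverse-Compton cooling time for these electrons
    t_cool = 3 * m_e * c_light / (4 * sigma_T * gamma * (U_B + U_cmb))
    print(f"cooling time ~ {t_cool / 3.156e13:.0f} Myr")
    ```

    Under these assumptions the cooling time comes out to a few tens of millions of years, shorter than the time since the putative first episode ended, so an outer cocoon faded below the sensitivity of the VLA and GMRT is entirely reasonable.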

    In an interesting twist, it was suggested around the same time that 3C 236 is also a DDRG — albeit one in the very early stages of its second period of AGN activity (Tremblay et al. 2010 The Astrophysical Journal). A group observed four bright “knots” near its core that were visible in the far ultraviolet. They appear to be associated with the AGN’s dust disk, and are about ten million years old.

    Fig. 4, Tremblay et al. The star-forming knots in the core of 3C 236. The nucleus itself, hiding a supermassive black hole, is surrounded by dust lanes

    3C 236’s two large radio lobes appear to be relics, and it has a smaller (~2 kpc) compact structure that seems to be much more recent. This is the key bit of evidence suggesting that it, too, might be a DDRG: the compact radio structure appears to be the same age as the knots, meaning that whatever event caused one likely caused the other. For instance, if a new reservoir of gas became available, it could fuel both AGN activity and a new round of star formation. If this is true, and the compact source ends up producing jets, it’s possible that 3C 236 could end up the size of J1420–0545 — or larger.

    I’ll end this post by discussing the question I posed in the title: Does J1420–0545 deserve to be called the largest known galaxy? We don’t know quite how large its stellar halo is, but it’s assuredly much smaller than the giant radio cocoon that surrounds it. At the same time, the cocoon represents a very distinct boundary between the galaxy and the intergalactic medium, and the shocked plasma inside it should behave quite differently from plasma in the IGM. Ironically, unlike normal elliptical galaxies that have diffuse halos, we can place a finger on where this giant ends and where intergalactic space begins.

    One day, perhaps, we’ll find a giant radio galaxy even larger than J1420–0545, and the question will be moot. For now, though, I leave the question open — and I’ll wait for more VLA data. Clinching evidence of an outer cocoon could be around the corner. All we have to do is wait and see.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    About Medium

    Medium is an online publishing platform developed by Evan Williams and launched in August 2012. It is owned by A Medium Corporation. The platform is an example of social journalism, hosting a hybrid collection of amateur and professional people and publications, as well as exclusive blogs and publishers, and is regularly regarded as a blog host.

    Williams developed Medium as a way to publish writings and documents longer than Twitter’s 140-character (now 280-character) maximum.

  • richardmitnick 1:54 pm on February 13, 2019 Permalink | Reply
    Tags: Medium

    From Medium: “Here’s what I Zwicky 18 can tell us about the first stars in the universe” 

    From Medium

    Feb 10, 2019
    Graham Doskoch

    A blue dwarf galaxy only 59 million light-years away may harbor cousins of the mysterious Population III stars.

    A Hubble Space Telescope image of I Zwicky 18 shows gas illuminated by young blue stars. Image credit: NASA/ESA/A. Aloisi.

    The first stars in the universe were unlike any we can see today. Known to astronomers as Population III stars, they were large, massive, and composed almost entirely of hydrogen and helium. Population III stars were important because they enriched the interstellar medium with metals — all the elements heavier than hydrogen and helium — and participated in reionization, an event a few hundred million years after the Big Bang that made the universe more transparent.

    Finding Population III stars could confirm important parts of our theories of cosmology and stellar evolution. However, they should all be gone from the Milky Way by now, having exploded as supernovae long ago. We can look into the distant universe to search for them at high redshifts — and indeed, the James Webb Space Telescope will do just that — but detecting individual stars at that distance is beyond our current capabilities. So far, telescopes have turned up nothing.

    Recent observations of a nearby dwarf galaxy named I Zwicky 18, however, have given us some hope. Only 59 million light-years away, the galaxy seems to contain clouds of hydrogen that are nearly metal-free. What’s more, it’s undergoing a burst of star formation that might be producing stars very similar to Population III stars. If we could learn more about this galaxy, it could provide us with clues as to what the earliest stars and galaxies in the universe were like.

    Is the current wave of star formation the first?

    The initial HI observations of I Zwicky 18 used the radio interferometer at Westerbork, in the Netherlands. Image credit: Wikipedia user Onderwijsgek, under the Creative Commons Attribution-Share Alike 2.5 Netherlands license.

    One of the first studies to draw attention to the possibility that I Zwicky 18 is forming Population III-analog stars was by Lequeux & Viallefond 1980. They supplemented existing optical observations of HII regions — clouds of ionized gas that host young, hot, massive stars — with studies of HI regions via the 21-cm emission line, a key tool for mapping neutral hydrogen. They were trying to figure out whether the current round of massive star formation in the dwarf galaxy was its first, or whether it had been preceded by other events that polluted the hydrogen clouds with metals.

    Their radio observations with the Westerbork Synthesis Radio Telescope [above] found a total HI mass of about 70 million solar masses in six separate regions, three of which remained unresolved. They were unable to connect individual components to the maps of HII regions, but radial velocity measurements of the clouds found that the total mass of the galaxy was greater than this by about a factor of ten, suggesting that some other sort of mass was present.
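    The HI mass itself comes from the standard conversion between integrated 21-cm line flux and neutral hydrogen mass, M_HI [M☉] = 2.356×10⁵ D² ∫S dv, with D in Mpc and the flux integral in Jy km/s. As a rough consistency check, here is that relation run backwards, using a distance converted from the post's 59 million light-years; the resulting flux is my illustrative inversion, not a measured value:

    ```python
    # Standard 21-cm relation: M_HI [Msun] = 2.356e5 * D^2 [Mpc^2] * S [Jy km/s].
    # Distance is converted from the quoted 59 million light-years; the
    # implied flux is a back-of-the-envelope inversion for illustration.
    LY_PER_MPC = 3.262e6                  # light-years per megaparsec

    d_mpc = 59e6 / LY_PER_MPC             # ~18 Mpc
    m_hi  = 7e7                           # quoted total HI mass, Msun

    s_int = m_hi / (2.356e5 * d_mpc**2)   # implied integrated flux, Jy km/s
    print(f"D = {d_mpc:.1f} Mpc -> integrated flux ~ {s_int:.2f} Jy km/s")
    ```

    An integrated flux of order 1 Jy km/s is comfortably within reach of an interferometer like Westerbork, which is why the 21-cm line made such a good probe of this faint dwarf.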

    There were two possibilities: either the unseen mass was molecular hydrogen — which would not emit 21-cm radiation — or there was a dim population of older stars. The molecular hydrogen hypothesis couldn’t be ruled out, but the idea of an as-yet unseen group of stars was attractive. For one thing, the HI clouds appeared quite similar to the primordial clouds needed for galaxy formation. If these HI regions were actually primordial, then these dim stars could have supported them against gravitational collapse for billions of years.

    Figure 5, Lequeux & Viallefond 1980. A map of the HI regions in the galaxy shows that three (labeled 1, 2 and 5) are large enough to be resolved, while the others are point sources. Regions 1, 4 and 5 are the most massive.

    A picture began to emerge. Comparison of Lyman continuum emission with far-ultraviolet emission indicated that the burst of star formation must have begun a few million years ago, likely due to the collision of several hydrogen clouds. Before this, there would have been formation of dim red stars on a smaller scale, but not enough to enrich the galaxy more than the low observed oxygen abundances suggest. Therefore, the stars forming in I Zwicky 18 should indeed be very close to Population III stars.

    What sort of stars are we dealing with?

    Figure 1, Kehrig et al. 2015. A composite (hydrogen alpha + UV + r’-band) image of luminous knots in the dwarf galaxy that show intense helium emission.

    The idea caught on over the next few decades, and astronomers became interested in the nature of these young stars. One group (Kehrig et al. 2015, The Astrophysical Journal Letters) set out to determine what type of massive stars could best explain the He II λ4686 line, an indicator of hard radiation and hot stars ionizing material in HII star-forming regions. There were a couple of possible culprits:

    Early-type Wolf-Rayet stars, which are thought to be responsible for much of the He II λ4686 emission in star-forming galaxies.

    Shocks and x-ray binaries, which have also been found in extragalactic HII regions.

    Extremely metal-poor O stars, or — going one step further — entirely metal-free O stars, similar to Population III stars.

    The group ruled out the Wolf-Rayet stars quickly. Key signatures of metal-poor carbon Wolf-Rayet stars were clearly evident in the spectra, but the inferred number based on the C IV λ1550 line was too small to account for all of the helium emission. Similarly, the x-ray binary possibility was discarded because the sole x-ray binary found was too dim by a factor of 100.

    Figure 2, Kehrig et al. 2015. A region of high Hα and He II λ4686 emission shows little overlap with [OI] λ6300 emission and low [S II] contrast, ruling out the possibility of x-ray shocks.

    However, a group of a dozen or so metal-free stars of a hundred solar masses or more could successfully reproduce the observed He II λ4686 line. There are pockets of gas near a knot in the northwest edge of the galaxy that are devoid of metals and would provide a suitable environment for these stars to form, although there are likely chemically-enriched stars there, too. Certain models of extremely high-mass stars (~300 solar masses) offer an alternative to these metal-free stars, but in light of the previous observations, the metal-free models remain enticing.

    For the time being, our telescopes can’t detect Population III stars. Until they do, we can still learn a lot about the early universe by studying blue compact dwarf galaxies like I Zwicky 18. Low-redshift, metal-free analogs of the first stars in the universe are close enough for us to study today. One of the most metal-poor galaxies known is a good place to start.

    See the full article here.



  • richardmitnick 11:39 am on February 9, 2019 Permalink | Reply
    Tags: First observed in 2008 a binary system known as IGR J18245–2452 from its x-ray outbursts and PSR J1824–2452I for its radio emissions, Medium, The fastest millisecond pulsar PSR J1748–2446ad

    From Medium: “IGR J18245–2452: The most important neutron star you’ve never heard of” 

    From Medium

    Jan 21, 2019
    Graham Doskoch

    Astronomers have spent thirty years on the theory behind how millisecond pulsars form. Now we know they got it right.

    Neutron stars are known for their astonishing rotational speeds, with many spinning around their axes several times each second. The mechanism behind this is simple: when a fairly massive star several times the radius of the Sun collapses into a dense ball about twenty kilometers in diameter, conservation of angular momentum dictates that it must spin faster.
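    The scaling follows directly from conservation of angular momentum: treating the star as a uniform sphere before and after collapse, I ∝ M R², so P_final = P_initial (R_final/R_initial)². The pre-collapse radius and period below are purely illustrative guesses, not values from the post:

    ```python
    # Spin-up from angular momentum conservation during collapse.
    # I * omega is conserved; for a uniform sphere I ~ M R^2, so
    # P_final = P_initial * (R_final / R_initial)^2.
    # The initial radius and period are illustrative assumptions only.

    R_core = 5000.0    # km, size of the pre-collapse core (assumed)
    P_core = 5000.0    # s, initial rotation period, ~1.4 hours (assumed)
    R_ns   = 10.0      # km, neutron star radius

    P_ns = P_core * (R_ns / R_core) ** 2
    print(f"final spin period ~ {P_ns * 1e3:.1f} ms")
    ```

    Even with leisurely initial rotation, shrinking the radius by a factor of a few hundred compresses the period by a factor of tens of thousands, which is how ordinary pulsars are born spinning so quickly.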

    However, one class of neutron stars can’t be explained this way: millisecond pulsars. These exotic objects spin hundreds of times each second, with the fastest, PSR J1748–2446ad, rotating at over 700 Hertz! Since their discovery in the 1980s, astronomers have proposed a slightly different evolutionary path for them. After studying dozens of systems, they theorized that millisecond pulsars are very old — old enough that they’ve lost much of their original angular momentum to radiation. However, they’re also in binary systems, and under certain conditions, a companion star can transfer matter — and thus angular momentum — to the pulsar, spinning it back up again.

    A plot of the periods and magnetic fields of pulsars. Millisecond pulsars have extremely short periods, and comparatively weak magnetic fields. Image credit: Swinburne University of Technology

    During this period of accretion, the system should become an x-ray binary, featuring strong emission from the hot plasma in the neutron star’s accretion disk. There should also be periods where the neutron star behaves like an ordinary radio pulsar, emitting radio waves we can detect on Earth. If we could detect both types of radiation from a single system, it might be the clinching bit of evidence for the spin-up model of millisecond pulsar formation.

    In 2013, astronomers discovered just that: a binary system known as IGR J18245–2452 from its x-ray outbursts, and PSR J1824–2452I for its radio emissions. First observed in 2008, it had exhibited both radio pulsations and x-ray outbursts within a short period of time, clear evidence of the sort of transitional stage everyone had been looking for. This was it: a confirmation of the ideas behind thirty years of work on how these strange systems form.

    INTEGRAL observations of IGR J18245–2452 from February 2013 (top) and March/April 2013 (bottom). The system is only visible in x-rays in the second period. Image credit: ESA/INTEGRAL/IBIS/Jörn Wilms.


    The 2013 outburst

    Toward the end of March 2013, the INTEGRAL and Swift space telescopes detected x-rays from an energetic event coming from the core of the globular cluster M28 (Papitto et al. 2013).

    NASA Neil Gehrels Swift Observatory

    It appeared to be an outburst of some kind — judging by the Swift observations, likely a thermonuclear explosion. A number of scenarios can lead to x-ray transients, including novae and certain types of supernovae. Binary systems are often the culprits, where mass can be transferred from one star or compact object to another.

    Fig. 7, Papitto et al. Swift data from observations of an outburst show its characteristic exponentially decreasing cooling.

    One thermonuclear burst observed by Swift followed the time evolution profile expected for such a detonation: an increase in luminosity for 10 seconds, followed by an exponential decrease with a time constant of 38.9 seconds. This decrease represents the start of post-burst cooling. The other outbursts from the system should have had similar profiles characteristic of x-ray-producing thermonuclear explosions, and later observations of the system confirmed that this is indeed the case (De Falco et al. 2017, Astronomy and Astrophysics), albeit with slightly different rise times and decay constants.
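    With the fitted time constant, the cooling curve after the peak is fully specified. A small sketch (only the 38.9 s constant comes from the text; the two-minute evaluation point is arbitrary):

    ```python
    import math

    # Post-burst cooling model from the quoted Swift fit: after the ~10 s
    # rise, luminosity decays as L(t) = L_peak * exp(-t / tau).
    tau = 38.9           # decay time constant, seconds (from the fit)

    def fraction_remaining(t):
        """Fraction of peak luminosity a time t (s) after the peak."""
        return math.exp(-t / tau)

    half_life = tau * math.log(2)    # time to fade to half brightness
    print(f"half-life ~ {half_life:.1f} s")
    print(f"after 2 minutes: {fraction_remaining(120):.3f} of peak")
    ```

    The burst fades to half brightness in under half a minute and to a few percent of peak within two minutes, which is why catching these profiles requires continuous x-ray monitoring rather than occasional snapshots.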

    To determine the identity of the transient, now designated IGR J18245–2452, astronomers made follow-up observations using the XMM-Newton telescope.

    ESA/XMM Newton

    The nature of the outburst would determine how it evolved over time. For instance, supernovae (usually) decrease in brightness over the course of weeks or months. In this case, however, the x-rays were still detected — albeit a bit weaker. More surprisingly, the strength of the emission appeared to be modulated, varying with a period of 3.93 milliseconds.

    Such a short period seemed to indicate that a pulsar might be responsible. The team checked databases of known radio pulsars and found one that matched the x-ray source: PSR J1824–2452I, a millisecond pulsar in a binary system. Even after this radio counterpart had been found, however, two questions remained: Were these x-ray pulses new or a long-term process, and how did they relate to the radio emission?

    Diving into the archives

    A handy tool for observational astronomers is archival images. By looking at observations taken months, years or decades before an event, scientists can — if they’re lucky — peek into the past to see what an object of interest looked like long before it became interesting. Archival data is often of use for teams studying supernovae, as even a previously uninteresting or unnoticed star can tell the story of a supernova’s progenitor.

    Fig. 3, Papitto et al. Chandra images from 2008, showing the system in quiescent (top) and active (bottom) states.

    NASA/Chandra X-ray Telescope

    In this case, Papitto et al. looked at Chandra observations from 2008, comparing them with new data from April 2013. They found x-ray variability occurring shortly after a period of radio activity by the pulsar, indicating that the system had switched off its radio emissions and started emitting x-rays. This was extremely interesting, because new observations with three sensitive radio telescopes — Green Bank, Parkes, and Westerbork — indicated that the pulsar was no longer active in radio waves.

    Green Bank Radio Telescope, West Virginia, USA, now the center piece of the GBO, Green Bank Observatory, being cut loose by the NSF

    CSIRO/Parkes Observatory, located 20 kilometres north of the town of Parkes, New South Wales, Australia

    Westerbork Synthesis Radio Telescope, an aperture synthesis interferometer near World War II Nazi detention and transit camp Westerbork, north of the village of Westerbork, Midden-Drenthe, in the northeastern Netherlands

    It was possible that the pulsar had been eclipsed while emission was ongoing, and this may indeed have happened at some points, but eclipses were not likely to be the main factor behind the apparent quiescence.

    A few weeks later, however, the exact opposite happened: the pulsar exited its quiescent radio state and was again picked up by the three radio telescopes. In short, over a period of months, it had oscillated between behaving like an x-ray binary and a normal millisecond pulsar. Finally, x-ray observations had conclusively shown that this sort of bizarre transitional state was possible!

    The mechanism

    IGR J18245–2452 spends the vast majority of its time in what is known as a “quiescent” state, during which there is comparatively little x-ray activity. The pulsar’s magnetosphere exerts a pressure on the infalling gas, forming a disk at a suitable distance from the surface. Eventually, however, there is enough buildup that an x-ray outburst occurs, lasting for a few months. The outburst decreases the mass accretion rate, and the magnetosphere pushes away much of the transferred gas, allowing radio pulsations to take place once more.
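    This switching behavior is usually framed as a competition between two radii: the magnetospheric radius, where magnetic pressure halts the infalling gas, and the corotation radius, where the Keplerian orbital period equals the spin period. When the first lies inside the second, accretion (and x-ray emission) can proceed; when it moves outside, the gas is flung away and radio pulsations can resume. A sketch with assumed values for the field strength and accretion rate (only the 3.93 ms spin period comes from the observations above):

    ```python
    import math

    # Magnetospheric vs. corotation radius for a transitional millisecond
    # pulsar. CGS units. The field strength and accretion rate are assumed
    # typical values, not measurements of IGR J18245-2452.
    G    = 6.674e-8
    M    = 1.4 * 1.989e33                 # neutron star mass, g
    R    = 1e6                            # neutron star radius, cm
    B    = 1e8                            # surface field, gauss (assumed)
    mdot = 1e-10 * 1.989e33 / 3.156e7     # 1e-10 Msun/yr in g/s (assumed)
    P    = 3.93e-3                        # spin period, s

    mu = B * R**3                         # magnetic dipole moment, G cm^3

    # Magnetospheric (Alfven) radius: magnetic pressure halts infall here
    r_m = (mu**4 / (2 * G * M * mdot**2)) ** (1 / 7)
    # Corotation radius: Keplerian period equals the spin period here
    r_c = (G * M * P**2 / (4 * math.pi**2)) ** (1 / 3)

    print(f"r_m ~ {r_m / 1e5:.0f} km, r_c ~ {r_c / 1e5:.0f} km")
    ```

    With these assumptions the two radii come out to a few tens of kilometers and nearly equal, so a modest change in the accretion rate is enough to tip the system from one state to the other, matching the rapid switching seen in 2013.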

    Fig. 2, De Falco et al. Over a period of a few weeks, IGR J18245–2452 underwent a number of individual x-ray outbursts, themselves indicative of a brief period of x-ray activity and radio silence.

    It’s expected that the pulsar will eventually be spun up until its rotational period is on the order of a millisecond or so. It will then cease x-ray emissions and be visible mainly through radio pulses. All of this, however, is far in the future; during our lifetimes, IGR J18245–2452 will stay in its current transitional state, halfway between an x-ray binary and a millisecond pulsar.

    Women in STEM – Dame Susan Jocelyn Bell Burnell

    Dame Susan Jocelyn Bell Burnell, discovered pulsars with radio astronomy. Jocelyn Bell at the Mullard Radio Astronomy Observatory, Cambridge University, taken for the Daily Herald newspaper in 1968. Denied the Nobel.

    See the full article here.



  • richardmitnick 1:05 pm on January 5, 2019 Permalink | Reply
    Tags: , Blogs publishing falsehoods are bad enough but the rise of social media made the situation even worse, In 2014 only 14 percent of those surveyed showed “a great deal of confidence” in academia, It is incredibly rare for the scientific consensus as a whole to be wrong, Measles, Medium, Parents in the United States are fooled by the false claim that vaccines cause autism, Research shows that people lack the skills for differentiating misinformation from true information, Scientists get rewarded in money and reputation for finding fault with statements about reality made by other scientists, The Internet Is for…Misinformation, The lack of curation means thinking errors are causing us to choose information that fits our intuitions and preferences as opposed to the facts, The large gaps between what scientists and the public believe about issues such as climate change evolution GMOs and vaccination exemplify the problems caused by misinformation and lack of trust in sc, The Pro-Truth Pledge combines the struggle against misinformation with science advocacy, The rise of the internet and more recently social media is key to explaining the declining public confidence in expert opinion, These problems result from the train wreck of human thought processes meeting the internet, This crumbling of trust in science and academia forms part of a broader pattern, This greater likelihood of experts being correct does not at all mean we should always defer to experts, We can uplift the role of science in our society. The March for Science movement is a great example of this effort, We’re In an Epidemic of Mistrust in Science   

    From Medium: “We’re In an Epidemic of Mistrust in Science” 

    From Medium

    Jun 27, 2018
    Gleb Tsipursky

    A family physician prepares a measles vaccine during a consultation in Bucharest, Romania on April 16, 2018. Photo by Daniel Mihailescu/AFP via Getty

    Dozens of infants and children in Romania died recently in a major measles outbreak, as a result of prominent celebrities campaigning against vaccination. This trend parallels that of Europe as a whole, which suffered a 400 percent increase in measles cases from 2016 to 2017. Unvaccinated Americans traveling to the World Cup may well bring back the disease to the United States.

    Of course, we don’t need European travel to suffer from measles. Kansas just experienced its worst measles outbreak in decades. Children and adults in a few unvaccinated families were key to this widespread outbreak.

    Just like in Romania, parents in the United States are fooled by the false claim that vaccines cause autism. This belief has spread widely across the country and leads to a host of problems.

    Measles was practically eliminated in the United States by 2000. In recent years, however, outbreaks of measles have been on the rise, driven by parents failing to vaccinate their children in a number of communities. We should be especially concerned because our president has frequently expressed the false view that vaccines cause autism, and his administration has pushed against funding “science-based” policies at the Centers for Disease Control and Prevention.

    These illnesses and deaths are among many terrible consequences of the crisis of trust suffered by our institutions in recent years. While headlines focus on declining trust in the media and government, science and academia are not immune to this crisis of confidence, and the results can be deadly.

    Consider that in 2006, 41 percent of respondents in a nationwide poll expressed “a lot of confidence” in higher education. Fewer than 10 years later, in 2014, only 14 percent of those surveyed showed “a great deal of confidence” in academia.

    What about science as distinct from academia? Polling shows that the number of people who believe science has “made life more difficult” increased by 50 percent from 2009 to 2015. According to a 2017 survey, only 35 percent of respondents have “a lot” of trust in scientists; the number of people who trust scientists “not at all” increased by over 50 percent from a similar poll conducted in December 2013.

    This crumbling of trust in science and academia forms part of a broader pattern, what Tom Nichols called the death of expertise in his 2017 book of the same name. Growing numbers of people claim their personal opinions hold equal weight to the opinions of experts.

    Should We Actually Trust Scientific Experts?

    While we can all agree that we do not want people to get sick, what is the underlying basis for why the opinions of experts — including scientists — deserve more trust than those of the average person in evaluating the truth of reality?

    The term “expert” refers to someone who has extensive familiarity with a specific area, as shown by commonly recognized credentials, such as a certification, an academic degree, publication of a book, years of experience in a field, or some other way that a reasonable person may recognize an “expert.” Experts are able to draw on their substantial body of knowledge and experience to provide an opinion, often expressed as “expert analysis.”

    That doesn’t mean an expert opinion will always be right — it’s simply much more likely to be right than the opinion of a nonexpert. The underlying principle here is probabilistic thinking, our ability to predict the truth of current and future reality based on limited information. Thus, a scientist studying autism would be much more likely to predict accurately the consequences of vaccinations than someone who has spent 10 hours Googling “vaccines and autism.”

    This greater likelihood of experts being correct does not at all mean we should always defer to experts. First, research shows that experts do best in evaluating reality in environments that are relatively stable over time and thus predictable, and when the experts have a chance to learn about the predictable aspects of this environment. Second, other research suggests that ideological biases can have a strongly negative impact on the ability of experts to make accurate evaluations. Third, material motivations can sway experts to conduct an analysis favorable to their financial sponsor.

    However, while individual scientists may make mistakes, it is incredibly rare for the scientific consensus as a whole to be wrong. Scientists get rewarded in money and reputation for finding fault with statements about reality made by other scientists. Thus, when the large majority of them agree on something — when there is a scientific consensus — it is a clear indicator that whatever they agree on accurately reflects reality.

    The Internet Is for…Misinformation

    The rise of the internet and, more recently, social media, is key to explaining the declining public confidence in expert opinion.

    Before the internet, the information accessible to the general public about any given topic usually came from experts. For instance, scientific experts on autism were invited to talk on this topic on mainstream media, large publishers published books by the same experts, and they wrote encyclopedia articles on the topic.

    The internet has enabled anyone to be a publisher of content, connecting people around the world with any and all sources of information. On the one hand, this freedom is empowering and liberating, with Wikipedia being a great example of a highly curated and accurate source on the vast majority of subjects. On the other hand, anyone can publish a blog post making false claims about links between vaccines and autism. If they are skilled at search engine optimization or have money to invest in advertising, they can get their message spread widely.

    Unfortunately, research shows that people lack the skills for differentiating misinformation from true information. This lack of skills has clear real-world effects: Just consider that U.S. adults believed 75 percent of fake news stories about the 2016 U.S. presidential election. The more often someone sees a piece of misinformation, the more likely they are to believe it.

    Today, the lack of curation means thinking errors are causing us to choose information that fits our intuitions and preferences, as opposed to the facts.

    Blogs publishing falsehoods are bad enough, but the rise of social media made the situation even worse. Most people reshare news stories without reading the actual article, judging the quality of the story by the headline and image alone. No wonder research indicates that misinformation spreads as much as 10 times faster and further on social media than true information. After all, the creator of a fake news item is free to devise the most appealing headline and image, while credible sources of information have to stick to factual headlines and images.

    These problems result from the train wreck of human thought processes meeting the internet. We all suffer from a series of thinking errors, such as confirmation bias, our tendency to look for and interpret information in ways that conform to our beliefs.

    Before the internet, we got our information from sources like mainstream media and encyclopedias, which curated the information for us to ensure it came from experts, minimizing the problem of confirmation bias. Today, the lack of curation means thinking errors are causing us to choose information that fits our intuitions and preferences, as opposed to the facts. Moreover, some unscrupulous foreign actors — such as the Russian government — and domestic politicians use misinformation as a tool to influence public discourse and public policy.

    The large gaps between what scientists and the public believe about issues such as climate change, evolution, GMOs, and vaccination exemplify the problems caused by misinformation and lack of trust in science. Such mistrust results in great harm to our society, from outbreaks of preventable diseases to highly damaging public policies.

    What Can We Do?

    Fortunately, there are proactive steps we can take to address the crisis of trust in science and academia.

    For example, we can uplift the role of science in our society. The March for Science movement is a great example of this effort. First held on Earth Day in 2017 and repeated in 2018, this effort involves people rallying in the streets to celebrate science and push for evidence-based policies. Another example is the Scholars Strategy Network, an effort to support scholars in popularizing their research for a broad audience and connecting scholars to policymakers.

    We can also fight the scourge of misinformation. Many world governments are taking steps to combat falsehoods. While the U.S. federal government has dropped the ball on this problem, a number of states have passed bipartisan efforts promoting media literacy. Likewise, many nongovernmental groups are pursuing a variety of efforts to fight misinformation.

    The Pro-Truth Pledge combines the struggle against misinformation with science advocacy. Founded by a group of behavioral science experts (including myself) and concerned citizens, the pledge calls on public figures, organizations, and private citizens to commit to 12 behaviors listed on the pledge website that research in behavioral science shows correlate with truthfulness. Signers are held accountable through a crowdsourced reporting and evaluation mechanism while receiving reputational rewards for their commitment. The scientific consensus serves as a key measure of credibility, and the pledge encourages pledge-takers to recognize the opinions of experts as more likely to be true when the facts are disputed. More than 500 politicians have taken the pledge, including state legislators Eric Nelson (PA) and Ogden Driskell (WY) and Congress members Beto O’Rourke (TX) and Marcia Fudge (OH).

    Two research studies at Ohio State University demonstrated the effectiveness of the pledge in changing the behavior of pledge-takers to be more truthful, with strong statistical significance. Thus, taking the pledge yourself, and encouraging people you know and your elected representatives to do the same, is an easy way both to fight misinformation and to promote science.


    I have a dream that, one day, children will not be getting sick with measles because their parents put their trust in a random blogger instead of extensive scientific studies. I have a dream that schools will be teaching media literacy, and people will know how to evaluate the firehose of information coming their way. I have a dream that we will all know that we suffer from thinking errors and will watch out for confirmation bias and other problems. I have a dream that the quickly growing distrust of experts and science will seem like a bad dream. I have a dream that our grandchildren will find it hard to believe our present reality when we tell them stories about the bad old days.

    To live these dreams requires all of us who care about truth and science to act now, before we fall further down the slippery slope. Our information ecosystem and credibility mechanisms are broken. Only a third of Americans trust scientists, and most people can’t tell the difference between truth and falsehood online. The lack of trust in science — and the excessive trust in persuasive purveyors of misinformation — is perhaps the biggest threat to our society right now. If we don’t turn back from the brink, our future will not be a dream: It will be a nightmare.

    See the full article here.


    Please help promote STEM in your local schools.

    STEM Education Coalition

    About Medium

    Medium is an online publishing platform developed by Evan Williams and launched in August 2012. It is owned by A Medium Corporation. The platform is an example of social journalism, hosting a hybrid mix of amateur and professional writers and publications, as well as blogs and publishers exclusive to Medium, and is often regarded as a blog host.

    Williams developed Medium as a way to publish writings and documents longer than Twitter’s 140-character (now 280-character) maximum.
