From “McDonald Observatory: Searching for Dark Energy”



McDonald Observatory is a Texas-based astronomical site that has made significant contributions in research and education for more than 80 years.

Administered by the University of Texas at Austin, the McDonald Observatory has several telescopes perched at an altitude of 6791 feet (2070 meters) above sea level on Mount Locke and Mount Fowlkes, part of the Davis Mountains in western Texas, about 450 miles (724 kilometers) west of Austin. McDonald “enjoys the darkest night skies of any professional observatory in the continental United States,” according to a news release issued for the observatory’s 80th anniversary.

McDonald is home to the Hobby-Eberly Telescope, one of the world’s largest optical telescopes, with a 36-foot-wide (11 meter) mirror.

A visitor center offers daytime tours of the grounds and big telescopes, daytime solar viewing, a twilight program in an outdoor amphitheater, and nighttime star parties with telescope viewing.

The observatory is also known for its daily StarDate program, which runs on more than 300 radio stations across the country.

The gift of an observatory

The regents of the University of Texas were surprised when they opened the will of William Johnson McDonald, a banker from Paris, Texas, who died in 1926. He had left the bulk of his fortune to the university for the purpose of building an astronomical observatory. After court proceedings concluded, about $850,000 (the equivalent of $11 million today) was available, according to the Texas State Historical Association.

“McDonald is said to have thought that an observatory would improve weather forecasting and therefore help farmers to plan their work,” the association said.

But there were two major challenges to overcome before McDonald’s wish could become reality. First, the money was enough to build an observatory but not enough to run it, so the university would need to acquire more funds. Second, at that time, the University of Texas had no astronomers on its faculty, so it needed to recruit a team of space experts.

Fortunately, the University of Chicago had astronomers who were looking for another telescope to use in addition to their university’s refracting telescope at Yerkes Observatory. So, the presidents of the two universities made a deal: The University of Texas would build the new observatory, and the University of Chicago would provide experts to operate it.

McDonald’s first major telescope — later named the Otto Struve Telescope after the observatory’s first director — was finished in 1939 and is still in use today.

The Otto Struve Telescope at McDonald Observatory, altitude 2,026 m (6,647 ft)

Its main mirror is 82 inches (2.08 meters) across. One of the main purposes of the Struve Telescope was to analyze the exact colors of light coming from stars and other celestial bodies, to determine their chemical composition, temperature, and other properties. To do this, the telescope was designed to send light through a series of mirrors into a spectrograph — an instrument that separates light into its component colors — in another room. This required the telescope to be mounted on a strange-looking arrangement of axes and counterweights, designed and built by the Warner & Swasey company. “With its heavy steel mounting and black, half-open framework, the Struve is not just a scientific instrument, but it is a work of art,” the Observatory’s website says.

The Struve Telescope helped astronomers gather the first evidence of an atmosphere on Saturn’s moon Titan. Gerard Kuiper, assisted by Struve himself, found the clues while examining our solar system’s largest moons in 1944. Kuiper published his spectroscopic study in The Astrophysical Journal.

In 1956, a reflecting telescope with a 36-inch (0.9 m) mirror was added to the McDonald site at the request of the University of Chicago.

The McDonald Observatory 0.9-meter telescope, altitude 2,026 m (6,647 ft)

Housed in a dome made from locally quarried rock and leftover metal from the Struve Telescope dome, this instrument was designed primarily to measure changes in the brightness of stars. It is now obsolete for professional research, but is regularly used for special public-viewing nights.

The Harlan J. Smith Telescope, with a main mirror 107 inches (2.7 m) across, was built by NASA to examine other planets in preparation for spacecraft missions. It was the world’s third-largest telescope when it saw first light in 1968.

The University of Texas at Austin McDonald Observatory Harlan J. Smith 2.7-meter Telescope, altitude 2,026 m (6,647 ft)

From 1969 to 1985, the Smith telescope was also used to aim laser light at special reflecting mirrors left on the moon by Apollo astronauts. Timing how long the reflected light takes to return to Earth lets astronomers measure the moon’s distance to an accuracy of 1.2 inches (3 centimeters). These measurements, in turn, contribute to our understanding of Earth’s rotation rate, the moon’s composition, long-term changes in the moon’s orbit, and the behavior of gravity itself, including small effects predicted by Albert Einstein’s General Theory of Relativity.
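The ranging arithmetic is simple enough to sketch: the one-way distance is half the round-trip light-travel time multiplied by the speed of light. A minimal example, where the ~2.56 s round-trip time is an illustrative figure (the true value varies continuously as the moon moves along its orbit):

```python
# Sketch of the lunar laser-ranging arithmetic described above.
# The round-trip time used here is illustrative, not a measured value.

C = 299_792_458.0  # speed of light in vacuum, m/s

def lunar_distance_m(round_trip_s: float) -> float:
    """One-way Earth-moon distance from a laser pulse's round-trip time."""
    return C * round_trip_s / 2.0

def timing_precision_s(distance_precision_m: float) -> float:
    """Round-trip timing precision needed for a given distance precision."""
    return 2.0 * distance_precision_m / C

# A ~2.56 s round trip corresponds to roughly 384,000 km.
distance_km = lunar_distance_m(2.5639) / 1000.0

# The 3 cm accuracy quoted above demands sub-nanosecond timing.
needed_s = timing_precision_s(0.03)
```

The second function shows why the technique is hard: a 3 cm distance accuracy requires resolving the pulse's arrival to a fraction of a nanosecond.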

When the Smith telescope was being built, a circular hole was cut in the center of its main quartz mirror to allow light to pass to instruments at the back of the telescope. The cutout quartz disk was made into a new mirror 30 inches (0.8 m) across for another telescope. This instrument, built nearby in 1970 and known simply as the 0.8 meter telescope, has the advantage of an unusually wide field of view.

McDonald’s biggest telescope

Today, the giant at McDonald is the Hobby-Eberly Telescope (HET), on neighboring Mount Fowlkes, almost a mile (1.3 km) from the cluster of original domes on Mount Locke.

The University of Texas at Austin McDonald Observatory Hobby-Eberly Telescope, altitude 2,026 m (6,647 ft)

The HET is a joint project of the University of Texas at Austin, Pennsylvania State University, and two German universities: Ludwig-Maximilians-Universität München, and Georg-August-Universität Göttingen.

Dedicated in 1997, the HET makes a striking technological contrast with the classic Struve instrument. HET’s main mirror is not one piece of glass or quartz, but an array of 91 individually controlled hexagonal segments making a honeycomb-like reflecting area that’s 36 feet (11 m) wide. A mushroom-shaped tower next to the main dome contains lasers that are aimed at the mirror segments to test and adjust their alignment.

Another remarkable feature of the HET is that the telescope can rotate to point toward any compass direction, but it cannot tilt up or down to point at different heights in the sky. Instead, the main mirror is supported at a fixed angle pointing 55 degrees above the horizon. A precisely controlled tracking support moves light-gathering instruments to various locations above the main mirror, which has the effect of aiming at slightly different parts of the sky. This unique, simplified design allowed the HET to be built for a fraction of the cost of a conventional telescope of its size, while still allowing access to 70% of the sky visible from its location.

The HET was designed primarily for spectroscopy, which is a key method in current research areas such as measuring motions of space objects, determining distances to galaxies and discovering the history of the universe since the Big Bang.

Habitable planets and dark energy

In 2017, the HET was rededicated after a $40 million upgrade. The tracking system was replaced with a new unit that uses more of the main mirror and has a wider field of view, and new sensing instruments were created.

One of the new instruments is the Habitable Zone Planet Finder (HPF), built in conjunction with the National Institute of Standards and Technology.

Habitable Zone Planet Finder

The HPF is optimized to study infrared light from nearby, cool red dwarf stars, according to an announcement from the observatory. These stars have long lifetimes and could provide steady energy for planets orbiting close to them. The HPF allows precise measurements of a star’s radial velocity, detected as a subtle shift in the color of the star’s spectrum as the star is tugged by an orbiting planet, which is critical information in the discovery and confirmation of new planets.
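The radial-velocity technique comes down to the non-relativistic Doppler relation Δλ/λ = v/c. A hedged sketch of that relation follows; the 1,000 nm line and 1 m/s wobble are illustrative numbers chosen for round arithmetic, not HPF specifications:

```python
# Sketch of the radial-velocity (Doppler) relation described above:
# delta_lambda / lambda_rest = v / c, valid for small velocities.
# The 1,000 nm line and 1 m/s wobble are illustrative assumptions.

C = 299_792_458.0  # speed of light, m/s

def wavelength_shift_nm(v_ms: float, rest_lambda_nm: float) -> float:
    """Wavelength shift produced by a line-of-sight velocity v_ms."""
    return rest_lambda_nm * v_ms / C

def radial_velocity_ms(delta_lambda_nm: float, rest_lambda_nm: float) -> float:
    """Invert an observed shift back into a line-of-sight velocity."""
    return C * delta_lambda_nm / rest_lambda_nm

# A 1 m/s stellar wobble moves a 1,000 nm infrared line by only a few
# millionths of a nanometer, which is why the instrument must be so stable.
shift_nm = wavelength_shift_nm(1.0, 1000.0)
```

The tiny size of the shift for planet-induced wobbles is the whole engineering challenge: the spectrograph must stay stable to better than a millionth of its own resolution element.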

Advancing another frontier is the Hobby-Eberly Telescope Dark Energy Experiment (HETDEX). Billed as the first major experiment searching for the mysterious force pushing the universe’s expansion, the HETDEX “will tell us what makes up almost three-quarters of all the matter and energy in the universe. It will tell us if the laws of gravity are correct, and reveal new details about the Big Bang in which the universe was born,” the HETDEX project website says.

VIRUS-P undergoes testing at the Harlan J. Smith Telescope at McDonald Observatory. HETDEX will consist of 145 identical VIRUS units attached to the Hobby-Eberly Telescope. [Martin Harris/McDonald Observatory]

A key piece of technology for the dark-energy search is the Visible Integral-Field Replicable Unit Spectrographs, or VIRUS, a set of 156 spectrographs mounted alongside the telescope and receiving light via 35,000 optical fibers coming from the telescope. With this package of identical instruments sharing the telescope, the HET can observe several hundred galaxies at once, measuring how their light is affected by their own motions and the expansion of the universe.
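The core spectrograph measurement reduces to reading a redshift off a shifted emission line: z = λ_observed/λ_rest − 1. A sketch follows; HETDEX targets Lyman-alpha emitting galaxies, but the 350 to 550 nm VIRUS wavelength coverage used here is an assumption for illustration:

```python
# Sketch of inferring a galaxy's redshift from a shifted emission line,
# z = lambda_observed / lambda_rest - 1. The 350-550 nm spectrograph
# window assumed below is illustrative, not a quoted VIRUS specification.

LYMAN_ALPHA_NM = 121.567  # rest-frame Lyman-alpha emission line

def redshift(observed_nm: float, rest_nm: float = LYMAN_ALPHA_NM) -> float:
    """Cosmological redshift inferred from an observed line position."""
    return observed_nm / rest_nm - 1.0

# With the assumed 350-550 nm window, Lyman-alpha galaxies would be
# visible across a redshift range of roughly 1.9 to 3.5.
z_near = redshift(350.0)
z_far = redshift(550.0)
```

Each of the thousands of fibers yields one such spectrum per exposure, which is how the instrument turns a single pointing into redshifts for hundreds of galaxies at once.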

The HETDEX will spend about three years observing at least 1 million galaxies to produce a large map of the universe’s expansion rate during different time periods. Changes in how quickly the universe grew from one era to the next could reveal how dark energy itself has evolved.

Keeping the skies dark

In 2019, the McDonald Observatory received a grant from the Apache Corp., an oil and gas exploration and production company, to promote awareness of the value of dark skies as a natural resource and as an aid to astronomical research. The gift will fund education programs, outreach events, and a new exhibit at the observatory’s visitors center. According to the observatory’s announcement, Apache has served as a model for other businesses in west Texas by adjusting and shielding the lights at its drilling sites and related facilities.

See the full article here.


Please help promote STEM in your local schools.

Stem Education Coalition

#astronomy, #astrophysics, #basic-research, #cosmology, #mcdonald-observatory-searching-for-dark-energy, #space-com, #u-texas-at-austin

From European Space Agency -United space in Europe: “French earthquake fault mapped”


This week, southeast France was hit by a magnitude 5 earthquake with tremors felt between Lyon and Montélimar. The Copernicus Sentinel-1 radar mission has been used to map the way the ground shifted as a result of the quake.

Earthquakes are unusual in this part of France, but on 11 November at noon (local time) part of the Auvergne-Rhône-Alpes region was rocked by a quake that damaged buildings and forced residents to evacuate.

Scientists are turning to satellite-based radar observations to help understand the nature of the seismic fault and map its location.

Earthquake ground shift


Displacement of the ground in the satellite line-of-sight direction. This product is derived from the Copernicus Sentinel-1 mission using the acquisitions of 6 and 12 November 2019. The interferogram was generated with the GAMMA processing chain.

When imagery acquired before and after a quake is combined, ground changes that occurred between the two acquisition dates appear as rainbow-coloured interference patterns in the combined image, known as an ‘interferogram’, which allows scientists to quantify ground movement.
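The fringe-to-displacement conversion can be sketched directly: each full colour cycle (2π of interferometric phase) corresponds to half the radar wavelength of motion along the satellite's line of sight. Assuming Sentinel-1's C-band wavelength of about 5.55 cm:

```python
import math

# Sketch of converting interferometric phase to ground displacement.
# Each full fringe (2*pi of phase) corresponds to half a wavelength of
# motion along the line of sight, because the radar path to the ground
# and back changes by twice the displacement.
# Sentinel-1 operates in C band; ~5.55 cm is the commonly quoted wavelength.

WAVELENGTH_M = 0.0555

def los_displacement_m(phase_rad: float) -> float:
    """Line-of-sight ground displacement for a given unwrapped phase."""
    return phase_rad * WAVELENGTH_M / (4.0 * math.pi)

def fringes_for_displacement(displacement_m: float) -> float:
    """Number of full fringes a given displacement produces."""
    return displacement_m / (WAVELENGTH_M / 2.0)

# One full fringe is about 2.8 cm of motion, so an ~8 cm uplift like the
# one measured at Le Teil shows up as roughly three fringes.
one_fringe_m = los_displacement_m(2.0 * math.pi)
```

Counting fringes in the published interferogram is therefore a quick visual estimate of the displacement, before any formal phase unwrapping is done.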

Several users have computed interferograms over the affected region.

While several faults are present in the region and marked in geological maps, none were known to be seismically active. The interferogram here shows a series of fringes in the area west of the city of Le Teil and has allowed scientists to identify the fault at the origin of the earthquake. The satellite observation also measured a ground displacement that corresponds to an uplift of up to 8 cm in the southern part of the fault.


The intensity of the ground motion felt by the inhabitants and measured from space is unusual for an event of this magnitude unless the source is shallow; indeed, seismic data put the focus between 1 km and 3.5 km below the surface. Observations in the field on 13 November suggest that the rupture propagated up to the surface.

Floriane Provost, Research Fellow at ESA, said, “The rapid release to the public of up-to-date Copernicus Sentinel-1 based products visualised in a friendly fashion on the GEP geobrowser was followed by a peak of connections. It helped the scientific community better map the location of the fault and to confirm the mechanism of the earthquake.

“This example shows how the GEP environment contributes to the rapid processing and exchange of information within the geohazards community.”

Michael Foumelis, researcher at the French Geological Survey BRGM, added, “Field investigations by BRGM experts are on-going, while interferometric synthetic aperture radar results are actually helping them to correlate the distribution of damage with the location of the activated fault and measured ground displacements.”

Read more about mapping faults on ESA’s Earth Observation Science for Society website.

See the full article here.

Please help promote STEM in your local schools.

Stem Education Coalition

The European Space Agency (ESA), established in 1975, is an intergovernmental organization dedicated to the exploration of space, currently with 22 member states. Headquartered in Paris, ESA has a staff of more than 2,000. ESA’s space flight program includes human spaceflight, mainly through participation in the International Space Station program; the launch and operation of uncrewed exploration missions to other planets and the Moon; Earth observation, science, and telecommunication; maintaining a major spaceport, the Guiana Space Centre at Kourou, French Guiana; and designing launch vehicles. ESA science missions are based at ESTEC in Noordwijk, Netherlands; Earth observation missions at ESRIN in Frascati, Italy; ESA Mission Control (ESOC) is in Darmstadt, Germany; the European Astronaut Centre (EAC), which trains astronauts for future missions, is situated in Cologne, Germany; and the European Space Astronomy Centre is located in Villanueva de la Cañada, Spain.

ESA50 Logo large

#applied-research-technology, #copernicus-sentinel-1-radar-mission-was-used-to-map-the-way-the-ground-shifted-as-a-result-of-the-quake, #earth-observation, #european-space-agency-united-space-in-europe, #french-earthquake-fault-mapped, #geology

From Science News: “Realigning magnetic fields may drive the sun’s spiky plasma tendrils”

From Science News

November 14, 2019
Christopher Crockett


Whiskery plasma jets, known as spicules, on the sun appear as dark, threadlike structures in this image, acquired at the Goode Solar Telescope in Big Bear, Calif. T. Samanta, GST & SDO

The Goode Solar Telescope pointed at the Sun in the morning. Big Bear Solar Observatory, Big Bear Lake, California, US; altitude 2,060 m (6,760 ft)


Tendrils of plasma near the surface of the sun emerge from realignments of magnetic fields and pump heat into the corona, the sun’s tenuous outer atmosphere, a study suggests.

The new observation, described in the Nov. 15 Science, could help crack the century-plus mystery of where these plasma whiskers, called spicules, come from and what role — if any — they play in heating the corona to millions of degrees Celsius.

Spicules undulate like a wind-whipped field of wheat in the chromosphere, the layer of hot gas atop the sun’s surface. These plasma filaments stretch for thousands of kilometers and last for just minutes, shuttling ionized gas into the corona. Astronomers have long debated how spicules form — with the sun’s turbulent magnetic field being a prime suspect — and whether they can help explain why the corona is a few hundred times as hot as the sun’s surface (SN: 8/20/17).

To look for connections between spicules and magnetic activity, solar physicist Tanmoy Samanta of Peking University in Beijing and colleagues pointed the Goode Solar Telescope, at Big Bear Solar Observatory in California, at the sun. They snapped images of spicules forming, while also measuring the surrounding magnetic field. The team discovered that thickets of spicules frequently emerged within minutes after pockets of the local magnetic field reversed course and pointed in the opposite direction from the prevailing field in the area.

Oppositely pointing magnetic fields create a tension that gets resolved when the fields break and realign, and the team postulates that the energy released in this “magnetic reconnection” creates the spicules. “The magnetic field energy is converted to kinetic and thermal energy,” says study coauthor Hui Tian, a solar physicist also at Peking University. “The kinetic energy is in the form of fast plasma motion — jets, or spicules.”

To see if this energy made it into the corona, the team pored through images acquired at the same time by NASA’s orbiting Solar Dynamics Observatory. Those data revealed a glow from charged iron atoms directly over the spicules. That glow, Tian says, means the plasma reached roughly 1 million degrees Celsius. Whether that’s enough to sustain the scorching temperature throughout the corona, however, remains to be seen.

“Their observations are amazing,” says Juan Martínez-Sykora, a solar physicist at the Lockheed Martin Solar & Astrophysics Laboratory in Palo Alto, Calif.

Capturing this level of detail is difficult, Martínez-Sykora says, because individual spicules are relatively small and come and go so quickly. He does caution, though, that the magnetic reconnection story needs to be checked with computer simulations or more observations. As it stands, it remains a postulation, he says.

See the full article here.


Please help promote STEM in your local schools.

Stem Education Coalition

#realigning-magnetic-fields-may-drive-the-suns-spiky-plasma-tendrils, #science-news, #solar-research

From World Community Grid (WCG): “15 Years of Shining a Beacon for Science”

New WCG Logo


From World Community Grid (WCG)

15 Nov 2019

To mark World Community Grid’s 15th anniversary, we’re asking you as volunteers, researchers, and supporters to publicly show your support for science on social media, in our forum, and on your own website or blog.

“Basic research is performed without thought of practical ends. It results in general knowledge and understanding of nature and its laws. The general knowledge provides the means of answering a large number of important practical problems, though it may not give a complete specific answer to any one of them.”


Thanks to volunteers, researchers, and supporters of science all over the globe, World Community Grid has been a beacon for scientific research since 2004. What started out as a short-term initiative has grown into a major source of computing power for 30 basic science projects to date. So far, this has led to breakthrough discoveries for childhood cancer, water filtration, and renewable energy, as well as more than 50 peer-reviewed papers about many smaller discoveries that may one day lead to future breakthroughs.

Future discoveries depend on the basic research of yesterday and today. And basic research projects often uncover knowledge no one expected, and lead to paths that were previously unknown. This past year, World Community Grid’s contribution to advances in basic research included:

Working with the FightAIDS@Home researchers to create a new, more efficient sampling protocol
Helping the Microbiome Immunity Project researchers predict almost 200,000 unique protein structures, which is more than all the experimentally solved protein structures to date
Providing data to help lay the groundwork for new tools to analyze protein-protein interactions.

This is only possible because of generous volunteers who donate their unused computing power to research, and scientists who have the unique skills and patience to take on challenging problems that have no obvious answers.

We’re inviting everyone involved with World Community Grid to shine a beacon for science this week to help us celebrate our 15th anniversary. You can do this by:

Creating your own social media posts on your favorite platform (tag us on Twitter or Facebook so we can say thanks, and use the hashtag #Beacon4Science)
Posting your thoughts about being involved in World Community Grid in our forum
Sharing our Facebook post and/or retweeting our tweets starting on Saturday, November 16
Sending us an email with your thoughts at

Feel free to include pictures or videos, especially if they’re science or World Community Grid-related.

Thanks for helping us shine a beacon for science since 2004, and we look forward to continuing our important work together.

See the full article here.


Please help promote STEM in your local schools.

Stem Education Coalition

Ways to access the blog:
"World Community Grid (WCG) brings people together from across the globe to create the largest non-profit computing grid benefiting humanity. It does this by pooling surplus computer processing power. We believe that innovation combined with visionary scientific research and large-scale volunteerism can help make the planet smarter. Our success depends on like-minded individuals – like you."
WCG projects run on BOINC software from UC Berkeley.

BOINC is a leader in the fields of Distributed Computing, Grid Computing and Citizen Cyberscience. BOINC stands for the Berkeley Open Infrastructure for Network Computing.

BOINC WallPaper


“Download and install secure, free software that captures your computer’s spare power when it is on, but idle. You will then be a World Community Grid volunteer. It’s that simple!” You can download the software at either WCG or BOINC.

Please visit the project pages-

Microbiome Immunity Project

FightAIDS@home Phase II


Rutgers Open Zika

Help Stop TB

Outsmart Ebola Together

Mapping Cancer Markers

Uncovering Genome Mysteries

Say No to Schistosoma

GO Fight Against Malaria

Drug Search for Leishmaniasis

Computing for Clean Water

The Clean Energy Project

Discovering Dengue Drugs – Together

Help Cure Muscular Dystrophy

Help Fight Childhood Cancer

Help Conquer Cancer

Human Proteome Folding




World Community Grid is a social initiative of IBM Corporation
IBM Corporation

IBM – Smarter Planet

#basic-research, #biology, #chemistry, #physics, #wcg

From The New York Times: “Leonids Meteor Shower Will Peak in Night Skies”


From The New York Times

Nov. 16, 2019
Nicholas St. Fleur

A meteor from the Leonids streaking through the sky, seen between the arms of a cactus in Tucson, Ariz., in 2001. Credit: James S. Wood/Arizona Daily Star, via Associated Press

All year long as Earth revolves around the sun, it passes through streams of cosmic debris. The resulting meteor showers can light up night skies from dusk to dawn, and if you’re lucky you might be able to catch a glimpse.

The next shower you might be able to see is the Leonids. Active between Nov. 6 and Nov. 30, the show peaks around Sunday night into Monday morning, or Nov. 17-18.

The Leonids are one of the most dazzling meteor showers, and every few decades they produce a meteor storm in which more than 1,000 meteors can be seen per hour. Cross your fingers for some good luck; the last time the Leonids were that strong was in 2002. Their parent comet, Comet 55P/Tempel-Tuttle, orbits the sun every 33 years.

Where meteor showers come from

If you spot a meteor shower, what you’re usually seeing is an icy comet’s leftovers that crash into Earth’s atmosphere. Comets are sort of like dirty snowballs: As they travel through the solar system, they leave behind a dusty trail of rocks and ice that lingers in space long after they leave. When Earth passes through these cascades of comet waste, the bits of debris — which can be as small as grains of sand — pierce the sky at such speeds that they burst, creating a celestial fireworks display.

A general rule of thumb with meteor showers: You are never watching the Earth cross into remnants from a comet’s most recent orbit. Instead, the burning bits come from the previous passes. For example, during the Perseid meteor shower you are seeing meteors ejected from when its parent comet, Comet Swift-Tuttle, visited in 1862 or earlier, not from its most recent pass in 1992.

What on Earth Is Going On?

That’s because it takes time for debris from a comet’s orbit to drift into a position where it intersects with Earth’s orbit, according to Bill Cooke, an astronomer with NASA’s Meteoroid Environment Office.

How to watch

The best way to see a meteor shower is to get to a location that has a clear view of the entire night sky. Ideally, that would be somewhere with dark skies, away from city lights and traffic. To maximize your chances of catching the show, look for a spot that offers a wide, unobstructed view.

Bits and pieces of meteor showers are visible for a certain period of time, but they really peak visibly from dusk to dawn on a given few days. Those days are when Earth’s orbit crosses through the thickest part of the cosmic stream. Meteor showers can vary in their peak times, with some reaching their maximums for only a few hours and others for several nights. The showers tend to be most visible after midnight and before dawn.

It is best to use your naked eye to spot a meteor shower. Binoculars or telescopes tend to limit your field of view. You might need to spend about half an hour in the dark to let your eyes get used to the reduced light. Stargazers should be warned that moonlight and the weather can obscure the shows. But if that happens, there are usually meteor livestreams like the ones hosted by NASA and by Slooh.

See the full article here.


Please help promote STEM in your local schools.

Stem Education Coalition

#astronomy, #astrophysics, #basic-research, #cosmology, #leonids-meteor-shower-will-peak-in-night-skies, #nyt

From Ethan Siegel: “Ask Ethan: Did We Just Find The Universe’s Missing Black Holes?”

From Ethan Siegel
Nov 16, 2019

This simulation shows the radiation emitted from a binary black hole system. In principle, we should have neutron star binaries, black hole binaries, and neutron star-black hole systems, covering the entire allowable mass range. In practice, we see a ‘gap’ in such binaries between about 2 and 5 solar masses. It is a great puzzle for modern astronomy to find this missing population of objects. (NASA’S GODDARD SPACE FLIGHT CENTER)

A longstanding astronomical gap between neutron stars and black holes is finally coming to a close.

Astronomy has taken us so far into the Universe, from beyond Earth to the planets, stars, and even the galaxies far beyond our Milky Way. We’ve discovered exotic objects along the way, from interstellar visitors to rogue planets to white dwarfs, neutron stars and black holes.

But those last two are kind of funny. They both typically form from the same mechanism: the collapse of a very massive star that results in a supernova explosion. Even though stars come in all different masses, the most massive neutron star was only about 2 solar masses while the least massive black hole was already 5 solar masses, as of 2017. What’s with the gap, and are there any black holes or neutron stars in between? Patreon supporter Richard Jowsey points to a new study and asks:

This low-mass collapsar is smack-dab on the “mind the gap” borderline. How can we tell whether it’s a neutron star or a black hole?

Let’s dive into what astronomers call the mass gap and find out.

The various types of events that LIGO is known to be sensitive to all take the form of two masses inspiraling and merging with one another. We know that black holes above 5 solar masses are common, as are neutron stars below about 2 solar masses. The in-between range is known as the mass gap, a puzzle for astronomers to solve. (CHRISTOPHER BERRY / TWITTER)

Before gravitational waves came along, there were only two ways we knew of to detect black holes.

1. You could find a light-emitting object, like a star, that was orbiting a large mass that emitted no light of any type. Based on the luminous object’s light curve and how it changed over time, you could gravitationally infer the presence of a black hole.
2. You could find a black hole that’s gathering matter from either a companion star, an infalling mass, or a cloud of gas that flows inward. As the material approaches the black hole’s event horizon, it will heat up, accelerate, and emit what we detect as X-ray radiation.

The first black hole ever discovered was found by this latter method: Cygnus X-1.

Black holes are not isolated objects in space, but exist amidst the matter and energy in the Universe, galaxy, and star systems where they reside. They grow by accreting and devouring matter and energy, and when they actively feed they emit X-rays. Binary black holes systems that emit X-rays are how the majority of our known non-supermassive black holes were discovered. (NASA/ESA HUBBLE SPACE TELESCOPE COLLABORATION)

Since that first discovery 55 years ago, the known population of black holes has exploded. We now know that supermassive black holes lie at the centers of most galaxies, and feed on and devour gas regularly. We know that there are black holes that likely originated from supernova explosions, as the number of black holes in X-ray emitting, binary systems is now quite large.

We also know that only a fraction of the black holes out there are active at any given time; most of them are probably quiet. Even after LIGO turned on, revealing black holes merging with other black holes, one puzzling fact remained: the lowest-mass black holes we had ever discovered all had masses at least five times the mass of our Sun. There were no black holes with three or four solar masses’ worth of material. For some reason, all the known black holes were above some arbitrary mass threshold.

The anatomy of a very massive star throughout its life, culminating in a Type II Supernova. At the end of its life, if the core is massive enough, the formation of a black hole is absolutely unavoidable. (NICOLE RAGER FULLER FOR THE NSF)

Theoretically, there is disagreement about what ought to be out there as far as black hole masses go. According to some theoretical models, there is a fundamental difference between the supernova processes that wind up producing black holes and the ones that wind up producing neutron stars. Although both arise from Type II supernovae, when the cores of the progenitor stars implode, whether you cross a critical threshold (or not) could make all the difference.

If correct, then crossing that threshold and forming an event horizon could compel significantly more matter to wind up in the collapsing core, contributing to the eventual black hole. The minimum mass of the final-state black hole could be many solar masses above the mass of the heaviest neutron star, which never forms an event horizon or crosses that critical threshold.

Supernovae types as a function of initial star mass and initial content of elements heavier than Helium (metallicity). Note that the first stars occupy the bottom row of the chart, being metal-free, and that the black areas correspond to direct collapse black holes. For modern stars, we are uncertain as to whether the supernovae that create neutron stars are fundamentally the same or different than the ones that create black holes, and whether there is a ‘mass gap’ present between them in nature. (FULVIO314 / WIKIMEDIA COMMONS)

On the other hand, other theoretical models don’t predict a fundamental difference between the supernova processes that do or don’t create an event horizon. It’s entirely possible, and a significant number of theorists come to this conclusion instead, that supernovae wind up producing a continuous distribution of masses, and that neutron stars will be found all the way up to a certain limit, followed immediately by black holes that leave no mass gap.

Up until 2017, observations seemed to favor a mass gap. The most massive known neutron star was right around 2 solar masses, while the least massive black hole ever seen (through X-ray emissions from a binary system) was right around 5 solar masses. But in August of 2017, an event happened that kicked off a tremendous change in how we think about this elusive mass range.

In the final moments of merging, two neutron stars don't merely emit gravitational waves; they also produce a catastrophic explosion that echoes across the electromagnetic spectrum, simultaneously generating a slew of heavy elements towards the very high end of the periodic table. In the aftermath of this merger, the combined remnant must have settled down to form a black hole, which later produced collimated, relativistic jets that broke through the surrounding matter. (UNIVERSITY OF WARWICK / MARK GARLICK)

For the very first time, an event occurred in which not only were gravitational waves detected, but light was emitted as well. From over 100 million light-years away, scientists observed signals from all across the spectrum: from gamma rays through visible light all the way down to radio waves. They indicated something we had never seen before: two neutron stars merging together, creating an event called a kilonova. These kilonovae, we now believe, are responsible for the majority of the heaviest elements found throughout the Universe.

But perhaps most remarkably, from the gravitational waves that arrived, we were able to extract an enormous amount of information about the merger process. Two neutron stars merged to form an object that, it appears, initially formed as a neutron star before, fractions of a second later, collapsing to form a black hole. For the first time, we’d found an object in the mass gap range, and it was, indeed, a black hole.

LIGO and Virgo have discovered the tip of an amazing iceberg: a new population of black holes with masses that had never been seen before with X-ray studies alone (purple). This plot shows the masses of all ten confident binary black hole mergers detected by LIGO/Virgo (blue) as of the end of Run II, along with the one neutron star-neutron star merger seen (orange) that created the lowest-mass black hole we’ve ever found. (LIGO/VIRGO/NORTHWESTERN UNIV./FRANK ELAVSKY)

MIT/Caltech Advanced LIGO

VIRGO Gravitational Wave interferometer, near Pisa, Italy

However, that absolutely does not mean that there's no mass gap. It's eminently possible that neutron star-neutron star mergers will often form black holes if their combined mass is over a certain threshold: between 2.5 and 2.75 solar masses, depending on how fast the merged object is spinning.

But even if that’s true, it’s still possible that the neutron stars produced by supernovae will top out at a certain threshold, and that the black holes produced by supernovae won’t appear until a significantly higher threshold. The only ways to determine if that type of mass gap is real would be to either:

take a large census of supernovae and supernova remnants and measure the mass distribution of the central neutron stars/black holes produced,
or to collect superior data that actually measures the distribution of objects in that so-called mass gap range, and determine whether there's a gap, a dip, or a continuous distribution.

In a study just released two months ago, the gap closed a little bit more.

In 2019, scientists were measuring the pulses coming from a neutron star and were able to measure how a white dwarf orbiting it delayed the pulses. From the observations, scientists determined it had a mass of around 2.2 solar masses: the heaviest neutron star seen thus far. (B. SAXTON, NRAO/AUI/NSF)

By finding a neutron star that ate into the mass gap range a little bit, using a technique involving pulsar timing and gravitational physics, we were able to confirm that we still get neutron stars below the anticipated 2.5 solar mass threshold. The orbital technique that works for black holes also works for neutron stars and any massive object. So long as there’s some form of a light or gravitational wave signal you can measure, the gravitational effects of mass can be inferred.

But just about six weeks after this neutron star story came out, another even more exciting story hit the news [Science]. About 10,000 light-years away, right in our own galaxy, scientists took precision observations of a giant star, thought to be a few times the mass of our Sun. Its orbit, fascinatingly, showed that it was orbiting an object that emitted no radiation at all of any type. From its gravity, that object is right around 3.3 solar masses: solidly in the mass gap range.
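The mass of an unseen companion like this is inferred from the binary mass function, which combines the orbital period with the visible star's radial-velocity semi-amplitude and sets a strict lower bound on the companion's mass. A minimal sketch in Python, where the semi-amplitude `K` is an assumed, illustrative value rather than a number taken from the paper:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg

def mass_function(period_s, K_ms):
    """Binary mass function f(M) = P K^3 / (2 pi G).

    Since f(M) = (M2 sin i)^3 / (M1 + M2)^2, the result is a
    strict lower bound on the unseen companion's mass M2.
    """
    return period_s * K_ms**3 / (2.0 * math.pi * G)

# Illustrative inputs for an 83-day orbit; K is an assumed value.
P = 83.0 * 86400.0     # orbital period in seconds
K = 44.6e3             # radial-velocity semi-amplitude in m/s (assumed)

f = mass_function(P, K) / M_SUN
print(f"mass function ~ {f:.2f} solar masses")
```

Folding in the giant star's own mass and the orbital inclination pushes the companion's inferred mass well above this lower bound, into the roughly 3-solar-mass range reported.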

The color curves and radial velocity of the giant star measured to be orbiting a binary companion with an 83 day period. The companion emits no radiation of any type, not even X-rays, suggesting a black hole nature. (T.A. THOMPSON ET AL. (2019), VOL. 366, ISSUE 6465, PP. 637–640)

We can't be absolutely certain this object isn't a neutron star, but even quiet neutron stars have super-strong magnetic fields that should produce X-ray emission, and none is seen down to the observed detection thresholds. Even given the uncertainties, which could admit a mass as low as about 2.6 solar masses (or as high as almost 5 solar masses), this object is strongly indicated to be a black hole.

This supports the idea that above 2.75 solar masses, there are no more neutron stars: the objects are all black holes. It also shows that we are capable of finding lower-mass black holes simply through their gravitational effects on orbiting companions.

We’re pretty confident that this stellar remnant is a black hole and not a neutron star. But what about the big question? What about the mass gap?

While practically all the stars in the night sky appear to be single points of light, many of them are multi-star systems, with approximately 50% of the stars we’ve seen bound up in multi-star systems. Castor is the system with the most stars within 25 parsecs: it is a sextuple system. (NASA / JPL-CALTECH / CAETANO JULIO)

As interesting as this new black hole is, and it really is most likely a black hole, it cannot tell us whether there's a mass gap, a mass dip, or a straightforward distribution of masses arising from supernova events. About 50% of all the stars ever discovered exist as part of a multi-star system, with approximately 15% in bound systems containing 3-to-6 stars. Since the multi-star systems we see often have stellar masses similar to one another, nothing rules out the possibility that this newfound black hole had its origin in a long-ago kilonova event of its own.

So the object itself? It’s almost certainly a black hole, and it very likely has a mass that puts it squarely in a range where at most one other black hole is known to exist. But is the mass gap a real gap, or just a range where our data is deficient? That will take more data, more systems, and more black holes (and neutron stars) of all masses before we can give a meaningful answer.

Until we find a large enough population of black holes to accurately determine their mass distribution overall, we will not be able to discover whether there’s a mass gap or not. Black holes in binary systems may be our best bet. (GETTY IMAGES)

See the full article here.


Please help promote STEM in your local schools.

Stem Education Coalition

“Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

From insideHPC: “Tackling Turbulence on the Summit Supercomputer”

From insideHPC

ORNL IBM AC922 SUMMIT supercomputer

Researchers at the Georgia Institute of Technology have achieved world record performance on the Summit supercomputer using a new algorithm for turbulence simulation.

An illustration of intricate flow structures in turbulence from a large simulation performed using 1,024 nodes on Summit. The lower right frame shows a zoom-in view of a high-activity region. Credit: Dave Pugmire and Mike Matheson, Oak Ridge National Laboratory.

Turbulence, the state of disorderly fluid motion, is a scientific puzzle of great complexity. Turbulence permeates many applications in science and engineering, including combustion, pollutant transport, weather forecasting, astrophysics, and more. One of the challenges facing scientists who simulate turbulence lies in the wide range of scales they must capture to accurately understand the phenomenon. These scales can span several orders of magnitude and can be difficult to capture within the constraints of the available computing resources.

“High-performance computing can stand up to this challenge when paired with the right scientific code; but simulating turbulent flows at problem sizes beyond the current state of the art requires new thinking in concert with top-of-the-line heterogeneous platforms.”

A team led by P. K. Yeung, professor of aerospace engineering and mechanical engineering at the Georgia Institute of Technology, performs direct numerical simulations (DNS) of turbulence using his team’s new code, GPUs for Extreme-Scale Turbulence Simulations (GESTS). DNS can accurately capture the details that arise from a wide range of scales. Earlier this year, the team developed a new algorithm optimized for the IBM AC922 Summit supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). With the new algorithm, the team reached a performance of less than 15 seconds of wall-clock time per time step for more than 6 trillion grid points in space—a new world record surpassing the prior state of the art in the field for the size of the problem.

The simulations the team conducts on Summit are expected to clarify important issues regarding rapidly churning turbulent fluid flows, which will have a direct impact on the modeling of reacting flows in engines and other types of propulsion systems.

GESTS is a computational fluid dynamics code in the Center for Accelerated Application Readiness at the OLCF, a US Department of Energy (DOE) Office of Science User Facility at DOE’s Oak Ridge National Laboratory. At the heart of GESTS is a basic math algorithm that computes large-scale, distributed fast Fourier transforms (FFTs) in three spatial directions.

An FFT is a math algorithm that computes the conversion of a signal (or a field) from its original time or space domain to a representation in the frequency (or wave number) space—and vice versa for the inverse transform. Yeung extensively applies a huge number of FFTs in accurately solving the fundamental partial differential equation of fluid dynamics, the Navier-Stokes equation, using an approach known in mathematics and scientific computing as “pseudospectral methods.”
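As a toy illustration of the pseudospectral idea (a sketch of the general technique, not GESTS itself), here is a one-dimensional spectral derivative in Python: transform a periodic field with an FFT, multiply by i*k in wavenumber space, and transform back.

```python
import numpy as np

# Pseudospectral derivative of a periodic field in 1D.
# GESTS applies the same principle in three dimensions,
# distributed across thousands of nodes.
N = 64
x = 2.0 * np.pi * np.arange(N) / N        # periodic grid on [0, 2*pi)
u = np.sin(3.0 * x)                        # sample field

k = np.fft.fftfreq(N, d=1.0 / N)           # integer wavenumbers
du = np.fft.ifft(1j * k * np.fft.fft(u)).real   # spectral derivative

# Compare with the exact derivative, 3*cos(3x): the error is at
# machine precision, far better than finite differences on this grid.
err = np.max(np.abs(du - 3.0 * np.cos(3.0 * x)))
print(f"max error: {err:.2e}")
```

This spectral accuracy is why pseudospectral methods dominate turbulence DNS, at the cost of the global communication that FFTs demand.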

Most simulations using massive CPU-based parallelism will partition a 3D solution domain, or the volume of space where a fluid flow is computed, along two directions into many long “data boxes,” or “pencils.” However, when Yeung’s team met at an OLCF GPU Hackathon in late 2017 with mentor David Appelhans, a research staff member at IBM, the group conceived of an innovative idea. They would combine two different approaches to tackle the problem. They would first partition the 3D domain in one direction, forming a number of data “slabs” on Summit’s large-memory CPUs, then further parallelize within each slab using Summit’s GPUs.

The team identified the most time-intensive parts of a base CPU code and set out to design a new algorithm that would reduce the cost of these operations, push the limits of the largest problem size possible, and take advantage of the unique data-centric characteristics of Summit, the world’s most powerful and smartest supercomputer for open science.

“We designed this algorithm to be one of hierarchical parallelism to ensure that it would work well on a hierarchical system,” Appelhans said. “We put up to two slabs on a node, but because each node has 6 GPUs, we broke each slab up and put those individual pieces on different GPUs.”

In the past, pencils may have been distributed among many nodes, but the team’s method makes use of Summit’s on-node communication and its large amount of CPU memory to fit entire data slabs on single nodes.
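The two-level split described above can be pictured with a toy NumPy sketch; the grid size, node count, and GPU count here are illustrative stand-ins, not Summit's actual configuration.

```python
import numpy as np

# Hybrid decomposition sketch: slabs along one axis (one per node),
# each slab then split along a second axis (one piece per GPU).
NX, NY, NZ = 8, 8, 8
nodes = 4            # first-level split: one slab per node
gpus_per_node = 2    # second-level split: one piece per GPU

grid = np.arange(NX * NY * NZ, dtype=float).reshape(NX, NY, NZ)

# Level 1: slabs along the z axis, held in each node's CPU memory.
slabs = np.split(grid, nodes, axis=2)

# Level 2: each slab divided along y among that node's GPUs.
pieces = [np.split(slab, gpus_per_node, axis=1) for slab in slabs]

total = sum(p.size for node in pieces for p in node)
print(len(slabs), pieces[0][0].shape, total)
```

Keeping the whole slab resident on one node is what lets the second-level exchange run over fast on-node links instead of the network.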

“We were originally planning on running the code with the memory residing on the GPU, which would have limited us to smaller problem sizes,” Yeung said. “However, at the OLCF GPU Hackathon, we realized that the NVLink connection between the CPU and the GPU is so fast that we could actually maximize the use of the 512 gigabytes of CPU memory per node.”

The realization drove the team to adapt some of the main pieces of the code (kernels) for GPU data movement and asynchronous processing, which allows computation and data movement to occur simultaneously. The innovative kernels transformed the code and allowed the team to solve much larger problems at a much faster rate than ever before.

The team’s success proved that even large, communication-dominated applications can benefit greatly from the world’s most powerful supercomputer when code developers integrate the heterogenous architecture into the algorithm design.

Coalescing into success

One of the key ingredients to the team’s success was a perfect fit between the Georgia Tech team’s long-held domain science expertise and Appelhans’ innovative thinking and deep knowledge of the machine.

Also crucial to the achievement were the OLCF's early-access Ascent and Summit development systems, a million-node-hour allocation on Summit provided by the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, jointly managed by the Argonne and Oak Ridge Leadership Computing Facilities, and the Summit Early Science Program in 2019.

Oscar Hernandez, tools developer at the OLCF, helped the team navigate challenges throughout the project. One such challenge was figuring out how to run each single parallel process (which obeys the message passing interface [MPI] standard) on the CPU in conjunction with multiple GPUs. Typically, one or more MPI processes are tied to a single GPU, but the team found that using multiple GPUs per MPI process allows the MPI processes to send and receive a smaller number of larger messages than the team originally planned. Using the OpenMP programming model, Hernandez helped the team reduce the number of MPI tasks, improving the code's communication performance and thereby leading to further speedups.

Kiran Ravikumar, a Georgia Tech doctoral student on the project, will present details of the algorithm within the technical program at SC19.

The team plans to use the code to make further inroads into the mysteries of turbulence; they will also introduce other physical phenomena such as oceanic mixing and electromagnetic fields into the code in the future.

“This code, and its future versions, will provide exciting opportunities for major advances in the science of turbulence, with insights of generality bearing upon turbulent mixing in many natural and engineered environments,” Yeung said.

Related Publication: K. Ravikumar, D. Appelhans, and P. K. Yeung, “GPU Acceleration of Extreme Scale Pseudo-Spectral Simulations of Turbulence using Asynchronism.” [ACM Digital Library]Paper to be presented at SC19 in Denver.

See the full article here.


Please help promote STEM in your local schools.

Stem Education Coalition

Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at Or you can send me mail at:

2825 NW Upshur
Suite G
Portland, OR 97239

Phone: (503) 877-5048