Tagged: The Conversation

  • richardmitnick 9:36 am on May 13, 2020 Permalink | Reply
    Tags: "How the Earth’s last supercontinent broke apart to form the world we have today", , , Pangaea was the Earth’s latest supercontinent — a vast amalgamation of all the major landmasses., The Conversation   

    From The Conversation: “How the Earth’s last supercontinent broke apart to form the world we have today” 

    May 12, 2020
    Alexander Lewis Peace

    Pangaea was the Earth’s latest supercontinent — a vast amalgamation of all the major landmasses. Before Pangaea began to disintegrate, what we know today as Nova Scotia was attached to what seems like an unlikely neighbour: Morocco. Newfoundland was attached to Ireland and Portugal.

    About 250 million years ago, Pangaea was still stitched together, yet to be ripped apart by the geological forces that shaped the continents as we know them today.

    For many years, geologists have pondered how all the pieces originally fit together, why they came apart the way they did and how they ended up spread across the globe.

    As an assistant professor in structural geology, I research plate tectonics — specifically how and why continents break up — and the related igneous rocks, natural resources and hazards.

    Mapping rocks related to the opening of the Labrador Sea near Makkovik, Labrador. (Jordan Phethean, University of Derby), Author provided.

    Puzzle pieces

    We know that Nova Scotia and Morocco were once attached because their coastal areas — or margins — match up perfectly. We can also trace their path from the structure of the ocean floor now separating them. Today, we are much closer to understanding how the continents have shifted, but there is still much to learn.

    The science of exactly why they ended up 5,000 km away from each other — and how other parts of the continental jigsaw puzzle pulled apart the way they did — has been extensively researched and debated.

    One camp believes the continents were dragged apart by the movement of tectonic plates driven by forces elsewhere. The other group believes that hot material from deeper underground forced its way up and pushed the continents apart. Whether one theory or the other or some combination of both is correct, this much is certain: whatever happened, didn’t happen quickly!

    Plate tectonics is an ongoing story that unfolds by mere millimetres each year. The change has added up over eons, placing us where we are today — still drifting, though almost imperceptibly.
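
    Those millimetres really do add up. As a rough back-of-the-envelope check (the rate below is an assumed, representative plate speed chosen for illustration, not a figure from the article):

    ```python
    # Assumed, representative separation rate of ~20 mm per year, sustained
    # over the ~250 million years since Pangaea began to break apart.
    rate_mm_per_yr = 20
    years = 250e6
    separation_km = rate_mm_per_yr * years / 1e6  # 1 km = 1,000,000 mm
    print(separation_km)  # -> 5000.0, roughly today's Nova Scotia-Morocco gap
    ```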

    An overview of the disintegration of Pangaea using a palaeogeographic reconstruction from 250 million years ago (Ma) to today. (PALEOMAP PaleoAtlas), Author provided.

    The North Atlantic

    An area of especially intensive study and lingering mystery is the North Atlantic — the area bounded by Greenland, Eastern Canada and Western Europe — where the final stages of Pangaea’s breakup played out.

    Curiously, perhaps, it is the region that spawned much of the geoscience that would successfully be applied to understanding the continental makeup of other regions of the world.

    When the North Atlantic began opening up, the continent started separating along the west side of Greenland. It then stopped and instead continued opening between eastern Greenland and Europe. Why?

    To solve this and related questions, two colleagues and I brought together about 30 researchers from many different fields of geoscience in the North Atlantic Working Group. Our research team includes geophysicists (who apply physics to understand processes in the Earth), geochemists (who apply chemistry to understand the composition of the materials that make up the Earth) and many others who study the structure and evolution of the Earth.

    To date, the North Atlantic Working Group has held a number of workshops and published a set of papers that propose a new model for answering some of the long-unanswered questions about what happened in the North Atlantic.

    Structural inheritance

    Our North Atlantic Working Group was able to draw many types of data together and to tackle the problem from multiple angles. We concluded that most important geological events were strongly influenced by earlier activity — a process called “inheritance.”

    Throughout the history of the Earth, the continental landmasses have several times come together and then subsequently been torn apart. This process of amalgamation and subsequent dispersal is known as a “supercontinent cycle.” These previous events left behind scars and lines of weakness.

    When Pangaea was stressed again, it tore open along these older structures. While this process was suggested in the early days of plate tectonic theory, it is only now becoming clear just how important and far-reaching it is.

    At the largest scale, the tear that formed the North Atlantic started first to the west of Greenland. There, it hit ancient mountain belts that would not break apart. There was less resistance to the east of Greenland, which opened like a zipper and eventually took up all the widening to form the North Atlantic Ocean.

    In addition, relics from these previous plate tectonic cycles left remnants deep in the Earth’s mantle that were susceptible to melting, explaining much of the widespread molten rock that accompanied the breakup. And at the smaller scale, it appears that the hydrocarbon-bearing basins left behind on the continental margins were also influenced by previous events.

    Much of what we know about this was gathered in the search for oil and gas. Our most detailed knowledge comes from coastal areas closest to the markets where those commodities are processed and sold, and most of it has been obtained since the 1960s, using post-war technology to scan the bottom of the oceans.

    These economic factors mean that our knowledge of the subsurface drastically diminishes beyond Newfoundland. North of that there is much to explore and to understand; the answers to the remaining mystery of how we got here lie miles beneath the waves.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Conversation launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors works with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high-quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues, and hopefully to allow for a better quality of public discourse and conversation.

     
  • richardmitnick 12:46 pm on April 24, 2020 Permalink | Reply
    Tags: "How the Hubble Space Telescope opened our eyes to the first galaxies of the universe", , , , , , The Conversation,   

    From University of Arizona via The Conversation: “How the Hubble Space Telescope opened our eyes to the first galaxies of the universe” 

    April 24, 2020

    Rodger I. Thompson
    Professor of Astronomy, University of Arizona

    The launch of the Hubble Space Telescope on April 24, 1990. This photo captures the first time that there were shuttles on both pads 39A and 39B. NASA

    The Hubble Space Telescope launched on the 24th of April, 30 years ago.

    NASA/ESA Hubble Telescope

    It’s an impressive milestone, especially as its expected lifespan was just 10 years.

    One of the primary reasons for the Hubble telescope’s longevity is that it can be serviced and improved with new observational instruments through Space Shuttle visits [no longer true].

    When Hubble, or HST, first launched, its instruments could observe ultraviolet light with wavelengths shorter than the eye can see, as well as optical light with wavelengths visible to humans. A maintenance mission in 1997 added an instrument to observe near infrared light, whose wavelengths are longer than people can see. Hubble’s new infrared eyes provided two major new capabilities: the ability to see farther into space than before, and to see deeper into the dusty regions of star formation.

    I am an astrophysicist at the University of Arizona who has used near infrared observations to better understand how the universe works, from star formation to cosmology. Some 35 years ago, I was given the chance to build a near infrared camera and spectrometer for Hubble. It was the chance of a lifetime. The camera my team designed and developed has changed the way humans see and understand the universe. The instrument was built at Ball Aerospace in Boulder, Colorado, under our direction.

    The light we can see with our eyes is part of a range of radiation known as the electromagnetic spectrum. Shorter wavelengths of light are higher energy, and longer wavelengths of light are lower energy. The Hubble Space Telescope sees primarily visible light (indicated here by the rainbow), as well as some infrared and ultraviolet radiation. NASA/JHUAPL/SwRI

    Seeing further and earlier

    Edwin Hubble, HST’s namesake, discovered in the 1920s that the universe is expanding and that the light from distant galaxies is shifted to longer, redder wavelengths, a phenomenon called redshift.

    Edwin Hubble at Caltech Palomar Samuel Oschin 48 inch Telescope, (credit: Emilio Segre Visual Archives/AIP/SPL)

    Edwin Hubble looking through the 100-inch Hooker Telescope at Mount Wilson in Southern California, where in 1929 he discovered that the universe is expanding.

    The greater the distance, the larger the shift. This is because the further away an object is, the longer it takes for the light to reach us here on Earth and the more the universe has expanded in that time.
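
    In quantitative terms (the standard textbook relation, not spelled out in the article), the redshift z compares the observed wavelength of light with the wavelength at which it was emitted, and equals the factor by which the universe has expanded while the light was in transit:

    $$1 + z = \frac{\lambda_{\mathrm{obs}}}{\lambda_{\mathrm{emit}}} = \frac{a(t_{\mathrm{now}})}{a(t_{\mathrm{emit}})},$$

    where a(t) is the cosmic scale factor.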

    The Hubble ultraviolet and optical instruments had taken images of the most distant galaxies ever seen, known as the Northern Hubble Deep Field, or NHDF, which were released in 1996. These images, however, had reached their distance limit due to the redshift, which had shifted all of the light of the most distant galaxies out of the visible and into the infrared.

    One of the new instruments added to Hubble in the second maintenance mission has the awkward name Near Infrared Camera and Multi-Object Spectrometer, or NICMOS, pronounced “Nick Moss.”

    NASA/Hubble NICMOS

    The near infrared cameras on NICMOS observed regions of the NHDF and discovered even more distant galaxies with all of their light in the near infrared.

    A typical image taken with NICMOS. It shows a gigantic star cluster in the center of our Milky Way. NICMOS, thanks to its infrared capabilities, is able to look through the heavy clouds of dust and gas in these central regions. NASA/JHUAPL/SwRI

    Astronomers have the privilege of watching things happen in the past, an effect they call “lookback time.” Our best measurement of the age of the universe is 13.7 billion years. The distance that light travels in one year is called a light year. The most distant galaxies observed by NICMOS were at a distance of almost 13 billion light years. This meant that the light NICMOS detected had been traveling for 13 billion years, showing what the galaxies looked like 13 billion years ago, a time when the universe was only about 5% of its current age. These were some of the first galaxies ever created, and they were forming new stars at rates more than a thousand times those at which most galaxies form stars in the current universe.
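
    That "about 5%" follows directly from the numbers quoted: light that has travelled for 13 of the universe's 13.7 billion years set out when the universe was only

    $$\frac{13.7 - 13}{13.7} \approx 0.05$$

    of its present age, that is, roughly 0.7 billion years old.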

    Hidden by dust

    Although astronomers have studied star formation for decades, many questions remain. Part of the problem is that most stars are formed in clouds of molecules and dust. The dust absorbs the ultraviolet and most of the optical light emitted by forming stars, making it difficult for Hubble’s ultraviolet and optical instruments to study the process.

    The longer, or redder, the wavelength of the light, the less is absorbed. That is why sunsets, where the light must pass through long lengths of dusty air, appear red.

    The near infrared, however, has an even easier time passing through dust than the red optical light. NICMOS can look into star formation regions with the superior image quality of Hubble to determine the details of where the star formation occurs. A good example is the iconic Hubble image of the Eagle Nebula, also known as the Pillars of Creation.

    The Eagle Nebula in visible light. NASA, ESA and the Hubble Heritage Team (STScI/AURA)

    The optical image shows majestic pillars which appear to show star formation over a large volume of space. The NICMOS image, however, shows a different picture. In the NICMOS image, most of the pillars are transparent with no star formation. Stars are only being formed at the tip of the pillars. The optical pillars are just empty dust reflecting the light of a group of nearby stars.

    In this Hubble Space Telescope image is the Eagle Nebula’s Pillars of Creation. Here, the pillars are seen in infrared light, which pierces through obscuring dust and gas and unveils a more unfamiliar — but just as amazing — view of the pillars. NASA, ESA/Hubble and the Hubble Heritage Team

    The dawning of the age of infrared

    When NICMOS was added to HST in 1997, NASA had no plans for a future infrared space mission. That rapidly changed as the results from NICMOS became apparent. Based on the data from NICMOS, scientists learned that fully formed galaxies existed in the universe much earlier than expected. The NICMOS images also confirmed that the expansion of the universe is accelerating rather than slowing down as previously thought. The NHDF infrared images were followed by the Hubble Ultra Deep Field images in 2005, which further showed the power of near infrared imaging of distant young galaxies. So NASA decided to invest in the James Webb Space Telescope, or JWST, a telescope much larger than HST and completely dedicated to infrared observations.

    NASA/ESA/CSA Webb Telescope annotated

    On Hubble, a near infrared imager was added as part of the third version of the Wide Field Camera, which was installed in May 2009.

    NASA/ESA Hubble WFC3

    This camera used an improved version of the NICMOS detector arrays that had more sensitivity and a wider field of view. The James Webb Space Telescope has much larger versions of the NICMOS detector arrays that have more wavelength coverage than the previous versions.

    The James Webb Space Telescope, scheduled to be launched in March 2021, and the Wide Field Infrared Survey Telescope [WFIRST] that will follow it form the bulk of NASA’s future space missions.

    NASA/WFIRST

    These programs were all spawned by the near infrared observations by HST. They were enabled by the original investment for a near infrared camera and spectrometer to give Hubble its infrared eyes. With the James Webb Space Telescope, astronomers expect to see the very first galaxies that formed in the universe.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The University of Arizona (UA) is a place without limits, where teaching, research, service and innovation merge to improve lives in Arizona and beyond. We aren’t afraid to ask big questions, and to find even better answers.

    In 1885, establishing Arizona’s first university in the middle of the Sonoran Desert was a bold move. But our founders were fearless, and we have never lost that spirit. To this day, we’re revolutionizing the fields of space sciences, optics, biosciences, medicine, arts and humanities, business, technology transfer and many others. Since it was founded, the UA has grown to cover more than 380 acres in central Tucson, a rich breeding ground for discovery.

    U Arizona mirror lab. Where else in the world can you find an astronomical observatory mirror lab under a football stadium?

    University of Arizona’s Biosphere 2, located in the Sonoran desert. An entire ecosystem under a glass dome? Visit our campus, just once, and you’ll quickly understand why the UA is a university unlike any other.

     
  • richardmitnick 7:53 am on December 30, 2019 Permalink | Reply
    Tags: "Here Are 6 Reasons Climate Scientists Are Hopeful, And You Should Be, Dénes Csala- lecturer in energy system dynamics at Lancaster University, Hannah Cloke-professor of hydrology at University of Reading, Heather Alberro- associate lecturer in political ecology at Nottingham Trent University, Marc Hudson- researcher in sustainable consumption at University of Manchester, Mark Maslin- professor of earth system science at UCL, Richard Hodgkins- senior lecturer in physical geography at Loughborough University, , The Conversation, Too"   

    From The Conversation via Science Alert: “Here Are 6 Reasons Climate Scientists Are Hopeful, And You Should Be, Too” 

    30 DEC 2019
    THE CONVERSATION

    (Miguel Bruna/Unsplash)

    The climate breakdown continues. Over the past year, The Conversation has covered fires in the Amazon, melting glaciers in the Andes and Greenland, record CO₂ emissions, and temperatures so hot they’re pushing the human body to its thermal limits. Even the big UN climate talks were largely disappointing. But climate researchers have not given up hope.

    We asked a few Conversation authors to highlight some more positive stories from 2019.

    Costa Rica offers us a viable climate future

    Heather Alberro, associate lecturer in political ecology, Nottingham Trent University

    After decades of climate talks, including the recent COP25 in Madrid, emissions have only continued to rise. Indeed, a recent UN report noted that a fivefold increase in current national climate change mitigation efforts would be needed to meet the 1.5℃ limit on warming by 2030.

    With the radical transformations needed in our global transport, housing, agricultural and energy systems in order to help mitigate looming climate and ecological breakdown, it can be easy to lose hope.

    However, countries like Costa Rica offer us promising examples of the “possible”. The Central American nation has implemented a refreshingly ambitious plan to completely decarbonise its economy by 2050.

    In the lead-up to this, Costa Rica last year derived 98 percent of its electricity from renewable sources, with its economy still growing at 3 percent. Such an example demonstrates that with sufficient political will, it is possible to meet the daunting challenges ahead.

    Financial investors are cooling on fossil fuels

    Richard Hodgkins, senior lecturer in physical geography, Loughborough University

    Movements such as 350.org have long argued for fossil fuel divestment, but they have recently been joined by institutional investors such as Climate Action 100+, which is using the influence of its US$35 trillion of managed funds, arguing that minimising climate breakdown risks and maximising renewables’ growth opportunities are a fiduciary duty.

    Moody’s credit-rating agency recently flagged ExxonMobil for falling revenues despite rising expenditure, noting:

    “The negative outlook also reflects the emerging threat to oil and gas companies’ profitability […] from growing efforts by many nations to mitigate the impacts of climate change through tax and regulatory policies.”

    A more adverse financial environment for fossil fuel companies reduces the likelihood of new development in business frontier regions such as the Arctic, and indeed, major investment bank Goldman Sachs has declared that it “will decline any financing transaction that directly supports new upstream Arctic oil exploration or development”.

    We are getting much better at forecasting disaster

    Hannah Cloke, professor of hydrology, University of Reading

    In March and April 2019, two enormous tropical cyclones hit the south-east coast of Africa, killing more than 600 people and leaving nearly 2 million people in desperate need of emergency aid.

    There isn’t much that is positive about that, and there’s nothing new about cyclones.

    But this time scientists were able to provide the first early warning of the impending flood disaster by linking together accurate medium-range forecasts of the cyclone with the best ever simulations of flood risk.

    This meant that the UK government, for example, set about working with aid agencies in the region to start delivering emergency supplies to the area that would flood, all before Cyclone Kenneth had even gathered pace in the Indian Ocean.

    We know that the risk of dangerous floods is increasing as the climate continues to change. Even with ambitious action to reduce greenhouse gases, we must deal with the impact of a warmer, more chaotic world.

    We will have to continue using the best available science to prepare ourselves for whatever is likely to come over the horizon.

    Local authorities across the world are declaring a ‘climate emergency’

    Marc Hudson, researcher in sustainable consumption, University of Manchester

    More than 1,200 local authorities around the world declared a “climate emergency” in 2019. I think there are two obvious dangers: first, it invites authoritarian responses (stop breeding! Stop criticising our plans for geoengineering!). And second, an “emergency” declaration may simply be a greenwash followed by business-as-usual.

    In Manchester, where I live and research, the City Council is greenwashing. A nice declaration in July was followed by more flights for staff (to places just a few hours away by train), and further car parks and roads.

    The deadline for a “bring zero-carbon date forward?” report has been ignored.

    But these civic declarations have also kicked off a wave of civic activism, as campaigners have found city councils easier to hold to account than national governments. I’m part of an activist group called “Climate Emergency Manchester” – we inform citizens and lobby councillors.

    We’ve assessed progress so far, based on Freedom of Information Act requests, and produced a “what could be done?” report. As the council falls further behind on its promises, we will be stepping up our activity, trying to pressure it to do the right thing.

    Radical climate policy goes mainstream

    Dénes Csala, lecturer in energy system dynamics, Lancaster University

    Before the 2019 UK general election, I compared the Conservative and Labour election manifestos, from a climate and energy perspective. Although the party with the clearly weaker plan won eventually, I am still stubborn enough to be hopeful with regard to the future of political action on climate change.

    For the first time, in a major economy, a leading party’s manifesto had at its core climate action, transport electrification and full energy system decarbonisation, all on a timescale compatible with IPCC directives to avoid catastrophic climate change.

    This means the discussion that has been cooking at the highest levels since the 2015 Paris Agreement has started to boil down into tangible policies.

    Young people are on the march!

    Mark Maslin, professor of earth system science, UCL

    In 2019, public awareness of climate change rose sharply, driven by the schools strikes, Extinction Rebellion, high impact IPCC reports, improved media coverage, a BBC One climate change documentary and the UK and other governments declaring a climate emergency.

    Two recent polls suggest that over 75 percent of Americans accept humans have caused climate change.

    Empowerment of the first truly globalised generation has catalysed this new urgency. Young people can access knowledge at the click of a button. They know climate change science is real and see through the deniers’ lies because this generation does not access traditional media – in fact, they bypass it.

    The awareness and concern regarding climate change will continue to grow. Next year will be an even bigger year as the UK will chair the UN climate change negotiations in Glasgow – and expectations are running high.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Conversation launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors works with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high-quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues, and hopefully to allow for a better quality of public discourse and conversation.

     
  • richardmitnick 9:08 am on December 23, 2019 Permalink | Reply
    Tags: "Planetary Confusion-Why Astronomers Keep Changing What It Means to Be A Planet", , , , , , The Conversation   

    From The Conversation: “Planetary Confusion – Why Astronomers Keep Changing What It Means to Be A Planet”

    In 2015, NASA’s New Horizons spacecraft looked back toward the sun and captured this near-sunset view of the rugged, icy mountains and flat ice plains extending to Pluto’s horizon. NASA/JHUAPL/SwRI

    NASA/New Horizons spacecraft

    The question I hear most often as an astronomer is: why isn’t Pluto a planet anymore? More than 10 years ago, astronomers famously voted to change Pluto’s classification. But the question still comes up.

    When I am asked directly if I think Pluto is a planet, I tell everyone my answer is no. It all goes back to the origin of the word “planet.” It comes from the Greek phrase for “wandering stars.” Back in ancient times before the telescope was invented, the mathematician and astronomer Claudius Ptolemy called stars “fixed stars” to distinguish them from the seven wanderers that move across the sky in a very specific way. These seven objects are the Sun, the Moon, Mercury, Venus, Mars, Jupiter and Saturn.

    When people started using the word “planet,” they were referring to those seven objects. Even Earth was not originally called a planet – but the Sun and Moon were.

    Since people use the word “planet” today to refer to many objects beyond the original seven, it’s no surprise we argue about some of them.

    Although I am trained as an astronomer and I studied more distant objects like stars and galaxies, I have an interest in the objects in our Solar System because I teach several classes on planetary science.

    Asteroids, the first demoted planets

    The word “planet” is used to describe Uranus and Neptune, which were discovered in 1781 and 1846 respectively, because they move in the same way that the other “wandering stars” move. Like Saturn and Jupiter, if you look at them through a telescope, they appear bigger than stars, so they were recognized to be more like planets than stars.

    Not long after the discovery of Uranus, astronomers discovered additional wandering objects – these were named Ceres, Pallas, Juno and Vesta. At the time they were considered planets, too. Through a telescope they look like pinpoints of light and not disks. With a small telescope, even distant Neptune appears fuzzier than a star. Even though these other, new objects were called planets at first, astronomers thought they needed a different name since they appear more star-like than planet-like.

    William Herschel (who discovered Uranus) is often said to have named them “asteroids” which means “star-like,” but recently, Clifford Cunningham claimed that the person who coined that name was Charles Burney Jr., a preeminent Greek scholar.

    Today, just like the word “planet,” we use the word “asteroid” differently. Now it refers to objects that are rocky in composition, mostly found between Mars and Jupiter, mostly irregularly shaped, smaller than planets, but bigger than meteoroids. Most people assume there is a strict definition for what makes an object an asteroid. But there isn’t, just like there never was for the word “planet.”

    In the 1800s the large asteroids were called planets. Students at the time likely learned that the planets were Mercury, Venus, Earth, Mars, Ceres, Vesta, Pallas, Juno, Jupiter, Saturn, Uranus and, eventually, Neptune. Most books today write that asteroids are different from planets, but there is a debate among astronomers about whether the term “asteroid” was originally used to mean a small type of planet, rather than a different type of object altogether.

    How are moons different from planets?

    These days, scientists consider properties of these celestial objects to figure out whether an object is a planet or not. For example, you might say that shape is important; planets should be mostly spherical, while asteroids can be lumpy. As astronomers try to fix these definitions to make them more precise, we then create new problems. If we use roundness as an important distinction for objects, what should we call moons? Should moons be considered planets if they are round and asteroids if they are not round? Or are they somehow different from planets and asteroids altogether?
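
    To make the puzzle concrete, here is a toy sketch of a property-based classifier (purely illustrative; the "cleared its orbit" test and the category boundaries are assumptions in the spirit of the IAU's 2006 scheme, not a definition given in this article):

    ```python
    def classify(body):
        """Toy taxonomy showing how property-based definitions get awkward.

        `body` is a dict with keys 'round' (bool), 'orbits' ('star' or
        'planet') and 'cleared_orbit' (bool) -- simplifications assumed
        purely for illustration.
        """
        if body["orbits"] == "planet":
            return "moon"          # satellite status trumps shape, round or not
        if body["round"] and body["cleared_orbit"]:
            return "planet"
        if body["round"]:
            return "dwarf planet"  # round, but hasn't cleared its neighbourhood
        return "small body"

    # Titan: round, larger than Mercury, yet it orbits Saturn -> "moon"
    print(classify({"round": True, "orbits": "planet", "cleared_orbit": False}))
    # Pluto: round, but shares its zone with many other objects -> "dwarf planet"
    print(classify({"round": True, "orbits": "star", "cleared_orbit": False}))
    ```

    Every branch of such a function invites the objections discussed below: move one boundary and Titan becomes a planet, or every ring particle becomes a moon.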

    I would argue we should again look to how the word “moon” came to refer to objects that orbit planets.

    When astronomers talk about the Moon of Earth, we capitalize the word “Moon” to indicate that it’s a proper name. That is, the Earth’s moon has the name, Moon. For much of human history, it was the only Moon known, so there was no need to have a word that referred to one celestial body orbiting another. This changed when Galileo discovered four large objects orbiting Jupiter. These are now called Io, Europa, Ganymede and Callisto, the moons of Jupiter.

    This makes people think the technical definition of a moon is a satellite of another object, and so we apply the word to lots of objects that orbit Mars, Jupiter, Saturn, Uranus, Neptune, Pluto, Eris, Makemake, Ida and a large number of other asteroids. When you start to look at the variety of moons, some, like Ganymede and Titan, are larger than Mercury. Some are similar in size to the object they orbit. Some are small and irregularly shaped, and some have odd orbits.

    So they are not all just like Earth’s Moon. If we try to fix the definition for what is a moon and how that differs from a planet and asteroid, we are likely going to have to reconsider the classification of some of these objects, too. You can argue that Titan has more properties in common with the planets than Pluto does, for example. You can also argue that every single particle in Saturn’s rings is an individual moon, which would mean that Saturn has billions upon billions of moons.

    Planets around other stars

    The most recent naming challenge astronomers face arose when they began discovering planets far from our Solar System, orbiting around distant stars. These objects have been called extrasolar planets, exosolar planets or exoplanets.

    Astronomers are currently searching for exomoons orbiting exoplanets. Exoplanets are being discovered that have properties unlike the planets in our Solar System, so astronomers have started putting them in categories like “hot Jupiter,” “warm Jupiter,” “super-Earth” and “mini-Neptune.”

    Ideas for how planets form also suggest that there are planetary objects that have been flung out of orbit from their parent star. This means there are free-floating planets not orbiting any star. Should planetary objects that are flung out of a solar system also get ejected from the elite club of planets?

    When I teach, I end this discussion with a recommendation. Rather than arguing over planet, moon, asteroid and exoplanet, I think we need to do what Herschel and Burney did and coin a new word. For now, I use “world” in my class, but I do not offer a rigorous definition of what makes something a world and what does not. Instead, I tell my students that all of these objects are of interest to study.

    The Sun was once a planet

    A lot of people seem to feel that scientists wronged Pluto by changing its classification. I look at it that Pluto was only originally called a planet because of an accident; scientists were looking for planets beyond Neptune, and when they found Pluto they called it a planet, even though its observable properties should have led them to call it an asteroid.

    As our understanding of this object has grown, I feel like the evidence now leads me to call Pluto something besides planet. There are other scientists who disagree, feeling Pluto still should be classified as a planet.

    But remember: The Greeks started out calling the Sun a planet given how it moved on the sky. We now know that the properties of the Sun show it to belong in a very different category from the planets; it’s a star, not a planet. If we can stop calling the Sun a planet, why can’t we do the same to Pluto?

    The Kepler-90 planets have a similar configuration to our solar system with small planets found orbiting close to their star, and the larger planets found farther away. NASA/Ames Research Center/Wendy Stenzel

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Conversation launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors works with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high-quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues, and hopefully to allow for a better quality of public discourse and conversation.

     
  • richardmitnick 12:11 pm on September 16, 2019 Permalink | Reply
    Tags: First-year Research Immersion, SUNY Binghamton, The Conversation, Undergraduate research

    From The Conversation: “At these colleges, students begin serious research their first year” 

    September 16, 2019
    Nancy Stamp

    Rat brains to understand Parkinson’s disease. Drones to detect plastic landmines. Social media to predict acts of terrorism.

    These are just a few potentially lifesaving research projects that students have undertaken in recent years at universities in New York and Maryland. While each project is interesting by itself, there’s something different about these particular research projects – all three were carried out by undergraduates during their earliest years of college.

    That’s noteworthy because students usually have to wait until later in their college experience – even graduate school – to start doing serious research. While about one out of every five undergraduates gets some kind of research experience, the rest tend to get just “cookbook” labs that typically do not challenge students to think but merely require them to follow directions to achieve the “correct” answer.

    That’s beginning to change through First-year Research Immersion, an academic model that is part of an emergent trend meant to provide undergraduates with meaningful research experience.

    The First-year Research Immersion is a sequence of three course-based research experiences at three universities: University of Texas at Austin, University of Maryland and Binghamton University, where I teach science.

    As a scientist, researcher and professor, I see undergraduate research experience as a crucial part of college. And as the former director of my university’s First-year Research Immersion program for aspiring science or engineering majors, I also believe these research experiences better equip students to apply what they learn in different situations.

    There is evidence to support this view. For instance, a 2018 study found that undergraduate exposure to a rigorous research program “leads to success in a research STEM career.” The same study found that undergraduates who get a research experience are “more likely to pursue a Ph.D. program and generate significantly more valued products” compared to other students.

    A closer look

    Just what do these undergraduate research experiences look like?

    At the University of Texas, it involved having students identify a new way to manage and repair DNA, the stuff that makes up our genes. This in turn provides insights into preventing genetic disorders.

    At the University of Maryland, student teams investigated how social media promotes terrorism and found that it is possible to identify when conflicts on social media can escalate into physical violence.

    Binghamton student William Frazer tests a drone with a sensor to detect plastic landmines. Jonathan Cohen/Binghamton University

    Essential elements

    The First-year Research Immersion began as an experiment at the University of Texas at Austin in 2005. The University of Maryland at College Park and Binghamton University – SUNY adapted the model to their institutions in 2014.

    The program makes research experiences an essential part of a college course. These course-based research experiences have five elements. Specifically, they:

    Engage students in scientific practices, such as how and why to take accurate measurements.
    Emphasize teamwork.
    Examine broadly relevant topics, such as the spread of Lyme disease.
    Explore questions with unknown answers to expose students to the process of scientific discovery.
    Repeat measurements or experiments to verify results.

    The model consists of three courses. In the first course, students identify an interesting problem, determine what’s known and unknown and collaborate to draft a preliminary project proposal.

    In the second course, students develop laboratory research skills, begin their team project and use the results to write a full research proposal.

    In the third course, during sophomore year, students execute their proposed research, produce a report and a research poster.

    This sequence of courses is meant to give all students – regardless of their prior academic experience – the time and support they need to be successful.

    Does it work?

    The First-year Research Immersion program is showing promising results. For instance, at Binghamton, where 300 students who plan to major in engineering and science participate in the program, a survey indicated that participants got 14% more research experience than students in traditional laboratory courses.

    At the University of Maryland, where 600 freshmen participate in the program, students reported that they made substantial gains in communication, time management, collaboration and problem-solving.

    At the University of Texas at Austin, where about 900 freshmen participate in the First-Year Research Immersion program in the natural sciences, educators found that program participants showed a 23% higher graduation rate than similar students who were not in the program. And this outcome took place irrespective of students’ gender, race or ethnicity, or whether they were the first in their family to attend college or not.

    All three programs have significantly higher numbers of students from minority groups than the campuses overall. For instance, at Binghamton University, there are 22% more students from underrepresented minority groups than the campus overall, university officials reported. This has significant implications for diversity because research shows that longer, more in-depth research experiences – ones that involve faculty – help students from minority groups and low-income students stick with college.

    5
    Akibo Watson, a neuroscience major at Binghamton University, conducts an analysis of brain tissue. Jonathan Cohen/Binghamton University

    Undergraduates who get research experience also enjoy professional and personal growth. Via online surveys and written comments, students routinely say that they improved dramatically in their self-confidence and career-building skills, such as communication, project management skills and teamwork.

    Students also report that their undergraduate research experience has helped them obtain internships or get into graduate school.

    Making research experiences available more broadly

    The challenge remains to make undergraduate research experiences available to more students.

    The First-year Research Immersion program is not the only course-based research program that is connected to faculty research.

    However, to the best of my knowledge, the First-year Research Immersion programs at my university, and in Texas and Maryland, are the only such programs for first-year students that are overseen by a research scientist and involve taking three courses in a row. This three-course sequence allows student teams to delve deeply into real problems.

    More colleges could easily follow suit. For instance, traditional introductory lab courses could be transformed into research-based courses at no additional cost. And advanced lab courses could be converted to research experiences that build on those research-based courses. In that way, students could take the research projects they started during their first and second years of college even further.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Conversation launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors works with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high-quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues, and hopefully to allow for a better quality of public discourse and conversation.

     
  • richardmitnick 9:52 am on August 22, 2019 Permalink | Reply
    Tags: "Don’t ban new technologies – experiment with them carefully", Autonomous cars, Environmental pollution, Lime scooter company, San Francisco’s ban on municipal use of facial recognition technologies, The Conversation   

    From The Conversation: “Don’t ban new technologies – experiment with them carefully” 

    August 22, 2019
    Ryan Muldoon, SUNY Buffalo

    For many years, Facebook’s internal slogan was “move fast and break things.” And that’s what the company did – along with most other Silicon Valley startups and the venture capitalists who fund them. Their general attitude is one of asking for forgiveness after the fact, rather than for permission in advance. Though this can allow for some bad behavior, it’s probably the right attitude, philosophically speaking.

    It’s true that the try-first mindset has frustrated the public. Take the Lime scooter company, for instance.

    Lime scooter company

    The company launched its scooter sharing service in multiple cities without asking permission from local governments. Its electric scooters don’t need base stations or parking docks, so the company and its customers can leave them anywhere for the next person to pick up – even if that’s in the middle of a sidewalk. This general disruption has led to calls to ban the scooters in cities around the country.

    Scooters are not alone. Ridesharing services, autonomous cars, artificial intelligence systems and Amazon’s cashless stores have also all been targets of bans (or proposed bans) in different states and municipalities before they’ve even gotten off the ground.

    Autonomous cars. The Conversation

    What these efforts have in common is what philosophers like me call the “precautionary principle,” the idea that new technologies, behaviors or policies should be banned until their supporters can demonstrate that they will not result in any significant harms. It’s the same basic idea Hippocrates had in ancient Greece: Doctors should “do no harm” to patients.

    The precautionary principle entered the political conversation in the 1980s in the context of environmental protection. Damage to the environment is hard – if not impossible – to reverse, so it’s prudent to seek to prevent harm from happening in the first place. But as I see it, that’s not the right way to look at most new technologies. New technologies and services aren’t creating irreversible damage, even though they do generate some harms.

    Environmental pollution is so harmful and hard to clean up that precautions are useful. imrankadir/Shutterstock.com

    Precaution has its place

    As a general concept, the precautionary principle is essentially conservative. It privileges existing technologies, even if the new ones – the ones that face preemptive bans – are safer overall.

    This approach also runs counter to the most basic idea of liberalism, in which people are broadly allowed to do what they want, unless there’s a rule against it. This is limited only when our right to free action interferes with someone else’s rights. The precautionary principle reverses this, banning people from doing what they want, unless it is specifically allowed.

    The precautionary principle makes sense when people are talking about some issues, like the environment or public health. It’s easier to avoid the problems of air pollution or dumping trash in the ocean than trying to clean up afterward. Similarly, giving children drinking water that’s contaminated with lead has effects that aren’t reversible. The children simply must deal with the health effects of their exposure for the rest of their lives.

    But as much of a nuisance as dockless scooters might be, they aren’t the same as poisoned water.

    Managing the effects

    Of course, dockless scooters, autonomous cars and a whole host of new technologies do generate real harms. A Consumer Reports investigation in early 2019 found more than 1,500 injuries from electric scooters since the dockless companies were founded. That’s in addition to the more common nuisance of having to step over scooters carelessly left in the middle of the sidewalk – and the difficulties people using wheelchairs, crutches, strollers or walkers may have in getting around them.

    Those harms are not nothing, and can help motivate arguments for banning scooters. After all, they can’t hurt anyone if they’re not allowed. What’s missing from those figures, however, is how many of those people riding scooters would have gotten into a car instead. Cars are far more dangerous and far worse for the environment.

    Yet the precautionary principle isn’t right for cars, either. As the number of autonomous cars on the road climbs, they’ll be involved in an increasing number of crashes, which will no doubt get lots of media attention.

    It is worth keeping in mind that autonomous cars will have been a wild technological success even if they are involved in millions of crashes every year, so long as they improve on the 6.5 million crashes, and the 1.9 million people seriously injured in them, recorded in 2017.


    A look at the precautionary principle in environmental regulation.

    Disruption brings benefits too

    It may also be helpful to remember that dockless scooters and ridesharing apps and any other technology that displaces existing methods can really only become a nuisance if a lot of people use them – that is, if many people find them valuable. Injuries from scooters, and the number of scooters left lying around, have increased because the number of people using them has skyrocketed. Those 1,500 reported injuries are from 38.5 million rides.
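
    For scale, the arithmetic on those two figures is worth doing (a quick check using only the numbers quoted above):

    ```python
    # Injury rate implied by the figures above
    injuries = 1_500
    rides = 38.5e6
    per_100k = injuries / rides * 100_000
    print(round(per_100k, 1))  # -> 3.9 injuries per 100,000 rides
    ```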

    This is not, of course, to say that these technologies and the firms that produce them should go unregulated. Indeed, a number of these firms have behaved quite poorly, and have legitimately created some harms, which should be regulated.

    But instead of preemptively banning things, I suggest continuing to rely on the standard approach in the liberal tradition: See what kinds of harms arise, handle the early cases via the court system, and then consider whether a pattern of harms emerges that would be better handled upfront by a new or revised regulation. The Consumer Product Safety Commission, which looks out for dangerous consumer goods and holds manufacturers to account, is an example of this.

    Indeed, laws and regulations already cover littering, abandoned vehicles, negligence and assault. New technologies may just introduce new ways of generating the same old harms, ones that are already reasonably well regulated. Genuinely new situations can of course arise: San Francisco’s ban on municipal use of facial recognition technologies may well be sensible, as people quite reasonably can democratically decide that the state shouldn’t be able to track their every move. People might well decide that companies shouldn’t be able to either.

    Silicon Valley’s CEOs aren’t always sympathetic characters. And “disruption” really can be disruptive. But liberalism is about innovation and experimentation and finding new solutions to humanity’s problems. Banning new technologies – even ones as trivial as dockless scooters – embodies a conservatism that denies that premise. A lot of new ideas aren’t great. A handful are really useful. It’s hard to tell which is which until we try them out a bit.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Conversation launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors works with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high-quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues, and hopefully to allow for a better quality of public discourse and conversation.

     
  • richardmitnick 10:09 am on August 19, 2019 Permalink | Reply
    Tags: "Ocean warming has fisheries on the move helping some but hurting more", , , , , , The Conversation   

    From The Conversation: “Ocean warming has fisheries on the move, helping some but hurting more” 

    August 19, 2019
    Chris Free, UCSB

    Atlantic cod on ice. Cod fisheries in the North Sea and Irish Sea are declining due to overfishing and climate change. Alamy

    For 100 years, climate change has been steadily warming the ocean, which absorbs most of the heat trapped by greenhouse gases in the atmosphere. This warming is altering marine ecosystems and having a direct impact on fish populations. About half of the world’s population relies on fish as a vital source of protein, and the fishing industry employs more than 56 million people worldwide.

    My recent study [Science] with colleagues from Rutgers University and the U.S. National Oceanic and Atmospheric Administration found that ocean warming has already impacted global fish populations. We found that some populations benefited from warming, but more of them suffered.


    Overall, ocean warming reduced catch potential – the greatest amount of fish that can be caught year after year – by a net 4% over the past 80 years. In some regions, the effects of warming have been much larger. The North Sea, which has large commercial fisheries, and the seas of East Asia, which support some of the fastest-growing human populations, experienced losses of 15% to 35%.

    The reddish and brown circles represent fish populations whose maximum sustainable yields have dropped as the ocean has warmed. The darkest tones represent extremes of 35 percent. Blueish colors represent fish yields that increased in warmer waters. Chris Free, CC BY-ND

    Although ocean warming has already challenged the ability of ocean fisheries to provide food and income, swift reductions in greenhouse gas emissions and reforms to fisheries management could lessen many of the negative impacts of continued warming.

    How and why does ocean warming affect fish?

    My collaborators and I like to say that fish are like Goldilocks: They don’t want their water too hot or too cold, but just right.

    Put another way, most fish species have evolved narrow temperature tolerances. Supporting the cellular machinery necessary to tolerate wider temperatures demands a lot of energy. This evolutionary strategy saves energy when temperatures are “just right,” but it becomes a problem when fish find themselves in warming water. As their bodies begin to fail, they must divert energy from searching for food or avoiding predators to maintaining basic bodily functions and searching for cooler waters.

    Thus, as the oceans warm, fish move to track their preferred temperatures. Most fish are moving poleward or into deeper waters. For some species, warming expands their ranges. In other cases it contracts their ranges by reducing the amount of ocean they can thermally tolerate. These shifts change where fish go, their abundance and their catch potential.

    Warming can also modify the availability of key prey species. For example, if warming causes zooplankton – small invertebrates at the bottom of the ocean food web – to bloom early, they may not be available when juvenile fish need them most. Alternatively, warming can sometimes enhance the strength of zooplankton blooms, thereby increasing the productivity of juvenile fish.

    Understanding how the complex impacts of warming on fish populations balance out is crucial for projecting how climate change could affect the ocean’s potential to provide food and income for people.

    Impacts of historical warming on marine fisheries

    Sustainable fisheries are like healthy bank accounts. If people live off the interest and don’t overly deplete the principal, both people and the bank thrive. If a fish population is overfished, the population’s “principal” shrinks too much to generate high long-term yields.

    Similarly, stresses on fish populations from environmental change can reduce population growth rates, much as an interest rate reduction reduces the growth rate of savings in a bank account.

    In our study we combined maps of historical ocean temperatures with estimates of historical fish abundance and exploitation. This allowed us to assess how warming has affected those interest rates and returns from the global fisheries bank account.
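
    The bank-account analogy corresponds to what fisheries scientists call a surplus-production model. The sketch below is a minimal logistic version with made-up parameter values, offered only to illustrate the analogy; the actual study fit more sophisticated, temperature-linked production models to real data:

    ```python
    # Minimal logistic surplus-production sketch of the "bank account" analogy.
    # r is the population growth rate (the "interest rate"), K the carrying
    # capacity, B the biomass (the "principal"). Values are illustrative only.
    r, K = 0.3, 1000.0
    msy = r * K / 4            # maximum sustainable yield of the logistic model
    B = K / 2                  # the biomass level that produces MSY

    def next_biomass(B, catch, r, K):
        """One year of logistic growth, minus the catch taken that year."""
        return B + r * B * (1 - B / K) - catch

    # Fishing at MSY holds the stock steady...
    print(next_biomass(B, msy, r, K))     # -> 500.0, unchanged
    # ...but if warming cuts the growth rate, the same catch now erodes
    # the principal year after year.
    print(next_biomass(B, msy, 0.25, K))  # -> 487.5, stock declining
    ```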

    Losers outweigh winners

    We found that warming has damaged some fisheries and benefited others. The losers outweighed the winners, resulting in a net 4% decline in sustainable catch potential over the last 80 years. This represents a cumulative loss of 1.4 million metric tons previously available for food and income.

    Some regions have been hit especially hard. The North Sea, with large commercial fisheries for species like Atlantic cod, haddock and herring, has experienced a 35% loss in sustainable catch potential since 1930. The waters of East Asia, neighbored by some of the fastest-growing human populations in the world, saw losses of 8% to 35% across three seas.

    Other species and regions benefited from warming. Black sea bass, a popular species among recreational anglers on the U.S. East Coast, expanded its range and catch potential as waters previously too cool for it warmed. In the Baltic Sea, juvenile herring and sprat – another small herring-like fish – have more food available to them in warm years than in cool years, and have also benefited from warming. However, these climate winners can tolerate only so much warming, and may see declines as temperatures continue to rise.

    5
    Shucking scallops in Maine, where fishery management has kept scallop numbers sustainable. Robert F. Bukaty/AP

    Management boosts fishes’ resilience

    Our work suggests three encouraging pieces of news for fish populations.

    First, well-managed fisheries, such as Atlantic scallops on the U.S. East Coast, were among the most resilient to warming. Others with a history of overfishing, such as Atlantic cod in the Irish and North seas, were among the most vulnerable. These findings suggest that preventing overfishing and rebuilding overfished populations will enhance resilience and maximize long-term food and income potential.

    Second, new research suggests that swift climate-adaptive management reforms can make it possible for fish to feed humans and generate income into the future. This will require scientific agencies to work with the fishing industry on new methods for assessing fish populations’ health, to set catch limits that account for the effects of climate change, and to establish new international institutions that keep management strong as fish migrate poleward from one nation’s waters into another’s. These institutions would be similar to the multinational organizations that manage tuna, swordfish and marlin today.

    Finally, nations will have to aggressively curb greenhouse gas emissions. Even the best fishery management reforms will be unable to compensate for the 4 degree Celsius ocean temperature increase that scientists project will occur by the end of this century if greenhouse gas emissions are not reduced.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Conversation launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 12:40 pm on August 16, 2019 Permalink | Reply
    Tags: "A cyberattack could wreak destruction comparable to a nuclear weapon", A compromised nuclear facility could result in the discharge of radioactive material chemicals or even possibly a reactor meltdown., Hackers are already laying the groundwork., In 2016 and 2017 hackers shut down major sections of the power grid in Ukraine., In 2018 unknown cybercriminals gained access throughout the United Kingdom’s electricity system; in 2019 a similar incursion may have penetrated the U.S. grid., In August 2017 a Saudi Arabian petrochemical plant was hit by hackers who tried to blow up equipment by taking control of the same types of electronics used in industrial facilities of all kinds., in early 2016 hackers took control of a U.S. treatment plant for drinking water and changed the chemical mixture used to purify the water. Fortunately this attack was thwarted., The Conversation, The FBI has even warned that hackers are targeting nuclear facilities., The U.S. military has also reportedly penetrated the computers that control Russian electrical systems., There are signs that hackers have placed malicious software inside U.S. power and water systems where it’s lying in wait ready to be triggered.   

    From The Conversation: “A cyberattack could wreak destruction comparable to a nuclear weapon” 

    From The Conversation

    1
    Digital attacks can cause havoc in different places all at the same time. Pushish Images/Shutterstock.com

    August 16, 2019
    Naomi Schalit

    People around the world may be worried about nuclear tensions rising, but I think they’re missing the fact that a major cyberattack could be just as damaging – and hackers are already laying the groundwork.

    With the U.S. and Russia pulling out of a key nuclear weapons pact and beginning to develop new nuclear weapons, tensions rising with Iran, and North Korea again test-launching missiles, the global threat to civilization is high. Some fear a new nuclear arms race.

    That threat is serious – but another could be as serious, and is less visible to the public. So far, most of the well-known hacking incidents, even those with foreign government backing, have done little more than steal data. Unfortunately, there are signs that hackers have placed malicious software inside U.S. power and water systems, where it’s lying in wait, ready to be triggered. The U.S. military has also reportedly penetrated the computers that control Russian electrical systems.

    Many intrusions already

    As someone who studies cybersecurity and information warfare, I’m concerned that a cyberattack with widespread impact, whether an intrusion in one area that spreads to others or a combination of many smaller attacks, could cause significant damage, including mass injury and death rivaling the toll of a nuclear weapon.

    Unlike a nuclear weapon, which would vaporize people within 100 feet and kill almost everyone within a half-mile, most cyberattacks would kill slowly and indirectly. People might die from a lack of food, power or gas for heat, or from car crashes resulting from a corrupted traffic light system. This could happen over a wide area, resulting in mass injury and even deaths.

    This might sound alarmist, but look at what has been happening in recent years, in the U.S. and around the world.

    In early 2016, hackers took control of a U.S. treatment plant for drinking water and changed the chemical mixture used to purify the water. Had those changes gone unnoticed, they could have led to poisonings, an unusable water supply and a lack of water.

    In 2016 and 2017, hackers shut down major sections of the power grid in Ukraine. The attack was milder than it could have been: no equipment was destroyed, even though the attackers had the ability to destroy it. Officials think it was designed to send a message. In 2018, unknown cybercriminals gained access throughout the United Kingdom’s electricity system; in 2019 a similar incursion may have penetrated the U.S. grid.

    In August 2017, a Saudi Arabian petrochemical plant was hit by hackers who tried to blow up equipment by taking control of the same types of electronics used in industrial facilities of all kinds throughout the world. Just a few months later, hackers shut down monitoring systems for oil and gas pipelines across the U.S. This primarily caused logistical problems – but it showed how an insecure contractor’s systems could potentially cause problems for primary ones.

    The FBI has even warned that hackers are targeting nuclear facilities. A compromised nuclear facility could result in the discharge of radioactive material or chemicals, or possibly even a reactor meltdown. A cyberattack could cause an event similar to the 1986 Chernobyl disaster. That explosion, caused by inadvertent error, resulted in 50 deaths and the evacuation of 120,000 people, and has left parts of the region uninhabitable for thousands of years to come.

    Mutual assured destruction

    My concern is not intended to downplay the devastating and immediate effects of a nuclear attack. Rather, it’s to point out that some of the international protections against nuclear conflicts don’t exist for cyberattacks. For instance, the idea of “mutual assured destruction” suggests that no country should launch a nuclear weapon at another nuclear-armed nation: The launch would likely be detected, and the target nation would launch its own weapons in response, destroying both nations.

    Cyberattackers have fewer inhibitions. For one thing, it’s much easier to disguise the source of a digital incursion than it is to hide where a missile blasted off from. Further, cyberwarfare can start small, targeting even a single phone or laptop. Larger attacks might target businesses, such as banks or hotels, or a government agency. But those aren’t enough to escalate a conflict to the nuclear scale.

    Nuclear-grade cyberattacks

    There are three basic scenarios for how a nuclear-grade cyberattack might develop. It could start modestly, with one country’s intelligence service stealing, deleting or compromising another nation’s military data. Successive rounds of retaliation could expand the scope of the attacks and the severity of the damage to civilian life.

    In another situation, a nation or a terrorist organization could unleash a massively destructive cyberattack – targeting several electricity utilities, water treatment facilities or industrial plants at once, or in combination with each other to compound the damage.

    Perhaps the most concerning possibility, though, is that it might happen by mistake. On several occasions, human and mechanical errors very nearly destroyed the world during the Cold War; something analogous could happen in the software and hardware of the digital realm.

    Defending against disaster

    Just as there is no way to completely protect against a nuclear attack, there is no way to fully prevent devastating cyberattacks; there are only ways to make them less likely.

    The first is that governments, businesses and regular people need to secure their systems to prevent outside intruders from finding their way in, and then exploiting their connections and access to dive deeper.

    Critical systems, like those at public utilities, transportation companies and firms that use hazardous chemicals, need to be much more secure. One analysis found that only about one-fifth of companies that use computers to control industrial machinery in the U.S. even monitor their equipment to detect potential attacks – and that in 40% of the attacks they did catch, the intruder had been accessing the system for more than a year. Another survey found that nearly three-quarters of energy companies had experienced some sort of network intrusion in the previous year.

    But none of those systems can be protected without skilled cybersecurity staff to handle the work. At present, nearly a quarter of all cybersecurity jobs in the U.S. are vacant, with more positions opening up than there are people to fill them. One recruiter has expressed concern that even some of the jobs that are filled are held by people who aren’t qualified to do them. The solution is more training and education, to teach people the skills they need to do cybersecurity work and to keep existing workers up to date on the latest threats and defense strategies.

    If the world is to hold off major cyberattacks – including some with the potential to be as damaging as a nuclear strike – it will be up to each person, each company, each government agency to work on its own and together to secure the vital systems on which people’s lives depend.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Conversation launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 8:29 am on August 14, 2019 Permalink | Reply
    Tags: "A brief astronomical history of Saturn’s amazing rings", , , , , In the four centuries since the invention of the telescope rings have also been discovered around Jupiter Uranus and Neptune., , Pioneer 11, , The Conversation, The magnificent ring system of Saturn is between 10 meters and one kilometer thick., The shepherd moons Pan; Daphnis; Atlas; Pandora and Prometheus measuring between eight and 130 kilometers across quite literally shepherd the ring particles keeping them in their present orbits.   

    From The Conversation: “A brief astronomical history of Saturn’s amazing rings” 

    From The Conversation

    August 14, 2019
    Vahe Peroomian, University of Southern California

    1
    With giant Saturn hanging in the blackness and sheltering Cassini from the Sun’s blinding glare, the spacecraft viewed the rings as never before.

    Many dream of what they would do had they a time machine. Some would travel 100 million years back in time, when dinosaurs roamed the Earth. Not many, though, would think of taking a telescope with them and, having done so, of observing Saturn and its rings.

    Whether our time-traveling astronomer would be able to observe Saturn’s rings is debatable. Have the rings, in some shape or form, existed since the beginnings of the solar system, 4.6 billion years ago, or are they a more recent addition? Had the rings even formed when the Chicxulub asteroid wiped out the dinosaurs?

    I am a space scientist with a passion for teaching physics and astronomy, and Saturn’s rings have always fascinated me as they tell the story of how the eyes of humanity were opened to the wonders of our solar system and the cosmos.

    Our view of Saturn evolves

    When Galileo first observed Saturn through his telescope in 1610, he was still basking in the fame of discovering the four moons of Jupiter. But Saturn perplexed him. Peering at the planet through his telescope, he saw it first as a planet with two very large moons, then as a lone planet, and then again, through his newer telescope in 1616, as a planet with arms or handles.

    Four decades later, Giovanni Cassini first suggested that Saturn was a ringed planet, and that what Galileo had seen were different views of Saturn’s rings. Because Saturn’s rotation axis is tilted 27 degrees relative to the plane of its orbit, the rings appear to tilt toward and away from Earth over the 29-year cycle of Saturn’s revolution about the Sun, giving humanity an ever-changing view of the rings.

    But what were the rings made of? Were they solid disks as some suggested? Or were they made up of smaller particles? As more structure became apparent in the rings, as more gaps were found, and as the motion of the rings about Saturn was observed, astronomers realized that the rings were not solid, and were perhaps made up of a large number of moonlets, or small moons. At the same time, estimates for the thickness of the rings went from Sir William Herschel’s 300 miles in 1789, to Audouin Dollfus’ much more precise estimate of less than two miles in 1966.

    Astronomers’ understanding of the rings changed dramatically with the Pioneer 11 and twin Voyager missions to Saturn.

    NASA Pioneer 11

    NASA/Voyager 1

    NASA/Voyager 2

    Voyager’s now-famous photograph of the rings, backlit by the Sun, showed for the first time that what appeared as the vast A, B and C rings in fact comprised millions of smaller ringlets.

    2
    Voyager 2 false color image of Saturn’s B and C rings showing many ringlets. NASA

    The Cassini mission to Saturn, having spent over a decade orbiting the ringed giant, gave planetary scientists even more spectacular and surprising views.

    NASA/ESA/ASI Cassini-Huygens Spacecraft

    The magnificent ring system of Saturn is between 10 meters and one kilometer thick. The combined mass of its particles, which are 99.8% ice and most of which are less than one meter in size, is about 16 quadrillion tons, less than 0.02% the mass of Earth’s Moon, and less than half the mass of Saturn’s moon Mimas. This has led some scientists to speculate that the rings are the result of the breakup of one of Saturn’s moons, or of the capture and breakup of a stray comet.

    The dynamic rings

    In the four centuries since the invention of the telescope, rings have also been discovered around Jupiter, Uranus and Neptune, the other giant planets of our solar system. The reason the giant planets are adorned with rings while Earth and the other rocky planets are not was first proposed by Édouard Roche, a French astronomer, in 1849.

    A moon and its planet are always in a gravitational dance. Earth’s moon, by pulling on opposite sides of the Earth, causes the ocean tides. Tidal forces also affect planetary moons. If a moon ventures too close to a planet, these forces can overcome the gravitational “glue” holding the moon together and tear it apart. This causes the moon to break up and spread along its original orbit, forming a ring.

    The Roche limit, the minimum safe distance for a moon’s orbit, is approximately 2.5 times the planet’s radius from the planet’s center. For enormous Saturn, this is a distance of 87,000 kilometers above its cloud tops and matches the location of Saturn’s outer F ring. For Earth, this distance is less than 10,000 kilometers above its surface. An asteroid or comet would have to venture very close to the Earth to be torn apart by tidal forces and form a ring around the Earth. Our own Moon is a very safe 380,000 kilometers away.
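
    The arithmetic here is easy to check. The sketch below applies the classical fluid-body Roche approximation, d ≈ 2.44 × R_planet × (ρ_planet/ρ_moon)^(1/3), taking the density ratio as roughly 1 for simplicity, which is what “approximately 2.5 times the planet’s radius” implies; it is a back-of-the-envelope illustration, not a rigorous calculation.

```python
# Back-of-the-envelope check of the Roche-limit distances quoted above.
# Fluid-body approximation with an assumed density ratio of ~1:
# d ≈ 2.44 * R_planet.

R_SATURN_KM = 60_268   # Saturn's equatorial radius
R_EARTH_KM = 6_371     # Earth's mean radius

d_saturn = 2.44 * R_SATURN_KM
print(f"Saturn: {d_saturn:,.0f} km from the center, "
      f"{d_saturn - R_SATURN_KM:,.0f} km above the cloud tops")
# -> ~147,000 km from the center, ~87,000 km above the cloud tops,
#    near the F ring's orbit, as noted above.

d_earth = 2.44 * R_EARTH_KM
print(f"Earth:  {d_earth - R_EARTH_KM:,.0f} km above the surface")
# -> ~9,200 km, matching 'less than 10,000 kilometers above its surface'.
```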

    3
    NASA’s Cassini spacecraft about to make one of its dives between Saturn and its innermost rings as part of the mission’s grand finale. NASA/JPL-Caltech

    The thinness of planetary rings is caused by their ever-changing nature. A ring particle whose orbit is tilted with respect to the rest of the ring will eventually collide with other ring particles. In doing so, it will lose energy and settle into the plane of the ring. Over millions of years, all such errant particles either fall away or get in line, leaving only the very thin ring system people observe today.

    During the last year of its mission, the Cassini spacecraft dived repeatedly through the 7,000 kilometer gap between the clouds of Saturn and its inner rings. These unprecedented observations made one fact very clear: The rings are constantly changing. Individual particles in the rings are continually jostled by each other. Ring particles are steadily raining down onto Saturn.

    The shepherd moons Pan, Daphnis, Atlas, Pandora and Prometheus, measuring between eight and 130 kilometers across, quite literally shepherd the ring particles, keeping them in their present orbits. Density waves, caused by the motion of shepherd moons within the rings, jostle and reshape the rings. Small moonlets are forming from ring particles that coalesce together. All this indicates that the rings are ephemeral. Every second up to 40 tons of ice from the rings rain down on Saturn’s atmosphere. That means the rings may last only several tens to hundreds of millions of years.
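
    Those two figures, the rings’ total mass and the infall rate, imply a rough upper bound on how long the rings can survive. The sketch below assumes, purely for illustration, that the maximum quoted infall rate holds steady.

```python
# Rough upper bound on the rings' lifetime from the quoted figures.
# Assumes (purely for illustration) that the peak infall rate is constant.

RING_MASS_TONS = 16e15       # 16 quadrillion metric tons of ring material
INFALL_TONS_PER_S = 40       # up to 40 tons of ice per second
SECONDS_PER_YEAR = 3.156e7

lifetime_myr = RING_MASS_TONS / INFALL_TONS_PER_S / SECONDS_PER_YEAR / 1e6
print(f"Lifetime at the peak rate: ~{lifetime_myr:.0f} million years")
# -> ~13 million years at the maximum rate; a slower average infall
#    stretches this into the tens to hundreds of millions of years.
```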

    Could a time-traveling astronomer have seen the rings 100 million years ago? One indicator for the age of the rings is their dustiness. Objects exposed to the dust permeating our solar system for long periods of time grow dustier and darker.

    Saturn’s rings are extremely bright and dust-free, seeming to indicate that they formed anywhere from 10 to 100 million years ago, if astronomers’ understanding of how icy particles gather dust is correct. One thing is for certain: the rings our time-traveling astronomer would have seen would have looked very different from the way they do today.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Conversation launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 10:38 am on June 18, 2019 Permalink | Reply
    Tags: "It’s time for Australia to commit to the kind of future it wants: CSIRO Australian National Outlook 2019", Australia’s future prosperity is at risk unless we take bold action and commit to long-term thinking., In the Slow Decline scenario Australia fails to adequately address identified challenges., The Conversation, The Outlook Vision scenario shows what could be possible if Australia meets identified challenges   

    From The Conversation: “It’s time for Australia to commit to the kind of future it wants: CSIRO Australian National Outlook 2019” 

    From The Conversation

    June 17, 2019
    James Deverell

    Australia’s future prosperity is at risk unless we take bold action and commit to long-term thinking. This is the key message contained in the Australian National Outlook 2019 (ANO 2019), a report published today by CSIRO and its partners.

    The research used a scenario approach to model different visions of Australia in 2060.

    We contrasted two core scenarios: a base case called Slow Decline, and an Outlook Vision scenario which represents what Australia could achieve. These scenarios took account of 13 different national issues, as well as two global contexts relating to trade and action on climate change.

    We found there are profound differences in long-term outcomes between these two scenarios.

    1
    In the Slow Decline scenario, Australia fails to adequately address identified challenges. CSIRO, Author provided

    2
    The Outlook Vision scenario shows what could be possible if Australia meets identified challenges. CSIRO, Author provided

    Slow decline versus a new outlook

    Australia’s living standards – as measured by Gross Domestic Product (GDP) per capita – could be 36% higher in 2060 in the Outlook Vision, compared with Slow Decline. This translates into a 90% increase in average wages (in real terms, adjusted for inflation) from today.

    Australia could maintain its world-class, highly liveable cities, while increasing its population to 41 million people by 2060. Urban congestion could be reduced, with per capita passenger vehicle travel 45% lower than today in the Outlook Vision.

    Australia could achieve net-zero emissions by 2050 while reducing household spending on electricity (relative to incomes) by up to 64%. Importantly, our modelling shows this could be achieved without significant impact on economic growth.

    Low-emissions, low-cost energy could even become a source of comparative advantage for Australia, opening up new export opportunities.

    And inflation-adjusted returns to rural landholders in Australia could triple to 2060, with the land sector contribution to GDP increasing from around 2% today to over 5%.

    At the same time, ecosystems could be restored through more biodiverse plantings and land management.

    The report, developed over the last two years, explores what Australia must do to secure a future with prosperous and globally competitive industries, inclusive and enabling communities, and sustainable natural endowments, all underpinned by strong public and civic institutions.

    ANO 2019 uses CSIRO’s integrated modelling framework to project economic, environmental and social outcomes to 2060 across multiple scenarios.

    The outlook also features input from more than 50 senior leaders drawn from Australia’s leading companies, universities and not-for-profits.

    So how do we get there?

    Achieving the outcomes in the Outlook Vision won’t be easy.

    Australia will need to address the major challenges it faces, including the rise of Asia, technology disruption, climate change, changing demographics, and declining social cohesion. This will require long-term thinking and bold action across five major “shifts”:

    industry shift
    urban shift
    energy shift
    land shift
    culture shift.

    The report outlines the major actions that will underpin each of these shifts.

    For example, the industry shift would see Australian firms adopt new technologies (such as automation and artificial intelligence) to boost productivity, which accounts for a little over half of the difference in living standards between the Outlook Vision and Slow Decline.

    Developing human capital (through education and training) and investment in high-growth, export-facing industries (such as healthcare and advanced manufacturing) each account for around 20% of the difference between the two scenarios.

    The urban shift would see Australia increase the density of its major cities by between 60% and 88%, while spreading this density across a wider cross-section of the urban landscape (such as multiple centres).

    Combining this density with a greater diversity of housing types and land uses will allow more people to live closer to high-quality jobs, education, and services.

    Enhancing transport infrastructure to support multi-centric cities, more active transport, and autonomous vehicles will alleviate congestion and enable the “30-minute city”.

    In the energy shift, across every scenario modelled, the electricity sector transitions to nearly 100% renewable generation by 2050, driven by market forces and declining electricity generation and storage costs.

    Likewise, electric vehicles are on pace to hit price-parity with petrol ones by the mid-2020s and could account for 80% of passenger vehicles by 2060.

    In addition, Australia could triple its energy productivity by 2060, meaning it would use only 6% more energy than today, despite the population growing by over 60% and GDP more than tripling.

    2
    Primary energy use in Australia under the modelled scenarios. Primary energy is the measure of energy before it has been converted or transformed, and includes electricity plus combustion of fuels in industry, commercial, residential and transport. CSIRO, Author provided
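
    The energy figures above hang together because energy productivity is GDP per unit of energy, so energy use scales as GDP growth divided by productivity growth. A quick consistency check follows; the report’s exact GDP multiplier is not stated here, so a value of 3.18 is assumed purely to illustrate the arithmetic.

```python
# Consistency check: energy use = GDP growth / energy-productivity growth.
# The report's exact GDP multiplier is not stated here; 3.18x is assumed
# ('GDP more than tripling') purely to illustrate the arithmetic.

gdp_multiplier = 3.18          # assumed
productivity_multiplier = 3.0  # energy productivity triples by 2060

energy_multiplier = gdp_multiplier / productivity_multiplier
print(f"Energy use in 2060: {100 * (energy_multiplier - 1):.0f}% above today")
# -> ~6% more energy despite >60% population growth and GDP tripling.
```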

    The land shift would require boosting agricultural productivity (through a combination of plant genomics and digital agriculture) and changing how we use our land.

    By 2060, up to 30 million hectares – or roughly half of Australia’s marginal land within more intensively farmed areas – could be profitably transitioned to carbon plantings, which would increase returns to landholders and offset emissions from other sectors.

    As much as 700 million tonnes of CO₂ equivalent could be offset in 2060, which would allow Australia to become a net exporter of carbon credits.

    A culture shift

    The last, and perhaps most important shift, is the cultural shift.

    Trust in government and industry has eroded in recent years, and Australia hasn’t escaped this trend. If these institutions, which have served Australia so well in its past, cannot regain the public’s trust, it will be difficult to achieve the long-term actions that underpin the other four shifts.

    Unfortunately, there is no silver bullet here.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Conversation launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     