Tagged: The Conversation

  • richardmitnick 7:53 am on December 30, 2019
    Tags: "Here Are 6 Reasons Climate Scientists Are Hopeful, And You Should Be, Dénes Csala- lecturer in energy system dynamics at Lancaster University, Hannah Cloke-professor of hydrology at University of Reading, Heather Alberro- associate lecturer in political ecology at Nottingham Trent University, Marc Hudson- researcher in sustainable consumption at University of Manchester, Mark Maslin- professor of earth system science at UCL, Richard Hodgkins- senior lecturer in physical geography at Loughborough University, , The Conversation, Too"   

    From The Conversation via Science Alert: “Here Are 6 Reasons Climate Scientists Are Hopeful, And You Should Be, Too” 

    From The Conversation, via Science Alert

    30 December 2019

    (Miguel Bruna/Unsplash)

    The climate breakdown continues. Over the past year, The Conversation has covered fires in the Amazon, melting glaciers in the Andes and Greenland, record CO₂ emissions, and temperatures so hot they’re pushing the human body to its thermal limits. Even the big UN climate talks were largely disappointing. But climate researchers have not given up hope.

    We asked a few Conversation authors to highlight some more positive stories from 2019.

    Costa Rica offers us a viable climate future

    Heather Alberro, associate lecturer in political ecology, Nottingham Trent University

    After decades of climate talks, including the recent COP25 in Madrid, emissions have only continued to rise. Indeed, a recent UN report noted that a fivefold increase in current national climate change mitigation efforts would be needed to meet the 1.5℃ limit on warming by 2030.

    With the radical transformations needed in our global transport, housing, agricultural and energy systems in order to help mitigate looming climate and ecological breakdown, it can be easy to lose hope.

    However, countries like Costa Rica offer us promising examples of the “possible”. The Central American nation has implemented a refreshingly ambitious plan to completely decarbonise its economy by 2050.

    In the lead-up to this, Costa Rica derived 98 percent of its electricity from renewable sources last year, even as its economy continued to grow at 3 percent. Such an example demonstrates that, with sufficient political will, it is possible to meet the daunting challenges ahead.

    Financial investors are cooling on fossil fuels

    Richard Hodgkins, senior lecturer in physical geography, Loughborough University

    Movements such as 350.org have long argued for fossil fuel divestment. They have recently been joined by institutional investors such as Climate Action 100+, which is using the influence of its US$35 trillion in managed funds to argue that minimising climate breakdown risks and maximising renewables’ growth opportunities are a fiduciary duty.

    Moody’s credit-rating agency recently flagged ExxonMobil for falling revenues despite rising expenditure, noting:

    “The negative outlook also reflects the emerging threat to oil and gas companies’ profitability […] from growing efforts by many nations to mitigate the impacts of climate change through tax and regulatory policies.”

    A more adverse financial environment for fossil fuel companies reduces the likelihood of new development in business frontier regions such as the Arctic, and indeed, major investment bank Goldman Sachs has declared that it “will decline any financing transaction that directly supports new upstream Arctic oil exploration or development”.

    We are getting much better at forecasting disaster

    Hannah Cloke, professor of hydrology, University of Reading

    In March and April 2019, two enormous tropical cyclones hit the south-east coast of Africa, killing more than 600 people and leaving nearly 2 million people in desperate need of emergency aid.

    There isn’t much that is positive about that, and there’s nothing new about cyclones.

    But this time scientists were able to provide the first early warning of the impending flood disaster by linking together accurate medium-range forecasts of the cyclone with the best ever simulations of flood risk.

    This meant that the UK government, for example, set about working with aid agencies in the region to start delivering emergency supplies to the area that would flood, all before Cyclone Kenneth had even gathered pace in the Indian Ocean.

    We know that the risk of dangerous floods is increasing as the climate continues to change. Even with ambitious action to reduce greenhouse gases, we must deal with the impact of a warmer, more chaotic world.

    We will have to continue using the best available science to prepare ourselves for whatever is likely to come over the horizon.

    Local authorities across the world are declaring a ‘climate emergency’

    Marc Hudson, researcher in sustainable consumption, University of Manchester

    More than 1,200 local authorities around the world declared a “climate emergency” in 2019. I think there are two obvious dangers: first, it invites authoritarian responses (stop breeding! Stop criticising our plans for geoengineering!). And second, an “emergency” declaration may simply be a greenwash followed by business-as-usual.

    In Manchester, where I live and research, the City Council is greenwashing. A nice declaration in July was followed by more flights for staff (to places just a few hours away by train), and further car parks and roads.

    The deadline for a “bring zero-carbon date forward?” report has been ignored.

    But these civic declarations have also kicked off a wave of civic activism, as campaigners have found city councils easier to hold to account than national governments. I’m part of an activist group called “Climate Emergency Manchester” – we inform citizens and lobby councillors.

    We’ve assessed progress so far, based on Freedom of Information Act requests, and produced a “what could be done?” report. As the council falls further behind on its promises, we will be stepping up our activity, trying to pressure it to do the right thing.

    Radical climate policy goes mainstream

    Dénes Csala, lecturer in energy system dynamics, Lancaster University

    Before the 2019 UK general election, I compared the Conservative and Labour election manifestos, from a climate and energy perspective. Although the party with the clearly weaker plan won eventually, I am still stubborn enough to be hopeful with regard to the future of political action on climate change.

    For the first time, in a major economy, a leading party’s manifesto had at its core climate action, transport electrification and full energy system decarbonisation, all on a timescale compatible with IPCC directives to avoid catastrophic climate change.

    This means the discussion that has been cooking at the highest levels since the 2015 Paris Agreement has started to boil down into tangible policies.

    Young people are on the march!

    Mark Maslin, professor of earth system science, UCL

    In 2019, public awareness of climate change rose sharply, driven by the schools strikes, Extinction Rebellion, high impact IPCC reports, improved media coverage, a BBC One climate change documentary and the UK and other governments declaring a climate emergency.

    Two recent polls suggest that over 75 percent of Americans accept humans have caused climate change.

    Empowerment of the first truly globalised generation has catalysed this new urgency. Young people can access knowledge at the click of a button. They know climate change science is real and see through the deniers’ lies because this generation does not access traditional media – in fact, they bypass it.

    The awareness and concern regarding climate change will continue to grow. Next year will be an even bigger year as the UK will chair the UN climate change negotiations in Glasgow – and expectations are running high.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Conversation launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 9:08 am on December 23, 2019
    Tags: "Planetary Confusion-Why Astronomers Keep Changing What It Means to Be A Planet", , , , , , The Conversation   

    From The Conversation: “Planetary Confusion-Why Astronomers Keep Changing What It Means to Be A Planet” 

    From The Conversation

    In 2015, NASA’s New Horizons spacecraft looked back toward the sun and captured this near-sunset view of the rugged, icy mountains and flat ice plains extending to Pluto’s horizon. NASA/JHUAPL/SwRI

    NASA/New Horizons spacecraft

    As an astronomer, the question I hear the most is why isn’t Pluto a planet anymore? More than 10 years ago, astronomers famously voted to change Pluto’s classification. But the question still comes up.

    When I am asked directly if I think Pluto is a planet, I tell everyone my answer is no. It all goes back to the origin of the word “planet.” It comes from the Greek phrase for “wandering stars.” Back in ancient times before the telescope was invented, the mathematician and astronomer Claudius Ptolemy called stars “fixed stars” to distinguish them from the seven wanderers that move across the sky in a very specific way. These seven objects are the Sun, the Moon, Mercury, Venus, Mars, Jupiter and Saturn.

    When people started using the word “planet,” they were referring to those seven objects. Even Earth was not originally called a planet – but the Sun and Moon were.

    Since people use the word “planet” today to refer to many objects beyond the original seven, it’s no surprise we argue about some of them.

    Although I am trained as an astronomer and I studied more distant objects like stars and galaxies, I have an interest in the objects in our Solar System because I teach several classes on planetary science.

    Asteroids, the first demoted planets

    The word “planet” is used to describe Uranus and Neptune, which were discovered in 1781 and 1846 respectively, because they move in the same way that the other “wandering stars” move. Like Saturn and Jupiter, they appear bigger than stars when viewed through a telescope, so they were recognized as being more like planets than stars.

    Not long after the discovery of Uranus, astronomers discovered additional wandering objects – these were named Ceres, Pallas, Juno and Vesta. At the time they were considered planets, too. Through a telescope they look like pinpoints of light and not disks. With a small telescope, even distant Neptune appears fuzzier than a star. Even though these other, new objects were called planets at first, astronomers thought they needed a different name since they appear more star-like than planet-like.

    William Herschel (who discovered Uranus) is often said to have named them “asteroids” which means “star-like,” but recently, Clifford Cunningham claimed that the person who coined that name was Charles Burney Jr., a preeminent Greek scholar.

    Today, just like the word “planet,” we use the word “asteroid” differently. Now it refers to objects that are rocky in composition, mostly found between Mars and Jupiter, mostly irregularly shaped, smaller than planets, but bigger than meteoroids. Most people assume there is a strict definition for what makes an object an asteroid. But there isn’t, just like there never was for the word “planet.”

    In the 1800s the large asteroids were called planets. Students at the time likely learned that the planets were Mercury, Venus, Earth, Mars, Ceres, Vesta, Pallas, Juno, Jupiter, Saturn, Uranus and, eventually, Neptune. Most books today write that asteroids are different than planets, but there is a debate among astronomers about whether the term “asteroid” was originally used to mean a small type of planet, rather than a different type of object altogether.

    How are moons different than planets?

    These days, scientists consider properties of these celestial objects to figure out whether an object is a planet or not. For example, you might say that shape is important; planets should be mostly spherical, while asteroids can be lumpy. As astronomers try to fix these definitions to make them more precise, we then create new problems. If we use roundness as an important distinction for objects, what should we call moons? Should moons be considered planets if they are round and asteroids if they are not round? Or are they somehow different from planets and asteroids altogether?

    I would argue we should again look to how the word “moon” came to refer to objects that orbit planets.

    When astronomers talk about the Moon of Earth, we capitalize the word “Moon” to indicate that it’s a proper name. That is, the Earth’s moon has the name, Moon. For much of human history, it was the only Moon known, so there was no need to have a word that referred to one celestial body orbiting another. This changed when Galileo discovered four large objects orbiting Jupiter. These are now called Io, Europa, Ganymede and Callisto, the moons of Jupiter.

    This makes people think the technical definition of a moon is any satellite of another object, and so we use the word for the many objects that orbit Mars, Jupiter, Saturn, Uranus, Neptune, Pluto, Eris, Makemake, Ida and a large number of other asteroids. When you start to look at the variety of moons, some, like Ganymede and Titan, are larger than Mercury. Some are similar in size to the object they orbit. Some are small and irregularly shaped, and some have odd orbits.

    So they are not all just like Earth’s Moon. If we try to fix the definition for what is a moon and how that differs from a planet and asteroid, we are likely going to have to reconsider the classification of some of these objects, too. You can argue that Titan has more properties in common with the planets than Pluto does, for example. You can also argue that every single particle in Saturn’s rings is an individual moon, which would mean that Saturn has billions upon billions of moons.

    Planets around other stars

    The most recent naming challenge astronomers face arose when they began discovering planets far from our Solar System, orbiting distant stars. These objects have been called extrasolar planets, exosolar planets or exoplanets.

    Astronomers are currently searching for exomoons orbiting exoplanets. Exoplanets are being discovered that have properties unlike the planets in our Solar System, so astronomers have started putting them in categories like “hot Jupiter,” “warm Jupiter,” “super-Earth” and “mini-Neptune.”

    Ideas for how planets form also suggest that there are planetary objects that have been flung out of orbit from their parent star. This means there are free-floating planets not orbiting any star. Should planetary objects that are flung out of a solar system also get ejected from the elite club of planets?

    When I teach, I end this discussion with a recommendation. Rather than arguing over planet, moon, asteroid and exoplanet, I think we need to do what Herschel and Burney did and coin a new word. For now, I use “world” in my class, but I do not offer a rigorous definition of what makes something a world and what does not. Instead, I tell my students that all of these objects are of interest to study.

    The Sun was once a planet

    A lot of people seem to feel that scientists wronged Pluto by changing its classification. The way I look at it, Pluto was originally called a planet only because of an accident: scientists were looking for planets beyond Neptune, and when they found Pluto they called it a planet, even though its observable properties should have led them to call it an asteroid.

    As our understanding of this object has grown, I feel like the evidence now leads me to call Pluto something besides planet. There are other scientists who disagree, feeling Pluto still should be classified as a planet.

    But remember: The Greeks started out calling the Sun a planet given how it moved across the sky. We now know that the properties of the Sun show it to belong in a very different category from the planets; it’s a star, not a planet. If we can stop calling the Sun a planet, why can’t we do the same to Pluto?

    The Kepler-90 planets have a similar configuration to our solar system with small planets found orbiting close to their star, and the larger planets found farther away. NASA/Ames Research Center/Wendy Stenzel

    See the full article here.

     
  • richardmitnick 12:11 pm on September 16, 2019
    Tags: First-year Research Immersion, SUNY Binghamton, The Conversation, Undergraduate research

    From The Conversation: “At these colleges, students begin serious research their first year” 

    From The Conversation

    September 16, 2019
    Nancy Stamp

    Rat brains to understand Parkinson’s disease. Drones to detect plastic landmines. Social media to predict acts of terrorism.

    These are just a few potentially lifesaving research projects that students have undertaken in recent years at universities in New York and Maryland. While each project is interesting by itself, there’s something different about these particular research projects – all three were carried out by undergraduates during their earliest years of college.

    That’s noteworthy because students usually have to wait until later in their college experience – even graduate school – to start doing serious research. While about one out of every five undergraduates gets some kind of research experience, the rest tend to get just “cookbook” labs that typically do not challenge students to think but merely require them to follow directions to achieve the “correct” answer.

    That’s beginning to change through First-year Research Immersion, an academic model that is part of an emergent trend meant to provide undergraduates with meaningful research experience.

    The First-year Research Immersion is a sequence of three course-based research experiences at three universities: University of Texas at Austin, University of Maryland and Binghamton University, where I teach science.

    As a scientist, researcher and professor, I see undergraduate research experience as a crucial part of college. And as the former director of my university’s First-year Research Immersion program for aspiring science or engineering majors, I also believe these research experiences better equip students to apply what they learn in different situations.

    There is evidence to support this view. For instance, a 2018 study found that undergraduate exposure to a rigorous research program “leads to success in a research STEM career.” The same study found that undergraduates who get a research experience are “more likely to pursue a Ph.D. program and generate significantly more valued products” compared to other students.

    A closer look

    Just what do these undergraduate research experiences look like?

    At the University of Texas, it involved having students identify a new way to manage and repair DNA, the stuff that makes up our genes. This in turn provides insights into preventing genetic disorders.

    At the University of Maryland, student teams investigated how social media promotes terrorism and found that it is possible to identify when conflicts on social media can escalate into physical violence.

    Binghamton student William Frazer tests a drone with a sensor to detect plastic landmines. Jonathan Cohen/Binghamton University

    Essential elements

    The First-year Research Immersion began as an experiment at the University of Texas at Austin in 2005. The University of Maryland at College Park and Binghamton University – SUNY adapted the model to their institutions in 2014.

    The program makes research experiences an essential part of a college course. These course-based research experiences have five elements. Specifically, they:

    Engage students in scientific practices, such as how and why to take accurate measurements.
    Emphasize teamwork.
    Examine broadly relevant topics, such as the spread of Lyme disease.
    Explore questions with unknown answers to expose students to the process of scientific discovery.
    Repeat measurements or experiments to verify results.

    The model consists of three courses. In the first course, students identify an interesting problem, determine what’s known and unknown and collaborate to draft a preliminary project proposal.

    In the second course, students develop laboratory research skills, begin their team project and use the results to write a full research proposal.

    In the third course, during sophomore year, students execute their proposed research, produce a report and a research poster.

    This sequence of courses is meant to give all students – regardless of their prior academic experience – the time and support they need to be successful.

    Does it work?

    The First-year Research Immersion program is showing promising results. For instance, at Binghamton, where 300 students who plan to major in engineering and science participate in the program, a survey indicated that participants got 14% more research experience than students in traditional laboratory courses.

    At the University of Maryland, where 600 freshmen participate in the program, students reported that they made substantial gains in communication, time management, collaboration and problem-solving.

    At the University of Texas at Austin, where about 900 freshmen participate in the First-Year Research Immersion program in the natural sciences, educators found that program participants showed a 23% higher graduation rate than similar students who were not in the program. And this outcome took place irrespective of students’ gender, race or ethnicity, or whether they were the first in their family to attend college or not.

    All three programs have significantly higher numbers of students from minority groups than the campuses overall. For instance, at Binghamton University, there are 22% more students from underrepresented minority groups than the campus overall, university officials reported. This has significant implications for diversity because research shows that longer, more in-depth research experiences – ones that involve faculty – help students from minority groups and low-income students stick with college.

    Akibo Watson, a neuroscience major at Binghamton University, conducts an analysis of brain tissue. Jonathan Cohen/Binghamton University

    Undergraduates who get research experience also enjoy professional and personal growth. Via online surveys and written comments, students routinely say that they improved dramatically in their self-confidence and career-building skills, such as communication, project management skills and teamwork.

    Students also report that their undergraduate research experience has helped them obtain internships or get into graduate school.

    Making research experiences available more broadly

    The challenge remains to make meaningful undergraduate research experiences available to more students.

    The First-year Research Immersion program is not the only course-based research program that is connected to faculty research.

    However, to the best of my knowledge, the First-year Research Immersion programs at my university, and in Texas and Maryland, are the only such programs for first-year students that are overseen by a research scientist and involve taking three courses in a row. This three-course sequence allows student teams to delve deeply into real problems.

    More colleges could easily follow suit. For instance, traditional introductory lab courses could be transformed into research-based courses at no additional cost. And advanced lab courses could be converted to research experiences that build on those research-based courses. In that way, students could take the research projects they started during their first and second years of college even further.

    See the full article here.

     
  • richardmitnick 9:52 am on August 22, 2019
    Tags: "Don’t ban new technologies – experiment with them carefully", Autonomous cars, Environmental pollution, Lime scooter company, San Francisco’s ban on municipal use of facial recognition technologies, The Conversation   

    From The Conversation: “Don’t ban new technologies – experiment with them carefully” 

    From The Conversation

    August 22, 2019
    Ryan Muldoon, SUNY Buffalo

    For many years, Facebook’s internal slogan was “move fast and break things.” And that’s what the company did – along with most other Silicon Valley startups and the venture capitalists who fund them. Their general attitude is one of asking for forgiveness after the fact, rather than for permission in advance. Though this can allow for some bad behavior, it’s probably the right attitude, philosophically speaking.

    It’s true that the try-first mindset has frustrated the public. Take the Lime scooter company, for instance.

    Lime scooter company

    The company launched its scooter sharing service in multiple cities without asking permission from local governments. Its electric scooters don’t need base stations or parking docks, so the company and its customers can leave them anywhere for the next person to pick up – even if that’s in the middle of a sidewalk. This general disruption has led to calls to ban the scooters in cities around the country.

    Scooters are not alone. Ridesharing services, autonomous cars, artificial intelligence systems and Amazon’s cashless stores have also all been targets of bans (or proposed bans) in different states and municipalities before they’ve even gotten off the ground.

    Autonomous cars. The Conversation

    What these efforts have in common is what philosophers like me call the “precautionary principle,” the idea that new technologies, behaviors or policies should be banned until their supporters can demonstrate that they will not result in any significant harms. It’s the same basic idea Hippocrates had in ancient Greece: Doctors should “do no harm” to patients.

    The precautionary principle entered the political conversation in the 1980s in the context of environmental protection. Damage to the environment is hard – if not impossible – to reverse, so it’s prudent to seek to prevent harm from happening in the first place. But as I see it, that’s not the right way to look at most new technologies. New technologies and services aren’t creating irreversible damage, even though they do generate some harms.

    Environmental pollution is so harmful and hard to clean up that precautions are useful. imrankadir/Shutterstock.com

    Precaution has its place

    As a general concept, the precautionary principle is essentially conservative. It favors existing technologies, even if the new ones – the ones that face preemptive bans – are safer overall.

    This approach also runs counter to the most basic idea of liberalism, in which people are broadly allowed to do what they want, unless there’s a rule against it. This is limited only when our right to free action interferes with someone else’s rights. The precautionary principle reverses this, banning people from doing what they want, unless it is specifically allowed.

    The precautionary principle makes sense when people are talking about some issues, like the environment or public health. It’s easier to avoid the problems of air pollution or dumping trash in the ocean than trying to clean up afterward. Similarly, giving children drinking water that’s contaminated with lead has effects that aren’t reversible. The children simply must deal with the health effects of their exposure for the rest of their lives.

    But as much of a nuisance as dockless scooters might be, they aren’t the same as poisoned water.

    Managing the effects

    Of course, dockless scooters, autonomous cars and a whole host of new technologies do generate real harms. A Consumer Reports investigation in early 2019 found more than 1,500 injuries from electric scooters since the dockless companies were founded. That’s in addition to the more common nuisance of having to step over scooters carelessly left in the middle of the sidewalk – and the difficulties people using wheelchairs, crutches, strollers or walkers may have in getting around them.

    Those harms are not nothing, and can help motivate arguments for banning scooters. After all, they can’t hurt anyone if they’re not allowed. What’s missing from those figures, however, is how many of those people riding scooters would have gotten into a car instead. Cars are far more dangerous and far worse for the environment.

    Yet the precautionary principle isn’t right for cars, either. As the number of autonomous cars on the road climbs, they’ll be involved in an increasing number of crashes, which will no doubt get lots of media attention.

    It is worth keeping in mind that autonomous cars will have been a wildly successful technology even if they are involved in millions of crashes every year, so long as they improve on the 6.5 million crashes that seriously injured 1.9 million people in 2017.


    A look at the precautionary principle in environmental regulation.

    Disruption brings benefits too

    It may also be helpful to remember that dockless scooters and ridesharing apps and any other technology that displaces existing methods can really only become a nuisance if a lot of people use them – that is, if many people find them valuable. Injuries from scooters, and the number of scooters left lying around, have increased because the number of people using them has skyrocketed. Those 1,500 reported injuries are from 38.5 million rides.

    This is not, of course, to say that these technologies and the firms that produce them should go unregulated. Indeed, a number of these firms have behaved quite poorly, and have legitimately created some harms, which should be regulated.

    But instead of preemptively banning things, I suggest continuing to rely on the standard approach in the liberal tradition: See what kinds of harms arise, handle the early cases via the court system, and then consider whether a pattern of harms emerges that would be better handled upfront by a new or revised regulation. The Consumer Product Safety Commission, which looks out for dangerous consumer goods and holds manufacturers to account, is an example of this.

    Indeed, laws and regulations already cover littering, abandoned vehicles, negligence and assault. New technologies may just introduce new ways of generating the same old harms, ones that are already reasonably well regulated. Genuinely new situations can of course arise: San Francisco’s ban on municipal use of facial recognition technologies may well be sensible, as people quite reasonably can democratically decide that the state shouldn’t be able to track their every move. People might well decide that companies shouldn’t be able to either.

    Silicon Valley’s CEOs aren’t always sympathetic characters. And “disruption” really can be disruptive. But liberalism is about innovation and experimentation and finding new solutions to humanity’s problems. Banning new technologies – even ones as trivial as dockless scooters – embodies a conservatism that denies that premise. A lot of new ideas aren’t great. A handful are really useful. It’s hard to tell which is which until we try them out a bit.

    See the full article here.

     
  • richardmitnick 10:09 am on August 19, 2019
    Tags: "Ocean warming has fisheries on the move helping some but hurting more", , , , , , The Conversation   

    From The Conversation: “Ocean warming has fisheries on the move, helping some but hurting more” 

    From The Conversation

    August 19, 2019
    Chris Free, UCSB

    Atlantic Cod on Ice. Alamy. Cod fisheries in the North Sea and Irish Sea are declining due to overfishing and climate change.

    For 100 years, climate change has been steadily warming the ocean, which absorbs most of the heat trapped by greenhouse gases in the atmosphere. This warming is altering marine ecosystems and having a direct impact on fish populations. About half of the world’s population relies on fish as a vital source of protein, and the fishing industry employs more than 56 million people worldwide.

    My recent study [Science] with colleagues from Rutgers University and the U.S. National Oceanic and Atmospheric Administration found that ocean warming has already impacted global fish populations. We found that some populations benefited from warming, but more of them suffered.


    Overall, ocean warming reduced catch potential – the greatest amount of fish that can be caught year after year – by a net 4% over the past 80 years. In some regions, the effects of warming have been much larger. The North Sea, which has large commercial fisheries, and the seas of East Asia, which support some of the fastest-growing human populations, experienced losses of 15% to 35%.

    The reddish and brown circles represent fish populations whose maximum sustainable yields have dropped as the ocean has warmed. The darkest tones represent extremes of 35 percent. Blueish colors represent fish yields that increased in warmer waters. Chris Free, CC BY-ND

    Although ocean warming has already challenged the ability of ocean fisheries to provide food and income, swift reductions in greenhouse gas emissions and reforms to fisheries management could lessen many of the negative impacts of continued warming.

    How and why does ocean warming affect fish?

    My collaborators and I like to say that fish are like Goldilocks: They don’t want their water too hot or too cold, but just right.

    Put another way, most fish species have evolved narrow temperature tolerances. Supporting the cellular machinery necessary to tolerate wider temperatures demands a lot of energy. This evolutionary strategy saves energy when temperatures are “just right,” but it becomes a problem when fish find themselves in warming water. As their bodies begin to fail, they must divert energy from searching for food or avoiding predators to maintaining basic bodily functions and searching for cooler waters.

    Thus, as the oceans warm, fish move to track their preferred temperatures. Most fish are moving poleward or into deeper waters. For some species, warming expands their ranges. In other cases it contracts their ranges by reducing the amount of ocean they can thermally tolerate. These shifts change where fish go, their abundance and their catch potential.

    Warming can also modify the availability of key prey species. For example, if warming causes zooplankton – small invertebrates at the bottom of the ocean food web – to bloom early, they may not be available when juvenile fish need them most. Alternatively, warming can sometimes enhance the strength of zooplankton blooms, thereby increasing the productivity of juvenile fish.

    Understanding how the complex impacts of warming on fish populations balance out is crucial for projecting how climate change could affect the ocean’s potential to provide food and income for people.

    Impacts of historical warming on marine fisheries

    Sustainable fisheries are like healthy bank accounts. If people live off the interest and don’t overly deplete the principal, both people and the bank thrive. If a fish population is overfished, the population’s “principal” shrinks too much to generate high long-term yields.

    Similarly, stresses on fish populations from environmental change can reduce population growth rates, much as an interest rate reduction reduces the growth rate of savings in a bank account.

    In our study we combined maps of historical ocean temperatures with estimates of historical fish abundance and exploitation. This allowed us to assess how warming has affected those interest rates and returns from the global fisheries bank account.
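    To make the “bank account” analogy concrete, here is a minimal sketch in Python of a logistic surplus-production model, the textbook way of relating a population’s growth rate to the yield it can sustain. The model choice and the numbers are illustrative assumptions for this post, not values or code from the study.

    # Toy logistic surplus-production model: the maximum sustainable yield
    # (MSY) is r*K/4, so anything that lowers the intrinsic growth rate r
    # lowers the sustainable "interest" that can be harvested each year.
    # All numbers below are hypothetical, chosen only for illustration.

    def msy(r, carrying_capacity):
        """Maximum sustainable yield of a logistic surplus-production model."""
        return r * carrying_capacity / 4.0

    K = 1_000_000      # carrying capacity in tonnes (hypothetical)
    r_baseline = 0.40  # intrinsic growth rate in a favourable climate (hypothetical)
    r_warmed = 0.36    # growth rate reduced by 10% under warming (hypothetical)

    loss = msy(r_baseline, K) - msy(r_warmed, K)
    print(f"MSY before warming: {msy(r_baseline, K):,.0f} tonnes/yr")
    print(f"MSY after warming:  {msy(r_warmed, K):,.0f} tonnes/yr")
    print(f"Lost catch potential: {loss:,.0f} tonnes/yr ({loss / msy(r_baseline, K):.0%})")

    In this toy setup, a 10 percent drop in the growth rate translates directly into a 10 percent drop in sustainable catch; real populations and the models fitted in the study are more complicated, but the direction of the effect is the same.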

    Losers outweigh winners

    We found that warming has damaged some fisheries and benefited others. The losers outweighed the winners, resulting in a net 4% decline in sustainable catch potential over the last 80 years. This represents a cumulative loss of 1.4 million metric tons previously available for food and income.

    Some regions have been hit especially hard. The North Sea, with large commercial fisheries for species like Atlantic cod, haddock and herring, has experienced a 35% loss in sustainable catch potential since 1930. The waters of East Asia, neighbored by some of the fastest-growing human populations in the world, saw losses of 8% to 35% across three seas.

    Other species and regions benefited from warming. Black sea bass, a popular species among recreational anglers on the U.S. East Coast, expanded its range and catch potential as waters previously too cool for it warmed. In the Baltic Sea, juvenile herring and sprat – another small herring-like fish – have more food available to them in warm years than in cool years, and have also benefited from warming. However, these climate winners can tolerate only so much warming, and may see declines as temperatures continue to rise.

    Shucking scallops in Maine, where fishery management has kept scallop numbers sustainable. Robert F. Bukaty/AP

    Management boosts fishes’ resilience

    Our work suggests three encouraging pieces of news for fish populations.

    First, well-managed fisheries, such as Atlantic scallops on the U.S. East Coast, were among the most resilient to warming. Others with a history of overfishing, such as Atlantic cod in the Irish and North seas, were among the most vulnerable. These findings suggest that preventing overfishing and rebuilding overfished populations will enhance resilience and maximize long-term food and income potential.

    Second, new research suggests that swift climate-adaptive management reforms can make it possible for fish to feed humans and generate income into the future. This will require scientific agencies to work with the fishing industry on new methods for assessing fish populations’ health, set catch limits that account for the effects of climate change and establish new international institutions to ensure that management remains strong as fish migrate poleward from one nation’s waters into another’s. These agencies would be similar to multinational organizations that manage tuna, swordfish and marlin today.

    Finally, nations will have to aggressively curb greenhouse gas emissions. Even the best fishery management reforms will be unable to compensate for the 4 degree Celsius ocean temperature increase that scientists project will occur by the end of this century if greenhouse gas emissions are not reduced.

    See the full article here.

     
  • richardmitnick 12:40 pm on August 16, 2019
    Tags: "A cyberattack could wreak destruction comparable to a nuclear weapon", A compromised nuclear facility could result in the discharge of radioactive material chemicals or even possibly a reactor meltdown., Hackers are already laying the groundwork., In 2016 and 2017 hackers shut down major sections of the power grid in Ukraine., In 2018 unknown cybercriminals gained access throughout the United Kingdom’s electricity system; in 2019 a similar incursion may have penetrated the U.S. grid., In August 2017 a Saudi Arabian petrochemical plant was hit by hackers who tried to blow up equipment by taking control of the same types of electronics used in industrial facilities of all kinds., in early 2016 hackers took control of a U.S. treatment plant for drinking water and changed the chemical mixture used to purify the water. Fortunately this attack was thwarted., The Conversation, The FBI has even warned that hackers are targeting nuclear facilities., The U.S. military has also reportedly penetrated the computers that control Russian electrical systems., There are signs that hackers have placed malicious software inside U.S. power and water systems where it’s lying in wait ready to be triggered.   

    From The Conversation: “A cyberattack could wreak destruction comparable to a nuclear weapon” 

    From The Conversation

    Digital attacks can cause havoc in different places all at the same time. Pushish Images/Shutterstock.com

    August 16, 2019
    Naomi Schalit

    People around the world may be worried about nuclear tensions rising, but I think they’re missing the fact that a major cyberattack could be just as damaging – and hackers are already laying the groundwork.

    With the U.S. and Russia pulling out of a key nuclear weapons pact – and beginning to develop new nuclear weapons – plus Iran tensions and North Korea again test-launching missiles, the global threat to civilization is high. Some fear a new nuclear arms race.

    That threat is serious – but another could be as serious, and is less visible to the public. So far, most of the well-known hacking incidents, even those with foreign government backing, have done little more than steal data. Unfortunately, there are signs that hackers have placed malicious software inside U.S. power and water systems, where it’s lying in wait, ready to be triggered. The U.S. military has also reportedly penetrated the computers that control Russian electrical systems.

    Many intrusions already

    As someone who studies cybersecurity and information warfare, I’m concerned that a cyberattack with widespread impact, an intrusion in one area that spreads to others or a combination of lots of smaller attacks, could cause significant damage, including mass injury and death rivaling the death toll of a nuclear weapon.

    Unlike a nuclear weapon, which would vaporize people within 100 feet and kill almost everyone within a half-mile, the death toll from most cyberattacks would be slower. People might die from a lack of food, power or gas for heat or from car crashes resulting from a corrupted traffic light system. This could happen over a wide area, resulting in mass injury and even deaths.

    This might sound alarmist, but look at what has been happening in recent years, in the U.S. and around the world.

    In early 2016, hackers took control of a U.S. treatment plant for drinking water and changed the chemical mixture used to purify the water. If those changes had gone unnoticed, the attack could have led to poisonings, an unusable water supply and a lack of water.

    In 2016 and 2017, hackers shut down major sections of the power grid in Ukraine. This attack was milder than it could have been, as no equipment was destroyed during it, despite the ability to do so. Officials think it was designed to send a message. In 2018, unknown cybercriminals gained access throughout the United Kingdom’s electricity system; in 2019 a similar incursion may have penetrated the U.S. grid.

    In August 2017, a Saudi Arabian petrochemical plant was hit by hackers who tried to blow up equipment by taking control of the same types of electronics used in industrial facilities of all kinds throughout the world. Just a few months later, hackers shut down monitoring systems for oil and gas pipelines across the U.S. This primarily caused logistical problems – but it showed how an insecure contractor’s systems could potentially cause problems for primary ones.

    The FBI has even warned that hackers are targeting nuclear facilities. A compromised nuclear facility could result in the discharge of radioactive material, chemicals, or possibly even a reactor meltdown. A cyberattack could cause an event similar to the incident in Chernobyl. That explosion, caused by inadvertent error, resulted in 50 deaths and the evacuation of 120,000 people, and has left parts of the region uninhabitable for thousands of years.

    Mutual assured destruction

    My concern is not intended to downplay the devastating and immediate effects of a nuclear attack. Rather, it’s to point out that some of the international protections against nuclear conflicts don’t exist for cyberattacks. For instance, the idea of “mutual assured destruction” suggests that no country should launch a nuclear weapon at another nuclear-armed nation: The launch would likely be detected, and the target nation would launch its own weapons in response, destroying both nations.

    Cyberattackers have fewer inhibitions. For one thing, it’s much easier to disguise the source of a digital incursion than it is to hide where a missile blasted off from. Further, cyberwarfare can start small, targeting even a single phone or laptop. Larger attacks might target businesses, such as banks or hotels, or a government agency. But those aren’t enough to escalate a conflict to the nuclear scale.

    Nuclear grade cyberattacks

    There are three basic scenarios for how a nuclear grade cyberattack might develop. It could start modestly, with one country’s intelligence service stealing, deleting or compromising another nation’s military data. Successive rounds of retaliation could expand the scope of the attacks and the severity of the damage to civilian life.

    In another situation, a nation or a terrorist organization could unleash a massively destructive cyberattack – targeting several electricity utilities, water treatment facilities or industrial plants at once, or in combination with each other to compound the damage.

    Perhaps the most concerning possibility, though, is that it might happen by mistake. On several occasions, human and mechanical errors very nearly destroyed the world during the Cold War; something analogous could happen in the software and hardware of the digital realm.

    Defending against disaster

    Just as there is no way to completely protect against a nuclear attack, there are only ways to make devastating cyberattacks less likely.

    The first is that governments, businesses and regular people need to secure their systems to prevent outside intruders from finding their way in, and then exploiting their connections and access to dive deeper.

    Critical systems, like those at public utilities, transportation companies and firms that use hazardous chemicals, need to be much more secure. One analysis found that only about one-fifth of companies that use computers to control industrial machinery in the U.S. even monitor their equipment to detect potential attacks – and that in 40% of the attacks they did catch, the intruder had been accessing the system for more than a year. Another survey found that nearly three-quarters of energy companies had experienced some sort of network intrusion in the previous year.

    But all those systems can’t be protected without skilled cybersecurity staffs to handle the work. At present, nearly a quarter of all cybersecurity jobs in the U.S. are vacant, with more positions opening up than there are people to fill them. One recruiter has expressed concern that even some of the jobs that are filled are held by people who aren’t qualified to do them. The solution is more training and education, to teach people the skills they need to do cybersecurity work, and to keep existing workers up to date on the latest threats and defense strategies.

    If the world is to hold off major cyberattacks – including some with the potential to be as damaging as a nuclear strike – it will be up to each person, each company, each government agency to work on its own and together to secure the vital systems on which people’s lives depend.

    See the full article here.

     
  • richardmitnick 8:29 am on August 14, 2019
    Tags: "A brief astronomical history of Saturn’s amazing rings", , , , , In the four centuries since the invention of the telescope rings have also been discovered around Jupiter Uranus and Neptune., , Pioneer 11, , The Conversation, The magnificent ring system of Saturn is between 10 meters and one kilometer thick., The shepherd moons Pan; Daphnis; Atlas; Pandora and Prometheus measuring between eight and 130 kilometers across quite literally shepherd the ring particles keeping them in their present orbits.   

    From The Conversation: “A brief astronomical history of Saturn’s amazing rings” 

    From The Conversation

    August 14, 2019
    Vahe Peroomian, University of Southern California

    With giant Saturn hanging in the blackness and sheltering Cassini from the Sun’s blinding glare, the spacecraft viewed the rings as never before.

    Many dream of what they would do had they a time machine. Some would travel 100 million years back in time, when dinosaurs roamed the Earth. Not many, though, would think of taking a telescope with them, and if, having done so, observe Saturn and its rings.

    Whether our time-traveling astronomer would be able to observe Saturn’s rings is debatable. Have the rings, in some shape or form, existed since the beginnings of the solar system, 4.6 billion years ago, or are they a more recent addition? Had the rings even formed when the Chicxulub asteroid wiped out the dinosaurs?

    I am a space scientist with a passion for teaching physics and astronomy, and Saturn’s rings have always fascinated me as they tell the story of how the eyes of humanity were opened to the wonders of our solar system and the cosmos.

    Our view of Saturn evolves

    When Galileo first observed Saturn through his telescope in 1610, he was still basking in the fame of discovering the four moons of Jupiter. But Saturn perplexed him. Peering at the planet through his telescope, it first looked to him as a planet with two very large moons, then as a lone planet, and then again through his newer telescope, in 1616, as a planet with arms or handles.

    Four decades later, Giovanni Cassini first suggested that Saturn was a ringed planet, and that what Galileo had seen were different views of Saturn’s rings. Because of the 27-degree tilt of Saturn’s rotation axis relative to the plane of its orbit, the rings appear to tilt toward and away from Earth over the 29-year cycle of Saturn’s revolution about the Sun, giving humanity an ever-changing view of the rings.

    But what were the rings made of? Were they solid disks as some suggested? Or were they made up of smaller particles? As more structure became apparent in the rings, as more gaps were found, and as the motion of the rings about Saturn was observed, astronomers realized that the rings were not solid, and were perhaps made up of a large number of moonlets, or small moons. At the same time, estimates for the thickness of the rings went from Sir William Herschel’s 300 miles in 1789, to Audouin Dollfus’ much more precise estimate of less than two miles in 1966.

    Astronomers’ understanding of the rings changed dramatically with the Pioneer 11 and twin Voyager missions to Saturn.

    NASA Pioneer 11

    NASA/Voyager 1

    NASA/Voyager 2

    Voyager’s now famous photograph of the rings, backlit by the Sun, showed for the first time that what appeared as the vast A, B and C rings in fact comprised millions of smaller ringlets.

    Voyager 2 false color image of Saturn’s B and C rings showing many ringlets. NASA

    The Cassini mission to Saturn, having spent over a decade orbiting the ringed giant, gave planetary scientists even more spectacular and surprising views.

    NASA/ESA/ASI Cassini-Huygens Spacecraft

    The magnificent ring system of Saturn is between 10 meters and one kilometer thick. The combined mass of its particles, which are 99.8% ice and most of which are less than one meter in size, is about 16 quadrillion tons, less than 0.02% the mass of Earth’s Moon, and less than half the mass of Saturn’s moon Mimas. This has led some scientists to speculate whether the rings are a result of the breakup of one of Saturn’s moons or the capture and breakup of a stray comet.

    The dynamic rings

    In the four centuries since the invention of the telescope, rings have also been discovered around Jupiter, Uranus and Neptune, the giant planets of our solar system. The reason why the giant planets are adorned with rings while Earth and the other rocky planets are not was first proposed in 1849 by the French astronomer Eduard Roche.

    A moon and its planet are always in a gravitational dance. Earth’s moon, by pulling on opposite sides of the Earth, causes the ocean tides. Tidal forces also affect planetary moons. If a moon ventures too close to a planet, these forces can overcome the gravitational “glue” holding the moon together and tear it apart. This causes the moon to break up and spread along its original orbit, forming a ring.

    The Roche limit, the minimum safe distance for a moon’s orbit, is approximately 2.5 times the planet’s radius from the planet’s center. For enormous Saturn, this is a distance of 87,000 kilometers above its cloud tops and matches the location of Saturn’s outer F ring. For Earth, this distance is less than 10,000 kilometers above its surface. An asteroid or comet would have to venture very close to the Earth to be torn apart by tidal forces and form a ring around the Earth. Our own Moon is a very safe 380,000 kilometers away.
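
    As a back-of-the-envelope check (a sketch added here, not taken from the article), those distances follow from applying a Roche coefficient of about 2.44 planetary radii, measured from the planet’s center, assuming the orbiting body and the planet have comparable densities; the article rounds this factor to roughly 2.5.

    # Back-of-the-envelope Roche-limit check (illustrative sketch, not from the article).
    # Uses the classical fluid Roche coefficient of ~2.44 planetary radii and assumes
    # the orbiting body and the planet have comparable densities.
    ROCHE_COEFF = 2.44

    BODIES = {
        "Saturn": 60_268,   # equatorial radius in kilometers
        "Earth": 6_371,     # mean radius in kilometers
    }

    for name, radius_km in BODIES.items():
        limit_from_center = ROCHE_COEFF * radius_km
        limit_above_surface = limit_from_center - radius_km
        print(f"{name}: ~{limit_from_center:,.0f} km from the center, "
              f"~{limit_above_surface:,.0f} km above the surface or cloud tops")

    For Saturn this reproduces the roughly 87,000 kilometers above the cloud tops quoted above; for Earth it gives a little over 9,000 kilometers above the surface, comfortably inside the Moon’s 380,000-kilometer orbit.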

    3
    NASA’s Cassini spacecraft about to make one of its dives between Saturn and its innermost rings as part of the mission’s grand finale. NASA/JPL-Caltech

    The thinness of planetary rings is caused by their ever-changing nature. A ring particle whose orbit is tilted with respect to the rest of the ring will eventually collide with other ring particles. In doing so, it will lose energy and settle into the plane of the ring. Over millions of years, all such errant particles either fall away or get in line, leaving only the very thin ring system people observe today.

    During the last year of its mission, the Cassini spacecraft dived repeatedly through the 7,000 kilometer gap between the clouds of Saturn and its inner rings. These unprecedented observations made one fact very clear: The rings are constantly changing. Individual particles in the rings are continually jostled by each other. Ring particles are steadily raining down onto Saturn.

    The shepherd moons Pan, Daphnis, Atlas, Pandora and Prometheus, measuring between eight and 130 kilometers across, quite literally shepherd the ring particles, keeping them in their present orbits. Density waves, caused by the motion of shepherd moons within the rings, jostle and reshape the rings. Small moonlets are forming from ring particles that coalesce together. All this indicates that the rings are ephemeral. Every second up to 40 tons of ice from the rings rain down on Saturn’s atmosphere. That means the rings may last only several tens to hundreds of millions of years.
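
    To see where an estimate like that comes from, here is my own rough arithmetic rather than the article’s calculation: dividing the rings’ total mass by a steady loss rate gives an order-of-magnitude lifetime. The 40 tons per second is the article’s upper bound; the 5 tons per second figure is an assumption included only to show how sensitive the answer is to the assumed rate.

    # Order-of-magnitude ring lifetime (rough arithmetic, not the article's model).
    # The ~16 quadrillion-ton ring mass and the 40 tons/second upper bound come from
    # the article; the 5 tons/second figure is an assumed slower rate for comparison.
    RING_MASS_TONS = 16e15
    SECONDS_PER_YEAR = 3.156e7

    for rate_tons_per_second in (40, 5):
        lifetime_years = RING_MASS_TONS / (rate_tons_per_second * SECONDS_PER_YEAR)
        print(f"{rate_tons_per_second} tons/s -> ~{lifetime_years / 1e6:.0f} million years")

    At a constant 40 tons per second the rings would last only about 13 million years; slower or intermittent loss stretches the estimate into the tens to hundreds of millions of years quoted above.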

    Could a time-traveling astronomer have seen the rings 100 million years ago? One indicator for the age of the rings is their dustiness. Objects exposed to the dust permeating our solar system for long periods of time grow dustier and darker.

    Saturn’s rings are extremely bright and dust-free, seeming to indicate that they formed anywhere from 10 to 100 million years ago, if astronomers’ understanding of how icy particles gather dust is correct. One thing is for certain: the rings our time-traveling astronomer would have seen would have looked very different from the way they do today.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Conversation launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 10:38 am on June 18, 2019 Permalink | Reply
    Tags: "It’s time for Australia to commit to the kind of future it wants: CSIRO Australian National Outlook 2019", Australia’s future prosperity is at risk unless we take bold action and commit to long-term thinking., In the Slow Decline scenario Australia fails to adequately address identified challenges., The Conversation, The Outlook Vision scenario shows what could be possible if Australia meets identified challenges   

    From The Conversation: “It’s time for Australia to commit to the kind of future it wants: CSIRO Australian National Outlook 2019” 

    Conversation
    From The Conversation

    June 17, 2019
    James Deverell

    Australia’s future prosperity is at risk unless we take bold action and commit to long-term thinking. This is the key message contained in the Australian National Outlook 2019 (ANO 2019), a report published today by CSIRO and its partners.

    The research used a scenario approach to model different visions of Australia in 2060.

    We contrasted two core scenarios: a base case called Slow Decline, and an Outlook Vision scenario, which represents what Australia could achieve. These scenarios took into account 13 different national issues, as well as two global contexts relating to trade and action on climate change.

    We found there are profound differences in long-term outcomes between these two scenarios.

    1
    In the Slow Decline scenario, Australia fails to adequately address identified challenges. CSIRO, Author provided

    2
    The Outlook Vision scenario shows what could be possible if Australia meets identified challenges. CSIRO, Author provided

    Slow decline versus a new outlook

    Australia’s living standards – as measured by Gross Domestic Product (GDP) per capita – could be 36% higher in 2060 in the Outlook Vision, compared with Slow Decline. This translates into a 90% increase in average wages (in real terms, adjusted for inflation) from today.

    Australia could maintain its world-class, highly liveable cities, while increasing its population to 41 million people by 2060. Urban congestion could be reduced, with per capita passenger vehicle travel 45% lower than today in the Outlook Vision.

    Australia could achieve net-zero emissions by 2050 while reducing household spending on electricity (relative to incomes) by up to 64%. Importantly, our modelling shows this could be achieved without a significant impact on economic growth.

    Low-emissions, low-cost energy could even become a source of comparative advantage for Australia, opening up new export opportunities.

    And inflation-adjusted returns to rural landholders in Australia could triple to 2060, with the land sector contribution to GDP increasing from around 2% today to over 5%.

    At the same time, ecosystems could be restored through more biodiverse plantings and land management.

    The report, developed over the last two years, explores what Australia must do to secure a future with prosperous and globally competitive industries, inclusive and enabling communities, and sustainable natural endowments, all underpinned by strong public and civic institutions.

    ANO 2019 uses CSIRO’s integrated modelling framework to project economic, environmental and social outcomes to 2060 across multiple scenarios.

    The outlook also features input from more than 50 senior leaders drawn from Australia’s leading companies, universities and not-for-profits.

    So how do we get there?

    Achieving the outcomes in the Outlook Vision won’t be easy.

    Australia will need to address the major challenges it faces, including the rise of Asia, technology disruption, climate change, changing demographics, and declining social cohesion. This will require long-term thinking and bold action across five major “shifts”:

    industry shift
    urban shift
    energy shift
    land shift
    culture shift.

    The report outlines the major actions that will underpin each of these shifts.

    For example, the industry shift would see Australian firms adopt new technologies (such as automation and artificial intelligence) to boost productivity, which accounts for a little over half of the difference in living standards between the Outlook Vision and Slow Decline.

    Developing human capital (through education and training) and investment in high-growth, export-facing industries (such as healthcare and advanced manufacturing) each account for around 20% of the difference between the two scenarios.

    The urban shift would see Australia increase the density of its major cities by between 60% and 88%, while spreading this density across a wider cross-section of the urban landscape (such as multiple centres).

    Combining this density with a greater diversity of housing types and land uses will allow more people to live closer to high-quality jobs, education, and services.

    Enhancing transport infrastructure to support multi-centric cities, more active transport, and autonomous vehicles will alleviate congestion and enable the “30-minute city”.

    In the energy shift, across every scenario modelled, the electricity sector transitions to nearly 100% renewable generation by 2050, driven by market forces and declining electricity generation and storage costs.

    Likewise, electric vehicles are on pace to hit price-parity with petrol ones by the mid-2020s and could account for 80% of passenger vehicles by 2060.

    In addition, Australia could triple its energy productivity by 2060, meaning it would use only 6% more energy than today, despite the population growing by over 60% and GDP more than tripling.
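
    The arithmetic behind that claim is worth spelling out (a sketch added here for illustration, not taken from the report): energy productivity is GDP per unit of energy used, so if GDP grows only slightly more than productivity does, energy use barely moves. The exact GDP multiple is not given beyond “more than tripling”, so the 3.18 figure below is an assumption chosen to reproduce the quoted 6% rise.

    # Quick consistency check of the energy-productivity claim (added arithmetic).
    # Energy productivity = GDP / energy, so energy = GDP / productivity.
    gdp_multiple = 3.18            # assumption: GDP "more than triples" by 2060
    productivity_multiple = 3.0    # report: energy productivity triples by 2060

    energy_multiple = gdp_multiple / productivity_multiple
    print(f"Energy use in 2060 ~ {energy_multiple:.2f}x today, "
          f"i.e. about {(energy_multiple - 1) * 100:.0f}% more")   # ~6% more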

    2
    Primary energy use in Australia under the modelled scenarios. Primary energy is the measure of energy before it has been converted or transformed, and includes electricity plus combustion of fuels in industry, commercial, residential and transport. CSIRO, Author provided

    The land shift would require boosting agricultural productivity (through a combination of plant genomics and digital agriculture) and changing how we use our land.

    By 2060, up to 30 million hectares – or roughly half of Australia’s marginal land within more intensively farmed areas – could be profitably transitioned to carbon plantings, which would increase returns to landholders and offset emissions from other sectors.

    As much as 700 million tonnes of CO₂ equivalent could be offset in 2060, which would allow Australia to become a net exporter of carbon credits.

    A culture shift

    The last, and perhaps most important, shift is the cultural shift.

    Trust in government and industry has eroded in recent years, and Australia hasn’t escaped this trend. If these institutions, which have served Australia so well in its past, cannot regain the public’s trust, it will be difficult to achieve the long-term actions that underpin the other four shifts.

    Unfortunately, there is no silver bullet here.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Conversation launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 11:55 am on June 6, 2019 Permalink | Reply
    Tags: "The tell-tale clue to how meteorites were made, 90% of meteorites are called “chondrites” because they are full of mysterious tiny spheres of rock known as “chondrules.”, About 10% of meteorites are pure iron., Arizona meteor crater, , , at the birth of the solar system", , , Radioactive dating of hundreds of chondrules shows that they formed between 1.8 and 4 million years after the beginning of the solar system – some 4.6 billion years ago., Semarkona meteorite, The Conversation, The Earth is pummeled with rocks incessantly as it orbits the Sun adding around 50 tons to our planet’s mass every day.   

    From The Conversation: “The tell-tale clue to how meteorites were made, at the birth of the solar system” 

    Conversation
    From The Conversation

    June 6, 2019
    William Herbst
    James Greenwood

    1

    April 26, 1803 was an unusual day in the small town of L’Aigle in Normandy, France – it rained rocks.

    Over 3,000 of them fell out of the sky. Fortunately no one was injured. The French Academy of Sciences investigated and proclaimed, based on many eyewitness stories and the unusual look of the rocks, that they had come from space.

    The Earth is pummeled with rocks incessantly as it orbits the Sun, adding around 50 tons to our planet’s mass every day. Meteorites, as these rocks are called, are easy to find in deserts and on the ice plains of Antarctica, where they stick out like a sore thumb. They can even land in backyards, treasures hidden among ordinary terrestrial rocks. Amateurs and professionals collect meteorites, and the more interesting ones make it to museums and laboratories around the world for display and study. They are also bought and sold on eBay.

    Despite decades of intense study by thousands of scientists, there is no general consensus on how most meteorites formed. As an astronomer and a geologist, we have recently developed a new theory of what happened during the formation of the solar system to create these valuable relics of our past. Since planets form out of collisions of these first rocks, this is an important part of the history of the Earth.

    2
    This meteor crater in Arizona was created 50,000 years ago when an iron meteorite struck the Earth. It is about one mile across. W. Herbst, CC BY-SA

    The mysterious chondrules

    3
    Drew Barringer (left), owner of Arizona meteor crater, his wife, Clare Schneider, and author William Herbst in the Van Vleck Observatory Library of Wesleyan University, where an iron meteorite from the crater is on display. W. Herbst

    About 10% of meteorites are pure iron. These form through a multi-step process in which a large molten asteroid has enough gravity to cause iron to sink to its center, building an iron core just like the Earth’s. After such an asteroid solidifies, it can be shattered into meteorites by collisions with other objects. Iron meteorites are as old as the solar system itself, proving that large asteroids formed quickly and that fully molten ones were once abundant.

    The other 90% of meteorites are called “chondrites” because they are full of mysterious, tiny spheres of rock known as “chondrules.” No terrestrial rock has anything like a chondrule inside it. It is clear that chondrules formed in space during a brief period of intense heating, when temperatures reached the melting point of rock, around 3,000 degrees Fahrenheit (roughly 1,650 degrees Celsius), for less than an hour. What could possibly account for that?

    3
    A closeup of the Semarkona meteorite showing dozens of chondrules. Kenichi Abe

    Researchers have come up with many hypotheses through the last 40 years. But no consensus has been reached on how this brief flash of heating happened.

    The chondrule problem is so famously difficult and contentious that when we announced to colleagues a few years ago that we were working on it, their reaction was to smile, shake their heads and offer their condolences. Now that we have proposed a solution we are preparing for a more critical response, which is fine, because that’s the way science advances.

    The flyby model

    Our idea is quite simple. Radioactive dating of hundreds of chondrules shows that they formed between 1.8 and 4 million years after the beginning of the solar system – some 4.6 billion years ago. During this time, fully molten asteroids, the parent bodies of the iron meteorites, were abundant. Volcanic eruptions on these asteroids released tremendous amounts of heat into the space around them. Any smaller objects passing by during an eruption would experience a short, intense blast of heat.
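
    The article does not say which radioactive clock gives those 1.8-to-4-million-year ages, but a common chronometer for chondrules is the aluminum-26/magnesium-26 system, in which a short-lived isotope decays with a half-life of about 0.72 million years. The sketch below is purely illustrative: only the half-life and the canonical early-solar-system 26Al/27Al ratio are standard values, and the sample ratios are invented.

    # Illustrative Al-Mg chronometry (a sketch; the article does not name the method).
    # Only the 26Al half-life (~0.717 Myr) and the canonical initial 26Al/27Al ratio
    # (~5.2e-5) are standard values; the sample ratios are invented for illustration.
    import math

    HALF_LIFE_MYR = 0.717
    CANONICAL_RATIO = 5.2e-5
    tau = HALF_LIFE_MYR / math.log(2)   # mean lifetime of 26Al in Myr

    def formation_time_myr(inferred_initial_ratio):
        """Time after solar-system birth implied by a chondrule's inferred 26Al/27Al."""
        return tau * math.log(CANONICAL_RATIO / inferred_initial_ratio)

    for ratio in (1.0e-5, 5.0e-6, 1.0e-6):
        print(f"26Al/27Al = {ratio:.1e} -> formed ~{formation_time_myr(ratio):.1f} Myr after t0")

    Inferred ratios a few times to roughly fifty times lower than the canonical value give formation times of about 1.7 to 4 million years after the solar system’s birth, the same window quoted above.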

    To test our hypothesis, we split up the challenge. The astronomer, Herbst, crunched the numbers to determine how much heating was necessary and for how long to create chondrules. Then the geologist, Greenwood, used a furnace in our lab at Wesleyan to recreate the predicted conditions and see if we could make our own chondrules.

    4
    Laboratory technician Jim Zaresky (top) loads a programmable furnace as co-author Jim Greenwood looks on, in his laboratory at Wesleyan University. This is where the synthetic chondrules are made. W. Herbst

    The experiments turned out to be quite successful.

    We put some fine dust from Earth rocks with compositions resembling space dust into a small capsule, placed it in our furnace and cycled the temperature through the predicted range. Out came a nice-looking synthetic chondrule. Case closed? Not so fast.

    Two problems emerged with our model. In the first place, we had ignored the bigger issue of how chondrules came to be part of the whole meteorite. What is their relationship to the stuff between chondrules – called matrix? In addition, our model seemed a bit too chancy to us. Only a small fraction of primitive matter will be heated in the way we proposed. Would it be enough to account for all those chondrule-packed meteorites hitting the Earth?

    6
    A comparison of a synthetic chondrule (left) made in the Wesleyan lab with a heating curve from the flyby model, with an actual chondrule (right) from the Semarkona meteorite. The crystal structure is quite similar, as shown in the enlargements (bottom row). J. Greenwood

    Making whole meteorites

    To address these issues, we extended our initial model to consider flyby heating of a larger object, up to a few miles across. As this material approaches a hot asteroid, parts of it will vaporize like a comet, resulting in an atmosphere rich in oxygen and other volatile elements. This turns out to be just the kind of atmosphere in which chondrules form, based on previous detailed chemical studies.

    We also expect the heat and gas pressure to harden the flyby object into a whole meteorite through a process known as hot isostatic pressing, which is used commercially to make metal alloys. As the chondrules melt into little spheres, they will release gas to the matrix, which traps those elements as the meteorite hardens. If chondrules and chondrites form together in this manner, we expect the matrix to be enriched in exactly the same elements in which the chondrules are depleted. This phenomenon, known as complementarity, has in fact been observed for decades, and our model provides a plausible explanation for it.

    7
    The authors’ model for forming chondrules. A small piece of rock (right) — a few miles across or less — swings close to a large hot asteroid erupting lava at its surface. Infrared radiation from the hot lava briefly raises the temperature on the small piece of rock high enough to form chondrules and harden part of that object into a meteorite. W. Herbst/Icarus

    Perhaps the most novel feature of our model is that it links chondrule formation directly to the hardening of meteorites. Since only well-hardened objects from space can make it through the Earth’s atmosphere, we would expect the meteorites in our museums to be full of chondrules, as they are. But hardened meteorites full of chondrules would be the exception, not the rule, in space, since they form by a relatively chancy process – the hot flyby. We should know soon enough if this idea holds water, since it predicts that chondrules will be rare on asteroids. Both Japan and the United States have ongoing missions to nearby asteroids that will return samples over the next few years.

    If those asteroids are full of chondrules, like the hardened meteorites that make it to the Earth’s surface, then our model can be discarded and the search for a solution to the famous chondrule problem can go on. If, on the other hand, chondrules are rare on asteroids, then the flyby model will have passed an important test.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Conversation launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 1:29 pm on May 19, 2019 Permalink | Reply
    Tags: RNA messages in the cell drive function, The Conversation, Today there is no medical treatment for autism.

    From The Conversation: “New autism research on single neurons suggests signaling problems in brain circuits” 

    Conversation
    From The Conversation

    1
    Artist impression of neurons communicating in the brain. whitehoune/Shutterstock.com

    May 17, 2019
    Dmitry Velmeshev

    Autism affects at least 2% of children in the United States – an estimated 1 in 59. This is challenging for both the patients and their parents or caregivers. What’s worse is that today there is no medical treatment for autism. That is in large part because we still don’t fully understand how autism develops and alters normal brain function.

    One of the main reasons it is hard to decipher the processes that cause the disease is that it is highly variable. So how do we understand how autism changes the brain?

    Using a new technology called single-nucleus RNA sequencing, we analyzed the chemistry inside specific brain cells from both healthy people and those with autism and identified dramatic differences that may cause this disease. These autism-specific differences could provide valuable new targets for drug development.

    I am a neuroscientist in the lab of Arnold Kriegstein, a researcher of human brain development at the University of California, San Francisco. Since I was a teenager, I have been fascinated by the human brain and computers, and by the similarities between the two. The computer works by directing a flow of information through interconnected electronic elements called transistors. Wiring together many of these small elements creates a complex machine capable of functions from processing a credit card payment to autopiloting a rocket ship. Though it is an oversimplification, the human brain is, in many respects, like a computer. It has connected cells called neurons that process and direct information flow – a process called synaptic transmission, in which one neuron sends a signal to another.

    When I started doing science professionally, I realized that many diseases of the human brain are due to specific types of neurons malfunctioning, just like a transistor on a circuit board can malfunction either because it was not manufactured properly or due to wear and tear.

    RNA messages in the cell drive function

    Every cell in any living organism is made of the same types of biological molecules. Molecules called proteins create cellular structures, catalyze chemical reactions and perform other functions within the cell.

    Two related types of molecules – DNA and RNA – are made of sequences of just four basic elements and used by the cell to store information. DNA is used for hereditary long-term information storage; RNA is a short-lived message that signals how active a gene is and how much of a particular protein the cell needs to make. By counting the number of RNA molecules carrying the same message, researchers can get insights into the processes happening inside the cell.

    When it comes to the brain, scientists can measure RNA inside individual cells, identify the type of brain cell and analyze the processes taking place inside it – for instance, synaptic transmission. By comparing RNA analyses of brain cells from healthy people not diagnosed with any brain disease with those from patients with autism, researchers like myself can figure out which processes are different and in which cells.
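
    As a purely illustrative sketch of what such a comparison involves (this is not the authors’ actual analysis pipeline, and the gene counts below are invented), the core step is to compare per-cell transcript counts for a gene, within one cell type, across the two groups:

    # Toy version of a cell-type-specific comparison (not the study's pipeline).
    # Real single-nucleus RNA-seq analyses involve normalization, many genes and
    # multiple-testing correction; the counts here are invented for illustration.
    from statistics import mean
    from scipy.stats import mannwhitneyu

    # Hypothetical per-cell counts of one synaptic gene's RNA in one cell type
    control_counts = [12, 15, 14, 13, 16, 15, 14, 12]
    autism_counts = [8, 9, 7, 10, 9, 8, 11, 7]

    u_stat, p_value = mannwhitneyu(control_counts, autism_counts, alternative="two-sided")
    print(f"mean control = {mean(control_counts):.1f}, mean autism = {mean(autism_counts):.1f}")
    print(f"Mann-Whitney U = {u_stat:.0f}, p-value = {p_value:.4f}")  # small p flags a candidate gene

    A small p-value on its own is only a starting point; it flags a gene and a cell type for closer study.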

    Until recently, however, simultaneously measuring all RNA molecules in a single cell was not possible. Researchers could perform these analyses only from a piece of brain tissue containing millions of different cells. This was complicated further because it was possible to collect these tissue samples only from patients who have already died.

    New tech pinpoints neurons affected in autism

    However, recent advances in technology allowed our team to measure the RNA contained within the nucleus of a single brain cell. The nucleus of a cell contains the genome, as well as newly synthesized RNA molecules. This structure remains intact even after the death of a cell and thus can be isolated from dead (also called postmortem) brain tissue.

    3
    Neurons in the upper (left) and deep layers of the human developing cortex. Chen & Kriegstein, 2015 Science/American Association for the Advancement of Science, CC BY-SA

    By analyzing single cell nuclei from postmortem brains of people with and without autism, we profiled the RNA within 100,000 single brain cells from many such individuals.

    Comparing RNA in specific types of brain cells between the individuals with and without autism, we found that some specific cell types are more altered than others in the disease.

    In particular, we found [Science] that certain neurons called upper-layer cortical neurons, which exchange information between different regions of the cerebral cortex, have abnormal levels of the RNAs encoding proteins located at the synapse – the points of contact between neurons where signals are transmitted from one nerve cell to another. These changes were detected in regions of the cortex vital for higher-order cognitive functions, such as social interactions.

    This suggests that synapses in these upper-layer neurons are malfunctioning, leading to changes in brain functions. In our study, we showed that upper-layer neurons had very different quantities of certain RNA compared to the same cells in healthy people. That was especially true in autism patients who suffered from the most severe symptoms, like not being able to speak.

    4
    New results suggest that the synapses formed by neurons in the upper layers of the cerebral cortex are not functioning correctly. CI Photos/Shutterstock.com

    Glial cells are also affected in autism

    In addition to the neurons that are directly responsible for synaptic communication, we also saw changes in the RNA of non-neuronal cells called glia. Glia play important roles in regulating the behavior of neurons, including how they send and receive messages via the synapse, and they may also play an important role in causing autism.

    So what do these findings mean for future medical treatment of autism?

    From these results, my colleagues and I understand that the same parts of the synaptic machinery that are critical for sending signals and transmitting information in the upper-layer neurons might be broken in many autism patients, leading to abnormal brain function.

    If we can repair these parts, or fine-tune neuronal function to a near-normal state, it might offer dramatic relief of symptoms for the patients. Studies are underway to deliver drugs and gene therapy to specific cell types in the brain, and many scientists including myself believe such approaches will be indispensable for future treatments of autism.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Conversation launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     