Tagged: Applied Research & Technology

  • richardmitnick 9:42 pm on October 16, 2018 Permalink | Reply
    Tags: Applied Research & Technology, Reidite – an ultra-rare mineral from what may be the largest-known meteorite impact crater in Australia, Woodleigh impact crater near Shark Bay in Western Australia

    From Curtin University: “Curtin scientists unearth rare mineral from buried WA impact crater” 

    From Curtin University

    16 October 2018

    Lucien Wilkinson
    Media Consultant
    Supporting Humanities and Science and Engineering
    Tel: +61 8 9266 9185
    Mob: +61 401 103 683

    Yasmine Phillips
    Media Relations Manager, Public Relations
    Tel: +61 8 9266 9085
    Mob: +61 401 103 877

    Curtin University researchers studying core samples taken near Shark Bay have discovered an ultra-rare mineral from what may be the largest-known meteorite impact crater in Australia.

    Honours student Morgan Cox, from Curtin’s School of Earth and Planetary Sciences.

    The mineral, named reidite, forms only in rocks that experience the incredible pressure created when rocks from space slam into the Earth’s crust. Reidite starts out as the common mineral zircon and transforms to reidite under the pressure of impact, making it incredibly rare: Woodleigh is only the sixth known crater on Earth where the mineral has been found.

    Published in the leading journal Geology, the new research examined drill core from the buried Woodleigh impact crater, near Shark Bay, and found that some zircon grains in the core had partially transformed to reidite.

    The chance find of reidite gave the team new insights into how the Earth responds to the dramatic changes created by meteorite impact, a process that violently lifts deep-seated rocks to the surface in seconds.

    Lead author Honours student Morgan Cox, from Curtin’s School of Earth and Planetary Sciences, said that because the Woodleigh impact crater is buried beneath younger sedimentary rocks, its size is not known and remains hotly debated.

    “Previous research estimated the size of the impact crater between 60km and more than 120km in diameter,” Ms Cox said.

    “However, our discovery of reidite near the base of the core suggests a larger crater. The research team is now using numerical modelling to refine the size of Woodleigh and if we establish its diameter is greater than 100km, it would be the largest-known impact crater in Australia.”

    The researchers also found additional clues in the same zircon grains, some of which contained deformation twins, another feature that forms only in zircon shocked by impact.

    The team analysed the microscopic interaction of reidite and twins, which provided critical evidence for how and when these features formed during the violent upheaval.

    Research supervisor Dr Aaron Cavosie, also from Curtin’s School of Earth and Planetary Sciences, said the drill core sampled the middle of the impact crater, a region called the central uplift.

    “Central uplifts are desirable targets for learning about impact conditions,” Dr Cavosie said.

    “They bring profoundly damaged rocks closer to the surface, and in some instances, are associated with exploration targets.

    “Finding reidite at Woodleigh was quite a surprise as it is much rarer than diamonds or gold, though unfortunately not as valuable.”

    The research team also includes Professor Phil Bland and Dr Katarina Miljković, both from Curtin’s School of Earth and Planetary Sciences, and Dr Michael Wingate from the Geological Survey of Western Australia.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Curtin University (formerly known as Curtin University of Technology and Western Australian Institute of Technology) is an Australian public research university based in Bentley and Perth, Western Australia. The university is named after the 14th Prime Minister of Australia, John Curtin, and is the largest university in Western Australia, with over 58,000 students (as of 2016).

    Curtin was conferred university status after legislation was passed by the Parliament of Western Australia in 1986. Since then, the university has been expanding its presence and now has campuses in Singapore, Malaysia, Dubai and Mauritius. It has ties with 90 exchange universities in 20 countries. The University comprises five main faculties with over 95 specialist centres. The University formerly had a Sydney campus between 2005 and 2016; on 17 September 2015, Curtin University Council decided to close the Sydney campus by early 2017.

    Curtin University is a member of Australian Technology Network (ATN), and is active in research in a range of academic and practical fields, including Resources and Energy (e.g., petroleum gas), Information and Communication, Health, Ageing and Well-being (Public Health), Communities and Changing Environments, Growth and Prosperity and Creative Writing.

    It is the only Western Australian university to produce a PhD recipient of the AINSE gold medal, which is the highest recognition for PhD-level research excellence in Australia and New Zealand.

    Curtin has become active in research and partnerships overseas, particularly in mainland China. It is involved in a number of business, management, and research projects, particularly in supercomputing, where the university participates in a tri-continental array with nodes in Perth, Beijing, and Edinburgh. Western Australia has become an important exporter of minerals, petroleum and natural gas. The Chinese Premier Wen Jiabao visited the Woodside-funded hydrocarbon research facility during his visit to Australia in 2005.

  • richardmitnick 3:46 pm on October 15, 2018 Permalink | Reply
    Tags: Applied Research & Technology, Ultra-light gloves let users 'touch' virtual objects

    From ETH Zürich: “Ultra-light gloves let users ‘touch’ virtual objects” 

    From ETH Zürich


    ETH Zürich
    Media relations
    Phone: +41 44 632 41 41

    For now the glove is powered by a very thin electrical cable, but thanks to the low voltage and power required, a very small battery could eventually be used instead. (Photograph: ETH Zürich)

    Scientists from ETH Zürich and EPFL have developed an ultra-light glove – weighing less than 8 grams – that enables users to feel and manipulate virtual objects. Their system provides extremely realistic haptic feedback and could run on a battery, allowing for unparalleled freedom of movement.

    Engineers and software developers around the world are seeking to create technology that lets users touch, grasp and manipulate virtual objects, while feeling like they are actually touching something in the real world. Scientists at ETH Zürich and EPFL have just made a major step toward this goal with their new haptic glove, which is not only lightweight – under 8 grams – but also provides feedback that is extremely realistic. The glove is able to generate up to 40 Newtons of holding force on each finger with just 200 Volts and only a few milliwatts of power. It also has the potential to run on a very small battery. That, together with the glove’s low form factor (only 2 mm thick), translates into an unprecedented level of precision and freedom of movement.

    “We wanted to develop a lightweight device that – unlike existing virtual-reality gloves – doesn’t require a bulky exoskeleton, pumps or very thick cables,” says Herbert Shea, head of EPFL’s Soft Transducers Laboratory (LMTS). The scientists’ glove, called DextrES, has been successfully tested on volunteers in Zürich and will be presented at the upcoming ACM Symposium on User Interface Software and Technology (UIST).

    Fabric, metal strips and electricity

    DextrES is made of cotton with thin elastic metal strips running over the fingers. The strips are separated by a thin insulator. When the user’s fingers come into contact with a virtual object, the controller applies a voltage difference between the metal strips causing them to stick together via electrostatic attraction – this produces a braking force that blocks the finger’s or thumb’s movement. Once the voltage is removed, the metal strips glide smoothly and the user can once again move his fingers freely.
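    The electrostatic braking described above can be estimated with the standard parallel-plate formula. The sketch below is a back-of-the-envelope illustration: only the 200-volt figure comes from the article, while the insulator thickness, relative permittivity, strip overlap area and friction coefficient are assumed values, not the glove's actual parameters.

```python
# Electroadhesion sketch: pressure between two electrodes separated by a
# thin insulator, P = eps0 * eps_r * V^2 / (2 * d^2). All geometry and
# material values below are illustrative assumptions.
EPS0 = 8.854e-12          # vacuum permittivity, F/m

def clamping_pressure(voltage, thickness, eps_r):
    """Parallel-plate electrostatic pressure in pascals."""
    return EPS0 * eps_r * voltage**2 / (2.0 * thickness**2)

V = 200.0                 # volts (figure quoted in the article)
d = 5e-6                  # 5-micron insulator (assumed)
eps_r = 3.0               # polymer insulator permittivity (assumed)
area = 2e-4               # 2 cm^2 of overlapping strip per finger (assumed)
mu = 0.3                  # friction coefficient between strips (assumed)

P = clamping_pressure(V, d, eps_r)
normal_force = P * area
holding_force = mu * normal_force   # braking force resisting strip sliding
print(f"pressure ~ {P/1e3:.0f} kPa, holding force ~ {holding_force:.2f} N")
```

    With these assumed numbers the sketch yields only a few newtons per finger; reaching the 40 N the article reports would imply a thinner dielectric and/or larger strip overlap than assumed here, since the force scales as area over thickness squared.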

    Tricking your brain

    For now the glove is powered by a very thin electrical cable, but thanks to the low voltage and power required, a very small battery could eventually be used instead. “The system’s low power requirement is due to the fact that it doesn’t create a movement, but blocks one,” explains Shea. The researchers also need to conduct tests to see just how closely they have to simulate real conditions to give users a realistic experience. “The human sensory system is highly developed and highly complex. We have many different kinds of receptors at a very high density in the joints of our fingers and embedded in the skin. As a result, rendering realistic feedback when interacting with virtual objects is a very demanding problem and is currently unsolved. Our work goes one step in this direction, focusing particularly on kinesthetic feedback,” says Otmar Hilliges, head of the Advanced Interactive Technologies Lab at ETH Zürich.

    In this joint research project, the hardware was developed by EPFL at its Microcity campus in Neuchâtel, and the virtual reality system was created by ETH Zürich, which also carried out the user tests. “Our partnership with the EPFL lab is a very good match. It allows us to tackle some of the longstanding challenges in virtual reality at a pace and depth that would otherwise not be possible,” adds Hilliges.

    The next step will be to scale up the device and apply it to other parts of the body using conductive fabric. “Gamers are currently the biggest market, but there are many other potential applications – especially in healthcare, such as for training surgeons. The technology could also be applied in augmented reality,” says Shea.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    ETH Zürich is one of the leading international universities for technology and the natural sciences. It is well known for its excellent education, ground-breaking fundamental research and for implementing its results directly into practice.

    Founded in 1855, ETH Zürich today has more than 18,500 students from over 110 countries, including 4,000 doctoral students. To researchers, it offers an inspiring working environment, to students, a comprehensive education.

    Twenty-one Nobel Laureates have studied, taught or conducted research at ETH Zürich, underlining the excellent reputation of the university.

  • richardmitnick 3:18 pm on October 15, 2018 Permalink | Reply
    Tags: Applied Research & Technology, How geology tells the story of evolutionary bottlenecks and life on Earth

    From Astrobiology Magazine: “How geology tells the story of evolutionary bottlenecks and life on Earth” 

    Astrobiology Magazine

    From Astrobiology Magazine

    Giant asteroid impacts could have created evolutionary bottlenecks that decided the path that evolution should take. Image credit: NASA/Don Davis.

    Evidence that catastrophic geological events could have created evolutionary bottlenecks that changed the course of life on Earth may be buried within ancient rocks beneath our feet.

    There is a 700-million-year gap in Earth’s history, and in that time one of the most transformative events happened: life appeared. This missing epoch could hold not only the secret of humanity’s first ancestor, but also clues to guide our search for life on other planets.

    To this end, a recent paper published in the scientific journal Astrobiology tries to bring the worlds of geology and chemistry together by laying out what Earth’s ancient geology tells us about when life began on the planet, and how geological constraints – such as those imposed by an asteroid impact or an evolutionary bottleneck – can be used to vet the different theories about the evolution of life.

    “Geologists have only weakly constrained the time when the Earth became habitable and the later time when life actually existed to the long interval between about 4.5 billion years ago and 3.85 billion years ago,” Norm Sleep, a geologist at Stanford University in the United States, writes in his paper.

    A dangerous time

    However, this was a dangerous time to be in the vicinity of Earth. Although evidence for it has become increasingly disputed in recent years, many scientists still think that during this period asteroids pummelled the young Earth and its neighbouring planets in what has become known as the Late Heavy Bombardment.

    An asteroid impact is one of the events that could have created what is called an evolutionary bottleneck, whereby a few species are able to dominate, often as the result of a sudden decrease in the number of other organisms, says Sleep.

    If a big asteroid were to hit Earth, the planet’s surface temperature would sky-rocket and the oceans would vaporize into the atmosphere. It would be catastrophic for the majority of life on Earth. But if an organism could survive that, it would be able to take over the planet – and possibly evolve over the course of billions of years to what would eventually become humans.

    “If you wipe out most life geologically, the survivors are going to find a lot of vacant niches to occupy, and there will be rapid evolution,” Sleep tells Astrobiology Magazine. For example, thermophiles (which are heat-loving organisms) may have been able to survive temperatures that would have killed other organisms.

    “This type of bottleneck, we know from the physics,” Sleep says. “The inside of Earth would be cooler, thermal microbes would be comfortable.”

    A fragment of rock from the Acasta Gneiss formation in Canada’s Northwestern Territories, which contains the oldest known exposed rock in the world. Could carbon, sequestered in such rocks, reveal the existence of asteroid impacts that caused evolutionary bottlenecks? Image credit: Pedroalexandrade/Wikimedia Commons.

    Carbon-based evidence

    Unfortunately, ancient asteroid impacts are difficult to detect in Earth’s geology, in part because of our planet’s shifting tectonic plates. However, traces of sequestered carbon trapped in ancient rocks could offer a clue: after a catastrophic asteroid impact, the atmosphere would have contained abundant carbon dioxide, linked to the high temperatures and high atmospheric pressures that would have made it difficult for life to thrive on Earth. “The Earth did not become habitable until the bulk of this carbon dioxide was subducted into the mantle,” Sleep writes in his paper. So far, scientists have not found reliable evidence of this sequestered carbon dioxide.

    Another evolutionary bottleneck for life could have been innovation: an organism innovates a trait that makes it very fit for its environment, and it is able to outcompete other organisms. “It quickly takes over all suitable habitable places on Earth and it becomes very abundant very quickly,” says Sleep.

    An example would be an organism that evolves the ability to use iron or sulfur to photosynthesize. “The organism goes from being dependent on hydrogen to sunlight, and its biomass increases by an order of magnitude,” he says.

    “Once this threshold was reached, the transition would be rapid, as in human time scale: years, hundreds of years, millennia. The organism could go from just barely eking it out, to multiplying and inhabiting the whole planet.”

    “These are all potentially testable hypotheses,” he says.

    His paper notes that the majority of known mineral species owe their existence to biological processes.

    Getting people thinking

    When asked which was the most likely cause of these bottlenecks, Sleep says it was probably a mixture of both. The purpose of his paper was not to advocate one cause over another, but “to get people thinking”.

    “It is to get people to work together, [to] pose things in a way that is helpful to everybody, [and] stir up more thinking about it,” he says.

    William Martin, Director of the Institute for Molecular Evolution at Heinrich-Heine-Universität Düsseldorf, reveals to Astrobiology Magazine that “There is a diversity of views in both disciplines, [and] getting everyone on the same page is no easy task. [Sleep] made a great effort to reach out across disciplines, that is for sure. Views about early evolution change slowly, but [Norm Sleep’s paper] is an important contribution.”

    Ultimately geology is crucial, as it defines the environment within which biologists and chemists have to operate, he says.

    Sleep’s research was performed as part of a collaboration with the NASA Astrobiology Institute Virtual Planetary Laboratory Lead Team.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

  • richardmitnick 7:32 am on October 12, 2018 Permalink | Reply
    Tags: Applied Research & Technology, Earth’s First Nuclear Reactor Is 1.7 Billion Years Old And Was Made Naturally, Natural fission

    From Ethan Siegel: “Earth’s First Nuclear Reactor Is 1.7 Billion Years Old And Was Made Naturally” 

    From Ethan Siegel
    Oct 11, 2018
    From the main mine that humans made in the Oklo region, one of the natural reactors is accessible via an offshoot, as illustrated here. (UNITED STATES DEPARTMENT OF ENERGY)

    Planets can ‘discover’ nuclear power on their own, naturally, without any intelligence. Earth did it 1.7 billion years before humans.

    If you were hunting for alien intelligence, looking for a surefire signature from across the Universe of their activity, you’d have a few options. You could look for an intelligent radio broadcast, like the type humans began emitting in the 20th century. You could look for examples of planet-wide modifications, like human civilization displays when you view Earth at a high-enough resolution. You could look for artificial illumination at night, like our cities, towns, and fisheries display, visible from space.

    Or, you might look for a technological achievement, like the creation of particles like antineutrinos in a nuclear reactor. After all, that’s how we first detected neutrinos (or antineutrinos) on Earth. But if we took that last option, we might fool ourselves. Earth created a nuclear reactor, naturally, long before humans ever existed.

    The experimental nuclear reactor RA-6 (República Argentina 6) in operation, showing the characteristic Cherenkov radiation from emitted particles travelling faster than light does in water. The neutrinos (or more accurately, antineutrinos) first hypothesized by Pauli in 1930 were detected from a similar nuclear reactor in 1956. (CENTRO ATOMICO BARILOCHE, VIA PIECK DARÍO)


    In order to create a nuclear reactor today, the first ingredient we need is reactor-grade fuel. Uranium, for example, comes in two different naturally-occurring isotopes: U-238 (with 146 neutrons) and U-235 (with 143 neutrons). Changing the number of neutrons doesn’t change your element type, but does change how stable your element is. Both U-235 and U-238 decay through long radioactive decay chains, but U-238 lives about six times as long, on average.

    By the time you get to the present day, U-235 makes up only about 0.72% of all naturally-occurring uranium, meaning it has to be enriched to at least about 3% in order to get a self-sustaining fission reaction, or a special setup (involving heavy-water moderators) is required. But 1.7 billion years ago was more than two full half-lives ago for U-235. Back then, on the ancient Earth, U-235 was about 3.7% of all uranium: enough for a reaction to occur.
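    The abundance arithmetic above can be checked directly from the standard half-lives (U-235 ≈ 704 million years, U-238 ≈ 4.468 billion years); these constants are textbook values, not from the article. Note that the back-calculation gives roughly 2.9% at 1.7 billion years ago, and reaches the ~3.7% figure at about 2 billion years ago, the age often quoted for the Oklo reactors:

```python
# Back-calculating the natural U-235 fraction of uranium at a given time in
# the past, from present-day abundances and the standard half-lives.
T_HALF_U235 = 0.704   # Gyr
T_HALF_U238 = 4.468   # Gyr

def u235_fraction(gyr_ago, f235_now=0.0072):
    """Fraction of uranium that was U-235 `gyr_ago` billion years ago."""
    n235 = f235_now * 2 ** (gyr_ago / T_HALF_U235)
    n238 = (1 - f235_now) * 2 ** (gyr_ago / T_HALF_U238)
    return n235 / (n235 + n238)

for t in (0.0, 1.7, 2.0):
    print(f"{t:.1f} Gyr ago: U-235 was {100 * u235_fraction(t):.2f}% of uranium")
```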

    The Uranium-235 chain reaction that both leads to a nuclear fission bomb, but also generates power inside a nuclear reactor, is powered by neutron absorption as its first step, resulting in the production of three additional free neutrons. (E. SIEGEL, FASTFISSION / WIKIMEDIA COMMONS)

    In between different layers of sandstone, before you reach the granite bedrock making up most of Earth’s crust, you often find veins of mineral deposits, rich in a particular element. Sometimes these are extremely lucrative, like when we find gold veins underground. But sometimes, we find other, rarer materials in there, such as uranium. In modern reactors, enriched uranium produces neutrons, and in the presence of water, which acts like a neutron moderator, a fraction of those neutrons will strike another U-235 nucleus, causing a fission reaction.

    As the nucleus splits apart, it produces lighter daughter nuclei, releases energy, and also produces three additional neutrons. If the conditions are right, the reaction will trigger additional fission events, leading to a self-sustaining reactor.

    Geologic cross-section of the Oklo and Okélobondo uranium deposits, showing the locations of the nuclear reactors. The last reactor (#17) is located at Bangombé, ~30 km southeast of Oklo. The nuclear reactors are found in the FA sandstone layer. (MOSSMAN ET AL., 2008; REVIEWS IN ENGINEERING GEOLOGY, VOL. 19: 1–13)

    Two factors came together, 1.7 billion years ago, to create a natural nuclear reactor. The first is that, above the bedrock layer of granite, groundwater flows freely, and it’s only a matter of geology and time before water flows into the uranium-rich regions. Surround your uranium atoms with water molecules, and that’s a solid start.

    But to get your reactor working well, in a self-sustaining fashion, you need an extra component: you want the uranium atoms to be dissolved in the water. In order for uranium to be soluble in water, oxygen must be present. Fortunately, aerobic, oxygen-using bacteria evolved in the aftermath of the first mass extinction in Earth’s recorded history: the great oxygenation event. With oxygen in the groundwater, dissolved uranium would be possible whenever water floods the mineral veins, and could have even created particularly uranium-rich material.

    A selection of some of the original samples from Oklo. These materials were donated to the Vienna Natural History Museum. (LUDOVIC FERRIÈRE/NATURAL HISTORY MUSEUM)

    When you have a uranium fission reaction, a number of important signatures wind up being produced.

    1. Five isotopes of the element xenon are produced as reaction products.
    2. The remaining U-235/U-238 ratio should be reduced, since only U-235 is fissile.
    3. U-235, when split apart, produces large amounts of neodymium (Nd) with a specific weight: Nd-143. Normally, the ratio of Nd-143 to the other isotopes is about 11–12%; seeing an enhancement indicates uranium fission.
    4. Same deal for ruthenium with a weight of 99 (Ru-99). Naturally occurring with about 12.7% abundance, fission can increase that to about 27–30%.

    In 1972, the French physicist Francis Perrin discovered a total of 17 sites spread across three ore deposits at the Oklo mines in Gabon, West Africa, that contained all four of these signatures.

    This is the site of the Oklo natural nuclear reactors in Gabon, West Africa. Deep inside the Earth, in yet unexplored regions, we might yet find other examples of natural nuclear reactors, not to mention what might be found on other worlds. (US DEPARTMENT OF ENERGY)

    The Oklo fission reactors are the only known examples of a natural nuclear reactor here on Earth, but the mechanism by which they occurred leads us to believe that they could have formed in many locations, and could occur elsewhere in the Universe as well. When groundwater inundates a uranium-rich mineral deposit, fission reactions, of U-235 splitting apart, can occur.

    The groundwater acts as a neutron moderator, allowing (on average) more than 1 out of 3 neutrons to collide with a U-235 nucleus, continuing the chain reaction.

    After the reaction runs for only a short time, the groundwater that moderates the neutrons boils away, stopping the reaction altogether. Over time, without fission occurring, the reactor naturally cools down, allowing groundwater back in.

    The terrain surrounding the natural nuclear reactors in Oklo suggests that groundwater insertion, above a layer of bedrock, may be a necessary ingredient for rich uranium ore capable of spontaneous fission. (CURTIN UNIVERSITY / AUSTRALIA)

    By examining the concentrations of xenon isotopes that become trapped in the mineral formations surrounding the uranium ore deposits, humanity, like an outstanding detective, has been able to calculate the specific timeline of the reactor. For approximately 30 minutes, the reactor would go critical, with fission proceeding until the water boils away. Over the next ~150 minutes, there would be a cooldown period, after which water would flood the mineral ore again and fission would restart.

    This three-hour cycle would repeat itself for hundreds of thousands of years, until the ever-decreasing amount of U-235 fell below the ~3% level at which a chain reaction could no longer be sustained. At that point, all that both U-235 and U-238 could do was radioactively decay.
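    As a sanity check on the timeline above, here is the arithmetic for the on/off cycle: ~30 minutes critical, ~150 minutes of cooldown. The 300,000-year run length is an assumed round number for illustration; the article says only "hundreds of thousands of years":

```python
# Rough arithmetic on the Oklo reactor's duty cycle.
HOURS_PER_YEAR = 8766          # 365.25 days * 24 hours

cycle_hours = 0.5 + 2.5        # 30 min fission + 150 min cooldown
run_years = 300_000            # assumed round number
cycles = run_years * HOURS_PER_YEAR / cycle_hours
duty_cycle = 0.5 / cycle_hours # fraction of time spent critical
print(f"~{cycles:.2e} on/off cycles, critical {duty_cycle:.0%} of the time")
```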

    There are many natural neutrino signatures produced by stars and other processes in the Universe. For a time, it was thought there would be a unique and unambiguous signal that comes from reactor antineutrinos. Now we know, however, that these neutrinos may also be naturally produced. (ICECUBE COLLABORATION / NSF / UNIVERSITY OF WISCONSIN)

    U Wisconsin IceCube experiment at the South Pole

    Looking at the Oklo sites today, we find natural U-235 abundances ranging from 0.44% up to 0.60%: all well below the normal value of 0.72%. Nuclear fission, in some form or another, is the only naturally-occurring explanation for this discrepancy. Combined with the xenon, the neodymium, and the ruthenium evidence, the conclusion that this was a geologically-created nuclear reactor is all but inescapable.
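    Those depletion figures translate directly into how much U-235 the reactors consumed. A first-order estimate, which neglects small corrections such as new fissile material bred by neutron capture on U-238:

```python
# Comparing the measured U-235 fractions quoted above to the universal
# natural value of 0.72% gives the share of U-235 lost to fission.
NATURAL = 0.72   # percent U-235 in uranium everywhere else on Earth

depletion = {measured: 1.0 - measured / NATURAL for measured in (0.44, 0.60)}
for measured, consumed in depletion.items():
    print(f"{measured}% today -> about {consumed:.0%} of the U-235 is gone")
```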

    Ludovic Ferrière, curator of the rock collection, holds a piece of the Oklo reactor in Vienna’s Natural History Museum. A sample of the Oklo reactor will be displayed permanently in the Vienna museum beginning in 2019. (L. GIL/IAEA)

    Interestingly enough, there are a number of scientific findings we can conclude from looking at the nuclear reactions that occurred here. We can determine the timescales of the on/off cycles by looking at the various xenon deposits. The sizes of the uranium veins and the amount that they’ve migrated (along with the other materials affected by the reactor) over the past 1.7 billion years can give us a useful, natural analogue for how to store and dispose of nuclear waste. The isotope ratios found at the Oklo sites allow us to test the rate of various nuclear reactions, and determine if they (or the fundamental constants driving them) have changed over time. Based on this evidence, we can determine that the rates of nuclear reactions, and therefore the values of the constants that determine them, were the same 1.7 billion years ago as they are today.

    Finally, we can use the ratios of the various elements to determine what the age of the Earth is, and what its composition was when it was created. The lead-isotope and uranium-isotope levels teach us that 5.4 tonnes of fission products were produced, over a 2 million year timespan, in an Earth that’s 4.5 billion years old today.
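    The 5.4 tonnes of fission products quoted above can be turned into an energy estimate using the standard ~200 MeV released per U-235 fission; the 200 MeV figure and the physical constants are textbook values, not from the article:

```python
# Total energy and mean power implied by 5.4 tonnes of fissioned uranium
# spread over a 2-million-year timespan.
AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13
SECONDS_PER_YEAR = 3.156e7

mass_g = 5.4e6                          # 5.4 tonnes of fissioned material
fissions = mass_g / 235.0 * AVOGADRO    # approximate number of atoms fissioned
energy_j = fissions * 200.0 * MEV_TO_J  # ~200 MeV released per fission
mean_power = energy_j / (2e6 * SECONDS_PER_YEAR)
print(f"total ~{energy_j:.1e} J, mean power ~{mean_power/1e3:.0f} kW")
```

    The mean power here is averaged over the full two million years, including the long cooldown intervals; during the half-hour critical pulses the instantaneous power would have been correspondingly higher.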

    A supernova remnant not only expels heavy elements created in the explosion back into the Universe, but the presence of those elements can be detected from Earth. The ratio of U-235 to U-238 in supernovae is approximately 1.6:1, indicating that Earth was born largely from ancient, not recently created, raw uranium. (NASA / CHANDRA X-RAY OBSERVATORY)

    NASA/Chandra X-ray Telescope

    When a supernova goes off, as well as when neutron stars merge, both U-235 and U-238 are produced. From examining supernovae, we know we actually create more U-235 than U-238 in about a 60/40 ratio. If Earth’s uranium were all created from a single supernova, that supernova would have occurred 6 billion years before the formation of Earth.

    On any world, as long as a rich vein of near-surface uranium ore is produced with a U-235-to-U-238 ratio greater than 3:97, and water is present to moderate it, it’s eminently plausible for a spontaneous and natural nuclear reaction to occur. In one serendipitous location on Earth, in more than a dozen instances, we have overwhelming evidence for a nuclear history. In the game of natural energy, don’t ever leave nuclear fission off the list again.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan.

  • richardmitnick 3:17 pm on October 11, 2018 Permalink | Reply
    Tags: Applied Research & Technology, New Techniques Can Detect Lyme Disease Weeks Before Current Tests

    From Rutgers University: “New Techniques Can Detect Lyme Disease Weeks Before Current Tests” 


    From Rutgers University

    October 11, 2018
    Patti Verbanas

    Rutgers researcher leads team analyzing more exact methods to diagnose the most common tick-borne infection

    New tests are at hand that offer more accurate, less ambiguous results and can yield actionable findings in a timely fashion.

    Researchers have developed techniques to detect Lyme disease bacteria weeks sooner than current tests, allowing patients to start treatment earlier.

    The findings appear in the journal Clinical Infectious Diseases. The authors include scientists from Rutgers Biomedical and Health Sciences, Harvard University, Yale University, the National Institute of Allergy and Infectious Diseases, FDA, Centers for Disease Control and Prevention, and other institutions.

    The new techniques can detect an active infection with the Lyme bacteria faster than the three weeks it takes for the current indirect antibody-based tests, which have been a standard since 1994. Another advantage of the new tests is that a positive result in blood indicates the infection is active and should be treated immediately, allowing quicker treatment to prevent long-term health problems. The techniques detect DNA or protein from the Lyme disease bacteria Borrelia burgdorferi.

    “These direct tests are needed because you can get Lyme disease more than once, features are often non-diagnostic and the current standard FDA-approved tests cannot distinguish an active, ongoing infection from a past cured one,” said lead author Steven Schutzer, a physician-scientist at Rutgers New Jersey Medical School. “The problem is worsening because Lyme disease has increased in numbers to 300,000 per year in the United States and is spreading across the country and world.”

    Lyme disease signs frequently, but not always, include a red ring or bull’s eye skin rash. When there is no rash, a reliable laboratory test is needed and preferably one that indicates active disease. The only FDA-approved Lyme disease tests rely on detecting antibodies that the body’s immune system makes in response to the disease. Such a single antibody test is not an active disease indicator but rather only an exposure indicator — past or present.

    “The new tests that directly detect the Lyme agent’s DNA are more exact and are not susceptible to the same false-positive results and uncertainties associated with current FDA-approved indirect tests,” said Schutzer. “It will not be surprising to see direct tests for Lyme disease join the growing list of FDA-approved direct tests for other bacterial, fungal and viral infections that include Staphylococcus, Streptococcus, Candida, influenza, HIV, herpes and hepatitis, among others.”

    The authors developed the paper after a meeting at Cold Spring Harbor Laboratory’s Banbury Conference Center, a nonprofit research institution in New York, held to discuss current Lyme disease tests and the potential of new scientific advances to increase the accuracy of an early diagnosis.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition


    Rutgers, The State University of New Jersey, is a leading national research university and the state’s preeminent, comprehensive public institution of higher education. Rutgers is dedicated to teaching that meets the highest standards of excellence; to conducting research that breaks new ground; and to providing services, solutions, and clinical care that help individuals and the local, national, and global communities where they live.

    Founded in 1766, Rutgers teaches across the full educational spectrum: preschool to precollege; undergraduate to graduate; postdoctoral fellowships to residencies; and continuing education for professional and personal advancement.

    As a ’67 graduate of University College, second in my class, I am proud to be a member of

    Alpha Sigma Lambda, the National Honor Society for non-traditional students.

  • richardmitnick 2:46 pm on October 11, 2018 Permalink | Reply
    Tags: An earthquake with a magnitude of M = 7.0 earthquake struck today in New Britain, Applied Research & Technology, , , Papua New Guinea, , , Subduction megathrust earthquake preceded by a foreshock,   

    From temblor: “Subduction megathrust earthquake preceded by a foreshock” 


    From temblor

    October 10, 2018
    Jason Patton

    An earthquake with a magnitude of M = 7.0 struck today in New Britain, Papua New Guinea. New Britain is an island northeast of mainland Papua New Guinea and Australia. While the earthquake struck on a subduction zone, the Pacific Tsunami Warning Center states that there is no tsunami threat.

    Tectonic Hazards

    Hundreds of millions of people globally live along plate margins called subduction zones. These plate boundaries are formed as the result of millions of years of plate convergence. Earthquakes that occur along subduction zone megathrust faults are compressional earthquakes (aka thrust or reverse).

    Earthquake size is related to the material properties of the earth surrounding the slipped fault, the size of the fault that slipped (the area), and the amount that the fault slipped (distance). Earthquakes occur in specific depth ranges depending upon the conditions. Typical plate boundary earthquakes due to brittle failure along a fault extend to several tens of kilometers into the Earth. Because subduction zone megathrust faults dip into the earth at an angle, the fault area that can slip can be larger than for strike-slip faults. Megathrust earthquakes can therefore have magnitudes larger than strike-slip (shear) earthquakes.
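    The relation between fault area, slip, and magnitude can be made concrete with the standard seismic-moment formulas, M0 = μAD and Mw = (2/3)(log10 M0 − 9.1). The shear modulus, fault dimensions, and slip below are illustrative assumptions for a generic megathrust patch, not measurements from this event.

```python
import math

# Seismic moment M0 = mu * A * D (Pa * m^2 * m = N*m), converted to
# moment magnitude Mw with the standard Kanamori scale.
def moment_magnitude(mu_pa, area_m2, slip_m):
    m0 = mu_pa * area_m2 * slip_m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# Assumed example values: mu ~ 30 GPa for crustal rock, a 100 km x 50 km
# megathrust patch (large because the fault dips into the earth at a
# shallow angle), and 2 m of average slip.
mw = moment_magnitude(30e9, 100e3 * 50e3, 2.0)
print(round(mw, 2))  # ~7.58
```

    Because Mw grows with the logarithm of μAD, the very large rupture areas available on gently dipping megathrust faults translate directly into the larger possible magnitudes described above.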

    Do you live along a subduction zone or other plate boundary fault? What about another kind of fault?

    To learn more about your exposure to these hazards, visit http://www.temblor.net.

    Several governments and non-governmental organizations prepare estimates of seismic hazard so that people can ensure their building codes are designed to mitigate these hazards. The Global Earthquake Model (GEM) is an example of our efforts to estimate seismic hazards on a global scale. Temblor.net uses the Global Earth Activity Rate (GEAR) model to provide estimates of seismic hazard at a global to local scale (Bird et al., 2015). GEAR blends quakes during the past 41 years with strain of the Earth’s crust as measured using Global Positioning System (GPS) observations.

    Below is a map prepared using the temblor.net app. Seismicity from the past month, week, and day are shown as colored circles. The rainbow color scale represents the chance of a given earthquake magnitude, for a given location, within the lifetime of a person (technically, it is the magnitude with a 1% chance per year of occurring within 100 km). The temblor app suggests that this region could have an earthquake of M=7.9 in a typical lifetime, and so the M=7.0 was by no means rare or unexpected.
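    To see how the map’s 1%-per-year hazard level translates into a lifetime chance, here is a minimal sketch, assuming independent years and an 85-year lifetime (both simplifying assumptions):

```python
# Chance of at least one such earthquake in n years, given a fixed
# annual chance and assuming years are independent.
def lifetime_probability(annual_chance, years):
    return 1.0 - (1.0 - annual_chance) ** years

p = lifetime_probability(0.01, 85)
print(round(p, 2))  # ~0.57, better-than-even odds over a lifetime
```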

    There was a magnitude M = 5.9 earthquake just 12 minutes before the M 7.0 earthquake, and so, in retrospect, we might consider the M = 5.9 a ‘foreshock’ to the much larger M = 7.0 earthquake. This happens only about 5-10% of the time, which means that foreshocks are a poor predictor of mainshocks.

    Global Earthquake Activity Rate map for this region of the western equatorial Pacific. Faults are shown as red lines and the megathrust faults are shown as pink regions because they dip into the earth at an angle. Warmer colors represent regions that are more likely to experience a larger earthquake than the regions with cooler colors. Seismicity from the past is shown and the location of the M 7.0 earthquake is located near the blue teardrop symbol.

    New Britain Tectonics

    This area is one of the most active and tortured plate boundaries in the world. There are several subduction zones, oceanic spreading ridges, and transform plate boundary faults that interact to form the island of New Britain, Bougainville Island, and the ocean basins beneath the Solomon and Bismarck seas.

    New Britain is part of a magmatic arc (volcanic island) related to the subduction of the Solomon Sea plate beneath the Bismarck Sea plate. Below is a map showing the major plate boundary faults in this region. The Island of New Britain is located in the southern part of the South Bismarck plate.

    Plate tectonic map from Oregon State University. The Solomon Sea plate subducts beneath the South Bismarck plate to the north, the Pacific plate to the east, and the Australia plate to the south. There are oceanic spreading ridges shown as double black lines. Some of these ridges are offset by transform (strike-slip) faults between the South and North Bismarck Sea plates.

    Earlier this year, there was an earthquake about 20 miles from today’s earthquake. Dr. Stephen Hicks is a postdoctoral research fellow in seismology at the University of Southampton who has been studying the geometry of the subduction zone associated with the New Britain Trench. Here is his tweet regarding the M = 6.6 earthquake in March 2018. That event was a foreshock to an M = 6.9 earthquake a few days later.

    Below are the two panels that show earthquake epicenters on the left and earthquakes in cross-section on the right. The location of the M = 6.6 is shown as an orange star on the cross section and a yellow star on the map. We have added the location of the M = 6.9 earthquake using the same color scheme. We also added the location of today’s M = 7.0 earthquake as a blue star.

    Seismicity map and cross section (modified from Dr. Hicks, 2018). Epicenters are shown on the map, with the earthquakes selected for the cross section outlined by a dashed rectangle labelled A-A’. Hypocenters along cross section A-A’ are shown relative to distance from the trench axis.

    Take Away

    A subduction zone megathrust earthquake with a magnitude M = 7.0 happened along one of the most seismically active subduction zones, the New Britain Trench. The magnitude and depth are the probable reasons that the Pacific Tsunami Warning Center announced that there is no tsunami threat from this earthquake, locally or globally. There was a M = 5.9 foreshock several minutes prior to the mainshock. This subduction zone has a potential for a larger earthquake.


    Bird, P., Jackson, D. D., Kagan, Y. Y., Kreemer, C., and Stein, R. S., 2015. GEAR1: A global earthquake activity rate model constructed from geodetic strain rates and smoothed seismicity, Bull. Seismol. Soc. Am., v. 105, no. 5, p. 2538–2554, DOI: 10.1785/0120150058

    Hamilton, W., 1979, Tectonics of the Indonesian region: U.S. Geological Survey Prof. Paper 1078.

    Holm, R. and Richards, S.W., 2013. A re-evaluation of arc-continent collision and along-arc variation in the Bismarck Sea region, Papua New Guinea in Australian Journal of Earth Sciences, v. 60, p. 605-619.

    Lin, J., and R. S. Stein (2004), Stress triggering in thrust and subduction earthquakes and stress interaction between the southern San Andreas and nearby thrust and strike-slip faults, J. Geophys. Res., 109, B02303, doi:10.1029/2003JB002607

    More can be found about the seismotectonics of this region here.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Earthquake Alert

    Earthquake Network project

    Earthquake Network is a research project which aims at developing and maintaining a crowdsourced smartphone-based earthquake warning system at a global level. Smartphones made available by the population are used to detect the earthquake waves using the on-board accelerometers. When an earthquake is detected, an earthquake warning is issued in order to alert the population not yet reached by the damaging waves of the earthquake.

    The project started on January 1, 2013 with the release of the Android application of the same name, Earthquake Network. The author of the research project and developer of the smartphone application is Francesco Finazzi of the University of Bergamo, Italy.

    Get the app in the Google Play store.

    Smartphone network spatial distribution (green and red dots) on December 4, 2015

    Meet The Quake-Catcher Network


    Quake-Catcher Network

    The Quake-Catcher Network is a collaborative initiative for developing the world’s largest, low-cost strong-motion seismic network by utilizing sensors in and attached to internet-connected computers. With your help, the Quake-Catcher Network can provide a better understanding of earthquakes and give early warning to schools, emergency response systems, and others. The Quake-Catcher Network also provides educational software designed to help teach about earthquakes and earthquake hazards.

    After almost eight years at Stanford, and a year at Caltech, the QCN project is moving to the University of Southern California Dept. of Earth Sciences. QCN will be sponsored by the Incorporated Research Institutions for Seismology (IRIS) and the Southern California Earthquake Center (SCEC).

    The Quake-Catcher Network is a distributed computing network that links volunteer hosted computers into a real-time motion sensing network. QCN is one of many scientific computing projects that runs on the world-renowned distributed computing platform Berkeley Open Infrastructure for Network Computing (BOINC).

    The volunteer computers monitor vibrational sensors called MEMS accelerometers, and digitally transmit “triggers” to QCN’s servers whenever strong new motions are observed. QCN’s servers sift through these signals, and determine which ones represent earthquakes, and which ones represent cultural noise (like doors slamming, or trucks driving by).
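    The article doesn’t spell out QCN’s trigger algorithm, but a classic way to flag “strong new motions” in a noisy accelerometer stream is a short-term-average / long-term-average (STA/LTA) comparison. The function, window lengths, and threshold below are illustrative assumptions, not QCN’s actual implementation.

```python
# STA/LTA trigger sketch: fire when the short-term average amplitude
# jumps well above the long-term background level.
def sta_lta_triggers(samples, sta_len=10, lta_len=100, threshold=4.0):
    triggers = []
    for i in range(lta_len, len(samples)):
        sta = sum(abs(s) for s in samples[i - sta_len:i]) / sta_len
        lta = sum(abs(s) for s in samples[i - lta_len:i]) / lta_len
        if lta > 0 and sta / lta > threshold:
            triggers.append(i)
    return triggers

# Quiet background with a burst of shaking starting at sample 150:
signal = [0.01] * 150 + [0.5] * 20 + [0.01] * 30
print(sta_lta_triggers(signal)[0])  # fires within a few samples of onset
```

    A real deployment would also debounce repeated triggers and, as the article notes, rely on the servers to separate earthquakes from doors slamming by correlating triggers across many hosts.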

    There are two categories of sensors used by QCN: 1) internal mobile device sensors, and 2) external USB sensors.

    Mobile Devices: MEMS sensors are often included in laptops, games, cell phones, and other electronic devices for hardware protection, navigation, and game control. When these devices are still and connected to QCN, QCN software monitors the internal accelerometer for strong new shaking. Unfortunately, these devices are rarely secured to the floor, so they may bounce around when a large earthquake occurs. While this is less than ideal for characterizing the regional ground shaking, many such sensors can still provide useful information about earthquake locations and magnitudes.

    USB Sensors: MEMS sensors can be mounted to the floor and connected to a desktop computer via a USB cable. These sensors have several advantages over mobile device sensors:

    1) Mounted to the floor, they measure shaking more reliably than mobile devices.
    2) They typically have lower noise and better resolution of 3D motion.
    3) Desktops are often left on and do not move.
    4) The USB sensor is physically removed from the game, phone, or laptop, so human interaction with the device doesn’t reduce the sensor’s performance.
    5) USB sensors can be aligned to North, so we know which directions the horizontal “X” and “Y” axes correspond to.

    If you are a science teacher at a K-12 school, please apply for a free USB sensor and accompanying QCN software. QCN has been able to purchase sensors to donate to schools in need. If you are interested in donating to the program or requesting a sensor, click here.

    BOINC, more properly the Berkeley Open Infrastructure for Network Computing, was developed at UC Berkeley and is a leader in the fields of distributed computing, grid computing, and citizen cyberscience.

    Earthquake safety is a responsibility shared by billions worldwide. The Quake-Catcher Network (QCN) provides software so that individuals can join together to improve earthquake monitoring, earthquake awareness, and the science of earthquakes. QCN links existing networked laptops and desktops in hopes of forming the world’s largest strong-motion seismic network.

    Below, the QCN Quake-Catcher Network map

    ShakeAlert: An Earthquake Early Warning System for the West Coast of the United States

    The U.S. Geological Survey (USGS), along with a coalition of state and university partners, is developing and testing an earthquake early warning (EEW) system called ShakeAlert for the west coast of the United States. Long-term funding must be secured before the system can begin sending general public notifications; however, some limited pilot projects are active and more are being developed. The USGS has set the goal of beginning limited public notifications in 2018.

    Watch a video describing how ShakeAlert works in English or Spanish.

    The primary project partners include:

    United States Geological Survey
    California Governor’s Office of Emergency Services (CalOES)
    California Geological Survey
    California Institute of Technology
    University of California Berkeley
    University of Washington
    University of Oregon
    Gordon and Betty Moore Foundation

    The Earthquake Threat

    Earthquakes pose a national challenge because more than 143 million Americans live in areas of significant seismic risk across 39 states. Most of our Nation’s earthquake risk is concentrated on the West Coast of the United States. The Federal Emergency Management Agency (FEMA) has estimated the average annualized loss from earthquakes, nationwide, to be $5.3 billion, with 77 percent of that figure ($4.1 billion) coming from California, Washington, and Oregon, and 66 percent ($3.5 billion) from California alone. In the next 30 years, California has a 99.7 percent chance of a magnitude 6.7 or larger earthquake and the Pacific Northwest has a 10 percent chance of a magnitude 8 to 9 megathrust earthquake on the Cascadia subduction zone.

    Part of the Solution

    Today, the technology exists to detect earthquakes quickly enough that an alert can reach some areas before strong shaking arrives. The purpose of the ShakeAlert system is to identify and characterize an earthquake a few seconds after it begins, calculate the likely intensity of ground shaking that will result, and deliver warnings to people and infrastructure in harm’s way. This can be done by detecting the first energy to radiate from an earthquake, the P-wave energy, which rarely causes damage. Using P-wave information, we first estimate the location and the magnitude of the earthquake. Then, the anticipated ground shaking across the region to be affected is estimated and a warning is provided to local populations. The method can provide warning before the S-wave arrives, bringing the strong shaking that usually causes most of the damage.
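    The head start comes from the speed gap between the waves: the warning time at distance d is roughly d/vS − d/vP. The wave speeds below are assumed typical crustal values, used here for illustration only.

```python
# Warning time: S-wave travel time minus P-wave travel time.
# vp ~ 6.0 km/s and vs ~ 3.5 km/s are assumed typical crustal speeds.
def warning_time_s(distance_km, vp=6.0, vs=3.5):
    return distance_km / vs - distance_km / vp

for d_km in (20, 50, 100):
    print(d_km, "km:", round(warning_time_s(d_km), 1), "s")
```

    At 100 km this gives roughly 12 seconds, consistent with the “few seconds to a few tens of seconds” reported for California.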

    Studies of earthquake early warning methods in California have shown that the warning time would range from a few seconds to a few tens of seconds. ShakeAlert can give enough time to slow trains and taxiing planes, to prevent cars from entering bridges and tunnels, to move away from dangerous machines or chemicals in work environments and to take cover under a desk, or to automatically shut down and isolate industrial systems. Taking such actions before shaking starts can reduce damage and casualties during an earthquake. It can also prevent cascading failures in the aftermath of an event. For example, isolating utilities before shaking starts can reduce the number of fire initiations.

    System Goal

    The USGS will issue public warnings of potentially damaging earthquakes and provide warning parameter data to government agencies and private users on a region-by-region basis, as soon as the ShakeAlert system, its products, and its parametric data meet minimum quality and reliability standards in those geographic regions. The USGS has set the goal of beginning limited public notifications in 2018. Product availability will expand geographically via ANSS regional seismic networks, such that ShakeAlert products and warnings become available for all regions with dense seismic instrumentation.

    Current Status

    The West Coast ShakeAlert system is being developed by expanding and upgrading the infrastructure of regional seismic networks that are part of the Advanced National Seismic System (ANSS): the California Integrated Seismic Network (CISN), which is made up of the Southern California Seismic Network (SCSN) and the Northern California Seismic System (NCSS), and the Pacific Northwest Seismic Network (PNSN). This enables the USGS and ANSS to leverage their substantial investment in sensor networks, data telemetry systems, data processing centers, and software for earthquake monitoring activities residing in these network centers. The ShakeAlert system has been sending live alerts to “beta” users in California since January of 2012 and in the Pacific Northwest since February of 2015.

    In February of 2016 the USGS, along with its partners, rolled out the next-generation ShakeAlert early warning test system in California, joined by Oregon and Washington in April 2017. This West Coast-wide “production prototype” has been designed for redundant, reliable operations. The system includes geographically distributed servers and allows for automatic failover if a connection is lost.

    This next-generation system will not yet support public warnings but does allow selected early adopters to develop and deploy pilot implementations that take protective actions triggered by the ShakeAlert notifications in areas with sufficient sensor coverage.


    The USGS will develop and operate the ShakeAlert system, and issue public notifications under collaborative authorities with FEMA, as part of the National Earthquake Hazard Reduction Program, as enacted by the Earthquake Hazards Reduction Act of 1977, 42 U.S.C. §§ 7704 SEC. 2.

    For More Information

    Robert de Groot, ShakeAlert National Coordinator for Communication, Education, and Outreach

    Learn more about EEW Research

    ShakeAlert Fact Sheet

    ShakeAlert Implementation Plan

  • richardmitnick 9:06 am on October 11, 2018 Permalink | Reply
    Tags: Applied Research & Technology, , , , Turbulence unsolved, Werner Heisenberg   

    From ars technica: “Turbulence, the oldest unsolved problem in physics” 

    From ars technica

    Lee Phillips

    The flow of water through a pipe is still in many ways an unsolved problem.

    Werner Heisenberg won the 1932 Nobel Prize for helping to found the field of quantum mechanics and developing foundational ideas like the Copenhagen interpretation and the uncertainty principle. The story goes that he once said that, if he were allowed to ask God two questions, they would be, “Why quantum mechanics? And why turbulence?” Supposedly, he was pretty sure God would be able to answer the first question.

    Werner Heisenberg from German Federal Archives

    The quote may be apocryphal, and there are different versions floating around. Nevertheless, it is true that Heisenberg banged his head against the turbulence problem for several years.

    His thesis advisor, Arnold Sommerfeld, assigned the turbulence problem to Heisenberg simply because he thought none of his other students were up to the challenge—and this list of students included future luminaries like Wolfgang Pauli and Hans Bethe. But Heisenberg’s formidable math skills, which allowed him to make bold strides in quantum mechanics, only afforded him a partial and limited success with turbulence.

    Now, nearly 90 years later, the effort to understand and predict turbulence remains of immense practical importance. Turbulence factors into the design of much of our technology, from airplanes to pipelines, and it factors into predicting important natural phenomena such as the weather. But because our understanding of turbulence over time has stayed largely ad-hoc and limited, the development of technology that interacts significantly with fluid flows has long been forced to be conservative and incremental. If only we became masters of this ubiquitous phenomenon of nature, these technologies might be free to evolve in more imaginative directions.

    An undefined definition

    Here is the point at which you might expect us to explain turbulence, ostensibly the subject of the article. Unfortunately, physicists still don’t agree on how to define it. It’s not quite as bad as “I know it when I see it,” but it’s not the best defined idea in physics, either.

    So for now, we’ll make do with a general notion and try to make it a bit more precise later on. The general idea is that turbulence involves the complex, chaotic motion of a fluid. A “fluid” in physics talk is anything that flows, including liquids, gases, and sometimes even granular materials like sand.

    Turbulence is all around us, yet it’s usually invisible. Simply wave your hand in front of your face, and you have created incalculably complex motions in the air, even if you can’t see it. Motions of fluids are usually hidden to the senses except at the interface between fluids that have different optical properties. For example, you can see the swirls and eddies on the surface of a flowing creek but not the patterns of motion beneath the surface. The history of progress in fluid dynamics is closely tied to the history of experimental techniques for visualizing flows. But long before the advent of the modern technologies of flow sensors and high-speed video, there were those who were fascinated by the variety and richness of complex flow patterns.

    One of the first to visualize these flows was scientist, artist, and engineer Leonardo da Vinci, who combined keen observational skills with unparalleled artistic talent to catalog turbulent flow phenomena. Back in 1509, Leonardo was not merely drawing pictures. He was attempting to capture the essence of nature through systematic observation and description. In this figure, we see one of his studies of wake turbulence, the development of a region of chaotic flow as water streams past an obstacle.

    For turbulence to be considered a solved problem in physics, we would need to be able to demonstrate that we can start with the basic equation describing fluid motion and then solve it to predict, in detail, how a fluid will move under any particular set of conditions. That we cannot do this in general is the central reason that many physicists consider turbulence to be an unsolved problem.

    I say “many” because some think it should be considered solved, at least in principle. Their argument is that calculating turbulent flows is just an application of Newton’s laws of motion, albeit a very complicated one; we already know Newton’s laws, so everything else is just detail. Naturally, I hold the opposite view: the proof is in the pudding, and this particular pudding has not yet come out right.

    The lack of a complete and satisfying theory of turbulence based on classical physics has even led to suggestions that a full account requires some quantum mechanical ingredients: that’s a minority view, but one that can’t be discounted.

    An example of why turbulence is said to be an unsolved problem is that we can’t generally predict the speed at which an orderly, non-turbulent (“laminar”) flow will make the transition to a turbulent flow. We can do pretty well in some special cases—this was one of the problems that Heisenberg had some success with—but, in general, our rules of thumb for predicting the transition speeds are summaries of experiments and engineering experience.

    There are many phenomena in nature that illustrate the often sudden transformation from a calm, orderly flow to a turbulent flow.

    The transition to turbulence. Credit: Dr. Gary Settles

    The figure above is a nice illustration of this transition phenomenon. It shows the hot air rising from a candle flame, using a 19th-century visualization technique that makes gases of different densities look different. Here, the air heated by the candle is less dense than the surrounding atmosphere.

    For another turbulent transition phenomenon familiar to anyone who frequents the beach, consider gentle, rolling ocean waves that become complex and foamy as they approach the shore and “break.” In the open ocean, wind-driven waves can also break if the windspeed is high or if multiple waves combine to form a larger one.

    For another visual aid, there is a centuries-old tradition in Japanese painting of depicting turbulent, breaking ocean waves. In these paintings, the waves are not merely part of the landscape but the main subjects. These artists seemed to be mainly concerned with conveying the beauty and terrible power of the phenomenon, rather than, as was Leonardo, being engaged in a systematic study of nature. One of the most famous Japanese artworks, and an iconic example of this genre, is Hokusai’s “Great Wave,” a woodblock print published in 1831.

    Hokusai’s “Great Wave.”

    For one last reason to consider turbulence an unsolved problem, turbulent flows exhibit a wide range of interesting behavior in time and space. Most of these have been discovered by measurement, not predicted, and there’s still no satisfying theoretical explanation for them.


    Reasons for and against “mission complete” aside, why is the turbulence problem so hard? The best answer comes from looking at both the history and current research directed at what Richard Feynman once called “the most important unsolved problem of classical physics.”

    The most commonly used formula for describing fluid flow is the Navier-Stokes equation. This is the equation you get if you apply Newton’s second law of motion, F = ma (force = mass × acceleration), to a fluid with simple material properties, excluding elasticity, memory effects, and other complications. Complications like these arise when we try to accurately model the flows of paint, polymers, and some biological fluids such as blood; many other substances also violate the assumptions of the Navier-Stokes equations. But for water, air, and other simple liquids and gases, it’s an excellent approximation.
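    For reference, the incompressible Navier-Stokes momentum equation for such a simple fluid can be written as:

```latex
\rho \left( \frac{\partial \mathbf{u}}{\partial t}
            + (\mathbf{u} \cdot \nabla)\,\mathbf{u} \right)
  = -\nabla p + \mu \nabla^{2}\mathbf{u} + \mathbf{f}
```

    Here ρ is the density, u the velocity field, p the pressure, μ the viscosity, and f any body force such as gravity. Every term but one is linear in u; the advection term (u·∇)u is the exception, and it is the source of most of the difficulty.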

    The Navier-Stokes equation is difficult to solve because it is nonlinear. This word is thrown around quite a bit, but here it means something specific. You can build up a complicated solution to a linear equation by adding up many simple solutions. An example you may be aware of is sound: the equation for sound waves is linear, so you can build up a complex sound by adding together many simple sounds of different frequencies (“harmonics”). Elementary quantum mechanics is also linear; the Schrödinger equation allows you to add together solutions to find a new solution.

    But fluid dynamics doesn’t work this way: the nonlinearity of the Navier-Stokes equation means that you can’t build solutions by adding together simpler solutions. This is part of the reason that Heisenberg’s mathematical genius, which served him so well in helping to invent quantum mechanics, was put to such a severe test when it came to turbulence.
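    The difference can be checked directly with discrete stand-ins for the two kinds of terms: a second difference (linear, as in a wave equation) and a u·du/dx advection term (nonlinear, as in Navier-Stokes). This sketch tests superposition for each; the sample arrays are arbitrary.

```python
# Linear operator: discrete second derivative (grid spacing omitted).
def second_diff(u):
    return [u[i-1] - 2*u[i] + u[i+1] for i in range(1, len(u) - 1)]

# Nonlinear operator: u * du/dx with a central difference.
def advection(u):
    return [u[i] * (u[i+1] - u[i-1]) / 2 for i in range(1, len(u) - 1)]

f = [0.0, 1.0, 4.0, 9.0, 16.0]
g = [1.0, 3.0, 5.0, 7.0, 9.0]
fg = [a + b for a, b in zip(f, g)]

# Superposition holds for the linear operator...
print(second_diff(fg) == [a + b for a, b in zip(second_diff(f), second_diff(g))])  # True
# ...but fails for the nonlinear one, so solutions cannot simply be added.
print(advection(fg) == [a + b for a, b in zip(advection(f), advection(g))])  # False
```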

    Heisenberg was forced to make various approximations and assumptions to make any progress with his thesis problem. Some of these were hard to justify; for example, the applied mathematician Fritz Noether (a brother of Emmy Noether) raised prominent objections to Heisenberg’s turbulence calculations for decades before finally admitting that they seemed to be correct after all.

    (The situation was so hard to resolve that Heisenberg himself said, while he thought his methods were justified, he couldn’t find the flaw in Fritz Noether’s reasoning, either!)

    The cousins of the Navier-Stokes equation that are used to describe more complex fluids are also nonlinear, as is a simplified form, the Euler equation, that omits the effects of friction. There are cases where a linear approximation does work well, such as flow at extremely slow speeds (imagine honey flowing out of a jar), but this excludes most problems of interest including turbulence.

    Who’s down with CFD?

    Despite the near impossibility of finding mathematical solutions to the equations for fluid flows under realistic conditions, science still needs to get some kind of predictive handle on turbulence. For this, scientists and engineers have turned to the only option available when pencil and paper failed them—the computer. These groups are trying to make the most of modern hardware to put a dent in one of the most demanding applications for numerical computing: calculating turbulent flows.

    The need to calculate these chaotic flows has benefited from (and been a driver of) improvements in numerical methods and computer hardware almost since the first giant computers appeared. The field is called computational fluid dynamics, often abbreviated as CFD.

    Early in the history of CFD, engineers and scientists applied straightforward numerical techniques in order to try to directly approximate solutions to the Navier-Stokes equations. This involves dividing up space into a grid and calculating the fluid variables (pressure, velocity) at each grid point. The large range of spatial scales immediately makes this approach expensive: flow features must be resolved from the largest scales (meters for pipes, thousands of kilometers for weather) down to near the molecular scale. Even if you cut off the length scale at the small end at millimeters or centimeters, you will still need millions of grid points.
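    As a toy version of this direct approach, here is the 1D viscous Burgers equation, u_t + u·u_x = ν·u_xx, a standard simplified stand-in for Navier-Stokes, stepped forward on a uniform grid. This is an illustrative sketch with arbitrary parameters, not production CFD.

```python
import math

# One explicit time step of 1D viscous Burgers on a uniform grid;
# endpoints are held fixed as simple boundary conditions.
def step_burgers(u, dx, dt, nu):
    new = u[:]
    for i in range(1, len(u) - 1):
        adv = u[i] * (u[i+1] - u[i-1]) / (2 * dx)        # nonlinear advection
        diff = nu * (u[i-1] - 2*u[i] + u[i+1]) / dx**2   # viscous diffusion
        new[i] = u[i] + dt * (diff - adv)
    return new

# A smooth sine bump on a 101-point grid, marched 100 small steps.
dx, dt, nu = 0.01, 1e-4, 0.01
u = [math.sin(math.pi * i * dx) for i in range(101)]
for _ in range(100):
    u = step_burgers(u, dx, dt, nu)
print(round(max(u), 3))  # peak stays near 1, decaying slowly under viscosity
```

    Even this toy problem hints at the cost: the time step must shrink as the grid is refined, so resolving fine turbulent features in 3D multiplies the work enormously.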

    A possible grid for calculating the flow over an airfoil.

    One approach to getting reasonable accuracy with a manageable-sized grid begins with the realization that there are often large regions where not much is happening. Put another way, in regions far away from solid objects or other disturbances, the flow is likely to vary slowly in both space and time. All the action is elsewhere; the turbulent areas are usually found near objects or interfaces.

    A non-uniform grid for calculating the flow over an airfoil.

    If we take another look at our airfoil and imagine a uniform flow beginning at the left and passing over it, it can be more efficient to concentrate the grid points near the object, especially at the leading and trailing edges, and not “waste” grid points far away from the airfoil. The next figure shows one possible gridding for simulating this problem.

    This is the simplest type of 2D non-uniform grid, containing nothing but straight lines. The state of the art in nonuniform grids is called adaptive mesh refinement (AMR), where the mesh, or grid, actually changes and adapts to the flow during the simulation. This concentrates grid points where they are needed, not wasting them in areas of nearly uniform flow. Research in this field is aimed at optimizing the grid generation process while minimizing the artificial effects of the grid on the solution. Here it’s used in a NASA simulation of the flow around an oscillating rotor blade. The color represents vorticity, a quantity related to angular momentum.

    Using AMR to simulate the flow around a rotor blade. Neal M. Chaderjian, NASA/Ames

    The above image shows the computational grid, rendered as blue lines, as well as the airfoil and the flow solution, showing how the grid adapts itself to the flow. (The grid points are so close together at the areas of highest grid resolution that they appear as solid blue regions.) Despite the efficiencies gained by the use of adaptive grids, simulations such as this are still computationally intensive; a typical calculation of this type occupies 2,000 compute cores for about a week.
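    The core AMR idea can be sketched in a few lines: flag the cells where the solution changes rapidly and refine only those. In the toy version below (the function name and threshold are purely illustrative), cells of a 1D field are flagged by gradient magnitude; production AMR codes like the NASA solver above additionally manage whole hierarchies of nested grid patches.

    ```python
    import numpy as np

    # The core AMR idea in miniature: flag cells where the field varies
    # rapidly and refine only those, leaving smooth regions on the coarse
    # grid. Function name and threshold are illustrative.
    def flag_for_refinement(field, dx, threshold):
        grad = np.abs(np.gradient(field, dx))
        return grad > threshold              # True = this cell needs refinement

    x = np.linspace(-1.0, 1.0, 101)
    field = np.tanh(x / 0.05)                # sharp front near x = 0
    flags = flag_for_refinement(field, x[1] - x[0], threshold=1.0)
    ```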

    Dimitri Mavriplis and his collaborators at the Mavriplis CFD Lab at the University of Wyoming have made available several videos of their AMR simulations.

    AMR simulation of flow past a sphere. Mavriplis CFD Lab

    Above is a frame from a video of a simulation of the flow past an object; the video is useful for getting an idea of how the AMR technique works, because it shows how the computational grid tracks the flow features.

    This work is an example of how state-of-the-art numerical techniques are capable of capturing some of the physics of the transition to turbulence, illustrated in the image of candle-heated air above.

    Another approach to getting the most out of finite computer resources involves making alterations to the equation of motion, rather than, or in addition to, altering the computational grid.

    Since the first direct numerical simulations of the Navier-Stokes equations were begun at Los Alamos in the late 1950s, the problem of the vast range of spatial scales has been attacked by some form of modeling of the flow at small scales. In other words, the actual Navier-Stokes equations are solved for motion on the medium and large scales, but, below some cutoff, a statistical or other model is substituted.

    The idea is that the interesting dynamics occur at larger scales, and grid points are placed to cover these. But the “subgrid” motions that happen between the gridpoints mainly just dissipate energy, or turn motion into heat, so don’t need to be tracked in detail. This approach is also called large-eddy simulation (LES), the term “eddy” standing in for a flow feature at a particular length scale.
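    The classic example of this approach is the Smagorinsky subgrid model, which replaces the unresolved motion with an "eddy viscosity" proportional to the resolved strain rate. Below is a minimal sketch for a 2D velocity field on a uniform grid; the constant, the test field, and the choice of filter width (equal to the grid spacing) are illustrative.

    ```python
    import numpy as np

    # Classic Smagorinsky subgrid model: unresolved motion is replaced by an
    # "eddy viscosity" nu_t = (Cs * Delta)^2 * |S|, with |S| = sqrt(2 S_ij S_ij)
    # the resolved strain-rate magnitude. Here the filter width Delta = dx.
    def smagorinsky_nu_t(u, v, dx, cs=0.17):
        dudx, dudy = np.gradient(u, dx)
        dvdx, dvdy = np.gradient(v, dx)
        s11, s22 = dudx, dvdy                # diagonal strain components
        s12 = 0.5 * (dudy + dvdx)            # off-diagonal component
        strain = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
        return (cs * dx) ** 2 * strain       # one eddy-viscosity value per cell

    n = 64
    dx = 1.0 / n
    xg, yg = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
    u = np.sin(2 * np.pi * xg) * np.cos(2 * np.pi * yg)   # toy vortex field
    v = -np.cos(2 * np.pi * xg) * np.sin(2 * np.pi * yg)
    nu_t = smagorinsky_nu_t(u, v, dx)
    ```

    The extra viscosity drains energy from the smallest resolved scales, mimicking the dissipation that the untracked subgrid eddies would have provided.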

    The development of subgrid modeling, although it began with the beginning of CFD, is an active area of research to this day. This is because we always want to get the most bang for the computer buck. No matter how powerful the computer, a sophisticated numerical technique that allows us to limit the required grid resolution will enable us to handle more complex problems.

    There are several other prominent approaches to modeling fluid flows on computers, some of which do not make use of grids at all. Perhaps the most successful of these is the technique called “smoothed particle hydrodynamics,” which, as its name suggests, models the fluid as a collection of computational “particles,” which are moved around without the use of a grid. The “smoothed” in the name comes from the smooth interpolations between particles that are used to derive the fluid properties at different points in space.
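    The kernel-weighted interpolation at the heart of SPH fits in a few lines: the density at any point is a sum over nearby particles, weighted by a smoothing kernel. Here is a 1D toy version with a Gaussian kernel; all names and parameters are illustrative.

    ```python
    import numpy as np

    # The interpolation at the heart of SPH: density at a point is a
    # smoothing-kernel-weighted sum over nearby particles (1D toy version).
    def w_gaussian(r, h):
        return np.exp(-((r / h) ** 2)) / (h * np.sqrt(np.pi))  # normalized in 1D

    def sph_density(x_eval, x_particles, mass, h):
        r = np.abs(x_eval - x_particles)
        return np.sum(mass * w_gaussian(r, h))  # rho(x) = sum_j m_j W(|x-x_j|, h)

    rng = np.random.default_rng(0)
    x_particles = rng.uniform(0.0, 1.0, 1000)   # 1000 particles on [0, 1]
    mass = 1.0 / x_particles.size               # total mass 1 -> mean density ~1
    rho = sph_density(0.5, x_particles, mass, h=0.05)
    ```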

    Theory and experiment

    Despite the impressive (and ever-improving) ability of fluid dynamicists to calculate complex flows with computers, the search for a better theoretical understanding of turbulence continues, because computers can only calculate flow solutions in particular situations, one case at a time. Only through the use of mathematics do physicists feel that they’ve achieved a general understanding of a group of related phenomena. Luckily, there are a few main theoretical approaches to turbulence, each seeking to penetrate a different aspect of the phenomenon.

    Only a few exact solutions of the Navier-Stokes equations are known; these describe simple, laminar flows (and certainly not turbulent flows of any kind). For flow between two flat plates, the velocity is zero at the boundaries and reaches a maximum half-way between them. This parabolic flow profile (shown below) solves the equations, something that has been known for over a century. Laminar flow in a pipe is similar, with the maximum velocity occurring at the center.

    Exact solution for flow between plates.

    The interesting thing about this parabolic solution, and similar exact solutions, is that they are valid (mathematically speaking) at any flow velocity, no matter how high. However, experience shows that while this works at low speeds, the flow breaks up and becomes turbulent at some moderate “critical” speed. Using mathematical methods to try to find this critical speed is part of what Heisenberg was up to in his thesis work.
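    The parabolic solution just described can be written down and checked directly: for plane Poiseuille flow, u(y) = (G / 2μ) · y · (h − y), where G is the pressure gradient, μ the viscosity, and h the gap between the plates. A quick sketch with illustrative values:

    ```python
    import numpy as np

    # Exact laminar solution for pressure-driven flow between two plates
    # (plane Poiseuille flow): u(y) = (G / (2 mu)) * y * (h - y). Zero at the
    # walls, maximum midway between them: the parabola described above.
    G, mu, h = 1.0, 1.0e-3, 0.01      # pressure gradient, viscosity, plate gap
    y = np.linspace(0.0, h, 101)
    u = (G / (2.0 * mu)) * y * (h - y)

    assert u[0] == 0.0 and u[-1] == 0.0   # no-slip at both walls
    assert np.argmax(u) == 50             # peak velocity at the midpoint
    ```

    The formula solves the equations at any G, however large; it is experiment (and stability theory) that tells us real flow abandons this solution above a critical speed.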

    Theorists describe what’s happening here by using the language of stability theory. Stability theory is the examination of the exact solutions to the Navier-Stokes equation and their ability to survive “perturbations,” which are small disturbances added to the flow. These disturbances can be in the form of boundaries that are less than perfectly smooth, variations in the pressure driving the flow, etc.

    The idea is that, while the low-speed solution is valid at any speed, near a critical speed another solution also becomes valid, and nature prefers that second, more complex solution. In other words, the simple solution has become unstable and is replaced by a second one. As the speed is ramped up further, each solution gives way to a more complicated one, until we arrive at the chaotic flow we call turbulence.

    In the real world, this will always happen, because perturbations are always present—and this is why laminar flows are much less common in everyday experience than turbulence.

    Experiments to directly observe these instabilities are delicate, because the distance between the first instability and the onset of full-blown turbulence is usually quite small. You can see a version of the process in the figure above, showing the transition to turbulence in the heated air column above a candle. The straight column is unstable, but it takes a while before the sinuous instability grows large enough for us to see it as a visible wiggle. Almost as soon as this happens, the cascade of instabilities piles up, and we see a sudden explosion into turbulence.

    Another example of the common pattern is in the next illustration, which shows the typical transition to turbulence in a flow bounded by a single wall.

    Transition to turbulence in a wall-bounded flow. NASA.

    We can again see an approximately periodic disturbance to the laminar flow begin to grow, and after just a few wavelengths the flow suddenly becomes turbulent.

    Capturing, and predicting, the transition to turbulence is an ongoing challenge for simulations and theory; on the theoretical side, the effort begins with stability theory.

    In fluid flows close to a wall, the transition to turbulence can take a somewhat different form. As in the other examples illustrated here, small disturbances get amplified by the flow until they break down into chaotic, turbulent motion. But the turbulence does not involve the entire fluid, instead confining itself to isolated spots, which are surrounded by calm, laminar flow. Eventually, more spots develop, enlarge, and ultimately merge, until the entire flow is turbulent.

    The fascinating thing about these spots is that, somehow, the fluid can enter them, undergo a complex, chaotic motion, and emerge calmly as a non-turbulent, organized flow on the other side. Meanwhile, the spots persist as if they were objects embedded in the flow and attached to the boundary.

    Turbulent spot experiment: pressure fluctuation. (Credit: Katya Casper et al., Sandia National Labs)

    Despite a succession of first-rate mathematical minds puzzling over the Navier-Stokes equation since it was written down almost two centuries ago, exact solutions still are rare and cherished possessions, and basic questions about the equation remain unanswered. For example, we still don’t know whether the equation has solutions in all situations. We’re also not sure if its solutions, which supposedly represent the real flows of water and air, remain well-behaved and finite, or whether some of them blow up with infinite energies or become unphysically unsmooth.

    The scientist who can settle this, either way, has a cool million dollars waiting for them—this is one of the seven unsolved “Millennium Prize” mathematical problems set by the Clay Mathematics Institute.

    Fortunately, there are other ways to approach the theory of turbulence, some of which don’t depend on the knowledge of exact solutions to the equations of motion. The study of the statistics of turbulence uses the Navier-Stokes equation to deduce average properties of turbulent flows without trying to solve the equations exactly. It addresses questions like, “if the velocity of the flow here is so and so, then what is the probability that the velocity one centimeter away will be within a certain range?” It also answers questions about the average of quantities such as the resistance encountered when trying to push water through a pipe, or the lifting force on an airplane wing.

    These are the quantities of real interest to the engineer, who has little use for the physicist’s or mathematician’s holy grail of a detailed, exact description.

    It turns out that the one great obstacle in the way of a statistical approach to turbulence theory is, once again, the nonlinear term in the Navier-Stokes equation. When you use this equation to derive another equation for the average velocity at a single point, it contains a term involving something new: the velocity correlation between two points. When you derive the equation for this velocity correlation, you get an equation with yet another new term: the velocity correlation involving three points. This process never ends, as the diabolical nonlinear term keeps generating higher-order correlations.

    The need to somehow terminate, or “close,” this infinite sequence of equations is known as the “closure problem” in turbulence theory and is still the subject of active research. Very briefly, to close the equations you need to step outside of the mathematical procedure and appeal to a physically motivated assumption or approximation.
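    In symbols, the first step of that chain looks like this: decompose the velocity into mean and fluctuation and average the momentum equation; the nonlinear term leaves behind an unclosed correlation.

    ```latex
    % Reynolds decomposition: write u_i = U_i + u'_i (mean plus fluctuation)
    % and average the Navier-Stokes momentum equation. The nonlinear term
    % leaves behind an unclosed correlation, the Reynolds stress:
    \partial_t U_i + U_j \partial_j U_i
      = -\frac{1}{\rho}\,\partial_i P
        + \nu\,\partial_j \partial_j U_i
        - \partial_j \overline{u'_i u'_j}
    ```

    The new unknown, the Reynolds stress \overline{u'_i u'_j}, needs its own equation, which in turn contains triple correlations, and so on up the chain.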

    Despite its difficulty, some type of statistical solution to the fluid equations is essential for describing the phenomena of fully developed turbulence, of which there are a number. Turbulence need not be merely a random, featureless expanse of roiling fluid; in fact, it usually is more interesting than that. One of the most intriguing phenomena is the existence of persistent, organized structures within a violent, chaotic flow environment. We are all familiar with magnificent examples of these in the form of the storms on Jupiter, recognizable, even iconic, features that last for years, embedded within a highly turbulent flow.

    More down-to-Earth examples occur in almost any real-world case of a turbulent flow—in fact, experimenters have to take great pains if they want to create a turbulent flow field that is truly homogeneous, without any embedded structure.

    In the below images of a turbulent wake behind a cylinder and of the transition to turbulence in a wall-bounded flow, you can see the echoes of the wave-like disturbance that precedes the onset of fully developed turbulence: a periodicity that persists even as the flow becomes chaotic.

    Cyclones at Jupiter’s north pole. NASA, JPL-Caltech, SwRI, ASI, INAF, JIRAM.

    Wake behind a cylinder. Joseph Straccia et al. (CC BY-NC-ND)

    When your basic governing equation is very hard to solve or even to simulate, it’s natural to look for a more tractable equation or model that still captures most of the important physics. Much of the theoretical effort to understand turbulence is of this nature.

    We’ve mentioned subgrid models above, used to reduce the number of grid points required in a numerical simulation. Another approach to simplifying the Navier-Stokes equation is a class of models called “shell models.” Roughly speaking, in these models you take the Fourier transform of the Navier-Stokes equation, leading to a description of the fluid as a large number of interacting waves at different wavelengths. Then, in a systematic way, you discard most of the waves, keeping just a handful of significant ones. You can then calculate, using a computer or, with the simplest models, by hand, the mode interactions and the resulting turbulent properties. While, naturally, much of the physics is lost in these types of models, they allow some aspects of the statistical properties of turbulence to be studied in situations where the full equations cannot be solved.
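    To give the flavor of the approach, the sketch below sets up logarithmically spaced shells and integrates a crude quadratic nearest-shell coupling. The coefficients are deliberately simplified for illustration; this is not an established model such as GOY or Sabra, just the same general shape.

    ```python
    import numpy as np

    # Flavor of a shell model: collapse the Fourier modes of the flow onto a
    # few logarithmically spaced wavenumber shells k_n, each carrying one
    # complex amplitude u_n, coupled quadratically to neighboring shells.
    # Coupling simplified for illustration; NOT the GOY or Sabra coefficients.
    n_shells, lam, nu, dt = 12, 2.0, 1e-6, 1e-4
    k = 0.05 * lam ** np.arange(n_shells)                 # shell wavenumbers
    u = 1e-2 * (1 + 1j) / np.sqrt(2) * k ** (-1.0 / 3.0)  # cascade-like start

    def rhs(u):
        up1, up2 = np.roll(u, -1), np.roll(u, -2)    # next two shells up
        nonlinear = 1j * k * np.conj(up1) * up2      # crude quadratic coupling
        nonlinear[-2:] = 0.0                         # no shells beyond the top
        return nonlinear - nu * k**2 * u             # plus viscous damping

    for _ in range(1000):
        u = u + dt * rhs(u)                          # forward Euler, small steps

    energy = 0.5 * np.sum(np.abs(u) ** 2)
    ```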

    Occasionally, we hear about the “end of physics”—the idea that we are approaching the stage where all the important questions will be answered, and we will have a theory of everything. But from another point of view, the fact that such a commonplace phenomenon as the flow of water through a pipe is still in many ways an unsolved problem means that we are unlikely to ever reach a point that all physicists will agree is the end of their discipline. There remains enough mystery in the everyday world around us to keep physicists busy far into the future.

    See the full article here.


    Please help promote STEM in your local schools.

    STEM Education Coalition

    Ars Technica was founded in 1998 when Founder & Editor-in-Chief Ken Fisher announced his plans for starting a publication devoted to technology that would cater to what he called “alpha geeks”: technologists and IT professionals. Ken’s vision was to build a publication with a simple editorial mission: be “technically savvy, up-to-date, and more fun” than what was currently popular in the space. In the ensuing years, with formidable contributions by a unique editorial staff, Ars Technica became a trusted source for technology news, tech policy analysis, breakdowns of the latest scientific advancements, gadget reviews, software, hardware, and nearly everything else found in between layers of silicon.

    Ars Technica innovates by listening to its core readership. Readers have come to demand devotedness to accuracy and integrity, flanked by a willingness to leave each day’s meaningless, click-bait fodder by the wayside. The result is something unique: the unparalleled marriage of breadth and depth in technology journalism. By 2001, Ars Technica was regularly producing news reports, op-eds, and the like, but the company stood out from the competition by regularly providing long thought-pieces and in-depth explainers.

    And thanks to its readership, Ars Technica also accomplished a number of industry leading moves. In 2001, Ars launched a digital subscription service when such things were non-existent for digital media. Ars was also the first IT publication to begin covering the resurgence of Apple, and the first to draw analytical and cultural ties between the world of high technology and gaming. Ars was also first to begin selling its long form content in digitally distributable forms, such as PDFs and eventually eBooks (again, starting in 2001).

  • richardmitnick 12:47 pm on October 10, 2018 Permalink | Reply
    Tags: Applied Research & Technology, Disentangling Quantum Entanglement, Fundamental research in theoretical and mathematical physics, , QMAP at Davis, ,   

    From UC Davis egghead blog: “Grants for Quantum Information Science” 

    UC Davis bloc

    From UC Davis egghead blog

    UC Davis egghead blog

    October 9th, 2018
    Andy Fell

    The QMAP initiative at UC Davis is aimed at fundamental research in theoretical and mathematical physics.

    The U.S. Department of Energy recently announced $218 million in new grants for “Quantum Information Science” and researchers with the Center for Quantum Mathematics and Physics (QMAP) at UC Davis are among the recipients.

    Professors Veronika Hubeny and Mukund Rangamani were awarded $348,000 over two years for work on “Entanglement in String Theory and the Emergence of Geometry.” They will explore connections between the nature of spacetime, quantum entanglement and string theory. Entanglement, famously described by Einstein as “spooky action at a distance,” is a phenomenon in quantum physics where the properties of pairs of particles are correlated even when they are widely separated.

    Another grant in the program went to a team including Professor Andreas Albrecht and led by Andrew Sornborger, scientist at the Los Alamos National Laboratory and a research associate in the UC Davis Department of Mathematics. They will work on “Disentangling Quantum Entanglement: A Machine Learning Approach to Decoherence, Quantum Error Correction, and Phase Transition Dynamics.” This research will involve investigating quantum theories with a variety of model systems, including using machine learning to interpret and understand the models. The larger goals are understanding the emergence of locality and the arrow of time in the physics that governs the cosmos, Albrecht said.

    The energy department’s program is a long-term investment in research towards the next generation of computing and information technologies, according to their news release. While digital computers are based on “bits” that are ones or zeroes, a quantum computer works with “qubits” that exploit the properties of quantum theory – such as entanglement – to function.
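    The bit/qubit distinction can be made concrete in a few lines of linear algebra (a generic sketch, not specific to any of the funded projects): a qubit is a unit vector in a two-dimensional complex space, and an entangled pair such as the Bell state below has perfectly correlated measurement outcomes.

    ```python
    import numpy as np

    # A classical bit is 0 or 1; a qubit is a unit vector a|0> + b|1> in C^2.
    # Two qubits can be entangled: the Bell state below is not a product of
    # single-qubit states, and its measurement outcomes are perfectly correlated.
    ket0 = np.array([1, 0], dtype=complex)
    ket1 = np.array([0, 1], dtype=complex)

    bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)  # (|00>+|11>)/sqrt 2
    probs = np.abs(bell) ** 2   # probabilities of measuring |00>, |01>, |10>, |11>
    # probs is [0.5, 0, 0, 0.5]: the two qubits always agree, never mixed outcomes
    ```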

    Working quantum computers are in their very early infancy, but they could theoretically achieve tasks that are not possible or too time-consuming for current computing technology.

    Quantum computing – IBM

    The grants awarded range from new materials, hardware and software to the implications of quantum computing for fundamental physics.

    QMAP was founded in 2015 in the College of Letters and Science with the goal of having mathematicians and physicists work together on topics at the intersection of both fields such as quantum gravity, quantum field theory and string theory. Albrecht is the center’s founding director; Hubeny and Rangamani are among the first faculty hired for the center.

    See the full article here.


    Please help promote STEM in your local schools.

    STEM Education Coalition

    About Egghead

    Egghead is a blog about research by, with or related to UC Davis. Comments on posts are welcome, as are tips and suggestions for posts. General feedback may be sent to Andy Fell. This blog is created and maintained by UC Davis Strategic Communications, and mostly edited by Andy Fell.

    UC Davis Campus

    The University of California, Davis, is a major public research university located in Davis, California, just west of Sacramento. It encompasses 5,300 acres of land, making it the second largest UC campus in terms of land ownership, after UC Merced.

  • richardmitnick 12:20 pm on October 10, 2018 Permalink | Reply
    Tags: Applied Research & Technology, , , Scientists forge ahead with electron microscopy to build quantum materials atom by atom,   

    From Oak Ridge National Laboratory: “Scientists forge ahead with electron microscopy to build quantum materials atom by atom” 


    From Oak Ridge National Laboratory

    October 9, 2018
    Sara Shoemaker, Communications

    A novel technique that nudges single atoms to switch places within an atomically thin material could bring scientists another step closer to realizing theoretical physicist Richard Feynman’s vision of building tiny machines from the atom up.

    A significant push to develop materials that harness the quantum nature of atoms is driving the need for methods to build atomically precise electronics and sensors. Fabricating nanoscale devices atom by atom requires delicacy and precision, which has been demonstrated by a microscopy team at the Department of Energy’s Oak Ridge National Laboratory.

    They used a scanning transmission electron microscope, or STEM, at the lab’s Center for Nanophase Materials Sciences to introduce silicon atoms into a single-atom-thick sheet of graphene. As the electron beam scans across the material, its energy slightly disrupts the graphene’s molecular structure and creates room for a nearby silicon atom to swap places with a carbon atom.

    Custom-designed scanning transmission electron microscope at Cornell University by David Muller/Cornell University

    “We observed an electron beam-assisted chemical reaction induced at a single atom and chemical bond level, and each step has been captured by the microscope, which is rare,” said ORNL’s Ondrej Dyck, co-author of a study published in the journal Small that details the STEM demonstration.

    Ondrej Dyck of Oak Ridge National Laboratory used a scanning transmission electron microscope to move single atoms in a two-dimensional layer of graphene, an approach that could be used to build nanoscale devices from the atomic level up for quantum-based applications. Credit: Carlos Jones/Oak Ridge National Laboratory, U.S. Dept. of Energy.

    Using this process, the scientists were further able to bring two, three and four silicon atoms together to build clusters and make them rotate within the graphene layer. Graphene is a two-dimensional, or 2D, layer of carbon atoms that exhibits unprecedented strength and high electrical conductivity. Dyck said he selected graphene for this work, because “it is robust against a 60-kilovolt electron beam.”

    “We can look at graphene for long periods of time without hurting the sample, compared with other 2D materials such as transition metal dichalcogenide monolayers, which tend to fall apart more easily under the electron beam,” he added.

    STEM has emerged in recent years as a viable tool for manipulating atoms in materials while preserving the sample’s stability.

    With a STEM microscope, ORNL’s Ondrej Dyck brought two, three and four silicon atoms together to build clusters and make them rotate within a layer of graphene, a two-dimensional layer of carbon atoms that exhibits unprecedented strength and high electrical conductivity. Credit: Ondrej Dyck/Oak Ridge National Laboratory, U.S. Dept. of Energy.

    Dyck and ORNL colleagues Sergei Kalinin, Albina Borisevich and Stephen Jesse are among few scientists learning to control the movement of single atoms in 2D materials using the STEM. Their work supports an ORNL-led initiative coined The Atomic Forge, which encourages the microscopy community to reimagine STEM as a method to build materials from scratch.

    The fields of nanoscience and nanotechnology have experienced explosive growth in recent years. One of the earlier steps toward Feynman’s idea of building tiny machines atom by atom—a follow-on from his original theory of atomic manipulation first presented during his famous 1959 lecture—was seeded by the work of IBM fellow Donald Eigler. He had shown the manipulation of atoms using a scanning tunneling microscope.

    “For decades, Eigler’s method was the only technology to manipulate atoms one by one. Now, we have demonstrated a second approach with an electron beam in the STEM,” said Kalinin, director of the ORNL Institute for Functional Imaging of Materials. He and Jesse initiated research with the electron beam about four years ago.

    Successfully moving atoms in the STEM could be a crucial step toward fabricating quantum devices one atom at a time. The scientists will next try introducing other atoms such as phosphorus into the graphene structure.

    “Phosphorus has potential because it contains one extra electron compared to carbon,” Dyck said. “This would be ideal for building a quantum bit, or qubit, which is the basis for quantum-based devices.”

    Their goal is to eventually build a device prototype in the STEM.

    Dyck cautioned that while building a qubit from phosphorus-doped graphene is on the horizon, how the material would behave at ambient temperatures—outside of the STEM or a cryogenic environment—remains unknown.

    “We have found that exposing the silicon-doped graphene to the outside world does impact the structures,” he said.

    They will continue to experiment with ways to keep the material stable in non-laboratory environments, which is important to the future success of STEM-built atomically precise structures.

    “By controlling matter at the atomic scale, we are going to bring the power and mystery of quantum physics to real-world devices,” Jesse said.

    Co-authors of the paper, titled “Building Structures Atom by Atom via Electron Beam Manipulation,” are Ondrej Dyck, Sergei V. Kalinin and Stephen Jesse of ORNL; Songkil Kim of Pusan National University in South Korea; Elisa Jimenez-Izal of the University of California and UPV/EHU and DIPC in Spain; and Anastassia N. Alexandrova of UPV/EHU and DIPC in Spain and the California NanoSystems Institute.

    The research was funded by ORNL’s Laboratory-Directed Research and Development program. Microscopy experiments were performed at the Center for Nanophase Materials Sciences, a DOE Office of Science User Facility.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


  • richardmitnick 9:33 am on October 10, 2018 Permalink | Reply
    Tags: Applied Research & Technology, , , , ,   

    From MIT: “A new path to solving a longstanding fusion challenge” 

    MIT News
    MIT Widget

    From MIT News

    October 9, 2018
    David L. Chandler

    The ARC conceptual design for a compact, high magnetic field fusion power plant. The design now incorporates innovations from the newly published research to handle heat exhaust from the plasma. ARC rendering by Alexander Creely

    The ARC conceptual design for a compact, high magnetic field fusion power plant. Numbered components are as follows: 1. plasma; 2. The newly designed divertor; 3. copper trim coils; 4. High-temperature superconductor (HTS) poloidal field coils, used to shape the plasma in the divertor; 5. FLiBe blanket, a liquid material that collects heat from emitted neutrons; 6. HTS toroidal field coils, which shape the main plasma torus; 7. HTS central solenoid; 8. vacuum vessel; 9. FLiBe tank; 10. joints in toroidal field coils, which can be opened to allow for access to the interior. ARC rendering by Alexander Creely

    Novel design could help shed excess heat in next-generation fusion power plants.

    A class exercise at MIT, aided by industry researchers, has led to an innovative solution to one of the longstanding challenges facing the development of practical fusion power plants: how to get rid of excess heat that would cause structural damage to the plant.

    The new solution was made possible by an innovative approach to compact fusion reactors, using high-temperature superconducting magnets. This method formed the basis for a massive new research program launched this year at MIT and the creation of an independent startup company to develop the concept. The new design, unlike that of typical fusion plants, would make it possible to open the device’s internal chamber and replace critical components; this capability is essential for the newly proposed heat-draining mechanism.

    The new approach is detailed in a paper in the journal Fusion Engineering and Design, authored by Adam Kuang, a graduate student from that class, along with 14 other MIT students, engineers from Mitsubishi Electric Research Laboratories and Commonwealth Fusion Systems, and Professor Dennis Whyte, director of MIT’s Plasma Science and Fusion Center, who taught the class.

    In essence, Whyte explains, the shedding of heat from inside a fusion plant can be compared to the exhaust system in a car. In the new design, the “exhaust pipe” is much longer and wider than is possible in any of today’s fusion designs, making it much more effective at shedding the unwanted heat. But the engineering needed to make that possible required a great deal of complex analysis and the evaluation of many dozens of possible design alternatives.

    Taming fusion plasma

    Fusion harnesses the reaction that powers the sun itself, holding the promise of eventually producing clean, abundant electricity using a fuel derived from seawater — deuterium, a heavy form of hydrogen, and lithium — so the fuel supply is essentially limitless. But decades of research toward such power-producing plants have still not led to a device that produces as much power as it consumes, much less one that actually produces a net energy output.

    Earlier this year, however, MIT’s proposal for a new kind of fusion plant — along with several other innovative designs being explored by others — finally made the goal of practical fusion power seem within reach.

    MIT SPARC fusion reactor tokamak

    But several design challenges remain to be solved, including an effective way of shedding the internal heat from the super-hot, electrically charged material, called plasma, confined inside the device.

    Most of the energy produced inside a fusion reactor is emitted in the form of neutrons, which heat a material surrounding the fusing plasma, called a blanket. In a power-producing plant, that heated blanket would in turn be used to drive a generating turbine. But about 20 percent of the energy is produced in the form of heat in the plasma itself, which somehow must be dissipated to prevent it from melting the materials that form the chamber.
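    The roughly 80/20 split described above follows directly from the kinematics of the D-T reaction: the neutron carries about 14.1 MeV and the charged alpha particle about 3.5 MeV, and only the alpha’s share stays in the plasma. A minimal sketch (standard reaction energies, not values from the paper):

    ```python
    # Energy partition in D + T -> He-4 (alpha) + neutron.
    # The neutron heats the blanket; the alpha heats the plasma itself.
    E_NEUTRON_MEV = 14.1
    E_ALPHA_MEV = 3.5

    total = E_NEUTRON_MEV + E_ALPHA_MEV
    neutron_frac = E_NEUTRON_MEV / total
    alpha_frac = E_ALPHA_MEV / total

    print(f"to blanket (neutrons): {neutron_frac:.0%}")  # ~80%
    print(f"to plasma (alphas):    {alpha_frac:.0%}")    # ~20%
    ```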

    No material is strong enough to withstand the heat of the plasma inside a fusion device, which reaches temperatures of millions of degrees, so the plasma is held in place by powerful magnets that prevent it from ever coming into direct contact with the interior walls of the donut-shaped fusion chamber. In typical fusion designs, a separate set of magnets is used to create a sort of side chamber to drain off excess heat, but these so-called divertors are insufficient for the high heat in the new, compact plant.

    One of the desirable features of the ARC design is that it would produce power in a much smaller device than would be required from a conventional reactor of the same output. But that means more power confined in a smaller space, and thus more heat to get rid of.
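    Why a smaller device of the same output runs hotter can be seen from simple geometric scaling: at fixed total power, volumetric power density grows as the cube of the size reduction, and the heat load per unit of wall area grows as the square. A minimal illustration (generic scaling, not ARC-specific numbers):

    ```python
    def power_density_ratio(scale):
        """Volumetric power density ratio at fixed total output
        when every linear dimension shrinks by `scale` (volume ~ scale**3)."""
        return 1 / scale**3

    def wall_load_ratio(scale):
        """Wall heat-load ratio at fixed total output (area ~ scale**2)."""
        return 1 / scale**2

    # Halving the linear size of the device:
    print(power_density_ratio(0.5))  # 8.0x the power density
    print(wall_load_ratio(0.5))      # 4.0x the heat load per unit wall area
    ```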

    “If we didn’t do anything about the heat exhaust, the mechanism would tear itself apart,” says Kuang, who is the lead author of the paper, describing the challenge the team addressed — and ultimately solved.

    Inside job

    In conventional fusion reactor designs, the secondary magnetic coils that create the divertor lie outside the primary ones, because there is simply no way to put these coils inside the solid primary coils. That means the secondary coils need to be large and powerful, to make their fields penetrate the chamber, and as a result they are not very precise in how they control the plasma shape.

    But the new MIT-originated design, known as ARC (for advanced, robust, and compact), features magnets built in sections so they can be removed for service. This makes it possible to access the entire interior and to place the secondary magnets inside the main coils rather than outside. With this new arrangement, “just by moving them closer [to the plasma] they can be significantly reduced in size,” says Kuang.

    In the one-semester graduate class 22.63 (Principles of Fusion Engineering), students were divided into teams to address different aspects of the heat-rejection challenge. Each team began with a thorough literature search to see what concepts had already been tried, then brainstormed multiple concepts and gradually eliminated those that didn’t pan out. The promising ones were subjected to detailed calculations and simulations based, in part, on data from decades of experiments on fusion devices such as MIT’s Alcator C-Mod, which was retired two years ago. C-Mod scientist Brian LaBombard also shared insights on new kinds of divertors, and two engineers from Mitsubishi worked with the team as well. Several of the students continued working on the project after the class ended, ultimately leading to the solution described in this new paper. The simulations demonstrated the effectiveness of the design they settled on.

    “It was really exciting, what we discovered,” Whyte says. The result is divertors that are longer and larger, and that keep the plasma more precisely controlled. As a result, they can handle the expected intense heat loads.

    “You want to make the ‘exhaust pipe’ as large as possible,” Whyte says, explaining that the placement of the secondary magnets inside the primary ones makes that possible. “It’s really a revolution for a power plant design,” he says. Not only do the high-temperature superconductors used in the ARC design’s magnets enable a compact, high-powered power plant, he says, “but they also provide a lot of options” for optimizing the design in different ways — including, it turns out, this new divertor design.
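    The benefit of a larger “exhaust pipe” follows from the basic relation the quote alludes to: the heat flux a divertor target must survive is the exhaust power divided by the wetted surface area, so enlarging that area directly lowers the flux. A minimal sketch with purely illustrative numbers (none of these values come from the paper; ~10 MW/m² is a commonly cited limit for actively cooled targets):

    ```python
    def divertor_heat_flux(p_exhaust_mw, wetted_area_m2):
        """Average target heat flux (MW/m^2) = exhaust power / wetted area."""
        return p_exhaust_mw / wetted_area_m2

    # Illustrative: 100 MW of exhaust power hitting different target areas.
    print(divertor_heat_flux(100, 2.0))   # 50.0 MW/m^2 -- well above material limits
    print(divertor_heat_flux(100, 12.0))  # ~8.3 MW/m^2 -- survivable with a larger "pipe"
    ```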

    Going forward, now that the basic concept has been developed, there is plenty of room for optimization, the team says, including the exact shape and placement of the secondary magnets. The researchers are continuing to refine the details of the design.

    “This is opening up new paths in thinking about divertors and heat management in a fusion device,” Whyte says.

    “All of the ARC work has been both eye-opening and stimulating of new ways of looking at tokamak fusion reactors,” says Bruce Lipschultz, a professor of physics at the University of York, in the U.K., who was not involved in this work. This latest paper, he says, “incorporates new ideas in the field with the many other significant improvements in the tokamak concept. … The ARC study of the extended leg divertor concept shows that the application to a reactor is not impossible, as others have contended.”

    Lipschultz adds that this is “very high-quality research that shows a way forward for the tokamak reactor and stimulates new research elsewhere.”

    The team included MIT graduate students Norman Cao, Alexander Creely, Cody Dennett, Jake Hecla, Brian LaBombard, Roy Tinguely, Elizabeth Tolman, H. Hoffman, Maximillian Major, Juan Ruiz Ruiz, Daniel Brunner, and Brian Sorbom, and Mitsubishi Electric Research Laboratories engineers P. Grover and C. Laughman. The work was supported by MIT’s Department of Nuclear Science and Engineering, the Department of Energy, the National Science Foundation, and Mitsubishi Electric Research Laboratories.

