Tagged: Science Alert

  • richardmitnick 1:02 pm on October 14, 2018 Permalink | Reply
    Tags: Science Alert, The World's Fastest Camera Can 'Freeze Time' Show Beams of Light in Slow Motion, University of Quebec

    From Science Alert: “The World’s Fastest Camera Can ‘Freeze Time’, Show Beams of Light in Slow Motion” 

    ScienceAlert

    From Science Alert

    14 OCT 2018
    JON CHRISTIAN

    (Adobe Stock)

    When you push the button on a laser pointer, its entire beam seems to appear instantaneously. In reality, though, the photons shoot out like water from a hose, just at a speed too fast to see.

    Too fast for the human eye to see, anyway.

    Researchers at Caltech and the University of Quebec have invented what is now the world’s fastest camera, and it takes a mind-boggling 10 trillion shots per second — enough to record footage of a pulse of light as it travels through space.

    The extraordinary camera, which the researchers describe in a paper published Monday in the journal Light: Science & Applications, builds on a technology called compressed ultrafast photography (CUP).

    Figure 1. The trillion-frame-per-second compressed ultrafast photography system. INRS

    CUP can lock down an impressive 100 billion frames per second, but by simultaneously recording a static image and performing some tricky math, the researchers were able to reconstruct the equivalent of 10 trillion frames per second.
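    For a sense of scale, a quick back-of-the-envelope check (the 10-trillion-frames-per-second figure is the only input; everything else is standard physics, not from the paper) shows how little time passes between frames and how far light travels in that sliver:

      # Rough scale check for a 10-trillion-frame-per-second camera (illustrative only).
      c = 299_792_458               # speed of light, m/s
      frames_per_second = 10e12

      frame_interval = 1 / frames_per_second          # seconds between frames
      distance_per_frame = c * frame_interval         # metres light covers per frame

      print(f"Frame interval: {frame_interval * 1e12:.2f} ps")               # 0.10 ps
      print(f"Light travels ~{distance_per_frame * 1e3:.3f} mm per frame")   # ~0.030 mm

    In other words, between one frame and the next a light pulse moves only a few hundredths of a millimetre, which is why footage of a pulse in flight becomes possible at this rate.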

    They call the new technique T-CUP, and while they don’t say what the “T” stands for, our money is on “trillion.”

    Ludicrous Speed

    The camera more than doubles the speed record set in 2015 by a camera that took 4.4 trillion shots per second. Its inventors hope it’ll be useful in biomedical and materials research.

    But they’ve already turned their attention to smashing their newly set record.

    “It’s an achievement in itself,” said lead author Jinyang Liang in a press release, “but we already see possibilities for increasing the speed to up to one quadrillion frames per second!”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 11:31 am on September 30, 2018 Permalink | Reply
    Tags: NASA Technosignatures Workshop, Science Alert

    From Science Alert: “NASA Has Announced The Next Step in Their Hunt For Alien Life” 

    ScienceAlert

    From Science Alert

    30 SEP 2018
    MATT WILLIAMS

    (NASA/JPL-Caltech)

    We’re closer than ever before.

    NASA is targeting technosignatures in its renewed effort to detect alien civilizations.

    Congress asked NASA to re-boot its search for other civilizations a few months ago. Their first step towards that goal is the NASA Technosignatures Workshop, held in Houston from September 26th to 28th, 2018.

    If you’ve never stared out to space at night and wondered if there are other civilizations out there, well…that’s difficult to understand.

    One of humanity’s most ancient and persistent longings is to know if there are others out there. Though it may seem like a long shot, the attempt is irresistible. And NASA’s newest attempt involves technosignatures.

    What are Technosignatures?

    Technosignatures are simply evidence of technology. They’re the effects or signature of technological use. The most obvious example might be radio waves, but some experts in technosignatures reject those, because the universe is riddled with radio waves produced by natural sources.

    SETI was the original search for alien civilizations. But SETI was more or less a search for intentional radio signals sent by another civilization. This new search will be different in scope. Technosignatures are the unintentional signals that provide evidence for a technological civilization.

    Technosignatures include laser emissions, indications of massive megastructures like Dyson Spheres, or, sadly, highly-polluted atmospheres.

    An artist’s concept of a Dyson sphere, built by an advanced civilization to capture the energy of a star. Image via CapnHack, via energyphysics.wikispaces.com.

    At the Technosignatures Workshop, they also talked about detecting megacities on other planets through their heat signature, and detecting satellites orbiting other planets.

    But in each of these cases, any technosignatures would likely not jump right out at us. It will require some advanced sleuthing techniques to determine if what searchers are detecting are in fact technosignatures.

    That’s why NASA held the workshop. Presenters outlined the current state of the field in detecting technosignatures, what the most promising avenues of research are, and what investments can advance the science of technosignature detection.

    A major stated goal of the workshop is to understand how NASA can support the whole field through partnerships with both private and philanthropic partners.

    There’s precedent for partnerships in the search for technosignatures. The SETI effort was a NASA program up until 1993, when Congress reined it in.



    SETI/Allen Telescope Array situated at the Hat Creek Radio Observatory, 290 miles (470 km) northeast of San Francisco, California, USA, Altitude 986 m (3,235 ft)

    Since then, other organizations and wealthy people like Paul Allen, co-founder of Microsoft, have kept SETI going.


    SETI@home, a BOINC project originated in the Space Science Lab at UC Berkeley

    Laser SETI, the future of SETI Institute research

    But now NASA is back in the game, and their Technosignatures Workshop is their first step in a renewed effort to detect other civilizations.

    This new effort comes on the heels of major discoveries in the past few years. For a long time we didn’t know if other stars had planets in their orbits, or if our Solar System was unique. But the Kepler mission changed all that.

    NASA/Kepler Telescope

    Kepler has discovered over 2,600 exoplanets and is still going. And Kepler has only searched a tiny portion of the sky.

    (NASA/Kepler)

    With that data in hand, there’s no reason to think that exoplanets aren’t plentiful throughout the galaxy and the universe. Congress must have realized that, and decided to urge NASA to search some of the newly-discovered exoplanets for evidence of civilizations.

    Telescopes now in the design and construction phases will allow us to image exoplanets, to study their atmospheres, and potentially detect hot-spots on their surfaces.

    We may even be able to use the transit method to detect any satellites orbiting another planet. Nobody knows what we’ll find, but it’s hard not to get a little excited.
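    The transit method mentioned above relies on catching the tiny dip in starlight when something passes in front of a star. As a rough illustration (with assumed, textbook sizes for a Sun-like star and Jupiter- and Earth-sized planets, not figures from the article), the dip is small indeed:

      # Transit depth: fractional dip in starlight when a planet crosses its star (illustrative values).
      R_SUN = 696_340      # km
      R_JUPITER = 69_911   # km
      R_EARTH = 6_371      # km

      for name, radius in (("Jupiter-sized", R_JUPITER), ("Earth-sized", R_EARTH)):
          depth = (radius / R_SUN) ** 2
          print(f"{name} planet: transit depth ~{depth * 100:.3f}% of the star's light")

    A Jupiter-sized planet blocks about 1 percent of its star’s light; an Earth-sized one blocks less than a hundredth of a percent, and a satellite would block far less still — which gives a feel for how demanding these searches are.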

    There’s a lot of work to be done. Scientists will have to decide the best way to proceed. But once they get going, it promises to be a very exciting endeavour.

    And then there is

    Breakthrough Listen Project


    Lick Automated Planet Finder telescope, Mount Hamilton, CA, USA



    GBO radio telescope, Green Bank, West Virginia, USA


    CSIRO/Parkes Observatory, located 20 kilometres north of the town of Parkes, New South Wales, Australia

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 11:24 am on September 22, 2018 Permalink | Reply
    Tags: Science Alert

    From U Tokyo via ScienceAlert: “Scientists Just Created a Magnetic Field That Takes Us Closer Than Ever Before to Harnessing Nuclear Fusion” 

    From University of Tokyo

    via

    ScienceAlert

    (Zoltan Tasi/Unsplash)

    22 SEP 2018
    KRISTIN HOUSER

    They were able to control it without destroying any equipment this time.

    Inexpensive clean energy sounds like a pipe dream. Scientists have long thought that nuclear fusion, the type of reaction that powers stars like the Sun, could be one way to make it happen, but the reaction has been too difficult to maintain.

    Now, we’re closer than ever before to making it happen — physicists from the University of Tokyo (UTokyo) say they’ve produced the strongest-ever controllable magnetic field.

    “One way to produce fusion power is to confine plasma — a sea of charged particles — in a large ring called a tokamak in order to extract energy from it,” said lead researcher Shojiro Takeyama in a press release.

    ITER Tokamak in Saint-Paul-lès-Durance, which is in southern France

    September 18, 2018

    Physicists from the Institute for Solid State Physics at the University of Tokyo have generated the strongest controllable magnetic field ever produced. The field was sustained for longer than any previous field of a similar strength. This research could lead to powerful investigative tools for material scientists and may have applications in fusion power generation.

    Magnetic fields are everywhere. From particle smashers to the humble compass, our capacity to understand and control these fields crafted much of the modern world. The ability to create stronger fields advances many areas of science and engineering. UTokyo physicist Shojiro Takeyama and his team created a large sophisticated device in a purpose-built lab, capable of producing the strongest controllable magnetic field ever using a method known as electromagnetic flux compression.

    “Decades of work, dozens of iterations and a long line of researchers who came before me all contributed towards our achievement,” said Professor Takeyama. “I felt humbled when I was personally congratulated by directors of magnetic field research institutions around the world.”

    The megagauss generator just before it’s switched on. Some parts for the device are exceedingly rare and very few companies around the world are capable of producing them. Image: ©2018 Shojiro Takeyama

    Sparks fly at the moment of activation. Four million amps of current feed the megagauss generator system, hundreds of times the current of a typical lightning bolt. Image: ©2018 Shojiro Takeyama

    But what is so interesting about this particular magnetic field?

    At 1,200 teslas – not the brand of electric cars, but the unit of magnetic field strength – the generated field dwarfs almost any artificial magnetic field ever recorded; however, it’s not the strongest overall. In 2001, physicists in Russia produced a field of 2,800 teslas, but their explosive method literally blew up their equipment and the uncontrollable field could not be tamed. Lasers can also create powerful magnetic fields, but in experiments they only last a matter of nanoseconds.

    The magnetic field created by Takeyama’s team lasts thousands of times longer, around 100 microseconds, about one-thousandth of the time it takes to blink. It’s possible to create longer-lasting fields, but these are only in the region of hundreds of teslas. The goal of surpassing 1,000 teslas was not just a race for its own sake; that figure represents a significant milestone.

    Earth’s own magnetic field is 25 to 65 microteslas. The megagauss generator system creates a field of 1,200 teslas, about 20 million to 50 million times stronger. Image: ©2018 Shojiro Takeyama

    “With magnetic fields above 1,000 Teslas, you open up some interesting possibilities,” says Takeyama. “You can observe the motion of electrons outside the material environments they are normally within. So we can study them in a whole new light and explore new kinds of electronic devices. This research could also be useful to those working on fusion power generation.”

    This is an important point, as many believe fusion power is the most promising way to provide clean energy for future generations. “One way to produce fusion power is to confine plasma – a sea of charged particles – in a large ring called a tokamak in order to extract energy from it,” explains Takeyama. “This requires a strong magnetic field in the order of thousands of teslas for a duration of several microseconds. This is tantalizingly similar to what our device can produce.”


    To generate the magnetic field, the UTokyo researchers built a sophisticated device capable of electromagnetic flux-compression (EMFC), a method of magnetic field generation well-suited for indoor operations.

    They describe the work in a new paper published Monday in the Review of Scientific Instruments.

    Using the device, they were able to produce a magnetic field of 1,200 teslas — about 120,000 times as strong as a magnet that sticks to your refrigerator.
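    The “120,000 times” comparison checks out if you take a typical refrigerator magnet to be around 0.01 tesla — a common ballpark, though the article doesn’t say which value it used:

      # Sanity-check the field-strength comparisons (assumed reference values, not from the paper).
      field_generated = 1200.0   # teslas, reported by the UTokyo team
      fridge_magnet = 0.01       # teslas, rough ballpark for a refrigerator magnet
      earth_field = 50e-6        # teslas, mid-range of Earth's 25-65 microtesla field

      print(f"vs fridge magnet: ~{field_generated / fridge_magnet:,.0f}x stronger")   # ~120,000x
      print(f"vs Earth's field: ~{field_generated / earth_field:,.0f}x stronger")     # ~24,000,000x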

    Though not the strongest field ever created, the physicists were able to sustain it for 100 microseconds, thousands of times longer than previous attempts.

    They could also control the magnetic field, so it didn’t destroy their equipment like some past attempts to create powerful fields.

    As Takeyama noted in the press release, that means his team’s device can generate close to the minimum magnetic field strength and duration needed for stable nuclear fusion — and it puts us all one step closer to the unlimited clean energy we’ve been dreaming about for nearly a century.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The University of Tokyo aims to be a world-class platform for research and education, contributing to human knowledge in partnership with other leading global universities. The University of Tokyo aims to nurture global leaders with a strong sense of public responsibility and a pioneering spirit, possessing both deep specialism and broad knowledge. The University of Tokyo aims to expand the boundaries of human knowledge in partnership with society. Details about how the University is carrying out this mission can be found in the University of Tokyo Charter and the Action Plans.

     
  • richardmitnick 12:27 pm on September 6, 2018 Permalink | Reply
    Tags: For The First Time, Quantum gates, Science Alert, Scientists Have Teleported And Measured a Quantum Gate in Real Time, Teleporting a special quantum operation between two locations

    From Yale University via Science Alert: “For The First Time, Scientists Have Teleported And Measured a Quantum Gate in Real Time” 

    Yale University bloc

    From Yale University

    via

    Science Alert

    6 SEP 2018
    MIKE MCRAE

    (agsandrew/istock)

    Welcome to the future.

    Around 20 years ago, two computer scientists proposed a technique for teleporting a special quantum operation between two locations with the goal of making quantum computers more reliable.

    Now a team of researchers from Yale University has successfully turned their idea into reality, demonstrating a practical approach to making this incredibly delicate form of technology scalable.

    These physicists have developed a practical method for teleporting a quantum operation – or gate – across a distance and measuring its effect. While this feat has been done before, it’s never been done in real time. This paves the way for developing a process that can make quantum computing modular, and therefore more reliable.

    Unlike regular computers, which perform their calculations with states of reality called bits (on or off, 1 or 0), quantum computers operate with qubits – a strange state of reality we can’t wrap our heads around, but which taps into some incredibly useful mathematics.

    In classical computers, bits interact with operations called logic gates. Like the world’s smallest gladiatorial arena, two bits enter, one bit leaves. Gates come in different forms, selecting a winner depending on their particular rule.

    These bits, channelled through gates, form the basis of just about any calculation you can think of, as far as classical computers are concerned.

    But qubits offer an alternative unit to base algorithms on. More than just a 1 or a 0, they also provide a special blend of the two states. It’s like a coin held in a hand before you see whether it’s heads or tails.

    In conjunction with a quantum version of a logic gate, qubits can do what classical bits can’t. There’s just one problem – that indeterminate state of 1 and 0 turns into a definite 1 or 0 when it becomes part of a measured system.

    Worse still, it doesn’t take much to collapse the qubit’s maybe into a definitely, which means a quantum computer can become an expensive paperweight if those delicate components aren’t adequately hidden from their noisy environment.

    Right now, quantum computer engineers are super excited by devices that can wrangle just over 70 qubits – which is impressive, but quantum computers will really only earn their keep as they stock up on hundreds, if not thousands of qubits all hovering on the brink of reality at the same time.

    To make this kind of scaling a reality, scientists need additional tricks. One option would be to make the technology as modular as possible, networking smaller quantum systems into a bigger one in order to offset errors.

    But for that to work, quantum gates – those special operations that deal with the heavy lifting of qubits – also need to be shared.

    Teleporting information, such as a quantum gate, sounds pretty sci-fi. But we’re obviously not talking about Star Trek transport systems here.

    In reality it simply refers to the fact that objects can have their history entangled so that when one is measured, the other immediately collapses into a related state, no matter how far away it is.

    This has technically been demonstrated experimentally already [Physical Review Letters], but, until now, the process hasn’t been reliably performed and measured in real time, which is crucial if it’s to become part of a practical computer.

    “Our work is the first time that this protocol has been demonstrated where the classical communication occurs in real-time, allowing us to implement a ‘deterministic’ operation that performs the desired operation every time,” says lead author Kevin Chou.

    The researchers used qubits in sapphire chips inside a cutting-edge setup to teleport a type of quantum operation called a controlled-NOT gate. Importantly, with error-correctable coding applied, the process was 79 percent reliable.
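    The controlled-NOT gate itself is a well-defined mathematical object even without the teleportation machinery. As a minimal sketch (plain matrix arithmetic with NumPy; nothing here is specific to the Yale hardware), it flips a target qubit only when the control qubit is 1:

      import numpy as np

      # Controlled-NOT (CNOT) gate on two qubits, in the basis |00>, |01>, |10>, |11>.
      CNOT = np.array([[1, 0, 0, 0],
                       [0, 1, 0, 0],
                       [0, 0, 0, 1],
                       [0, 0, 1, 0]], dtype=complex)

      ket = lambda bits: np.eye(4)[int(bits, 2)].astype(complex)   # e.g. "10" -> |10>

      print(CNOT @ ket("10"))   # control is 1, so the target flips: |10> -> |11>
      print(CNOT @ ket("00"))   # control is 0, so nothing changes:  |00> -> |00>

    The experiment’s achievement was performing this operation between qubits in separate modules via entanglement and real-time classical communication, rather than by direct interaction.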

    “It is a milestone toward quantum information processing using error-correctable qubits,” says principal investigator Robert Schoelkopf.

    It’s a baby step on the road to making quantum modules, but this proof-of-concept shows modules could still be the way to go in growing quantum computers to the scale we need.

    This research was published in Nature.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Yale University Campus

    Yale University comprises three major academic components: Yale College (the undergraduate program), the Graduate School of Arts and Sciences, and the professional schools. In addition, Yale encompasses a wide array of centers and programs, libraries, museums, and administrative support offices. Approximately 11,250 students attend Yale.

     
  • richardmitnick 12:04 pm on September 6, 2018 Permalink | Reply
    Tags: Jupiter's magnetic field, Science Alert

    From Science Alert: “Scientists Mapped Jupiter’s Magnetic Field, And It’s Unlike Anything We’ve Ever Seen” 

    ScienceAlert

    From Science Alert

    6 SEP 2018
    MICHELLE STARR

    (Moore et al./Nature)

    The first map of Jupiter’s magnetic field at a range of depths is in, and it solidifies something we already knew: that it’s really, really, really weird. Aside from that, though, it’s unlike anything else planetary scientists have ever seen.

    We already knew that the gas giant’s external magnetic field was an odd duck. For a start, it’s incredibly strong. Jupiter’s diameter is over 11 times that of Earth, but its magnetic field is a massive 20,000 times stronger.

    It’s also absolutely huge, and unrivalled in complexity – Earth’s magnetic field is strong, but some of the structures seen in Jupiter’s have no terrestrial counterpart. It’s thought that this complexity may have something to do with Jupiter’s rapid rotation and large liquid metallic hydrogen interior.

    But now Juno, orbiting Jupiter’s poles, is allowing unprecedented access to the dynamics of this strange magnetic field, taking much closer observations than have ever been possible.

    Scientists from the US and Denmark have used data from eight Juno orbits to map the magnetic field in unprecedented detail at depths of 10,000 kilometres (6,214 miles), and found that it’s much stranger than they were expecting.

    Earth’s magnetic field is predominantly dipolar, as if it had a bar magnet running through the centre of the planet, with the poles of the magnet at the planet’s poles, emerging from the South pole and re-entering at the North. There are also non-dipolar components, evenly spread out throughout the hemispheres.
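    To make “predominantly dipolar” concrete, the field of an ideal point dipole can be written down and evaluated directly. This sketch uses a roughly Earth-like dipole moment purely as an assumed, illustrative number — it is not a fitted planetary model:

      import numpy as np

      MU0 = 4 * np.pi * 1e-7   # vacuum permeability, T*m/A

      def dipole_field(m, r):
          """Magnetic field (teslas) of an ideal point dipole with moment m (A*m^2) at position r (metres)."""
          r = np.asarray(r, dtype=float)
          rnorm = np.linalg.norm(r)
          rhat = r / rnorm
          return MU0 / (4 * np.pi) * (3 * np.dot(m, rhat) * rhat - m) / rnorm**3

      # Roughly Earth-like moment (~8e22 A*m^2), evaluated one Earth radius out over the equator:
      m = np.array([0.0, 0.0, 8.0e22])
      print(dipole_field(m, [6.371e6, 0.0, 0.0]))   # a few tens of microteslas, pointing along -z

    Earth’s field is well described by adding modest corrections to this simple pattern; the new Juno maps show that Jupiter’s is not.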

    If you could see it, it would look a little something like this.

    (Geek3/Wikimedia Commons)

    But that’s not how Jupiter’s turned out.

    Instead, the field emerges from a broad section of the northern hemisphere and re-enters around the south pole and through a highly concentrated region just south of the equator, which the researchers are calling the Great Blue Spot. Elsewhere, the field is much weaker.

    Take a look for yourself [above].

    “Before the Juno mission, our best maps of Jupiter’s field resembled Earth’s field,” planetary scientist Kimberly Moore of Harvard University told Newsweek.

    “The main surprise was that Jupiter’s field is so simple in one hemisphere and so complicated in the other. None of the existing models predicted a field like that.”

    In another strange discovery, the researchers found that the non-dipolar part of the magnetic field is almost entirely concentrated in the northern hemisphere. The whole thing is deeply lopsided and utterly unique.

    This indicates that something unknown is occurring in Jupiter’s interior.

    Magnetic fields are generated by conductive liquids moving inside a planet; combined with the planet’s rotation, that motion sustains the field. This is called a planetary dynamo. In Earth, the dynamo operates within a thick, uniform shell.

    The researchers believe their finding indicates that Jupiter’s does not. One model, they proposed, is that Jupiter’s core is not a solid, tiny ball of ice and rock, but a slush of rock and ice fragments partially dissolved in the liquid metallic hydrogen.

    This could create layers, the dynamics of which generate an asymmetrical magnetic field.

    Another explanation could be helium rain, which could destabilise the field, although, the researchers noted, this would be unlikely to account for the hemispheric asymmetry.

    In total, Juno is planned to make 34 orbits of Jupiter. The team is planning to use future observations to try and solve the mystery.

    Their research has been published in the journal Nature.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 10:18 am on September 5, 2018 Permalink | Reply
    Tags: Black body radiation, Planck's law of radiative heat transfer has held up well under a century of intense testing but a new analysis has found it fails on the smallest of scales, Science Alert, University of Michigan, William & Mary

    From University of Michigan and William & Mary via Science Alert: “A Fundamental Physics Law Just Failed a Test Using Nanoscale Objects” 

    U Michigan bloc

    University of Michigan


    William & Mary

    via

    ScienceAlert

    From Science Alert

    (Xanya69/istock)

    5 SEP 2018
    MIKE MCRAE

    Planck’s law of radiative heat transfer has held up well under a century of intense testing, but a new analysis has found it fails on the smallest of scales.

    Exactly what this means isn’t all that clear yet, but where laws fail, new discoveries can follow. Such a find wouldn’t just affect physics on an atomic scale – it could impact everything from climate models to our understanding of planetary formation.

    The foundational law of quantum physics was recently put to the test by researchers from William & Mary in Virginia and the University of Michigan, who were curious about whether the age-old rule could describe the way heat radiation was emitted by nanoscale objects.

    Not only does the law fail, the experimental result is 100 times greater than the predicted figure, suggesting nanoscale objects can emit and absorb heat with far greater efficiency than current models can explain.

    “That’s the thing with physics,” says William & Mary physicist Mumtaz Qazilbash.

    “It’s important to experimentally measure something, but also important to actually understand what is going on.”

    Planck is one of the big names in physics. While it’d be misleading to attribute the birth of quantum mechanics to a single individual, his work played a key role in getting the ball rolling.

    Humans have known since ancient times that hot things glow with light. We’ve also understood for quite a while that there’s a relationship between the colour of that light and its temperature.

    To study this in detail, physicists in the 19th century would measure the colour of light inside a black, heated box, watching through a tiny hole. This ‘black body radiation’ provided a reasonably precise measure of that relationship.

    Coming up with simple formulae to describe the wavelengths of colour and their temperatures proved to be rather challenging, and so Planck came at it from a slightly different angle.

    His approach was to treat the way light was absorbed and emitted like a pendulum’s swing, with discrete quantities of energy being soaked up and spat out. Not that he really thought this was the case – it was just a convenient way to model light.

    As strange as it seemed at first, the model worked perfectly. This ‘quantity’ of energy approach generated decades of debate over the nature of reality, and has come to form the underpinnings of physics as we know it.

    Planck’s law of radiative heat transfer informs a theory describing a maximum frequency at which heat energy can be emitted from an object at a given temperature.
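    For reference, the black-body formula at the heart of this discussion can be written out directly — this is textbook physics included only for context, not code from the study:

      import numpy as np

      H = 6.626e-34    # Planck constant, J*s
      C = 2.998e8      # speed of light, m/s
      KB = 1.381e-23   # Boltzmann constant, J/K

      def planck_spectral_radiance(wavelength_m, temperature_k):
          """Black-body spectral radiance B(lambda, T), in W per steradian per cubic metre (Planck's law)."""
          a = 2.0 * H * C**2 / wavelength_m**5
          b = np.expm1(H * C / (wavelength_m * KB * temperature_k))
          return a / b

      def peak_wavelength_m(temperature_k):
          """Wien's displacement law: the wavelength where the Planck curve peaks."""
          return 2.898e-3 / temperature_k

      print(planck_spectral_radiance(10e-6, 300))   # radiance of a 300 K body at 10 microns, ~1e7 in SI units
      print(peak_wavelength_m(300))                 # ~9.7e-6 m: a room-temperature object peaks in the mid-infrared

    That peak wavelength of roughly 10 microns is also why the experiment described below had to work with objects around 10 microns or smaller.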

    This works extremely well for everyday-sized objects separated by everyday distances. But what if we push those objects together, so the space between them isn’t quite a single wavelength of the light being emitted? What happens to that ‘pendulum swing’?

    Physicists well versed in the dynamics of electromagnetism already know weird things happen here in this area, known as the ‘near field’ region.

    For one thing, the relationship between the electrical and magnetic aspects of the electromagnetic field becomes more complex.

    Just how this might affect the way heated objects interact has already been the focus of previous research, which has established some big differences in how heat moves in the near field as compared with the far field observed by Planck.

    But that’s just if the gap is confined to a distance smaller than the wavelength of emitted radiation. What about the size of the objects themselves?

    The researchers had quite a challenge ahead of them. They had to engineer objects smaller than about 10 microns in size – the approximate length of a wave of infrared light.

    They settled on two membranes of silicon nitride a mere half micron thick, separated by a distance that put them well into the far field.

    Heating one and measuring the second allowed them to test Planck’s law with a fair degree of precision.

    “Planck’s radiation law says if you apply the ideas that he formulated to two objects, then you should get a defined rate of energy transfer between the two,” says Qazilbash.

    “Well, what we have observed experimentally is that rate is actually 100 times higher than Planck’s law predicts if the objects are very, very small.”

    Qazilbash likens it to the plucking of a guitar string at different places along its length. “If you pluck it in those places, it’s going to resonate at certain wavelengths more efficiently.”

    The analogy is a useful way to visualise the phenomenon, but understanding the details of the physics behind the discovery could have some big impacts. Not just in nanotechnology, but on a far bigger scale.

    This hyper-efficient rate of energy transfer could feasibly change how we understand heat transfer in the atmosphere, or in a cooling body the size of a planet. The extent of this difference is still a mystery, but one with some potentially profound implications.

    “Wherever you have radiation playing an important role in physics and science, that’s where this discovery is important,” says Qazilbash.

    This research was published in Nature.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 11:31 am on September 4, 2018 Permalink | Reply
    Tags: Aftershocks can often be as horrifying as the main event, Science Alert, This New AI Tool Could Solve a Deadly Earthquake Problem We Currently Can't Fix

    From Harvard University via Science Alert: “This New AI Tool Could Solve a Deadly Earthquake Problem We Currently Can’t Fix” 

    Harvard University
    From Harvard University

    via

    Science Alert

    4 SEP 2018
    DAVID NIELD

    (mehmetakgu/iStock)

    It could literally save lives.

    The aftershocks of a devastating earthquake can often be as horrifying as the main event. Now scientists have developed a system for predicting where such post-quake tremors could take place, and they’ve used an ingenious application of artificial intelligence (AI) to make this happen.

    Knowing more about what’s coming next can be a matter of life or death for communities reeling from a large quake. The aftershocks can often cause further injuries and fatalities, damage buildings, and complicate rescue efforts.

    A team led by researchers from Harvard University has trained AI to crunch huge amounts of sensor data and apply deep learning to make more accurate predictions.

    The researchers behind the new system say it’s not ready to be deployed yet, but is already more reliable at pinpointing aftershocks than current prediction models.

    In the years ahead, it could become a vital part of the prediction systems used by seismologists.

    “There are three things you want to know about earthquakes – you want to know when they are going to occur, how big they’re going to be and where they’re going to be,” says one of the team, Brendan Meade from Harvard University in Massachusetts.

    “Prior to this work we had empirical laws for when they would occur and how big they were going to be, and now we’re working the third leg, where they might occur.”

    The idea to use deep learning to tackle this came to Meade when he was on a sabbatical at Google – a company where AI is being deployed in many different areas of computing and science.

    Machine learning is just one facet of AI, and is exactly what it sounds like: machines learning from sets of data, so they can cope with new problems that they haven’t been specifically programmed to tackle.

    Deep learning is a more advanced type of machine learning, applying what are called neural networks to try and mimic the thinking processes of the brain.

    In simple terms it means the AI can see more possible results at once, and weigh up a more complex map of factors and considerations, sort-of like neurons in a brain would.

    It’s perfect for earthquakes, with so many variables to consider – from the strength of the shock to the position of the tectonic plates to the type of ground involved. Deep learning could potentially tease out patterns that human analysts could never spot.

    To put this to use with aftershocks, Meade and his colleagues tapped into a database of over 131,000 pairs of earthquake and aftershock readings, taken from 199 previous earthquakes.

    Having let the AI engine chew through those, they then got it to predict the activity of more than 30,000 similar pairs, suggesting the likelihood of aftershocks hitting locations based on a grid of 5 square kilometre (1.9 square mile) units.

    The results were ahead of the Coulomb failure stress change model currently in use. If 1 represents perfect accuracy, and 0.5 represents flipping a coin, the Coulomb model scored 0.583, and the new AI system managed 0.849.

    “I’m very excited for the potential for machine learning going forward with these kind of problems – it’s a very important problem to go after,” says one of the researchers, Phoebe DeVries from Harvard University.

    “Aftershock forecasting in particular is a challenge that’s well-suited to machine learning because there are so many physical phenomena that could influence aftershock behaviour and machine learning is extremely good at teasing out those relationships.”

    A key ingredient, the researchers say, was the addition of the von Mises yield criterion into the AI’s algorithms – a calculation that can predict when materials will break under stress. Previously used in fields like metallurgy, the calculation hasn’t been extensively used in modelling earthquakes before now.
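    The von Mises criterion itself is standard continuum mechanics. As a minimal sketch (not the team’s code), it collapses a full 3x3 stress tensor into a single scalar — exactly the kind of quantity that can be fed to a neural network as an input feature:

      import numpy as np

      def von_mises_stress(sigma):
          """Von Mises equivalent stress of a symmetric 3x3 stress tensor (same units as the input)."""
          sigma = np.asarray(sigma, dtype=float)
          deviatoric = sigma - np.trace(sigma) / 3.0 * np.eye(3)    # remove the hydrostatic part
          return np.sqrt(1.5 * np.sum(deviatoric * deviatoric))     # sqrt(3 * J2)

      # Example: uniaxial stress of 1 MPa along one axis gives a von Mises stress of 1 MPa.
      print(von_mises_stress(np.diag([1e6, 0.0, 0.0])))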

    There’s still a way to go here – the researchers point out their current AI models are only designed to deal with one type of aftershock trigger, and simple fault lines: it’s not yet a system that can be applied to any kind of quake around the world.

    What’s more, it’s too slow right now to predict the deadly aftershocks that can happen a day or two after the first earthquake.

    However, the good news is that neural networks are designed to continually get better over time, which means with more data and more learning cycles, the system should steadily improve.

    “I think we’ve really just scratched the surface of what could be done with aftershock forecasting… and that’s really exciting,” says DeVries.

    The research has been published in Nature.

    See the full article here.

    Earthquake Alert


    Earthquake Alert

    Earthquake Network is a research project which aims at developing and maintaining a crowdsourced smartphone-based earthquake warning system at a global level. Smartphones made available by the population are used to detect earthquake waves using their on-board accelerometers. When an earthquake is detected, a warning is issued to alert people not yet reached by the damaging waves.

    The project started on January 1, 2013 with the release of the homonymous Android application Earthquake Network. The author of the research project and developer of the smartphone application is Francesco Finazzi of the University of Bergamo, Italy.

    Get the app in the Google Play store.

    Smartphone network spatial distribution (green and red dots) on December 4, 2015

    Meet The Quake-Catcher Network

    QCN bloc

    Quake-Catcher Network

    The Quake-Catcher Network is a collaborative initiative for developing the world’s largest, low-cost strong-motion seismic network by utilizing sensors in and attached to internet-connected computers. With your help, the Quake-Catcher Network can provide better understanding of earthquakes, give early warning to schools, emergency response systems, and others. The Quake-Catcher Network also provides educational software designed to help teach about earthquakes and earthquake hazards.

    After almost eight years at Stanford and a year at Caltech, the QCN project is moving to the University of Southern California Dept. of Earth Sciences. QCN will be sponsored by the Incorporated Research Institutions for Seismology (IRIS) and the Southern California Earthquake Center (SCEC).

    The Quake-Catcher Network is a distributed computing network that links volunteer hosted computers into a real-time motion sensing network. QCN is one of many scientific computing projects that runs on the world-renowned distributed computing platform Berkeley Open Infrastructure for Network Computing (BOINC).

    The volunteer computers monitor vibrational sensors called MEMS accelerometers, and digitally transmit “triggers” to QCN’s servers whenever strong new motions are observed. QCN’s servers sift through these signals, and determine which ones represent earthquakes, and which ones represent cultural noise (like doors slamming, or trucks driving by).

    There are two categories of sensors used by QCN: 1) internal mobile device sensors, and 2) external USB sensors.

    Mobile Devices: MEMS sensors are often included in laptops, games, cell phones, and other electronic devices for hardware protection, navigation, and game control. When these devices are still and connected to QCN, QCN software monitors the internal accelerometer for strong new shaking. Unfortunately, these devices are rarely secured to the floor, so they may bounce around when a large earthquake occurs. While this is less than ideal for characterizing the regional ground shaking, many such sensors can still provide useful information about earthquake locations and magnitudes.

    USB Sensors: MEMS sensors can be mounted to the floor and connected to a desktop computer via a USB cable. These sensors have several advantages over mobile device sensors. 1) By mounting them to the floor, they measure more reliable shaking than mobile devices. 2) These sensors typically have lower noise and better resolution of 3D motion. 3) Desktops are often left on and do not move. 4) The USB sensor is physically removed from the game, phone, or laptop, so human interaction with the device doesn’t reduce the sensors’ performance. 5) USB sensors can be aligned to North, so we know what direction the horizontal “X” and “Y” axes correspond to.

    If you are a science teacher at a K-12 school, please apply for a free USB sensor and accompanying QCN software. QCN has been able to purchase sensors to donate to schools in need. If you are interested in donating to the program or requesting a sensor, click here.

    BOINC, the Berkeley Open Infrastructure for Network Computing, was developed at UC Berkeley and is a leader in the fields of distributed computing, grid computing and citizen cyberscience.

    Earthquake safety is a responsibility shared by billions worldwide. The Quake-Catcher Network (QCN) provides software so that individuals can join together to improve earthquake monitoring, earthquake awareness, and the science of earthquakes. The Quake-Catcher Network links existing networked laptops and desktops in hopes of forming the world’s largest strong-motion seismic network.

    Below, the QCN Quake-Catcher Network map

    ShakeAlert: An Earthquake Early Warning System for the West Coast of the United States

    The U.S. Geological Survey (USGS), along with a coalition of state and university partners, is developing and testing an earthquake early warning (EEW) system called ShakeAlert for the west coast of the United States. Long-term funding must be secured before the system can begin sending general public notifications; however, some limited pilot projects are active and more are being developed. The USGS has set the goal of beginning limited public notifications in 2018.

    Watch a video describing how ShakeAlert works in English or Spanish.

    The primary project partners include:

    United States Geological Survey
    California Governor’s Office of Emergency Services (CalOES)
    California Geological Survey
    California Institute of Technology
    University of California Berkeley
    University of Washington
    University of Oregon
    Gordon and Betty Moore Foundation

    The Earthquake Threat

    Earthquakes pose a national challenge because more than 143 million Americans live in areas of significant seismic risk across 39 states. Most of our Nation’s earthquake risk is concentrated on the West Coast of the United States. The Federal Emergency Management Agency (FEMA) has estimated the average annualized loss from earthquakes, nationwide, to be $5.3 billion, with 77 percent of that figure ($4.1 billion) coming from California, Washington, and Oregon, and 66 percent ($3.5 billion) from California alone. In the next 30 years, California has a 99.7 percent chance of a magnitude 6.7 or larger earthquake and the Pacific Northwest has a 10 percent chance of a magnitude 8 to 9 megathrust earthquake on the Cascadia subduction zone.

    Part of the Solution

    Today, the technology exists to detect earthquakes so quickly that an alert can reach some areas before strong shaking arrives. The purpose of the ShakeAlert system is to identify and characterize an earthquake a few seconds after it begins, calculate the likely intensity of ground shaking that will result, and deliver warnings to people and infrastructure in harm’s way. This can be done by detecting the first energy to radiate from an earthquake, the P-wave energy, which rarely causes damage. Using P-wave information, we first estimate the location and the magnitude of the earthquake. Then, the anticipated ground shaking across the region to be affected is estimated and a warning is provided to local populations. The method can provide warning before the S-wave arrives, bringing the strong shaking that usually causes most of the damage.
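    As a rough illustration of why that P-wave head start matters — using assumed average crustal wave speeds, not USGS figures — the warning time grows with distance from the epicentre:

      # Approximate early-warning time: the gap between P-wave and S-wave arrival.
      VP = 6.0   # P-wave speed, km/s (assumed typical crustal value)
      VS = 3.5   # S-wave speed, km/s (assumed typical crustal value)

      for distance_km in (20, 50, 100):
          warning_s = distance_km / VS - distance_km / VP
          print(f"{distance_km} km from the epicentre: ~{warning_s:.1f} s of warning")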

    Studies of earthquake early warning methods in California have shown that the warning time would range from a few seconds to a few tens of seconds. ShakeAlert can give enough time to slow trains and taxiing planes, to prevent cars from entering bridges and tunnels, to move away from dangerous machines or chemicals in work environments and to take cover under a desk, or to automatically shut down and isolate industrial systems. Taking such actions before shaking starts can reduce damage and casualties during an earthquake. It can also prevent cascading failures in the aftermath of an event. For example, isolating utilities before shaking starts can reduce the number of fire initiations.

    System Goal

    The USGS will issue public warnings of potentially damaging earthquakes and provide warning parameter data to government agencies and private users on a region-by-region basis, as soon as the ShakeAlert system, its products, and its parametric data meet minimum quality and reliability standards in those geographic regions. The USGS has set the goal of beginning limited public notifications in 2018. Product availability will expand geographically via ANSS regional seismic networks, such that ShakeAlert products and warnings become available for all regions with dense seismic instrumentation.

    Current Status

    The West Coast ShakeAlert system is being developed by expanding and upgrading the infrastructure of regional seismic networks that are part of the Advanced National Seismic System (ANSS): the California Integrated Seismic Network (CISN), which is made up of the Southern California Seismic Network (SCSN) and the Northern California Seismic System (NCSS), and the Pacific Northwest Seismic Network (PNSN). This enables the USGS and ANSS to leverage their substantial investment in sensor networks, data telemetry systems, data processing centers, and software for earthquake monitoring activities residing in these network centers. The ShakeAlert system has been sending live alerts to “beta” users in California since January of 2012 and in the Pacific Northwest since February of 2015.

    In February of 2016 the USGS, along with its partners, rolled-out the next-generation ShakeAlert early warning test system in California joined by Oregon and Washington in April 2017. This West Coast-wide “production prototype” has been designed for redundant, reliable operations. The system includes geographically distributed servers, and allows for automatic fail-over if connection is lost.

    This next-generation system will not yet support public warnings but does allow selected early adopters to develop and deploy pilot implementations that take protective actions triggered by the ShakeAlert notifications in areas with sufficient sensor coverage.

    Authorities

    The USGS will develop and operate the ShakeAlert system, and issue public notifications under collaborative authorities with FEMA, as part of the National Earthquake Hazard Reduction Program, as enacted by the Earthquake Hazards Reduction Act of 1977, 42 U.S.C. §§ 7704 SEC. 2.

    For More Information

    Robert de Groot, ShakeAlert National Coordinator for Communication, Education, and Outreach
    rdegroot@usgs.gov
    626-583-7225

    Learn more about EEW Research

    ShakeAlert Fact Sheet

    ShakeAlert Implementation Plan


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Harvard University campus
    Harvard is the oldest institution of higher education in the United States, established in 1636 by vote of the Great and General Court of the Massachusetts Bay Colony. It was named after the College’s first benefactor, the young minister John Harvard of Charlestown, who upon his death in 1638 left his library and half his estate to the institution. A statue of John Harvard stands today in front of University Hall in Harvard Yard, and is perhaps the University’s best known landmark.

    Harvard University has 12 degree-granting Schools in addition to the Radcliffe Institute for Advanced Study. The University has grown from nine students with a single master to an enrollment of more than 20,000 degree candidates including undergraduate, graduate, and professional students. There are more than 360,000 living alumni in the U.S. and over 190 other countries.

     
  • richardmitnick 12:37 pm on August 31, 2018 Permalink | Reply
    Tags: A particle called a beauty meson was breaking down in ways that just didn't line up with predictions, Anomalies in The Large Hadron Collider's Data Are Still Stubbornly Pointing to New Physics, Science Alert

    From CERN via Science Alert: “Anomalies in The Large Hadron Collider’s Data Are Still Stubbornly Pointing to New Physics” 

    Cern New Bloc

    Cern New Particle Event

    CERN New Masthead

    From CERN

    via

    Science Alert

    31 AUG 2018
    MIKE MCRAE

    (CERN) [CMS?]

    Past experiments using CERN’s super-sized particle-smasher, the Large Hadron Collider (LHC), hinted at something unexpected. A particle called a beauty meson was breaking down in ways that just didn’t line up with predictions.

    That means one of two things – our predictions are wrong, or the numbers are out. And a new approach makes it less likely that the observations are a mere coincidence – it’s nearly enough for scientists to start getting excited.

    (CERN / ALICE event)

    A small group of physicists took the collider’s data on beauty meson (or b meson for short) disintegration, and investigated what might happen if they swapped one assumption regarding its decay for another that assumed interactions were still occurring after they transformed.

    The results were more than a little surprising. The alternative approach doubles down on the take that something strange really is going on.

    In physics, anomalies are usually viewed as good things. Fantastic things. Unexpected numbers could be the window to a whole new way of seeing physics.

    Physicists are quite a conservative bunch. You have to be when the fundamental laws of the Universe are at stake.

    So when experimental results don’t quite match up with the theory, it’s first presumed to be a random blip in the statistical chaos of a complicated test. If a follow-up experiment shows the same thing, it’s still presumed to be ‘one of those things’.

    But after enough experiments, sufficient data can be collected to compare the chances of errors with the likelihood that something truly interesting is going on.

    If an unexpected result differs from the predicted outcome by at least three standard deviations it’s called a 3 sigma [σ], and physicists are allowed to look at the results while nodding enthusiastically with their eyebrows raised. It becomes an observation.

    To really attract attention, the anomaly should persist when there’s enough data to push that difference to five standard deviations. A 5σ event is cause to break out the champagne.
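    Those sigma thresholds correspond to the probability that a result at least this extreme arises by chance. A quick conversion using the one-sided tail of a normal distribution — the usual particle-physics convention — shows why 5σ is the champagne line:

      from scipy.stats import norm

      # One-sided tail probability: the chance of a random fluctuation at least this many sigma.
      for sigma in (3, 5, 6.1):
          p = norm.sf(sigma)
          print(f"{sigma} sigma -> p ~ {p:.1e}, roughly 1 in {1 / p:,.0f}")

    At 3σ the odds of a fluke are about 1 in 700; at 5σ they are about 1 in 3.5 million.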

    Over the years, the LHC has been used to create particles called mesons, with the purpose of watching what happens in the moments after they’re born.

    Mesons are a type of hadron, somewhat like the proton. Only instead of consisting of three quarks in a stable formation under strong interactions, they’re made of only two – a quark and an antiquark.

    Even the most stable of mesons fall apart after hundredths of a second. The framework we use to describe the construction and decay of particles – the Standard Model – describes what we should see when different mesons split up.

    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.


    Standard Model of Particle Physics from Symmetry Magazine

    The beauty meson is a down quark connected to a bottom anti-quark. When the particle’s properties are plugged into the Standard Model, b-meson decay should produce pairs of electrons and positrons, or electron-like muons and their opposites, anti-muons.

    This electron or muon outcome should be 50-50. But that’s not what we’re seeing. Results are showing far more of the electron-positron products than muon-anti-muons.

    This is worth paying attention to. But when the sum of the results is held up next to the Standard Model’s prediction, the difference is a mere 3.4 sigma. Interesting, but nothing to go wild over.

    The Standard Model is a fine piece of work. Built over decades on the foundations of the field theories first laid out by the brilliant Scottish theorist James Clerk Maxwell, it’s served as a map for the unseen realms of many new particles.

    But it’s not perfect. There are things we’ve seen in nature – from dark matter to the masses of neutrinos – that currently seem to be out of reach of the Standard Model’s framework.

    In moments like this, physicists tweak basic assumptions on the model and see if they do a better job of explaining what we’re seeing.

    “In previous calculations, it was assumed that when the meson disintegrates, there are no more interactions between its products,” says physicist Danny van Dyk from the University of Zurich.

    “In our latest calculations we have included the additional effect: long-distance effects called the charm-loop.”

    The details of this effect aren’t for the amateur, and aren’t quite Standard Model material.

    In short, they involve complicated interactions of virtual particles – particles that don’t persist long enough to go anywhere, but arise in principle in the fluctuations of quantum uncertainty – and an interaction between the decay products after they’ve split up.

    What is interesting is that by explaining the meson’s breakdown through this speculative charm loop the anomaly’s significance jumps to a convincing 6.1σ.

    In spite of the leap, it’s still not a champagne affair. More work needs to be done, which includes piling up the observations in light of this new process.

    “We will probably have a sufficient amount within two or three years to confirm the existence of an anomaly with a credibility entitling us to talk about a discovery,” says Marcin Chrzaszcz from the University of Zurich.

    If confirmed it would show enough flexibility in the Standard Model to stretch its boundaries, potentially revealing pathways to new areas of physics.

    It’s a tiny crack, and still might turn up nothing. But nobody said solving the biggest mysteries in the Universe would be easy.

    This research was published in European Physical Journal C.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Meet CERN in a variety of places:

    Quantum Diaries

    Cern Courier

    THE FOUR MAJOR PROJECT COLLABORATIONS

    ATLAS
    CERN ATLAS New

    ALICE
    CERN ALICE New

    CMS
    CERN CMS New

    LHCb
    CERN LHCb New II

    LHC

    CERN map

    CERN LHC Grand Tunnel

    CERN LHC particles

    OTHER PROJECTS AT CERN

    CERN AEGIS

    CERN ALPHA

    CERN AMS

    CERN ASACUSA

    CERN ATRAP

    CERN AWAKE

    CERN CAST Axion Solar Telescope

    CERN CLOUD

    CERN COMPASS

    CERN DIRAC

    CERN ISOLDE

    CERN LHCf

    CERN NA62

    CERN NTOF

    CERN TOTEM

    CERN UA9

    CERN Proto Dune

     
  • richardmitnick 9:02 am on August 26, 2018 Permalink | Reply
    Tags: Science Alert

    From CERN via Science Alert: “Physicists Are Almost Able to Cool Antimatter. Here’s Why That’s a Big Deal” 

    Cern New Bloc

    Cern New Particle Event

    CERN New Masthead

    From CERN

    via

    Science Alert

    (koto_feja/iStock)

    26 AUG 2018
    KRISTIN HOUSER

    Where is all the antimatter?

    We’re still figuring out what the heck antimatter even is, but scientists are already getting ready to fiddle with it.

    Physicists at the European Organization for Nuclear Research (CERN) are one step closer to cooling antimatter using lasers, a milestone that could help us crack its many mysteries.

    They published their research on Wednesday in the journal Nature.

    Antimatter is essentially the opposite of “normal” matter. While protons have a positive charge, their antimatter equivalents, antiprotons, have the same mass, but a negative charge.

    Electrons and their corresponding antiparticle, positrons, have the same mass — the only difference is that they have different charges (negative for electrons, positive for positrons).

    When a particle meets its antimatter equivalent, the two annihilate one another, canceling the other out.

    In theory, the Big Bang should have produced an equal amount of matter and antimatter, in which case, the two would have just annihilated one another.

    But that’s not what happened — the Universe seems to have way more matter than antimatter.

    Researchers have no idea why that is, and because antimatter is very difficult to study, they haven’t had much recourse for figuring it out.

    And that’s why CERN researchers are trying to cool antimatter off, so they can get a better look.

    Using a tool called the Antihydrogen Laser Physics Apparatus (ALPHA), the researchers combined antiprotons with positrons to form antihydrogen atoms.

    CERN ALPHA Antimatter Factory

    Then, they magnetically trapped hundreds of these atoms in a vacuum and zapped them with laser pulses. This caused the antihydrogen atoms to undergo something called the Lyman-alpha transition.

    “The Lyman-alpha transition is the most basic, important transition in regular hydrogen atoms, and to capture the same phenomenon in antihydrogen opens up a new era in antimatter science,” one of the researchers, Takamasa Momose, said in a university press release.
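    In ordinary hydrogen, the Lyman-alpha line is the jump between the two lowest energy levels and sits in the vacuum ultraviolet. The Rydberg formula recovers its familiar wavelength — standard textbook physics, included here only for context:

      # Lyman-alpha wavelength for hydrogen from the Rydberg formula (n = 2 -> n = 1).
      RYDBERG = 1.0973731568e7   # Rydberg constant, 1/m

      inverse_wavelength = RYDBERG * (1 / 1**2 - 1 / 2**2)
      wavelength_nm = 1e9 / inverse_wavelength

      print(f"Lyman-alpha wavelength: ~{wavelength_nm:.1f} nm")   # ~121.5 nm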

    According to Momose, driving this transition is a critical first step toward cooling antihydrogen.

    Researchers have long used lasers to cool other atoms to make them easier to study. If we can do the same for antimatter atoms, we’ll be better able to study them.

    Scientists can take more accurate measurements, and they might even be able to solve another long-unsettled mystery: figuring out how antimatter interacts with gravity.

    For now, the team plans to continue working toward that goal of cooling antimatter. If they’re successful, they might be able to help unravel mysteries with answers critical to our understanding of the Universe.

    See the full article here.

     
  • richardmitnick 11:34 am on August 23, 2018 Permalink | Reply
    Tags: Astronomer Masafumi Noguchi of Tohoku University, Our Galaxy Has Already Died Once. Now We Are in Its Second Life, Science Alert

    From Science Alert: “Our Galaxy Has Already Died Once. Now We Are in Its Second Life” 

    ScienceAlert

    From Science Alert

    23 AUG 2018
    MICHELLE STARR

    (NASA/JPL-Caltech/S. Stolovy)

    Milky Way. NASA/JPL-Caltech/ESO/R. Hurt

    The stars remember.

    The Milky Way is a zombie. No, not really, it doesn’t go around eating other galaxies’ brains. But it did “die” once, before flaring back to life. That’s what a Japanese scientist has ascertained after peering into the chemical compositions of our galaxy’s stars.

    In a large section of the Milky Way, the stars can be divided into two distinct populations based on their chemical compositions. The first group is more abundant in what is known as α elements – oxygen, magnesium, silicon, sulphur, calcium and titanium. The second is less abundant in α elements, and markedly more abundant in iron.

    The existence of these two distinct populations implies that something different is happening during the formation stages. But the precise mechanism behind it was unclear.

    Astronomer Masafumi Noguchi of Tohoku University believes his modelling shows the answer. The two different populations represent two different periods of star formation, separated by a quiescent, or “dormant”, period in which no stars formed.

    Based on the theory of cold flow galactic accretion proposed back in 2006 [MNRAS], Noguchi has modelled the evolution of the Milky Way over a 10 billion-year period.

    Originally, the cold flow model was suggested for much larger galaxies, proposing that massive galaxies form stars in two stages. Because of the chemical composition dichotomy of its stars, Noguchi believes this also applies to the Milky Way.

    That’s because the chemical composition of stars is dependent on the gases from which they are formed. And, in the early Universe, certain elements – such as the heavier metals – hadn’t yet arrived on the scene, since they were created in stars, and only propagated once those stars had gone supernova.

    In the first stage, according to Noguchi’s model, the galaxy is accreting cold gas from outside. This gas coalesces to form the first generation of stars.

    After about 10 million years, which is a relatively short timescale in cosmic terms, some of these stars died in Type II supernovae. These explosions spread the α elements throughout the galaxy, where they were incorporated into new stars.

    But, according to the model, it all went a bit belly-up after about 3 billion years.

    “When shock waves appeared and heated the gas to high temperatures 7 billion years ago, the gas stopped flowing into the galaxy and stars ceased to form,” a release from Tohoku University says.

    During a hiatus of about 2 billion years, a second round of supernovae took place – the much longer-timescale Type Ia supernovae, which typically occur after a stellar lifespan of about 1 billion years.

    It’s in these supernovae that iron is forged, and spewed out into the interstellar medium. When the gas cooled enough to start forming stars again – about 5 billion years ago – those stars had a much higher percentage of iron than the earlier generation. That second generation includes our Sun, which is about 4.6 billion years old.

    Noguchi’s model is consistent with recent research on our closest galactic neighbour, Andromeda, which is thought to be in the same size class as the Milky Way. In 2017, a team of researchers published a paper [The Astrophysical Journal] that found Andromeda’s star formation also occurred in two stages, with a relatively quiescent period in between.

    If the model holds up, it may mean that the evolution models of galaxies need to be revised – that, while smaller dwarf galaxies experience continuous star formation, perhaps a “dead” period is the norm for massive ones.

    If future observations confirm, who’s up for renaming our galaxy Frankenstein?

    Noguchi’s paper has been published in the journal Nature.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

     