
  • richardmitnick 10:52 pm on February 23, 2021
    Tags: "A curious observer’s guide to quantum mechanics pt 7- The quantum century", ars technica, ,   

    From ars technica: “A curious observer’s guide to quantum mechanics pt 7- The quantum century” 

    From ars technica

    2/21/2021
    Miguel F. Morales

    Manipulating quantum devices has been like getting an intoxicating new superpower for society.

    Credit: Aurich Lawson / Getty Images.
    ________________________________________________________________________________________________________
    One of the quietest revolutions of our current century has been the entry of quantum mechanics into our everyday technology. It used to be that quantum effects were confined to physics laboratories and delicate experiments. But modern technology increasingly relies on quantum mechanics for its basic operation, and the importance of quantum effects will only grow in the decades to come. As such, physicist Miguel F. Morales has taken on the herculean task of explaining quantum mechanics to laypeople in this seven-part series (no math, we promise). Below is the series finale, but you can always find the starting story plus a landing page for the entire series on site.
    ________________________________________________________________________________________________________

    As tool builders, it is only very recently that we’ve been able to use quantum mechanics. Understanding and manipulating quantum devices has been like getting an intoxicating new superpower—there are so many things we can now build that would have been impossible just a few years ago.

    We encountered a few of these quantum technologies in the previous articles. Some of them, like the quantum dots in TVs, are already becoming commonplace; others, like optical clocks, exist but are still very rare.

    As this is the last article in this series, I’d like to look to a near future where quantum technologies are likely to infuse our everyday existence. One does not have to look far—all of the technologies we’ll explore today already exist. Most of them are still rare, isolated in laboratories or as technology demonstrators. Others are hiding in plain sight, such as the MRI machine at the local hospital or the hard drive sitting on your desk. In this article, let’s focus on some of the technologies that we did not encounter in earlier articles: superconductivity, particle polarization, and quantum electronics.

    As we look at these quantum technologies, envision what it will be like to live in a world where quantum devices are everywhere. What will it mean to be technically literate when knowing quantum mechanics is a prerequisite for understanding everyday technology?

    So pick up your binoculars, and let’s look at the quantum technologies coming over the next ridge.

    MRI magnets under construction at the Philips Healthcare production facility in 2010.

    A magnet levitating above a superconductor—this makes a great classroom demonstration!
    Credit: Trevor Prentice / phys.org.

    Superconductors

    In a normal conducting wire, you can attach a battery and measure how quickly the electrons move through it (the current, or number and speed of electrons). It takes some pressure (voltage) to push the electrons through, and doing that pushing releases some heat—think of the red glow of the coils in a room heater or hair dryer. The difficulty of pushing the electrons through a material is the resistance.

    But we know that electrons move as waves. As you cool down all the atoms in a material, the size of the electron waves carrying the electric current becomes larger. Once the temperature gets low enough, this waviness can go from being an annoying subtlety to the defining characteristic of the electrons. Suddenly the electron waves pair up and move effortlessly through the material—the resistance drops to zero.

    The temperature at which the waviness of electrons takes over depends on the crystal the electrons are in, but it is always cold, involving temperatures at which gases like nitrogen or helium become liquids. Despite the challenge of keeping things this cold, superconductivity is such an amazing and useful property that we’re using it anyway.

    Electromagnets. The most widespread use of superconductivity is for the electromagnets in MRI (Magnetic Resonance Imaging) machines. As a kid, you may have made an electromagnet by coiling a wire around a nail and attaching the wire to a battery. The magnet in an MRI machine is similar, in that it’s just a big coil of wire. But when you have ~1,000 amps of current flowing through the wire, keeping the magnet working becomes expensive. It would normally end up looking like the world’s largest space heater.

    So the answer is to use a special wire and cool it down in liquid helium. Once it is superconducting, you can plug it into a power source and ramp up the current (this takes 2-3 days). Then you unplug the magnet and walk away. Because there is no resistance, the current will continue to flow for as long as you keep the magnet cold. When a hospital installs a new MRI, the magnet is turned on when it is installed, then unplugged and left on for the rest of its life.

    A superconducting magnet used for a particle detector. Credit: Brookhaven National Laboratory (US).

    While MRI machines are the most visible examples, superconducting magnets are actually quite common. Any good chemistry laboratory or department will have several superconducting magnets in their Nuclear Magnetic Resonance (NMR) machines and mass spectrometers. Superconducting magnets line 18 km of the Large Hadron Collider, and they show up in other ways in physics departments. When we had a shoestring project, we scrounged up a superconducting magnet from the storage alley behind my lab and refurbished it. Physicists are mailed glossy catalogs by superconducting magnet manufacturers.

    Transmission lines. The next obvious application is to stretch a superconducting wire out and use it to carry electricity. There are several demonstration projects around the world that use superconducting power lines. As with most industrial applications, it is just a matter of finding cases where the performance of a superconductor is worth its high price. As the price comes down, long distance superconducting transmission lines may become crucial as we add more renewable solar and wind energy to the grid—being able to losslessly ship power long distances can even out the local variations in renewable power production.

    Generators and motors. If you have incredibly strong superconducting magnets, you want to use them in electric generators and motors. Cooling, as always, is an issue, but the much stronger magnets can make the motor/generators significantly smaller and more efficient. This is particularly enticing for wind turbines (reduced weight on the tower), and electric drives for boats and aircraft (reduced weight and improved efficiency).

    Polarized particles

    In the last article, we played a lot with polarized photons. But polarization is an inherent property of most other fundamental particles, too. While photons get opposite polarizations when you turn the polarizer 90° (vertical vs. horizontal), electrons, protons, and neutrons are oppositely polarized when you turn the polarizer 180° (up vs. down). Whether a particle is an introvert or extrovert is closely related to its polarization characteristics. (Polarization at 90° and 180° are not the only possibilities; particles like gravitons are oppositely polarized at 45°, and the Higgs boson has no polarization at all.)

    MRI machines work by polarizing the hydrogen nuclei in your body. The single proton in a hydrogen nucleus can be polarized by a very strong magnetic field (preferably from a superconducting magnet). Once polarized, the nuclei can be tapped with a radio pulse and they will ring like a radio bell. The exact pitch of the proton’s ringing depends on the strength of the magnetic field that proton sees (stronger field creates higher pitch).

    An MRI images your body by carefully changing the magnetic field and repeatedly listening to the radio ringing of your hydrogen nuclei. The MRI will make the field a little stronger at your head than your feet, ring all the hydrogen, then make the field a little stronger on the left than the right, and ring all the hydrogen again. By carefully listening to how many nuclei are ringing at each pitch in different magnet configurations, it can build up a three-dimensional view of where all the hydrogen in your body is. The result? A beautiful map of your insides. (Modern MRIs use additional tricks, as exhaustively explained here.)
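
    The "stronger field, higher pitch" relationship is linear: the ringing frequency is the field strength times the proton's gyromagnetic ratio, about 42.58 MHz per tesla. A back-of-the-envelope sketch (the function name is ours):

```python
# Proton ringing pitch (the Larmor frequency) scales linearly with field.
GAMMA_MHZ_PER_T = 42.58  # proton gyromagnetic ratio / 2*pi, in MHz per tesla

def larmor_frequency_mhz(field_tesla):
    """Frequency (in MHz) at which hydrogen nuclei ring in a given field."""
    return GAMMA_MHZ_PER_T * field_tesla

# A field gradient along the body makes each position ring at its own
# pitch, which is how the MRI localizes the hydrogen.
print(larmor_frequency_mhz(1.5))  # a 1.5 T magnet: ~63.9 MHz
print(larmor_frequency_mhz(3.0))  # a 3 T magnet: ~127.7 MHz
```

    Doubling the field doubles the pitch, which is why a small field gradient across your body spreads the hydrogen ringing into a range of distinguishable tones.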

    I love MRI machines because they rely on quantum mechanics for every aspect of their operation. They use superconductivity to make the strong magnetic field, which causes quantum polarization of the nuclei in your body, and then use a radio ping and ringing related to even more advanced quantum magic. MRIs are inherently quantum mechanical machines.

    A qubit for a quantum computer. Credit: UCSB(US).

    Quantum electronics

    The obvious next step is to play with the polarization of electrons. This is often called “spintronics,” and we’re already using that, too—all modern hard disk drives rely on electron polarization to function.

    Conceptually, you can read a hard disk by running a small loop of wire past the tiny magnets on the disk that record the data. The magnets induce a weak electrical signal in the wire, thus ‘reading’ the magnets’ directions. A loop of wire, with a small bit of magnetic iron to boost the signal, is exactly how audio-tape works and was how hard drives worked until nearly the end of the 20th century.

    But making a hard disk hold more data meant making the magnets on the disk ever smaller; correspondingly, the electrical signal produced in the loop of wire became smaller and smaller until it was at the edge of being undetectable. This led to the use of electron polarization heads (giant magnetoresistance in 1997 and tunneling magnetoresistance in 2004). Both of these technologies are based on quantum mechanics, and use the polarization of the electron to detect the direction of the magnets on the disk.

    You can think of this like passing light through two polarizers. If the polarizers are aligned, most of the light goes through (low resistance); if they are crossed, very little light gets through (high resistance). In today’s read heads, there are two thin layers of iron that act as electron polarizers. One of them has a fixed direction, and polarizes the incoming electrons, while the other iron layer is free to align with the magnet on the disk. As the second electron polarizer switches direction, the number of electrons passing through changes dramatically.
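
    A toy sketch of that idea (the names and resistance values are made up for illustration): the fixed layer acts as the first polarizer, the free layer flips to follow each bit's magnet, and the head recovers bits by sensing low versus high resistance.

```python
LOW_R, HIGH_R = 1.0, 5.0  # illustrative resistances, arbitrary units

def read_bits(magnet_directions, fixed_layer="up"):
    """Toy magnetoresistive read head: when the free layer aligns with the
    fixed layer, many polarized electrons pass (low resistance); when it
    anti-aligns, few pass (high resistance). Resistance encodes the bit."""
    bits = []
    for direction in magnet_directions:
        resistance = LOW_R if direction == fixed_layer else HIGH_R
        bits.append(1 if resistance == LOW_R else 0)
    return bits

print(read_bits(["up", "down", "down", "up"]))  # [1, 0, 0, 1]
```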

    The tunneling magnetoresistance heads in current hard drives include electron tunneling between the iron layers for even more quantum goodness. Modern hard drives simply do not work without quantum electronics.

    But we can go beyond the polarization of electrons and really leverage the electron waviness. By interleaving thin layers of superconducting and normal materials, we can make the quantum electronic equivalents of transistors and diodes such as Superconducting Tunnel Junctions (STJs) and Superconducting Quantum Interference Devices (affectionately known as SQUIDs). These devices take full advantage of the wave-like nature of electrons and can be used as building blocks for all sorts of novel electronics.

    The BICEP2 microwave telescope’s focal plane is designed to operate at 0.25 K (0.25 degrees above absolute zero) in order to reduce thermal noise. All of the circuitry in this camera incorporates superconductivity, from the 512 TES bolometers, to the SQUID multiplexers, to the superconducting PCB traces. Credit: Anthony Turner, JPL-Caltech.

    Because of the superconducting requirement, they need to be kept very cold, but quantum electronics have already revolutionized precision measurement. The most visible application has been in measuring the Cosmic Microwave Background (CMB). Observations of the CMB have shown that we live in an expanding Universe, determined the age of our Universe, and identified the fraction of it composed of dark matter and dark energy. Measurements of the CMB have transformed our understanding of the Universe we live in. These measurements have been largely enabled by SQUIDs and related superconducting electronics in their microwave cameras.

    In the world of precision measurement, quantum circuitry is the new normal. Quantum effects are found in the radio amplifiers of dark matter searches and the cameras of X-ray satellites, and quantum sensors can image the working brain using magnetic fields. They also serve as the building blocks of many quantum computers, such as D-Wave and IBM Q.

    The quantum world is already here, and is starting to become commonplace.

    Living with Quantum Mechanics

    So what does it mean to be technically literate when knowing quantum mechanics is a prerequisite for understanding our everyday technology? You have to understand that electrons move like waves to understand how a quantum dot in your TV works. Without quantum mechanics it makes no sense—there is no classical analog. As quantum devices become pervasive, an understanding of quantum mechanics will be required to make sense of our world. For people who do not know quantum mechanics, the answer to “How does X work?” will increasingly become “magic.”

    Maybe this is OK? None of us understand how everything we encounter works, and many people are fine with not understanding. Quantum mechanics will enable us to build some wondrous technology—will not understanding how it works detract from its magic?

    I suspect some of you will say the whole reason you read this marathon article series is because you want to know why. And no, magic is not a good enough answer!

    Perhaps there is a more hopeful answer. Maybe this is but a step on a long road. In the early 20th century, electronics was a new thing, understood only by specialists. Transistors were not invented until mid-century. Now, elementary school children wire up circuits and there are university departments dedicated to teaching it. We have learned how to teach electronics much more broadly as it has become a crucial part of our lives.

    Today quantum mechanics is rarely taught outside the physics building. Most college students have never taken a course in quantum mechanics, and popular science videos and articles rarely discuss the gorgeous but advanced topics we explored in this article series. If we want a world where most people can understand the technology around them, we are going to have to figure out how to teach quantum mechanics more broadly.

    And so I’d like to close with a huge thank you to you, the reader, for accompanying me on this journey. I have been teaching many of these concepts in an advanced graduate physics class (a year of graduate quantum mechanics is a prerequisite…). This article series is my attempt to take these beautiful and subtle ideas and present them in a more accessible way. It is one thing to say we need to figure out how to teach quantum mechanics to a broader audience; it is much scarier to actually lace up your boots and try to lead a tour. I am so grateful to all of you for your energy and time, and for trooping along with me on this safari through the quantum woods.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Ars Technica was founded in 1998 when Founder & Editor-in-Chief Ken Fisher announced his plans for starting a publication devoted to technology that would cater to what he called “alpha geeks”: technologists and IT professionals. Ken’s vision was to build a publication with a simple editorial mission: be “technically savvy, up-to-date, and more fun” than what was currently popular in the space. In the ensuing years, with formidable contributions by a unique editorial staff, Ars Technica became a trusted source for technology news, tech policy analysis, breakdowns of the latest scientific advancements, gadget reviews, software, hardware, and nearly everything else found in between layers of silicon.

    Ars Technica innovates by listening to its core readership. Readers have come to demand devotedness to accuracy and integrity, flanked by a willingness to leave each day’s meaningless, click-bait fodder by the wayside. The result is something unique: the unparalleled marriage of breadth and depth in technology journalism. By 2001, Ars Technica was regularly producing news reports, op-eds, and the like, but the company stood out from the competition by regularly providing long thought-pieces and in-depth explainers.

    And thanks to its readership, Ars Technica also accomplished a number of industry-leading moves. In 2001, Ars launched a digital subscription service when such things were non-existent for digital media. Ars was also the first IT publication to begin covering the resurgence of Apple, and the first to draw analytical and cultural ties between the world of high technology and gaming. Ars was also first to begin selling its long form content in digitally distributable forms, such as PDFs and eventually eBooks (again, starting in 2001).

     
  • richardmitnick 2:21 pm on February 14, 2021
    Tags: "A curious observer’s guide to quantum mechanics Pt. 6- Two quantum spooks", ars technica, Polarization, Today we’re going to see entanglement and measurement order.   

    From ars technica: “A curious observer’s guide to quantum mechanics Pt. 6- Two quantum spooks” 

    From ars technica

    2/14/2021
    Miguel Morales

    Credit: Aurich Lawson / Getty Images.

    Throughout our quantum adventures to date, we’ve seen a bunch of interesting quantum effects. So for our last major excursion, let’s venture into a particularly creepy corner of the quantum wood: today, we’re going to see entanglement and measurement order.

    Together, these two concepts create some of the most counterintuitive effects in quantum mechanics. They are so counterintuitive that this is probably a good time to re-emphasize that nothing in this series is speculative—everything we’ve seen is backed by hundreds of observations. Sometimes the world is much stranger than we expect it to be.

    I’ve always considered the world of spies and espionage to be strange and spooky, so maybe it is fitting that one of the applications of what we are talking about today is cryptography. But there’s a lot to go over before we get there.

    Playing in the sunlight

    After a long walk through an increasingly dark and gloomy forest, gnarled trees dripping with vines, we unexpectedly emerge into a meadow sparkling in bright sunshine. Blinking in the light, we pull out our polarized sunglasses.

    The lenses in polarized sunglasses are polarized in the vertical direction, meaning that when worn normally, they will allow light polarized vertically to pass through but will completely block horizontally polarized light. This is advantageous, because light glinting off of water is mostly horizontally polarized; a lens that only allows vertically polarized light through will greatly reduce the reflected glare.

    Being a little suspicious of the beautiful sunshine in our mysterious glade, we start looking through multiple pairs of sunglasses. Squinting through two or three pairs of sunglasses at the same time does make you look ridiculous, but such are the sacrifices we make for science. We’ll diagram the results you would see, but if you can scrounge together three pairs of polarized sunglasses, you can perform all of the experiments in this article at home.

    ________________________________________________________________________________________________________
    Polarization
    If you’ve never played with polarized sunglasses, it is great fun to look at different objects while tilting your head in various directions (or holding the glasses in front of you and rotating them if you want to look slightly less like a doofus—whatever you do, please don’t look at the Sun.) Some things will look the same regardless of how you rotate the glasses: concrete, trees, and houses all reflect unpolarized light. Others will dramatically change brightness as you rotate the glasses: reflections off of water and car windows, the blue sky (tangential to the direction to the Sun), and most LCD computer monitors. That’s because these objects reflect or emit polarized light, which has its own set of rules for how it interacts with the glasses.
    ________________________________________________________________________________________________________

    What you see when looking through two polarized sunglasses. Each polarized lens only lets through light that’s polarized in the direction of the arrow on the temple. All of the lenses will let through half of the unpolarized light (the medium-gray tint). But when light has to pass through both glasses, the relative orientation of the glasses matters. On the left, both lenses are letting through vertically polarized light, so all of the light that makes it through the first lens also goes through the second lens. By contrast, on the right, the glasses in the back are rotated to only let through horizontally polarized light, which is entirely blocked by the front glasses. If you hold the glasses at a 45° angle with respect to one another, half of the light going through the first pair of glasses will make it through the second pair (1/2 x 1/2 = 1/4 of background light). Credit: Miguel Morales.

    If you hold two pairs of the glasses in front of you and then rotate one of the pairs, you will notice that the amount of light passing through varies dramatically. When the glasses are at a 90° angle (e.g. one held normally, and the other pair sideways) almost no light will get through. The combined view through the lenses will look almost completely black—for really good polarized lenses, they will get almost welding-glass dark. Conversely, when held with the same orientation, they let almost the same amount of light through as one pair of glasses alone (with good polarizing lenses held in perfect alignment, this is exactly true).

    This is fairly straightforward to understand. If both glasses are held vertically, the first pair of glasses only lets vertical light through. Since the second pair also lets vertical light through, it has nothing to block—all of the light that made it through the first pair will also make it past the second. Conversely, if we hold the first pair sideways so it passes only horizontal light and hold the second pair normally so it blocks horizontal light, then there is no light that can get through both. Together, they appear very dark. And if you hold the glasses at a 45° angle to each other, it’s intermediate; half the light passing through the first pair of glasses gets through the second.
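
    These fractions all follow one rule (Malus's law): an ideal polarizer passes cos² of the relative angle, and the first lens passes half of unpolarized light. A small sketch (the function name is ours):

```python
import math

def transmitted_fraction(lens_angles_deg):
    """Fraction of unpolarized light passing a stack of ideal polarizers,
    listed front to back: 1/2 at the first lens, then cos^2 of each
    lens-to-lens relative angle (Malus's law)."""
    if not lens_angles_deg:
        return 1.0
    fraction = 0.5  # the first polarizer passes half of unpolarized light
    for prev, cur in zip(lens_angles_deg, lens_angles_deg[1:]):
        fraction *= math.cos(math.radians(cur - prev)) ** 2
    return fraction

print(transmitted_fraction([0, 0]))   # aligned pair: 0.5
print(transmitted_fraction([0, 45]))  # 45 degrees apart: 0.25
print(transmitted_fraction([0, 90]))  # crossed: essentially 0
```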

    Only the orientation between the lenses matters, not how they’re oriented relative to the environment.
    Credit: Miguel Morales.

    It’s only the angle between the lenses that matters. If we pick a relative orientation, such as crossed, and rotate both lenses together, we’ll see the opacity stays the same. The green-framed glasses on the right are held at ±45° to the vertical, but since the first pair lets through light polarized 45° left of vertical, all of the light is blocked.

    This all seems to make sense. But then you remember this is quantum mechanics, and the ominous movie music starts to play in the background. Let’s add a third pair of glasses. And to help keep all the orientations straight, we will always use blue frames for the horizontal/vertical orientations and green frames for the ±45° orientations.

    Starting with crossed lenses, we add a diagonal lens behind them. Credit: Miguel Morales.

    We will start with two crossed glasses, as shown on the left. We will then add a third lens oriented at 45° behind the two we already had. On close examination, this behaves as we’d expect. Anywhere the original glasses block the light is still black. In the little corners where there’s not complete overlap, the light goes through the 45° lens and one of the other lenses and we get the slightly darker tint we saw back in Figure 1. If we look at what’s new, we see that the region where the light must pass through all three lenses is also black.

    Rearranging the order of the glasses makes a big difference. Credit: Miguel Morales.

    But we see something remarkable if we reorder the glasses. We again start with crossed glasses, but now slide the 45° lens between them. When we do this, light gets through where all three lenses overlap. Where only two lenses overlap we get the expected results—black for crossed and tinted for 45° relative orientation. But even though the front and back lenses are crossed and would normally let no light through, if you interpose another lens with intermediate polarization, suddenly light can pass through.

    This is weird. The order of the glasses matters. If you are following along at home, try flipping the order of the lenses. If crossed lenses are next to each other in the stack, almost no light gets through. But if you alternate between blue (horizontal/vertical) and green (±45°) frames, some light will get through.
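
    The arithmetic behind this can be sketched with the same cos² rule (`cos2` is our helper name): each lens-to-lens step contributes its own factor, so reordering the angles changes the product.

```python
import math

def cos2(angle_deg):
    """Chance light polarized along one axis passes a lens rotated
    angle_deg away from that axis."""
    return math.cos(math.radians(angle_deg)) ** 2

# Unpolarized light: 1/2 passes the first lens in either arrangement.
# Diagonal lens BEHIND the crossed pair: the 0 -> 90 step blocks everything.
behind = 0.5 * cos2(90 - 0) * cos2(45 - 90)
# Diagonal lens BETWEEN the crossed pair: two gentle 45-degree steps.
between = 0.5 * cos2(45 - 0) * cos2(90 - 45)

print(f"diagonal behind:  {behind:.3f}")   # 0.000
print(f"diagonal between: {between:.3f}")  # 0.125
```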

    Wearing sunglasses in the dark

    In the bright sunlight, we saw that various fractions of the light got through depending on how the polarized sunglasses were oriented—from half the light when the lenses were oriented the same way to none at all when crossed. But sunlight is the product of a huge number of photons. What happens if there is only one photon?

    Faint stars are my favorite source of single, unpolarized photons, so if we wait in our spooky meadow till nightfall, we can observe how single photons pass through the sunglasses.

    If we look at a star with one polarized lens, half of the photons will get through. Whether a photon will go through or be absorbed by the lens is random—and not “kind of random,” but perfectly random. It is so perfectly random that the passage of starlight through a polarized lens can be used to create the highest quality random numbers known to humankind—there is actually a market for random numbers generated in this way.
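
    From the outside, such a generator is just a stream of perfect coin flips. A minimal sketch (using a pseudo-random stand-in for the genuinely quantum randomness; the function name is ours):

```python
import random

def starlight_bits(n_photons, seed=None):
    """Simulate unpolarized photons hitting one polarized lens: each is
    transmitted (1) or absorbed (0) with a perfect 50-50 chance. A real
    quantum generator gets this randomness from the photons themselves."""
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(n_photons)]

bits = starlight_bits(32)
print("".join(str(b) for b in bits))
```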

    Regardless of how the glasses are held, exactly half of the unpolarized single photons will pass through the lens. Whether an unpolarized photon passes through the lens is perfectly random. Credit: Miguel Morales.

    The question now becomes: what happens to photons if we make them pass through two stacked lenses? We’ll start with lenses oriented in the same way (below left). When unpolarized starlight hits the first lens, half of the photons are randomly absorbed. But those photons that were lucky enough to make it through the first lens all make it through the second. Or said another way, 100 percent of the photons that made it through the first lens make it through the second.

    If we turn the lenses 45° relative to one another (middle), we know fewer photons will make it through. But what is the pattern? It turns out that of the 50 percent of photons that pass the first lens, half will also make it through the second, giving us a total of 25 percent making it through. And this 50 percent chance of getting through the second lens is again perfectly random. It is like the photons rolled the dice again.

    Lastly, if we have two crossed lenses (right), there is no chance that a photon that passed the first lens gets through the second.

    The same rules that applied above to lots of photons also apply to just one. Credit: Miguel Morales.

    These rules probably seem similar to the ones we saw applied to sunlight. But the more we play around with things, the stranger this becomes. If we use two blue-framed glasses or two green-framed glasses, whether a photon makes it through the second lens is deterministic (100 percent or 0 percent). But if we use one lens from the blue glasses and one lens from the green glasses, whether the photon makes it through the second lens is random. It turns out this works in any combination. Any two glasses of the same frame color cause deterministic behavior, and two glasses of different frame colors cause random behavior. And it is this difference between deterministic and random that enables quantum cryptography.
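
    This deterministic-versus-random split falls out of the cos² rule for single photons (a sketch; the function names are ours): lenses 0° or 90° apart give certain answers, while lenses 45° apart give a fresh coin flip.

```python
import math
import random

def passes_second_lens(first_deg, second_deg, rng):
    """A photon that passed the first lens is polarized along it; it then
    passes the second lens with probability cos^2 of the relative angle."""
    p = math.cos(math.radians(second_deg - first_deg)) ** 2
    return rng.random() < p

rng = random.Random(1)
trials = 10_000
same    = sum(passes_second_lens(0, 0, rng) for _ in range(trials))
crossed = sum(passes_second_lens(0, 90, rng) for _ in range(trials))
mixed   = sum(passes_second_lens(0, 45, rng) for _ in range(trials))
print(same / trials)     # 1.0: same set, always passes (deterministic)
print(crossed / trials)  # 0.0: crossed, never passes (deterministic)
print(mixed / trials)    # ~0.5: different sets, perfectly random
```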

    Credit: Miguel Morales.

    This also helps us understand the sets of three lenses. When the blue-framed lenses are next to each other, as in the left stack, whether a photon makes it through the next lens is deterministic (zero in this case). But in the right stack we keep alternating the color of the glasses frames. So there is a 50 percent chance of the light after the first blue-framed lens making it through the green, and another 50 percent chance of the light from the green-framed lens making it through the frontmost blue-framed lens. Because at every stage the chance of making it through was random, some of the light gets lucky and makes it through (1/2 x 1/2 x 1/2 = 12.5 percent).

    We were able to take the four orientations of sunglasses we considered and sort them into two sets—blue-framed vertical and horizontal, and green-framed ±45°. Within frames of a set, the probability of a photon passing the second lens is deterministic (100 percent or zero); but when using frames of different sets (green with blue) the probability of the photon passing the second lens was perfectly random. This means we have the ability to sort our measuring devices—polarized sunglasses in this case—into sets that are internally deterministic but mutually random. This is a very deep feature of quantum mechanics, a property of our world we can see by looking at the stars with sunglasses on.

    Why spies wear shades

    Crystals, like those found in rock shops, have remarkable abilities to bend and reflect light. And some special crystals can take one photon of light and split it into two photons. These daughter photons will head off in different directions, each with about half the energy of their parent photon. If the photon-splitting is done with care, the daughter photons look like twins—they will always have the same polarization.

    Any good study of twins inevitably involves separating them. So we’ll put our twin photons into separate fiber optic cables and send them to different cities, say Detroit (US) and Windsor (Canada), where we can look at them using either blue- or green-framed polarized sunglasses. We’ll make a series of twinned photons; to make the bookkeeping slightly easier, we’ll limit ourselves to the two lenses shown and always look at the twin in Detroit first.

    Making photon twins and sending one twin to Detroit (US) and the other to Windsor (Canada). Credit: Miguel Morales.

    If we use the blue-framed sunglasses in Detroit, each incoming photon has a random 50-50 chance of being absorbed and being transmitted. If we write down whether each photon was absorbed or transmitted with zeroes (absorbed) and ones (transmitted), we get a long, perfectly random sequence. And if our friend in Windsor looks at their stream of photons with the same blue-framed lens, they will also see a long random sequence of photons being absorbed and transmitted. Just like we’d seen with our earlier experiments.

    The record for both Detroit (US) and Windsor (Canada) of whether the photon twins were absorbed (0) or transmitted (1) when they both use blue-framed lenses. Credit: Miguel Morales.

    But if we pick up the phone and compare notes, we discover that we both saw the same random sequence. Whenever a photon was absorbed in Detroit, its twin was also absorbed in Windsor. Whenever a photon was transmitted in Detroit, its twin was transmitted in Windsor. This is a natural consequence of the photons being twins. If the twin in Detroit passes through the lens, it has a vertical polarization, and its twin must also have a vertical polarization. So the twin will also pass through the lens in Windsor if it’s in the same orientation. When we both use the blue-framed lenses, there is a deterministic relationship between the photon twins.

    But what if we use the blue-framed lens in Detroit and our friend uses the green-framed lens in Windsor? We repeat the experiment with a bunch of newly generated twinned photons, and as before, both see a random 50-50 sequence of photons being transmitted and absorbed. But when we pick up the phone to compare results, this time there is no relationship between what we see. If the photon in Detroit was absorbed by the blue-framed lens, there is a random, 50-50 chance of its twin being absorbed or transmitted by the green-framed lens in Windsor. Just like when we looked at starlight through two stacked lenses, if we both use the same kind of lens, there is a deterministic relationship, and when we use lenses from different sets (blue frame and green frame) the results are random.

    The record for both Detroit (US) and Windsor (Canada) of whether the photon twins were absorbed (0) or transmitted (1) when Detroit uses a blue-framed lens and Windsor a green-framed lens. Credit: Miguel Morales.

Now this is odd. The photon in Windsor seems to ‘know’ what happened to its twin in Detroit. If the photon in Detroit encounters a blue-framed lens and the photon in Windsor also sees a blue-framed lens, it knows what to do (deterministic). Similarly, if they both see green-framed lenses, they know what to do. But if the photon in Windsor sees a different kind of lens, it is free to do whatever it wants (random transmission or absorption). How does each photon in Windsor know what its twin in Detroit saw?

    We can test whether it seems to know by trying all of the different combinations of lenses in Detroit and Windsor. Even if we randomly change which lenses are used for every photon that arrives, we always see the same pattern: the results are deterministic when twinned photons see the same lens and random when they do not. The photon in Windsor knows what happened to its twin.

We can even wait until the last moment to decide which lenses to use. In Detroit, we can wait until the photon has almost reached us before we decide whether to use the blue-framed or green-framed lens. Even if there is not enough time for light to travel from Detroit to Windsor before the other photon encounters its lens, the Windsor photon already knows what happened to its twin. As far as we can tell, the photon in Windsor learns what happened in Detroit instantly.

This is such a weird answer, and it doesn’t make intuitive sense. Physicists have tested this in every way we know how: randomly choosing lenses at both Windsor and Detroit, making sure the decisions are truly unpredictable, greatly increasing the separation between the laboratories. No matter how fast or mischievous we get, the second photon seems to know instantly what happened to its twin. Current experimental limits show that any hidden signal between the twins would have to travel at more than 1,000 times the speed of light.

    It is at this point that a small, sinister man steps into the clearing and flashes an identification badge. He’s from your national espionage agency, and he quietly walks up to us and says, “I think it is time we had a little talk.” He turns to lead us out of the quantum woods, as you notice the iridescent frames on the sunglasses he is wearing.

    Terms of entanglement

The instantaneous relationship between twinned photons is incredibly unexpected. Yet it definitely exists; not only have we tested it in thousands of ways, but it turns out to be an important feature of the way our Universe works.

Having a deterministic relationship between two (or more) particles is called entanglement. As with any quantum effect, it applies to all types of particles. It is even possible to create deterministic relationships between different kinds of particles—one of the promising quantum computer prototypes entangles photons with strontium ions (~126 constituent protons, neutrons, and electrons).

    In this article, we have actually explored three closely related effects. With the sunglasses we saw we could categorize measurements into sets (blue frames vs. green frames), where the observations within a set are deterministic but observations between sets are random. This leads to measurement order being important. When looking through three sunglasses, we saw light get through with one ordering and not through another. The non-commuting algebra involved here is one of the things that makes the math of quantum mechanics so hard.

We also saw with the twinned photons that it’s possible to create a deterministic relationship between two particles. This has the bizarre effect of ensuring that each particle instantly knows what happens to its sibling.

    Together, these are the most counter-intuitive aspects of quantum mechanics; it doesn’t feel like this is how our world should work. But it does.

    Not only are these effects real, they are necessary. From the extrovert & introvert bunching of particles we saw back in article three to the precise colors emitted by atoms in the Sun, these three related effects underpin how the world works—the Universe would be a different place without them. And we can put them to practical use; quantum computers are only possible because of entanglement. A strange feature of how particle siblings interact is the basis for an entirely new technology.

    Back at the unmarked government building (aka visitor’s center)

    After a long and odd interview, the spy with the iridescent sunglass frames hands us a business card and tells us to stay in touch. Why? When we measured the twinned photons with the same polarizing lenses, we saw identical copies of a perfectly random sequence. To a spy this is immediately interesting.

Perfect encryption is possible when you have two—and only two—copies of a long, random number sequence. You can combine your secret message (in binary) with a random sequence of the same length (the key), and the resulting encrypted message cannot, even in principle, be cracked. Only the person with the other copy of the random number can unlock the encryption and read your Very Important Message™. This was used during the World Wars and the Cold War in the form of one-time pads.

    The trick is making sure no one makes a copy of the random key. If there are three copies, and the Very Bad Man (VBM) has one of them, he can decode your message. Much of traditional spycraft boils down to making sure no one can intercept and copy one of the two random keys.
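That "combining" step is just XOR: flipping each message bit wherever the key bit is a one. Here is a minimal sketch of a one-time pad in Python (operating on bytes rather than single bits; the message and variable names are ours, not from the article):

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR each message byte with the matching key byte. Applying the
    # same key a second time flips the bits back, undoing the encryption.
    return bytes(d ^ k for d, k in zip(data, key))

message = b"Very Important Message"
key = secrets.token_bytes(len(message))  # truly random, as long as the message

ciphertext = xor_bytes(message, key)     # looks like random noise to anyone else
recovered = xor_bytes(ciphertext, key)   # only a key holder can do this
```

The security rests entirely on the key being random, as long as the message, used once, and held by exactly two parties, which is precisely what the twinned photons can supply.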

    It only takes two small modifications of our earlier experiment with twinned photons to ensure that the key was not intercepted and copied in secret.

    In the first modification, both Detroit and Windsor randomly choose blue- or green-framed lenses for each pair of twinned photons. It doesn’t matter which lens they choose each time, just that about half the time they choose a blue frame and half the time a green.

    They keep this up for quite a while, getting thousands of random ones and zeroes. They then pick up the phone, but instead of telling each other whether they saw each twinned photon transmit or absorb (ones and zeroes), they only say which color frame they chose. Half the time they will have used different lenses, so they throw all those readings away. But when they happened to both have chosen the same frame color, they will have gotten the same readings.

    For this subset of twinned photons, the results are deterministic and they will have both seen the same zeroes and ones. Because they only told each other which twins to use, not what the measurements were, the VBM would not know the key necessary to decode the message even if he listened to the phone call.

    Both Detroit (US) and Windsor (Canada) randomly choose lenses and record whether the photon was absorbed (0) or transmitted (1). They then talk on the phone and say only what color of frame they chose for each pair. For the subset of twins where they both happened to choose the same frame (blue background), they know that they will get the same readings. So they keep these as their shared random key and discard the others. Credit: Miguel Morales.
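The sifting procedure can be sketched in a few lines of Python. This toy model only encodes the statistics described above (identical results for matching frames, independent 50-50 results otherwise), not the underlying physics:

```python
import random

rng = random.Random(42)  # seeded so the run is repeatable

def measure_twins(n_pairs, rng):
    # Each pair: Detroit and Windsor independently pick a frame color.
    # Matching frames give identical bits; mismatched frames give
    # independent random bits.
    detroit, windsor = [], []
    for _ in range(n_pairs):
        frame_d = rng.choice("BG")
        frame_w = rng.choice("BG")
        bit_d = rng.randint(0, 1)
        bit_w = bit_d if frame_w == frame_d else rng.randint(0, 1)
        detroit.append((frame_d, bit_d))
        windsor.append((frame_w, bit_w))
    return detroit, windsor

def sift(detroit, windsor):
    # The phone call: compare only frame colors, and keep the bits
    # where both sides happened to pick the same frame.
    key_d = [b for (fd, b), (fw, _) in zip(detroit, windsor) if fd == fw]
    key_w = [b for (fd, _), (fw, b) in zip(detroit, windsor) if fd == fw]
    return key_d, key_w

detroit, windsor = measure_twins(1000, rng)
key_d, key_w = sift(detroit, windsor)  # two identical copies of a random key
```

About half the pairs survive sifting, and both cities end up holding the same random bit string without ever saying their bits aloud.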

    But what if the VBM cut the fiber and intercepted one of the photon twins? This is like catching the courier carrying the encryption keys—can’t he then copy the key? It turns out there is an easy defense for this that leverages the quantum nature of entanglement. The VBM can cut the fiber to Detroit, make a measurement, and send an imposter photon on to Detroit. But which lens should he use? The VBM does not know if Detroit is going to use a blue- or green-framed lens (Detroit may not even have decided yet).

    The VBM has to guess which lens Detroit will use, and about half the time the VBM will guess wrong. When the VBM picks a blue frame, but both Detroit and Windsor pick green frames, the deterministic relationship between what Detroit and Windsor see is broken. The relationship the scheme relied on was between the true twins, not the photon Windsor sees and the imposter sent to Detroit.

    So to protect against interception, all Detroit and Windsor have to do is compare a small subset of their readings when they picked the same lens. If no one was listening, 100 percent of the time they will have the same reading. But if the VBM is listening, then there is no correlation when the VBM guessed wrong and injected an imposter. Working through the math, only 75 percent of the readings will agree. Windsor and Detroit will immediately know that someone was listening and that the rest of the key should be discarded.
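We can check that 75 percent figure with a toy Monte Carlo. This again just encodes the statistics from the article, with the rule that the VBM's measurement fixes both photons in whichever frame he guessed:

```python
import random

def matched_frame_agreement(n_pairs, eavesdropper, rng):
    # Fraction of same-frame readings that agree between Detroit and Windsor.
    agree = total = 0
    for _ in range(n_pairs):
        frame_d = rng.choice("BG")
        frame_w = rng.choice("BG")
        if eavesdropper:
            frame_v = rng.choice("BG")  # the VBM has to guess a frame
            bit_v = rng.randint(0, 1)   # his measurement's random result
            # After his measurement, each end matches him only if it
            # happens to use his frame; otherwise its result is random.
            bit_d = bit_v if frame_d == frame_v else rng.randint(0, 1)
            bit_w = bit_v if frame_w == frame_v else rng.randint(0, 1)
        else:
            bit_d = rng.randint(0, 1)
            bit_w = bit_d if frame_w == frame_d else rng.randint(0, 1)
        if frame_d == frame_w:          # the subset kept after sifting
            total += 1
            agree += (bit_d == bit_w)
    return agree / total

rng = random.Random(7)
clean = matched_frame_agreement(40000, False, rng)   # no listener: exactly 1.0
tapped = matched_frame_agreement(40000, True, rng)   # VBM listening: near 0.75
```

Without the VBM the matched-frame readings agree every time; with him in the middle, half his guesses are wrong and the agreement drops to about three quarters, which Detroit and Windsor can spot by sacrificing a small sample of their key.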

    This is the holy grail of encryption: a shared random key where you can guarantee that no one intercepted the messenger and made a copy. It is a little disturbing how excited three-letter agencies are about this capability. And it’s not a potential interest—quantum cryptography is already here. While there are still limitations on the range and speed, quantum cryptographic links are in production and use.

    FAQ and one more hike

    Next week we will wrap up our series. Instead of focusing on some strange and beautiful feature of quantum mechanics, we’ll instead survey the wide range of the quantum technologies that will soon permeate our lives. In the process, we’ll try to tackle the larger question of what it means to be technologically literate in a world infused with quantum machines.

    Doesn’t instantaneous correlation mean we can communicate faster than the speed of light? Sadly we have not invented ansibles. You will note that whatever lenses the friends in Windsor and Detroit use, they always see a random sequence (50-50 chance each photon will be absorbed or transmitted). It is only when they pick up the phone and talk, which happens at normal light speed, that they realize they got the same random sequence when they pick the same lenses.

    Because it is always a random sequence no matter which lenses are used, it turns out there is no way to send any information faster than the speed of light. This makes everything compatible with Einstein’s relativity. Still super weird, but not faster than light communication.

What about Feynman diagrams? In this article series, I have purposefully avoided math. Not only is quantum mechanics written in math, but there are three completely different formulations of that math in widespread use: the Schrödinger wave approach, the Dirac formulation, and Feynman’s path integrals. The Schrödinger approach emphasizes the waviness of particles and uses differential equations. The Dirac formulation focuses on quantum mechanics’ sensitivity to measurement order and uses the language of linear algebra.

    Feynman’s path integrals also have a wavy point of view and can be seen as an extension of the Huygens–Fresnel principle of wave propagation. This leads to some truly terrifying path integrals, covering all possible paths and possibilities. Feynman diagrams are a shorthand for keeping track of the approximations you need to make to actually solve things. While the mental models behind the three mathematical traditions are quite distinct, they always give the same answers.

    So why are there three equivalent versions of quantum mechanics? Depending on the problem you are worrying about, it turns out that it can be easier to get the answer using one of the three approaches. And physicists are all about using the path of least resistance.

Looking back at this article series, the wavy ideas from the first five articles are usually expressed using the Schrödinger wave approach. But the deterministic and random measurement sets and entanglement we discussed in this article are easier to express using the Dirac approach. The Feynman approach comes into its own at high energies when particles can be created, and the results of particle accelerators like the Large Hadron Collider are discussed almost entirely in the language of Feynman path integrals. It is not uncommon for experts to switch from one approach to another mid-problem, using whichever approach will get them the right answer with the least pain and suffering (a relative scale, I know).

    So when someone claims to have a “new” version of quantum mechanics, I roll my eyes. Not only do they have to correctly predict all the different observations we’ve seen in this article series, from particle mixing to anti-bunching to entanglement, but we already have three versions that work. The new version had better be useful and make it easier to get the right answer.

    What about Eve? It is traditional when talking about cryptography to have Alice talking to Bob, with Eve trying to listen in. While Eve is a reference to ‘eavesdropping’, not the biblical Eve, I tend to think Eve gets a bad rap. As a physicist, the first person to pick from the tree of knowledge is closer to a patron saint than a villain. So I’ll leave Eve out of my version of the cryptography story.

See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Ars Technica was founded in 1998 when Founder & Editor-in-Chief Ken Fisher announced his plans for starting a publication devoted to technology that would cater to what he called “alpha geeks”: technologists and IT professionals. Ken’s vision was to build a publication with a simple editorial mission: be “technically savvy, up-to-date, and more fun” than what was currently popular in the space. In the ensuing years, with formidable contributions by a unique editorial staff, Ars Technica became a trusted source for technology news, tech policy analysis, breakdowns of the latest scientific advancements, gadget reviews, software, hardware, and nearly everything else found in between layers of silicon.

    Ars Technica innovates by listening to its core readership. Readers have come to demand devotedness to accuracy and integrity, flanked by a willingness to leave each day’s meaningless, click-bait fodder by the wayside. The result is something unique: the unparalleled marriage of breadth and depth in technology journalism. By 2001, Ars Technica was regularly producing news reports, op-eds, and the like, but the company stood out from the competition by regularly providing long thought-pieces and in-depth explainers.

    And thanks to its readership, Ars Technica also accomplished a number of industry leading moves. In 2001, Ars launched a digital subscription service when such things were non-existent for digital media. Ars was also the first IT publication to begin covering the resurgence of Apple, and the first to draw analytical and cultural ties between the world of high technology and gaming. Ars was also first to begin selling its long form content in digitally distributable forms, such as PDFs and eventually eBooks (again, starting in 2001).

     
  • richardmitnick 1:40 pm on February 8, 2021 Permalink | Reply
    Tags: "A curious observer’s guide to quantum mechanics pt. 5- Catching a wave", A century ago the discreteness of the colors emitted by atoms was one of the first hints of electron waves and drove the invention of quantum mechanics., All particles move like waves., , ars technica, Artificial atoms and quantum dots, Atoms are naturally occurring electron traps., Breaking the spectral code allows us to determine which atoms exist in faraway stars., Electron trap, How does a particle’s behavior change when we confine it?, How particles behave when you confine them., Quantum dots really are a purely quantum device and a great example of how quantum mechanics is bleeding into our everyday experience., , The origin of emission spectra from stars, Using spectra taken during a solar eclipse helium was discovered streaming off the Sun 27 years before helium was isolated on Earth., We can identify stars with strange and interesting histories by finding the fingerprints of unusual elements in their spectra., We can make artificial atoms called quantum dots.   

    From ars technica: “A curious observer’s guide to quantum mechanics pt. 5- Catching a wave” 

    From ars technica

    2/7/2021
    Miguel Morales

    Credit: Aurich Lawson / Getty Images.

    Sung to the abbess’s lines in Maria from The Sound of Music:

    “How do you catch a wave like Maria? How do you grab a cloud and pin it down? Oh, how do you solve a particle like Maria? How do you hold a moonbeam in your hand?”

    Through our expeditions into the wilderness of quantum mechanics so far, we’ve seen particles that are wild and free. But most particles spend their lives in more constrained circumstances: electrons caught in the embrace of nuclei, atoms shackled into molecules, or the regimented lines of crystals. Confinement is not necessarily bad—only strings tightly tied into a musical instrument can make music.

    In today’s hike into the quantum-mechanical wood, we are going to carry along some traps so we can see how particles behave when you confine them. (Being sensitive sorts, we’ll treat them kindly and release them when we’re done.) In the process, we’ll discover the origin of emission spectra from stars and encounter artificial atoms and quantum dots, which play leading roles in everything from quantum computing to consumer televisions.

    Why the caged bird sings

    As we’ve seen a number of times, all particles move like waves. But what happens when we trap a wave? How does a particle’s behavior change when we confine it?

    A great everyday example of a trapped wave is a guitar string. Before being attached to a guitar, a string can wiggle any way it feels like. Fast waves, slow waves—every kind of wave is possible. But when we tie the string to a guitar and pluck it, the resulting wave is trapped by the ends that are attached to the guitar. The wave can bounce between the ends, but it can’t escape.

    The trapped waves of a guitar string. Clockwise from upper left are the fundamental, 2nd harmonic, and 3rd harmonic of an open string. Only waves that fit neatly in the trap are allowed, and the increasing frequency is associated with higher energy (higher pitch). We can also shorten the trap by using one of the guitar’s frets, which changes the frequency of the fundamental (lower left) and all the harmonics.
    Credit: Miguel Morales.

    As shown in the diagram above, some sets of waves (harmonics) are allowed, but only waves of the right length are possible. When we trapped the wave, we went from any note being possible to a state where only those waves that fit within the trap—and the notes they correspond to—can exist. In other words, the pitches of the guitar string are caused by the trap. And when we put a finger on a fret to change the size of the trap, the size of the waves that fit changes, and the notes we hear change.
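The allowed notes follow directly from the trap: only waves with a whole number of half-wavelengths fit between the string's fixed ends, so wavelength_n = 2L/n and frequency_n = n·v/(2L). A quick numerical sketch (the string length and wave speed below are illustrative choices, tuned to give an open A string):

```python
# A string of length L fixed at both ends only fits waves with a whole
# number of half-wavelengths: wavelength_n = 2L/n, frequency_n = n*v/(2L).
L = 0.65    # scale length in meters (a typical guitar; our assumption)
v = 143.0   # wave speed in m/s, chosen here so the fundamental is 110 Hz

frequencies = [n * v / (2 * L) for n in (1, 2, 3)]
# fundamental, 2nd harmonic, 3rd harmonic: 110 Hz, 220 Hz, 330 Hz
```

Pressing a fret shortens L, and every allowed frequency shifts up together, which is exactly the fretting behavior described above.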

    We can see the same thing happen with electrons. In 1993, Don Eigler and colleagues made an electron trap by placing 48 iron atoms into a ring on top of a copper sheet. The ring of iron atoms creates a quantum corral—a circular electron trap. When imaged with a scanning tunneling microscope, the wave of a trapped electron can be clearly seen inside the ring of iron atoms.

    A circular corral of 48 iron atoms (sharp peaks) on a copper sheet. The wave of an electron trapped inside the corral can be clearly seen. Credit: Don Eigler, IBM Almaden Research Center.

    Since particles move like waves, they respond just like any other kind of wave when caught—they sing with specific notes. The electron in the quantum corral looks like the vibrations of a drum head. That is not an accident: a drum also creates a circular trap for waves analogous to the quantum corral. The observation that quantum particles pick up specific notes when trapped is a consequence of them moving like waves. So by catching particle waves, we can make music.

    Atomic music

    Atoms are naturally occurring electron traps. The protons in atomic nuclei have a positive charge that attracts the negatively charged electrons. And, just as in the quantum corral and guitar string, when the electron wave is trapped, it has particular harmonic notes.

    We can plot the shape of the electron waves; below we’ve shown the radial wiggles of a few of the notes in hydrogen.

    The radial shape of three trapped electron waves in a hydrogen trap. Credit: Miguel Morales.

    Even with our current technology, it is hard to image the electron waves within an atom, but we can see their shadow by looking at the light atoms emit. If we heat up atoms enough—either by putting them in a flame or by zapping them with electricity (as in neon lights)—and look at the colors of light they emit, we notice that atoms don’t emit a broad range of colors. Instead, they emit a set of very narrow colors, and the fingerprint of the colors is unique for every kind of atom.

    Credit: NASA

When we look at the electron waves, the high-frequency harmonics (more wiggles) have more energy. When an electron wave drops from one harmonic to a lower one, it emits a photon of light, with the photon’s energy being the difference between the two electron harmonics. The electron gives the energy it lost to the newly created photon.

    So the colors of the light tell us how far apart, energetically, the electron waves are. It is almost like listening to a kid jumping down stairs at a funhouse where each step has a different height. Sometimes they jump from one step to the next, but sometimes they will jump two or three steps at a go. Just as a big drop creates a big thump, a high-energy drop will create a high-energy photon (blue light), a medium drop green light, and a small drop red light (with ultraviolet and infrared for the really big and small jumps).

The discreteness of the spectra clearly reflects that there are steps—the colors that don’t exist represent energies where there are no steps the right distance apart. These colors don’t tell us what the electron harmonics are, only how far apart they are. But by sending lots of kids up and down the stairs and listening to all the jumps, we can puzzle out the height of each individual step.

    Observations of the hydrogen Balmer spectral lines are shown inset. For the first four spectral lines, the initial electron wave is shown above; they all transition to the lower energy wave shown below. The more wiggles or the higher the harmonic of the initial wave, the larger the energy step and the bluer the emitted light is. Credit: Morales, based on Jan Homann.

    A century ago, this discreteness of the colors emitted by atoms was one of the first hints of electron waves, and drove the invention of quantum mechanics.
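For hydrogen, the step heights are simple enough to compute. A quick sketch using the textbook formula E_n = −13.6 eV / n² (a standard result, not something derived in this series) reproduces the Balmer colors in the figure above:

```python
# Hydrogen's harmonic energies follow E_n = -13.6 eV / n^2. The Balmer
# lines come from drops down to the n = 2 wave, and each photon's
# wavelength follows from its energy via lambda (nm) ~= 1240 / E (eV).
RYDBERG_EV = 13.6      # hydrogen's ground-state binding energy, in eV
HC_EV_NM = 1240.0      # photon energy (eV) times wavelength (nm), approximately

def balmer_wavelength_nm(n):
    step_ev = RYDBERG_EV * (1 / 2**2 - 1 / n**2)   # size of the n -> 2 drop
    return HC_EV_NM / step_ev

h_alpha = balmer_wavelength_nm(3)   # the red line, about 656 nm
h_beta = balmer_wavelength_nm(4)    # the blue-green line, about 486 nm
```

The bigger the drop (larger starting n), the shorter the wavelength, matching the bluer lines in the spectrum.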

    Life is pain

    We’ve seen what some of the harmonics of an electron in a hydrogen trap look like. In principle, given the number of protons and electrons in an atom and a sharp pencil, you could calculate all the electron waves and the associated spectrum of emitted colors. But figuring out the electron harmonics and spectra for larger atoms is hard.

Every time you add an electron, it is not only attracted to the protons in the nucleus but repelled by all the other electrons already in the atom. This repulsion changes the shape of the trap, which changes all the harmonic waves of all the electrons in it. The electrons are like a flock of birds in a tree, and every time another one joins, they all have to rearrange themselves and resettle. This changes the electron waves and the resulting emission colors—a singly ionized iron atom with 25 electrons has a slightly different spectrum than the same atom with 26 electrons. And while neutral iron has only 26 electrons, every electron repels every other electron, adding 325 extra terms to the math, one for each of the 26 × 25 / 2 pairwise interactions. Calculating the spectra of complicated atoms is the purview of heroes armed with supercomputers.

    But breaking the spectral code allows us to determine which atoms exist in faraway stars. Using spectra taken during a solar eclipse, helium was discovered streaming off the Sun 27 years before helium was isolated on Earth. And we can identify stars with strange and interesting histories by finding the fingerprints of unusual elements in their spectra. A red giant star that may have swallowed a neutron star has been identified by the anomalously strong molybdenum lines in its spectrum.

    But the complications of spectra don’t stop when all the parts of the atom are accounted for. Multiple atoms make things even worse. When two nuclei get close enough to one another, an electron experiences it as a single trap with a double bottom.

    The three-dimensional structure of the electron waves in the double trap of molecular hydrogen. Credit: Creon Levit/NASA Ames.

The electron will naturally get caught in this double trap, forming a chemical bond between the atoms. Because the electron is attracted to both nuclei, it pulls the two nuclei towards one another. The shared electron binds the atoms together into a molecule.

    This has all of the effects one would expect. Electrons in a double trap have their own set of wave harmonics and associated emission lines that are quite different from the harmonics of a single atom trap. By looking for these signature lines, we can identify complicated molecules in the planet-forming regions around distant stars.


    The electron harmonics in multi-atom traps also determine the strengths of the chemical bonds, which are crucial for chemists predicting the properties of novel materials or uncharacterized chemical reactions. Unfortunately, for complicated molecules, predicting the shape of the trap and the associated chemical bonds brings even supercomputers to their knees. We know how to set up the problem, but there are too many interacting parts to obtain accurate solutions for all but simple molecules.

    One of the hopes for quantum computing is that it will finally give chemists the firepower needed to predict complicated reaction rates and material properties. Just as the aerodynamics of airplanes and automobiles has been revolutionized by numerical fluid simulations, quantum computing would greatly accelerate the development of new materials.

    Making our own traps back at the Visitor’s Center

    Until recently, we only had access to naturally occurring atomic and molecular traps. Finding a material with the desired property, such as a dye of a specific color, became a hunt to find a molecule that created just the right kind of natural trap.

    But what if we could make our own traps? What if we could make artificial atoms tailored to our needs? We now have enough control over making very small things that we can make artificial atoms called quantum dots.

    A quantum dot is a small semiconducting bead a few tens of atoms in diameter. In a semiconductor, there are only a few electrons free to move—which means fewer issues with interacting electrons—and the free electrons are confined within the bead just as in the quantum corral. Because the electrons move like waves, they naturally form harmonics when trapped in the bead, and the harmonics depend on the size of the bead, rather than the atoms that it’s made of.

This is great—just by varying the size of the bead we can make traps of different sizes. In the photo below are vials of quantum dots made of cadmium selenide.

Cadmium selenide beads were slowly grown in solution, with samples removed at the indicated times. The size of the beads determines the size of the electron trap—small beads are blue while large beads are red. Credit: U Waterloo (CA).

    The only thing that changes between the vials shown above is the size of the beads in solution. Violet light is emitted by the smallest beads, which have the largest energy between harmonics; red is emitted by the largest beads.

    When we make tiny beads, the wave nature of quantum mechanics naturally makes electron harmonics that determine the colors emitted. We’ve made artificial atoms. It’s like suddenly gaining the ability to use the frets on a guitar to change the trap size.
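The size-to-color trend follows from the simplest trap there is: a particle in a box, whose harmonic energies scale as 1/L². This toy one-dimensional model ignores the real bead's geometry and the semiconductor's effective electron mass, but it captures why smaller beads glow bluer:

```python
# Electron in a one-dimensional box of width L: E_n = n^2 h^2 / (8 m L^2).
H = 6.626e-34        # Planck's constant, J*s
M_E = 9.109e-31      # free-electron mass, kg (real dots use an effective mass)
EV = 1.602e-19       # joules per electron-volt

def first_step_ev(size_nm):
    # Energy gap between the n=2 and n=1 harmonics, in eV.
    L = size_nm * 1e-9
    e1 = H**2 / (8 * M_E * L**2)
    return (4 * e1 - e1) / EV

small_bead = first_step_ev(2.0)   # smaller trap: bigger step, bluer light
large_bead = first_step_ev(6.0)   # larger trap: smaller step, redder light
```

Shrinking the bead by a factor of three widens the energy steps ninefold, so a designer can dial in a color just by stopping the bead growth at the right moment.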

    Being able to make our own ‘atoms’ allows us to manufacture electron traps with desired properties, and of course this is going to show up first in consumer televisions. Quantum dots allow TV makers to precisely tune the colors emitted, creating more accurate color, over a wider color range, all while using less electricity. There is a really nice review in IEEE Spectrum magazine of the ways quantum dots can be used to enhance television screens.

Beyond consumer electronics, there is a huge variety of uses for artificial atoms. To a designer, the flexibility is staggering: changing the shape of the bead from a sphere to a pyramid changes the electron harmonics and the associated light emission. Or one can make beads with layers of different materials, like in peanut M&Ms. And we are not limited to beads. A designer can take a sheet of diamond and replace one carbon atom with a nitrogen atom to make a little divot or trap within the sheet. This new trap has properties completely different from either carbon or nitrogen, and it is a promising building block for quantum computers.

    Another part of the woods

    Quantum dots really are a purely quantum device, and a great example of how quantum mechanics is bleeding into our everyday experience. Designers have used their knowledge of electron waves to make artificial atoms with custom properties, all so that we can better enjoy our Netflix shows.

    In this series we’ve spent a lot of time thinking about particle waves—they’re one of the most surprising and useful aspects of quantum mechanics. But next week, we’ll go to a different part of the quantum woods and talk about measurement order, a subtle and fascinating aspect of quantum mechanics behind unbreakable quantum cryptography.

    FAQ

    First, an apology to anyone who has had real chemistry. Yes, I know I skipped a ton of stuff. I had intended to cover more: angular electron waves, electron packing, 3D orbitals, selection rules, photon angular momentum, the periodic table, molecular bonds, etc. But it’s complicated, and I couldn’t find a way to include any of the above and still find a thread through to the current applications. So apologies, and I’d be grateful if in the comments you could help fill in the areas I missed.

    Can I see this at home? Yes! If you can get a demonstration-quality diffraction grating (a film that acts like a prism), you can look at the spectra of different lights. If you carry around a diffraction grating, you will see that incandescent lights have a broad rainbow of colors, as does sunlight off white paint (don’t look at the Sun!). But neon lights have the very narrow colors of atomic emission lines. Red is usually neon gas, blue mercury. Yellow streetlights often have narrow lines of sodium, and you can see the blue mercury lines in the light of fluorescent lights. Many LED lights can have narrow or narrow-ish colors, too.

    I looked up the spectrum of quantum dots on the Web, and the spectra are not sharp like the atomic spectra but instead spread over a narrow range of colors. Why? Great question. It turns out, if you isolate a single quantum dot and look at its spectrum (particularly when cold), it has very narrow emission lines like an atom. But the beads of semiconductor in a vial are not all exactly the same size. Some are a few atoms larger, some may be egg shaped, etc. And in even a little droplet of solution, there are bajillions of individual quantum dots, each emitting slightly different color lines. So together they will emit a range of colors depending on the mixture of dots.

    Manufacturers can actually use this to their advantage. By adjusting how many dots of a particular size are in the solution, they can fine-tune the spectrum emitted. Particularly for house lighting, this can make a much more natural light than standard LEDs. This tuning is something we can’t do with atoms. Every neon atom is identical to every other neon atom, so they all emit the same color lines. But when we make our own traps we don’t have to make them all identical and can engineer the spectrum emitted.
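    To see how mixing dot sizes shapes a spectrum, here is a small Python sketch. The dot populations, peak colors, line widths, and weights below are invented for illustration; real quantum dot batches have their own values.

    ```python
    import math

    def dot_line(center_nm, width_nm, weight, wavelength_nm):
        """Gaussian emission line for one population of same-size dots."""
        return weight * math.exp(-0.5 * ((wavelength_nm - center_nm) / width_nm) ** 2)

    # Hypothetical populations: (peak color in nm, line width in nm, relative amount).
    # Smaller dots emit bluer light; the width reflects the size spread in a batch.
    populations = [(460, 12, 0.8), (530, 14, 1.0), (620, 16, 1.2)]

    wavelengths = [400 + 0.5 * i for i in range(601)]  # scan 400-700 nm
    spectrum = [sum(dot_line(c, w, a, lam) for c, w, a in populations)
                for lam in wavelengths]
    peak_nm = wavelengths[spectrum.index(max(spectrum))]
    print(f"brightest emission near {peak_nm:.0f} nm")
    ```

    Adjusting the weights in `populations` is exactly the knob a manufacturer turns to fine-tune the combined spectrum.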

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Ars Technica was founded in 1998 when Founder & Editor-in-Chief Ken Fisher announced his plans for starting a publication devoted to technology that would cater to what he called “alpha geeks”: technologists and IT professionals. Ken’s vision was to build a publication with a simple editorial mission: be “technically savvy, up-to-date, and more fun” than what was currently popular in the space. In the ensuing years, with formidable contributions by a unique editorial staff, Ars Technica became a trusted source for technology news, tech policy analysis, breakdowns of the latest scientific advancements, gadget reviews, software, hardware, and nearly everything else found in between layers of silicon.

    Ars Technica innovates by listening to its core readership. Readers have come to demand devotedness to accuracy and integrity, flanked by a willingness to leave each day’s meaningless, click-bait fodder by the wayside. The result is something unique: the unparalleled marriage of breadth and depth in technology journalism. By 2001, Ars Technica was regularly producing news reports, op-eds, and the like, but the company stood out from the competition by regularly providing long thought-pieces and in-depth explainers.

    And thanks to its readership, Ars Technica also accomplished a number of industry leading moves. In 2001, Ars launched a digital subscription service when such things were non-existent for digital media. Ars was also the first IT publication to begin covering the resurgence of Apple, and the first to draw analytical and cultural ties between the world of high technology and gaming. Ars was also first to begin selling its long form content in digitally distributable forms, such as PDFs and eventually eBooks (again, starting in 2001).

     
  • richardmitnick 12:42 pm on January 31, 2021 Permalink | Reply
    Tags: "A curious observer’s guide to Quantum Mechanics pt. 4: Looking at the stars", ars technica

    From ars technica: “A curious observer’s guide to Quantum Mechanics pt. 4: Looking at the stars” 

    From ars technica

    1/31/2021
    Miguel F. Morales

    1
    Credit: Aurich Lawson / Getty Images.

    Beautiful telescopic images of our Universe are often associated with the stately, classical physics of Newton. While quantum mechanics dominates the microscopic world of atoms and quarks, the motions of planets and galaxies follow the majestic clockwork of classical physics.

    But there is no natural limit to the size of quantum effects. If we look closely at the images produced by telescopes, we see the fingerprints of quantum mechanics. That’s because particles of light must travel across the vast reaches of space in a wave-like way to make the beautiful images we enjoy.

    2
    Asteroids in the Distance. Image Credit: NASA, ESA, Hubble; R. Evans & K. Stapelfeldt (NASA/JPL-Caltech).

    NASA/ESA Hubble Telescope.

    Rocks from space hit Earth every day. The larger the rock, though, the less often Earth is struck. Many kilograms of space dust pitter to Earth daily. Larger bits appear initially as a bright meteor. Baseball-sized rocks and ice-balls streak through our atmosphere daily, most evaporating quickly to nothing. Significant threats do exist for rocks near 100 meters in diameter, which strike the Earth roughly every 1000 years. An object this size could cause significant tsunamis were it to strike an ocean, potentially devastating even distant shores. A collision with a massive asteroid, over 1 km across, is rarer, typically occurring millions of years apart, but could have truly global consequences. Many asteroids remain undiscovered. In the featured image, one such asteroid — shown by the long blue streak — was found by chance in 1998 by the Hubble Space Telescope. A collision with a large asteroid would not affect Earth’s orbit so much as raise dust that would affect Earth’s climate. One likely result is a global extinction of many species of life, possibly dwarfing the ongoing extinction occurring now.

    This week we’ll concentrate on how photons travel across light years, and how their inherent quantum waviness enables modern telescopes, including interferometric telescopes the size of the Earth.

    Starlight

    How should we think about the light from a distant star? Last week we used the analogy of dropping a pebble into a lake, with the ring of ripples on the water standing in for the wave-like motion of photons. This analogy helped us understand the length of a particle ripple and how photons overlap and bunch together.

    We can continue that analogy. Every star is similar to the Sun, in that it makes a lot of photons. As opposed to someone carefully dropping single pebbles into a mirror-smooth lake, it’s more like they poured in a bucket of gravel. Each pebble makes a ring of ripples, and the ripples from each stone spread out as before. But now the ripples are constantly mixing and overlapping. As we watch the waves lap against Earth’s distant shore, we don’t see the ripples from each individual pebble; instead we see the combination of many individual ripples added together.

    3
    The chaotic waves from a gravel star crossing our pond. The ripples of many pebbles overlap, creating a complex set of waves. Credit: Miguel F. Morales.

    So let’s imagine we’re standing on the shore of a lake as the waves wash in, looking at our gravel ‘star’ with a telescope for water waves. The lens of the telescope focuses the waves from the star onto a spot: the place on the camera sensor where the light from that star lands.

    If a second bucket of gravel is dropped into the lake farther along the opposite shore, the ripples will overlap at our shore, but will be focused by the telescope into two distinct spots on the detector. Similarly, a telescope can sort the light from the stars into two distinct groups—photons from star A and photons from star B.

    But what if the stars are very close together? Most of the ‘stars’ we see at night are actually double stars—two suns so close together they appear as one bright pinprick of light. When they’re in distant galaxies, stars can be separated by light years yet look like a single spot in professional telescopes. We’d need a telescope that could somehow sort the photons produced by the different stars to resolve them. Similar things apply if we want to image features like sunspots or flares on the surface of a star.

    4
    The Changing Surface of Fading Betelgeuse. Credit: ESO, M. Montargès et al.
    Besides fading, is Betelgeuse changing its appearance? Yes. The famous red supergiant star in the familiar constellation of Orion is so large that telescopes on Earth can actually resolve its surface — although just barely.

    Orion Molecular Cloud Complex showing the distinctive three stars of Orion’s belt. Credit: Rogelio Bernal Andreo Wikimedia Commons.

    Betelgeuse in the infrared from the Herschel Space Observatory is a superluminous red giant star 650 light-years away. Stars as massive as Betelgeuse end their lives as supernovae. Credit: ESA/Herschel/PACS/L. Decin et al.

    The two featured images taken with the European Southern Observatory’s Very Large Telescope show how the star’s surface appeared during the beginning and end of last year.

    ESO VLT at Cerro Paranal in the Atacama Desert, elevation 2,635 m (8,645 ft), seen from above. Its four Unit Telescopes are ANTU (UT1; The Sun), KUEYEN (UT2; The Moon), MELIPAL (UT3; The Southern Cross), and YEPUN (UT4; Venus, as evening star). Credit: J.L. Dauvergne & G. Hüdepohl, atacama photo.

    The earlier image shows Betelgeuse with a much more uniform brightness than the later one, in which the lower half of Betelgeuse became significantly dimmer than the top. During the first five months of 2019, amateur observations showed Betelgeuse actually getting slightly brighter, while over the last five months the star dimmed dramatically. Such variability is likely just normal behavior for this famously variable supergiant, but the recent dimming has rekindled discussion of how long it may be before Betelgeuse goes supernova. Since Betelgeuse is about 700 light years away, its eventual supernova — probably thousands of years in the future — will likely be an amazing night-sky spectacle, but it will not endanger life on Earth.

    To return to the lake, there is nothing special about the ripples made by different pebbles—the ripples from one pebble are indistinguishable from the ripples made by another. Our wave telescope does not care if the ripples came from different pebbles in one bucket or different buckets altogether—a ripple is a ripple. The question is how far apart must two pebbles be dropped for our telescope to distinguish that the ripples came from different locations?

    Sometimes when you’re stumped, it’s best to take a slow walk along the beach. So we’ll have two friends sit on the far shore dropping pebbles, while we walk along our shore, looking at the waves and thinking deep thoughts. As we walk along the beach we see that the waves from our friends overlap everywhere, and that the waves come in randomly. There appears to be no pattern.

    4
    The waves from two gravel “stars.” The waves from each star are circular (see next panels), but combine in an apparent jumble. However, we notice that while the wave train at each location is chaotic, at locations close to each other on the beach, the wave trains are very similar. At locations far down the beach, we see a completely different wave train. Credit: Miguel Morales.

    5
    The waves from just one star. Credit: Miguel Morales.

    6
    The waves from the other star. The waves can be combined to produce the wave pattern seen in the first panel. Credit: Miguel Morales.

    But on closer inspection, we notice that spots on the beach very near each other see nearly identical waves. The waves are random in time, but locations a few paces apart see the same random train of waves. If we look at waves hitting far down the beach, however, that wave train is completely different from the one hitting near us. Close-together locations on the beach see nearly identical wave trains; widely separated locations see different ones.

    This makes sense if we think of the waves on the beach as being the combination of little ripples from hundreds of pebbles. At nearby locations on the beach, the ripples from the pebbles dropped by both friends add up in the same way. But farther along the beach, the ripples from one friend will have to travel farther, so the ripples add up in a different way, giving us a new wave train.

    While we can no longer see the ripples of individual pebbles once they have combined into waves, we can pace off how far we need to walk to see a new wave train. And that tells us something about how the ripples are adding together.

    We can confirm this by asking our two pebble-dropping friends to move closer together. When our friends are close together, we notice that we have to walk a long way along our beach to see the ripples add up in a different way. But when our friends are far apart, just a few steps on our beach will make the wave trains look different. By pacing off how far we need to walk before the waves look different, we can determine how far apart our pebble-dropping friends are.
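    We can put rough numbers on this pacing-off trick. As a sketch (using the standard small-angle estimate; the lake dimensions are made up for illustration), the distance we must walk before the wave train changes grows as the pebble-droppers move closer together:

    ```python
    def decorrelation_scale(wavelength_m, distance_m, separation_m):
        """Rough distance along our beach before the combined wave train
        looks different: ~ wavelength * distance to sources / source separation."""
        return wavelength_m * distance_m / separation_m

    # Water waves with a 1 m wavelength, friends on a shore 500 m away.
    print(decorrelation_scale(1.0, 500.0, 10.0))  # friends far apart: a short walk
    print(decorrelation_scale(1.0, 500.0, 1.0))   # friends close together: a long walk
    ```

    Pacing off that scale tells us the separation of the droppers, even though we never see an individual pebble’s ripples.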

    7
    Large and small telescopes looking at the same two stars. Because the waves appear different at the far edges of the large telescope, it can sort the waves into two sources. For the small telescope, the waves look the same across the lens, so it sees the two stars as a single unresolved source. Credit: Miguel Morales.

    The same effect happens with photon waves, which can help us understand the resolution of a telescope. Looking at a distant binary star, if the light waves entering opposite edges of the telescope look different, then the telescope can sort the photons into two distinct groups—the photons from star A and the photons from star B. But if the light waves entering opposite edges of the telescope look the same, then the telescope can no longer sort the photons into two groups and the binary star will look like one spot to our telescope.

    If you want to resolve nearby objects, the obvious thing to do is to make the diameter of the telescope bigger. The farther apart the edges of the telescope, the closer the stars can be and still be distinguished. Bigger telescopes have better resolution than small telescopes and can separate the light from more closely spaced sources. This is one of the driving ideas behind building truly enormous 30 or even 100 meter diameter telescopes—the bigger the telescope, the better the resolution.
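    The same scaling can be written down for light. Here is a minimal sketch using the textbook diffraction limit (theta ≈ 1.22 λ/D); the telescope diameters are just example values:

    ```python
    import math

    def resolution_arcsec(wavelength_m, diameter_m):
        """Diffraction-limited angular resolution: theta ~ 1.22 * lambda / D."""
        theta_rad = 1.22 * wavelength_m / diameter_m
        return math.degrees(theta_rad) * 3600  # radians -> arcseconds

    green = 550e-9  # green light, in meters
    for name, d in [("0.2 m amateur scope", 0.2),
                    ("8 m class telescope", 8.0),
                    ("39 m ELT", 39.0)]:
        print(f"{name}: {resolution_arcsec(green, d):.4f} arcsec")
    ```

    Doubling the diameter halves the smallest separation the telescope can distinguish, which is why the giant telescopes below are worth building.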

    ESO ELT, a 39 meter telescope to be built atop Cerro Armazones in the Atacama Desert of northern Chile, at an altitude of 3,060 metres (10,040 ft).

    Giant Magellan Telescope (GMT), 21 meters, to be at the NOIRLab NOAO Carnegie Institution for Science’s Las Campanas Observatory, some 115 km (71 mi) north-northeast of La Serena, Chile, over 2,500 m (8,200 ft) high.

    TMT-Thirty Meter Telescope, proposed and now approved for Mauna Kea, Hawaii, USA, altitude 4,050 m (13,290 ft), the only giant 30 meter class telescope for the Northern Hemisphere.

    (This is always true in space, and true on the ground with adaptive optics to correct for atmospheric distortions.)

    For telescopes, bigger really is better.

    See the full article here.


     
  • richardmitnick 4:01 pm on January 25, 2021 Permalink | Reply
    Tags: "A curious observer’s guide to quantum mechanics, pt. 3: Rose colored glasses", ars technica, "How big is a particle?", "How long is a particle?", Light sources with distinct colors tend to have the longest ripples., The extroverts that like to join up (bosons) and the introverts that avoid one another (fermions)., The length of a particle wave is given by the range of colors (and thus energies) it has.

    From ars technica: “A curious observer’s guide to quantum mechanics, pt. 3: Rose colored glasses” 

    From ars technica

    1/24/2021
    Miguel F. Morales

    “How big is a particle?” Well, that’s a subtle (and, unsurprisingly, complex) question.

    1
    So far, we’ve seen particles move as waves and learned that a single particle can take multiple, widely separated paths. There are a number of questions that naturally arise from this behavior—one of them being, “How big is a particle?” The answer is remarkably subtle, and over the next two weeks (and articles) we’ll explore different aspects of this question.

    Today, we’ll start with a seemingly simple question: “How long is a particle?”

    Go long

    To answer that, we need to think about a new experiment. Earlier, we sent a photon on two very different paths. While the paths were widely separated in that experiment, their lengths were identical: each went around two sides of a rectangle. We can improve this setup by adding a couple of mirrors, allowing us to gradually change the length of one of the paths.

    2
    An improved two-path experiment where we can adjust the length of one of the paths. Credit: Miguel F. Morales.

    When the paths are the same length, we see stripes just as we did in the first article. But as we make one of the paths longer or shorter, the stripes slowly fade. This is the first time we’ve seen stripes slowly disappear; in our previous examples, the stripes were either there or not.

    We can tentatively associate this fading of the stripes as we change the path length with the length of the photon traveling down the path. The stripes only appear if a photon’s waves overlap when recombined.

    But if particles travel as waves, what do we even mean by a length? A useful mental image may be dropping a pebble into a smooth pool of water. The resulting ripples spread out in all directions as a set of rings. If you draw a line from where the rock fell through the rings, you’ll find there are five to 10 of them. In other words, there is a thickness to the ring of waves.

    Another way to look at it is as if we were a cork on the water; we would sense no waves, a period of waves, then smooth water again after the ripple had passed. We’d say the ‘length’ of the ripple is the distance/time over which we experienced waves.

    Similarly we can think of a traveling photon as being a set of ripples, a lump of waves entering our experiment. The waves naturally split and take both paths, but they can only recombine if the two path lengths are close enough for the ripples to interact when they are brought back together. If the paths are too different, one set of ripples will have already gone past before the other arrives.

    This picture nicely explains why the stripes slowly disappear: they are strong when there’s perfect overlap, but fade as the overlap decreases. By measuring how far we can lengthen one path before the stripes disappear, we have measured the length of the particle’s wave ripples.
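    We can mimic this experiment numerically. The sketch below (pure Python, with an arbitrary carrier wavelength of 1 unit and a ripple about 5 wavelengths long) adds a ripple to a delayed copy of itself and measures the stripe contrast:

    ```python
    import math

    def ripple(t, delay, wavelength=1.0, length=5.0):
        """A photon 'ripple': a carrier wave under a Gaussian envelope."""
        u = t - delay
        return math.exp(-0.5 * (u / length) ** 2) * math.cos(2 * math.pi * u / wavelength)

    ts = [-25 + 0.02 * i for i in range(3501)]  # time samples covering both ripples

    def fringe_visibility(path_diff, wavelength=1.0):
        """Stripe contrast near a given path-length difference: scan the delay
        over one wavelength and compare the brightest and darkest outputs."""
        intensities = []
        for k in range(21):
            d = path_diff + k * wavelength / 20
            intensities.append(sum((ripple(t, 0.0) + ripple(t, d)) ** 2 for t in ts))
        lo, hi = min(intensities), max(intensities)
        return (hi - lo) / (hi + lo)

    for d in [0.0, 5.0, 15.0]:
        print(f"path difference {d:4.1f}: visibility {fringe_visibility(d):.2f}")
    ```

    The contrast is near 1 when the path lengths match and melts away once the delay exceeds the ripple length, which is exactly how we read off the length of the ripple.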

    Digging through the light bulb drawer

    We can go through our usual experiments and see the same features we saw before: turning the photon rate way down (which produces a paintball pointillism of stripes), changing the color (bluer colors mean closer spacing), etc. But now we can also measure how the stripes behave as we adjust the path length.

    While we often use lasers to generate particles of light (they are great photon pea shooters), any kind of light will do: an incandescent light bulb, an LED room light, a neon lamp, sodium streetlights, starlight, light passed through colored filters. Whatever kind of light we send through creates stripes when the path lengths match. But the stripes fade away at distances that range from microns for white light to hundreds of kilometers for the highest quality lasers.

    Light sources with distinct colors tend to have the longest ripples. We can investigate the color properties of our light sources by sending their light through a prism. Some of the light sources have a very narrow range of colors (the laser light, the neon lamp, the sodium streetlight); some have a wide rainbow of colors (the incandescent bulb, LED room light, starlight); while others such as sunlight sent through a colored filter are intermediate in the range of composite colors.

    3
    We can measure the length of a ripple by seeing how far we can lengthen one arm of the experiment before the stripes disappear. A long ripple has a narrow range of colors. Credit: Miguel F. Morales.

    4
    A medium length ripple has a wider range of component colors. Credit: Miguel F. Morales.

    5
    A very short pulse of light necessarily includes a wide range of colors, becoming white. Credit: Miguel F. Morales.

    What we notice is that there is a correlation: the narrower the color range of the light source, the longer the path difference can be before the stripes disappear. The color itself does not matter. If I choose a red filter and a blue filter that allow the same width of colors through, they will have their stripes disappear at the same path difference. It is the range of color that matters, not the average color.

    6
    A medium length ripple of blue light and its component colors. Credit: Miguel F. Morales.

    7
    A medium length ripple of orange light. Note that while the orange wave is longer than the blue wave (shown by colored line), the length of the ripple is the same (shown by grey region). The length of the ripple depends on the range of color, not the central color. Credit: Miguel F. Morales.

    Which brings us to a rather startling result: the length of a particle wave is given by the range of colors (and thus energies) it has. The length is not a set value for a particular kind of particle. Just by digging through our drawer of light sources, we made photons with lengths ranging from microns (white light) to a few cm (a laser pointer).
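    The rule of thumb behind this result is that the ripple length is roughly the central wavelength squared divided by the range of wavelengths. A quick sketch (the source numbers are typical illustrative values, not measurements):

    ```python
    def coherence_length_m(center_nm, range_nm):
        """Length of the ripple ~ (central wavelength)^2 / (color range)."""
        lam = center_nm * 1e-9
        dlam = range_nm * 1e-9
        return lam ** 2 / dlam

    sources = {
        "white light (400-700 nm)": (550, 300),
        "LED (~30 nm wide)": (620, 30),
        "HeNe laser (~0.002 nm wide)": (633, 0.002),
    }
    for name, (center, width) in sources.items():
        print(f"{name}: ripple length ~ {coherence_length_m(center, width):.2e} m")
    ```

    White light comes out around a micron, while the narrow laser line stretches to tens of centimeters: the same range we found by digging through the light bulb drawer.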

    Friendly photons

    We saw in the second article that two independent particles can interact and mix, so what does the mixing of two sets of ripples look like?

    8
    Two well separated ripples of light. The horizontal lines are associated with the probability of seeing a photon. Credit: Miguel F. Morales.

    9
    Two ripples partially overlapping. Credit: Miguel F. Morales.

    10
    Two ripples mostly overlapping. When ripples overlap the probability of seeing photons close together increases dramatically—photons like to bunch together and hold hands. Credit: Miguel F. Morales.

    In the sequence above, you can see how photon ripples add as they overlap. As the height of the waves increases, the probability of seeing photons increases dramatically.

    Let’s start with some photons emitted randomly in time—sunlight or starlight is perfect for this. Huge numbers of atoms are emitting photons of light across the surface of a star, each independently of the others, so the emission of the photons is perfectly random in time. But if we take those photons and squeeze them onto an optical fiber, some of the ripples from separate photons will overlap.

    11
    A few photon ripples, randomly inserted into a fiber. Where the ripples overlap the photons bunch. Credit: Miguel F. Morales.

    Because we have an enhanced probability of seeing photons when their ripples overlap, if we watch for the photons coming out at the end of the fiber, their appearance is no longer random—photons like to bunch. We see more photons exiting the fiber very close together in time, and this enhancement happens at the size of the ripple we measured at the start of this article. This bunching is a beautiful quantum mechanical effect—photons like to hold hands when they overlap.

    This also leads us to a subtle question. Starlight or sunlight is a mixture of all colors, so the ripples are very short and we see this in the bunching which appears only if we look at very short intervals. But if we associate one ripple with one photon, what is the color of the photon? Was it a red photon or a green photon or a blue photon? Intriguingly, the most natural answer is the photon was white—each photon ripple is a mixture of all colors. If we force each photon to have a defined color, then the ripples are very broad, and we would see that in the bunching length.

    So a photon in flight has a mixture of colors. Just as asking which path the photon takes makes no sense, asking what color a white photon has while in motion turns out to make no sense.

    Particle Introverts and Extroverts

    All of our previous experiments have shown that all particles behave the same way, whether we use photons or neutrons or Bucky Balls. So, being careful observers, we’ll want to repeat our last experiments with neutrons. We measure the length of the neutron ripples with our variable path length experiment, and the fringes slowly fade in the same way. But if we take randomly emitted neutrons and allow the ripples to overlap, we find that neutrons avoid each other. Instead of bunching up like photons, neutrons push each other away, or anti-bunch.

    This is still a very quantum mechanical effect; classically we’d expect randomly emitted neutrons to arrive, well, randomly. But instead of bunching up and holding hands like photons, neutrons avoid each other.

    We can repeat this experiment with all the particles we know, and they divide into two distinct camps: the extroverts that like to join up (bosons), and the introverts that avoid one another (fermions). There is no kind of particle that will arrive randomly—they are all either introverts or extroverts. Quarks, electrons, protons, and neutrons all belong to the introvert fermion camp; photons, gluons, and pions are all extrovert bosons.

    Fermions have one additional trick up their sleeve: two fermions can be packed together so that they behave like a boson. All quarks are introvert fermions, yet pions are made up of 2 quarks (a quark and an antiquark) and act like extrovert bosons. Protons and neutrons, which are made up of 3 quarks, act like fermions. So it is possible to make composite particles out of fermions that are bosons, as long as an even number of fermions is used. Given a wingman, fermions are much friendlier. (Interestingly, you cannot pack bosons to act like fermions.)

    This leads us to one of my favorite quantum demonstrations of all time. Jeltes and colleagues started out by cooling some Helium atoms to less than a millionth of a degree above absolute zero. This cooling decreases the range of energy, or colors, which increases the ripple length of the Helium atoms to about half a millimeter—the size of typical sand grains.

    They then dropped the helium onto a detector and looked for the bunching or anti-bunching in the arrival time. When they use Helium-4, which has 2 protons, 2 neutrons, and 2 electrons (total of 6 fermions), they clearly see the bunching of an extrovert boson. But when they use Helium-3 instead (2 protons, 1 neutron, 2 electrons for a total of 5 fermions) they see the anti-bunching of an introverted fermion.

    12
    Samples of 4He atoms (bosons, upper line) and 3He atoms (fermions, lower line) are cooled to half a millionth of a degree above absolute zero, dropped onto a detector, and the separations between atoms are recorded. 1.00 means the atoms arrived randomly. At close distances the particle ripples overlap and the extrovert 4He* atoms show up more often than random (bunch), while the introvert 3He atoms show up less often than random (avoid each other). Because 3He weighs slightly less than 4He, its ripple length is a little longer (0.75 mm vs. 0.56 mm). Credit: Jeltes et al.
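    The bookkeeping behind the Helium result is simple parity counting, which we can sketch in a few lines:

    ```python
    def quantum_statistics(protons, neutrons, electrons):
        """Composite particles act as bosons if they contain an even number
        of constituent fermions, and as fermions if the count is odd."""
        fermions = protons + neutrons + electrons
        return "boson" if fermions % 2 == 0 else "fermion"

    print("Helium-4:", quantum_statistics(2, 2, 2))  # 6 fermions -> bunches
    print("Helium-3:", quantum_statistics(2, 1, 2))  # 5 fermions -> anti-bunches
    ```

    One extra neutron flips the parity, and with it the atom’s entire social behavior.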

    Uncertainty

    In our light experiment, a short packet of ripples was associated with a wide range of colors (energy), while a narrow range of colors indicated a long packet of ripples. We saw this again in the Jeltes experiment: by cooling the atoms to less than a millionth of a Kelvin, they were able to narrow their energy range and thus increase the length of the Helium ripple to half a millimeter.

    An implicit consequence of this behavior is that it is impossible to have both a short particle wave and a small color range. If you limit the color range, the particle’s length increases; if you shorten the ripple length, the color range necessarily increases.

    We see this experimentally in pulsed lasers. A nanosecond laser pulse will have a color—it will look red or blue. If we send this pulse through a prism, we will see that there is a range of color, but the range only includes different shades of the same color. But as the laser pulse gets shorter, the color content necessarily becomes broader. A femto-second laser is white.

    This is not an accident of technology. If you send a femto-second laser pulse through a colored filter, the pulse gets longer because there are no longer enough colors to make a short pulse.

    This interplay between the length of a traveling particle and its range of colors is a very deep feature of quantum mechanics—it is commonly known as the Heisenberg uncertainty principle. The location and energy (momentum) cannot both be well defined. A sharp position necessitates a wide range of energy, and a sharp energy (narrow color range) necessitates a long particle ripple.
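    We can sketch the laser-pulse version of this trade-off numerically. This is only an order-of-magnitude estimate using delta-nu × delta-t ~ 1, with a red 633 nm carrier assumed for illustration:

    ```python
    c = 3.0e8  # speed of light, m/s

    def color_range_nm(pulse_s, center_nm=633):
        """Minimum wavelength range needed to build a pulse of the given
        duration, from the rough relation delta_nu * delta_t ~ 1."""
        delta_nu = 1.0 / pulse_s                  # frequency range, Hz
        lam = center_nm * 1e-9                    # central wavelength, m
        return (lam ** 2 * delta_nu / c) * 1e9    # wavelength range, nm

    print(f"1 ns pulse:  ~{color_range_nm(1e-9):.4f} nm of color (still 'red')")
    print(f"10 fs pulse: ~{color_range_nm(10e-15):.0f} nm of color (white)")
    ```

    A nanosecond pulse needs only a sliver of color, while a 10 femtosecond pulse needs more than a hundred nanometers of wavelength range: essentially the whole visible spectrum.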

    Back at the Visitor’s Center

    So how does this relationship between the range of color and the length of a particle ripple affect our everyday world? Sending light pulses down an optical fiber naturally brings to mind internet connections and fiber-optic networks.

    13
    Two digital signals being sent down a fiber, one in orange and the other in blue. At the far end of the fiber the color range associated with the top user (red-yellow) and the lower user (green-indigo) can be separated and each user will receive their data. Credit: Miguel F. Morales.

    If we envision a digital data stream where each light pulse represents a 1 and a missing pulse indicates a 0, the speed at which I can send data is directly related to the length of the pulses. The shorter the pulses, the more closely in time they can be packed.

    But there is a limit to how closely I can pack photon ripples—if they get too close, they start holding hands. This photon friendliness starts erasing the data I was trying to convey. As I keep increasing the data rate, I need to make the pulses shorter and shorter to keep the ripples from overlapping and erasing the message. But to make a shorter pulse I must use a wider range of colors.

If the data rate becomes too fast, the pulses start overlapping, garbling the message. Credit: Miguel F. Morales.

The clarity of the fast data can be restored by using narrower pulses, but this in turn requires a wider color range—more bandwidth. Credit: Miguel F. Morales.

    The term bandwidth means the range of color, and this word has crept into everyday language in a technically correct way. The higher the data rate, the more color bandwidth you need.

    This gets more interesting if several users are sharing a single fiber. Parallel data streams can come from many users on a fiber-optic backbone or all the TV channels provided by your local cable TV provider. Conceptually, each data stream has its own color range, so one channel is on orange, the next one yellow, another yellow-green, etc. At the end of the fiber, we can use a prism to split the channels and give each user their data stream.

    Clearly, the internet provider can make more money by splitting the color allocation ever finer, but there is a limit. Each user needs not just a central color, but a range of colors so they can make pulses fast enough. The width of the range of color—the color bandwidth—determines how short they can make pulses and thus how fast they can send or receive data.

While the internet provider can put many different colors onto a fiber, the total bandwidth is conserved. The internet provider can have 1,000 users, each with a slow, narrow-bandwidth connection, or 10 users, each with a very fast, high-bandwidth connection. But there is only so much color range to go around.
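The conserved-bandwidth bookkeeping can be sketched in a few lines. The 4 THz usable band and the 1 bit/s per Hz spectral efficiency below are my illustrative assumptions, not figures from the article:

```python
# Illustrative sketch of sharing a fiber's fixed color range among users.
TOTAL_BANDWIDTH_HZ = 4e12          # assumed usable color range on the fiber
BITS_PER_S_PER_HZ = 1.0            # assumed spectral efficiency

def per_user_rate_bps(n_users):
    """Equal color slices: each user's data rate scales with their slice width."""
    slice_hz = TOTAL_BANDWIDTH_HZ / n_users
    return slice_hz * BITS_PER_S_PER_HZ

print(per_user_rate_bps(1000))     # 1,000 slow users: 4e9 bits/s (4 Gbit/s) each
print(per_user_rate_bps(10))       # 10 fast users: 4e11 bits/s (400 Gbit/s) each
# Either way, users x rate = 4e12 bits/s: the total color range is conserved.
```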

This naturally applies to radio waves too (radio is just low-frequency light). How we manage and sell the limited range of radio colors is called spectrum management. Here is a link to one of my favorite charts: the radio spectrum allocation for the US (note the logarithmic frequency scale). There are many users, and you can read off the maximum data rate of each user by the width of their allocation. High-data-rate users like high-definition TV and cell phones need wide blocks of radio colors, while low-data-rate users like FM radio and GPS need only narrow allocations of color.

    Next week’s expedition

This week, we explored the ‘length’ of a particle. This leads to the concept of bandwidth, and the one-to-one correspondence between the length of a ripple and its range of colors. A short ripple necessitates a wide color range.

    In next week’s hike, let’s go big. We’ll ask how wide a particle is, and in the process see quantum mechanical effects that span light-years! So keep your binoculars handy and meet me back here next week.

    FAQ

    But bandwidth can be explained classically without resorting to quantum mechanics! Yes, of course, any wave-like description incorporates the idea of bandwidth. But I find it helpful to keep an eye on the fundamental waviness of particles in motion. I could decide to send my message using a stream of neutrons instead of light, and due to the waviness of particles the same bandwidth arguments would apply. These ideas of particle waviness and bunching will become increasingly important in the next two articles.

Don’t all lasers have a narrow color? Interestingly, no. To work, all lasers require amplification of light (stimulated emission), and early lasers and the inexpensive lasers you may have around the house have a very narrow color range over which they can work. However, there are amplification media that work over a wide range of colors. In the optical/IR, titanium-doped sapphire is a favorite, and erbium-doped fiber amplifiers work over a wide range of infrared colors and are crucial for long-distance internet fiber communication. Because of the ubiquity of erbium-doped fiber amplifiers, there is a good chance that a wide-color laser was involved in getting this webpage to you.

This is basically just magic, isn’t it? I found this a very funny question. Then I started wondering how I could test whether science is magic. At least in popular culture magic is: arcane knowledge (✓) learned through many years of arduous study (✓) by poring over old books filled with cryptic runes (✓) apprenticed to temperamental sages (✓) while living on scraps in a tower or dungeon (✓). And one must always be careful when performing major feats of magic/science, as the results are never quite what you hoped (✓). So I guess the answer is, experimentally at least, that yes, it’s magic.

See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Ars Technica was founded in 1998 when Founder & Editor-in-Chief Ken Fisher announced his plans for starting a publication devoted to technology that would cater to what he called “alpha geeks”: technologists and IT professionals. Ken’s vision was to build a publication with a simple editorial mission: be “technically savvy, up-to-date, and more fun” than what was currently popular in the space. In the ensuing years, with formidable contributions by a unique editorial staff, Ars Technica became a trusted source for technology news, tech policy analysis, breakdowns of the latest scientific advancements, gadget reviews, software, hardware, and nearly everything else found in between layers of silicon.

    Ars Technica innovates by listening to its core readership. Readers have come to demand devotedness to accuracy and integrity, flanked by a willingness to leave each day’s meaningless, click-bait fodder by the wayside. The result is something unique: the unparalleled marriage of breadth and depth in technology journalism. By 2001, Ars Technica was regularly producing news reports, op-eds, and the like, but the company stood out from the competition by regularly providing long thought-pieces and in-depth explainers.

    And thanks to its readership, Ars Technica also accomplished a number of industry leading moves. In 2001, Ars launched a digital subscription service when such things were non-existent for digital media. Ars was also the first IT publication to begin covering the resurgence of Apple, and the first to draw analytical and cultural ties between the world of high technology and gaming. Ars was also first to begin selling its long form content in digitally distributable forms, such as PDFs and eventually eBooks (again, starting in 2001).

     
  • richardmitnick 1:50 pm on January 20, 2021 Permalink | Reply
    Tags: "A curious observer’s guide to quantum mechanics pt. 2- The particle melting pot", , ars technica, Atomic clocks oscillate a few billion times a second., Do particle waves only interact with themselves or do they mix together?, If particles move like waves what happens when I overlap the paths of two particles?, If two lasers were fun what would happen if we used a lot of fancy lasers?, , Nature does not care if one particle is interacting with itself or if two particles are interacting with each other—a wave is a wave and particle waves act just like any other wave., Optical clocks- one of the new wonders of the world., Optical Frequency Comb laser, , Our second guided walk into the quantum mechanical woods!, This week I’d like to get off the paved trail and go a bit deeper into the woods in order to talk about how particles meld and combine while in motion., we can a prism backwards and with careful alignment use the prism to combine the light from two lasers into a single beam.   

    From ars technica: “A curious observer’s guide to quantum mechanics pt. 2- The particle melting pot” 

    From ars technica

    1/17/2021
    Miguel F. Morales

    In which lasers do things that make absolutely no sense but give us great clocks.

Credit: Aurich Lawson / Getty Images.

    Welcome back for our second guided walk into the quantum mechanical woods! Last week, we saw how particles move like waves and hit like particles and how a single particle takes multiple paths. While surprising, this is a well-explored area of quantum mechanics—it is on the paved nature path around the visitor’s center.

    This week I’d like to get off the paved trail and go a bit deeper into the woods in order to talk about how particles meld and combine while in motion. This is a topic that is usually reserved for physics majors; it’s rarely discussed in popular articles. But the payoff is understanding how precision lidar works and getting to see one of the great inventions making it out of the lab, the optical comb. So let’s go get our (quantum) hiking boots a little dirty—it’ll be worth it.

    Two particles

    Let’s start with a question: if particles move like waves, what happens when I overlap the paths of two particles? Or said another way, do particle waves only interact with themselves, or do they mix together?

    On the left is the interferometer from last week, where a single particle is split by the first mirror and takes two very different paths. On the right is our new setup where we start with particles from two different lasers and combine them.
    Credit: Miguel Morales.

    We can test this in the lab by modifying the setup we used last week. Instead of splitting the light from one laser into two paths, we can use two separate lasers to create the light coming into the final half-silvered mirror.

    We need to be careful about the lasers we use, and the quality of your laser pointer is no longer up to the task. If you carefully measure the light from a normal laser, the color of the light and the phase of the wave (when the wave peaks occur) wander around. This color wander is not discernible to our eyes—the laser still looks red—but it turns out that the exact shade of red varies. This is a problem money and modern technology can fix—if we shell out enough cash we can buy precision mode-locked lasers. Thanks to these, we can have two lasers both emitting photons of the same color with time-aligned wave crests.

    When we combine the light from two high-quality lasers, we see exactly the same stripey pattern that we saw before. The waves of particles produced by two different lasers are interacting!

    So what happens if we again go to the single photon limit? We can turn the intensity of the two lasers down so low that we see the photons appear one at a time on the screen, like little paintballs. If the rate is sufficiently low, only one photon will exist between the lasers and the screen at a time. When we perform this experiment we will see the photons arrive at the screen one at a time; but when we look at the accumulated pointillism painting, we will see the same stripes we saw last week. Once again, we’re seeing single particle interference.

    It turns out that all the experiments we performed before give exactly the same answer. Nature does not care if one particle is interacting with itself or if two particles are interacting with each other—a wave is a wave, and particle waves act just like any other wave.

    But now that we have two precision lasers, we have a number of new experiments we can try.

    Two colors

    First, let’s try interfering photons of different colors. Let’s take the color of one of the lasers and make it slightly more blue (shorter wavelength). When we look at the screen we again see stripes, but now the stripes walk slowly sideways. Both the appearance of stripes and their motion are interesting.

    First, the fact that we see stripes indicates that particles of different energy still interact.

    The second observation is that the striped pattern is now time dependent; the stripes walk to the side. As we make the difference in color between the lasers larger, the speed of stripes increases. The musicians in the audience will already recognize the beating pattern we are seeing, but, before we get to the explanation, let’s improve our experimental setup.

If we are content to use narrow laser beams, we can use a prism to combine the light streams. A prism is usually used to split a single light beam and send each color in a different direction, but with careful alignment we can run it backward, using the prism to combine the light from two lasers into a single beam.

The light from two lasers of different colors, combined with a prism. After the prism, the light ‘beats’ in intensity.
Credit: Miguel Morales.

    If we look at the intensity of the combined laser beam, we will see the intensity of the light ‘beat.’ While the light from each laser was steady, when their beams with slightly different colors are combined, the resulting beam oscillates from bright to dim. Musicians will recognize this from tuning their instruments. When the sound from a tuning fork is combined with the sound of a slightly out-of-tune string, one can hear the ‘beats’ as the sound oscillates between loud and soft. The speed of the beats is the difference in the frequencies, and the string is tuned by adjusting the beat speed to zero (zero difference in frequency). Here we are seeing the same thing with light—the beat frequency is the color difference between the lasers.
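Those beats fall straight out of adding two waves. Here is a small numerical sketch of my own, with frequencies scaled far down from optical so the numbers stay readable:

```python
import numpy as np

# Illustrative sketch: two steady waves 3 Hz apart beat 3 times per second.
f1, f2 = 1000.0, 1003.0               # wave frequencies in Hz (scaled-down stand-ins)
t = np.linspace(0.0, 2.0, 200_000)    # 2 seconds, finely sampled
combined = np.sin(2*np.pi*f1*t) + np.sin(2*np.pi*f2*t)

# The intensity oscillates bright/dim at the difference frequency |f2 - f1|.
intensity = combined**2
window = 2000                          # smooth over ~20 fast cycles
envelope = np.convolve(intensity, np.ones(window)/window, mode='same')

# Find the dominant slow oscillation left in the smoothed intensity.
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(len(envelope), t[1] - t[0])
print(freqs[spectrum.argmax()])        # ~3 Hz: the beat rate is the color difference
```

Detune the two waves further (increase f2 - f1) and the dominant frequency rises with it, just as described above.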

While this makes sense when thinking about instrument strings, it is rather surprising when thinking of photons. We started with two steady streams of light, but now the light is bunched into times when it is bright and times when it is faint. The larger the difference between the colors of the lasers (the more they're de-tuned), the faster the pulsing becomes.

    Paintballs in time

So what happens if we again turn down the lasers really low? Again we see the photons hit our detector one at a time like little paintballs. But if we look carefully at the timing of when the photons arrive, we see that it is not random—they arrive in time with the beats. It does not matter how low we turn the lasers—the photons can be so rare that they only show up once every 100 beats—but they will always arrive in time with the beats.

    This pattern is even more interesting if we compare the arrival time of the photons in this experiment with the stripes we saw with our laser pointer last week. One way of understanding what is happening in the two-slit experiment is to picture the wave nature of quantum mechanics directing where the photons can land side to side: the paintballs can hit in the bright regions and not in the dark regions. We see a similar pattern in the paintball arrival in the two-color beam, but now the paintballs are being directed forward and back in time and can only hit in time with the beats. The beats can be thought of as stripes in time.

    A little over the top

    Well, if two lasers were fun, what would happen if we used a lot of fancy lasers? With a prism we can in principle add the light of any number of different colored lasers, and the theory of what we should see is pretty clear. As we add more lasers, each locked in time and with evenly spaced steps in color, the duration of the light pulses in the combined beam gets smaller and smaller. All of the photons still have to show up, so when there is a pulse of laser light, it is very bright. But the dark times between the pulses get wider and wider as we add more lasers. It starts to look like a strobe light—bright white flashes separated by long periods of darkness.

The light from many fancy lasers combined with a prism would produce very strong beating: the resulting beam would look like the white flashes from a strobe light. Unfortunately, assembling this many fancy lasers is very hard to do in practice.
Credit: Miguel Morales.

    If the strobed light from our hypothetical many-laser setup were passed through a second prism, as expected we’d see the continuous light beams of the original lasers. Credit: Miguel Morales.
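The many-laser thought experiment is easy to imitate numerically. In this sketch of mine, 20 phase-locked 'lasers' spaced 10 Hz apart (again scaled far down from optical frequencies) sum to a strobe-like pulse train:

```python
import numpy as np

# Illustrative sketch: 20 phase-locked waves on an evenly spaced color ladder.
f0, df, n_teeth = 1000.0, 10.0, 20       # base color, spacing, number of 'lasers'
t = np.linspace(0.0, 0.5, 500_000)       # half a second of signal
field = sum(np.cos(2*np.pi*(f0 + k*df)*t) for k in range(n_teeth))
intensity = field**2

# Bright pulses appear every 1/df = 0.1 s; in between, the waves cancel.
peak = intensity[np.argmin(np.abs(t - 0.1))]       # at a pulse
trough = intensity[np.argmin(np.abs(t - 0.05))]    # between pulses
print(peak, trough)    # peak near n_teeth**2 = 400, trough near 0
```

Run the sum in reverse (Fourier transform the pulse train) and you recover the evenly spaced teeth: the optical comb picture of the figures above.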

While this kind of laser cascade is entirely possible in theory, in practice it is a pain in the butt to set up. Lasers of this precision are expensive and fickle beasts. They are like Italian sports cars—incredible when they are running, but they spend as much time in the shop as lasing. Chaining them together and keeping them all working in sync requires incredible patience—a set of ten locked lasers is a major technical achievement.

    But there is a kind of laser that emits very short pulses of light (creatively called a pulsed laser). By repeatedly firing our laser like a precision strobe light, we can create a stream of light pulses that looks just like the light stream after the prism in our hypothetical many-hundred-laser setup. So let’s reverse our prism and send the strobed laser pulses through it.

    The light from a white pulsed laser looks just like the strobed light in our hypothetical many-laser setup, and if we send the pulsed light through a prism we see the same result—many steady laser beams evenly spaced in color. This is called an optical comb. Credit: Miguel Morales.

    When we look at the light from the strobed laser after the prism, it looks like a set of steady lasers equally spaced in frequency. In certain ways, this makes sense—if steady lasers can beat in time to make strobed light, the reverse should be true, too. In other ways, it makes no sense at all.

    If I look at the light from one of the colors after the prism and time when the photons arrive, they arrive steadily in time. This means most of the photons arrive at times between the original laser pulses. The light in the individual ‘lasers’ after the prism is perfectly steady and is just as bright between the strobe pulses as during them. This is a purely quantum effect.

    This strobed laser is called an Optical Frequency Comb, because the colors look like the teeth of a hair comb as seen in the upper line below.

A folded spectrum from the High Accuracy Radial velocity Planet Searcher (HARPS) at the La Silla Observatory in Chile. The bottom line is the spectrum of a star with characteristic absorption lines (dimmer/narrower regions), while the top line is the spectrum of an optical frequency comb from a pulsed laser to provide absolute color reference. The individual beams of the optical comb can be clearly seen and are used to measure tiny Doppler shifts in the star’s spectrum due to orbiting planets.
    Credit: ESO.

The optical frequency comb is one of the great inventions of our century, and it is hard to overstate the impact it is having on measurement; its development was awarded the 2005 Nobel Prize in Physics. To work properly, an optical frequency comb requires timing the pulses with an atomic clock and exquisite control of the shape of each pulse, but you can now just buy one. They aren't cheap, at least not yet, but several companies will sell you one complete with a warranty and a service plan.

    Back at the Visitor’s Center

    I’m very excited that we got to see temporal interference this week and how we can interfere with different particles. This is the kind of fun quantum effect that we rarely get to share with non-professional physicists.

    We’re building on the idea of particles moving as waves by showing that particles from different sources can blend together. Temporal beats can be viewed as a time analog of the stripes we saw coming from the slits in aluminum foil from the first article. And just like those stripes, temporal beats persist even when the number of particles from the two sources is less than one particle at a time. The mixing of colors to form beats is reversible, and an optical frequency comb uses strobes of light to create steady laser sources at many precise colors.

    There are two neat applications I’d like to highlight: coherent lidar and optical clocks.

    Lidar is the optical or infrared light analog of radar. Like any really useful technology, there are multiple versions and implementations. Many lidars work by bouncing pulses of light off distant objects. By measuring how long it takes for the light to return, they can determine how far away the objects are. But coherent lidars work on a different principle and are particularly well suited for precise speed measurements such as imaging the air flow near wind turbines.

In coherent lidar, a single-color laser is bounced off an object, and the Doppler-induced color shift of the reflected light indicates that object's relative speed. The trick is that the color shift is small, and it would be very difficult to actually measure the color with the necessary accuracy—the prisms needed would be prohibitively expensive. Instead, these devices combine the reflected light with a copy of the outgoing light.

Because the color of the reflected light was Doppler shifted, we observe temporal beats when it is combined with the original laser beam. Measuring the speed of the beats measures the Doppler shift of the reflected beam, and thus the speed of the object the beam is bouncing off. Coherent lidars use the temporal beats of different-colored light beams to measure speed.
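The arithmetic can be sketched in a few lines. The 1.55 μm wavelength below is my assumption (a common infrared band), not a figure from the article:

```python
# Illustrative sketch of coherent lidar's beat-frequency arithmetic.
WAVELENGTH_M = 1.55e-6        # assumed infrared laser wavelength
LIGHT_SPEED = 3.0e8           # speed of light in m/s (rounded)

def beat_frequency_hz(radial_speed_m_s):
    """Beat between outgoing and reflected light; factor 2 because the
    moving object both receives and re-emits Doppler-shifted light."""
    return 2.0 * radial_speed_m_s / WAVELENGTH_M

v = 10.0                          # 10 m/s of wind, say
print(beat_frequency_hz(v))       # ~1.3e7 Hz: a 13 MHz beat, easy to count
# Fractional color shift relative to the laser frequency itself:
print(beat_frequency_hz(v) / (LIGHT_SPEED / WAVELENGTH_M))   # ~7e-8
```

That one-part-in-ten-million color shift is hopeless for a prism but trivial for beat counting, which is why the lidar mixes the reflected light with a copy of the outgoing beam.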

    The best optical clocks

    Which brings us to optical clocks, one of the new wonders of the world. All clocks work by counting what we can call “swings.” In a grandfather clock, a pendulum slowly swings back and forth, once a second, and the clock works by counting the swings. In a mechanical watch, it is the twisting of a small wheel on a spring (typically three swings/second); in a quartz watch, it is the vibrations of a quartz crystal, typically 32,768 vibrations a second.

    While many factors such as temperature affect the accuracy of a clock, one of the key contributors is simply how many swings it makes per second. It is easier to make an accurate quartz clock than a grandfather clock because it is oscillating more than 30,000 times more quickly.

    Atomic clocks oscillate a few billion times a second. In an atomic clock, what is being counted are the oscillations of microwave light absorbed by an atom (cesium and rubidium are favorite targets). Fundamentally, we are still just counting swings like in a grandfather clock. But because we get billions of oscillations per second, an atomic clock can be vastly more accurate.
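A crude way to see why faster swings win is to note that if timing resolves only whole swings, the worst-case error per measurement is one oscillation period. This is only a sketch; real clock error budgets involve far more than counting resolution:

```python
# Illustrative sketch: worst-case one-swing timing error is 1/frequency.
clocks_hz = {
    "grandfather (pendulum)": 1.0,
    "quartz watch": 32_768.0,
    "cesium atomic": 9_192_631_770.0,   # the cesium transition that defines the second
    "optical (ytterbium)": 5.18e14,     # an optical transition, ~100,000x faster still
}
for name, freq in clocks_hz.items():
    print(f"{name}: one-swing error = {1.0/freq:.1e} s")
```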

    But there are optical atomic transitions that are even faster—hundreds of trillions of oscillations per second. How do you count that fast? Even the fastest computers have no hope of counting a hundred trillion oscillations a second.

    Because we timed out the white pulses accurately with an atomic clock, the color of each beam after the prism is accurately known. If we select one of the colors and combine it with the light of a reference atom using another prism, the resulting beam will beat at the difference in frequency. Effectively we are using the beam of the optical comb to allow us to count the very fast oscillations of the reference atom. If the beat speed changes, we know our atomic clock has drifted a little in time and we can correct it. Credit: Miguel Morales.

    A beautiful photo of a Ytterbium lattice optical clock. Credit: NIST.

    The answer is to count beats instead. Because the pulses of light in an optical frequency comb were timed out with an (old-fashioned) atomic clock, the colors of the ‘lasers’ after the prism are at known stable frequencies. So we can take a Ytterbium atom, for instance, and select the light from a particularly stable oscillation of electrons deep inside the atom. The light from this transition can then be combined with the light from the nearest ‘laser’ of the optical comb. And just like with the two lasers of different color, we can measure the beat frequency.

We cannot count 100 trillion oscillations a second. But if we know the light from a laser in the comb has a frequency of 100 trillion oscillations a second, and we see 12 beats a second when we combine it with the light from the Ytterbium atom, then we know the Ytterbium light is oscillating 100 trillion + 12 times a second. We can use the combination of measured beats and a known reference to count very fast.
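The counting trick above is just addition. In this sketch I assume the atom sits above the comb tooth in frequency; a real system must also determine the sign of the small difference:

```python
# Illustrative sketch of counting an uncountably fast frequency via beats.
COMB_TOOTH_HZ = 100e12          # comb 'laser' color, known from the atomic clock
beats_per_second = 12.0         # slow beat against the atomic light: countable

atom_frequency_hz = COMB_TOOTH_HZ + beats_per_second   # assumed higher than the tooth
print(atom_frequency_hz)        # 100 trillion + 12 oscillations per second
```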

    Because the Ytterbium atom oscillations are even more stable than our atomic clock, we know if the beat frequency changes, it is due to drifts in our atomic clock. We can then work backward to correct the ‘huge’ errors in our atomic clock.

The precision of current optical clocks is astounding. You may have heard that time passes more slowly where gravity is stronger, due to general relativity. Optical clocks are so sensitive they can measure the difference in the flow of time between two heights just 2 cm apart. If I lay a book on the table, the bottom of the book is slightly closer to the center of the Earth than the top, and so experiences slightly stronger gravity. This difference is measurable with an optical clock. Optical clocks are so sensitive that we can no longer average the time of multiple clocks together—the ground a clock sits on typically rises and falls by ~5 cm a day due to land tides. The seismic motion of the ground currently limits our ability to measure time.

Precise optical clocks are but one application of the optical comb. Optical combs are transforming precision measurement in many areas, from finding planets around distant stars (precision Doppler measurements) to potentially measuring the expansion of space itself (time dependence of redshift). Optical frequency combs are one of the next big things working their way out of the laboratory, and they rely both on the mixing of particles and on the measurement of beats between particles of different colors.

    Our next expedition

    Congratulations on surviving another expedition into the quantum mechanical woods, this time to see effects rarely explored outside of advanced physics classes. In next week’s expedition, I’d like to head into a different part of the woods. The first two articles looked at how particles move and mix. There’s a natural question that arises when we see that a particle can take two paths: how big is a particle? This will be our question for the next two articles. Along the way we’ll learn why you buy ‘bandwidth’ when you want a lot of data and how all particles can be divided into ‘introverts’ and ‘extroverts.’ So keep your boots dry, and we’ll see you again next week.

See the full article here.


     
  • richardmitnick 1:46 pm on January 10, 2021 Permalink | Reply
    Tags: ars technica, No math. No philosophy. Everything we encounter will be experimentally verified.,   

    From ars technica: “A “no math” (but seven-part) guide to modern quantum mechanics” 

    From ars technica

    1/10/2021
    Miguel F. Morales

    Quantum mechanics is complex, fold-your-brain stuff. But it can be explained. Credit: Aurich Lawson / Getty Images.

    Some technical revolutions enter with drama and a bang, others wriggle unnoticed into our everyday experience. And one of the quietest revolutions of our current century has been the entry of quantum mechanics into our everyday technology. It used to be that quantum effects were confined to physics laboratories and delicate experiments. But modern technology increasingly relies on quantum mechanics for its basic operation, and the importance of quantum effects will only grow in the decades to come.

    As such, the time has come to explain quantum mechanics—or, at least, its basics.

    My goal in this seven(!)-part series is to introduce the strangely beautiful effects of quantum mechanics and explain how they’ve come to influence our everyday world. Each edition will include a guided hike into the quantum mechanical woods where we’ll admire a new—and often surprising—effect. Once back at the visitor’s center, we’ll talk about how that effect is used in technology and where to look for it.

    Embarking on a series of quantum mechanics articles can be intimidating. Few things trigger more fear than “a simple introduction to physics.” But to the intrepid and brave, I will make a few promises before we start:

No math. While the language of quantum mechanics is written using fairly advanced math, I don’t believe one has to read Japanese before one can appreciate Japanese art. Our journey will focus on the beauty of the quantum world.
    No philosophy. There has been a fascination with the ‘meaning’ of quantum mechanics, but we’ll leave that discussion for pints down at the pub. Here we will focus on what we see.
    Everything we encounter will be experimentally verified. While some of the results might be surprising, nothing we encounter will be speculative.

    If you choose to follow me through this series of articles, we will see quantum phenomena on galactic scales, watch particles blend and mix, and see how these effects give rise to both our current technology and advances that are on the verge of making it out of the lab.

    So put on your mental hiking boots, grab your binoculars, and follow me as we set out to explore the quantum world.

    What is quantum mechanics?

    My Mom once asked me, “What is quantum mechanics?” This question has had me stumped for a while now. My best answer so far is that quantum mechanics is the study of how small particles move and interact. But that’s an incomplete answer, since quantum effects can be important on galactic scales too. And it is doubly unsatisfactory because many effects like superconductivity are caused by the blending and mixing of multiple particles.

    In many ways, the role of quantum mechanics can be understood in analogy with Newtonian gravity and Einstein’s general relativity. Both describe gravity, but general relativity is more correct—it describes how the Universe works in every situation we’ve managed to test. But 99.99 percent of the time, Newtonian gravity and general relativity give the same answer, and Newtonian gravity is much easier to use. So unless we’re near a black hole, or making precision measurements of time with an optical clock, Newtonian gravity is good enough.

    Similarly, classical mechanics and quantum mechanics both describe motions and interactions. Quantum mechanics is more correct, but most of the time classical mechanics is good enough.

    What I find fascinating is that “good enough” increasingly isn’t. Much of the technology developed in this century is starting to rely on quantum mechanics—classical mechanics is no longer accurate enough to understand how these inventions work.

    So let’s start today’s hike with a deceptively simple question, “How do particles move?”

    Kitchen quantum mechanics

    Some of the experiments we will see require specialized equipment, but let’s start with an experiment you can do at home. Like a cooking show, I’ll explain how to do it, but you are encouraged to follow along and do the experiment for yourself. (Share your photos in the discussion below. Bonus points for setting the experiment up in your cubicle/place of work/other creative setting.)

    To study how particles move, we need a good particle pea shooter to make lots of particles for us to play with. It turns out a laser pointer, in addition to entertaining the cat, is a great source of particles. It makes copious amounts of photons, all moving in nearly the same direction and with nearly the same energy (as indicated by their color).

    If we look at the light from a laser pointer, it exits the end of the laser pointer and moves in a straight line until it hits an obstacle and scatters (or hits a mirror and bounces). At this point, it is tempting to guess that we know how particles move: they exit the end of the laser like little ball bearings and move in a straight line until they hit something. But as good observers, let’s make sure.

    Let’s challenge the particles with an obstacle course by cutting thin slits in aluminum foil with razor blades. In the aluminum foil I’ve made a couple of different cuts. The first is a single slit, a few millimeters long. For the second I’ve stacked two razor blades together and used them to cut two parallel slits a few tenths of a millimeter apart.

    2
    Horizontal slits in aluminum foil made with razor blades. The upper slit is from a single blade, while the lower is from two blades taped together. Credit: Miguel Morales.

    In a darkened room, I set up my laser pointer to shoot across the room and hit a blank wall. As expected, I see a spot (provided the cat’s not around). Next, I put the single slit in the aluminum foil in the laser’s path and look at the pattern on the wall. When we send the light through the single slit, we see that the beam dramatically expands in the direction perpendicular to the slit—not along the slit.

    3
    Credit: Miguel Morales.

    Interesting. But let’s press on.

    Now let’s put the closely spaced slits into the laser beam. The light is again spread out, but now there is a stripey pattern.

    4
    Credit: Miguel Morales.

    Congratulations! You’ve just spotted a quantum mechanical effect! This is the classic double-slit experiment. The stripey pattern is called interference, and it is a telltale signature of quantum mechanics. We will see a lot of stripes like these.

    Now you have probably seen interference like this before, since water and sound waves show exactly this kind of striping.

    5
    Water waves from two sources (one visible in green, the other hidden behind the presenter). The circular waves overlap into regions of extra strength (bright stripes) and regions where the waves cancel each other out (dark bands). The formation of stripes is a signature of wave motion. Credit: Veritasium.


    The Original Double Slit Experiment.

    In the photo above, each ball creates waves that move out in a circle. But a wave has both a peak and a trough. In some places the peak of the wave from one of the balls always coincides with the trough from the other (and vice versa). In these areas the waves always cancel out and the water is calm. In other locations the peaks of the waves from both balls always arrive together and add up to make a wave that is extra tall. In these locations the troughs also add up to be extra deep.
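    For readers who like to tinker, the peak-and-trough bookkeeping above is easy to simulate. Here is a small Python sketch (the path lengths are made up) that adds two unit waves and confirms that equal paths reinforce each other while a half-wavelength difference cancels:

    ```python
    import math

    def wave_sum(path1, path2, wavelength):
        """Combined height of two unit waves after traveling path1 and path2."""
        phase1 = 2 * math.pi * path1 / wavelength
        phase2 = 2 * math.pi * path2 / wavelength
        return math.cos(phase1) + math.cos(phase2)

    WAVELENGTH = 1.0
    # Equal paths: two peaks arrive together, giving an extra-tall wave (close to 2).
    print(wave_sum(10.0, 10.0, WAVELENGTH))
    # Paths differing by half a wavelength: a peak meets a trough, calm water (close to 0).
    print(wave_sum(10.0, 10.5, WAVELENGTH))
    ```

    Sweeping one path length continuously reproduces the stripes: the sum oscillates between reinforcement (bright) and cancellation (dark).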

    So does the fact that we are seeing stripes when our laser pointer goes through two slits mean that particles are waves? To answer that question, we’re going to have to look more closely.

    The professional kitchen

    The double slits served two purposes: each slit spread the beam out (as we saw when we went through just one slit), while the two closely spaced slits provided the particles with a choice as to which slit to pass through. We can use a different experimental setup to more clearly separate the act of spreading the beam and forcing a choice, and this bigger setup will allow us to study in detail what is happening. (This is the moment in the cooking show where they pull out some obscure cooking tool—like a crème brûlée torch—that you’re unlikely to have at home.)

    7
    Credit: Miguel Morales.

    Keeping the laser, let’s use a lens to spread the beam out, and then a half-silvered mirror to give the particles a one-way-or-the-other choice. We then use a couple of good mirrors and another half-silvered mirror to recombine the beams before they reach a wall or screen, as shown above. If one of the mirrors is misaligned ever so slightly, we will again see stripes (places where waves add together and cancel out). In the 38-second video below I show the working version.

    Interferometer demo

    What I find fun about this setup is that it is big. I don’t need a microscope to mess with it. For convenience, I made this tabletop-sized, but we could have made it huge. In the LIGO experiment used to detect gravitational waves, the light is sent down arms 4 km long, and with radio light we’ve used the Cassini spacecraft as it approached Saturn as a mirror. Quantum mechanics is not limited to the microscopic world.

    But let’s play with our human-sized experiment and see some more quantum mechanical effects.

    Slow mo

    The first thing I’d like to do is to look very carefully at the stripey pattern. Instead of just looking at the screen, let’s use a sensitive camera so we can watch it as it develops. When the laser is bright, the camera will immediately capture stripes. But if we make the room very dark and turn down the strength of the laser, we will notice that light is not smoothly distributed, but appears at individual ‘points’—in slow motion we can see the arrival of individual photons. They look like small red paintballs hitting the camera sensor.

    Since this is a video camera, we have a record of where the photons hit. If we play the video, it looks like we are watching someone make a pointillism painting. Individual spots appear on the screen without an obvious pattern. If we sum the video frames so each spot stays lit where it landed, the spots start to fill in the stripey pattern we saw when we had the laser turned all the way up.

    The stripy pattern is there, even if we only generate it one photon at a time.

    Which way did he go?

    In both the double slit experiment and the bigger lab setup, the light can take two different paths: the left slit/path or the right slit/path. But what we see is the result of individual particles hitting the screen. So which path do the individual particles take?

    If I block either of the paths, the stripes go away. The same thing happens with the aluminum foil if you block one of the slits with an index card (though this requires a very steady hand). We only see stripes if the light can travel both paths and interact before hitting the screen.

    This gets even stranger if we turn down the intensity of the light. We can turn the laser down so low that only one particle of light is traveling through the experiment at a time. No matter how low we turn the light, the stripes only appear when each individual photon can take both paths. (Even if we limit it to one particle at a time as in the video above, the particles will slowly add up to the pattern seen above.)

    Even though a single photon hits the screen like a little paintball, where it hits the screen (in the bright stripes) shows that every photon took both paths.

    If your head isn’t hurting yet, you’re not paying attention. But, it is important to register all of the markings on a strange bird before looking in the guidebook, so let’s look a little more closely before we explain what is happening.

    Color

    What if we use a different color laser pointer? Replacing the red laser pointer with a green one, we see the same set of stripes but now the stripes are closer together. The stripes are even closer together for a blue laser pointer. As the wavelength of the light gets shorter, the spacing of the stripes gets narrower.

    7
    Credit: MIT Department of Physics.

    We can repeat these experiments, looking at the arrival of the photons in slow mo and blocking different paths, and everything works the same way with the exception of the stripes being closer together.
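    If you want to predict the stripe spacing for your own foil, the standard small-angle double-slit relation (spacing = wavelength × distance to the wall ÷ slit separation) is enough. Here is a quick Python sketch; the slit separation and wall distance are made-up but plausible numbers for a kitchen setup:

    ```python
    # Double-slit fringe spacing in the small-angle limit:
    # spacing = wavelength * wall_distance / slit_separation.
    def fringe_spacing_mm(wavelength_nm, slit_separation_mm=0.3, wall_distance_m=3.0):
        wavelength_m = wavelength_nm * 1e-9
        spacing_m = wavelength_m * wall_distance_m / (slit_separation_mm * 1e-3)
        return spacing_m * 1e3  # convert to millimeters

    for color, wl in [("red", 650), ("green", 532), ("blue", 450)]:
        print(f"{color} ({wl} nm): stripes every {fringe_spacing_mm(wl):.1f} mm")
    ```

    The bluer the laser, the smaller the number that comes out, matching what we saw on the wall.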

    But looking at the slow mo video of the particles hitting the camera raises the question of how hard the photons are hitting. Are there hard hits and soft hits? The standard camera sensor we’ve been using can’t measure how hard the photons hit. But there are (expensive) detectors that can measure how much each pixel heats up when it is hit by a photon. The heat deposited by the photon directly tells us how hard the photon hit—how much energy it had.

    If we put one of these fancy detectors in our experiment, we will see that all of the red photons have the same energy. All green photons have more energy than the red photons, but the same energy as all the other green photons. Blue photons are even more energetic. The color directly tells us how hard a photon will hit the detector.

    So we have a pair of observations associated with color. As the color becomes more blue, the spacing of the stripes becomes closer and the energy of each impact with the screen increases. Energy is closely associated with the spacing of the stripey patterns.
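    The energy-per-photon relation behind those fancy detectors is Planck’s E = hc/λ. A short sketch showing why blue photons hit harder than red ones:

    ```python
    # Photon energy E = h * c / wavelength; shorter (bluer) wavelength -> harder hit.
    H = 6.626e-34  # Planck constant, J*s
    C = 2.998e8    # speed of light, m/s

    def photon_energy_ev(wavelength_nm):
        energy_joules = H * C / (wavelength_nm * 1e-9)
        return energy_joules / 1.602e-19  # convert joules to electron-volts

    for color, wl in [("red", 650), ("green", 532), ("blue", 450)]:
        print(f"{color} ({wl} nm): {photon_energy_ev(wl):.2f} eV per photon")
    ```

    Every red photon deposits the same ~1.9 eV; every blue photon deposits more, exactly as the detector sees.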

    So what is happening?

    The stripes we are seeing are a hallmark of waves. Sound waves and water waves both exhibit this kind of striping. If you have ever heard loud and quiet places at a concert or while listening to the stereo and walking around the room, you’ve heard the stripes of wave interference. You can recreate this experience by playing a single note from a synthesizer through a pair of stereo speakers. The two speakers act like the two slits in the aluminum foil; by moving your head left to right you can hear the loud (bright) and soft (dim) parts of the stripes (try a note a couple of octaves above middle C). We saw the same wave effect in water waves earlier.

    As the pitch of the sound gets higher—or the water waves become shorter in length—the distance between the bright stripes will get closer, just like when we changed the laser color. And if we block the waves in one of the paths (by pulling the speaker cord on one of the speakers or stopping one of the sources of waves in water) the stripes will go away. All the experiments we did that tested how the particles moved from the laser pointer to the screen show that the particles move like waves.
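    For the stereo-speaker version you can even predict roughly where the loud spots will fall, using the same spacing formula as the slits. The speaker geometry below is invented for illustration:

    ```python
    # Loud spots from two speakers are separated by roughly
    # wavelength * listener_distance / speaker_separation, just like the slits.
    SPEED_OF_SOUND = 343.0  # m/s at room temperature

    def loud_spot_spacing_m(freq_hz, speaker_sep_m=2.0, listener_dist_m=3.0):
        wavelength = SPEED_OF_SOUND / freq_hz
        return wavelength * listener_dist_m / speaker_sep_m

    middle_c = 261.6          # Hz
    note = middle_c * 4       # two octaves above middle C, ~1046 Hz
    print(f"wavelength: {SPEED_OF_SOUND / note:.2f} m")
    print(f"loud spots roughly every {loud_spot_spacing_m(note):.2f} m as you walk across the room")
    ```

    Raising the pitch shrinks the spacing, just as switching from red to blue laser light did.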

    However, we also saw the light hitting the screen like little paintballs. They hit at a spot and deposit a set amount of energy depending on their color. All of the experiments about how the particles hit the camera show that they behave like little ball bearings.

    This is the fundamental mystery of quantum mechanics: particles move like waves and hit like particles.

    The immediate question is whether this is something special associated with light particles, or whether it is true of all particles. The short answer is it is always true. It is true for any kind of particle, under any circumstance, all the time. Photons, electrons—even molecules. Particles always move like waves and hit like particles.

    Why physicists are rarely invited to parties

    Clearly this statement—that all particles move like waves and hit like particles—requires intensive testing. My favorite example involves neutrons; we can repeat the experiment we did with laser light to see if neutrons behave the same way. To do this, we need two things: a good neutron pea shooter, and some neutron mirrors.

    We are all very lucky that neutron laser pointers don’t exist. But there is a very good source of neutrons: your friendly neighborhood nuclear power plant. Nuclear reactors create copious amounts of neutrons, and the plant operators already pipe some of the neutrons out of the reactor area because they need to measure the neutrons to keep track of the reactor. If you bat your eyelashes and get security clearance, they will let you pipe off some of the neutrons too.

    You then need to throw away both the low energy and high energy neutrons to make a stream of neutrons of the same energy (like photons of the same color in a laser). Once you’ve done that, you have a great neutron pea shooter that produces a lot of neutrons of nearly the same color (it is probably for the best that it is shackled to an immovable nuclear reactor).

    Now we need some neutron mirrors. One can use crystals to reflect neutrons, but because the wavelength of the neutrons is very small, alignment of the mirrors becomes a critical issue. This is where the semiconductor industry comes to our rescue. Computer chips are built on silicon wafers that are cut from large, perfect crystals.
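    How small is “very small”? De Broglie’s relation λ = h/(mv) tells us; here is a quick sketch comparing a typical thermal neutron to our red laser pointer:

    ```python
    # De Broglie wavelength lambda = h / (m * v): why neutron mirrors need
    # atomic-scale alignment. 2200 m/s is the conventional thermal-neutron speed.
    H = 6.626e-34             # Planck constant, J*s
    NEUTRON_MASS = 1.675e-27  # kg

    def de_broglie_nm(mass_kg, speed_m_s):
        return H / (mass_kg * speed_m_s) * 1e9  # wavelength in nanometers

    neutron_wl = de_broglie_nm(NEUTRON_MASS, 2200.0)
    print(f"thermal neutron: {neutron_wl:.3f} nm (red laser light: ~650 nm)")
    print(f"the laser's wavelength is ~{650 / neutron_wl:.0f}x longer")
    ```

    A wavelength thousands of times shorter than visible light is why ordinary mirror mounts won’t do, and a single machined crystal will.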

    Instead of slicing these crystals into wafers, we can cut away half of the crystal to leave three ‘fins’ that act like the mirrors of an interferometer and are held in perfect alignment by the remaining crystal.

    9
    A perfect crystal of silicon. These are usually sliced into thin wafers for making computer chips, but they can instead be machined to create a perfect crystal interferometer. Credit: De Agostini Editorial.

    10
    The silicon after it’s been cut and polished to form an interferometer. Credit: NIST

    11
    The paths of the neutrons through the perfect crystal interferometer. Credit Miguel Morales.

    When we send our single-energy neutrons through the crystal mirrors, we see exactly the same pattern of interference that we saw with light. We can repeat all of the experiments we did with light and we get exactly the same results. Even when we send one neutron at a time through the crystal, we see the wave-like stripes. With the right hardware, we can extend this beyond photons and neutrons to test many different kinds of particles. And when we do, we find that all particles move like waves and hit like particles.

    Neutrons are a particularly interesting particle for several reasons. First, they’re heavy and slow. This makes them more sensitive to gravity. We can rotate the crystal so instead of left and right paths we have top and bottom paths as shown below. As a neutron climbs uphill, it slows down (becomes more red), and as it descends it speeds up (becomes more yellow). So the neutron wave on the upper path will travel more slowly than the neutron wave on the bottom path.

    We still see the stripes when the waves recombine, but because the neutron wave on the upper arm was slow and arrived a little late the stripes are shifted side-to-side. The amount of this shift is one way to measure the strength of gravity.

    (While very precise, there are currently better ways of measuring gravity than with neutrons. However, many of them use exactly the same kind of experimental setup, just with the neutrons replaced with Rubidium atoms and mirrors made out of light.)
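    For the curious, the size of that gravitational stripe shift is given by the classic Colella-Overhauser-Werner expression. In this sketch the enclosed area and neutron wavelength are guesses at a typical crystal interferometer, not values from any particular experiment:

    ```python
    # Gravitational phase shift in a vertical neutron interferometer
    # (Colella-Overhauser-Werner): delta_phi = 2*pi * m^2 * g * A * lambda / h^2.
    # The enclosed area A (10 cm^2) and wavelength (0.18 nm) are assumed values.
    import math

    H = 6.626e-34             # Planck constant, J*s
    NEUTRON_MASS = 1.675e-27  # kg
    G = 9.81                  # gravitational acceleration, m/s^2

    def cow_phase_rad(area_m2, wavelength_m):
        return 2 * math.pi * NEUTRON_MASS**2 * G * area_m2 * wavelength_m / H**2

    phi = cow_phase_rad(area_m2=1e-3, wavelength_m=1.8e-10)
    print(f"phase shift: {phi:.0f} radians, i.e. ~{phi / (2 * math.pi):.0f} full stripes of shift")
    ```

    Tens of radians from a palm-sized crystal: gravity’s fingerprint on a quantum wave is surprisingly easy to see.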

    Neutrons are also interesting because they are composite particles—a neutron is made up of three quarks. Even though it is made up of sub-particles, it still moves like a wave. Modern experiments have taken this much farther and regularly send Cesium atoms (more than 180 protons+neutrons+electrons), and even large molecules like Bucky Balls (60 atoms) and phthalocyanine with thousands of constituent particles, through similar interferometer setups. Even these huge composite particles move like waves and produce the telltale stripes in an interferometer.

    While getting a narrow energy range becomes very hard as the mass increases, these large composite objects behave exactly the same way as the photons from our laser pointer. A single Bucky Ball will move like a wave and take all of the paths through the experiment, then hit like a particle. The telltale stripes of wave motion, and the energy-dependent stripe spacing, show that even larger objects move like waves until they reach the detector, where they hit like particles.

    Back at the Visitor’s Center

    Congratulations on surviving your first guided walk through the spooky woods of quantum mechanics! So what did we see on our first excursion? We observed that when we send one particle at a time through an experiment, it will take both paths just like a wave, but when we detect it, it will hit like a particle. How hard it hits, its energy, is related to the wavelength of the wave. As the energy goes up (shorter wavelength), the stripes become more closely spaced. This behavior is consistent for all kinds of particles, from photons of light to electrons to composite particles like neutrons, atoms, or molecules. It is a fundamental feature of the way nature works.

    Now that we’re back at the visitor’s center, let’s talk about how these observations apply to our technological world. The wave nature of particles appears everywhere. The iridescence of hummingbird feathers and soap bubbles, the anti-reflection coatings on camera lenses, and the optics of electron microscopes all rely on the wave-like quantum motion of particles.

    But a good technological example is the optical gyroscope. Gyroscopic guidance systems are used in airplanes, satellites, and rockets. They originally used physical spinning hardware to keep track of orientation, but they have nearly all been replaced by optical gyroscopes because they are cheaper, more sensitive, and more reliable.

    In the neutron interferometer, we noticed that the stripes shifted when the wave that went through the upper arm arrived at the final mirror a little late—in that case because gravity slowed down the neutrons in the upper arm. We can get a similar effect using light if the interferometer rotates while the photon is traversing the experiment.

    10
    If we rotate the interferometer clockwise, the clockwise path becomes a little longer than the counter-clockwise path, because the light takes some time to travel through the arms. As in the vertical neutron interferometer, the light waves on the clockwise path arrive a little late, causing the stripes to shift in position. How far the stripes shift depends on the rotation speed.
    Credit: Morales.

    12
    Fiber optical gyroscopes can be incredibly compact and robust. Credit: Caltech.

    Because the final mirror has moved during the experiment, the clockwise arm is slightly longer than the counter-clockwise arm, so the clockwise wave arrives a little late and the interference pattern shifts. The faster the experiment rotates, the farther the stripes shift. Measuring how far the stripes shift directly tells us the rotation speed. Changes in orientation in whatever the device is attached to—be it a car, a plane, or a satellite—will rotate the interferometer, and add an additional shift to the stripes.

    To make the gyroscope more robust, we can make all the lasers, mirrors, and paths out of fiber optics. And to make it more sensitive, we can have the light travel in many clockwise and counter-clockwise loops before recombining. Fiber gyroscopes work because a photon of light moves like a wave and will take both the clockwise and counter-clockwise paths and produce stripes when recombined. Fiber gyroscopes rely on quantum mechanics to work.
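    The sensitivity of such a gyroscope can be estimated with the standard Sagnac phase formula. The coil geometry and 1550 nm telecom wavelength below are assumptions for illustration, not the specs of a real device:

    ```python
    # Sagnac phase shift for a fiber gyroscope with N turns enclosing area A:
    # delta_phi = 8 * pi * N * A * Omega / (lambda * c).
    # Coil geometry (1000 turns, 3 cm radius) and wavelength are assumed values.
    import math

    C = 2.998e8  # speed of light, m/s

    def sagnac_phase_rad(rotation_rad_s, turns=1000, coil_radius_m=0.03,
                         wavelength_m=1.55e-6):
        area = math.pi * coil_radius_m**2
        return 8 * math.pi * turns * area * rotation_rad_s / (wavelength_m * C)

    one_deg_per_s = math.radians(1.0)
    earth_rotation = 7.29e-5  # rad/s
    print(f"1 deg/s turn: {sagnac_phase_rad(one_deg_per_s):.2e} rad of stripe shift")
    print(f"Earth's rotation: {sagnac_phase_rad(earth_rotation):.2e} rad")
    ```

    Winding many turns of fiber multiplies the tiny per-loop shift, which is why a palm-sized coil can sense even the Earth’s slow rotation.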

    The coming weeks

    One of my goals in these articles is to show some of the wonders hiding in the quantum mechanical woods. In this article we stayed pretty close to the nature path, but in the coming weeks I’d like to go deeper into the woods and show you things that are usually reserved for physics graduate students. I think we can do this without the math that is usually used, and really see some of the beauty that arises when you look closely at the natural world.

    In next week’s article, we will expand on the idea of particle waves and look at particle mixing. This will lead us to understanding how continuous wave LIDAR works, and we’ll see one of the great inventions that’s only recently made its way out of the lab—the optical comb. In the subsequent weeks, we’ll look at particle introverts and extroverts, interference on extra-galactic scales, artificial atoms, quantum cryptography, and more. So come back next week for another hike into the quantum mechanical woods.

    FAQ

    But which path did the particle really take? The experiments show that the particles really take both paths. Despite much confusion (even among some physicists), this is the answer. But the question is based on a faulty mental image. The question assumes that a particle is really a little ball bearing, and thus must have chosen one path or the other. But this mental image is wrong. Particles really behave like waves when in motion. Asking which path a tsunami wave took when traveling between Hawaii and California really makes no sense—it is spread out. Similarly, asking which path the particle really takes makes no sense; it moves like a wave so it naturally takes all of the available paths.

    But isn’t the stripy pattern we see with light a classical effect? Yes and no. In quantum mechanics, there is nothing special about a photon of light vs. any other kind of particle—they all move like waves and hit like particles, and they will all make interference stripes. What is different is the history of our understanding. Before the invention of quantum mechanics there was a wave theory of light—Maxwell’s electrodynamics.

    But at the time electrons and protons were understood as little ball bearings, and no wave-like classical theory for electrons and protons was ever developed (all other particles were discovered after the invention of quantum mechanics). Our lack of a wave-like classical theory for electrons and protons is largely a fluke of history. So while quantum mechanics sees all particles as behaving in the same way, due to history, there is a simplified classical theory for photons but no simplified theory for the other particles. To me, the stripes clearly show the wavelike motion of particles, and this is one of the hallmarks of quantum mechanics, so it is a quantum mechanical effect.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Ars Technica was founded in 1998 when Founder & Editor-in-Chief Ken Fisher announced his plans for starting a publication devoted to technology that would cater to what he called “alpha geeks”: technologists and IT professionals. Ken’s vision was to build a publication with a simple editorial mission: be “technically savvy, up-to-date, and more fun” than what was currently popular in the space. In the ensuing years, with formidable contributions by a unique editorial staff, Ars Technica became a trusted source for technology news, tech policy analysis, breakdowns of the latest scientific advancements, gadget reviews, software, hardware, and nearly everything else found in between layers of silicon.

    Ars Technica innovates by listening to its core readership. Readers have come to demand devotedness to accuracy and integrity, flanked by a willingness to leave each day’s meaningless, click-bait fodder by the wayside. The result is something unique: the unparalleled marriage of breadth and depth in technology journalism. By 2001, Ars Technica was regularly producing news reports, op-eds, and the like, but the company stood out from the competition by regularly providing long thought-pieces and in-depth explainers.

    And thanks to its readership, Ars Technica also accomplished a number of industry leading moves. In 2001, Ars launched a digital subscription service when such things were non-existent for digital media. Ars was also the first IT publication to begin covering the resurgence of Apple, and the first to draw analytical and cultural ties between the world of high technology and gaming. Ars was also first to begin selling its long form content in digitally distributable forms, such as PDFs and eventually eBooks (again, starting in 2001).

     
  • richardmitnick 10:22 am on July 26, 2020 Permalink | Reply
    Tags: "The real science behind SETI’s hunt for intelligent aliens", ars technica, , , , , , ,   

    From ars technica: “The real science behind SETI’s hunt for intelligent aliens” 

    From ars technica

    7/25/2020
    Madeleine O’Keefe

    1
    Aurich Lawson / Getty

    In 1993, a team of scientists published a paper in the scientific journal Nature that announced the detection of a planet harboring life. Using instruments on the spacecraft Galileo, they imaged the planet’s surface and saw continents with colors “compatible with mineral soils” and agriculture, large expanses of ocean with “spectacular reflection,” and frozen water at the poles.

    NASA/Galileo 1989-2003

    An analysis of the planet’s chemistry revealed an atmosphere with oxygen and methane so abundant that they must come from biological sources. “Galileo found such profound departures from equilibrium that the presence of life seems the most probable cause,” the authors wrote.

    But the most telltale sign of life was measured by Galileo’s spectrogram: radio transmissions from the planet’s surface. “Of all Galileo science measurements, these signals provide the only indication of intelligent, technological life,” wrote the authors.

    The paper’s first author was Carl Sagan, the astronomer, author, and science communicator. The planet that he and his co-authors described was Earth.

    Twenty years later, as far as we can tell, Earth remains the only planet in the Universe with any life, intelligent or otherwise. But that Galileo fly-by of Earth was a case study for future work. It confirmed that modern instruments can give us hints about the presence of life on other planets—including intelligent life. And since then, we’ve dedicated decades of funding and enthusiasm to look for life elsewhere in the Universe.

    But one component of this quest has, for the most part, been overlooked: the Search for Extraterrestrial Intelligence (SETI). This is the field of astronomical research that looks for alien civilizations by searching for indicators of technology called “technosignatures.” Despite strong support from Sagan himself (he even made SETI the focus of his 1985 science-fiction novel Contact, which was turned into a hit movie in 1997 starring Jodie Foster and Matthew McConaughey), funding and support for SETI have been paltry compared to the search for extraterrestrial life in general.

    Throughout SETI’s 60-year history, a stalwart group of astronomers has managed to keep the search alive. Today, this cohort is stronger than ever, though they are mostly ignored by the research community, largely unfunded by NASA, and dismissed by some astronomers as a campy fringe pursuit. After decades of interest and funding dedicated toward the search for biological life, there are tentative signs that SETI is making a resurgence.

    At a time when we’re in the process of building hardware that should be capable of finding signatures of life (intelligent or otherwise) in the atmospheres of other planets, SETI astronomers simply want a seat at the table. The stakes are nothing less than the question of our place in the Universe.

    2
    The Arecibo Radio Telescope on Puerto Rico [recently unfunded by NSF and now picked up by UCF and a group of funders] receives interplanetary signals and transmissions. And it was in the movie Contact!

    How to search for life on other worlds

    You may have heard of searching for life on other planets by looking for “biosignatures”—molecules or phenomena that would only occur or persist if life were present. These could be microbes discovered by directly sampling material from the planet (known as “in-situ sampling”), or spectroscopic biosignatures such as chemical disequilibria in the atmosphere and images of water and agriculture, like those detected by the Galileo probe in 1990.

    The biosignature search is happening now, but it comes with limitations. In-situ sampling requires sending a spacecraft to another planet; we’ve done this, for example, with rovers sent to Mars and the Cassini spacecraft that sampled plumes of water erupting from Saturn’s moon Enceladus. And while in-situ sampling is the ideal option for planets in the Solar System, with our current technology, it will take millennia to get a vehicle to a planet orbiting a different star—and these exoplanets are far, far more numerous.

    To detect spectroscopic biosignatures we will need telescopes like the James Webb Space Telescope (JWST) or the ground-based Extremely Large Telescope, both currently under construction.

    NASA/ESA/CSA Webb Telescope annotated

    ESO/E-ELT, 39 meter telescope to be on top of Cerro Armazones in the Atacama Desert of northern Chile. located at the summit of the mountain at an altitude of 3,060 metres (10,040 ft).

    To directly image an exoplanet and obtain more definitive spectra will require future missions like LUVOIR (Large Ultraviolet Optical Infrared Surveyor) or the Habitable Exoplanet Imaging Mission. But all of these lie a number of years in the future.

    NASA Large UV Optical Infrared Surveyor (LUVOIR)

    NASA Habitable Exoplanet Imaging Mission (HabEx) The Planet Hunter depiction

    SETI researchers, however, are interested in “technosignatures”—biosignatures that indicate intelligent life. They are signals that could only come from technology, including TV and radio transmitters—like the radio transmission detected by the Galileo spacecraft—planetary radar systems, or high-power lasers.

    The first earnest call to search for technosignatures—and SETI’s formal beginning—came in 1959. That was the year that Cornell University physicists Giuseppe Cocconi and Philip Morrison published a landmark paper in Nature outlining the most likely characteristics of alien communication. It would make the most sense, they postulated, for aliens to communicate across interstellar distances using electromagnetic waves since they are the only media known to travel fast enough to conceivably reach us across vast distances of space. Within the electromagnetic spectrum, Cocconi and Morrison determined that it would be most promising to look for radio waves because they are less likely to be absorbed by planetary atmospheres and require less energy to transmit. Specifically, they proposed a narrowband signal around the frequency at which hydrogen atoms emit radiation—a frequency that should be familiar to any civilization with advanced radio technology.

    What’s special about these signals is that they exhibit high degrees of coherence, meaning there is a large amount of electromagnetic energy in just one frequency or a very small interval of time—not something nature typically does.

    “As far as we know, these kinds of [radio] signals would be unmistakable indicators of technology,” says Andrew Siemion, professor of astronomy at the University of California, Berkeley. “We don’t know of any natural source that produces them.”

    Such a signal was detected on August 18, 1977 by the Ohio State University Radio Observatory, known as “Big Ear.”

    Ohio State Big Ear Radio Telescope, Construction of the Big Ear began in 1956 and was completed in 1961, and it was finally turned on for the first time in 1963

    Astronomy professor Jerry Ehman was analyzing Big Ear data in the form of printouts that, to the untrained eye, looked like someone had simply smashed the number row of a typewriter with a preference for lower digits. Numbers and letters in the Big Ear data indicated, essentially, the intensity of the electromagnetic signal picked up by the telescope, starting at 1 and moving up to letters in the double-digits (A was 10, B was 11, and so on). Most of the page was covered in 1s and 2s, with a stray 6 or 7 sprinkled in.

    But that day, Ehman found an anomaly: 6EQUJ5. This signal had started out at an intensity of 6—already an outlier on the page—climbed to E, then Q, peaked at U—the highest power signal Big Ear had ever seen—then decreased again. Ehman circled the sequence in red pen and wrote “Wow!” next to it.
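    The printout encoding is simple enough to sketch in a few lines. This is a minimal decoder, assuming only the digits-then-letters mapping described above (the function name is ours):

    ```python
    # Decode Big Ear printout characters: digits 1-9 are the intensity directly,
    # letters continue the scale (A = 10, B = 11, ...); a blank means no signal.
    def intensity(ch: str) -> int:
        if ch == " ":
            return 0
        if ch.isdigit():
            return int(ch)
        return ord(ch.upper()) - ord("A") + 10

    # The Wow! Signal sequence rises, peaks at U (30), and falls again.
    print([intensity(c) for c in "6EQUJ5"])  # [6, 14, 26, 30, 19, 5]
    ```

    Decoded this way, the sequence shows the rise-and-fall profile Ehman circled: the signal climbed from 6 to a peak of 30 before fading.
    
    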

    Alas, SETI researchers have never been able to detect the so-called “Wow! Signal” again, despite many tries with radio telescopes around the world. To this day, no one knows the source of the Wow! Signal, and it remains one of the strongest candidates for alien transmission ever detected.

    NASA began funding SETI studies in 1975, a time when the idea of extraterrestrial life was still unthinkable, according to former NASA Chief Historian Steven J. Dick. After all, no one then knew if there were even other planets outside our Solar System, much less life.

    In 1992, NASA made its strongest-ever commitment to SETI, pledging $100 million over ten years to fund the High Resolution Microwave Survey (HRMS), an expansive SETI project led by astrophysicist Jill Tarter.

    Jill Tarter

    One of today’s most prominent SETI researchers, Tarter was the inspiration for the protagonist of Sagan’s Contact, Eleanor Arroway.

    But less than a year after HRMS got underway, Congress abruptly canceled the project. “The Great Martian Chase may finally come to an end,” said Senator Richard Bryan of Nevada, one of its most vocal detractors. “As of today, millions have been spent and we have yet to bag a single little green fellow. Not a single Martian has said take me to your leader, and not a single flying saucer has applied for FAA approval.”

    The whole ordeal was “incredibly traumatic,” says Tarter. “It [the removal of funding] was so vindictive that, in fact, we became the four-letter S-word that you couldn’t say at NASA headquarters for decades.”

    Since that humiliating public reprimand by Congress, NASA’s astrobiology division has been largely focused on searching for biosignatures. And it has made sure to distinguish its current work from SETI, going so far as to say in a 2015 report that “the traditional Search for Extraterrestrial Intelligence… is not a part of astrobiology.”

    Despite or because of this, the SETI community quickly regrouped and headed to the private sector for funding. Out of those efforts came Project Phoenix, rising from the ashes of the HRMS. From February 1995 to March 2004, Phoenix scanned about 800 nearby candidate stars for microwave transmission in three separate campaigns with the Parkes Observatory in New South Wales, Australia; the National Radio Astronomy Observatory in Green Bank, West Virginia; and Arecibo Observatory in Puerto Rico.

    CSIRO/Parkes Observatory, located 20 kilometres north of the town of Parkes, New South Wales, Australia, 414.80m above sea level

    Green Bank Radio Telescope, West Virginia, USA, now the centerpiece of the Green Bank Observatory (GBO), which is being cut loose by the NSF

    The project did not find any signs of E.T., but it was considered the most comprehensive and sensitive SETI program ever conducted.

    At the same time, other projects run by the Planetary Society and UC Berkeley (including a project called SERENDIP, which is still active) carried out SETI experiments and found a handful of anomalous radio signals, but none showed up a second time.

    To search or not to search

    There is plenty of understandable skepticism surrounding the search for extraterrestrial intelligence. At first glance, one might reason that biosignatures are more common than technosignatures and therefore easier to detect. After all, complex life takes a long time to develop and so is probably rarer. But as astronomer and SETI researcher Jason Wright points out, “Slimes and fungus and molds and things are extremely hard to detect [on an exoplanet]. They’re not doing anything to get your attention. They’re not commanding energy resources that might be obvious at interstellar distances.”

    Linda Billings, a communications consultant for NASA’s Astrobiology Division, is not so convinced that SETI is worth it. She worked with SETI in the early 1990s when it was still being funded by the space agency.

    “I felt like there was a resistance to providing a realistic depiction of the SETI search, of how limited it is, how little of our own galaxy that we are capable of detecting in radio signals,” Billings says.

    While she supports NASA’s biosignature searches, she feels that there are too many assumptions embedded into the idea that intelligent aliens would emit signals that we can intercept and understand, so the likelihood of successfully detecting technosignatures is too low.

    What is the likelihood of encountering extraterrestrial intelligence? Astronomers have thought about this question and have even tried to quantify it, most famously in the Drake equation, introduced by radio astronomer Frank Drake in 1961. The equation estimates the number of active and communicative alien civilizations in the Milky Way galaxy by considering seven factors: the rate of star formation, the fraction of stars with planets, the number of habitable planets per planetary system, the fraction of those on which life appears, the fraction of those that develop intelligence, the fraction of those that produce detectable signals, and the length of time such civilizations remain detectable.

    Frank Drake with his Drake Equation. Credit Frank Drake

    Drake Equation, Frank Drake, Seti Institute

    Since these values have been largely conjectural, the Drake equation has served as more of a thought exercise than a precise calculation of probability. But SETI skeptics reason that the equation’s huge uncertainties render the search futile until we know more.
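    The equation itself is just a product of its seven factors. Here is a sketch with purely illustrative placeholder values — none of these numbers are measurements, which is exactly the skeptics' point:

    ```python
    # Drake equation: N = R* · fp · ne · fl · fi · fc · L
    def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
        """Estimated number of detectable civilizations in the galaxy."""
        return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

    n = drake(
        r_star=1.0,     # star-formation rate (stars per year) -- illustrative
        f_p=0.5,        # fraction of stars with planets -- illustrative
        n_e=2,          # habitable planets per system -- illustrative
        f_l=0.5,        # fraction where life appears -- illustrative
        f_i=0.1,        # fraction developing intelligence -- illustrative
        f_c=0.1,        # fraction emitting detectable signals -- illustrative
        lifetime=1000,  # years a civilization stays detectable -- illustrative
    )
    print(f"{n:.1f}")  # 5.0
    ```

    Nudge any of the last four factors up or down by an order of magnitude and the answer swings just as far, which is why the equation works better as a framework for discussion than as a prediction.
    
    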

    Plus, the question remains as to whether we are looking the “right” way. By assuming aliens will transmit radio waves, SETI researchers also assume that alien civilizations must have intelligence similar to humans’. But intelligence—like life—could develop elsewhere in ways we can’t possibly imagine. So for some, the small chance that aliens are sending out radio transmissions isn’t enough to justify the search.

    Seth Shostak, senior astronomer at the SETI Institute, defended the radio approach in a blog post honoring Frank Drake’s 90th birthday earlier this year. “…[A] search for radio transmissions is not a parochial enterprise,” he wrote. “It doesn’t assume that the aliens are like us in any particular, only that they live in the same Universe, with the same physics.”

    SETI researchers can also cast a much wider net with their radio searches: Optical telescopes looking for biosignatures can only resolve data from exoplanets within a few tens of light-years, totaling no more than 100 tractable targets. But existing radio observatories, like those at Green Bank and in Arecibo, can detect signals as far as 10,000 light-years away, yielding roughly 10 million times more targets than biosignature search methods.
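    That gap follows from survey volume: star counts grow roughly with the cube of the search radius. A back-of-the-envelope check, assuming a uniform stellar distribution and taking ~30 light-years as a stand-in for "a few tens":

    ```python
    # Number of reachable stars scales with survey volume, i.e. with radius cubed
    # (assuming stars are spread roughly uniformly through the search volume).
    bio_horizon_ly = 30        # biosignature searches: a few tens of light-years
    radio_horizon_ly = 10_000  # radio searches: up to ~10,000 light-years

    ratio = (radio_horizon_ly / bio_horizon_ly) ** 3
    print(f"{ratio:.1e}")  # 3.7e+07 -- tens of millions of times more targets
    ```

    The cube makes the comparison lopsided: a 300-fold advantage in reach becomes a roughly 10-million-fold advantage in candidate stars.
    
    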

    The SETI community has no desire to stop the search for biosignatures. “Technosignatures and biosignatures both lie under the same umbrella that we call ‘astrobiology,’ so we are trying to learn from each other,” says Tarter.

    The current state of SETI

    Since the 1990s, new discoveries have strengthened the case to search for technosignatures. For example, NASA’s Kepler Space Telescope has identified over 4,000 exoplanets, and Kepler data suggest that half of all stars may harbor Earth-sized exoplanets, many of which may be the right distance from their stars to be conducive to life.

    NASA/Kepler Telescope, and K2 March 7, 2009 until November 15, 2018

    NASA/MIT TESS replaced Kepler in search for exoplanets

    Plus, the discovery of extremophiles—organisms that can grow and thrive in extreme temperature, acidity, or pressure—has shown astrobiologists that life exists in environments previously assumed to be inhospitable.

    But of the two arms of the search for life, SETI is still up against a perception problem—what some call a “giggle factor.” What does it take for SETI to be taken seriously? There are some indications that the perception problem is solving itself, albeit slowly.

    In 2015, SETI got a much-needed injection of cash—and faith—when Russian-born billionaire Yuri Milner pledged $100 million over 10 years to form the Breakthrough Initiatives, including Breakthrough Listen, a SETI project based at UC Berkeley and directed by Andrew Siemion.

    Breakthrough Listen Project


    UC Observatories Lick Automated Planet Finder, fully robotic 2.4-meter optical telescope at Lick Observatory, situated on the summit of Mount Hamilton, east of San Jose, California, USA




    GBO radio telescope, Green Bank, West Virginia, USA


    CSIRO/Parkes Observatory, located 20 kilometres north of the town of Parkes, New South Wales, Australia


    SKA Meerkat telescope, 90 km outside the small Northern Cape town of Carnarvon, SA

    Newly added

    CfA/VERITAS, a major ground-based gamma-ray observatory with an array of four Čerenkov Telescopes for gamma-ray astronomy in the GeV – TeV energy range, located at the Fred Lawrence Whipple Observatory, Mount Hopkins, Arizona, USA, at an altitude of 2,606 m (8,550 ft)

    As the name suggests, Breakthrough Listen’s goal is to listen for signs of intelligent life. Breakthrough Listen has access to more than a dozen facilities around the world, including the NRAO in Green Bank, the Arecibo Observatory, and the MeerKAT radio telescope in South Africa.

    A few years later in 2018, NASA—prodded by SETI fan and Texas Congressman Lamar Smith—hosted a technosignatures workshop at the Lunar and Planetary Institute in Houston, Texas. Over the course of three days, SETI scientists including Wright and Siemion met and discussed the current state of technosignature searches and how NASA could contribute to the field’s future. But Smith retired from Congress that same year, which put SETI’s future with federal funding back into question.

    In March 2019, Pennsylvania State University announced the new Penn State Extraterrestrial Intelligence Center (PSETI)—to be led by Wright, who is an associate professor of astronomy and astrophysics at the school. Penn State hosts one of just two astrobiology PhD programs in the world (the other is at UCLA), and PSETI plans on hosting the first Penn State SETI Symposium in June 2021.

    Some of PSETI’s main goals are to permanently fund SETI research worldwide, train the next generation of SETI practitioners, and support and foster a worldwide SETI community. These elements are important to any scientific endeavor but are currently lacking in the small field, even with initiatives like Breakthrough Listen. According to a recent white paper, only five people in the US have ever earned a PhD with SETI as the focus of their dissertations, and that number won’t be growing rapidly any time soon.

    “If you can’t propose for grants to work on a topic, it’s really difficult to convince young graduate students and postdocs to work in the field, because they don’t really see a future in it,” says Siemion.

    Tarter agrees that community and funding are the essential ingredients to SETI’s future. “We sort of lost a generation of scientists and engineers in this fallow period where a few of us could manage to keep this going,” she says. “A really well-educated, larger population of young exploratory scientists—and a stable path to allow them to pursue this large question into the future—is what we need.”

    Wright often calls SETI low-hanging fruit. “This field has been starved of resources for so long that there is still a ton of work to do that could have been done decades ago,” says Wright. “We can very quickly make a lot of progress in this field without a lot of effort.” This is made clear in Wright’s SETI graduate course at Penn State, in which his students’ final projects have sometimes become papers that get published in peer-reviewed journals—something that rarely happens in any other field of astronomy.

    In February 2020, Penn State graduate student Sofia Sheikh submitted a paper to The Astrophysical Journal outlining a survey of 20 stars in the “restricted Earth Transit Zone,” the area of the sky in which an observer on another planet could see Earth pass in front of the sun. Sheikh didn’t find any technosignatures in the direction of those 20 stars, but her paper is one of a number of events in the past year that seem to signal the resurgence of SETI.

    In July 2019, Breakthrough Listen announced a collaboration with VERITAS, an array of gamma-ray telescopes in Arizona. VERITAS agreed to spend 30 hours per year looking at Breakthrough Listen’s targets for signs of extraterrestrial intelligence starting in 2021. Breakthrough Listen also announced, in March 2020, that it will soon partner with the NRAO to use the Very Large Array (VLA), an array of radio telescopes in Socorro, New Mexico.

    NRAO/Karl V Jansky Expanded Very Large Array, on the Plains of San Agustin fifty miles west of Socorro, NM, USA, at an elevation of 6970 ft (2124 m)

    (Coincidentally, the VLA was featured in the film Contact but was never actually used in SETI research.)

    And there are other forthcoming projects that take advantage of alternate avenues of search. Optical SETI instruments, like PANOSETI, will look for bright pulses in optical or near-infrared light that could be artificial in origin. Similarly, LaserSETI will use inexpensive, wide-field, astronomical-grade cameras to probe the whole sky, all the time, for brief flickers of laser light coming from deep space. However, neither PANOSETI nor LaserSETI is fully funded.

    PANOSETI

    LaserSETI

    Just last month, though, NASA did award a grant to a group of scientists to search for technosignatures. It is the first time NASA has given funding to a non-radio technosignature search, and it’s also the first grant to support work at PSETI. The project team, led by Adam Frank from the University of Rochester, includes Jason Wright.

    “It’s a great sign that the winds are changing at NASA,” Wright said in an email. He credits NASA’s 2018 technosignatures workshop as a catalyst that led NASA to relax its stance against SETI research. “We have multiple proposals in to NASA right now to do more SETI work across its science portfolio and I’m more optimistic now that it will be fairly judged against the rest of the proposals.”

    Despite all the obstacles in their path, today’s SETI researchers have no plans to stop searching. After all, they are trying to answer one of the most profound and captivating questions in the entire Universe: are we alone?

    “You can certainly get a little tired and a little beat down by the challenges associated with any kind of job. We’re certainly not immune from that in SETI or in astronomy,” admits Siemion. “But you need only take 30 seconds to just contemplate the fact that you’re potentially on the cusp of making really an incredibly profound discovery—a discovery that would forever change the human view of our place in the universe. And, you know, it gets you out of bed.”

    SETI Institute

    Laser SETI, the future of SETI Institute research

    SETI/Allen Telescope Array situated at the Hat Creek Radio Observatory, 290 miles (470 km) northeast of San Francisco, California, USA, Altitude 986 m (3,235 ft), the origins of the Institute’s search.

    ____________________________________________________

    Further to the story

    UCSC alumna Shelley Wright, now an assistant professor of physics at UC San Diego, discusses the dichroic filter of the NIROSETI instrument, developed at the Dunlap Institute, U Toronto and brought to UCSD and installed at the Nickel telescope at UCSC (Photo by Laurie Hatch)

    Shelley Wright of UC San Diego, with NIROSETI, developed at Dunlap Institute U Toronto, at the 1-meter Nickel Telescope at Lick Observatory at UC Santa Cruz

    NIROSETI team from left to right Rem Stone UCO Lick Observatory Dan Werthimer UC Berkeley Jérôme Maire U Toronto, Shelley Wright UCSD Patrick Dorval, U Toronto Richard Treffers Starman Systems. (Image by Laurie Hatch)


    And separately and not connected to the SETI Institute

    SETI@home, a BOINC project originated in the Space Science Lab at UC Berkeley


    For transparency, I am a financial supporter of the SETI Institute. I was a BOINC cruncher for many years.


    I am also a financial supporter of UC Santa Cruz and Dunlap Institute at U Toronto.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Ars Technica was founded in 1998 when Founder & Editor-in-Chief Ken Fisher announced his plans for starting a publication devoted to technology that would cater to what he called “alpha geeks”: technologists and IT professionals. Ken’s vision was to build a publication with a simple editorial mission: be “technically savvy, up-to-date, and more fun” than what was currently popular in the space. In the ensuing years, with formidable contributions by a unique editorial staff, Ars Technica became a trusted source for technology news, tech policy analysis, breakdowns of the latest scientific advancements, gadget reviews, software, hardware, and nearly everything else found in between layers of silicon.

    Ars Technica innovates by listening to its core readership. Readers have come to demand devotedness to accuracy and integrity, flanked by a willingness to leave each day’s meaningless, click-bait fodder by the wayside. The result is something unique: the unparalleled marriage of breadth and depth in technology journalism. By 2001, Ars Technica was regularly producing news reports, op-eds, and the like, but the company stood out from the competition by regularly providing long thought-pieces and in-depth explainers.

    And thanks to its readership, Ars Technica also accomplished a number of industry leading moves. In 2001, Ars launched a digital subscription service when such things were non-existent for digital media. Ars was also the first IT publication to begin covering the resurgence of Apple, and the first to draw analytical and cultural ties between the world of high technology and gaming. Ars was also first to begin selling its long form content in digitally distributable forms, such as PDFs and eventually eBooks (again, starting in 2001).

     
  • richardmitnick 12:54 pm on July 11, 2020 Permalink | Reply
    Tags: "How small satellites are radically remaking space exploration", ars technica   

    From ars technica: “How small satellites are radically remaking space exploration” 


    From ars technica

    7/11/2020
    Eric Berger
    eric.berger@arstechnica.com

    “There’s so much of the Solar System that we have not explored.”

    An Electron rocket launches in August 2019 from New Zealand.

    At the beginning of this year, a group of NASA scientists agonized over which robotic missions they should choose to explore our Solar System. Researchers from around the United States had submitted more than 20 intriguing ideas, such as whizzing by asteroids, diving into lava tubes on the Moon, and hovering in the Venusian atmosphere.

    Ultimately, NASA selected four of these Discovery-class missions for further study. In several months, the space agency will pick two of the four missions to fully fund, each with a cost cap of $450 million and a launch late in this decade. For the losing ideas, there may be more chances in future years—but until new opportunities arise, scientists can only plan, wait, and hope.

    This is more or less how NASA has done planetary science for decades. Scientists come up with all manner of great ideas to answer questions about our Solar System; then, when NASA announces an opportunity, a feeding frenzy ensues for the limited slots. Ultimately, one or two missions get picked and fly. The whole process often takes a couple of decades from the initial idea to getting data back to Earth.

    This process has succeeded phenomenally. In the last half century, NASA has explored most of the large bodies in the Solar System, from the Sun and Mercury on one end to Pluto and the heliopause at the other. No other country or space agency has come close to NASA’s planetary science achievements. And yet, as the abundance of Discovery-class mission proposals tells us, there is so much more we can learn about the Solar System.

    Now, two emerging technologies may propel NASA and the rest of the world into an era of faster, low-cost exploration. Instead of spending a decade or longer planning and developing a mission, then spending hundreds of millions (to billions!) of dollars bringing it off, perhaps we can fly a mission within a couple of years for a few tens of millions of dollars. This would lead to more exploration and also democratize access to the Solar System.

    In recent years, a new generation of companies has been developing rockets for small satellites that cost roughly $10 million per launch. Already, Rocket Lab has announced a lunar program for its small Electron rocket. And Virgin Orbit has teamed up with a group of Polish universities to launch up to three missions to Mars with its LauncherOne vehicle.

    At the same time, the various components of satellites, from propulsion to batteries to instruments, are being miniaturized. It’s not quite like a mobile phone, which today has more computing power than a machine that filled a room a few decades ago. But small satellites are following the same basic trend line.

    Moreover, the potential of tiny satellites is no longer theoretical. Two years ago, a pair of CubeSats built by NASA (and called MarCO-A and MarCO-B) launched along with the InSight mission. In space, the small satellites deployed their own solar arrays, stabilized themselves, pivoted toward the Sun, and then journeyed to Mars.

    “We are at a time when there are really interesting opportunities for people to do missions much more quickly,” said Elizabeth Frank, an Applied Planetary Scientist at First Mode, a Seattle-based technology company. “It doesn’t have to take decades. It creates more opportunity. This is a very exciting time in planetary science.”

    Small sats

    NASA had several goals with its MarCO spacecraft, said Andy Klesh, an engineer at the Jet Propulsion Laboratory who served as technical lead for the mission.

    JPL Cubesat MarCO Mars Cube

    CubeSats had never flown beyond low-Earth orbit before. So during their six-month transit to Mars, the MarCOs proved small satellites could thrive in deep space, control their attitudes and, upon reaching their destination, use a high-gain antenna to stream data back home at 8 kilobits per second.
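    To put that 8 kilobits per second in perspective, here is a quick back-of-the-envelope calculation; the 1 MB image size is a hypothetical example, not a figure from the mission:

    ```python
    # How long would MarCO's 8 kbit/s deep-space downlink take to send an image?
    link_rate_bps = 8_000    # 8 kilobits per second, as quoted for the MarCOs
    image_bytes = 1_000_000  # a hypothetical 1 MB image

    seconds = image_bytes * 8 / link_rate_bps  # bytes -> bits, then divide by rate
    print(f"{seconds / 60:.0f} minutes")  # 17 minutes
    ```

    Slow by terrestrial standards, but for a briefcase-sized spacecraft relaying across interplanetary distances, it was enough to stream InSight's landing telemetry in near-real time.
    
    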

    But the briefcase-sized MarCO satellites were more than a mere technology demonstration. With the launch of its Mars InSight lander in 2018, NASA faced a communications blackout during the critical period when the spacecraft was due to enter the Martian atmosphere and touch down on the red planet.

    NASA/Mars InSight Lander

    To close the communications gap, NASA built the two MarCO 6U CubeSats for $18.5 million and used them to relay data back from InSight during the landing process. Had InSight failed to land, the MarCOs would have served as black box data recorders, Klesh told Ars.

    The success of the MarCOs changed the perception of small satellites and planetary science. A few months after their mission ended, the European Space Agency announced that it would send two CubeSats on its “Hera” mission to a binary asteroid system.

    ESA’s proposed Hera spacecraft depiction

    European engineers specifically cited the success of the MarCOs in their decision to send along CubeSats on the asteroid mission.

    The concept of interplanetary small satellite missions also spurred interest in the emerging new space industry. “That mission got our attention at Virgin Orbit,” said Will Pomerantz, director of special projects at the California-based launch company. “We were inspired by it, and we wondered what else we might be able to do.”

    After the MarCO missions, Pomerantz said, the company began to receive phone calls from research groups about LauncherOne, Virgin’s small rocket that is dropped from a 747 aircraft before igniting its engine. How many kilograms could LauncherOne put into lunar orbit? Could the company add a highly energetic third stage? Ideas for missions to Venus, the asteroids, and Mars poured in.

    Polish scientists believe they can build a spacecraft with a mass of 50kg or less (each of the MarCO spacecraft weighed 13.5kg) that can take high-quality images of Mars and its moon, Phobos. Such a spacecraft might also be able to study the Martian atmosphere or even find reservoirs of liquid water beneath the surface of Mars. Access to low-cost launch was a key enabler of the idea.

    Absent this new mode of planetary exploration, Pomerantz noted, a country like Poland might only be able to participate as one of several secondary partners on a Mars mission. Now it can get full credit. “With even a modest mission like this, it could really put Poland on the map,” Pomerantz said.

    Engineers inspect one of the two MarCO CubeSats in 2016 at JPL.

    Engineer Joel Steinkraus stands with both of the MarCO spacecraft. The one on the left is folded up the way it will be stowed on its rocket; the one on the right has its solar panels fully deployed, along with its high-gain antenna on top. NASA/JPL-Caltech

    Small rockets

    A few months before the MarCO satellites launched with the InSight lander on the large Atlas V rocket, the much smaller Electron rocket took flight for the first time. Developed and launched from New Zealand by Rocket Lab, Electron is the first of a new generation of commercial, small satellite rockets to reach orbit.

    The small booster has a payload capacity of about 200kg to low-Earth orbit. But since Electron’s debut, Rocket Lab has developed a Photon kick stage to provide additional performance.

    In an interview, Rocket Lab’s founder, Peter Beck, said the company believes it can deliver 25kg to Mars or Venus and up to 37kg to the Moon. Because the Photon stage provides many of the functions of a deep space vehicle, most of the mass can be used for sensors and scientific instruments.

    “We’re saying that for just $15 to $20 million you can go to the Moon,” he said. “I think this is a huge, disruptive program for the scientific community.”

    Of the destinations Electron can reach, Beck is most interested in Venus. “I think it’s the unsung hero of our Solar System,” he said. “We can learn a tremendous amount about our own Earth from Venus. Mars gets all the press, but Venus is where it’s really happening. That’s a mission that we really, really want to do.”

    There are other, somewhat larger rockets coming along, too. Firefly’s Alpha booster can put nearly 1 ton into low-Earth orbit, and Relativity Space is developing a Terran 1 rocket that can launch a little more than a ton. These vehicles probably could put CubeSats beyond the asteroid belt, toward Jupiter or beyond.

    Finally, the low-cost launch revolution spurred by SpaceX with larger rockets may also help. The company’s Falcon 9 rocket costs less than $60 million in reusable mode and could get larger spacecraft into deep space cheaply. Historically, NASA has paid triple this price, or more, for scientific launches.

    Accepting failure

    There will be some trade-offs, of course. One of the reasons NASA missions cost so much is that the agency takes extensive precautions to ensure that its vehicles will not fail in the unforgiving environment of space. And ultimately, most of NASA’s missions—so complex and large and capable—do succeed wonderfully.

    CubeSats will be riskier, with fewer redundancies. But that’s okay, says Pomerantz. As an example, he cited NASA’s Curiosity rover mission, launched in 2011 at a cost of $2.5 billion. Imagine sending 100 tiny robots into the Solar System for the price of one Curiosity, Pomerantz said. If just one quarter of the missions work, that’s 25 mini Curiosities.

    Frank agreed that NASA would have to learn to accept failure, taking chances on riskier technologies. Failure must be an option.

    NASA Mars Curiosity Rover


    What’s better than one Curiosity rover? How about 25 mini missions?

    “You want to fail for the right reasons, because you took technical chances and not because you messed up,” she said. “But I think you could create a new culture around failure, where you learn things and fix them and apply what you learn to new missions.”

    NASA seems open to this idea. Already, as it seeks to control costs and work with commercial partners for its new lunar science program, the space agency has said it will accept failure. The leader of NASA’s scientific programs, Thomas Zurbuchen, said he would tolerate some misses as NASA takes “shots on goal” in attempting to land scientific experiments on the Moon. “We do not expect every launch and landing to be successful,” he said last year.

    At the Jet Propulsion Laboratory, too, planetary scientists and engineers are open-minded. John Baker, who leads “game-changing” technology development and missions at the lab, said no one wants to spend 20 years or longer going from mission concept to flying somewhere in the Solar System. “Now, people want to design and print their structure, add instruments and avionics, fuel it and launch it,” he said. “That’s the vision.”

    Spaceflight remains highly challenging, of course. Many technologies can be miniaturized, but propulsion and fuel remain difficult problems. However, a willingness to fail opens up a wealth of new possibilities. One of Baker’s favorite designs is a “Cupid’s Arrow” mission to Venus where a MarCO-like spacecraft is shot through Venus’s atmosphere. An on-board mass spectrometer would analyze a sample of the atmosphere. It’s the kind of mission that could launch as a secondary payload on a Moon mission and use a gravity assist to reach Venus.

    “There’s so much of the Solar System that we have not explored,” Baker said. “There are how many thousands of asteroids? And they’re completely different. Each one of them tells us a different story.”

    Democratizing space

    One of the exciting aspects of bringing down the cost of interplanetary missions is that it increases access for new players—smaller countries like Poland as well as universities around the world.

    “I think the best thing that can be done is to figure out how to lower the price and then make this technology publicly available to everyone,” Baker said. “As more and more countries get engaged in Solar System exploration, we’re just going to learn so much more.”

    Already, organizations such as the Milo Institute at Arizona State University have started to foster collaborations between universities, emerging space agencies, private philanthropy, and small space companies.

    Historically, there have been so few opportunities for planetary scientists to get involved in missions that it has been difficult for researchers to gain the project management skills needed to lead large projects. Frank said she believes a larger number of smaller missions will increase the diversity of the planetary science community.

    In turn, she said, this will ultimately help NASA and other large space agencies by deepening the global pool of talent for carrying out the biggest and most challenging planetary science missions that still require billions of dollars and big rockets. After all, while some things can be done on the cheap, truly ambitious planetary science missions like plumbing the depths of Europa’s oceans or orbiting Pluto will remain quite costly.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Ars Technica was founded in 1998 when Founder & Editor-in-Chief Ken Fisher announced his plans for starting a publication devoted to technology that would cater to what he called “alpha geeks”: technologists and IT professionals. Ken’s vision was to build a publication with a simple editorial mission: be “technically savvy, up-to-date, and more fun” than what was currently popular in the space. In the ensuing years, with formidable contributions by a unique editorial staff, Ars Technica became a trusted source for technology news, tech policy analysis, breakdowns of the latest scientific advancements, gadget reviews, software, hardware, and nearly everything else found in between layers of silicon.

    Ars Technica innovates by listening to its core readership. Readers have come to demand devotedness to accuracy and integrity, flanked by a willingness to leave each day’s meaningless, click-bait fodder by the wayside. The result is something unique: the unparalleled marriage of breadth and depth in technology journalism. By 2001, Ars Technica was regularly producing news reports, op-eds, and the like, but the company stood out from the competition by regularly providing long thought-pieces and in-depth explainers.

    And thanks to its readership, Ars Technica also accomplished a number of industry leading moves. In 2001, Ars launched a digital subscription service when such things were non-existent for digital media. Ars was also the first IT publication to begin covering the resurgence of Apple, and the first to draw analytical and cultural ties between the world of high technology and gaming. Ars was also first to begin selling its long form content in digitally distributable forms, such as PDFs and eventually eBooks (again, starting in 2001).

     
  • richardmitnick 12:02 pm on December 31, 2019 Permalink | Reply
    Tags: ars technica, ESA’s Characterising Exoplanet Satellite Cheops, Future giant ground based optical telescopes

    From ars technica: “The 2010s: Decade of the exoplanet” 

    From ars technica

    12/31/2019
    John Timmer

    1
    Artist conception of Kepler-186f, the first Earth-size exoplanet found in a star’s “habitable zone.”

    ESO’s Belgian robotic TRAPPIST telescope at Cerro La Silla, Chile.

    A size comparison of the planets of the TRAPPIST-1 system, lined up in order of increasing distance from their host star. The planetary surfaces are portrayed with an artist’s impression of their potential surface features, including water, ice, and atmospheres. Credit: NASA.

    Alpha, Beta, and Proxima Centauri, 27 February 2012. Credit: Skatebiker.

    The last ten years will arguably be seen as the “decade of the exoplanet.” That might seem like an obvious thing to say, given that the discovery of the first exoplanet was honored with a Nobel Prize this year. But that discovery happened back in 1995—so what made the 2010s so pivotal?

    One key event: 2009’s launch of the Kepler planet-hunting probe.

    NASA Kepler telescope and its K2 extension, in operation from March 7, 2009 until November 15, 2018.

    Kepler spawned a completely new scientific discipline, one that has moved from basic discovery—there are exoplanets!—to inferring exoplanetary composition, figuring out exoplanetary atmosphere, and pondering what exoplanets might tell us about prospects for life outside our Solar System.

    To get a sense of how this happened, we talked to someone who was in the field when the decade started: Andrew Szentgyorgyi, currently at the Harvard-Smithsonian Center for Astrophysics, where he’s the principal investigator on the Giant Magellan Telescope’s Large Earth Finder instrument.

    Giant Magellan Telescope, 24.5 meters, to be built at the Carnegie Institution for Science’s Las Campanas Observatory, some 115 km (71 mi) north-northeast of La Serena, Chile, at over 2,500 m (8,200 ft) altitude.

    In addition to being famous for having taught your author his “intro to physics” course, Szentgyorgyi was working on a similar instrument when the first exoplanet was discovered.

    Two ways to find a planet

    The Nobel-winning discovery of 51 Pegasi b came via the “radial velocity” method, which relies on the fact that a planet exerts a gravitational influence on its host star, causing the star to accelerate slightly toward the planet.

    Radial Velocity Method-Las Cumbres Observatory

    Radial velocity. Image via SuperWASP: http://www.superwasp.org/exoplanets.htm

    Unless the planet’s orbit is oriented so that it’s perpendicular to the line of sight between Earth and the star, some of that acceleration will draw the star either closer to or farther from Earth. This acceleration can be detected via a blue or red shift in the star’s light, respectively.
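
    To get a feel for the size of this signal, the standard two-body formula for the star’s velocity semi-amplitude can be sketched in a few lines of Python. The constants and the 51 Pegasi b parameters below are rounded, illustrative values, not figures taken from the article:

```python
import math

# Physical constants (SI units, rounded)
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
M_JUP = 1.898e27       # kg
DAY = 86400.0          # s

def rv_semi_amplitude(period_s, m_planet, m_star, inclination_rad=math.pi / 2, ecc=0.0):
    """Radial-velocity semi-amplitude K (m/s) of the star's wobble.

    Standard two-body result:
    K = (2*pi*G/P)^(1/3) * m_p * sin(i) / (m_star + m_p)^(2/3) / sqrt(1 - e^2)
    """
    return ((2 * math.pi * G / period_s) ** (1 / 3)
            * m_planet * math.sin(inclination_rad)
            / (m_star + m_planet) ** (2 / 3)
            / math.sqrt(1 - ecc ** 2))

# 51 Pegasi b: roughly half a Jupiter mass in a 4.23-day orbit of a near-solar-mass star.
K = rv_semi_amplitude(4.23 * DAY, 0.47 * M_JUP, 1.05 * M_SUN)
print(f"51 Peg b stellar wobble: ~{K:.0f} m/s")  # tens of m/s
```

    A wobble of tens of meters per second was within reach of 1990s spectrographs, which is why a hot Jupiter in a days-long orbit was the first kind of planet found.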

    The surfaces of stars can expand and contract, which also produces red and blue shifts, but these won’t have the regularity of the acceleration produced by an orbiting body. That overlap explains why, back in the 1990s, people studying surface changes in stars were already building the hardware needed to measure radial velocities.

    “We had a group that was building instruments that I’ve worked with to study the pulsations of stars—astroseismology,” Szentgyorgyi told Ars, “but that turns out to be sort of the same instrumentation you would use” to discern exoplanets.

    He called the discovery of 51 Pegasi b a “seismic event” and said that he and his collaborators began thinking about how to use their instruments “probably when I got the copy of Nature” that the discovery was published in. Because some researchers already had the right equipment, a steady if small flow of exoplanet announcements followed.

    During this time, researchers developed an alternate way to find exoplanets, termed the “transit method.”

    Planet transit. NASA/Ames

    The transit method requires a more limited geometry from an exoplanet’s orbit: the plane has to cause the exoplanet to pass through the line of sight between its host star and Earth. During these transits, the planet will eclipse a small fraction of light from the host star, causing a dip in its brightness. This doesn’t require the specialized equipment needed for radial velocity detections, but it does require a telescope that can detect small brightness differences despite the flicker caused by the light passing through our atmosphere.
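
    The size of that dip follows directly from geometry: the planet blocks a fraction of the stellar disk equal to the square of the radius ratio. A quick sketch, using rounded textbook radii rather than figures from the article:

```python
R_SUN = 695_700.0   # km
R_EARTH = 6_371.0   # km
R_JUP = 69_911.0    # km

def transit_depth(r_planet_km, r_star_km):
    """Fractional dimming when the planet covers part of the stellar disk:
    depth ~ (R_planet / R_star)^2, ignoring limb darkening."""
    return (r_planet_km / r_star_km) ** 2

print(f"Jupiter across the Sun: {transit_depth(R_JUP, R_SUN):.2%}")              # ~1%
print(f"Earth across the Sun:   {transit_depth(R_EARTH, R_SUN) * 1e6:.0f} ppm")  # ~84 ppm
```

    A Jupiter-sized transit (~1 percent) is detectable from the ground, but an Earth analog dims its star by less than one part in ten thousand, which is why a space telescope above the atmospheric flicker was needed.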

    By 2009, transit detections were adding regularly to the growing list of exoplanets.

    The tsunami

    Kepler started finding new planets within a year of its launch. Given time and a better understanding of how to use the instrument, the early years of the 2010s saw thousands of new planets cataloged. In 2009, Szentgyorgyi said, “it was still ‘you’re finding handfuls of exoplanetary systems.’ And then with the launch of Kepler, there’s this tsunami of results which has transformed the field.”

    Suddenly, rather than dozens of exoplanets, we knew about thousands.

    2
    The tsunami of Kepler planet discoveries.

    The sheer numbers involved had a profound effect on our understanding of planet formation. Rather than simply having a single example to test our models against—our own Solar System—we suddenly had many systems to examine (containing over 4,000 currently known exoplanets). These include objects that don’t exist in our Solar System, things like hot Jupiters, super-Earths, warm Neptunes, and more. “You found all these crazy things that, you know, don’t make any sense from the context of what we knew about the Solar System,” Szentgyorgyi told Ars.

    It’s one thing to have models of planet formation that say some of these planets can form; it’s quite another to know that hundreds of them actually exist. And, in the case of hot Jupiters, it suggests that many exosolar systems are dynamic, shuffling planets to places where they can’t form and, in some cases, can’t survive indefinitely.

    But Kepler gave us more than new exoplanets; it provided a different kind of data. Radial velocity measurements only tell you how much the star is moving, but that motion could be caused by a relatively small planet with an orbital plane aligned with the line of sight from Earth. Or it could be caused by a massive planet with an orbit that’s highly inclined from that line of sight. Physics dictates that, from our perspective, these will produce the same acceleration of the star. Kepler helped us sort out the differences.

    3
    A massive planet orbiting at a steep angle (left) and a small one orbiting at a shallow angle (right) will both produce the same motion of a star relative to Earth.

    “Kepler not only found thousands and thousands of exoplanets, but it found them where we know the geometry,” Szentgyorgyi told Ars. “If you know the geometry—if you know the planet transits—you know your orbital inclination is in the plane you’re looking.” This allows follow-on observations using radial velocity to provide a more definitive mass of the exoplanet. Kepler also gave us the radius of each exoplanet.

    “Once you know the mass and radius, you can infer the density,” Szentgyorgyi said. “There’s a remarkable amount of science you can do with that. It doesn’t seem like a lot, but it’s really huge.”

    Density can tell us if a planet is rocky or watery—or whether it’s likely to have a large atmosphere or a small one. Sometimes, it can be tough to tell two possibilities apart; density consistent with a watery world could also be provided by a rocky core and a large atmosphere. But some combinations are either physically implausible or not consistent with planetary formation models, so knowing the density gives us good insight into the planetary type.
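
    The mass-plus-radius inference Szentgyorgyi describes is, at its simplest, a bulk-density calculation. The sketch below uses rounded Solar System values as sanity checks; the classification thresholds are illustrative guesses, not numbers from the article (real inference relies on interior-structure models):

```python
import math

def bulk_density(mass_kg, radius_m):
    """Mean density from mass and radius: rho = M / ((4/3) * pi * R^3)."""
    return mass_kg / (4.0 / 3.0 * math.pi * radius_m ** 3)

def rough_class(rho):
    """Crude classification by bulk density in kg/m^3 (illustrative thresholds only)."""
    if rho > 3000:
        return "likely rocky"
    if rho > 1800:
        return "ambiguous: water world or rock plus thick atmosphere"
    return "likely a substantial gas envelope"

rho_earth = bulk_density(5.972e24, 6.371e6)     # ~5500 kg/m^3
rho_jupiter = bulk_density(1.898e27, 6.9911e7)  # ~1300 kg/m^3
print(f"Earth:   {rho_earth:.0f} kg/m^3 -> {rough_class(rho_earth)}")
print(f"Jupiter: {rho_jupiter:.0f} kg/m^3 -> {rough_class(rho_jupiter)}")
```

    The middle branch of the classifier reflects exactly the degeneracy described above: very different planets can share one density.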

    Beyond Kepler

    Despite NASA’s heroic efforts, which kept Kepler going even after its hardware started to fail, its tsunami of discoveries slowed considerably before the decade was over. By that point, however, it had more than done its job. We had a new catalog of thousands of confirmed exoplanets, along with a new picture of our galaxy.

    For instance, binary star systems are common in the Milky Way; we now know that their complicated gravitational environment isn’t a barrier to planet formation.

    We also know that the most common type of star is the low-mass red dwarf. It was previously plausible that a star’s low mass would be matched by a low-mass planet-forming disk, preventing the formation of either large planets or large families of smaller planets. Neither turned out to be true.

    “We’ve moved into a mode where we can actually say interesting, global, statistical things about exoplanets,” Szentgyorgyi told Ars. “Most exoplanets are small—they’re sort of Earth to sub-Neptune size. It would seem that probably most of the solar-type stars have exoplanets.” And, perhaps most important, there’s a lot of them. “The ubiquity of exoplanets certainly is a stunner… they’re just everywhere,” Szentgyorgyi added.

    That ubiquity has provided the field with two things. First, it has given scientists the confidence to build new equipment, knowing that there are going to be planets to study. The most prominent piece of gear is NASA’s Transiting Exoplanet Survey Satellite, a space-based telescope designed to perform an all-sky exoplanet survey using methods similar to Kepler’s.

    NASA/MIT TESS, which replaced Kepler in the search for exoplanets.

    But other projects are smaller, focused on finding exoplanets closer to Earth. If exoplanets are everywhere, they’re also likely to be orbiting stars that are close enough so we can do detailed studies, including characterizing their atmospheres. One famous success in this area came courtesy of the TRAPPIST telescopes [above], which spotted a system hosting at least seven planets. More data should be coming soon, too; on December 17, the European Space Agency launched the first satellite dedicated to studying known exoplanets.

    ESA/CHEOPS

    With future telescopes and associated hardware similar to what Szentgyorgyi is working on, we should be able to characterize the atmospheres of planets out to about 30 light years from Earth. One catch: this method requires that the planet passes in front of its host star from Earth’s point of view.

    When an exoplanet transits in front of its star, most of the light that reaches Earth comes directly to us from the star. But a small percentage passes through the atmosphere of the exoplanet, allowing it to interact with the gases there. The molecules that make up the atmosphere can absorb light of specific wavelengths—essentially causing them to drop out of the light that makes its way to Earth. Thus, the spectrum of the light that we can see using a telescope can contain the signatures of various gases in the exoplanet’s atmosphere.

    There are some important caveats to this method, though. Since the fraction of light that passes through the exoplanet atmosphere is small compared to that which comes directly to us from the star, we have to image multiple transits for the signal to stand out. And the host star has to have a steady output at the wavelengths we’re examining in order to keep its own variability from swamping the exoplanetary signal. Finally, gases in the exoplanet’s atmosphere are constantly in motion, which can make their signals challenging to interpret. (Clouds can also complicate matters.) Still, the approach has been used successfully on a number of exoplanets now.
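
    One common back-of-the-envelope estimate makes the difficulty concrete: the extra light blocked by one scale height of atmosphere is roughly the area of a thin annulus around the planet. The hot-Jupiter parameters below are illustrative, not values from the article:

```python
K_B = 1.381e-23       # Boltzmann constant, J/K
M_H = 1.661e-27       # atomic mass unit, kg
R_JUP = 6.9911e7      # m
R_SUN = 6.957e8       # m

def scale_height(T_kelvin, mean_mol_weight, gravity):
    """Atmospheric scale height H = kT / (mu * m_H * g), in meters."""
    return K_B * T_kelvin / (mean_mol_weight * M_H * gravity)

def transmission_signal(r_planet, H, r_star):
    """Extra transit depth from one scale height of atmosphere:
    roughly a thin annulus of height H around the planet,
    delta ~ 2 * R_p * H / R_star^2."""
    return 2 * r_planet * H / r_star ** 2

# Illustrative hot Jupiter: hydrogen-dominated air (mu ~ 2.3), T ~ 1300 K, g ~ 25 m/s^2.
H = scale_height(1300, 2.3, 25.0)
delta = transmission_signal(R_JUP, H, R_SUN)
print(f"scale height ~{H / 1e3:.0f} km, atmospheric signal ~{delta * 1e6:.0f} ppm")
```

    Tens of parts per million, riding on top of a roughly one percent transit, is why multiple transits have to be stacked, and why cool, heavy (high mean-molecular-weight) atmospheres are so much harder to probe than puffy hot ones.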

    In the air

    Understanding atmospheric composition can tell us critical things about an exoplanet. Much of the news about exoplanet discoveries has been driven by what’s called the “habitable zone.” That zone is defined as the orbital region around a star where the amount of light reaching a planet’s surface is sufficient to keep water liquid. Get too close to the star and there’s enough energy reaching the planet to vaporize the water; get too far away and the energy is insufficient to keep water liquid.

    These limits, however, assume an atmosphere that’s effectively transparent at all wavelengths. As we’ve seen in the Solar System, greenhouse gases can play an outsized role in altering the properties of planets like Venus, Earth, and Mars. At the right distance from a star, greenhouse gases can make the difference between a frozen rock and a Venus-like oven. The presence of clouds can also alter a planet’s temperature and can sometimes be identified by imaging the atmosphere. Finally, the reflectivity of a planet’s surface might also influence its temperature.
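
    That sensitivity to reflectivity is easy to quantify with the standard equilibrium-temperature formula, which deliberately ignores greenhouse warming. A sketch with rounded solar values (not figures from the article):

```python
AU = 1.496e11      # m
R_SUN = 6.957e8    # m

def equilibrium_temp(T_star, R_star_m, orbit_m, albedo):
    """Equilibrium temperature of a rapidly rotating planet:
    T_eq = T_star * sqrt(R_star / (2a)) * (1 - albedo)^(1/4).
    No greenhouse effect is included."""
    return T_star * (R_star_m / (2 * orbit_m)) ** 0.5 * (1 - albedo) ** 0.25

# Earth with its actual reflectivity (Bond albedo ~0.3) comes out around 255 K,
# well below freezing; the measured mean surface temperature is ~288 K.
# The ~33 K gap is greenhouse warming.
print(f"Earth, no greenhouse: {equilibrium_temp(5772, R_SUN, 1.0 * AU, 0.3):.0f} K")
```

    Without its greenhouse gases, a planet at Earth’s distance would be a frozen rock; that gap is exactly why knowing the orbit alone can’t settle habitability.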

    The net result is that we don’t know whether any of the planets in a star’s “habitable zone” are actually habitable. But understanding the atmosphere can give us good probabilities, at least.

    The atmosphere can also open a window into the planet’s chemistry and history. On Venus, for example, the huge levels of carbon dioxide and the presence of sulfur dioxide clouds indicate that the planet has an oxidizing environment and that its atmosphere is dominated by volcanic activity. The composition of the gas giants in the outer Solar System likely reflects the gas that was present in the disk that formed the planets early in the Solar System’s history.

    But the most intriguing prospect is that we could find something like Earth, where biological processes produce both methane and the oxygen that ultimately converts it to carbon dioxide. The presence of both in an atmosphere indicates that some process(es) are constantly producing the gases, maintaining a long-term balance. While some geological phenomena can produce both these chemicals, finding them together in an atmosphere would at least be suggestive of possible life.

    Interdisciplinary

    Just the prospect of finding hints of life on other worlds has rapidly transformed the study of exoplanets, since it’s a problem that touches on nearly every area of science. Take the issue of atmospheres and habitability. Even if we understand the composition of a planet’s atmosphere, its temperature won’t just pop out of a simple equation. Distance from the star, type of star, the planet’s rotation, and the circulation of the atmosphere will all play a role in determining conditions. But the climate models that we use to simulate Earth’s atmosphere haven’t been capable of handling anything but the Sun and an Earth-like atmosphere. So extensive work has had to be done to modify them to work with the conditions found elsewhere.

    Similar problems appear everywhere. Geologists and geochemists have to infer likely compositions given little more than a planet’s density and perhaps its atmospheric compositions. Their results need to be combined with atmospheric models to figure out what the surface chemistry of a planet might be. Biologists and biochemists can then take that chemistry and figure out what reactions might be possible there. Meanwhile, the planetary scientists who study our own Solar System can provide insight into how those processes have worked out here.

    “I think it’s part of the Renaissance aspect of exoplanets,” Szentgyorgyi told Ars. “A lot of people now think a lot more broadly, there’s a lot more cross-disciplinary interaction. I find that I’m going to talks about geology, I’m going to talks about the atmospheric chemistry on Titan.”

    The next decade promises incredible progress. A new generation of enormous telescopes is expected to come online, and the James Webb Space Telescope should devote significant time to imaging exosolar systems.

    NASA/ESA/CSA Webb Telescope annotated


    ____________________________________________
    Other giant 30 meter class telescopes planned

    ESO/E-ELT, a 39-meter telescope to be built atop Cerro Armazones in the Atacama Desert of northern Chile, at an altitude of 3,060 meters (10,040 ft).

    TMT (Thirty Meter Telescope), proposed and now approved for Mauna Kea, Hawaii, USA, at 4,207 m (13,802 ft) above sea level; the only giant 30-meter-class telescope planned for the Northern Hemisphere.


    ____________________________________________

    We’re likely to end up with much more detailed pictures of some intriguing bodies in our galactic neighborhood.

    The data that will flow from new experiments and new devices will be interpreted by scientists who have already transformed their field. That transformation—from proving that exoplanets exist to establishing a vibrant, multidisciplinary discipline—really took place during the 2010s, which is why it deserves the title “decade of exoplanets.”

    See the full article here.


     