Tagged: WIRED

  • richardmitnick 12:05 pm on September 8, 2019
    Tags: Craig Callender, Second law of thermodynamics, WIRED

    From WIRED: “Are We All Wrong About Black Holes?” 

    Wired logo

    From WIRED

    09.08.2019
    Brendan Z. Foster

    Craig Callender, a philosopher of science at the University of California, San Diego, argues that the connection between black holes and thermodynamics is less ironclad than assumed. Photograph: Peggy Peattie/Quanta Magazine

    In the early 1970s, people studying general relativity, our modern theory of gravity, noticed rough similarities between the properties of black holes and the laws of thermodynamics. Stephen Hawking proved that the area of a black hole’s event horizon—the surface that marks its boundary—cannot decrease. That sounded suspiciously like the second law of thermodynamics, which says entropy—a measure of disorder—cannot decrease.

    Yet at the time, Hawking and others emphasized that the laws of black holes only looked like thermodynamics on paper; they did not actually relate to thermodynamic concepts like temperature or entropy.

    Then in quick succession, a pair of brilliant results—one by Hawking himself—suggested that the equations governing black holes were in fact actual expressions of the thermodynamic laws applied to black holes. In 1972, Jacob Bekenstein argued that a black hole’s surface area was proportional to its entropy [Physical Review D], and thus the second-law similarity was a true identity. And in 1974, Hawking found that black holes appear to emit radiation [Nature]—what we now call Hawking radiation—and this radiation has exactly the “temperature” that the thermodynamic analogy requires.
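
    For reference, the relations these two results established are usually written as the Bekenstein-Hawking entropy, proportional to the horizon area A, and the Hawking temperature, inversely proportional to the black hole mass M. These are the standard textbook forms, quoted here for concreteness rather than taken from the article:

```latex
% Bekenstein-Hawking entropy and Hawking temperature (standard forms).
S_{\mathrm{BH}} \;=\; \frac{k_B\, c^3\, A}{4\, G\, \hbar},
\qquad
T_H \;=\; \frac{\hbar\, c^3}{8\pi\, G\, M\, k_B}
```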

    This connection gave physicists a tantalizing window into what many consider the biggest problem in theoretical physics—how to combine quantum mechanics, our theory of the very small, with general relativity. After all, thermodynamics comes from statistical mechanics, which describes the behavior of all the unseen atoms in a system. If a black hole is obeying thermodynamic laws, we can presume that a statistical description of all its fundamental, indivisible parts can be made. But in the case of a black hole, those parts aren’t atoms. They must be a kind of basic unit of gravity that makes up the fabric of space and time.

    Modern researchers insist that any candidate for a theory of quantum gravity must explain how the laws of black hole thermodynamics arise from microscopic gravity, and in particular, why the entropy-to-area connection happens. And few question the truth of the connection between black hole thermodynamics and ordinary thermodynamics.

    But what if the connection between the two really is little more than a rough analogy, with little physical reality? What would that mean for the past decades of work in string theory, loop quantum gravity, and beyond? Craig Callender, a philosopher of science at the University of California, San Diego, argues that the notorious laws of black hole thermodynamics may be nothing more than a useful analogy stretched too far [Phil Sci]. The interview has been condensed and edited for clarity.

    Why did people ever think to connect black holes and thermodynamics?

    Callender: In the early ’70s, people noticed a few similarities between the two. One is that both seem to possess an equilibrium-like state. I have a box of gas. It can be described by a small handful of parameters—say, pressure, volume, and temperature. Same thing with a black hole. It might be described with just its mass, angular momentum, and charge. Further details don’t matter to either system.

    Nor does this state tell me what happened beforehand. I walk into a room and see a box of gas with stable values of pressure, volume and temperature. Did it just settle into that state, or did that happen last week, or perhaps a million years ago? Can’t tell. The black hole is similar. You can’t tell what type of matter fell in or when it collapsed.

    The second feature is that Hawking proved that the area of black holes is always non-decreasing. That reminds one of the thermodynamic second law, that entropy always increases. So both systems seem to be heading toward simply described states.

    Now grab a thermodynamics textbook, locate the laws, and see if you can find true statements when you replace the thermodynamic terms with black hole variables. In many cases you can, and the analogy improves.

    Hawking then discovers Hawking radiation, which further improves the analogy. At that point, most physicists start claiming the analogy is so good that it’s more than an analogy—it’s an identity! That’s a super-strong and surprising claim. It says that black hole laws, most of which are features of the geometry of space-time, are somehow identical to the physical principles underlying the physics of steam engines.

    Because the identity plays a huge role in quantum gravity, I want to reconsider this identity claim. Few in the foundations of physics have done so.

    So what’s the statistical mechanics for black holes?

    Well, that’s a good question. Why does ordinary thermodynamics hold? Well, we know that all these macroscopic thermodynamic systems are composed of particles. The laws of thermodynamics turn out to be descriptions of the most statistically likely configurations to happen from the microscopic point of view.

    Why does black hole thermodynamics hold? Are the laws also the statistically most likely way for black holes to behave? Although there are speculations in this direction, so far we don’t have a solid microscopic understanding of black hole physics. Absent this, the identity claim seems even more surprising.

    What led you to start thinking about the analogy?

    Many people are worried about whether theoretical physics has become too speculative. There’s a lot of commentary about whether holography, the string landscape—all sorts of things—are tethered enough to experiment. I have similar concerns. So my former Ph.D. student John Dougherty and I thought, where did it all start?

    To our mind a lot of it starts with this claimed identity between black holes and thermodynamics. When you look in the literature, you see people say, “The only evidence we have for quantum gravity, the only solid hint, is black hole thermodynamics.”

    If that’s the main thing we’re bouncing off for quantum gravity, then we ought to examine it very carefully. If it turns out to be a poor clue, maybe it would be better to spread our bets a little wider, instead of going all in on this identity.

    What problems do you see with treating a black hole as a thermodynamic system?

    I see basically three. The first problem is: What is a black hole? People often think of black holes as just kind of a dark sphere, like in a Hollywood movie or something; they’re thinking of it like a star that collapsed. But a mathematical black hole, the basis of black hole thermodynamics, is not the material from the star that’s collapsed. That’s all gone into the singularity. The black hole is what’s left.

    The black hole isn’t a solid thing at the center. The system is really the entire space-time.

    Yes, it’s this global notion for which black hole thermodynamics was developed, in which case the system really is the whole space-time.

    Here is another way to think about the worry. Suppose a star collapses and forms an event horizon. But now another star falls past this event horizon and it collapses, so it’s inside the first. You can’t think that each one has its own little horizon that is behaving thermodynamically. It’s only the one horizon.

    Here’s another. The event horizon changes shape depending on what’s about to be thrown into it. It’s clairvoyant. Weird, but there is nothing spooky here so long as we remember that the event horizon is only defined globally. It’s not a locally observable quantity.

    The picture is more counterintuitive than people usually think. To me, if the system is global, then it’s not at all like thermodynamics.

    The second objection is: Black hole thermodynamics is really a pale shadow of thermodynamics. I was surprised to see the analogy wasn’t as thorough as I expected it to be. If you grab a thermodynamics textbook and start replacing claims with their black hole counterparts, you will not find the analogy goes that deep.


    Craig Callender explains why the connection between black holes and thermodynamics is little more than an analogy.

    For instance, the zeroth law of thermodynamics sets up the whole theory and a notion of equilibrium — the basic idea that the features of the system aren’t changing. And it says that if one system is in equilibrium with another — A with B, and B with C — then A must be in equilibrium with C. The foundation of thermodynamics is this equilibrium relation, which sets up the meaning of temperature.

    The zeroth law for black holes is that the surface gravity of a black hole, which measures the gravitational acceleration, is constant on the horizon. So the black hole version in effect treats “the temperature is constant in equilibrium” as the zeroth law. That’s not really what the zeroth law says. Here we see a pale shadow of the original zeroth law.

    The counterpart of equilibrium is supposed to be “stationary,” a technical term that basically says the black hole is spinning at a constant rate. But there’s no sense in which one black hole can be “stationary with” another black hole. You can take any thermodynamic object and cut it in half and say one half is in equilibrium with the other half. But you can’t take a black hole and cut it in half. You can’t say that this half is stationary with the other half.

    Here’s another way in which the analogy falls flat. Black hole entropy is given by the black hole area. Well, area is length squared, volume is length cubed. So what do we make of all those thermodynamic relations that include volume, like Boyle’s law? Is volume, which is length times area, really length times entropy? That would ruin the analogy. So we have to say that the black hole’s volume is not the counterpart of thermodynamic volume, which is surprising.

    The most famous connection between black holes and thermodynamics comes from the notion of entropy. For normal stuff, we think of entropy as a measure of the disorder of the underlying atoms. But in the 1970s, Jacob Bekenstein said that the surface area of a black hole’s event horizon is equivalent to entropy. What’s the basis of this?

    This is my third concern. Bekenstein says, if I throw something into a black hole, the entropy vanishes. But this can’t happen, he thinks, according to the laws of thermodynamics, for entropy must always increase. So some sort of compensation must be paid when you throw things into a black hole.

    Bekenstein notices a solution. When I throw something into the black hole, the mass goes up, and so does the area. If I identify the area of the black hole as the entropy, then I’ve found my compensation. There is a nice deal between the two—one goes down while the other one goes up—and it saves the second law.

    When I saw that I thought, aha, he’s thinking that not knowing about the system anymore means its entropy value has changed. I immediately saw that this is pretty objectionable, because it identifies entropy with uncertainty and our knowledge.

    There’s a long debate in the foundations of statistical mechanics about whether entropy is a subjective notion or an objective notion. I’m firmly on the side of thinking it’s an objective notion. I think trees unobserved in a forest go to equilibrium regardless of what anyone knows about them or not, that the way heat flows has nothing to do with knowledge, and so on.

    Chuck a steam engine behind the event horizon. We can’t know anything about it apart from its mass, but I claim it can still do as much work as before. If you don’t believe me, we can test this by having a physicist jump into the black hole and follow the steam engine! There is only need for compensation if you think that what you can no longer know about ceases to exist.

    Do you think it’s possible to patch up black hole thermodynamics, or is it all hopeless?

    My mind is open, but I have to admit that I’m deeply skeptical about it. My suspicion is that black hole “thermodynamics” is really an interesting set of relationships about information from the point of view of the exterior of the black hole. It’s all about forgetting information.

    Because thermodynamics is more than information theory, I don’t think there’s a deep thermodynamic principle operating through the universe that causes black holes to behave the way they do, and I worry that physics is all in on it being a great hint for quantum gravity when it might not be.

    Playing the role of the Socratic gadfly in the foundations of physics is sometimes important. In this case, looking back invites a bit of skepticism that may be useful going forward.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 10:18 am on September 1, 2019
    Tags: Neptune, Voyager 2, Voyager 2 completed its historic flyby 30 years ago. No probe has been back since., WIRED

    From WIRED: “Space Photos of the Week: Tune Into Neptune” 

    Wired logo

    From WIRED

    08.31.2019
    Shannon Stirone

    Voyager 2 completed its historic flyby 30 years ago. No probe has been back since.

    NASA/Voyager 2

    We’re looking at Neptune’s south pole, just barely illuminated by the sun. Voyager 2 took this photo from 560,000 miles away, and even at that extreme distance, the camera on the spacecraft managed to pick up features in the atmosphere as small as 75 miles in diameter. One example: Look on the lower left at the edge of the curve, where you can see a bright white strip of clouds that appears to stretch upward into the shadow. NASA/JPL-Caltech



    Thirty years ago NASA’s Voyager 2 spacecraft flew past Neptune, completing its epic journey through the outer solar system. The eighth and outermost planet in our neighborhood, Neptune is considered one of the ice giants, along with Uranus. But that name is a misnomer since the planet is actually covered in gas, and whatever ice is below that is basically slushy.

    When Voyager launched we had no idea what Uranus and Neptune looked like up close. The mission uncovered two worlds very unlike any other planets in our solar system, and we now know that both have rings, as well as robust storms and bizarre icy moons. And as scientists discover more exoplanets around other stars, many of them end up looking an awful lot like Neptune—which means that the Voyager 2 planetary data from long ago turns out to be a good model for other planets we might discover in the future.

    Did you know Neptune has rings? Most large planets do. Voyager 2 snapped this photo in 1989 during its flyby, and this was the first photo of said rings in detail. Like those surrounding Jupiter and Uranus, Neptune’s rings are likely made out of carbon-containing molecules that have been irradiated by the Sun and become darker as a result. NASA/JPL-Caltech

    As Voyager flew by Neptune it kept turning its camera, capturing this beautiful image of a shadowed, crescent Neptune along with its moon Triton. Triton is dwarfed by the sheer size of Neptune, and the darkness of space around them and their shadow feel like a fitting ending to Voyager 2’s journey. NASA/JPL-Caltech

    Triton from 25,000 miles away: This moon is one of the most interesting in the entire solar system. It’s covered with a snakeskin-textured terrain and even has dust devil-like plumes of nitrogen ice jutting out into space. Something else strange is happening on this desolate moon; the surface is pocked with circular depressions that don’t exist anywhere else in the solar system. Scientists suspect that the frozen substances on the surface could be sinking into the ground or melting away, but until we swing by there again, there’s no way to know for sure. NASA/JPL-Caltech

    This image combines red and green filters on Voyager 2’s narrow-angle camera to show off the true blue of Neptune’s rich atmosphere, composed mostly of helium, hydrogen, and methane. The methane in the upper atmosphere is responsible for absorbing all the red light from the Sun, which is why Neptune is such a deep azure. The winds in that atmosphere can move at speeds of 1,000 miles per hour, though, which keeps things mixed up: The dark oval storm in the north has since disappeared, and this is the only time it has been captured in a photo with the smaller storm below, nicknamed “Skeeter.” NASA/JPL-Caltech

    The ultramarine blue planet almost glows from 4.4 million miles away. While Voyager 2’s mission to Neptune brought the astronomical community a whole new perspective on a far-off planet, it also introduced scientists to many more mysteries, which might not be solved for many decades. NASA/JPL-Caltech

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 11:59 am on July 28, 2019
    Tags: A quantum particle can have a range of possible states known as a “superposition.”, “Quantum-classical transition.”, But why can’t we see a quantum superposition?, Darwin-Survival of the Fittest, Many independent observers can make measurements of a quantum system and agree on the outcome—a hallmark of classical behavior., Quantum Darwinism, The definite properties of objects that we associate with classical physics—position and speed say—are selected from a menu of quantum possibilities., The process is loosely analogous to natural selection in evolution., The vexing question then becomes: How do quantum probabilities coalesce into the sharp focus of the classical world?, This doesn’t really mean it is in several states at once; rather it means that if we make a measurement we will see one of those outcomes., This process by which “quantumness” disappears into the environment is called decoherence., WIRED

    From WIRED: “Quantum Darwinism Could Explain What Makes Reality Real” 

    Wired logo

    From WIRED

    07.28.19
    Philip Ball

    Contrary to popular belief, says physicist Adán Cabello, “quantum theory perfectly describes the emergence of the classical world.” Olena Shmahalo/Quanta Magazine

    It’s not surprising that quantum physics has a reputation for being weird and counterintuitive. The world we’re living in sure doesn’t feel quantum mechanical. And until the 20th century, everyone assumed that the classical laws of physics devised by Isaac Newton and others—according to which objects have well-defined positions and properties at all times—would work at every scale. But Max Planck, Albert Einstein, Niels Bohr and their contemporaries discovered that down among atoms and subatomic particles, this concreteness dissolves into a soup of possibilities. An atom typically can’t be assigned a definite position, for example—we can merely calculate the probability of finding it in various places. The vexing question then becomes: How do quantum probabilities coalesce into the sharp focus of the classical world?

    Physicists sometimes talk about this changeover as the “quantum-classical transition.” But in fact there’s no reason to think that the large and the small have fundamentally different rules, or that there’s a sudden switch between them. Over the past several decades, researchers have achieved a greater understanding of how quantum mechanics inevitably becomes classical mechanics through an interaction between a particle or other microscopic system and its surrounding environment.

    One of the most remarkable ideas in this theoretical framework is that the definite properties of objects that we associate with classical physics—position and speed, say—are selected from a menu of quantum possibilities in a process loosely analogous to natural selection in evolution: The properties that survive are in some sense the “fittest.” As in natural selection, the survivors are those that make the most copies of themselves. This means that many independent observers can make measurements of a quantum system and agree on the outcome—a hallmark of classical behavior.

    This idea, called quantum Darwinism (QD), explains a lot about why we experience the world the way we do rather than in the peculiar way it manifests at the scale of atoms and fundamental particles. Although aspects of the puzzle remain unresolved, QD helps heal the apparent rift between quantum and classical physics.

    Chaoyang Lu (left) and Jian-Wei Pan of the University of Science and Technology of China in Hefei led a recent experiment that tested quantum Darwinism in an artificial environment made of interacting photons. Chaoyang Lu

    Only recently, however, has quantum Darwinism been put to the experimental test. Three research groups, working independently in Italy, China and Germany, have looked for the telltale signature of the natural selection process by which information about a quantum system gets repeatedly imprinted on various controlled environments. These tests are rudimentary, and experts say there’s still much more to be done before we can feel sure that QD provides the right picture of how our concrete reality condenses from the multiple options that quantum mechanics offers. Yet so far, the theory checks out.

    Survival of the Fittest

    At the heart of quantum Darwinism is the slippery notion of measurement—the process of making an observation. In classical physics, what you see is simply how things are. You observe a tennis ball traveling at 200 kilometers per hour because that’s its speed. What more is there to say?

    In quantum physics that’s no longer true. It’s not at all obvious what the formal mathematical procedures of quantum mechanics say about “how things are” in a quantum object; they’re just a prescription telling us what we might see if we make a measurement. Take, for example, the way a quantum particle can have a range of possible states, known as a “superposition.” This doesn’t really mean it is in several states at once; rather, it means that if we make a measurement we will see one of those outcomes. Before the measurement, the various superposed states interfere with one another in a wavelike manner, producing outcomes with higher or lower probabilities.

    But why can’t we see a quantum superposition? Why can’t all possibilities for the state of a particle survive right up to the human scale?

    The answer often given is that superpositions are fragile, easily disrupted when a delicate quantum system is buffeted by its noisy environment. But that’s not quite right. When any two quantum objects interact, they get “entangled” with each other, entering a shared quantum state in which the possibilities for their properties are interdependent. So say an atom is put into a superposition of two possible states for the quantum property called spin: “up” and “down.” Now the atom is released into the air, where it collides with an air molecule and becomes entangled with it. The two are now in a joint superposition. If the atom is spin-up, then the air molecule might be pushed one way, while, if the atom is spin-down, the air molecule goes another way—and these two possibilities coexist. As the particles experience yet more collisions with other air molecules, the entanglement spreads, and the superposition initially specific to the atom becomes ever more diffuse. The atom’s superposed states no longer interfere coherently with one another because they are now entangled with other states in the surrounding environment—including, perhaps, some large measuring instrument. To that measuring device, it looks as though the atom’s superposition has vanished and been replaced by a menu of possible classical-like outcomes that no longer interfere with one another.

    This process by which “quantumness” disappears into the environment is called decoherence. It’s a crucial part of the quantum-classical transition, explaining why quantum behavior becomes hard to see in large systems with many interacting particles. The process happens extremely fast. If a typical dust grain floating in the air were put into a quantum superposition of two different physical locations separated by about the width of the grain itself, collisions with air molecules would cause decoherence—making the superposition undetectable—in about 10⁻³¹ seconds. Even in a vacuum, light photons would trigger such decoherence very quickly: You couldn’t look at the grain without destroying its superposition.
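
    The mechanism is easy to illustrate numerically. Below is a minimal sketch (my own toy model, not drawn from the article or the experiments it discusses; the two-level "environment particles" and the coupling angle theta are assumptions) showing how the interference term of a system qubit decays exponentially as more environment particles become entangled with it:

```python
# Toy model of decoherence: a system qubit in the superposition (|0> + |1>)/sqrt(2)
# becomes entangled with N environment qubits. Each environment qubit ends up in
# |e0> if the system is "up" and |e1> if it is "down", with overlap <e0|e1> = cos(theta).
# Tracing out the environment leaves a system density matrix whose off-diagonal
# ("interference") term shrinks as cos(theta)**N -- quantumness leaks into the environment.
import numpy as np

def coherence_after_entangling(n_env, theta):
    """Build the joint system+environment state, trace out the environment,
    and return |rho_01|, the magnitude of the system's interference term."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    e0 = np.array([c,  s])   # environment state correlated with system |0>
    e1 = np.array([c, -s])   # environment state correlated with system |1>

    # Joint state: (|0>|e0 e0 ... e0> + |1>|e1 e1 ... e1>) / sqrt(2)
    branch0, branch1 = np.array([1.0]), np.array([1.0])
    for _ in range(n_env):
        branch0 = np.kron(branch0, e0)
        branch1 = np.kron(branch1, e1)
    psi = np.concatenate([branch0, branch1]) / np.sqrt(2)

    # Reduced density matrix of the system: trace out all environment qubits.
    psi = psi.reshape(2, -1)
    rho_sys = psi @ psi.conj().T
    return abs(rho_sys[0, 1])

theta = 0.3  # assumed (arbitrary) system-environment coupling strength
for n in [0, 1, 5, 10, 20]:
    print(n, coherence_after_entangling(n, theta), 0.5 * np.cos(theta) ** n)
```

    The last column prints the analytic prediction, 0.5·cos(theta)^N, which the numerically traced-out value matches: with enough environment particles the superposition becomes effectively invisible to any observer who only looks at the system.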

    Surprisingly, although decoherence is a straightforward consequence of quantum mechanics, it was only identified in the 1970s, by the late German physicist Heinz-Dieter Zeh. The Polish-American physicist Wojciech Zurek further developed the idea in the early 1980s and made it better known, and there is now good experimental support for it.

    Wojciech Zurek, a theoretical physicist at Los Alamos National Laboratory in New Mexico, developed the quantum Darwinism theory in the 2000s to account for the emergence of objective, classical reality. Los Alamos National Laboratory

    But to explain the emergence of objective, classical reality, it’s not enough to say that decoherence washes away quantum behavior and thereby makes it appear classical to an observer. Somehow, it’s possible for multiple observers to agree about the properties of quantum systems. Zurek, who works at Los Alamos National Laboratory in New Mexico, argues that two things must therefore be true.

    First, quantum systems must have states that are especially robust in the face of disruptive decoherence by the environment. Zurek calls these “pointer states,” because they can be encoded in the possible states of a pointer on the dial of a measuring instrument. A particular location of a particle, for instance, or its speed, the value of its quantum spin, or its polarization direction can be registered as the position of a pointer on a measuring device. Zurek argues that classical behavior—the existence of well-defined, stable, objective properties—is possible only because pointer states of quantum objects exist.

    What’s special mathematically about pointer states is that the decoherence-inducing interactions with the environment don’t scramble them: Either the pointer state is preserved, or it is simply transformed into a state that looks nearly identical. This implies that the environment doesn’t squash quantumness indiscriminately but selects some states while trashing others. A particle’s position is resilient to decoherence, for example. Superpositions of different locations, however, are not pointer states: Interactions with the environment decohere them into localized pointer states, so that only one can be observed. Zurek described this “environment-induced superselection” of pointer states in the 1980s [Physical Review D].

    But there’s a second condition that a quantum property must meet to be observed. Although immunity to interaction with the environment assures the stability of a pointer state, we still have to get at the information about it somehow. We can do that only if it gets imprinted in the object’s environment. When you see an object, for example, that information is delivered to your retina by the photons scattering off it. They carry information to you in the form of a partial replica of certain aspects of the object, saying something about its position, shape and color. Lots of replicas are needed if many observers are to agree on a measured value—a hallmark of classicality. Thus, as Zurek argued in the 2000s, our ability to observe some property depends not only on whether it is selected as a pointer state, but also on how substantial a footprint it makes in the environment. The states that are best at creating replicas in the environment—the “fittest,” you might say—are the only ones accessible to measurement. That’s why Zurek calls the idea quantum Darwinism [Nature Physics].

    It turns out that the same stability property that promotes environment-induced superselection of pointer states also promotes quantum Darwinian fitness, or the capacity to generate replicas. “The environment, through its monitoring efforts, decoheres systems,” Zurek said, “and the very same process that is responsible for decoherence should inscribe multiple copies of the information in the environment.”

    Information Overload

    It doesn’t matter, of course, whether information about a quantum system that gets imprinted in the environment is actually read out by a human observer; all that matters for classical behavior to emerge is that the information get there so that it could be read out in principle. “A system doesn’t have to be under study in any formal sense” to become classical, said Jess Riedel, a physicist at the Perimeter Institute for Theoretical Physics in Waterloo, Canada, and a proponent of quantum Darwinism.


    “QD putatively explains, or helps to explain, all of classicality, including everyday macroscopic objects that aren’t in a laboratory, or that existed before there were any humans.”

    About a decade ago, while Riedel was working as a graduate student with Zurek, the two showed theoretically that information from some simple, idealized quantum systems is “copied prolifically into the environment,” Riedel said, “so that it’s necessary to access only a small amount of the environment to infer the value of the variables.” They calculated [Physical Review Letters] that a grain of dust one micrometer across, after being illuminated by the sun for just one microsecond, will have its location imprinted about 100 million times in the scattered photons.

    It’s because of this redundancy that objective, classical-like properties exist at all. Ten observers can each measure the position of a dust grain and find that it’s in the same location, because each can access a distinct replica of the information. In this view, we can assign an objective “position” to the speck not because it “has” such a position (whatever that means) but because its position state can imprint many identical replicas in the environment, so that different observers can reach a consensus.

    What’s more, you don’t have to monitor much of the environment to gather most of the available information—and you don’t gain significantly more by monitoring more than a fraction of the environment. “The information one can gather about the system quickly saturates,” Riedel said.
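
    To see what that saturation looks like in the simplest possible toy model (my own illustration, not taken from the papers; the idealized "branching" state below is an assumption that stands in for a real decohering environment), one can compute the mutual information between a system qubit and growing fragments of its environment:

```python
# Quantum Darwinism's redundancy plateau in a caricature: the branching state
# (|0>|00...0> + |1>|11...1>)/sqrt(2), where every environment qubit holds a perfect
# record of the system's pointer bit. The mutual information I(S:F) between the
# system S and an environment fragment F jumps to H(S) = 1 bit as soon as F contains
# a single qubit and then stays flat -- the saturation Riedel describes -- reaching
# 2 bits only when F is the *entire* environment.
import numpy as np

N_ENV = 6  # number of environment qubits (total Hilbert space is 2**(N_ENV + 1))

def branching_state(n_qubits):
    psi = np.zeros(2 ** n_qubits)
    psi[0] = psi[-1] = 1 / np.sqrt(2)   # |00...0> + |11...1>
    return psi

def reduced_density_matrix(psi, keep, n_qubits):
    """Trace out every qubit not listed in `keep` (qubit 0 = system, 1..N = environment)."""
    psi = psi.reshape([2] * n_qubits)
    traced = [q for q in range(n_qubits) if q not in keep]
    psi = np.moveaxis(psi, keep + traced, list(range(n_qubits)))
    psi = psi.reshape(2 ** len(keep), -1)
    return psi @ psi.conj().T

def entropy_bits(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

n_qubits = N_ENV + 1
psi = branching_state(n_qubits)
for frag_size in range(N_ENV + 1):
    fragment = list(range(1, 1 + frag_size))
    h_s  = entropy_bits(reduced_density_matrix(psi, [0], n_qubits))
    h_f  = entropy_bits(reduced_density_matrix(psi, fragment, n_qubits)) if fragment else 0.0
    h_sf = entropy_bits(reduced_density_matrix(psi, [0] + fragment, n_qubits))
    print(f"fragment of {frag_size} env qubits: I(S:F) = {h_s + h_f - h_sf:.2f} bits")
```

    Running this prints 0 bits for an empty fragment, 1 bit for every fragment from one qubit up to five, and 2 bits only for the whole environment: the classical information about the pointer bit is available almost immediately and redundantly, while the purely quantum correlations remain hidden unless you capture everything.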

    This redundancy is the distinguishing feature of QD, explained Mauro Paternostro, a physicist at Queen’s University Belfast who was involved in one of the three new experiments. “It’s the property that characterizes the transition towards classicality,” he said.

    Quantum Darwinism challenges a common myth about quantum mechanics, according to the theoretical physicist Adán Cabello of the University of Seville in Spain: namely, that the transition between the quantum and classical worlds is not understood and that measurement outcomes cannot be described by quantum theory. On the contrary, he said, “quantum theory perfectly describes the emergence of the classical world.”

    Just how perfectly remains contentious, however. Some researchers think decoherence and QD provide a complete account of the quantum-classical transition. But although these ideas attempt to explain why superpositions vanish at large scales and why only concrete “classical” properties remain, there’s still the question of why measurements give unique outcomes. When a particular location of a particle is selected, what happens to the other possibilities inherent in its quantum description? Were they ever in any sense real? Researchers are compelled to adopt philosophical interpretations of quantum mechanics precisely because no one can figure out a way to answer that question experimentally.

    Into the Lab

    Quantum Darwinism looks fairly persuasive on paper. But until recently that was as far as it got. In the past year, three teams of researchers have independently put the theory to the experimental test by looking for its key feature: how a quantum system imprints replicas of itself on its environment.

    The experiments depended on the ability to closely monitor what information about a quantum system gets imparted to its environment. That’s not feasible for, say, a dust grain floating among countless billions of air molecules. So two of the teams created a quantum object in a kind of “artificial environment” with only a few particles in it. Both experiments—one by Paternostro [Physical Review A] and collaborators at Sapienza University of Rome, and the other by the quantum-information expert Jian-Wei Pan [https://arxiv.org/abs/1808.07388] and co-authors at the University of Science and Technology of China—used a single photon as the quantum system, with a handful of other photons serving as the “environment” that interacts with it and broadcasts information about it.

    Both teams passed laser photons through optical devices that could combine them into multiply entangled groups. They then interrogated the environment photons to see what information they encoded about the system photon’s pointer state—in this case its polarization (the orientation of its oscillating electromagnetic fields), one of the quantum properties able to pass through the filter of quantum Darwinian selection.

    A key prediction of QD is the saturation effect: Pretty much all the information you can gather about the quantum system should be available if you monitor just a handful of surrounding particles. “Any small fraction of the interacting environment is enough to provide the maximal classical information about the observed system,” Pan said.

    The two teams found precisely this. Measurements of just one of the environment photons revealed a lot of the available information about the system photon’s polarization, and measuring an increasing fraction of the environment photons provided diminishing returns. Even a single photon can act as an environment that introduces decoherence and selection, Pan explained, if it interacts strongly enough with the lone system photon. When interactions are weaker, a larger environment must be monitored.

    Fedor Jelezko, director of the Institute for Quantum Optics at Ulm University in Germany. Ulm University

    A team led by Jelezko probed the state of a nitrogen “defect” inside a synthetic diamond (shown mounted on the right) by monitoring surrounding carbon atoms. Their findings confirmed predictions of a theory known as quantum Darwinism. Ulm University

    The third experimental test of QD, led by the quantum-optical physicist Fedor Jelezko at Ulm University in Germany in collaboration with Zurek and others, used a very different system and environment, consisting of a lone nitrogen atom substituting for a carbon atom in the crystal lattice of a diamond—a so-called nitrogen-vacancy defect. Because the nitrogen atom has one more electron than carbon, this excess electron cannot pair up with those on neighboring carbon atoms to form a chemical bond. As a result, the nitrogen atom’s unpaired electron acts as a lone “spin,” which is like an arrow pointing up or down or, in general, in a superposition of both possible directions.

    This spin can interact magnetically with those of the roughly 0.3 percent of carbon nuclei present in the diamond as the isotope carbon-13, which, unlike the more abundant carbon-12, also has spin. On average, each nitrogen-vacancy spin is strongly coupled to four carbon-13 spins within a distance of about 1 nanometer.

    By controlling and monitoring the spins using lasers and radio-frequency pulses, the researchers could measure how a change in the nitrogen spin is registered by changes in the nuclear spins of the environment. As they reported in a preprint last September, they too observed the characteristic redundancy predicted by QD: The state of the nitrogen spin is “recorded” as multiple copies in the surroundings, and the information about the spin saturates quickly as more of the environment is considered.

    Zurek says that because the photon experiments create copies in an artificial way that simulates an actual environment, they don’t incorporate a selection process that picks out “natural” pointer states resilient to decoherence. Rather, the researchers themselves impose the pointer states. In contrast, the diamond environment does elicit pointer states. “The diamond scheme also has problems, because of the size of the environment,” Zurek added, “but at least it is, well, natural.”

    Generalizing Quantum Darwinism

    So far, so good for quantum Darwinism. “All these studies see what is expected, at least approximately,” Zurek said.

    Riedel says we could hardly expect otherwise, though: In his view, QD is really just the careful and systematic application of standard quantum mechanics to the interaction of a quantum system with its environment. Although this is virtually impossible to do in practice for most quantum measurements, if you can sufficiently simplify a measurement, the predictions are clear, he said: “QD is most like an internal self-consistency check on quantum theory itself.”

    But although these studies seem consistent with QD, they can’t be taken as proof that it is the sole description for the emergence of classicality, or even that it’s wholly correct. For one thing, says Cabello, the three experiments offer only schematic versions of what a real environment consists of. What’s more, the experiments don’t cleanly rule out other ways to view the emergence of classicality. A theory called “spectrum broadcasting,” for example, developed by Pawel Horodecki at the Gdańsk University of Technology in Poland and collaborators, attempts to generalize QD. Spectrum broadcast theory (which has only been worked through for a few idealized cases) identifies those states of an entangled quantum system and environment that provide objective information that many observers can obtain without perturbing it. In other words, it aims to ensure not just that different observers can access replicas of the system in the environment, but that by doing so they don’t affect the other replicas. That too is a feature of genuinely “classical” measurements.

    Horodecki and other theorists have also sought to embed QD in a theoretical framework that doesn’t demand any arbitrary division of the world into a system and its environment, but just considers how classical reality can emerge from interactions between various quantum systems. Paternostro says it might be challenging to find experimental methods capable of identifying the rather subtle distinctions between the predictions of these theories.

    Still, researchers are trying, and the very attempt should refine our ability to probe the workings of the quantum realm. “The best argument for performing these experiments probably is that they are good exercise,” Riedel said. “Directly illustrating QD can require some very difficult measurements that will push the boundaries of existing laboratory techniques.” The only way we can find out what measurement really means, it seems, is by making better measurements.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 7:13 am on July 21, 2019
    Tags: "Where Do Supermassive Black Holes Come From?", Caltech/MIT Advanced aLigo and Advanced VIRGO, WIRED

    From Western University, CA and WIRED: “Where Do Supermassive Black Holes Come From?” 

    From Western University Canada

    Scott Woods, Western University, illustration of a supermassive black hole, via WIRED

    June 28, 2019

    Researchers decipher the history of supermassive black holes in the early universe.

    At Western University
    MEDIA CONTACT:
    Jeff Renaud, Senior Media Relations Officer,
    519-661-2111, ext. 85165,
    519-520-7281 (mobile),
    jrenaud9@uwo.ca, @jeffrenaud99

    07.18.19
    From Wired
    Meredith Fore

    A pair of researchers at Western University in Ontario, Canada, developed their model by looking at quasars, which are supermassive black holes. NASA

    Astronomers have a pretty good idea of how most black holes form: A massive star dies, and after it goes supernova, the remaining mass (if there’s enough of it) collapses under the force of its own gravity, leaving behind a black hole that’s between five and 50 times the mass of our Sun. What this tidy origin story fails to explain is where supermassive black holes, which range from 100,000 to tens of billions of times the mass of the Sun, come from. These monsters exist at the center of almost all galaxies in the universe, and some emerged only 690 million years after the Big Bang. In cosmic terms, that’s practically the blink of an eye—not nearly long enough for a star to be born, collapse into a black hole, and eat enough mass to become supermassive.

    One long-standing explanation for this mystery, known as the direct-collapse theory, hypothesizes that ancient black holes somehow got big without the benefit of a supernova stage. Now a pair of researchers at Western University in Ontario, Canada—Shantanu Basu and Arpan Das—have found some of the first solid observational evidence for the theory. As they described late last month in The Astrophysical Journal Letters, they did it by looking at quasars.

    Quasars are supermassive black holes that continuously suck in, or accrete, large amounts of matter; they get a special name because the stuff falling into them emits bright radiation, making them easier to observe than many other kinds of black holes. The distribution of their masses—how many are bigger, how many are smaller, and how many are in between—is the main indicator of how they formed.

    Astrophysicists at Western University have found evidence for the direct formation of black holes that do not need to emerge from a star remnant. The production of black holes in the early universe, formed in this manner, may provide scientists with an explanation for the presence of extremely massive black holes at a very early stage in the history of our universe.

    After analyzing that information, Basu and Das proposed that the supermassive black holes might have arisen from a chain reaction. They can’t say exactly where the seeds of the black holes came from in the first place, but they think they know what happened next. Each time one of the nascent black holes accreted matter, it would radiate energy, which would heat up neighboring gas clouds. A hot gas cloud collapses more easily than a cold one; with each big meal, the black hole would emit more energy, heating up other gas clouds, and so on. This fits the conclusions of several other astronomers, who believe that the population of supermassive black holes increased at an exponential rate in the universe’s infancy.

    “This is indirect observational evidence that black holes originate from direct-collapses and not from stellar remnants,” says Basu, an astronomy professor at Western who is internationally recognized as an expert in the early stages of star formation and protoplanetary disk evolution.

    Basu and Das developed the new mathematical model by calculating the mass function of supermassive black holes that form over a limited time period and undergo a rapid exponential growth of mass. The mass growth can be regulated by the Eddington limit, which is set by a balance of radiation and gravitational forces, or can even exceed it by a modest factor.

    “Supermassive black holes only had a short time period where they were able to grow fast and then at some point, because of all the radiation in the universe created by other black holes and stars, their production came to a halt,” explains Basu. “That’s the direct-collapse scenario.”

    But at some point, the chain reaction stopped. As more and more black holes—and stars and galaxies—were born and started radiating energy and light, the gas clouds evaporated. “The overall radiation field in the universe becomes too strong to allow such large amounts of gas to collapse directly,” Basu says. “And so the whole process comes to an end.” He and Das estimate that the chain reaction lasted about 150 million years.

    The generally accepted speed limit for black hole growth is called the Eddington rate, a balance between the outward force of radiation and the inward force of gravity. This speed limit can theoretically be exceeded if the matter is collapsing fast enough; the Basu and Das model suggests black holes were accreting matter at three times the Eddington rate for as long as the chain reaction was happening. For astronomers regularly dealing with numbers in the millions, billions, and trillions, three is quite modest.
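
    As a rough back-of-the-envelope check (my own sketch, not a calculation from Basu and Das’ paper; the seed mass, target mass, and radiative efficiency below are assumed illustrative values), Eddington-limited growth implies an exponential e-folding time of roughly 45 million years, which is why even a modest super-Eddington factor matters so much on a 690-million-year timescale:

```python
# Back-of-the-envelope estimate of Eddington-limited black hole growth.
# A black hole accreting at the Eddington rate grows exponentially with an
# e-folding (Salpeter) time t_Sal = eta * c * sigma_T / (4 * pi * G * m_p).
# Growing a stellar-mass seed into a billion-solar-mass quasar within ~690 Myr
# of the Big Bang is marginal at 1x Eddington but comfortable at ~3x Eddington.
import numpy as np

G       = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c       = 2.998e8        # speed of light, m/s
m_p     = 1.673e-27      # proton mass, kg
sigma_T = 6.652e-29      # Thomson cross-section, m^2
M_sun   = 1.989e30       # solar mass, kg
Myr     = 3.156e13       # one million years, s

eta      = 0.1            # assumed radiative efficiency
M_seed   = 50   * M_sun   # assumed stellar-remnant seed mass
M_target = 1e9  * M_sun   # assumed quasar-scale target mass

t_salpeter = eta * c * sigma_T / (4 * np.pi * G * m_p)   # e-folding time, seconds
print(f"Salpeter e-folding time: {t_salpeter / Myr:.0f} Myr")

for boost in (1.0, 3.0):   # accretion at 1x and 3x the Eddington rate
    t_grow = (t_salpeter / boost) * np.log(M_target / M_seed)
    print(f"{boost:.0f}x Eddington: about {t_grow / Myr:.0f} Myr to grow from seed to target")
```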

    “If the numbers had turned out crazy, like you need 100 times the Eddington accretion rate, or the production period is 2 billion years, or 10 years,” Basu says, “then we’d probably have to conclude that the model is wrong.”

    There are many other theories for how direct-collapse black holes could be created: Perhaps halos of dark matter formed ultramassive quasi-stars that then collapsed, or dense clusters of regular mass stars merged and then collapsed.

    For Basu and Das, one strength of their model is that it doesn’t depend on how the giant seeds were created. “It’s not dependent on some person’s very specific scenario, specific chain of events happening in a certain way,” Basu says. “All this requires is that some very massive black holes did form in the early universe, and they formed in a chain reaction process, and it only lasted a brief time.”

    The ability to see a supermassive black hole forming is still out of reach; existing telescopes can’t look that far back yet. But that may change in the next decade as powerful new tools come online, including the James Webb Space Telescope, the Wide Field Infrared Survey Telescope, and the Laser Interferometer Space Antenna—all of which will observe from space—as well as the Large Synoptic Survey Telescope, based in Chile.

    Image captions: NASA/ESA/CSA James Webb Space Telescope (annotated); NASA WFIRST; ESA/LISA Pathfinder; ESA/NASA eLISA, space-based gravitational-wave observatory; LSST Camera, built at SLAC; LSST telescope, under construction on Cerro Pachón, a 2,682-meter-high mountain in Coquimbo Region, northern Chile, alongside the existing Gemini South and Southern Astrophysical Research Telescopes.

    In the next five or 10 years, Basu adds, as the “mountain of data” comes in, models like his and his colleague’s will help astronomers interpret what they see.

    Avi Loeb, one of the pioneers of direct-collapse black hole theory and the director of the Black Hole Initiative at Harvard, is especially excited for the Laser Interferometer Space Antenna. Set to launch in the 2030s, it will allow scientists to measure gravitational waves—fine ripples in the fabric of space-time—more accurately than ever before.

    “We have already started the era of gravitational wave astronomy with stellar-mass black holes,” he says, referring to the black hole mergers detected by the ground-based Laser Interferometer Gravitational-Wave Observatory.

    Its space-based counterpart, Loeb anticipates, could provide a better “census” of the supermassive black hole population.

    For Basu, the question of how supermassive black holes are created is “one of the big chinks in the armor” of our current understanding of the universe. The new model “is a way of making everything work according to current observations,” he says. But Das remains open to any surprises delivered by the spate of new detectors—since surprises, after all, are often how science progresses.

    Image captions: Caltech/MIT Advanced LIGO detector installations at Hanford, WA and Livingston, LA; the VIRGO gravitational-wave interferometer near Pisa, Italy; the LIGO Scientific Collaboration; Cornell SXS, the Simulating eXtreme Spacetimes project; gravitational waves (credit: MPI for Gravitational Physics/W. Benger); ESA/NASA eLISA; sky localizations of the gravitational-wave signals detected by LIGO (GW150914, LVT151012, GW151226, GW170104) and by the LIGO-Virgo network (GW170814, GW170817), showing how adding Virgo to LIGO shrinks the source-likely region in the sky (credit: Giuseppe Greco, Virgo Urbino group).

    See the full WIRED article here.
    See the full Western University article here.

    The University of Western Ontario (UWO), corporately branded as Western University as of 2012 and commonly shortened to Western, is a public research university in London, Ontario, Canada. The main campus is on 455 hectares (1,120 acres) of land, surrounded by residential neighbourhoods and the Thames River bisecting the campus’s eastern portion. The university operates twelve academic faculties and schools. It is a member of the U15, a group of research-intensive universities in Canada.

    The university was founded on 7 March 1878 by Bishop Isaac Hellmuth of the Anglican Diocese of Huron as the Western University of London, Ontario. It incorporated Huron University College, which had been founded in 1863. The first four faculties were Arts, Divinity, Law and Medicine. The Western University of London became non-denominational in 1908. Since 1919, the university has affiliated with several denominational colleges. The university grew substantially in the post-World War II era, as a number of faculties and schools were added to the university.

    Western is a co-educational university, with more than 24,000 students, and with over 306,000 living alumni worldwide. Notable alumni include government officials, academics, business leaders, Nobel Laureates, Rhodes Scholars, and distinguished fellows. Western’s varsity teams, known as the Western Mustangs, compete in the Ontario University Athletics conference of U Sports.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 9:33 am on June 28, 2019
    Tags: "Jony Ive Is Leaving Apple", WIRED

    From WIRED: “Jony Ive Is Leaving Apple” 

    Wired logo

    From WIRED

    Jony Ive. iMore

    The man who designed the iMac, the iPod, the iPhone—and even the Apple Store—is leaving Apple. Jony Ive announced in an interview with the Financial Times on Thursday that he was departing the company after more than two decades to start LoveFrom, a creative agency that will count Apple as its first client. The transition will start later this year, and LoveFrom will formally launch in 2020.

    Ive has been an indispensable leader at Apple and the chief guide of the company’s aesthetic vision. His role took on even greater importance after Apple cofounder Steve Jobs died of pancreatic cancer in 2011. Apple will not immediately appoint a new chief design officer. Instead, Alan Dye, who leads Apple’s user interface team, and Evans Hankey, head of industrial design, will report directly to Apple’s chief operating officer, Jeff Williams, according to the Financial Times.

    “This just seems like a natural and gentle time to make this change,” Ive said in the interview, somewhat perplexingly. Apple’s business is currently weathering many changes: slumping iPhone sales, an increasingly tense trade war between President Trump’s administration and China, the April departure of retail chief Angela Ahrendts. The company is also in the midst of a pivot away from hardware devices to software services.

    It’s not clear exactly what LoveFrom will work on, and Ive was relatively vague about the nature of the firm, though he said he will continue to work on technology and health care. Another Apple design employee, Marc Newson, is also leaving to join the new venture. This isn’t the first time the pair have worked on a non-Apple project together. In 2013, they designed a custom Leica camera that was sold at auction to benefit the Global Fund to Fight AIDS, Tuberculosis and Malaria.

    During an interview with Anna Wintour last November at the WIRED25 summit, Ive discussed the creative process and how he sees his responsibility as a mentor at Apple. “I still think it’s so remarkable that ideas can become so powerful and so literally world-changing,” he said. “But those same ideas at the beginning are shockingly fragile. I think the creative process doesn’t naturally or easily sit in a large group of people.”

    Ive left the London design studio Tangerine and moved to California to join Apple in 1992. He became senior vice president of industrial design in 1997, after Jobs returned to the company. The next year, the iMac G3 was released, which would prove to be Ive’s first major hit, helping to turn around Apple’s then struggling business. He later helped oversee the design of Apple’s new headquarters, Apple Park.

    “It’s frustrating to talk about this building in terms of absurd, large numbers,” Ive told WIRED’s Steven Levy when the campus opened in 2017. “It makes for an impressive statistic, but you don’t live in an impressive statistic. While it is a technical marvel to make glass at this scale, that’s not the achievement. The achievement is to make a building where so many people can connect and collaborate and walk and talk.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 9:34 am on May 27, 2019
    Tags: "These Hidden Women Helped Invent Chaos Theory", Margaret Hamilton, Miss Ellen Fetter, Royal McBee LGP-30, Strange attractor, The butterfly effect, WIRED

    From WIRED: “These Hidden Women Helped Invent Chaos Theory” 

    Wired logo

    From WIRED

    Ellen Fetter and Margaret Hamilton were responsible for programming the enormous 1960s-era computer that would uncover strange attractors and other hallmarks of chaos theory. Credit: Olena Shmahalo/Quanta Magazine

    Ellen Fetter in 1963, the year Lorenz’s seminal paper came out. Courtesy of Ellen Gille

    Margaret Hamilton. Photo: Wikimedia Commons

    A little over half a century ago, chaos started spilling out of a famous experiment. It came not from a petri dish, a beaker or an astronomical observatory, but from the vacuum tubes and diodes of a Royal McBee LGP-30.

    Royal McBee LGP-30. Credit: Ed Thelen

    This “desk” computer—it was the size of a desk—weighed some 800 pounds and sounded like a passing propeller plane. It was so loud that it even got its own office on the fifth floor in Building 24, a drab structure near the center of the Massachusetts Institute of Technology.


    Instructions for the computer came from down the hall, from the office of a meteorologist named Edward Norton Lorenz.

    The story of chaos is usually told like this: Using the LGP-30, Lorenz made paradigm-wrecking discoveries. In 1961, having programmed a set of equations into the computer that would simulate future weather, he found that tiny differences in starting values could lead to drastically different outcomes. This sensitivity to initial conditions, later popularized as the butterfly effect, made predicting the far future a fool’s errand. But Lorenz also found that these unpredictable outcomes weren’t quite random, either. When visualized in a certain way, they seemed to prowl around a shape called a strange attractor.
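
    Lorenz’s discovery is easy to reproduce today. Here is a minimal sketch (using the now-standard Lorenz 1963 equations with his classic parameters and a simple Euler integrator; this is an illustration, not Lorenz’s actual weather model or the LGP-30 code) in which two trajectories that start a millionth apart end up wildly different, even though both trace out the same butterfly-shaped strange attractor:

```python
# Sensitivity to initial conditions in the Lorenz system: integrate two nearby
# starting points and watch their separation grow until it is as large as the
# attractor itself.
import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0   # Lorenz's classic parameter choices

def lorenz_step(state, dt=0.001):
    """One forward-Euler step of the Lorenz equations (small dt for stability)."""
    x, y, z = state
    dx = SIGMA * (y - x)
    dy = x * (RHO - z) - y
    dz = x * y - BETA * z
    return state + dt * np.array([dx, dy, dz])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-6, 0.0, 0.0])   # a perturbation in the sixth decimal place

for step in range(40001):
    if step % 10000 == 0:
        t = step * 0.001
        print(f"t = {t:5.1f}  separation = {np.linalg.norm(a - b):.6f}")
    a, b = lorenz_step(a), lorenz_step(b)
```

    The printed separation grows from one part in a million to roughly the size of the attractor within a few tens of time units, which is the butterfly effect in miniature: rounding the initial conditions, as Lorenz did when he restarted a run mid-way, is enough to change the forecast completely.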

    About a decade later, chaos theory started to catch on in scientific circles. Scientists soon encountered other unpredictable natural systems that looked random even though they weren’t: the rings of Saturn, blooms of marine algae, Earth’s magnetic field, the number of salmon in a fishery. Then chaos went mainstream with the publication of James Gleick’s Chaos: Making a New Science in 1987. Before long, Jeff Goldblum, playing the chaos theorist Ian Malcolm, was pausing, stammering and charming his way through lines about the unpredictability of nature in Jurassic Park.

    All told, it’s a neat narrative. Lorenz, “the father of chaos,” started a scientific revolution on the LGP-30. It is quite literally a textbook case for how the numerical experiments that modern science has come to rely on—in fields ranging from climate science to ecology to astrophysics—can uncover hidden truths about nature.

    But in fact, Lorenz was not the one running the machine. There’s another story, one that has gone untold for half a century. A year and a half ago, an MIT scientist happened across a name he had never heard before and started to investigate. The trail he ended up following took him into the MIT archives, through the stacks of the Library of Congress, and across three states and five decades to find information about the women who, today, would have been listed as co-authors on that seminal paper. And that material, shared with Quanta, provides a fuller, fairer account of the birth of chaos.

    The Birth of Chaos

    In the fall of 2017, the geophysicist Daniel Rothman, co-director of MIT’s Lorenz Center, was preparing for an upcoming symposium. The meeting would honor Lorenz, who died in 2008, so Rothman revisited Lorenz’s epochal paper, a masterwork on chaos titled Deterministic Nonperiodic Flow. Published in 1963, it has since attracted thousands of citations, and Rothman, having taught this foundational material to class after class, knew it like an old friend. But this time he saw something he hadn’t noticed before. In the paper’s acknowledgments, Lorenz had written, “Special thanks are due to Miss Ellen Fetter for handling the many numerical computations.”

    “Jesus … who is Ellen Fetter?” Rothman recalls thinking at the time. “It’s one of the most important papers in computational physics and, more broadly, in computational science,” he said. And yet he couldn’t find anything about this woman. “Of all the volumes that have been written about Lorenz, the great discovery — nothing.”

    With further online searches, however, Rothman found a wedding announcement from 1963. Ellen Fetter had married John Gille, a physicist, and changed her name. A colleague of Rothman’s then remembered that a graduate student named Sarah Gille had studied at MIT in the 1990s in the very same department as Lorenz and Rothman. Rothman reached out to her, and it turned out that Sarah Gille, now a physical oceanographer at the University of California, San Diego, was Ellen and John’s daughter. Through this connection, Rothman was able to get Ellen Gille, née Fetter, on the phone. And that’s when he learned another name, the name of the woman who had preceded Fetter in the job of programming Lorenz’s first meetings with chaos: Margaret Hamilton.

    When Margaret Hamilton arrived at MIT in the summer of 1959, with a freshly minted math degree from Earlham College, Lorenz had only recently bought and taught himself to use the LGP-30. Hamilton had no prior training in programming either. Then again, neither did anyone else at the time. “He loved that computer,” Hamilton said. “And he made me feel the same way about it.”

    For Hamilton, these were formative years. She recalls being out at a party at three or four a.m., realizing that the LGP-30 wasn’t set to produce results by the next morning, and rushing over with a few friends to start it up. Another time, frustrated by all the things that had to be done to make another run after fixing an error, she devised a way to bypass the computer’s clunky debugging process. To Lorenz’s delight, Hamilton would take the paper tape that fed the machine, roll it out the length of the hallway, and edit the binary code with a sharp pencil. “I’d poke holes for ones, and I’d cover up with Scotch tape the others,” she said. “He just got a kick out of it.”

    There were desks in the computer room, but because of the noise, Lorenz, his secretary, his programmer and his graduate students all shared the other office. The plan was to use the desk computer, then a total novelty, to test competing strategies of weather prediction in a way you couldn’t do with pencil and paper.

    First, though, Lorenz’s team had to do the equivalent of catching the Earth’s atmosphere in a jar. Lorenz idealized the atmosphere in 12 equations that described the motion of gas in a rotating, stratified fluid. Then the team coded them in.

    Sometimes the “weather” inside this simulation would simply repeat like clockwork. But Lorenz found a more interesting and more realistic set of solutions that generated weather that wasn’t periodic. The team set up the computer to slowly print out a graph of how one or two variables—say, the latitude of the strongest westerly winds—changed over time. They would gather around to watch this imaginary weather, even placing little bets on what the program would do next.

    And then one day it did something really strange. This time they had set up the printer not to make a graph, but simply to print out time stamps and the values of a few variables at each time. As Lorenz later recalled, they had re-run a previous weather simulation with what they thought were the same starting values, reading off the earlier numbers from the previous printout. But those weren’t actually the same numbers. The computer was keeping track of numbers to six decimal places, but the printer, to save space on the page, had rounded them to only the first three decimal places.

    After the second run started, Lorenz went to get coffee. The new numbers that emerged from the LGP-30 while he was gone looked at first like the ones from the previous run. This new run had started in a very similar place, after all. But the errors grew exponentially. After about two months of imaginary weather, the two runs looked nothing alike. This system was still deterministic, with no random chance intruding between one moment and the next. Even so, its hair-trigger sensitivity to initial conditions made it unpredictable.
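    The divergence is easy to reproduce today. The sketch below is a minimal illustration rather than Lorenz’s original program: it uses the three-variable convection system he published in 1963 (his 1961 weather model had 12 equations), starts one run from an arbitrary full-precision state and a second run from that state rounded to three decimal places, as the printer did, and prints how quickly the two trajectories drift apart.

```python
# Minimal sketch of the rounding incident. Assumptions: the three-variable
# Lorenz-63 system with its standard parameters stands in for the original
# 12-equation weather model; the starting state is arbitrary.
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(state, dt):
    # One fourth-order Runge-Kutta step.
    k1 = lorenz(state)
    k2 = lorenz(state + 0.5 * dt * k1)
    k3 = lorenz(state + 0.5 * dt * k2)
    k4 = lorenz(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt = 0.01
a = np.array([1.234567, 2.345678, 20.456789])  # "full precision" start (illustrative)
b = np.round(a, 3)                             # same start, truncated like the printout

for step in range(1, 3001):
    a, b = rk4_step(a, dt), rk4_step(b, dt)
    if step % 500 == 0:
        print(f"t = {step * dt:5.1f}   separation = {np.linalg.norm(a - b):.4f}")
# The separation starts near 1e-3 and grows until the two "weather" runs
# have nothing to do with each other.
```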

    This meant that in chaotic systems the smallest fluctuations get amplified. Weather predictions fail once they reach some point in the future because we can never measure the initial state of the atmosphere precisely enough. Or, as Lorenz would later present the idea, even a seagull flapping its wings might eventually make a big difference to the weather. (In 1972, the seagull was deposed when a conference organizer, unable to check back about what Lorenz wanted to call an upcoming talk, wrote his own title that switched the metaphor to a butterfly.)

    Many accounts, including the one in Gleick’s book, date the discovery of this butterfly effect to 1961, with the paper following in 1963. But in November 1960, Lorenz described it during the Q&A session following a talk he gave at a conference on numerical weather prediction in Tokyo. After his talk, a question came from a member of the audience: “Did you change the initial condition just slightly and see how much different results were?”

    “As a matter of fact, we tried out that once with the same equation to see what could happen,” Lorenz said. He then started to explain the unexpected result, which he wouldn’t publish for three more years. “He just gives it all away,” Rothman said now. But no one at the time registered it enough to scoop him.

    In the summer of 1961, Hamilton moved on to another project, but not before training her replacement. Two years after Hamilton first stepped on campus, Ellen Fetter showed up at MIT in much the same fashion: a recent graduate of Mount Holyoke with a degree in math, seeking any sort of math-related job in the Boston area, eager and able to learn. She interviewed with a woman who ran the LGP-30 in the nuclear engineering department, who recommended her to Hamilton, who hired her.

    Once Fetter arrived in Building 24, Lorenz gave her a manual and a set of programming problems to practice, and before long she was up to speed. “He carried a lot in his head,” she said. “He would come in with maybe one yellow sheet of paper, a legal piece of paper in his pocket, pull it out, and say, ‘Let’s try this.’”

    The project had progressed meanwhile. The 12 equations produced fickle weather, but even so, that weather seemed to prefer a narrow set of possibilities among all possible states, forming a mysterious cluster which Lorenz wanted to visualize. Finding that difficult, he narrowed his focus even further. From a colleague named Barry Saltzman, he borrowed just three equations that would describe an even simpler nonperiodic system, a beaker of water heated from below and cooled from above.

    Here, again, the LGP-30 chugged its way into chaos. Lorenz identified three properties of the system corresponding roughly to how fast convection was happening in the idealized beaker, how the temperature varied from side to side, and how the temperature varied from top to bottom. The computer tracked these properties moment by moment.

    The properties could also be represented as a point in space. Lorenz and Fetter plotted the motion of this point. They found that over time, the point would trace out a butterfly-shaped fractal structure now called the Lorenz attractor. The trajectory of the point—of the system—would never retrace its own path. And as before, two systems setting out from two minutely different starting points would soon be on totally different tracks. But just as profoundly, wherever you started the system, it would still head over to the attractor and start doing chaotic laps around it.
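    A compact way to see both properties at once is to integrate those three equations from two unrelated starting points and plot the trajectories: neither run ever repeats, the two runs diverge from each other, and yet both wind onto the same butterfly-shaped set. The snippet below is a sketch using the now-standard parameter values; the axis labels follow the article’s description of the three properties, and scipy and matplotlib are assumed to be available.

```python
# Sketch of the Lorenz attractor: two very different starting points both
# settle onto the same butterfly-shaped structure. Parameters are the
# standard Lorenz-63 values; this is an illustration, not the 1963 code.
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0, 60, 20000)
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
for start in ([1.0, 1.0, 1.0], [-10.0, 15.0, 40.0]):  # two unrelated starting points
    sol = solve_ivp(lorenz, (0, 60), start, t_eval=t_eval, rtol=1e-8)
    ax.plot(sol.y[0], sol.y[1], sol.y[2], lw=0.4)
ax.set_xlabel("x (convection rate)")
ax.set_ylabel("y (side-to-side temperature variation)")
ax.set_zlabel("z (top-to-bottom temperature variation)")
plt.show()
```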

    The attractor and the system’s sensitivity to initial conditions would eventually be recognized as foundations of chaos theory. Both were published in the landmark 1963 paper. But for a while only meteorologists noticed the result. Meanwhile, Fetter married John Gille and moved with him when he went to Florida State University and then to Colorado. They stayed in touch with Lorenz and saw him at social events. But she didn’t realize how famous he had become.


    Still, the notion of small differences leading to drastically different outcomes stayed in the back of her mind. She remembered the seagull, flapping its wings. “I always had this image that stepping off the curb one way or the other could change the course of any field,” she said.

    Flight Checks

    After leaving Lorenz’s group, Hamilton embarked on a different path, achieving a level of fame that rivals or even exceeds that of her first coding mentor. At MIT’s Instrumentation Laboratory, starting in 1965, she headed the onboard flight software team for the Apollo project.

    Her code held up when the stakes were life and death—even when a mis-flipped switch triggered alarms that interrupted the astronauts’ displays right as Apollo 11 approached the surface of the moon. Mission Control had to make a quick choice: land or abort. But trusting the software’s ability to recognize errors, prioritize important tasks, and recover, the astronauts kept going.

    Hamilton, who popularized the term “software engineering,” later led the team that wrote the software for Skylab, the first US space station. She founded her own company in Cambridge in 1976, and in recent years her legacy has been celebrated again and again. She won NASA’s Exceptional Space Act Award in 2003 and received the Presidential Medal of Freedom in 2016. In 2017 she garnered arguably the greatest honor of all: a Margaret Hamilton Lego minifigure.

    Fetter, for her part, continued to program at Florida State after leaving Lorenz’s group at MIT. After a few years, she left her job to raise her children. In the 1970s, she took computer science classes at the University of Colorado, toying with the idea of returning to programming, but she eventually took a tax preparation job instead. By the 1980s, the demographics of programming had shifted. “After I sort of got put off by a couple of job interviews, I said forget it,” she said. “They went with young, techy guys.”

    Chaos only reentered her life through her daughter, Sarah. As an undergraduate at Yale in the 1980s, Sarah Gille sat in on a class about scientific programming. The case they studied? Lorenz’s discoveries on the LGP-30. Later, Sarah studied physical oceanography as a graduate student at MIT, joining the same overarching department as both Lorenz and Rothman, who had arrived a few years earlier. “One of my office mates in the general exam, the qualifying exam for doing research at MIT, was asked: How would you explain chaos theory to your mother?” she said. “I was like, whew, glad I didn’t get that question.”

    The Changing Value of Computation

    Today, chaos theory is part of the scientific repertoire. In a study published just last month, researchers concluded that no amount of improvement in data gathering or in the science of weather forecasting will allow meteorologists to produce useful forecasts that stretch more than 15 days out. (Lorenz had suggested a similar two-week cap to weather forecasts in the mid-1960s.)

    But the many retellings of chaos’s birth say little to nothing about how Hamilton and Ellen Gille wrote the specific programs that revealed the signatures of chaos. “This is an all-too-common story in the histories of science and technology,” wrote Jennifer Light, the department head for MIT’s Science, Technology and Society program, in an email to Quanta. To an extent, we can chalk up that omission to the tendency of storytellers to focus on solitary geniuses. But it also stems from tensions that remain unresolved today.

    First, coders in general have seen their contributions to science minimized from the beginning. “It was seen as rote,” said Mar Hicks, a historian at the Illinois Institute of Technology. “The fact that it was associated with machines actually gave it less status, rather than more.” But beyond that, and contributing to it, many programmers in this era were women.

    In addition to Hamilton and the woman who coded in MIT’s nuclear engineering department, Ellen Gille recalls a woman on an LGP-30 doing meteorology next door to Lorenz’s group. Another woman followed Gille in the job of programming for Lorenz. An analysis of official U.S. labor statistics shows that in 1960, women held 27 percent of computing and math-related jobs.

    The percentage has been stuck there for a half-century. In the mid-1980s, the fraction of women pursuing bachelor’s degrees in programming even started to decline. Experts have argued over why. One idea holds that early personal computers were marketed preferentially to boys and men. Then when kids went to college, introductory classes assumed a detailed knowledge of computers going in, which alienated young women who didn’t grow up with a machine at home. Today, women programmers describe a self-perpetuating cycle where white and Asian male managers hire people who look like all the other programmers they know. Outright harassment also remains a problem.

    Hamilton and Gille, however, still speak of Lorenz’s humility and mentorship in glowing terms. Before later chroniclers left them out, Lorenz thanked them in the literature in the same way he thanked Saltzman, who provided the equations Lorenz used to find his attractor. This was common at the time. Gille recalls that in all her scientific programming work, only once did someone include her as a co-author after she contributed computational work to a paper; she said she was “stunned” because of how unusual that was.

    Computation in science has become even more indispensable, of course. For recent breakthroughs like the first image of a black hole, the hard part was not figuring out which equations described the system, but how to leverage computers to understand the data.

    Today, many programmers leave science not because their role isn’t appreciated, but because coding is better compensated in industry, said Alyssa Goodman, an astronomer at Harvard University and an expert in computing and data science. “In the 1960s, there was no such thing as a data scientist, there was no such thing as Netflix or Google or whoever, that was going to suck in these people and really, really value them,” she said.

    Still, for coder-scientists in academic systems that measure success by paper citations, things haven’t changed all that much. “If you are a software developer who may never write a paper, you may be essential,” Goodman said. “But you’re not going to be counted that way.”

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 11:56 am on May 12, 2019 Permalink | Reply
    Tags: "A Bizarre Form of Water May Exist All Over the Universe", , , Creating a shock wave that raised the water’s pressure to millions of atmospheres and its temperature to thousands of degrees., Experts say the discovery of superionic ice vindicates computer predictions which could help material physicists craft future substances with bespoke properties., , , Superionic ice, Superionic ice can now claim the mantle of Ice XVIII., Superionic ice is black and hot. A cube of it would weigh four times as much as a normal one., Superionic ice is either another addition to water’s already cluttered array of avatars or something even stranger., Superionic ice would conduct electricity like a metal with the hydrogens playing the usual role of electrons., The discovery of superionic ice potentially solves decades-old puzzles about the composition of “ice giant” worlds., The fields around the solar system’s other planets seem to be made up of strongly defined north and south poles without much other structure., The magnetic fields emanating from Uranus and Neptune looked lumpier and more complex with more than two poles., The probe Voyager 2 had sailed into the outer solar system uncovering something strange about the magnetic fields of the ice giants Uranus and Neptune., , What giant icy planets like Uranus and Neptune might be made of, WIRED   

    From University of Rochester Laboratory for Laser Energetics via WIRED: “A Bizarre Form of Water May Exist All Over the Universe” 

    U Rochester bloc

    From University of Rochester

    U Rochester’s Laboratory for Laser Energetics

    via

    Wired logo

    WIRED

    1
    The discovery of superionic ice potentially solves the puzzle of what giant icy planets like Uranus and Neptune are made of. They’re now thought to have gaseous, mixed-chemical outer shells, a liquid layer of ionized water below that, a solid layer of superionic ice comprising the bulk of their interiors, and rocky centers. Credit: @iammoteh/Quanta Magazine.

    Recently at the Laboratory for Laser Energetics in Brighton, New York, one of the world’s most powerful lasers blasted a droplet of water, creating a shock wave that raised the water’s pressure to millions of atmospheres and its temperature to thousands of degrees. X-rays that beamed through the droplet in the same fraction of a second offered humanity’s first glimpse of water under those extreme conditions.

    The X-rays revealed that the water inside the shock wave didn’t become a superheated liquid or gas. Paradoxically—but just as physicists squinting at screens in an adjacent room had expected—the atoms froze solid, forming crystalline ice.

    “You hear the shot,” said Marius Millot of Lawrence Livermore National Laboratory in California, and “right away you see that something interesting was happening.” Millot co-led the experiment with Federica Coppari, also of Livermore.

    The findings, published this week in Nature, confirm the existence of “superionic ice,” a new phase of water with bizarre properties. Unlike the familiar ice found in your freezer or at the north pole, superionic ice is black and hot. A cube of it would weigh four times as much as a normal one. It was first theoretically predicted more than 30 years ago, and although it has never been seen until now, scientists think it might be among the most abundant forms of water in the universe.

    Across the solar system, at least, more water probably exists as superionic ice—filling the interiors of Uranus and Neptune—than in any other phase, including the liquid form sloshing in oceans on Earth, Europa and Enceladus. The discovery of superionic ice potentially solves decades-old puzzles about the composition of these “ice giant” worlds.

    Including the hexagonal arrangement of water molecules found in common ice, known as “ice Ih,” scientists had already discovered a bewildering 18 architectures of ice crystal. After ice I, which comes in two forms, Ih and Ic, the rest are numbered II through XVII in order of their discovery. (Yes, there is an Ice IX, but it exists only under contrived conditions, unlike the fictional doomsday substance in Kurt Vonnegut’s novel Cat’s Cradle.)

    Superionic ice can now claim the mantle of Ice XVIII. It’s a new crystal, but with a twist. All the previously known water ices are made of intact water molecules, each with one oxygen atom linked to two hydrogens. But superionic ice, the new measurements confirm, isn’t like that. It exists in a sort of surrealist limbo, part solid, part liquid. Individual water molecules break apart. The oxygen atoms form a cubic lattice, but the hydrogen atoms spill free, flowing like a liquid through the rigid cage of oxygens.

    3
    A time-integrated photograph of the X-ray diffraction experiment at the University of Rochester’s Laboratory for Laser Energetics. Giant lasers focus on a water sample to compress it into the superionic phase. Additional laser beams generate an X-ray flash off an iron foil, allowing the researchers to take a snapshot of the compressed water layer. Credit: Millot, Coppari, Kowaluk (LLNL)

    Experts say the discovery of superionic ice vindicates computer predictions, which could help material physicists craft future substances with bespoke properties. And finding the ice required ultrafast measurements and fine control of temperature and pressure, advancing experimental techniques. “All of this would not have been possible, say, five years ago,” said Christoph Salzmann at University College London, who discovered ices XIII, XIV and XV. “It will have a huge impact, for sure.”

    Depending on whom you ask, superionic ice is either another addition to water’s already cluttered array of avatars or something even stranger. Because its water molecules break apart, said the physicist Livia Bove of France’s National Center for Scientific Research and Pierre and Marie Curie University, it’s not quite a new phase of water. “It’s really a new state of matter,” she said, “which is rather spectacular.”

    Puzzles Put on Ice

    Physicists have been after superionic ice for years—ever since a primitive computer simulation led by Pierfranco Demontis in 1988 predicted [Physical Review Letters] water would take on this strange, almost metal-like form if you pushed it beyond the map of known ice phases.

    Under extreme pressure and heat, the simulations suggested, water molecules break. With the oxygen atoms locked in a cubic lattice, “the hydrogens now start to jump from one position in the crystal to another, and jump again, and jump again,” said Millot. The jumps between lattice sites are so fast that the hydrogen atoms—which are ionized, making them essentially positively charged protons—appear to move like a liquid.

    This suggested superionic ice would conduct electricity, like a metal, with the hydrogens playing the usual role of electrons. Having these loose hydrogen atoms gushing around would also boost the ice’s disorder, or entropy. In turn, that increase in entropy would make this ice much more stable than other kinds of ice crystals, causing its melting point to soar upward.

    But all this was easy to imagine and hard to trust. The first models used simplified physics, hand-waving their way through the quantum nature of real molecules. Later simulations folded in more quantum effects but still sidestepped the actual equations required to describe multiple quantum bodies interacting, which are too computationally difficult to solve. Instead, they relied on approximations, raising the possibility that the whole scenario could be just a mirage in a simulation. Experiments, meanwhile, couldn’t make the requisite pressures without also generating enough heat to melt even this hardy substance.

    As the problem simmered, though, planetary scientists developed their own sneaking suspicions that water might have a superionic ice phase. Right around the time when the phase was first predicted, the probe Voyager 2 had sailed into the outer solar system, uncovering something strange about the magnetic fields of the ice giants Uranus and Neptune.

    The fields around the solar system’s other planets seem to be made up of strongly defined north and south poles, without much other structure. It’s almost as if they have just bar magnets in their centers, aligned with their rotation axes. Planetary scientists chalk this up to “dynamos”: interior regions where conductive fluids rise and swirl as the planet rotates, sprouting massive magnetic fields.

    By contrast, the magnetic fields emanating from Uranus and Neptune looked lumpier and more complex, with more than two poles. They also don’t align as closely to their planets’ rotation. One way to produce this would be to somehow confine the conducting fluid responsible for the dynamo into just a thin outer shell of the planet, instead of letting it reach down into the core.

    But the idea that these planets might have solid cores, which are incapable of generating dynamos, didn’t seem realistic. If you drilled into these ice giants, you would expect to first encounter a layer of ionic water, which would flow, conduct currents and participate in a dynamo. Naively, it seems like even deeper material, at even hotter temperatures, would also be a fluid. “I used to always make jokes that there’s no way the interiors of Uranus and Neptune are actually solid,” said Sabine Stanley at Johns Hopkins University. “But now it turns out they might actually be.”

    Ice on Blast

    Now, finally, Coppari, Millot and their team have brought the puzzle pieces together.

    In an earlier experiment, published last February [Nature Physics], the physicists built indirect evidence for superionic ice. They squeezed a droplet of room-temperature water between the pointy ends of two cut diamonds. By the time the pressure rose to about a gigapascal, roughly 10 times that at the bottom of the Marianas Trench, the water had transformed into a tetragonal crystal called ice VI. By about 2 gigapascals, it had switched into ice VII, a denser, cubic form transparent to the naked eye that scientists recently discovered also exists in tiny pockets inside natural diamonds.
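    As a rough check on that comparison: hydrostatic pressure grows as P = ρgh, which puts the bottom of the trench near a tenth of a gigapascal. The depth and average seawater density below are approximate assumptions, used only to confirm the order of magnitude.

```python
# Back-of-envelope check of the "roughly 10 times the Marianas Trench" figure.
# Depth and mean seawater density are rough assumed values.
depth_m = 10_994        # Challenger Deep, approximately
rho_seawater = 1_030    # kg/m^3, rough column average
g = 9.81                # m/s^2

trench_pressure_gpa = rho_seawater * g * depth_m / 1e9  # P = rho * g * h
print(f"Marianas Trench bottom: ~{trench_pressure_gpa:.2f} GPa")
print(f"1 GPa is ~{1.0 / trench_pressure_gpa:.0f}x that pressure")
```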

    Then, using the OMEGA laser at the Laboratory for Laser Energetics, Millot and colleagues targeted the ice VII, still between diamond anvils. As the laser hit the surface of the diamond, it vaporized material upward, effectively rocketing the diamond away in the opposite direction and sending a shock wave through the ice. Millot’s team found their super-pressurized ice melted at around 4,700 degrees Celsius, about as expected for superionic ice, and that it did conduct electricity thanks to the movement of charged protons.

    4
    Federica Coppari, a physicist at Lawrence Livermore National Laboratory, with an x-ray diffraction image plate that she and her colleagues used to discover ice XVIII, also known as superionic ice. Credit: Eugene Kowaluk/Laboratory for Laser Energetics

    With those predictions about superionic ice’s bulk properties settled, the new study led by Coppari and Millot took the next step of confirming its structure. “If you really want to prove that something is crystalline, then you need X-ray diffraction,” Salzmann said.

    Their new experiment skipped ices VI and VII altogether. Instead, the team simply smashed water with laser blasts between diamond anvils. Billionths of a second later, as shock waves rippled through and the water began crystallizing into nanometer-size ice cubes, the scientists used 16 more laser beams to vaporize a thin sliver of iron next to the sample. The resulting hot plasma flooded the crystallizing water with X-rays, which then diffracted from the ice crystals, allowing the team to discern their structure.

    Atoms in the water had rearranged into the long-predicted but never-before-seen architecture, Ice XVIII: a cubic lattice with oxygen atoms at every corner and the center of each face. “It’s quite a breakthrough,” Coppari said.
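    For readers curious how a diffraction pattern pins down that arrangement: a face-centered-cubic lattice only produces reflections whose Miller indices are all even or all odd, and Bragg’s law converts each allowed plane spacing into a scattering angle. The sketch below works through that bookkeeping; the lattice constant and X-ray wavelength are illustrative assumptions, not the values measured in the experiment.

```python
# Which X-ray reflections a face-centered-cubic lattice allows, and at what
# angles. The lattice constant and wavelength are illustrative assumptions.
import itertools
import math

a_nm = 0.35            # assumed cubic lattice constant (illustrative)
wavelength_nm = 0.185  # assumed X-ray wavelength (illustrative)

def allowed_fcc(h, k, l):
    # The fcc structure factor is nonzero only when h, k, l are all even or all odd.
    return len({h % 2, k % 2, l % 2}) == 1

seen = set()
for h, k, l in itertools.product(range(4), repeat=3):
    if (h, k, l) == (0, 0, 0) or not (h >= k >= l) or not allowed_fcc(h, k, l):
        continue
    s = h * h + k * k + l * l
    if s in seen:
        continue
    seen.add(s)
    d = a_nm / math.sqrt(s)              # plane spacing for a cubic lattice
    sin_theta = wavelength_nm / (2 * d)  # Bragg's law: lambda = 2 d sin(theta)
    if sin_theta <= 1:
        two_theta = 2 * math.degrees(math.asin(sin_theta))
        print(f"({h}{k}{l})  d = {d:.3f} nm   2-theta = {two_theta:5.1f} deg")
```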

    “The fact that the existence of this phase is not an artifact of quantum molecular dynamic simulations, but is real—­that’s very comforting,” Bove said.

    And this kind of successful cross-check between simulations and real superionic ice suggests the ultimate “dream” of material physics researchers might soon be within reach. “You tell me what properties you want in a material, and we’ll go to the computer and figure out theoretically what material and what kind of crystal structure you would need,” said Raymond Jeanloz, a member of the discovery team based at the University of California, Berkeley. “The community at large is getting close.”

    The new analyses also hint that although superionic ice does conduct some electricity, it’s a mushy solid. It would flow over time, but not truly churn. Inside Uranus and Neptune, then, fluid layers might stop about 8,000 kilometers down into the planet, where an enormous mantle of sluggish superionic ice, like the kind Millot’s team produced, begins. That would limit most dynamo action to shallower depths, accounting for the planets’ unusual fields.

    Other planets and moons in the solar system likely don’t host the right interior sweet spots of temperature and pressure to allow for superionic ice. But many ice giant-sized exoplanets might, suggesting the substance could be common inside icy worlds throughout the galaxy.

    Of course, though, no real planet contains just water. The ice giants in our solar system also mix in chemical species like methane and ammonia. The extent to which superionic behavior actually occurs in nature is “going to depend on whether these phases still exist when we mix water with other materials,” Stanley said. So far, that isn’t clear, although other researchers have argued [Science] superionic ammonia should also exist.

    Aside from extending their research to other materials, the team also hopes to keep zeroing in on the strange, almost paradoxical duality of their superionic crystals. Just capturing the lattice of oxygen atoms “is clearly the most challenging experiment I have ever done,” said Millot. They haven’t yet seen the ghostly, interstitial flow of protons through the lattice. “Technologically, we are not there yet,” Coppari said, “but the field is growing very fast.”

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    University of Rochester Laboratory for Laser Energetics

    The Laboratory for Laser Energetics (LLE) is a scientific research facility which is part of the University of Rochester’s south campus, located in Brighton, New York. The lab was established in 1970 and its operations since then have been funded jointly, mainly by the United States Department of Energy, the University of Rochester and the New York State government. The Laser Lab was commissioned to serve as a center for investigations of high-energy physics, specifically those involving the interaction of extremely intense laser radiation with matter. Many types of scientific experiments are performed at the facility with a strong emphasis on inertial confinement, direct drive, laser-induced fusion, fundamental plasma physics and astrophysics using OMEGA. In June of 1995, OMEGA became the world’s highest-energy ultraviolet laser. The lab shares its building with the Center for Optoelectronics and Imaging and the Center for Optics Manufacturing. The Robert L. Sproull Center for Ultra High Intensity Laser Research was opened in 2005 and houses the OMEGA EP laser, which was completed in May 2008.

    The laboratory is unique in conducting big science on a university campus. More than 180 Ph.D.s have been awarded for research done at the LLE. During summer months the lab sponsors a program for high school students which involves local-area high school juniors in the research being done at the laboratory. Most of the projects are done on current research that is led by senior scientists at the lab.

    U Rochester Campus

    The University of Rochester is one of the country’s top-tier research universities. Our 158 buildings house more than 200 academic majors, more than 2,000 faculty and instructional staff, and some 10,500 students—approximately half of whom are women.

    Learning at the University of Rochester is also on a very personal scale. Rochester remains one of the smallest and most collegiate among top research universities, with smaller classes, a low 10:1 student to teacher ratio, and increased interactions with faculty.

     
  • richardmitnick 2:32 pm on April 17, 2019 Permalink | Reply
    Tags: “Not only is wind power less expensive but you can place the turbines in deeper water and do it less expensively than before.”, Environment advocates worry that offshore wind platform construction will damage sound-sensitive marine mammals like whales and dolphins., Even though cables can stretch further somebody still has to pay to bring this electricity back on land, Fishermen fear they will be shut out from fishing grounds, GE last year unveiled an even bigger turbine the 12 MW Haliade-X, In Denmark and Germany the governments pay for these connections and to convert the turbine’s alternating current (AC) to direct current (DC) for long-distance transmission., Offshore wind developers must also be sensitive to neighbors who don’t like power cables coming ashore near their homes, The potential is to generate more than 2000 gigawatts of capacity or 7200 terawatt-hours of electricity generation per year., US officials say there’s a lot of room for offshore wind to grow in US coastal waters, Vineyard Wind project, Wind Power finally catches hold, WIRED   

    From WIRED: “Offshore Wind Farms Are Spinning Up in the US—At Last” 

    Wired logo

    From WIRED

    1
    Christopher Furlong/Getty Images

    On June 1, the Pilgrim nuclear plant in Massachusetts will shut down, a victim of rising costs and a technology that is struggling to remain economically viable in the United States. But the electricity generated by the aging nuclear station soon will be replaced by another carbon-free source: a fleet of 84 offshore wind turbines rising nearly 650 feet above the ocean’s surface.

    The developers of the Vineyard Wind project say their turbines—anchored about 14 miles south of Martha’s Vineyard—will generate 800 megawatts of electricity once they start spinning sometime in 2022. That’s equivalent to the output of a large coal-fired power plant and more than Pilgrim’s 640 megawatts.

    “Offshore wind has arrived,” says Erich Stephens, chief development officer for Vineyard Wind, a developer based in New Bedford, Massachusetts, that is backed by Danish and Spanish wind energy firms. He explains that the costs have fallen enough to make developers take it seriously. “Not only is wind power less expensive, but you can place the turbines in deeper water, and do it less expensively than before.”

    Last week, the Massachusetts Department of Public Utilities awarded Vineyard Wind a 20-year contract to provide electricity at 8.9 cents per kilowatt-hour. That’s about a third the cost of other renewables (such as Canadian hydropower), and it’s estimated that ratepayers will save $1.3 billion in energy costs over the life of the deal.

    Can offshore wind pick up the slack from Pilgrim and other fading nukes? Its proponents think so, as long as they can respond to concerns about potential harm to fisheries and marine life, as well as successfully connect to the existing power grid on land. Wind power is nothing new in the US, with 56,000 turbines in 41 states, Guam, and Puerto Rico producing a total of 96,433 MW nationwide. But wind farms located offshore, where the wind blows steady and strong, unobstructed by buildings or mountains, have yet to start cranking.

    In recent years, however, the turbines have grown bigger and the towers taller, able to generate three times more power than they could five years ago. The technology needed to install them farther away from shore has improved as well, making them more palatable to nearby communities. When it comes to wind turbines, bigger is better, says David Hattery, practice group coordinator for power at K&L Gates, a Seattle law firm that represents wind power manufacturers and developers. Bigger turbines and blades perform better under the forces generated by strong ocean winds. “Turbulence wears out bearings and gear boxes,” Hattery said. “What you don’t want offshore is a turbine that breaks down. It is very expensive to fix it.”

    In the race to get big, Vineyard Wind plans to use a 9.5 MW turbine with a 174-meter diameter rotor, a giant by the standard of most wind farms. But GE last year unveiled an even bigger turbine, the 12 MW Haliade-X. When complete in 2021, each turbine will have a 220-meter wingspan (tip to tip) and be able to generate enough electricity to light 16,000 European homes. GE is building these beasts for offshore farms in Europe, where wind power now generates 14 percent of the continent’s electricity (compared to 6.5 percent in the US). “We feel that we have just the right machine at just the right time,” says John Lavelle, CEO of GE Renewable Energy’s Offshore Wind business.

    US officials say there’s a lot of room for offshore wind to grow in US coastal waters, with the potential to generate more than 2,000 gigawatts of capacity, or 7,200 terawatt-hours of electricity generation per year, according to the US Department of Energy. That’s nearly double the nation’s current electricity use. Even if only 1 percent of that potential is captured, nearly 6.5 million homes could be powered by offshore wind energy.
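    Those figures are easy to sanity-check. The short calculation below uses the article’s numbers plus two assumed round figures (current U.S. electricity use of roughly 4,000 TWh per year and average household consumption of about 11,000 kWh per year); the implied capacity factor and the “nearly 6.5 million homes” both fall out directly.

```python
# Sanity check on the DOE offshore-wind figures quoted above.
# Assumed round numbers: current U.S. use and average household consumption.
capacity_gw = 2_000
generation_twh = 7_200
us_use_twh = 4_000               # assumed, roughly current U.S. annual use
household_kwh_per_year = 11_000  # assumed average U.S. household

# Implied average capacity factor of the offshore resource.
capacity_factor = generation_twh * 1e12 / (capacity_gw * 1e9 * 8_760)
print(f"implied capacity factor: {capacity_factor:.0%}")
print(f"potential vs. current U.S. use: {generation_twh / us_use_twh:.1f}x")

# Homes powered if only 1 percent of the potential is captured.
one_percent_kwh = 0.01 * generation_twh * 1e9  # 1 TWh = 1e9 kWh
print(f"homes powered by 1%: {one_percent_kwh / household_kwh_per_year / 1e6:.1f} million")
```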

    Of course, getting these turbines built and spinning takes years of planning and dozens of federal and state permits. The federal government made things a bit easier in the past five years with new rules governing where to put the turbines. The Bureau of Ocean Energy Management (a division of the Department of Interior) now sets boundaries for offshore leases and accepts bids from commercial enterprises to develop wind farms.

    The first offshore project was a 30 MW, five-turbine wind farm that went live at the end of 2016. Developed by Deepwater Wind, the installation replaced diesel generators that once serviced the resorts of Block Island, Rhode Island. Now there are 15 active proposals for wind farms along the East Coast, and others are in the works for California, Hawaii, South Carolina, and New York.

    By having federal planners determine where to put the turbines, developers hope to avoid the debacle that was Cape Wind. Cape Wind was proposed for Nantucket Sound, a shallow area between Nantucket, Martha’s Vineyard, and Cape Cod. Developers began it with high hopes back in 2001, but pulled the plug in 2017 after years of court battles with local residents, fishermen, and two powerful American families: the Kennedys and the Koch brothers, both of whom could see the turbines from their homes.

    Like an extension cord that won’t reach all the way to the living room, Cape Wind’s developers were stuck in Nantucket Sound because existing undersea cables were limited in length. But new undersea transmission capability means the turbines can be located further offshore, away from beachfront homes, commercial shipping lanes, or whale migration routes.

    Even though cables can stretch further, somebody still has to pay to bring this electricity back on land, says Mark McGranaghan, vice president of integrated grid for the Electric Power Research Institute. McGranaghan says that in Denmark and Germany the governments pay for these connections and for the offshore electrical substations that convert the turbine’s alternating current (AC) to direct current (DC) for long-distance transmission. Here in the US, he predicts these costs will likely have to be paid by utility ratepayers or state taxpayers. “Offshore wind is totally real and we know how to do it,” McGranaghan says. “One of the things that comes up is who pays for the infrastructure to bring the power back.”

    It’s not just money. Offshore wind developers must also be sensitive to neighbors who don’t like power cables coming ashore near their homes, fishermen who fear they will be shut out from fishing grounds, or environmental advocates who worry that offshore wind platform construction will damage sound-sensitive marine mammals like whales and dolphins.

    Still, maybe that’s an easier job than finding a safe place to put all the radioactive waste that keeps piling up around Pilgrim and the nation’s 97 other nuclear reactors.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 9:57 am on April 7, 2019 Permalink | Reply
    Tags: "How Google Is Cramming More Data Into Its New Atlantic Cable", , , Google says the fiber-optic cable it's building across the Atlantic Ocean will be the fastest of its kind. Fiber-optic networks work by sending light over thin strands of glass., Japanese tech giant NEC says it has technology that will enable long-distance undersea cables with 16 fiber-optic pairs., The current growth in new cables is driven less by telcos and more by companies like Google Facebook and Microsoft, Today most long-distance undersea cables contain six or eight fiber-optic pairs., Vijay Vusirikala head of network architecture and optical engineering at Google says the company is already contemplating 24-pair cables., WIRED   

    From WIRED: “How Google Is Cramming More Data Into Its New Atlantic Cable” 

    Wired logo

    From WIRED

    04.05.19
    Klint Finley

    1
    Fiber-optic cable being loaded onto a ship owned by SubCom, which is working with Google to build the world’s fastest undersea data connection. Bill Gallery/SubCom.


    Google says the fiber-optic cable it’s building across the Atlantic Ocean will be the fastest of its kind. When the cable goes live next year, the company estimates it will transmit around 250 terabits per second, fast enough to zap all the contents of the Library of Congress from Virginia to France three times every second. That’s about 56 percent faster than Facebook and Microsoft’s Marea cable, which can transmit about 160 terabits per second between Virginia and Spain.

    Fiber-optic networks work by sending light over thin strands of glass. Fiber-optic cables, which are about the diameter of a garden hose, enclose multiple pairs of these fibers. Google’s new cable is so fast because it carries more fiber pairs. Today, most long-distance undersea cables contain six or eight fiber-optic pairs. Google said Friday that its new cable, dubbed Dunant, is expected to be the first to include 12 pairs, thanks to new technology developed by Google and SubCom, which designs, manufactures, and deploys undersea cables.

    Dunant might not be the fastest for long: Japanese tech giant NEC says it has technology that will enable long-distance undersea cables with 16 fiber-optic pairs. And Vijay Vusirikala, head of network architecture and optical engineering at Google, says the company is already contemplating 24-pair cables.

    The surge in intercontinental cables, and their increasing capacity, reflect continual growth in internet traffic. They enable activists to livestream protests to distant countries, help companies buy and sell products around the world, and facilitate international romances. “Many people still believe international telecommunications are conducted by satellite,” says NEC executive Atsushi Kuwahara. “That was true in 1980, but nowadays, 99 percent of international telecommunications is submarine.”

    So much capacity is being added that, for the moment, it’s outstripping demand. Animations featured in a recent New York Times article illustrated the exploding number of undersea cables since 1989. That growth is continuing. Alan Mauldin of the research firm Telegeography says only about 30 percent of the potential capacity of major undersea cable routes is currently in use—and more than 60 new cables are planned to enter service by 2021. That summons memories of the 1990s Dotcom Bubble, when telecoms buried far more fiber in both the ground and the ocean than they would need for years to come.

    3
    A selection of fiber-optic cable products made by SubCom. Brian Smith/SubCom.

    But the current growth in new cables is driven less by telcos and more by companies like Google, Facebook, and Microsoft that crave ever more bandwidth for the streaming video, photos, and other data shuttling between their global data centers. And experts say that as undersea cable technologies improve, it’s not crazy for companies to build newer, faster routes between continents, even with so much fiber already lying idle in the ocean.

    Controlling Their Own Destiny

    Mauldin says that although there’s still lots of capacity available, companies like Google and Facebook prefer to have dedicated capacity for their own use. That’s part of why big tech companies have either invested in new cables through consortia or, in some cases, built their own cables.

    “When we do our network planning, it’s important to know if we’ll have the capacity in the network,” says Google’s Vusirikala. “One way to know is by building our own cables, controlling our own destiny.”

    Another factor is diversification. Having more cables means there are alternate routes for data if a cable breaks or malfunctions. At the same time, more people outside Europe and North America are tapping the internet, often through smartphones. That’s prompted companies to think about new routes, like between North and South America, or between Europe and Africa, says Mike Hollands, an executive at European data center company Interxion. The Marea cable ticks both of those boxes, giving Facebook and Microsoft faster routes to North Africa and the Middle East, while also creating an alternate path to Europe in case one or more of the traditional routes were disrupted by something like an earthquake.

    Cost Per Bit

    There are financial incentives for the tech companies as well. By owning the cables instead of leasing them from telcos, Google and other tech giants can potentially save money in the long term, Mauldin says.

    The cost to build and deploy a new undersea cable isn’t dropping. But as companies find ways to pump more data through these cables more quickly, their value increases.

    There are a few ways to increase the performance of a fiber-optic communications system. One is to increase the energy used to push the data from one end to the other. The catch is that to keep the data signal from degrading, undersea cables need repeaters roughly every 100 kilometers, Vusirikala explains. Those repeaters amplify not just the signal, but any noise introduced along the way, diminishing the value of boosting the energy.

    4
    A rendering of one of SubCom’s specialized Reliance-class cable ships. SubCom.

    You can also increase the amount of data that each fiber pair within a fiber-optic cable can carry. A technique called “dense wavelength division multiplexing” now enables more than 100 wavelengths to be sent along a single fiber pair.
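    A rough capacity model makes the arithmetic concrete. Using only the figures quoted in this article for Dunant (12 pairs, about 250 terabits per second) and the “more than 100 wavelengths” enabled by dense WDM, the implied per-pair and per-wavelength rates come out as below. The per-wavelength number is an inference, not something Google has stated, and the scaling loop ignores the SDM trade-off described below.

```python
# Back-of-envelope capacity model for an undersea cable, using the article's
# figures for Dunant. The per-wavelength rate is inferred, not stated.
fiber_pairs = 12
total_tbps = 250
wavelengths_per_pair = 100  # "more than 100 wavelengths" per pair via dense WDM

per_pair_tbps = total_tbps / fiber_pairs
per_wavelength_gbps = per_pair_tbps * 1_000 / wavelengths_per_pair
print(f"per fiber pair: ~{per_pair_tbps:.1f} Tbps")
print(f"per wavelength: ~{per_wavelength_gbps:.0f} Gbps")

# Capacity scales roughly linearly with pair count (in practice SDM trades a
# little per-pair capacity for the extra pairs, as described below).
for pairs in (8, 12, 16, 24):
    print(f"{pairs:>2} pairs at this per-pair rate: ~{pairs * per_pair_tbps:.0f} Tbps")
```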

    Or you can pack more fiber pairs into a cable. Traditionally each pair in a fiber-optic cable required two repeater components called “pumps.” The pumps take up space inside the repeater casing, so adding more pumps would require changes to the way undersea cable systems are built, deployed, and maintained, says SubCom CTO Georg Mohs.

    To get around that problem, SubCom and others are using a technique called space-division multiplexing (SDM) to allow four repeater pumps to power four fiber pairs. That will reduce the capacity of each pair, but cutting the required number of pumps in half allows them to add additional pairs, which more than makes up for it, Mohs says.

    “This had been in our toolkit before,” Mohs says, but like other companies, SubCom has been more focused on adding more wavelengths per fiber pair.

    The result: Cables that can move more data than ever before. That means the total cost per bit of data sent across the cable is lower.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 12:50 pm on March 18, 2019 Permalink | Reply
    Tags: "AI Algorithms Are Now Shockingly Good at Doing Science", , WIRED   

    From Quanta via WIRED: “AI Algorithms Are Now Shockingly Good at Doing Science” 

    Quanta Magazine
    Quanta Magazine

    via

    Wired logo

    From WIRED

    3.17.19
    Dan Falk

    1
    Whether probing the evolution of galaxies or discovering new chemical compounds, algorithms are detecting patterns no humans could have spotted. Rachel Suggs/Quanta Magazine

    No human, or team of humans, could possibly keep up with the avalanche of information produced by many of today’s physics and astronomy experiments. Some of them record terabytes of data every day—and the torrent is only increasing. The Square Kilometer Array, a radio telescope slated to switch on in the mid-2020s, will generate about as much data traffic each year as the entire internet.

    SKA Square Kilometer Array

    The deluge has many scientists turning to artificial intelligence for help. With minimal human input, AI systems such as artificial neural networks—computer-simulated networks of neurons that mimic the function of brains—can plow through mountains of data, highlighting anomalies and detecting patterns that humans could never have spotted.

    Of course, the use of computers to aid in scientific research goes back about 75 years, and the method of manually poring over data in search of meaningful patterns originated millennia earlier. But some scientists are arguing that the latest techniques in machine learning and AI represent a fundamentally new way of doing science. One such approach, known as generative modeling, can help identify the most plausible theory among competing explanations for observational data, based solely on the data, and, importantly, without any preprogrammed knowledge of what physical processes might be at work in the system under study. Proponents of generative modeling see it as novel enough to be considered a potential “third way” of learning about the universe.

    Traditionally, we’ve learned about nature through observation. Think of Johannes Kepler poring over Tycho Brahe’s tables of planetary positions and trying to discern the underlying pattern. (He eventually deduced that planets move in elliptical orbits.) Science has also advanced through simulation. An astronomer might model the movement of the Milky Way and its neighboring galaxy, Andromeda, and predict that they’ll collide in a few billion years. Both observation and simulation help scientists generate hypotheses that can then be tested with further observations. Generative modeling differs from both of these approaches.

    Milkdromeda: Andromeda (left) in Earth’s night sky 3.75 billion years from now. NASA

    “It’s basically a third approach, between observation and simulation,” says Kevin Schawinski, an astrophysicist and one of generative modeling’s most enthusiastic proponents, who worked until recently at the Swiss Federal Institute of Technology in Zurich (ETH Zurich). “It’s a different way to attack a problem.”

    Some scientists see generative modeling and other new techniques simply as power tools for doing traditional science. But most agree that AI is having an enormous impact, and that its role in science will only grow. Brian Nord, an astrophysicist at Fermi National Accelerator Laboratory who uses artificial neural networks to study the cosmos, is among those who fear there’s nothing a human scientist does that will be impossible to automate. “It’s a bit of a chilling thought,” he said.


    Discovery by Generation

    Ever since graduate school, Schawinski has been making a name for himself in data-driven science. While working on his doctorate, he faced the task of classifying thousands of galaxies based on their appearance. Because no readily available software existed for the job, he decided to crowdsource it—and so the Galaxy Zoo citizen science project was born.

    Galaxy Zoo via Astrobites

    Beginning in 2007, ordinary computer users helped astronomers by logging their best guesses as to which galaxy belonged in which category, with majority rule typically leading to correct classifications. The project was a success, but, as Schawinski notes, AI has made it obsolete: “Today, a talented scientist with a background in machine learning and access to cloud computing could do the whole thing in an afternoon.”

    Schawinski turned to the powerful new tool of generative modeling in 2016. Essentially, generative modeling asks how likely it is, given condition X, that you’ll observe outcome Y. The approach has proved incredibly potent and versatile. As an example, suppose you feed a generative model a set of images of human faces, with each face labeled with the person’s age. As the computer program combs through these “training data,” it begins to draw a connection between older faces and an increased likelihood of wrinkles. Eventually it can “age” any face that it’s given—that is, it can predict what physical changes a given face of any age is likely to undergo.

    3
    None of these faces is real. The faces in the top row (A) and left-hand column (B) were constructed by a generative adversarial network (GAN) using building-block elements of real faces. The GAN then combined basic features of the faces in A, including their gender, age and face shape, with finer features of faces in B, such as hair color and eye color, to create all the faces in the rest of the grid. NVIDIA

    The best-known generative modeling systems are “generative adversarial networks” (GANs). After adequate exposure to training data, a GAN can repair images that have damaged or missing pixels, or make blurry photographs sharp. They learn to infer the missing information by means of a competition (hence the term “adversarial”): One part of the network, known as the generator, generates fake data, while a second part, the discriminator, tries to distinguish fake data from real data. As the program runs, both halves get progressively better. You may have seen some of the hyper-realistic, GAN-produced “faces” that have circulated recently — images of “freakishly realistic people who don’t actually exist,” as one headline put it.
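    For readers who want to see the adversarial game in code, here is a minimal sketch in PyTorch. It is not any system mentioned in the article: the “real” data is a toy one-dimensional Gaussian rather than images, and the network sizes, learning rates, and training length are illustrative assumptions. The structure, though, is the one described above: a generator turns random latent vectors into fake samples, a discriminator tries to tell fake from real, and the two improve together.

```python
# Minimal GAN sketch on toy 1-D data (illustrative hyperparameters; not the
# setup of any system discussed in the article). Requires PyTorch.
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim = 8

# Generator: maps a random latent vector to a fake sample.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_batch(n=64):
    # Stand-in for "training data": samples drawn from N(4.0, 1.5).
    return 4.0 + 1.5 * torch.randn(n, 1)

for step in range(2000):
    # Train the discriminator to separate real samples from generated ones.
    real = real_batch()
    fake = G(torch.randn(64, latent_dim)).detach()
    loss_D = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # Train the generator to fool the discriminator.
    fake = G(torch.randn(64, latent_dim))
    loss_G = bce(D(fake), torch.ones(64, 1))
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()

# The generated distribution should now roughly match the "real" one, the
# 1-D analogue of a GAN learning to produce realistic-looking faces.
with torch.no_grad():
    samples = G(torch.randn(1000, latent_dim))
print(f"generated mean={samples.mean().item():.2f}, std={samples.std().item():.2f} (target 4.00, 1.50)")
```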

    More broadly, generative modeling takes sets of data (typically images, but not always) and breaks each of them down into a set of basic, abstract building blocks — scientists refer to this as the data’s “latent space.” The algorithm manipulates elements of the latent space to see how this affects the original data, and this helps uncover physical processes that are at work in the system.

    The idea of a latent space is abstract and hard to visualize, but as a rough analogy, think of what your brain might be doing when you try to determine the gender of a human face. Perhaps you notice hairstyle, nose shape, and so on, as well as patterns you can’t easily put into words. The computer program is similarly looking for salient features among data: Though it has no idea what a mustache is or what gender is, if it’s been trained on data sets in which some images are tagged “man” or “woman,” and in which some have a “mustache” tag, it will quickly deduce a connection.

    In a paper published in December in Astronomy & Astrophysics, Schawinski and his ETH Zurich colleagues Dennis Turp and Ce Zhang used generative modeling to investigate the physical changes that galaxies undergo as they evolve. (The software they used treats the latent space somewhat differently from the way a generative adversarial network treats it, so it is not technically a GAN, though similar.) Their model created artificial data sets as a way of testing hypotheses about physical processes. They asked, for instance, how the “quenching” of star formation—a sharp reduction in formation rates—is related to the increasing density of a galaxy’s environment.

    For Schawinski, the key question is how much information about stellar and galactic processes could be teased out of the data alone. “Let’s erase everything we know about astrophysics,” he said. “To what degree could we rediscover that knowledge, just using the data itself?”

    First, the galaxy images were reduced to their latent space; then, Schawinski could tweak one element of that space in a way that corresponded to a particular change in the galaxy’s environment—the density of its surroundings, for example. Then he could re-generate the galaxy and see what differences turned up. “So now I have a hypothesis-generation machine,” he explained. “I can take a whole bunch of galaxies that are originally in a low-density environment and make them look like they’re in a high-density environment, by this process.” Schawinski, Turp and Zhang saw that, as galaxies go from low- to high-density environments, they become redder in color, and their stars become more centrally concentrated. This matches existing observations about galaxies, Schawinski said. The question is why this is so.

    The next step, Schawinski says, has not yet been automated: “I have to come in as a human, and say, ‘OK, what kind of physics could explain this effect?’” For the process in question, there are two plausible explanations: Perhaps galaxies become redder in high-density environments because they contain more dust, or perhaps they become redder because of a decline in star formation (in other words, their stars tend to be older). With a generative model, both ideas can be put to the test: Elements in the latent space related to dustiness and star formation rates are changed to see how this affects galaxies’ color. “And the answer is clear,” Schawinski said. Redder galaxies are “where the star formation had dropped, not the ones where the dust changed. So we should favor that explanation.”
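    The workflow Schawinski describes can be sketched in a few lines, with untrained stand-in encoder and decoder functions and a hypothetical DENSITY_DIM index in place of the trained generative model and learned latent axes from the actual study.

```python
# Sketch of the latent-space experiment: encode a galaxy observed in a low-density
# environment, nudge the latent component associated with environmental density,
# re-generate the image, and measure how a property such as color responds. The same
# loop, run on latent axes tied to dust or star formation, compares the two explanations.
# Everything below is an untrained stand-in, not the published model.
import numpy as np

rng = np.random.default_rng(1)
latent_dim, n_pixels = 16, 64 * 64
W_enc = rng.normal(size=(n_pixels, latent_dim)) / np.sqrt(n_pixels)
W_dec = rng.normal(size=(latent_dim, n_pixels)) / np.sqrt(latent_dim)

def encode(image):
    return image @ W_enc          # stand-in for the trained encoder

def decode(latent):
    return latent @ W_dec         # stand-in for the trained generator/decoder

def redness(image):
    return image.mean()           # stand-in for a red-minus-blue color measurement

DENSITY_DIM = 5                   # hypothetical latent axis tied to environment density

galaxy = rng.normal(size=n_pixels)     # a "low-density" galaxy image
z = encode(galaxy)
z_dense = z.copy()
z_dense[DENSITY_DIM] += 2.0            # move it to a "high-density" environment

print("color shift:", redness(decode(z_dense)) - redness(decode(z)))
```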

    4
    Using generative modeling, astrophysicists could investigate how galaxies change when they go from low-density regions of the cosmos to high-density regions, and what physical processes are responsible for these changes. K. Schawinski et al.; doi: 10.1051/0004-6361/201833800

    The approach is related to traditional simulation, but with critical differences. A simulation is “essentially assumption-driven,” Schawinski said. “The approach is to say, ‘I think I know what the underlying physical laws are that give rise to everything that I see in the system.’ So I have a recipe for star formation, I have a recipe for how dark matter behaves, and so on. I put all of my hypotheses in there, and I let the simulation run. And then I ask: Does that look like reality?” What he’s done with generative modeling, he said, is “in some sense, exactly the opposite of a simulation. We don’t know anything; we don’t want to assume anything. We want the data itself to tell us what might be going on.”

    The apparent success of generative modeling in a study like this obviously doesn’t mean that astronomers and graduate students have been made redundant—but it appears to represent a shift in the degree to which learning about astrophysical objects and processes can be achieved by an artificial system that has little more at its electronic fingertips than a vast pool of data. “It’s not fully automated science—but it demonstrates that we’re capable of at least in part building the tools that make the process of science automatic,” Schawinski said.

    Generative modeling is clearly powerful, but whether it truly represents a new approach to science is open to debate. For David Hogg, a cosmologist at New York University and the Flatiron Institute (which, like Quanta, is funded by the Simons Foundation), the technique is impressive but ultimately just a very sophisticated way of extracting patterns from data—which is what astronomers have been doing for centuries.


    In other words, it’s an advanced form of observation plus analysis. Hogg’s own work, like Schawinski’s, leans heavily on AI; he’s been using neural networks to classify stars according to their spectra and to infer other physical attributes of stars using data-driven models. But he sees his work, as well as Schawinski’s, as tried-and-true science. “I don’t think it’s a third way,” he said recently. “I just think we as a community are becoming far more sophisticated about how we use the data. In particular, we are getting much better at comparing data to data. But in my view, my work is still squarely in the observational mode.”

    Hardworking Assistants

    Whether they’re conceptually novel or not, it’s clear that AI and neural networks have come to play a critical role in contemporary astronomy and physics research. At the Heidelberg Institute for Theoretical Studies, the physicist Kai Polsterer heads the astroinformatics group — a team of researchers focused on new, data-centered methods of doing astrophysics. Recently, they’ve been using a machine-learning algorithm to extract redshift information from galaxy data sets, a previously arduous task.
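    As a rough sketch of what a data-driven redshift estimate looks like (not the group's actual pipeline), one can train a standard regressor to map broad-band magnitudes to known spectroscopic redshifts and then apply it to new galaxies. The data below are synthetic.

```python
# Sketch of photometric redshift estimation: learn a mapping from broad-band magnitudes
# to redshift using galaxies that already have spectroscopic redshifts. The catalog here
# is a toy; a real analysis would use survey data and careful validation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_galaxies, n_bands = 5000, 5
magnitudes = rng.normal(20, 2, size=(n_galaxies, n_bands))          # toy photometry
redshift = np.clip(0.1 * (magnitudes[:, 0] - magnitudes[:, 4])
                   + rng.normal(0, 0.02, n_galaxies), 0, None)      # toy ground truth

X_train, X_test, z_train, z_test = train_test_split(magnitudes, redshift, test_size=0.2)
model = RandomForestRegressor(n_estimators=200).fit(X_train, z_train)
print("typical error:", np.mean(np.abs(model.predict(X_test) - z_test)))
```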

    Polsterer sees these new AI-based systems as “hardworking assistants” that can comb through data for hours on end without getting bored or complaining about the working conditions. These systems can do all the tedious grunt work, he said, leaving you “to do the cool, interesting science on your own.”

    But they’re not perfect. In particular, Polsterer cautions, the algorithms can only do what they’ve been trained to do. The system is “agnostic” regarding the input. Give it a galaxy, and the software can estimate its redshift and its age — but feed that same system a selfie, or a picture of a rotting fish, and it will output a (very wrong) age for that, too. In the end, oversight by a human scientist remains essential, he said. “It comes back to you, the researcher. You’re the one in charge of doing the interpretation.”

    For his part, Nord, at Fermilab, cautions that it’s crucial that neural networks deliver not only results, but also error bars to go along with them, as every undergraduate is trained to do. In science, if you make a measurement and don’t report an estimate of the associated error, no one will take the results seriously, he said.
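    One common way to produce such error bars, offered here as a generic sketch rather than Nord's own method, is to train an ensemble of networks on the same data and report the spread of their predictions as the uncertainty.

```python
# Sketch of ensemble-based uncertainty: several networks trained on the same data give
# a mean prediction plus a standard deviation that serves as an error bar. Data are toy.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(500, 4))
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=500)      # toy measurement with noise

ensemble = [MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                         random_state=i).fit(X, y) for i in range(10)]

x_new = rng.uniform(-1, 1, size=(1, 4))
preds = np.array([net.predict(x_new)[0] for net in ensemble])
print(f"prediction: {preds.mean():.3f} +/- {preds.std():.3f}")
```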

    Like many AI researchers, Nord is also concerned about the impenetrability of results produced by neural networks; often, a system delivers an answer without offering a clear picture of how that result was obtained.

    Yet not everyone feels that a lack of transparency is necessarily a problem. Lenka Zdeborová, a researcher at the Institute of Theoretical Physics at CEA Saclay in France, points out that human intuitions are often equally impenetrable. You look at a photograph and instantly recognize a cat—“but you don’t know how you know,” she said. “Your own brain is in some sense a black box.”

    It’s not only astrophysicists and cosmologists who are migrating toward AI-fueled, data-driven science. Quantum physicists like Roger Melko of the Perimeter Institute for Theoretical Physics and the University of Waterloo in Ontario have used neural networks to solve some of the toughest and most important problems in that field, such as how to represent the mathematical “wave function” describing a many-particle system.

    Perimeter Institute in Waterloo, Canada


    AI is essential because of what Melko calls “the exponential curse of dimensionality.” That is, the possibilities for the form of a wave function grow exponentially with the number of particles in the system it describes. The difficulty is similar to trying to work out the best move in a game like chess or Go: You try to peer ahead to the next move, imagining what your opponent will play, and then choose the best response, but with each move, the number of possibilities proliferates.
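    The numbers behind that curse are stark: writing down an exact wave function for N spin-1/2 particles takes 2^N amplitudes, while a neural-network ansatz of the kind used in this line of work has a parameter count that grows only polynomially with N. A quick back-of-the-envelope comparison, assuming a restricted-Boltzmann-machine-style network with N hidden units:

```python
# The "exponential curse" in numbers: exact amplitudes grow as 2**N, while an
# N-hidden-unit RBM-style wave-function ansatz needs roughly N*N weights plus 2*N biases.
for n in (10, 30, 50, 100):
    exact = 2 ** n                       # amplitudes in the full wave function
    network = n * n + 2 * n              # weights + biases for the neural ansatz
    print(f"N={n:3d}  exact amplitudes: {exact:.3e}  network parameters: {network}")
```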

    Of course, AI systems have mastered both of these games—chess, decades ago, and Go in 2016, when an AI system called AlphaGo defeated a top human player. They are similarly suited to problems in quantum physics, Melko says.

    The Mind of the Machine

    Whether Schawinski is right in claiming that he’s found a “third way” of doing science, or whether, as Hogg says, it’s merely traditional observation and data analysis “on steroids,” it’s clear AI is changing the flavor of scientific discovery, and it’s certainly accelerating it. How far will the AI revolution go in science?

    Occasionally, grand claims are made regarding the achievements of a “robo-scientist.” A decade ago, an AI robot chemist named Adam investigated the genome of baker’s yeast and worked out which genes are responsible for making certain amino acids. (Adam did this by observing strains of yeast that had certain genes missing, and comparing the results to the behavior of strains that had the genes.) Wired’s headline read, “Robot Makes Scientific Discovery All by Itself.”

    More recently, Lee Cronin, a chemist at the University of Glasgow, has been using a robot to randomly mix chemicals to see what sorts of new compounds are formed. Monitoring the reactions in real time with a mass spectrometer, a nuclear magnetic resonance machine, and an infrared spectrometer, the system eventually learned to predict which combinations would be the most reactive. Even if it doesn’t lead to further discoveries, Cronin has said, the robotic system could allow chemists to speed up their research by about 90 percent.

    Last year, another team of scientists at ETH Zurich used neural networks to deduce physical laws from sets of data. Their system, a sort of robo-Kepler, rediscovered the heliocentric model of the solar system from records of the position of the sun and Mars in the sky, as seen from Earth, and figured out the law of conservation of momentum by observing colliding balls. Since physical laws can often be expressed in more than one way, the researchers wonder if the system might offer new ways—perhaps simpler ways—of thinking about known laws.
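    The colliding-ball result can be illustrated with a much cruder stand-in for the ETH system: simulate random one-dimensional elastic collisions, then test candidate conserved quantities directly against the data. The actual system used a neural network, not this brute-force check; the sketch only shows what "discovering" a conservation law from data amounts to.

```python
# Stand-in sketch: generate exact 1-D elastic collisions, then check which candidate
# quantity is unchanged by the collision. Total momentum (m1*v1 + m2*v2) survives;
# the plain sum of velocities does not.
import numpy as np

rng = np.random.default_rng(4)
m1, m2 = rng.uniform(1, 5, 1000), rng.uniform(1, 5, 1000)
v1, v2 = rng.uniform(-3, 3, 1000), rng.uniform(-3, 3, 1000)

# exact post-collision velocities for 1-D elastic collisions
u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)

candidates = {
    "m1*v1 + m2*v2 (momentum)": (m1 * v1 + m2 * v2, m1 * u1 + m2 * u2),
    "v1 + v2 (velocity sum)":   (v1 + v2, u1 + u2),
}
for name, (before, after) in candidates.items():
    print(f"{name}: max change = {np.max(np.abs(after - before)):.2e}")
```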

    These are all examples of AI kick-starting the process of scientific discovery, though in every case, we can debate just how revolutionary the new approach is. Perhaps most controversial is the question of how much information can be gleaned from data alone—a pressing question in the age of stupendously large (and growing) piles of it. In The Book of Why (2018), the computer scientist Judea Pearl and the science writer Dana Mackenzie assert that data are “profoundly dumb.” Questions about causality “can never be answered from data alone,” they write. “Anytime you see a paper or a study that analyzes the data in a model-free way, you can be certain that the output of the study will merely summarize, and perhaps transform, but not interpret the data.” Schawinski sympathizes with Pearl’s position, but he described the idea of working with “data alone” as “a bit of a straw man.” He’s never claimed to deduce cause and effect that way, he said. “I’m merely saying we can do more with data than we often conventionally do.”

    Another oft-heard argument is that science requires creativity, and that—at least so far—we have no idea how to program that into a machine. (Simply trying everything, like Cronin’s robo-chemist, doesn’t seem especially creative.) “Coming up with a theory, with reasoning, I think demands creativity,” Polsterer said. “Every time you need creativity, you will need a human.” And where does creativity come from? Polsterer suspects it is related to boredom—something that, he says, a machine cannot experience. “To be creative, you have to dislike being bored. And I don’t think a computer will ever feel bored.” On the other hand, words like “creative” and “inspired” have often been used to describe programs like Deep Blue and AlphaGo. And the struggle to describe what goes on inside the “mind” of a machine is mirrored by the difficulty we have in probing our own thought processes.

    Schawinski recently left academia for the private sector; he now runs a startup called Modulos, which employs a number of ETH scientists and, according to its website, works “in the eye of the storm of developments in AI and machine learning.” Whatever obstacles may lie between current AI technology and full-fledged artificial minds, he and other experts feel that machines are poised to do more and more of the work of human scientists. Whether there is a limit remains to be seen.

    “Will it be possible, in the foreseeable future, to build a machine that can discover physics or mathematics that the brightest humans alive are not able to do on their own, using biological hardware?” Schawinski wonders. “Will the future of science eventually necessarily be driven by machines that operate on a level that we can never reach? I don’t know. It’s a good question.”

    See the full article here.


    Please help promote STEM in your local schools.

    STEM Education Coalition

     